Academic literature on the topic '3D head'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic '3D head.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "3D head"

1

Kováč, Ondrej, and Ján Mihalík. "Lossless Encoding of 3D Human Head Model Textures." Acta Electrotechnica et Informatica 15, no. 3 (September 1, 2015): 18–23. http://dx.doi.org/10.15546/aeei-2015-0024.

2

Kiński, Wojciech, Krzysztof Nalepa, and Wojciech Miąskowski. "Analysis of thermal 3D printer head." Mechanik, no. 7 (July 2016): 726–27. http://dx.doi.org/10.17814/mechanik.2016.7.144.

3

Surman, Phil, Sally Day, Xianzi Liu, Joshua Benjamin, Hakan Urey, and Kaan Aksit. "Head tracked retroreflecting 3D display." Journal of the Society for Information Display 23, no. 2 (February 2015): 56–68. http://dx.doi.org/10.1002/jsid.295.

4

Song, Eungyeol, Jaesung Choi, Taejae Jeon, and Sangyoun Lee. "3D Head Modeling using Depth Sensor." Journal of International Society for Simulation Surgery 2, no. 1 (June 10, 2015): 13–16. http://dx.doi.org/10.18204/jissis.2015.2.1.013.

5

Zhang, Ye, and Chandra Kambhamettu. "3D head tracking under partial occlusion." Pattern Recognition 35, no. 7 (July 2002): 1545–57. http://dx.doi.org/10.1016/s0031-3203(01)00140-6.

6

Will, Kipling, and Ian Steplowski. "A 3D printed Malaise trap head." Pan-Pacific Entomologist 92, no. 2 (April 2016): 86–91. http://dx.doi.org/10.3956/2016-92.2.86.

7

Lu, Xuelong, and Yuzuru Sakai. "606 3D Human Head Crash Simulation." Proceedings of The Computational Mechanics Conference 2009.22 (2009): 516–17. http://dx.doi.org/10.1299/jsmecmd.2009.22.516.

8

He, Huayun, Guiqing Li, Zehao Ye, Aihua Mao, Chuhua Xian, and Yongwei Nie. "Data-driven 3D human head reconstruction." Computers & Graphics 80 (May 2019): 85–96. http://dx.doi.org/10.1016/j.cag.2019.03.008.

9

Ye, Kang-Hyun, and Hae-Woon Choi. "Laser Head Design and Heat Transfer Analysis for 3D Patterning." Journal of the Korean Society of Manufacturing Process Engineers 15, no. 4 (August 31, 2016): 46–50. http://dx.doi.org/10.14775/ksmpe.2016.15.4.046.

10

Wu, Long, Kit-Lun Yick, Joanne Yip, and Sun-Pui Ng. "Numerical simulation of foam cup molding process for mold head design." International Journal of Clothing Science and Technology 29, no. 4 (August 7, 2017): 504–13. http://dx.doi.org/10.1108/ijcst-08-2016-0103.

Abstract:
Purpose: One of the crucial steps in molded bra production is the process of developing the mold head, which determines the final cup style and size. Compared with the traditional development process of the mold head, a less time-consuming and more quantitative method is needed for the design and modification of the mold head. Design/methodology/approach: A three-dimensional (3D) numerical model for the simulation of large compressive deformation was built in this paper to study the foam bra cup molding process. Since the head cones are more representative than the mold heads, the male and female head cones were used in the simulation. All of the solid shapes are modeled using 3D Solid 164 elements, with automatic surface-to-surface contact between the head cones. Findings: Simulation of the foam cup molding process is conducted by inputting different properties of the foam material and stress-strain curves under different molding temperatures. Research limitations/implications: In order to simulate the laminated foam molding process, heat transfer through a layered textile assembly can be studied using the thermo-mechanical coupled FE model. Practical implications: According to the different foam performance parameters under different temperatures, along with different head cone shapes, the distribution and variation of the stress field can be obtained, as well as the ultimate capacity of the foam materials. Social implications: A computer-aided parametric design system for the mold heads provides an effective solution for improving the development process of mold heads. Originality/value: The distribution and variation of the stress fields can be analyzed through simulation, providing a reference for mold head design.

Dissertations / Theses on the topic "3D head"

1

Surman, Phil. "Head tracking two-image 3D television displays." Thesis, De Montfort University, 2003. http://hdl.handle.net/2086/10745.

Abstract:
The research covered in this thesis encompasses the design of novel 3D displays; a consideration of 3D television requirements and a survey of autostereoscopic methods are also presented. The principle of operation of simple 3D display prototypes is described, and the design of the components of the optical systems is considered. A description of an appropriate non-contact infrared head-tracking method suitable for use with 3D television displays is also included. The thesis describes how the operating principle of the displays is based upon a two-image system comprising a pair of images presented to the appropriate viewers' eyes. This is achieved by means of novel steering optics positioned behind a direct-view liquid crystal display (LCD) that is controlled by a head-position tracker. Within the work, two separate prototypes are described, both of which provide 3D to a single viewer who has limited movement. The thesis goes on to describe how these prototypes can be developed into a multiple-viewer display that is suitable for television use. A consideration of 3D television requirements is documented, showing that glasses-free (autostereoscopic) viewing, freedom of viewer movement and practical designs are important factors for 3D television displays. The displays are novel in design in several important aspects that comply with the requirements for 3D television. Firstly, they do not require viewers to wear special glasses; secondly, the displays allow viewers to move freely when viewing; and finally, the design of the displays is practical, with a housing size similar to modern television sets and a cost that is not excessive. Surveys of other autostereoscopic methods included within the work suggest that no contemporary 3D display offers all of these important factors.
2

Ntawiniga, Frédéric. "Head Motion Tracking in 3D Space for Drivers." Thesis, Université Laval, 2008. http://www.theses.ulaval.ca/2008/25229/25229.pdf.

Abstract:
This work presents a computer vision module capable of tracking the head motion in 3D space for drivers. This computer vision module was designed to be part of an integrated system to analyze the behaviour of the drivers by replacing costly equipment and accessories that track the head of a driver but are often cumbersome for the user. The vision module operates in five stages: image acquisition, head detection, facial features extraction, facial features detection, and 3D reconstruction of the facial features that are being tracked. Firstly, in the image acquisition stage, two synchronized monochromatic cameras are used to set up a stereoscopic system that will later make the 3D reconstruction of the head simpler. Secondly, the driver's head is detected to reduce the size of the search space for finding facial features. Thirdly, after obtaining a pair of images from the two cameras, the facial features extraction stage follows by combining image processing algorithms and epipolar geometry to track the chosen features that, in our case, consist of the two eyes and the tip of the nose. Fourthly, in a detection stage, the 2D tracking results are consolidated by combining a neural network algorithm and the geometry of the human face to discriminate erroneous results. Finally, in the last stage, the 3D model of the head is reconstructed from the 2D tracking results (i.e. tracking performed in each image independently) and the calibration of the stereo pair. In addition, 3D measurements according to the six axes of motion known as the degrees of freedom of the head (longitudinal, vertical and lateral, roll, pitch and yaw) are obtained. The validation of the results is carried out by running our algorithms on pre-recorded video sequences of drivers using a driving simulator in order to obtain 3D measurements to be compared later with the 3D measurements provided by a motion tracking device installed on the driver's head.
3

Anderson, Robert. "Building an expressive text driven 3D talking head." Thesis, University of Cambridge, 2014. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.708119.

4

Brar, Rajwinder Singh. "Head tracked multi user autostereoscopic 3D display investigations." Thesis, De Montfort University, 2012. http://hdl.handle.net/2086/6532.

Abstract:
The research covered in this thesis encompasses a consideration of 3D television requirements and a survey of stereoscopic and autostereoscopic methods. This confirms that although there is a lot of activity in this area, very little of this work could be considered suitable for television. The principle of operation, the design of the components of the optical system, and the evaluation of two EU-funded glasses-free (autostereoscopic) displays, developed in the MUTED and HELIUM3D projects, are described. Four iterations of the display were built in MUTED, with the results of the first used in designing the second, third and fourth versions. The first three versions of the display use two 49-element arrays, one for the left eye and one for the right. A pattern of spots is projected onto the back of the arrays and these are converted into a series of collimated beams that form exit pupils after passing through the LCD. An exit pupil is a region in the viewing field where either a left or a right image is seen across the complete area of the screen; the positions of these are controlled by a multi-user head tracker. A laser projector was used in the first two versions and, although this projector operated on holographic principles in order to obtain the spot pattern required to produce the exit pupils, it should be noted that the images seen by the viewers are not produced holographically, so the overall display cannot be described as holographic. In the third version, the laser projector is replaced with a conventional LCOS projector to address the stability and brightness issues discovered in the second version. In 2009, true 120 Hz displays became available; this led to the development of a fourth version of the MUTED display that uses a 120 Hz projector and LCD to overcome the problems of projector instability, produce full-resolution images and simplify the display hardware. HELIUM3D, a multi-user autostereoscopic display based on laser scanning, is also described in this thesis. This display also operates by providing head-tracked exit pupils. It incorporates a red, green and blue (RGB) laser illumination source that illuminates a light engine. Light directions are controlled by a spatial light modulator and are directed to the users' eyes via a front screen assembly incorporating a novel Gabor superlens. The work described covered the development of demonstrators that showed the principle of temporal multiplexing, and a version of the final display with limited functionality; the reason for this was the late delivery of components required for a display with full functionality.
5

Hassanpour, Reza Zare. "Reconstruction of a 3D Human Head Model from Images." PhD thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1168269/index.pdf.

Abstract:
The main aim of this thesis is to generate 3D models of human heads from uncalibrated images. In order to extract geometric values of a human head, we find camera parameters using camera auto calibration. However, some image sequences generate non-unique (degenerate) solutions. An algorithm for removing degeneracy from the most common form of camera movement in face image acquisition is described. The geometric values of main facial features are computed initially. The model is then generated by gradual deformation of a generic polygonal model of a head. The accuracy of the models is evaluated using ground truth data from a range scanner. 3D models are covered with cylindrical texture values obtained from images. The models are appropriate for animation or identification applications.
6

Cordea, Marius Daniel. "Real time 3D head pose recovery for model-based video coding." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0015/MQ48145.pdf.

7

Morgan, Kelly. "Novel 3D Head and Neck Cancer Model to Evaluate Chemotherapeutic Efficacy." VCU Scholars Compass, 2014. http://scholarscompass.vcu.edu/etd/3556.

Abstract:
HNSCC accounts for 7 percent of all new cancer occurrences. Despite currently available treatments, there continues to be a high mortality and recurrence rate in HNSCC. Well over 50 percent of all cancer patients receive chemotherapy as a standard treatment. However, only 5 percent of these cases have been shown to help with treatment of the disease. Formerly, two options were available for drug testing: in vivo animal models, and in vitro two-dimensional models. While in vivo models remain the most representative, their use is burdened by high costs, time constraints, and ethical concerns. 2D models are simple to use and cost effective, although they have been shown to produce inaccurate data regarding chemotherapeutic drug resistance due to their 2D arrangement and altered gene expression. Researchers for the past decade have been working to create 3D models that more accurately represent in vivo systems in order to evaluate chemotherapeutic efficacy and improve clinical outcomes. In line with this agenda, novel 3D head and neck cancer models were created out of electrospun synthetic polymers seeded with either HN6 or HN12 cancer cells. The models were then treated with chemotherapeutic drugs (either paclitaxel or cisplatin) and, after 72 hours, subjected to a live-dead assay in order to determine the cytotoxic effects of the drugs. 2D cultures of HN6 and HN12 were also treated and subjected to a WST-1 assay after 72 hours. The results of the treated-scaffold assays were then compared to the results of the 2D culture assays, and, as predicted, the cancer cells in the 3D culture system proved to be more resistant to chemotherapeutic drugs. The underlying assumption for this study is that a 3D culture system based on precisely defined structural parameters would provide a practical environment to screen therapeutics for anti-cancer efficacy. To test this, 3D scaffolds of three different fiber sizes were developed by electrospinning different concentrations of Poly(L-lactic acid) ("PLLA") (55 mg/ml, 115 mg/ml, and 180 mg/ml) onto a mandrel that was perforated to allow for increased porosity. The resultant small, medium, and large scaffolds were then subjected to concentrated hydrochloric acid (HCl) pretreatment in order to make them less hydrophobic. Different fiber diameters represented different ECM environments for both HN6 and HN12. Both cell types were shown to thrive better in small fibers (55 mg/ml-115 mg/ml) than in large fibers. It was also reaffirmed, through live-dead analysis of cells seeded on 3D scaffolds and treated with IC90 values of cisplatin, that the head and neck cancer cells were more resistant, which is more representative of the 3D environment of cancer cells in vivo.
8

Derkach, Dmytro. "Spectrum analysis methods for 3D facial expression recognition and head pose estimation." Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/664578.

Abstract:
Facial analysis has attracted considerable research efforts over the last decades, with a growing interest in improving the interaction and cooperation between people and computers. This makes it necessary that automatic systems are able to react to things such as the head movements of a user or his/her emotions. Further, this should be done accurately and in unconstrained environments, which highlights the need for algorithms that can take full advantage of 3D data. These systems could be useful in multiple domains such as human-computer interaction, tutoring, interviewing, health-care, marketing etc. In this thesis, we focus on two aspects of facial analysis: expression recognition and head pose estimation. In both cases, we specifically target the use of 3D data and present contributions that aim to identify meaningful representations of the facial geometry based on spectral decomposition methods: 1. We propose a spectral representation framework for facial expression recognition using exclusively 3D geometry, which allows a complete description of the underlying surface that can be further tuned to the desired level of detail. It is based on the decomposition of local surface patches in their spatial frequency components, much like a Fourier transform, which are related to intrinsic characteristics of the surface. We propose the use of Graph Laplacian Features (GLFs), which result from the projection of local surface patches into a common basis obtained from the Graph Laplacian eigenspace. The proposed approach is tested in terms of expression and Action Unit recognition and results confirm that the proposed GLFs produce state-of-the-art recognition rates. 2. We propose an approach for head pose estimation that allows modeling the underlying manifold that results from general rotations in 3D. We start by building a fully-automatic system based on the combination of landmark detection and dictionary-based features, which obtained the best results in the FG2017 Head Pose Estimation Challenge. Then, we use tensor representation and higher order singular value decomposition to separate the subspaces that correspond to each rotation factor and show that each of them has a clear structure that can be modeled with trigonometric functions. Such representation provides a deep understanding of data behavior, and can be used to further improve the estimation of the head pose angles.
9

Kučera, Michal. "3D vodní paprsek" [3D water jet]. Master's thesis, Vysoké učení technické v Brně, Fakulta strojního inženýrství, 2014. http://www.nusl.cz/ntk/nusl-231503.

Abstract:
This master's thesis deals with the use and technological capabilities of the abrasive water jet in the application of a 3D cutting head. One part of this thesis is a description of waterjet technology and its other applications. An experimental comparison of the effect of the cutting parameters was carried out for different materials on Mach 4 equipment from Flow International Corporation at the company AWAC, spol. s.r.o., and the workpieces were subsequently evaluated for shape and dimensional accuracy.
10

Ostyn, Mark R. "Reducing Uncertainty in Head and Neck Radiotherapy with Plastic Robotics." VCU Scholars Compass, 2018. https://scholarscompass.vcu.edu/etd/5558.

Abstract:
One of the greatest challenges in achieving accurate positioning in head and neck radiotherapy is that the anatomy at and above the cervical spine does not act as a single, mechanically rigid body. Current immobilization techniques contain residual uncertainties, especially in the lower neck, that cannot be reduced by setting up to any single landmark. The work presented describes the development of a radiotherapy-friendly, mostly-plastic 6D robotic platform for positioning independent landmarks (i.e., allowing remote, independent positioning of the skull relative to landmarks in the thorax), including analysis of kinematics, stress, radiographic compatibility, trajectory planning, physical construction, and phantom measurements of correction accuracy. No major component of the system within the field of imaging or treatment had a measured attenuation value greater than 250 HU, showing compatibility with x-ray-based imaging techniques. Relative to arbitrary overall setup errors of the head (min = 1.1 mm, max = 5.2 mm vector error), the robotic platform corrected the position down to a residual overall error of 0.75 mm +/- 0.33 mm over 15 cases, as measured with optical tracking. This device shows the potential for reducing dose margins in head and neck therapy cases, while also reducing setup time and effort.

Books on the topic "3D head"

1

Surman, Phil. Head tracking two-image 3D television displays. Leicester: De Montfort University, 2002.

2

Kissler-Patig, Markus, Jeremy R. Walsh, and M. M. Roth, eds. Science perspectives for 3D spectroscopy: Proceedings of the ESO Workshop held in Garching, Germany, 10-14 October 2005. Berlin: Springer, 2007.

3

GAMM Workshop on 3D-Computation of Incompressible Internal Flows (1989 Lausanne, Switzerland). 3D-computation of incompressible internal flows: Proceedings of the GAMM Workshop held at EPFL, 13-15 September 1989, Lausanne, Switzerland. Braunschweig: Vieweg, 1993.

4

Evans, Stuart, Joanna Greenhill, and Ingrid Swenson. Matrix 3d, Sculpture, method, research: A report from a conference held at Central Saint Martins College of Art and Design, 14-15 September 1995. [London]: Lethaby Press, 1997.

5

Turcotte, Sylvain, S. C. Keller, and R. M. Cavallo, eds. 3D stellar evolution: Proceedings of a conference held at the Department of Applied Sciences, University of California, Davis, Livermore, California, USA, 22-26 July 2002. San Francisco, Calif.: Astronomical Society of the Pacific, 2003.

6

The rise of 3D printing: Opportunities for entrepreneurs : hearing before the Committee on Small Business, United States House of Representatives, One Hundred Thirteenth Congress, second session, hearing held March 12, 2014. Washington: U.S. Government Printing Office, 2014.

7

Mordaunt, John. The proceedings of a general court-martial held in the Council-Chamber at Whitehall, on Wednesday the 14th, and continued by several adjournments to Tuesday the 20th of December 1757, upon the trial of Lieutenant-General Sir John Mordaunt, by virtue of His Majesty's warrant, bearing date the 3d day of the same month. London: Printed for A. Millar ..., 1985.

8

Awesome Super Nintendo Secrets 4. Lahaina, HI: Sandwich Islands Publishing, 1995.

9

Nathan, Jacintha, and Walter G. Oleksy. The Head and Neck in 3D. Rosen Central, 2015.

10

Reynolds, Patricia A., Scott Rice, Natasha L. Berridge, and G. A. E. Burke. 3D Head and Neck Anatomy for Dentistry. Quintessence Pub Co, 2011.


Book chapters on the topic "3D head"

1

Surman, Phil, Ian Sexton, Klaus Hopf, Richard Bates, and Wing Kai Lee. "Head Tracked 3D Displays." In Multimedia Content Representation, Classification and Security, 769–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11848035_101.

2

Gobbetti, Enrico, Riccardo Scateni, and Gianluigi Zanetti. "Head and Hand Tracking Devices in Virtual Reality." In 3D Image Processing, 287–92. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/978-3-642-59438-0_26.

3

Ma, Bo, Hui-yang Qu, Hau-san Wong, and Yao Lu. "3D Head Model Classification Using KCDA." In Advances in Multimedia Information Processing - PCM 2006, 1008–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11922162_114.

4

Dai, Hang, Nick Pears, Patrik Huber, and William A. P. Smith. "3D Morphable Models: The Face, Ear and Head." In 3D Imaging, Analysis and Applications, 463–512. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44070-1_10.

5

Viéville, Thierry. "Auto-Calibration of a Robotic Head." In A Few Steps Towards 3D Active Vision, 55–84. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/978-3-642-60842-1_3.

6

Viéville, Thierry. "3D Active Vision on a Robotic Head." In A Few Steps Towards 3D Active Vision, 23–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/978-3-642-60842-1_2.

7

Leroy, Julien, Francois Rocca, Matei Mancaş, and Bernard Gosselin. "3D Head Pose Estimation for TV Setups." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 55–64. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-03892-6_7.

8

Rougier, Caroline, and Jean Meunier. "3D Head Trajectory Using a Single Camera." In Lecture Notes in Computer Science, 505–12. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13681-8_59.

9

Niu, Jianwei, Zhizhong Li, and Song Xu. "Block Division for 3D Head Shape Clustering." In Digital Human Modeling, 64–71. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02809-0_8.

10

Shapiro, Linda G., Katarzyna Wilamowska, Indriyati Atmosukarto, Jia Wu, Carrie Heike, Matthew Speltz, and Michael Cunningham. "Shape-Based Classification of 3D Head Data." In Image Analysis and Processing – ICIAP 2009, 692–700. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04146-4_74.


Conference papers on the topic "3D head"

1

Moteki, A., N. Hara, T. Murase, N. Ozawa, T. Nakai, T. Matsuda, and K. Fujimoto. "Poster: Head gesture 3D interface using a head mounted camera." In 2012 IEEE Symposium on 3D User Interfaces (3DUI). IEEE, 2012. http://dx.doi.org/10.1109/3dui.2012.6184206.

2

Martin, Manuel, and Rainer Stiefelhagen. "Real Time Head Model Creation and Head Pose Estimation on Consumer Depth Cameras." In 2014 2nd International Conference on 3D Vision (3DV). IEEE, 2014. http://dx.doi.org/10.1109/3dv.2014.54.

3

Surman, Phil, Shizheng Wang, Xiangyu Zhang, Lei Zhang, and Xiao Wei Sun. "Head tracked multiview 3D display." In 2015 Visual Communications and Image Processing (VCIP). IEEE, 2015. http://dx.doi.org/10.1109/vcip.2015.7457927.

4

Colmenarez, Antonio J., Ricardo Lopez, and Thomas S. Huang. "3D-model-based head tracking." In Electronic Imaging '97, edited by Jan Biemond and Edward J. Delp III. SPIE, 1997. http://dx.doi.org/10.1117/12.263254.

5

Ryu, Wooju, and Daijin Kim. "Real-time 3D Head Tracking and Head Gesture Recognition." In RO-MAN 2007 - The 16th IEEE International Symposium on Robot and Human Interactive Communication. IEEE, 2007. http://dx.doi.org/10.1109/roman.2007.4415074.

6

Pal, Swaroop K., Marriam Khan, and Ryan P. McMahan. "The benefits of rotational head tracking." In 2016 IEEE Symposium on 3D User Interfaces (3DUI). IEEE, 2016. http://dx.doi.org/10.1109/3dui.2016.7460028.

7

Georgoulis, Stamatios, Marc Proesmans, and Luc Van Gool. "Tackling Shapes and BRDFs Head-On." In 2014 2nd International Conference on 3D Vision (3DV). IEEE, 2014. http://dx.doi.org/10.1109/3dv.2014.81.

8

Kim, Joongrock, Sunjin Yu, and Sangyoun Lee. "3D Head pose-normalization using 2D and 3D interaction." In 2007 International Conference on Control, Automation and Systems. IEEE, 2007. http://dx.doi.org/10.1109/iccas.2007.4406736.

9

Kiyokawa, Kiyoshi. "Redesigning Vision by Head Worn Displays." In 3D Image Acquisition and Display: Technology, Perception and Applications. Washington, D.C.: OSA, 2016. http://dx.doi.org/10.1364/3d.2016.tm4a.2.

10

Lai, Chao, Fangzhao Li, and Shiyao Jin. "3D head texture using multiple kinect." In 2015 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC). IEEE, 2015. http://dx.doi.org/10.1109/icspcc.2015.7338804.


Reports on the topic "3D head"

1

Lequesne, Bruno, Ryoichi S. Amano, Joseph Millevolte, Ahmad Abbas, Tomoki Sakamoto, Mandana Saravani, Muhannad Al-Haddad, Tarek El-Gammal, and Hisham Alyahya. 3D-printed, integrated plug-and-play turbine-generator set for very-low-head hydro. Final Technical Report DOE-EMotors-15757-1. Office of Scientific and Technical Information (OSTI), March 2017. http://dx.doi.org/10.2172/1348117.

2

McCann, Larry D. Assessing the RELAP5-3D Heat Conduction Enclosure Model. Office of Scientific and Technical Information (OSTI), September 2008. http://dx.doi.org/10.2172/940232.

3

3D CFD Electrochemical and Heat Transfer Model of an Integrated-Planar Solid Oxide Electrolysis Cells. Office of Scientific and Technical Information (OSTI), November 2008. http://dx.doi.org/10.2172/953673.

4

Vegendla, Prasad, Yang Liu, Rui Hu, and Ling Zou. 3D CFD Model Validation Using Benchmark Data of 1/16th Scaled VHTR Upper Plenum and Development of Wall Heat-Transfer Correlation For Laminar Flow. Office of Scientific and Technical Information (OSTI), March 2020. http://dx.doi.org/10.2172/1638352.

5

Modeling a Printed Circuit Heat Exchanger with RELAP5-3D for the Next Generation Nuclear Plant. Office of Scientific and Technical Information (OSTI), December 2010. http://dx.doi.org/10.2172/1004237.
