Academic literature on the topic 'OpenNI'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'OpenNI.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "OpenNI"

1

Tang, Chen, and Zhong Hua Hu. "Basketball Detection Based on Moving Robot." Applied Mechanics and Materials 734 (February 2015): 629–32. http://dx.doi.org/10.4028/www.scientific.net/amm.734.629.

Full text
Abstract:
The purpose of this study is to introduce a method based on color space and object shape in order to obtain ball data through OpenNI and the coordinates of the corresponding point. The distance from the robot to the basketball was then derived from the basketball's position. Our results show that the method has strong stability and real-time performance for use on robots.
APA, Harvard, Vancouver, ISO, and other styles
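The abstract above describes reading the ball's position from OpenNI depth data and computing the robot-to-basketball distance. The paper's code is not available; the following is a minimal sketch of that step using the OpenNI 2 C++ API, assuming the ball's pixel coordinates have already been produced by the colour/shape detector and omitting error handling.

```cpp
// Minimal sketch: read one depth frame with OpenNI 2 and convert the ball's
// pixel (ballX, ballY) into world coordinates to get the robot-to-ball distance.
// (ballX, ballY) are assumed to come from a separate colour/shape detector.
#include <OpenNI.h>
#include <cmath>
#include <cstdio>

int main()
{
    openni::OpenNI::initialize();

    openni::Device device;
    device.open(openni::ANY_DEVICE);

    openni::VideoStream depth;
    depth.create(device, openni::SENSOR_DEPTH);
    depth.start();

    openni::VideoFrameRef frame;
    depth.readFrame(&frame);

    const int ballX = 320, ballY = 240;             // hypothetical detector output
    const openni::DepthPixel* pixels =
        static_cast<const openni::DepthPixel*>(frame.getData());
    openni::DepthPixel z = pixels[ballY * frame.getWidth() + ballX];

    float wx, wy, wz;                               // world coordinates in mm
    openni::CoordinateConverter::convertDepthToWorld(depth, ballX, ballY, z,
                                                     &wx, &wy, &wz);

    float distance = std::sqrt(wx * wx + wy * wy + wz * wz);
    std::printf("ball at (%.0f, %.0f, %.0f) mm, distance %.0f mm\n",
                wx, wy, wz, distance);

    depth.stop();
    depth.destroy();
    device.close();
    openni::OpenNI::shutdown();
    return 0;
}
```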
2

Adachi, Takayuki, Masafumi Goseki, Hiroshi Takemura, Hiroshi Mizoguchi, Fusako Kusunoki, Masanori Sugimoto, Etsuji Yamaguchi, Shigenori Inagaki, and Yoshiaki Takeda. "Integration of Ultrasonic Sensors and Kinect Sensors for People Distinction and 3D Localization." Journal of Robotics and Mechatronics 25, no. 4 (August 20, 2013): 762–66. http://dx.doi.org/10.20965/jrm.2013.p0762.

Full text
Abstract:
The method proposed here for 3D position measurement and identification of individuals by integrating ultrasonic and Kinect sensors uses ultrasonic transmitter tags with unique identifiers. Ultrasonic sensors measure the 3D positions of and identify tagged individuals, but cannot make measurements if there are no receivers in the direction of ultrasonic waves from transmitters. Kinect sensors measure 3D positions of individuals and track them with OpenNI, but Kinect sensors cannot make measurements if occlusion occurs due to the overlapping of individuals. Evaluation results show that the method proposed here is more robust than methods only using either ultrasonic or Kinect sensors.
APA, Harvard, Vancouver, ISO, and other styles
3

Liu, Tsun Te, Ching Tang Hsieh, Ruei Chi Chung, and Yuan Sheng Wang. "Physical Rehabilitation Assistant System Based on Kinect." Applied Mechanics and Materials 284-287 (January 2013): 1686–90. http://dx.doi.org/10.4028/www.scientific.net/amm.284-287.1686.

Full text
Abstract:
In this paper, we present a physical rehabilitation assistant system based on skeleton detection with Kinect. Users no longer have to install detectors on the exercise equipment; instead, they can simply use the rehabilitation equipment with Kinect and the skeleton detection technique. In this study, we build normalized three-dimensional Cartesian coordinates of correct postures under the OpenNI system. We extract 15 human skeleton joints with three-dimensional coordinates and calculate feature values, then use a support vector machine (SVM) as the classifier to determine the accuracy of a posture. Finally, the system can judge how correct the user's posture is, thereby serving the rehabilitation purpose.
APA, Harvard, Vancouver, ISO, and other styles
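As a rough illustration of the feature preparation described above (not the authors' implementation), the sketch below normalizes 15 OpenNI skeleton joints against the torso and a body-size reference before they would be passed to an SVM; the struct and function names are assumptions.

```cpp
// Sketch: translate 15 skeleton joints so the torso is the origin and scale by
// a body-size reference, giving a 45-value feature vector for a posture
// classifier such as an SVM. Joint source and scale choice are illustrative.
#include <array>
#include <cmath>
#include <vector>

struct Joint { float x, y, z; };

std::vector<float> postureFeatures(const std::array<Joint, 15>& joints,
                                   const Joint& torso, const Joint& neck)
{
    // Body-size reference: torso-to-neck distance, so features are invariant
    // to how far the user stands from the sensor.
    float scale = std::sqrt((neck.x - torso.x) * (neck.x - torso.x) +
                            (neck.y - torso.y) * (neck.y - torso.y) +
                            (neck.z - torso.z) * (neck.z - torso.z));
    if (scale <= 0.0f) scale = 1.0f;

    std::vector<float> features;
    features.reserve(joints.size() * 3);
    for (const Joint& j : joints) {
        features.push_back((j.x - torso.x) / scale);
        features.push_back((j.y - torso.y) / scale);
        features.push_back((j.z - torso.z) / scale);
    }
    return features;   // feed to an SVM (e.g. libsvm) trained on correct postures
}
```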
4

Kim, Hyesuk, and Incheol Kim. "Human Activity Recognition as Time-Series Analysis." Mathematical Problems in Engineering 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/676090.

Full text
Abstract:
We propose a system that can recognize daily human activities with a Kinect-style depth camera. Our system utilizes a set of view-invariant features and the hidden state conditional random field (HCRF) model to recognize human activities from the 3D body pose stream provided by the MS Kinect API or OpenNI. Many high-level daily activities can be regarded as having a hierarchical structure in which multiple subactivities are performed sequentially or iteratively. In order to model these high-level daily activities effectively, we utilize a multiclass HCRF model, which is a kind of probabilistic graphical model. In addition, in order to obtain view-invariant but more informative features, we extract joint angles from the subject's skeleton model and then perform a feature transformation to obtain three different types of features describing motion, structure, and hand positions. Through various experiments using two different datasets, KAD-30 and CAD-60, the high performance of our system is verified.
APA, Harvard, Vancouver, ISO, and other styles
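The joint-angle features mentioned above can be computed from three 3D joint positions with a dot product; the following is a small, self-contained sketch of that calculation (names and conventions are illustrative, not taken from the paper).

```cpp
// Sketch: the angle at a middle joint (e.g. the elbow) computed from three 3D
// joint positions via the dot product, which is what makes the feature
// independent of the camera viewpoint.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float norm(const Vec3& a) { return std::sqrt(dot(a, a)); }

// Angle (radians) at joint b formed by segments b->a and b->c,
// e.g. a = shoulder, b = elbow, c = hand.
float jointAngle(const Vec3& a, const Vec3& b, const Vec3& c)
{
    Vec3 u = sub(a, b), v = sub(c, b);
    float denom = norm(u) * norm(v);
    if (denom <= 0.0f) return 0.0f;
    float cosang = dot(u, v) / denom;
    if (cosang > 1.0f) cosang = 1.0f;
    if (cosang < -1.0f) cosang = -1.0f;
    return std::acos(cosang);
}
```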
5

Hastomo, Widi. "GESTURE RECOGNITION FOR PENCAK SILAT TAPAK SUCI REAL-TIME ANIMATION." Jurnal Ilmu Komputer dan Informasi 13, no. 2 (July 1, 2020): 77–87. http://dx.doi.org/10.21609/jiki.v13i2.855.

Full text
Abstract:
The main target of this research is the design of a real-time virtual martial arts training system that also serves as a tool for learning martial arts independently, using genetic algorithm methods and dynamic time warping. This paper covers the initial stage, which focuses on capturing data sets of martial arts practitioners using 3D animation and the Kinect sensor cameras: 2 practitioners x 8 moves x 596 cases/gesture = 9,536 cases. Gesture recognition studies usually distinguish body gestures, hand and arm gestures, and head and face gestures; all three can be studied simultaneously in pencak silat, using martial arts stance detection with scoring methods. Silat movement data is recorded as .oni files using the OpenNI (OFW) framework and as BVH (BioVision Hierarchy) files, together with plug-in support software on Mocap devices. Responsiveness, the time taken to respond to interruptions, is critical because the system must be able to meet this demand.
APA, Harvard, Vancouver, ISO, and other styles
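Dynamic time warping, named in the abstract as the matching method, admits a compact textbook implementation; the sketch below compares two gesture sequences frame by frame under the assumption that each frame has already been reduced to a numeric feature vector.

```cpp
// Sketch of dynamic time warping for comparing a recorded gesture against a
// reference sequence. Each frame is reduced to a vector of floats (e.g. joint
// angles); the frame distance and sequence contents are illustrative.
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

using Frame = std::vector<float>;

static float frameDist(const Frame& a, const Frame& b)
{
    float s = 0.0f;
    for (std::size_t i = 0; i < a.size() && i < b.size(); ++i)
        s += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(s);
}

// Classic O(n*m) DTW with a full cost matrix; smaller result = more similar.
float dtwDistance(const std::vector<Frame>& query, const std::vector<Frame>& ref)
{
    const std::size_t n = query.size(), m = ref.size();
    const float INF = std::numeric_limits<float>::infinity();
    std::vector<std::vector<float>> D(n + 1, std::vector<float>(m + 1, INF));
    D[0][0] = 0.0f;

    for (std::size_t i = 1; i <= n; ++i)
        for (std::size_t j = 1; j <= m; ++j) {
            float cost = frameDist(query[i - 1], ref[j - 1]);
            D[i][j] = cost + std::min({D[i - 1][j],       // insertion
                                       D[i][j - 1],       // deletion
                                       D[i - 1][j - 1]}); // match
        }
    return D[n][m];
}
```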
6

Heickal, Hasnain, Tao Zhang, and Md Hasanuzzaman. "Computer Vision-Based Real-Time 3D Gesture Recognition Using Depth Image." International Journal of Image and Graphics 15, no. 01 (January 2015): 1550004. http://dx.doi.org/10.1142/s0219467815500047.

Full text
Abstract:
Gesture is one of the fundamental modes of natural human-machine interaction. To understand gestures, the system should be able to interpret the 3D movements of a human. This paper presents a computer vision-based real-time 3D gesture recognition system using depth images which tracks the 3D joint positions of the head, neck, shoulders, arms, hands and legs. This tracking is done by the Kinect motion sensor with the OpenNI API, and 3D motion gestures are recognized using the movement trajectories of those joints. The user-to-Kinect distance is adapted using the proposed center of gravity (COG) correction method, and 3D joint positions are normalized using the proposed joint position normalization method. For gesture learning and recognition, data mining classification algorithms such as Naive Bayes and neural networks are used. The system is trained to recognize 12 gestures used by umpires in a cricket match. It is trained and tested using about 2000 instances covering the 12 gestures performed by 15 persons. The system is tested using 5-fold cross-validation and achieves 98.11% accuracy with the neural network and 88.84% accuracy with the Naive Bayes classification method.
APA, Harvard, Vancouver, ISO, and other styles
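The 5-fold cross-validation protocol mentioned in the abstract can be sketched generically as below; the classifier itself (neural network or Naive Bayes in the paper) is abstracted behind a callback, and all names are illustrative.

```cpp
// Sketch of k-fold cross-validation: the labelled gesture instances are split
// into folds, each fold is held out once for testing, and accuracies are
// averaged. The classifier is an abstract callback, not the paper's models.
#include <cstddef>
#include <functional>
#include <vector>

struct Sample { std::vector<float> features; int label; };

// trainAndTest: given training and test sets, returns accuracy on the test set.
double crossValidate(const std::vector<Sample>& data, int folds,
                     const std::function<double(const std::vector<Sample>&,
                                                const std::vector<Sample>&)>& trainAndTest)
{
    double sum = 0.0;
    for (int f = 0; f < folds; ++f) {
        std::vector<Sample> train, test;
        for (std::size_t i = 0; i < data.size(); ++i) {
            if (static_cast<int>(i % folds) == f) test.push_back(data[i]);
            else train.push_back(data[i]);
        }
        sum += trainAndTest(train, test);
    }
    return sum / folds;   // e.g. 0.9811 would correspond to the reported 98.11%
}
```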
7

Hsieh, Cheng Tiao. "A New Kinect-Based Scanning System and its Application." Applied Mechanics and Materials 764-765 (May 2015): 1375–79. http://dx.doi.org/10.4028/www.scientific.net/amm.764-765.1375.

Full text
Abstract:
This paper presents a simple approach that utilizes a Kinect-based scanner to create models suitable for 3D printing or other digital manufacturing machines. The output of a Kinect-based scanner is a depth map, which usually requires complicated computational processes to make it ready for digital fabrication. The necessary processes include noise filtering, point cloud alignment and surface reconstruction. Each process may require several functions and algorithms to accomplish these specific tasks. For instance, the Iterative Closest Point (ICP) algorithm is frequently used in 3D registration, and the bilateral filter is often used in noise point filtering. This paper attempts to develop a simple Kinect-based scanner and its specific modeling approach without involving the above complicated processes. The developed scanner consists of an ASUS Xtion Pro and a rotation table. The scanner generates a set of organized point clouds, which can be aligned precisely by a simple transformation matrix instead of the ICP. The surface quality of raw point clouds captured by Kinect is usually rough; to address this drawback, the paper introduces a solution for obtaining a smooth surface model. In addition, those processes have been efficiently developed with free open libraries: VTK, Point Cloud Library and OpenNI.
APA, Harvard, Vancouver, ISO, and other styles
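The alignment idea described above, replacing ICP with a known turntable transform, can be illustrated with a short Eigen-based sketch; the rotation axis, center and angle step are assumptions, since the paper's calibration details are not given here.

```cpp
// Sketch: because each scan comes from a rotation table turned by a known
// angle, every organized point cloud can be brought into a common frame with a
// single rigid transform (rotation about the table axis) instead of ICP.
#include <Eigen/Geometry>
#include <vector>

using Cloud = std::vector<Eigen::Vector3f>;

// Rotate a captured cloud back by the table angle so all views share one frame.
// 'axis' is the turntable's rotation axis in the camera frame (assumed known
// from calibration) and 'center' is a point on that axis.
Cloud alignByTurntable(const Cloud& scan, float angleRad,
                       const Eigen::Vector3f& axis, const Eigen::Vector3f& center)
{
    Eigen::Matrix3f R = Eigen::AngleAxisf(-angleRad, axis.normalized()).toRotationMatrix();
    Cloud out;
    out.reserve(scan.size());
    for (const Eigen::Vector3f& p : scan)
        out.push_back(R * (p - center) + center);
    return out;
}
```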
8

Pham, Huy-Hieu, Thi-Lan Le, and Nicolas Vuillerme. "Real-Time Obstacle Detection System in Indoor Environment for the Visually Impaired Using Microsoft Kinect Sensor." Journal of Sensors 2016 (2016): 1–13. http://dx.doi.org/10.1155/2016/3754918.

Full text
Abstract:
Any mobility aid for visually impaired people should be able to accurately detect nearby obstacles and warn about them. In this paper, we present a support-system method for detecting obstacles in indoor environments based on the Kinect sensor and 3D image processing. Color-depth data of the scene in front of the user is collected using the Kinect, supported by the standard 3D-sensing framework OpenNI, and processed with the PCL library to extract accurate 3D information about the obstacles. The experiments were performed on datasets from multiple indoor scenarios and under different lighting conditions. Results show that our system is able to accurately detect four types of obstacle: walls, doors, stairs, and a residual class that covers loose obstacles on the floor. Specifically, walls and loose obstacles on the floor are detected in practically all cases, whereas doors are detected in 90.69% of 43 positive image samples. For step detection, upstairs cases are correctly detected in 97.33% of 75 positive images, while the rate for downstairs detection is lower at 89.47% of 38 positive images. Our method further allows the computation of the distance between the user and the obstacles.
APA, Harvard, Vancouver, ISO, and other styles
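The PCL processing step mentioned above typically relies on RANSAC plane segmentation to remove the floor and walls before classifying the remaining points as obstacles. The sketch below shows that standard PCL call; the threshold is an assumed value, not the authors' setting.

```cpp
// Sketch: RANSAC plane segmentation with PCL, the usual way to pull the floor
// and walls out of a Kinect/OpenNI point cloud before the remaining points are
// treated as obstacle candidates. Parameter values are illustrative.
#include <pcl/ModelCoefficients.h>
#include <pcl/point_types.h>
#include <pcl/sample_consensus/method_types.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/segmentation/sac_segmentation.h>

pcl::PointIndices::Ptr segmentLargestPlane(
    const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud,
    pcl::ModelCoefficients::Ptr& coefficients)
{
    pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
    coefficients.reset(new pcl::ModelCoefficients);

    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setOptimizeCoefficients(true);
    seg.setModelType(pcl::SACMODEL_PLANE);   // fit a plane (floor or wall)
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(0.02);          // 2 cm tolerance (assumed)
    seg.setInputCloud(cloud);
    seg.segment(*inliers, *coefficients);    // inliers = points on the plane
    return inliers;
}
```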
9

Gember-Jacobson, Aaron, Raajay Viswanathan, Chaithan Prakash, Robert Grandl, Junaid Khalid, Sourav Das, and Aditya Akella. "OpenNF." ACM SIGCOMM Computer Communication Review 44, no. 4 (February 25, 2015): 163–74. http://dx.doi.org/10.1145/2740070.2626313.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ridler, Marc E., Nils van Velzen, Stef Hummel, Inge Sandholt, Anne Katrine Falk, Arnold Heemink, and Henrik Madsen. "Data assimilation framework: Linking an open data assimilation library (OpenDA) to a widely adopted model interface (OpenMI)." Environmental Modelling & Software 57 (July 2014): 76–89. http://dx.doi.org/10.1016/j.envsoft.2014.02.008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "OpenNI"

1

Milzoni, Alessandro. "Kinect e openNI a supporto delle NUI (Natural User Interface) applications." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/6113/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Málek, Miroslav. "Mobilní robot řízený KINECTem." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2013. http://www.nusl.cz/ntk/nusl-219977.

Full text
Abstract:
This project deals with the design of a mobile robot controlled by MS Kinect. The robot's movement is driven by depth data processed on a suitable ARM processor. A module is designed for serial communication between the processor and the robot chassis, and software applications are developed for both the user's computer and the ARM processor to control each part of the robot. Finally, the project presents the completed robot controlled by the ARM processor software. The robot is capable of controlled movement between obstacles, which allows it to avoid coming into contact with them.
APA, Harvard, Vancouver, ISO, and other styles
3

Jaroň, Lukáš. "Ovládání počítače gesty." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236609.

Full text
Abstract:
This master's thesis describes the possibilities and principles of a gesture-based computer interface. The work outlines general approaches to gesture control and deals with the implementation of the selected method for detecting hands and fingers using depth maps loaded from the Kinect sensor. The implementation also covers gesture recognition using hidden Markov models. For demonstration purposes, a simple photo viewer that uses the developed gesture-based interface is described. The work also focuses on quality testing and accuracy evaluation of the selected gesture recognizer.
APA, Harvard, Vancouver, ISO, and other styles
4

Grotti, Simone. "Ingegnerizzazione di sistemi software basati su schermi adattativi pervasivi." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/6396/.

Full text
Abstract:
Engineering of software systems for pervasive displays that recognize observers' attention and adapt their content accordingly, interacting through natural interfaces. A framework is proposed to facilitate the development of applications that use Kinect and OpenNI. Based on the implemented framework, the development of a prototype for one of these systems, set in an academic context, is also presented.
APA, Harvard, Vancouver, ISO, and other styles
5

Cinesi, Andrea. "Installazione del navigation stack su rover terrestre e applicazione del kinect nello human-robot interaction." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/10392/.

Full text
Abstract:
SHERPA project. Installation and configuration of the Navigation Stack on a ground rover. Use and configuration of the SICK LMS151. Use and configuration of the Asus Xtion Pro. Design of software for locating and following people by means of a depth camera.
APA, Harvard, Vancouver, ISO, and other styles
6

Nyman, Edward Jr. "The Effects of an OpenNI / Kinect-Based Biofeedback Intervention on Kinematics at the Knee During Drop Vertical Jump Landings: Implications for Reducing Neuromuscular Predisposition to Non-Contact ACL Injury Risk in the Young Female Athlete." University of Toledo / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1381269608.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ali, Akhtar. "Comparative study of parallel programming models for multicore computing." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-94296.

Full text
Abstract:
Shared memory multi-core processor technology has seen drastic development, with faster and increasing numbers of processors per chip. This new architecture challenges computer programmers to write code that scales over these many cores to exploit the full computational power of these machines. Shared-memory parallel programming paradigms such as OpenMP and Intel Threading Building Blocks (TBB) are two recognized models that offer a higher level of abstraction, shield programmers from low-level details of thread management and scale computation over all available resources. At the same time, the need for high-performance, power-efficient computing is compelling developers to exploit GPGPU computing due to the GPU's massive computational power and comparatively faster multi-core growth. This trend leads to systems with heterogeneous architectures containing multicore CPUs and one or more programmable accelerators such as programmable GPUs. There exist different programming models to program these architectures, and code written for one architecture is often not portable to another architecture. OpenCL is a relatively new industry-standard framework, defined by the Khronos group, which addresses the portability issue. It offers a portable interface to exploit the computational power of a heterogeneous set of processors such as CPUs, GPUs, DSP processors and other accelerators. In this work, we evaluate the effectiveness of OpenCL for programming multi-core CPUs in a comparative case study with two CPU-specific stable frameworks, OpenMP and Intel TBB, for five benchmark applications, namely matrix multiply, LU decomposition, image convolution, Pi value approximation and image histogram generation. The evaluation includes a performance comparison of the three frameworks and a study of the relative effects of applying compiler optimizations on performance numbers. OpenCL performance on two vendor-dependent platforms, Intel and AMD, is also evaluated. Then the same OpenCL code is ported to a modern GPU and its code correctness and performance portability are investigated. Finally, the usability experience of coding with the three multi-core frameworks is presented.
APA, Harvard, Vancouver, ISO, and other styles
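One of the benchmark kernels listed in the abstract, matrix multiplication, is easy to illustrate in its OpenMP form; the sketch below is a generic parallelized loop, not the thesis' benchmark code, and the matrix size and row-major layout are assumptions.

```cpp
// Sketch: square matrix multiply parallelized with OpenMP, as a point of
// comparison with TBB and OpenCL versions of the same kernel.
#include <omp.h>
#include <vector>

void matmul(const std::vector<float>& A, const std::vector<float>& B,
            std::vector<float>& C, int n)
{
    // Each iteration of the outer loop is independent, so the rows of C can be
    // computed by different threads.
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            float sum = 0.0f;
            for (int k = 0; k < n; ++k)
                sum += A[i * n + k] * B[k * n + j];
            C[i * n + j] = sum;
        }
}
```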
8

Örtenberg, Alexander. "Parallelization of DIRA and CTmod Using OpenMP and OpenCL." Thesis, Linköpings universitet, Informationskodning, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119183.

Full text
Abstract:
Parallelization is the answer to the ever-growing demands of computing power by taking advantage of multi-core processor technology and modern many-core graphics compute units. Multi-core CPUs and many-core GPUs have the potential to substantially reduce the execution time of a program but it is often a challenging task to ensure that all available hardware is utilized. OpenMP and OpenCL are two parallel programming frameworks that have been developed to allow programmers to focus on high-level parallelism rather than dealing with low-level thread creation and management. This thesis applies these frameworks to the area of computed tomography by parallelizing the image reconstruction algorithm DIRA and the photon transport simulation toolkit CTmod. DIRA is a model-based iterative reconstruction algorithm in dual-energy computed tomography, which has the potential to improve the accuracy of dose planning in radiation therapy. CTmod is a toolkit for simulating primary and scatter projections in computed tomography to optimize scanner design and image reconstruction algorithms. The results presented in this thesis show that parallelization combined with computational optimization substantially decreased execution times of these codes. For DIRA the execution time was reduced from two minutes to just eight seconds when using four iterations and a 16-core CPU so a speedup of 15 was achieved. CTmod produced similar results with a speedup of 14 when using a 16-core CPU. The results also showed that for these particular problems GPU computing was not the best solution.
APA, Harvard, Vancouver, ISO, and other styles
9

Balasubramanian, ArunKumar. "Benchmarking of Vision-Based Prototyping and Testing Tools." Master's thesis, Universitätsbibliothek Chemnitz, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-229999.

Full text
Abstract:
The demand for Advanced Driver Assistance System (ADAS) applications is increasing day by day, and their development requires efficient prototyping and real-time testing. ADTF (Automotive Data and Time Triggered Framework) is a software tool from Elektrobit which is used for the development, validation and visualization of vision-based applications, mainly for ADAS and autonomous driving. With the help of the ADTF tool, image or video data can be recorded and visualized, and data can be tested both on-line and off-line. The development of ADAS applications requires image and video processing, and the algorithms have to be highly efficient and must satisfy real-time requirements. The main objective of this research is to integrate the OpenCV library with the cross-platform ADTF. OpenCV libraries provide efficient image processing algorithms which can be used with ADTF for quick benchmarking and testing. An ADTF filter framework has been developed in which OpenCV algorithms can be used directly, and the framework is tested with .DAT and image files following a modular approach. CMake is also explained in this thesis as a way to build the system with ease. The ADTF filters are developed in Microsoft Visual Studio 2010 in C++, and the OpenMP API is used for the parallel programming approach.
APA, Harvard, Vancouver, ISO, and other styles
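As a rough illustration of the kind of OpenCV processing the abstract says is wrapped in ADTF filters (the proprietary ADTF filter API itself is not reproduced here), the sketch below runs a standard blur-and-edge pipeline on a frame; the parameter values are assumptions.

```cpp
// Sketch: a typical OpenCV processing step that could sit inside a prototyping
// filter: convert to grayscale, denoise, and produce an edge map.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

cv::Mat processFrame(const cv::Mat& bgrFrame)
{
    cv::Mat gray, edges;
    cv::cvtColor(bgrFrame, gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.5);  // reduce sensor noise
    cv::Canny(gray, edges, 50, 150);                    // assumed thresholds
    return edges;
}
```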
10

Cadavez, Tiago João Gonçalves. "Análise de imagens tomográficas: visualização e paralelização de processamento." Master's thesis, FCT - UNL, 2008. http://hdl.handle.net/10362/2791.

Full text
Abstract:
Master's dissertation in Informatics Engineering
X-ray micro-tomography using synchrotron radiation is a well-developed technique in the field of medicine and has more recently been adopted in other areas, notably materials engineering. It is a non-destructive technique that makes it possible to analyze the internal structure of components. The object under study is exposed over its whole surface to a beam of radiation that penetrates the material, and a set of detectors records the intensity of the rays as the object rotates. This procedure produces a data file, which can be on the order of gigabytes in size and which needs to be processed in order to visualize the internal structure. Given the large volume of data and the complexity of some processing algorithms, certain types of processing can take days. Tritom, the program of the French scientist Gerard Vignoles, processed the tomographic data sequentially and took a long time for some operations. In an earlier effort, Paulo Quaresma optimized and parallelized some of the most time-consuming operations using a cluster of computers and message-passing programming. In this thesis, the heaviest operations of Tritom were parallelized using the shared-memory model, implemented with the OpenMP tool. Obtaining results faster benefits materials research and takes advantage of the introduction of multi-processors in common personal computer architectures. Tests were carried out to analyze the improvements in execution time achieved with this method. This thesis also integrated Tritom into an interactive graphical data visualization environment called OpenDX, which makes the program much easier to use for less experienced users. The user can choose the processing steps to perform, in the form of modules and in any order, entirely within a graphical environment. It also allows three-dimensional visualization of data, which is vital for perceiving certain phenomena in the objects under study. Some new modules were also created at the request of materials engineering researchers. Tritom, parallelized in this way, will offer materials scientists a good tool for analyzing tomographic images with simple and intuitive interaction. They will gain a valuable asset for quickly examining their objects of study without resorting to clusters or complex, hard-to-access computer configurations.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "OpenNI"

1

Davison, Andrew. Kinect open source programming secrets: Hacking the Kinect with OpenNI, NITE, and Java. New York: McGraw-Hill, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Soma, Śobhana. Openṭi bāiskopa. Kalakātā: Kyāmpa, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ĭoncheva, Gali︠a︡. Operni pŭteki. Sofii︠a︡: Izd-vo "Sibi", 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hrvatski operni pjevači. Zagreb: Nakladni zavod Matice hrvatske, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Stefano, Musso. Operai. Torino: Rosenberg & Sellier, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kelly, Matthew. Opening. New York: Samuel French, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Alexander, Becky D., ed. Opening. Cambridge, Canada: Craigleigh Press, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kelly, Matthew. Opening. New York: Samuel French, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Joy, Cincerelli Carol. Opening. Mentor, Ohio: Learning Concepts, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hoffmann, Simon, and Rainer Lienhart. OpenMP. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-73123-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "OpenNI"

1

Cho, Ok-Hue, and Won-Hyung Lee. "Gesture Recognition Using Simple-OpenNI for Implement Interactive Contents." In Lecture Notes in Electrical Engineering, 141–46. Dordrecht: Springer Netherlands, 2012. http://dx.doi.org/10.1007/978-94-007-5064-7_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tsai, Chih-Hsiao, and Jung-Chuan Yen. "Teaching Spatial Visualization Skills Using OpenNI and the Microsoft Kinect Sensor." In Lecture Notes in Electrical Engineering, 617–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-55038-6_97.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Nandy, Abhishek, and Manisha Biswas. "OpenAI Basics." In Reinforcement Learning, 71–87. Berkeley, CA: Apress, 2017. http://dx.doi.org/10.1007/978-1-4842-3285-9_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Moraña, Mabel. "Opening." In Arguedas / Vargas Llosa, 7–11. New York: Palgrave Macmillan US, 2016. http://dx.doi.org/10.1057/978-1-137-57187-8_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Elleström, Lars. "Opening." In Transmedial Narration, 3–19. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01294-6_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chivers, Ian, and Jane Sleightholme. "OpenMP." In Introduction to Programming with Fortran, 605–19. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75502-1_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wonnacott, David, Barbara Chapman, James LaGrone, Karl Fürlinger, Stephen W. Poole, Oscar Hernandez, Jeffery A. Kuehn, et al. "OpenMP." In Encyclopedia of Parallel Computing, 1365–71. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chivers, Ian, and Jane Sleightholme. "OpenMP." In Introduction to Programming with Fortran, 489–99. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-17701-4_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gobet, Fernand. "Opening." In The Psychology of Chess, 1–6. New York : Routledge, 2019. | Series: The Psychology of Everything: Routledge, 2018. http://dx.doi.org/10.4324/9781315441887-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Singh, Deshanand, and Peter Yiannacouras. "OpenCL." In FPGAs for Software Programmers, 97–114. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-26408-0_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "OpenNI"

1

Cho, Ok-Hue, and Won-Hyung Lee. "The flying : Kinect art using OpenNI and learning system." In ACM SIGGRAPH 2012 Posters. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2342896.2342931.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Villaroman, Norman, Dale Rowe, and Bret Swan. "Teaching natural user interaction using OpenNI and the Microsoft Kinect sensor." In the 2011 conference. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2047594.2047654.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bucur, Adrian. "OpenCL - OpenGL ES interop." In ACM SIGGRAPH 2013 Mobile. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2503512.2503532.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Shindev, Ivan, Shane Marlin, Nathan Preseault, Rodrigo Tamayo, William Pence, and Redwan Alqasemi. "Obstacle Detection and Avoidance Using Wavefront Planner and Kinect on a WMRA." In ASME 2012 Summer Bioengineering Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/sbc2012-80681.

Full text
Abstract:
Obstacle avoidance for autonomous navigation platforms is a well-known problem that can be solved in numerous ways. This paper considers and analyzes the use of a wavefront planner as an obstacle avoidance algorithm for a 9-DoF wheelchair-mounted robotic arm (WMRA) [1]. It also presents a suitable solution for obstacle detection using the OpenNI driver for interfacing with Microsoft's Kinect. It further analyzes the capabilities for autonomous operation of the WMRA and explains how this algorithm can be implemented in its navigation control. The results of this project showed that the Kinect can provide a very accurate representation of the surroundings. The wavefront planner can use this data to find a path from a start position to a goal without running into an obstacle.
APA, Harvard, Vancouver, ISO, and other styles
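A wavefront planner, as used in the paper, propagates a breadth-first wave from the goal over an occupancy grid and then follows decreasing values from the start. The sketch below is a generic grid version of that textbook formulation, not the WMRA implementation; the grid contents are assumptions.

```cpp
// Sketch of a wavefront planner on an occupancy grid: a breadth-first wave is
// propagated from the goal, and a path is recovered by always stepping to a
// neighbour with a smaller wave value.
#include <queue>
#include <utility>
#include <vector>

// grid: 0 = free, 1 = obstacle. Returns a wave map where the goal holds 2 and
// each reachable free cell holds distance-from-goal + 2 (0 means unreached).
std::vector<std::vector<int>> wavefront(const std::vector<std::vector<int>>& grid,
                                        int goalRow, int goalCol)
{
    const int rows = static_cast<int>(grid.size());
    const int cols = static_cast<int>(grid[0].size());
    std::vector<std::vector<int>> wave(rows, std::vector<int>(cols, 0));
    std::queue<std::pair<int, int>> q;
    wave[goalRow][goalCol] = 2;
    q.push({goalRow, goalCol});

    const int dr[4] = {-1, 1, 0, 0}, dc[4] = {0, 0, -1, 1};
    while (!q.empty()) {
        auto [r, c] = q.front();
        q.pop();
        for (int k = 0; k < 4; ++k) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
            if (grid[nr][nc] == 1 || wave[nr][nc] != 0) continue;   // obstacle or visited
            wave[nr][nc] = wave[r][c] + 1;
            q.push({nr, nc});
        }
    }
    return wave;   // from the start cell, follow strictly decreasing values to the goal
}
```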
5

Lin, Yu-Te, Shao-Chung Wang, Wen-Li Shih, Brian Kun-Yuan Hsieh, and Jenq-Kuen Lee. "Enable OpenCL Compiler with Open64 Infrastructures." In Communication (HPCC). IEEE, 2011. http://dx.doi.org/10.1109/hpcc.2011.123.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Burns, Brian, and Biswanath Samanta. "Human Identification for Human-Robot Interactions." In ASME 2014 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/imece2014-38496.

Full text
Abstract:
In co-robotics applications, robots must identify human partners and recognize their status in dynamic interactions for enhanced acceptance and effectiveness as socially interactive agents. Using data from depth cameras, people can be identified from their skeletal information. This paper presents the implementation of a human identification algorithm using a depth camera (Carmine from PrimeSense), an open-source middleware (NITE from OpenNI) with the Java-based Processing language, and an Arduino microcontroller. This implementation and communication set a framework for future applications of human-robot interaction. Based on the movements of the individual in the depth sensor's field of view, the program can be set to track a human skeleton or the closest pixel in the image. Joint locations in the tracked human can be isolated for specific use by the program; joints include the head, torso, shoulders, elbows, hands, knees and feet. Logic and calibration techniques were used to create systems such as a face-tracking pan-and-tilt servomotor mechanism. The control system presented here lays the groundwork for future implementation in student-built animatronic figures and mobile robot platforms such as Turtlebot.
APA, Harvard, Vancouver, ISO, and other styles
7

Gasparakis, Harris. "Heterogeneous compute in computer vision: OpenCL in OpenCV." In IS&T/SPIE Electronic Imaging, edited by Amir Said, Onur G. Guleryuz, and Robert L. Stevenson. SPIE, 2014. http://dx.doi.org/10.1117/12.2054961.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Burns, Brian, and Biswanath Samanta. "Mechanical Design and Control Calibration for an Interactive Animatronic System." In ASME 2015 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/imece2015-52477.

Full text
Abstract:
Animatronic figures provide key show effects in the entertainment and theme park industry by simulating life-like animations and sounds. There is a need for interactive, autonomous animatronic systems to create engaging and compelling experiences for guests. The animatronic figures must identify guests and recognize their status in dynamic interactions for enhanced acceptance and effectiveness as socially interactive agents, in the general framework of human-robot interaction. This work presents the design and implementation of an interactive, autonomous animatronic system in the form of a tabletop dragon, and compares guest responses in its passive and interactive modes. The purpose of this research is to create a platform that may be used to validate autonomous, interactive behaviors in animatronics, utilizing both quantitative and qualitative methods of analyzing guest response. The dragon's capabilities include a four degrees-of-freedom head, moving wings, tail, jaw, blinking eyes and sound effects. Human identification, using a depth camera (Carmine from PrimeSense), an open-source middleware (NITE from OpenNI), Java-based Processing and an Arduino microcontroller, has been implemented in the system in order to track one or more guests within the field of view of the camera. The details of the design and fabrication of the dragon model, the algorithm development for interactive autonomous behavior using a vision system, the experimental setup and the implementation results under different conditions are presented.
APA, Harvard, Vancouver, ISO, and other styles
9

Chen, Jinfa, and Won-jong Kim. "Development of a Mobile Robot Providing a Natural Way to Interact With Electronic Devices." In ASME 2016 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/dscc2016-9751.

Full text
Abstract:
This paper provides a natural yet low-cost way for humans to interact with electronic devices indoors: the development of a human-following mobile robot capable of controlling other electrical devices for the user based on the user's gesture commands. The overall experimental setup consists of a skid-steered mobile robot, a Kinect sensor, a laptop, a wide-angle camera and two lamps. The OpenNI middleware is used to process data from the Kinect sensor, and OpenCV is used to process data from the wide-angle camera. A new human-following algorithm is proposed based on human motion estimation. The human-following control system consists of two feedback control loops, for linear and rotational motion respectively. A lead-lag and a lead controller are developed for the linear and rotational motion control loops, respectively. Experimental results show that the tracking algorithm is robust and reduced the distance and angular errors by 40% and 50%, respectively. There are small delays (0.5 s for linear motion and 1.5 s for rotational motion) and steady-state errors (0.1 m for linear motion and 1.5° for rotational motion) in the system's response. However, the delays and errors are acceptable since they do not push the tracking distance or angle out of the desirable range (±0.05 m and ±10° of the reference input). There are four gestures designed for the user to control the robot: two switch-mode gestures and lamp-creation, lamp-selection and color-change gestures. Success rates of gesture recognition are more than 90% within the detectable range of the Kinect sensor.
APA, Harvard, Vancouver, ISO, and other styles
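The abstract describes two feedback loops driven by the distance and angular errors to the tracked person. The sketch below shows that structure with plain proportional gains substituted for the paper's lead-lag and lead compensators; all constants, names and sign conventions are assumptions.

```cpp
// Simplified sketch of the two-loop human-following structure: the tracked
// person's position (e.g. the skeleton torso in the camera frame) gives a
// distance error and an angular error, and each error drives one motion loop.
// Proportional gains stand in for the paper's lead-lag / lead compensators.
#include <cmath>

struct VelocityCommand { double linear; double angular; };   // m/s, rad/s

VelocityCommand followHuman(double targetX, double targetZ)   // person position in metres
{
    const double desiredDistance = 1.2;   // keep ~1.2 m from the person (assumed)
    const double kLinear = 0.8;           // assumed gains
    const double kAngular = 1.5;

    double distance = std::sqrt(targetX * targetX + targetZ * targetZ);
    double angle    = std::atan2(targetX, targetZ);   // 0 when the person is centred

    VelocityCommand cmd;
    cmd.linear  = kLinear * (distance - desiredDistance);  // close the distance error
    cmd.angular = kAngular * angle;   // turn toward the person (sign depends on robot frame)
    return cmd;
}
```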
10

Denysenko, O. "DIFFERENTIATION OF VULKAN API, OPENCL, AND OPENGL FOR RAY TRACING ALGORITHMS." In Scientific discoveries: projects, strategies and development. European Scientific Platform, 2019. http://dx.doi.org/10.36074/25.10.2019.v1.09.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "OpenNI"

1

Scott, John M., III. Open Component Portability Infrastructure (OPENCPI). Fort Belvoir, VA: Defense Technical Information Center, November 2009. http://dx.doi.org/10.21236/ada510918.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kulp, James, Shepard Siegel, and John Miller. Open Component Portability Infrastructure (OPENCPI). Fort Belvoir, VA: Defense Technical Information Center, March 2013. http://dx.doi.org/10.21236/ada580701.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Tam, Wai Cheong, and Walter W. Yuen. OpenSC :. Gaithersburg, MD: National Institute of Standards and Technology, September 2019. http://dx.doi.org/10.6028/nist.tn.2064.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Darnley, A. G. Opening remarks. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 1986. http://dx.doi.org/10.4095/122340.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Narayanan, Niju. Openair Biofactories. Office of Scientific and Technical Information (OSTI), July 2019. http://dx.doi.org/10.2172/1542792.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhao, Y., X. Shen, and C. Liao. OpenK: An Open Infrastructure for the Accumulation, Sharing and Reuse of High Performance Computing Knowledge. Office of Scientific and Technical Information (OSTI), May 2020. http://dx.doi.org/10.2172/1617288.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kim, Woohyun, Robert G. Lutes, Srinivas Katipamula, Jereme N. Haack, Brandon J. Carpenter, Bora A. Akyol, Kyle E. Monson, Craig H. Allwardt, Timothy Kang, and Poorva Sharma. OpenEIS. Users Guide. Office of Scientific and Technical Information (OSTI), February 2015. http://dx.doi.org/10.2172/1203910.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lutes, Robert G., Casey C. Neubauer, Jereme N. Haack, Brandon J. Carpenter, Kyle E. Monson, Craig H. Allwardt, Poorva Sharma, and Bora A. Akyol. OpenEIS. Developer Guide. Office of Scientific and Technical Information (OSTI), March 2015. http://dx.doi.org/10.2172/1203911.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Maron, Nancy. Opening the Textbook. New York: Ithaka S+R, March 2014. http://dx.doi.org/10.18665/sr.24783.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Ying Wai. Basic OpenMP and Profiling. Office of Scientific and Technical Information (OSTI), May 2020. http://dx.doi.org/10.2172/1618304.

Full text
APA, Harvard, Vancouver, ISO, and other styles