
Journal articles on the topic '3D Navigation'



Consult the top 50 journal articles for your research on the topic '3D Navigation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Karetnikov, V. V., A. A. Prokhorenkov, and Yu G. Andreev. "Limiting navigational and hydrographic factors when using 3D electronic navigation charts for ship handling during locks passage." Vestnik Gosudarstvennogo universiteta morskogo i rechnogo flota imeni admirala S. O. Makarova 16, no. 6 (January 16, 2025): 825–36. https://doi.org/10.21821/2309-5180-2024-16-6-825-836.

Full text
Abstract:
A vessel is a complex moving object whose motion is easy to predict only while navigation conditions remain unchanged. As conditions become more difficult, predicting the vessel's behaviour requires a complex set of actions to assess its position and motion parameters relative to the limits and direction of the waterway. Lock approaches are characterized by significantly congested traffic, variable dimensions, a tortuous waterway, and spatial navigational hazards of complex form, marked by floating and shore aids to navigation arranged in complex patterns. The complexity of these navigational and hydrographic conditions therefore limits the navigator's ability to control the vessel. As conditions grow more complex, the navigator shifts priorities in controlling the vessel's motion: from controlling the course, to controlling the speed vector, and then to controlling the motion of the stem and stern. Navigating a vessel in difficult navigational and hydrographic conditions requires periodic clarification of navigational information, for which the navigator turns to the navigational chart as a visual source of navigational information. Even with modern aids to navigation for displaying navigational information, such as an Electronic Chart Display and Information System (ECDIS), lock approaches remain difficult to assess because navigation conditions vary while the waterway's system of navigation equipment is static. Using an ECDIS capable of displaying 3D electronic navigational charts (ENCs) to support the navigator on lock approaches is a new development, and this article uses modern methods to evaluate the effectiveness of such charts, which is significant for the navigator, in the process of ship handling through locks.
APA, Harvard, Vancouver, ISO, and other styles
2

Sanftmann, Harald, and D. Weiskopf. "3D Scatterplot Navigation." IEEE Transactions on Visualization and Computer Graphics 18, no. 11 (November 2012): 1969–78. http://dx.doi.org/10.1109/tvcg.2012.35.

3

Vosinakis, Spyros, and Anna Gardeli. "On the Use of Mobile Devices as Controllers for First-Person Navigation in Public Installations." Information 10, no. 7 (July 11, 2019): 238. http://dx.doi.org/10.3390/info10070238.

Abstract:
User navigation in public installations displaying 3D content is mostly supported by mid-air interaction using motion sensors, such as Microsoft Kinect. On the other hand, smartphones have been used as external controllers for large-screen installations and game environments, and they may also be effective in supporting 3D navigation. This paper aims to examine whether smartphone-based control is a reliable alternative to mid-air interaction for four-degrees-of-freedom (4-DOF) first-person navigation, and to discover suitable interaction techniques for a smartphone controller. For this purpose, we set up two studies: a comparative study between smartphone-based and Kinect-based navigation, and a gesture elicitation study to collect user preferences and intentions regarding 3D navigation methods using a smartphone. The results of the first study were encouraging, as users with smartphone input performed at least as well as with Kinect and most of them preferred it as a means of control, whilst the second study produced a number of noteworthy results regarding proposed user gestures and users' stance towards using a mobile phone for 3D navigation.
4

Mayalu, Alfred, Kevin Kochersberger, Barry Jenkins, and François Malassenet. "Lidar Data Reduction for Unmanned Systems Navigation in Urban Canyon." Remote Sensing 12, no. 11 (May 27, 2020): 1724. http://dx.doi.org/10.3390/rs12111724.

Abstract:
This paper introduces a novel protocol for managing low altitude 3D aeronautical chart data to address the unique navigational challenges and collision risks associated with populated urban environments. Based on the Open Geospatial Consortium (OGC) 3D Tiles standard for geospatial data delivery, the proposed extension, called 3D Tiles Nav., uses a navigation-centric packet structure which automatically decomposes the navigable regions of space into hyperlocal navigation cells and encodes environmental surfaces that are potentially visible from each cell. The developed method is sensor agnostic and provides the ability to quickly and conservatively encode visibility directly from a region by enabling an expanded approach to viewshed analysis. In this approach, the navigation cells themselves are used to represent the intrinsic positional uncertainty often needed for navigation. Furthermore, we present in detail this new data format and its unique features as well as a candidate framework illustrating how an Unmanned Traffic Management (UTM) system could support trajectory-based operations and performance-based navigation in the urban canyon. Our results, experiments, and simulations conclude that this data reorganization enables 3D map streaming using less bandwidth and efficient 3D map-matching systems with limited on-board compute, storage, and sensor resources.
5

Kim, Youngwon Ryan, Hyeonah Choi, Minwook Chang, and Gerard J. Kim. "Applying Touchscreen Based Navigation Techniques to Mobile Virtual Reality with Open Clip-On Lenses." Electronics 9, no. 9 (September 5, 2020): 1448. http://dx.doi.org/10.3390/electronics9091448.

Abstract:
Recently, a new breed of mobile virtual reality (dubbed “EasyVR” in this work) has appeared, in which non-isolating magnifying lenses are conveniently clipped onto the smartphone while still offering a level of immersion comparable to an isolated headset. Furthermore, such a form factor allows the fingers to touch the screen and select objects quite accurately, even though the fingers appear unfocused through the lenses. Many navigation techniques exist for both casual smartphone 3D applications using the touchscreen and immersive VR environments using various controllers/sensors. However, no research has focused on a navigation technique suited to a platform like EasyVR, which necessitates using the touchscreen while holding the display device to the head and looking through the magnifying lenses. To design and propose the most fitting navigation method(s) for EasyVR, we mixed and matched conventional touchscreen-based and headset-oriented navigation methods to produce six viable navigation techniques, specifically for selecting the travel direction and invoking the movement itself, including head rotation, on-screen keypads/buttons, one-touch teleport, drag-to-target, and finger gestures. These methods were experimentally compared for basic usability and level of immersion when navigating 3D space with six degrees of freedom. The results provide a valuable guideline for designing or choosing the proper navigation method for the different navigational needs of a given VR application.
6

Nolte, L. P. "3D imaging, planning, navigation." Minimally Invasive Therapy & Allied Technologies 12, no. 1-2 (January 2003): 3–4. http://dx.doi.org/10.1080/13645700310013187.

7

Vasin, Yu G., M. P. Osipov, A. A. Egorov, and Yu V. Yasakov. "Autonomous indoor 3D navigation." Pattern Recognition and Image Analysis 25, no. 3 (July 2015): 373–77. http://dx.doi.org/10.1134/s1054661815030256.

8

Haigron, P., G. Le Berre, and J. L. Coatrieux. "3D navigation in medicine." IEEE Engineering in Medicine and Biology Magazine 15, no. 2 (1996): 70–78. http://dx.doi.org/10.1109/51.486721.

9

Bansal, Vijay Kumar. "A Road-Based 3D Navigation System in GIS: A Case Study of an Institute Campus." International Journal of Applied Geospatial Research 14, no. 1 (January 27, 2023): 1–20. http://dx.doi.org/10.4018/ijagr.316887.

Abstract:
A user-controlled navigation system is an important aspect of human-computer interaction. Finding the best path from one location to another and navigating along it has been a central concern in geo-virtual navigation. Earlier studies of geo-virtual navigation systems focus mainly on navigation and visualization but lack geo-spatial analysis. Geo-spatial analysis is the domain of geographic information systems (GIS), in which 3D geo-spatial information is used for navigation, geo-visualization, and geo-spatial analysis. The present study deals with wayfinding in the road network of the campus of the National Institute of Technology (NIT) Hamirpur, India, in hilly terrain. It supports various types of geo-spatial analyses on the road network and virtual travel in 3D space.
10

Stateczny, Andrzej, Wioleta Błaszczak-Bąk, Anna Sobieraj-Żłobińska, Weronika Motyl, and Marta Wisniewska. "Methodology for Processing of 3D Multibeam Sonar Big Data for Comparative Navigation." Remote Sensing 11, no. 19 (September 26, 2019): 2245. http://dx.doi.org/10.3390/rs11192245.

Abstract:
Autonomous navigation is an important task for unmanned vehicles operating both on the surface and underwater. A sophisticated solution for autonomous navigation without a global navigation satellite system is comparative (terrain-reference) navigation. We present a method for fast processing of 3D multibeam sonar data that makes the measured depth areas comparable with depth areas from bathymetric electronic navigational charts used as source maps in comparative navigation. Recording the bottom of a channel, river, or lake with a 3D multibeam sonar produces a large number of measurement points, so the big dataset is reduced in steps in near real time. Usually the whole dataset from a multibeam echo sounder is processed; in this work, a new methodology for processing 3D multibeam sonar big data is proposed, based on stepwise processing of the dataset with generation of 3D models and isoline maps. For faster product generation we used the optimum dataset method, modified for the purposes of bathymetric data processing. The approach enables detailed examination of the bottom of bodies of water and makes it possible to capture major changes. In addition, the method can detect objects on the bottom that should be eliminated when constructing the 3D model. We create and combine partial 3D models based on the reduced sets to inspect the bottom of water reservoirs in detail. Analyses were conducted for the original and reduced datasets; for both, 3D models were generated in variants with and without overlap between them. Tests show that models generated from the reduced dataset are more useful, because significant elements of the measured area become much more visible and can be used in comparative navigation. In fragmentary processing of the data, the presence or absence of overlap between generated models did not meaningfully affect height accuracy, but model generation was faster for the variants without overlap.
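The stepwise reduction described in this abstract can be illustrated with a simplified stand-in for the modified optimum dataset method: grid-based thinning that keeps one sounding per horizontal cell. Retaining the shoalest (minimum-depth) sounding is the conservative choice for navigation products. All names, cell sizes, and sample values below are illustrative, not taken from the paper.

```python
def reduce_soundings(points, cell_size):
    """Thin a sounding cloud by keeping one point per horizontal grid cell.

    A simplified stand-in for the paper's modified optimum dataset method:
    the shoalest (minimum-depth) sounding per cell is retained, which is
    the conservative choice for navigation-oriented products.
    points: iterable of (x, y, depth) tuples; depth is positive downward.
    """
    cells = {}
    for x, y, depth in points:
        key = (int(x // cell_size), int(y // cell_size))
        if key not in cells or depth < cells[key][2]:
            cells[key] = (x, y, depth)
    return list(cells.values())

# Four raw soundings collapse to one shoalest sounding per 5 m cell.
raw = [(0.2, 0.3, 12.5), (0.7, 0.1, 11.9), (5.4, 0.2, 14.0), (5.9, 0.8, 13.2)]
thinned = reduce_soundings(raw, 5.0)
```

A real pipeline would run this per processing step on streamed sonar blocks before model and isoline generation.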
11

Lamb, Matthew, and J. G. Hollands. "Viewpoint Tethering in Complex Terrain Navigation and Awareness." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 49, no. 17 (September 2005): 1573–77. http://dx.doi.org/10.1177/154193120504901716.

Abstract:
Twelve participants navigated a simulated vehicle across complex virtual terrain using five different display viewpoints: egocentric, dynamic tether, rigid tether, three-dimensional (3D) exocentric, and two-dimensional (2D) exocentric. While navigating, participants had to avoid being seen by simulated enemy units. After the navigation task, participants' spatial awareness was assessed using a recognition task. The egocentric display was more effective than the exocentric displays (2D or 3D) for navigation, and the exocentric displays were more effective than the egocentric display for time seen during navigation and for the recognition task. The tethered displays generally produced intermediate results, but minimized the time during which the participant's avatar was visible to enemy positions. In summary, it would appear that the tether facilitated spatial awareness involving knowledge of locations of interest with respect to one's own position while navigating.
12

Yu, Jie, and Qiang Shi. "Efficacy Evaluation of 3D Navigational Template for Salter Osteotomy of DDH in Children." BioMed Research International 2021 (May 22, 2021): 1–7. http://dx.doi.org/10.1155/2021/8832617.

Abstract:
Background. The aim of this study is to retrospectively evaluate the efficacy of a 3D navigational template for Salter osteotomy of DDH in children. Methods. Thirty-two consecutive patients with DDH who underwent Salter osteotomy between July 2014 and August 2017 were evaluated and divided into a conventional group (n = 16) and a navigation template group (n = 16) according to the surgical method. The corrective acetabular degrees, radiation exposure, and operation time were compared between the two groups. Results. No nerve palsy or redislocation was reported in the navigation template group. Compared with the conventional group, the navigation template group had the advantages of more accurate acetabular correction, less radiation exposure, and shorter operation time (P < 0.05). Meanwhile, the navigation template group achieved a better surgical outcome than the conventional group (McKay, P = 0.0293; Severin, P = 0.0949). Conclusions. The 3D navigational template for Salter osteotomy of DDH is simple and effective, and could be an alternative approach for improving the accuracy and efficacy of Salter osteotomy.
13

Díaz-Vilariño, L., P. Boguslawski, K. Khoshelham, H. Lorenzo, and L. Mahdjoubi. "INDOOR NAVIGATION FROM POINT CLOUDS: 3D MODELLING AND OBSTACLE DETECTION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B4 (June 13, 2016): 275–81. http://dx.doi.org/10.5194/isprs-archives-xli-b4-275-2016.

Abstract:
In recent years, indoor modelling and navigation have become topics of research interest because many stakeholders require navigation assistance in various application scenarios. Navigational assistance for blind or wheelchair users, building crisis management such as fire protection, augmented reality for gaming and tourism, and training of emergency assistance units are just some of the direct applications of indoor modelling and navigation. Navigational information is traditionally extracted from 2D drawings or layouts; the real state of the indoor space, including the position and geometry of openings (both windows and doors) and the presence of obstacles, is commonly ignored. In this work, a realistic indoor path-planning methodology based on 3D point clouds is developed. The value and originality of the approach lie in using point clouds not only to reconstruct semantically rich 3D indoor models, but also to detect potential obstacles along the route and re-adapt the planned routes according to the real state of the indoor space depicted by the laser scanner.
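The obstacle-detection idea in this abstract (laser returns inside the navigable space mark cells a route must avoid) can be sketched as a height-slab projection of the point cloud onto a 2D occupancy grid. This is a simplified illustration, not the authors' actual pipeline; all function names, thresholds, and sample points are assumed.

```python
def occupancy_grid_from_points(points, cell, z_min, z_max, extent):
    """Project indoor point-cloud returns within a height slab onto a 2D grid.

    Any return whose height falls inside the navigable slab (e.g. just above
    the floor up to head height) marks its horizontal cell as occupied, so a
    route planner can treat that cell as an obstacle.
    points: (x, y, z) tuples; extent: (cols, rows) of the grid in cells.
    """
    cols, rows = extent
    grid = [[0] * cols for _ in range(rows)]
    for x, y, z in points:
        if z_min <= z <= z_max:
            col, row = int(x // cell), int(y // cell)
            if 0 <= row < rows and 0 <= col < cols:
                grid[row][col] = 1
    return grid

cloud = [(0.4, 0.6, 1.0),   # chair at walking height -> obstacle
         (2.6, 2.2, 2.9),   # ceiling lamp above the slab -> ignored
         (1.5, 0.5, 0.02)]  # floor return below the slab -> ignored
grid = occupancy_grid_from_points(cloud, cell=1.0, z_min=0.1, z_max=2.0, extent=(3, 3))
```

The resulting grid can feed any standard grid planner for route re-adaptation.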
14

Díaz-Vilariño, L., P. Boguslawski, K. Khoshelham, H. Lorenzo, and L. Mahdjoubi. "INDOOR NAVIGATION FROM POINT CLOUDS: 3D MODELLING AND OBSTACLE DETECTION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B4 (June 13, 2016): 275–81. http://dx.doi.org/10.5194/isprsarchives-xli-b4-275-2016.

Abstract:
In recent years, indoor modelling and navigation have become topics of research interest because many stakeholders require navigation assistance in various application scenarios. Navigational assistance for blind or wheelchair users, building crisis management such as fire protection, augmented reality for gaming and tourism, and training of emergency assistance units are just some of the direct applications of indoor modelling and navigation. Navigational information is traditionally extracted from 2D drawings or layouts; the real state of the indoor space, including the position and geometry of openings (both windows and doors) and the presence of obstacles, is commonly ignored. In this work, a realistic indoor path-planning methodology based on 3D point clouds is developed. The value and originality of the approach lie in using point clouds not only to reconstruct semantically rich 3D indoor models, but also to detect potential obstacles along the route and re-adapt the planned routes according to the real state of the indoor space depicted by the laser scanner.
15

Guha, Daipayan, Raphael Jakubovic, Shaurya Gupta, Michael G. Fehlings, Todd G. Mainprize, Albert Yee, and Victor X. D. Yang. "Intraoperative Error Propagation in 3-Dimensional Spinal Navigation From Nonsegmental Registration: A Prospective Cadaveric and Clinical Study." Global Spine Journal 9, no. 5 (October 9, 2018): 512–20. http://dx.doi.org/10.1177/2192568218804556.

Abstract:
Study Design: Prospective pre-clinical and clinical cohort study. Objectives: Current spinal navigation systems rely on a dynamic reference frame (DRF) for image-to-patient registration and tool tracking. Working distant to a DRF may generate inaccuracy. Here we quantitate predictors of navigation error as a function of distance from the registered vertebral level, and from intersegmental mobility due to surgical manipulation and patient respiration. Methods: Navigation errors from working distant to the registered level, and from surgical manipulation, were quantified in 4 human cadavers. The 3-dimensional (3D) position of a tracked tool tip at 0 to 5 levels from the DRF, and during targeting of pedicle screw tracts, was captured in real-time by an optical navigation system. Respiration-induced vertebral motion was quantified from 10 clinical cases of open posterior instrumentation. The 3D position of a custom spinous-process clamp was tracked over 12 respiratory cycles. Results: An increase in mean 3D navigation error of ≥2 mm was observed at ≥2 levels from the DRF in the cervical and lumbar spine. Mean ± SD displacement due to surgical manipulation was 1.55 ± 1.13 mm in 3D across all levels, ≥2 mm in 17.4%, 19.2%, and 38.5% of levels in the cervical, thoracic, and lumbar spine, respectively. Mean ± SD respiration-induced 3D motion was 1.96 ± 1.32 mm, greatest in the lower thoracic spine (P < .001). Tidal volume and positive end-expiratory pressure correlated positively with increased vertebral displacement. Conclusions: Vertebral motion is unaccounted for during image-guided surgery when performed at levels distant from the DRF. Navigating instrumentation within 2 levels of the DRF likely minimizes the risk of navigation error.
16

Liu, Tao, Depeng Zhao, and Mingyang Pan. "Generating 3D Depiction for a Future ECDIS Based on Digital Earth." Journal of Navigation 67, no. 6 (June 17, 2014): 1049–68. http://dx.doi.org/10.1017/s0373463314000381.

Abstract:
An Electronic Navigational Chart (ENC) is a two-dimensional abstraction and generalisation of the real world and it limits users' ability to obtain more real and rich spatial information of the navigation environment. However, a three-dimensional (3D) chart could dramatically reduce the number of human errors and improve the accuracy and efficiency of manoeuvring. Thus it is important to be able to visualize charts in 3D. This article proposes a new model for future Electronic Chart Display and Information Systems (ECDIS) and describes our approach for the construction of web-based multi-resolution future ECDIS implemented in our system Automotive Intelligent Chart (AIC) 3D ECDIS, including multi-resolution riverbed construction technology, multi-layer technology for data fusion, Mercator transformation of the model, rendering and web publishing methods. AIC 3D ECDIS can support global spatial data and 3D visualization, which merges the 2D vector electronic navigational chart with the three-dimensional navigation environment in a unified framework and interface, and is also published on the web to provide application and data service through the network.
17

Stebnev, V. S., S. D. Stebnev, I. V. Malov, and N. I. Skladchikova. "3D-navigation surgery of the lens." Modern technologies in ophtalmology, no. 4 (December 7, 2020): 393–94. http://dx.doi.org/10.25276/2312-4911-2020-4-393-394.

18

Kirnaz, Sertac, Rodrigo Navarro-Ramirez, Christoph Wipplinger, Franziska Anna Schmidt, Ibrahim Hussain, Eliana Kim, and Roger Härtl. "Minimally Invasive Transforaminal Lumbar Interbody Fusion using 3-Dimensional Total Navigation: 2-Dimensional Operative Video." Operative Neurosurgery 18, no. 1 (March 19, 2019): E9—E10. http://dx.doi.org/10.1093/ons/opz042.

Abstract:
This video demonstrates the workflow of a minimally invasive transforaminal lumbar interbody fusion (MIS-TLIF) using a portable intraoperative CT (iCT) scanner (Airo®, Brainlab AG, Feldkirchen, Germany) combined with state-of-the-art total 3D computer navigation. The navigation is used not only for instrumentation but also for intraoperative planning throughout the procedure, including cage insertion, thereby completely eliminating the need for fluoroscopy. In this video, we present the case of a 72-yr-old female patient with a 2-yr history of lower back pain, right lower extremity radicular pain, and weakness due to L4-L5 spondylolisthesis with instability and severe lumbar spinal stenosis. The patient is treated by an L4-L5 unilateral laminotomy for bilateral decompression (ULBD) and MIS-TLIF. MIS-TLIF using total 3D navigation significantly improves the workflow of the conventional TLIF procedure. The tailored access to the spine translates into smaller but more efficient surgical corridors. This "total navigation" modality reduces staff radiation exposure to zero by navigating in real time over iCT images that can be acquired while the surgical staff is protected or outside the OR. Furthermore, this technique makes real-time, virtual intraoperative imaging of screws and their planned trajectories feasible. 3D navigation eliminates the need for K-wires, thus decreasing the risk of vascular penetration injury due to K-wire malpositioning. It can also predict the positioning of the interbody cage, thereby decreasing the risk of malpositioning or subsidence. Patient consent was obtained prior to performing the procedure.
19

Ghawana, T., M. Aleksandrov, and S. Zlatanova. "3D GEOSPATIAL INDOOR NAVIGATION FOR DISASTER RISK REDUCTION AND RESPONSE IN URBAN ENVIRONMENT." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-4 (September 19, 2018): 49–57. http://dx.doi.org/10.5194/isprs-annals-iv-4-49-2018.

Abstract:
Disaster management for urban environments with complex structures requires 3D extensions of indoor applications to support better risk reduction and response strategies. The paper highlights the need for assessment and explores the role of 3D geospatial information and modelling of indoor structures and navigational routes in disaster risk reduction and response. The reviewed models and methods are analysed against parameters relevant to indoor risk and disaster management: level of detail, connection to the outdoors, spatial model and network, and handling of constraints. 3D reconstruction of indoor spaces requires structural data to be collected feasibly and in sufficient detail, and defining the indoor space along with its obstacles is important for navigation. Readily available technologies embedded in smartphones allow the development of mobile applications for data collection, visualization, and navigation, enabling access by the masses at low cost. The paper concludes with recommendations for 3D modelling, navigation, and visualization of data using readily available smartphone technologies, drones, and advanced robotics for disaster management.
20

Liu, Cheng-Li, and Shiaw-Tsyr Uang. "The Cross-Zone Navigation and Signage Systems for Combatting Cybersickness and Disorientation in Middle-Aged and Older People within a 3D Virtual Store." Applied Sciences 12, no. 19 (September 29, 2022): 9821. http://dx.doi.org/10.3390/app12199821.

Abstract:
With the maturation and popularization of 3D virtual reality (3D VR) technology, various corporations have employed 3D VR animations to enrich the experience of visiting a store and change how products are presented. Because middle-aged and older adults have weaker mobility and perception abilities, their behaviors in 3D virtual stores may differ entirely from those of younger age groups. This study aimed to develop a cross-zone navigation system and a signage system for 3D virtual retail stores to provide middle-aged and older consumers with high-efficiency navigation for finding products quickly. Additionally, the effect of the systems on combating perceptual conflict was assessed to confirm the practicability of 3D virtual retail shopping. The results revealed that the cross-zone navigation system effectively assisted participants in searching for their desired products. Additionally, the cybersickness score (SSQ) of the cross-zone navigation system group was significantly lower than that of the map-based navigation system group. The participants who used both the cross-zone navigation system and the signage system exhibited the lowest perceptual conflict scores. Therefore, this study provides references for developing a novel navigation system for 3D virtual retail stores (i.e., a cross-zone navigation system with signage).
21

Li, Yiduo, Debao Wang, Qipeng Li, Guangtao Cheng, Zhuoran Li, and Peiqing Li. "Advanced 3D Navigation System for AGV in Complex Smart Factory Environments." Electronics 13, no. 1 (December 28, 2023): 130. http://dx.doi.org/10.3390/electronics13010130.

Abstract:
The advancement of Industry 4.0 has significantly propelled the widespread application of automated guided vehicle (AGV) systems within smart factories. As the structural diversity and complexity of smart factories escalate, conventional two-dimensional, plan-based navigation systems with fixed routes have become inadequate. Addressing this challenge, we devised a novel mobile robot navigation system encompassing foundational control, map construction and positioning, and autonomous navigation functionalities. Initially, point cloud matching algorithms were employed to construct a three-dimensional point cloud map of the indoor environment, which was subsequently converted into a two-dimensional grid map for navigation. Simultaneously, a multi-threaded normal distributions transform (NDT) algorithm enabled precise robot localization in three-dimensional settings. Leveraging the grid map and the robot's localization data, the A* algorithm was used for global path planning. Moreover, building upon the global path, the timed elastic band (TEB) algorithm was employed to establish a kinematic model for local obstacle avoidance planning. The findings were substantiated through simulated experiments and real vehicle deployments: mobile robots scanned the environment with laser radar, constructing point clouds and grid maps, which enabled centimeter-level localization and successful avoidance of static obstacles while charting optimal paths around dynamic ones. The devised navigation system demonstrated commendable autonomous navigation capabilities, with satisfactory accuracy in practical applications: positioning errors of 3.6 cm along the x-axis, 3.3 cm along the y-axis, and 4.3° in orientation. This system stands to substantially alleviate the low navigation precision and sluggishness encountered by AGVs within intricate smart factory environments, promising favorable prospects for practical applications.
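The global-planning step in this abstract pairs a 2D grid map with the A* algorithm. A minimal, self-contained sketch of A* on an occupancy grid follows; the grid, start, and goal below are invented for illustration, whereas the paper's real grid maps are derived from lidar point clouds.

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 2D occupancy grid (0 = free, 1 = obstacle).

    4-connected neighbours; Manhattan distance is the admissible heuristic.
    Returns the path as a list of (row, col) cells, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_set = [(h(start), 0, start)]          # entries are (f, g, cell)
    came_from, g_score = {}, {start: 0}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:                        # reconstruct path backwards
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < g_score.get(nxt, float("inf")):
                    g_score[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt))
    return None

grid = [[0, 0, 0, 0, 0],
        [1, 1, 1, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 1],
        [0, 0, 0, 0, 0]]
path = astar(grid, (0, 0), (4, 4))
```

In a full stack such as the one described, this global path would then be smoothed and tracked locally by a planner like TEB.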
22

Zhang, Weisong, Yukang Wang, and Xiaoping Zhou. "Automatic Generation of 3D Indoor Navigation Networks from Building Information Modeling Data Using Image Thinning." ISPRS International Journal of Geo-Information 12, no. 6 (June 5, 2023): 231. http://dx.doi.org/10.3390/ijgi12060231.

Abstract:
Navigation networks are a common form of indoor map that provide the basis for a wide range of indoor location-based services, intelligent tasks for indoor robots, and three-dimensional (3D) geographic information systems. The majority of current indoor navigation networks are manually modeled, resulting in a laborious and fallible process. Building Information Modeling (BIM) captures design information, allowing for the automated generation of indoor maps. Most existing BIM-based navigation systems for floor-level wayfinding rely on well-defined spatial semantics and do not adapt well to buildings with irregular 3D shapes, which can make cross-floor path generation difficult. This research introduces an innovative approach to generating 3D indoor navigation networks automatically from BIM data using image thinning, referred to as GINIT. Firstly, GINIT extracts grid-based maps for floors from BIM data using only two types of semantics, i.e., slabs and doors. Secondly, GINIT captures cross-floor paths from building components by projecting 3D forms onto a 2D image, thinning the 2D image to capture the 2D projection path, and crossing the 2D routes with 3D routes to restore the 3D path. Finally, to demonstrate the effectiveness of GINIT, experiments were conducted on three real-world multi-floor buildings, evaluating its performance across eight types of cross-layer architectural components. GINIT overcomes the dependency on space definitions in current BIM-based navigation network generation schemes by introducing image thinning. Due to the adaptability of image thinning to any binary image, GINIT is capable of generating navigation networks from building components with diverse 3D shapes. Moreover, current studies on indoor navigation network extraction mainly use geometry theory, while this study is the first to generate 3D indoor navigation networks automatically using image thinning theory. The results of this study will offer a unique perspective and foster the exploration of imaging theory applications of BIM.
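The image-thinning step at the heart of GINIT can be illustrated with the classic Zhang-Suen algorithm, which iteratively peels boundary pixels of a binary image until a one-pixel-wide skeleton remains; applied to a walkable-area mask, such a skeleton approximates a path centreline. A minimal sketch in Python (illustrative only; the function name and 0/1 grid encoding are assumptions, not the paper's implementation):

```python
def zhang_suen_thin(grid):
    """Zhang-Suen thinning: iteratively peel boundary pixels of a binary
    image (list of 0/1 rows) until a one-pixel-wide skeleton remains."""
    img = [row[:] for row in grid]
    h, w = len(img), len(img[0])

    def neighbours(y, x):
        # P2..P9, clockwise starting from the pixel directly above
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

    def transitions(n):
        # Number of 0 -> 1 transitions in the circular neighbour sequence
        return sum(1 for a, b in zip(n, n[1:] + n[:1]) if a == 0 and b == 1)

    changed = True
    while changed:
        changed = False
        for step in (0, 1):  # the two Zhang-Suen sub-iterations
            to_clear = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if img[y][x] != 1:
                        continue
                    n = neighbours(y, x)
                    p2, p3, p4, p5, p6, p7, p8, p9 = n
                    if not (2 <= sum(n) <= 6 and transitions(n) == 1):
                        continue
                    if step == 0:
                        cond = p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0
                    else:
                        cond = p2 * p4 * p8 == 0 and p2 * p6 * p8 == 0
                    if cond:
                        to_clear.append((y, x))
            for y, x in to_clear:  # delete only after the full pass
                img[y][x] = 0
                changed = True
    return img
```

Run on a small filled rectangle, the routine returns a strictly thinner subset of the original foreground, which is the property GINIT exploits to turn projected walkable regions into network edges.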
APA, Harvard, Vancouver, ISO, and other styles
23

Ming, Zhenxing, and Hailong Huang. "A 3D Vision Cone Based Method for Collision Free Navigation of a Quadcopter UAV among Moving Obstacles." Drones 5, no. 4 (November 12, 2021): 134. http://dx.doi.org/10.3390/drones5040134.

Full text
Abstract:
In the near future, it’s expected that unmanned aerial vehicles (UAVs) will become ubiquitous surrogates for human-crewed vehicles in the field of border patrol, package delivery, etc. Therefore, many three-dimensional (3D) navigation algorithms based on different techniques, e.g., model predictive control (MPC)-based, navigation potential field-based, sliding mode control-based, and reinforcement learning-based, have been extensively studied in recent years to help achieve collision-free navigation. The vast majority of the 3D navigation algorithms perform well when obstacles are sparsely spaced, but fail when facing crowd-spaced obstacles, which causes a potential threat to UAV operations. In this paper, a 3D vision cone-based reactive navigation algorithm is proposed to enable small quadcopter UAVs to seek a path through crowd-spaced 3D obstacles to the destination without collisions. The proposed algorithm is simulated in MATLAB with different 3D obstacles settings to demonstrate its feasibility and compared with the other two existing 3D navigation algorithms to exhibit its superiority. Furthermore, a modified version of the proposed algorithm is also introduced and compared with the initially proposed algorithm to lay the foundation for future work.
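The core geometric test behind a vision-cone method can be sketched in a few lines: an obstacle is flagged as threatening when it lies inside a cone opened around the UAV's current velocity direction. A minimal sketch (the 30° half-angle and all names are illustrative assumptions, not the paper's parameters):

```python
import math

def in_vision_cone(uav_pos, velocity, obstacle_pos, half_angle_deg=30.0):
    """Return True when the obstacle lies inside the 3D cone of the given
    half-angle opened around the UAV's velocity direction."""
    d = [o - p for o, p in zip(obstacle_pos, uav_pos)]
    dist = math.sqrt(sum(c * c for c in d))
    speed = math.sqrt(sum(c * c for c in velocity))
    if dist == 0.0 or speed == 0.0:
        return True  # degenerate case: treat as a potential collision
    # Cosine of the angle between the velocity and the obstacle bearing
    cos_angle = sum(a * b for a, b in zip(d, velocity)) / (dist * speed)
    return cos_angle >= math.cos(math.radians(half_angle_deg))
```

An obstacle straight ahead tests positive, while one behind or abeam the flight direction is ignored, which is what lets such reactive planners focus only on obstacles that actually threaten the current heading.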
APA, Harvard, Vancouver, ISO, and other styles
24

Jamali, A., A. A. Rahman, P. Boguslawski, and C. M. Gold. "AN AUTOMATED 3D INDOOR TOPOLOGICAL NAVIGATION NETWORK MODELLING." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-2/W2 (October 19, 2015): 47–53. http://dx.doi.org/10.5194/isprsannals-ii-2-w2-47-2015.

Full text
Abstract:
Indoor navigation is important for various applications such as disaster management and safety analysis. In the last decade, the indoor environment has been a focus of wide research, including techniques for acquiring indoor data (e.g. terrestrial laser scanning), 3D indoor modelling, and 3D indoor navigation models. In this paper, an automated 3D topological indoor network generated from inaccurate 3D building models is proposed. In a normal scenario, 3D indoor navigation network derivation needs accurate 3D models free of errors (e.g. gaps, intersections), and two cells (e.g. rooms, corridors) should touch each other for their connections to be built. The presented 3D modelling of the indoor navigation network is based on surveying control points and is less dependent on the 3D geometrical building model. To reduce the time and cost of the indoor building data acquisition process, a Trimble LaserAce 1000 was used as the surveying instrument. The modelling results were validated against an accurate geometry of the indoor building environment acquired using a Trimble M3 total station.
APA, Harvard, Vancouver, ISO, and other styles
25

Wang, Chaoqun, Jiankun Wang, Chenming Li, Danny Ho, Jiyu Cheng, Tingfang Yan, Lili Meng, and Max Q. H. Meng. "Safe and Robust Mobile Robot Navigation in Uneven Indoor Environments." Sensors 19, no. 13 (July 7, 2019): 2993. http://dx.doi.org/10.3390/s19132993.

Full text
Abstract:
Complex environments pose great challenges for autonomous mobile robot navigation. In this study, we address the problem of autonomous navigation in 3D environments with staircases and slopes. An integrated system for safe mobile robot navigation in 3D complex environments is presented and both the perception and navigation capabilities are incorporated into the modular and reusable framework. Firstly, to distinguish the slope from the staircase in the environment, the robot builds a 3D OctoMap of the environment with a novel Simultaneously Localization and Mapping (SLAM) framework using the information of wheel odometry, a 2D laser scanner, and an RGB-D camera. Then, we introduce the traversable map, which is generated by the multi-layer 2D maps extracted from the 3D OctoMap. This traversable map serves as the input for autonomous navigation when the robot faces slopes and staircases. Moreover, to enable robust robot navigation in 3D environments, a novel camera re-localization method based on regression forest towards stable 3D localization is incorporated into this framework. In addition, we utilize a variable step size Rapidly-exploring Random Tree (RRT) method which can adjust the exploring step size automatically without tuning this parameter manually according to the environment, so that the navigation efficiency is improved. The experiments are conducted in different kinds of environments and the output results demonstrate that the proposed system enables the robot to navigate efficiently and robustly in complex 3D environments.
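The variable-step-size idea in the RRT planner can be sketched as a steer function whose extension length shrinks with clearance to the nearest obstacle, so the tree takes long strides in open space and short, careful ones near obstacles. A 2D sketch under assumed disc obstacles (the clearance-based rule and all names are illustrative, not the paper's exact method):

```python
import math

def adaptive_steer(nearest, sample, obstacles, base_step=1.0, min_step=0.1):
    """Steer from the nearest tree node toward a sampled point, scaling the
    step length by the clearance to the closest disc obstacle (cx, cy, r)."""
    clearance = min((math.hypot(nearest[0] - cx, nearest[1] - cy) - r
                     for cx, cy, r in obstacles), default=float('inf'))
    # Long steps in free space, short steps near obstacles
    step = max(min_step, min(base_step, clearance))
    dx, dy = sample[0] - nearest[0], sample[1] - nearest[1]
    dist = math.hypot(dx, dy)
    if dist <= step:
        return sample
    return (nearest[0] + dx / dist * step, nearest[1] + dy / dist * step)
```

With no obstacles nearby the function advances a full base step; close to an obstacle the step collapses toward the clearance, removing the need to hand-tune a fixed step size per environment.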
APA, Harvard, Vancouver, ISO, and other styles
26

Oertel, Matthias F., Juliane Hobart, Marco Stein, Vanessa Schreiber, and Wolfram Scharbrodt. "Clinical and methodological precision of spinal navigation assisted by 3D intraoperative O-arm radiographic imaging." Journal of Neurosurgery: Spine 14, no. 4 (April 2011): 532–36. http://dx.doi.org/10.3171/2010.10.spine091032.

Full text
Abstract:
Object In recent years, the importance of intraoperative navigation in neurosurgery has been increasing. Multiple studies have proven the advantages and safety of computer-assisted spinal neurosurgery. The use of intraoperative 3D radiographic imaging to acquire image information for navigational purposes has several advantages and should increase the accuracy and safety of screw guidance with navigation. The aim of this study was to evaluate the clinical and methodological precision of navigated spine surgery in combination with the O-arm multidimensional imaging system. Methods Thoracic, lumbar, and sacral pedicle screws that were placed with the help of the combination of the O-arm and StealthStation TREON plus navigation systems were analyzed. To evaluate clinical precision, 278 polyaxial pedicle screws in 139 vertebrae were reviewed for medial or caudal perforations on coronal projection. For the evaluation of the methodological accuracy, virtual and intraoperative images were compared, and the angulation of the pedicle screw to the midsagittal line was measured. Results Pedicle perforations were recorded in 3.2% of pedicle screws. None of the perforated pedicle screws damaged a nerve root. The difference in angulation between the actual and virtual pedicle screws was 2.8° ± 1.9°. Conclusions The use of the StealthStation TREON plus navigation system in combination with the O-arm system showed the highest accuracy for spinal navigation compared with other studies that used traditional image acquisition and registration for navigation.
APA, Harvard, Vancouver, ISO, and other styles
27

Dong, Weihua, and Hua Liao. "EYE TRACKING TO EXPLORE THE IMPACTS OF PHOTOREALISTIC 3D REPRESENTATIONS IN PEDSTRIAN NAVIGATION PERFORMANCE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B2 (June 8, 2016): 641–45. http://dx.doi.org/10.5194/isprs-archives-xli-b2-641-2016.

Full text
Abstract:
Despite the now-ubiquitous two-dimensional (2D) maps, photorealistic three-dimensional (3D) representations of cities (e.g., Google Earth) have gained much attention from scientists and public users as another option. However, there is no consistent evidence on the influence of 3D photorealism on pedestrian navigation. Whether 3D photorealism can communicate cartographic information for navigation with higher effectiveness and efficiency and lower cognitive workload than traditional symbolic 2D maps remains unknown. This study aims to explore whether photorealistic 3D representations can facilitate the processes of map reading and navigation in digital environments, using a lab-based eye tracking approach. Here we show the differences between symbolic 2D maps and photorealistic 3D representations based on users’ eye-movement and navigation behaviour data. We found that participants using the 3D representation were less effective and less efficient, and required a higher cognitive workload, than those using the 2D map for map reading. However, participants using the 3D representation performed more efficiently in self-localization and orientation at complex decision points. The empirical results can be helpful for improving the usability of pedestrian navigation maps in future designs.
APA, Harvard, Vancouver, ISO, and other styles
28

Dong, Weihua, and Hua Liao. "EYE TRACKING TO EXPLORE THE IMPACTS OF PHOTOREALISTIC 3D REPRESENTATIONS IN PEDSTRIAN NAVIGATION PERFORMANCE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B2 (June 8, 2016): 641–45. http://dx.doi.org/10.5194/isprsarchives-xli-b2-641-2016.

Full text
Abstract:
Despite the now-ubiquitous two-dimensional (2D) maps, photorealistic three-dimensional (3D) representations of cities (e.g., Google Earth) have gained much attention from scientists and public users as another option. However, there is no consistent evidence on the influence of 3D photorealism on pedestrian navigation. Whether 3D photorealism can communicate cartographic information for navigation with higher effectiveness and efficiency and lower cognitive workload than traditional symbolic 2D maps remains unknown. This study aims to explore whether photorealistic 3D representations can facilitate the processes of map reading and navigation in digital environments, using a lab-based eye tracking approach. Here we show the differences between symbolic 2D maps and photorealistic 3D representations based on users’ eye-movement and navigation behaviour data. We found that participants using the 3D representation were less effective and less efficient, and required a higher cognitive workload, than those using the 2D map for map reading. However, participants using the 3D representation performed more efficiently in self-localization and orientation at complex decision points. The empirical results can be helpful for improving the usability of pedestrian navigation maps in future designs.
APA, Harvard, Vancouver, ISO, and other styles
29

Cong, Ting, Ahilan Sivaganesan, Christopher M. Mikhail, Avani S. Vaishnav, James Dowdell, Joseph Barbera, Hiroshi Kumagai, Jonathan Markowitz, Evan Sheha, and Sheeraz A. Qureshi. "Facet Violation With Percutaneous Pedicle Screw Placement: Impact of 3D Navigation and Facet Orientation." HSS Journal®: The Musculoskeletal Journal of Hospital for Special Surgery 17, no. 3 (July 3, 2021): 281–88. http://dx.doi.org/10.1177/15563316211026324.

Full text
Abstract:
Background: The gold standard for percutaneous pedicle screw placement is 2-dimensional (2D) fluoroscopy. Data are sparse on the accuracy of 3-dimensional (3D) navigation percutaneous screw placement in minimally invasive spine procedures. Objective: We sought to compare a single surgeon’s percutaneous pedicle screw placement accuracy using 2D fluoroscopy versus 3D navigation, as well as to investigate the effect of facet orientation on facet violation when using 2D fluoroscopy. Methods: We conducted a retrospective radiographic study of consecutive cohort of patients who underwent percutaneous lumbar instrumentation using either 2D fluoroscopy or 3D navigation. All procedures were performed by a single surgeon at 2 academic institutions between 2011 and 2018. Radiographic measurement of screw accuracy was assessed using a postoperative computed tomographic scan. The primary outcome was facet violation, and secondary outcomes were endplate/tip breaches, the Gertzbein-Robbins classification for cortical breaches, and the Simplified Screw Accuracy grade. Statistical comparisons were made between screws placed using 2D fluoroscopy versus 3D navigation. Axial facet angles were also measured to correlate with facet violation rates. Results: In the 138 patients included, 376 screws were placed with fluoroscopy and 193 with navigation. Superior (unfused) level facet violation was higher with 2D fluoroscopy than with 3D navigation (9% vs 0.5%), which comprises the main cause for poor screw placement. Axial facet angles exceeding 45° at L4 and 60° at L5 were correlated with facet violations. Conclusion: This retrospective study found that 3D navigation is associated with lower facet violation rates in percutaneous lumbar pedicle screw placement when compared with 2D fluoroscopy. These findings suggest that 3D navigation may be of particular value when facet joints are coronally oriented.
APA, Harvard, Vancouver, ISO, and other styles
30

Wei, Xiao Yang, Yang Wang, Xin Ping Yan, Yan Fei Tian, and Bing Wu. "Design and Realization of the 3D Electronic Chart Based on GIS and Virtual Reality Technology." Applied Mechanics and Materials 744-746 (March 2015): 1669–73. http://dx.doi.org/10.4028/www.scientific.net/amm.744-746.1669.

Full text
Abstract:
A two-dimensional electronic chart system cannot intuitively express the varied information of a 3D navigation environment, and existing 3D electronic chart systems lack comprehensive display and scientific quantitative analysis. In that context, a 3D electronic chart system framework based on GIS and virtual reality (VR) technology was designed, and key technologies were studied, such as visualization of the 3D navigation environment, numerical simulation of water currents, and dynamic display of ship information. Consequently, a 3D electronic chart system was developed for the Sutong Bridge section of the Yangtze River waterway and verified to be feasible and reliable for ensuring the safety of navigation.
APA, Harvard, Vancouver, ISO, and other styles
31

Bhattacharji, Priya, and William Moore. "Application of Real-Time 3D Navigation System in CT-Guided Percutaneous Interventional Procedures: A Feasibility Study." Radiology Research and Practice 2017 (2017): 1–7. http://dx.doi.org/10.1155/2017/3151694.

Full text
Abstract:
Introduction. To evaluate the accuracy of a quantitative 3D navigation system for CT-guided interventional procedures in a two-part study. Materials and Methods. Twenty-two procedures were performed in abdominal and thoracic phantoms. Accuracies of the 3D anatomy map registration and navigation were evaluated. Time used for the navigated procedures was recorded. In the IRB approved clinical evaluation, 21 patients scheduled for CT-guided thoracic and hepatic biopsy and ablations were recruited. CT-guided procedures were performed without following the 3D navigation display. Accuracy of navigation as well as workflow fitness of the system was evaluated. Results. In phantoms, the average 3D anatomy map registration error was 1.79 mm. The average navigated needle placement accuracy for one-pass and two-pass procedures, respectively, was 2.0±0.7 mm and 2.8±1.1 mm in the liver and 2.7±1.7 mm and 3.0±1.4 mm in the lung. The average accuracy of the 3D navigation system in human subjects was 4.6 mm ± 3.1 for all procedures. The system fits the existing workflow of CT-guided interventions with minimum impact. Conclusion. A 3D navigation system can be performed along the existing workflow and has the potential to navigate precision needle placement in CT-guided interventional procedures.
APA, Harvard, Vancouver, ISO, and other styles
32

Mohamed, H. A., J. M. Hansen, M. M. Elhabiby, N. El-Sheimy, and A. B. Sesay. "PERFORMANCE CHARACTERISTIC MEMS-BASED IMUs FOR UAVs NAVIGATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1/W4 (August 26, 2015): 337–43. http://dx.doi.org/10.5194/isprsarchives-xl-1-w4-337-2015.

Full text
Abstract:
Accurate 3D reconstruction has become essential for non-traditional mapping applications such as urban planning, mining industry, environmental monitoring, navigation, surveillance, pipeline inspection, infrastructure monitoring, landslide hazard analysis, indoor localization, and military simulation. The needs of these applications cannot be satisfied by traditional mapping, which is based on dedicated data acquisition systems designed for mapping purposes. Recent advances in hardware and software development have made it possible to conduct accurate 3D mapping without using costly and high-end data acquisition systems. Low-cost digital cameras, laser scanners, and navigation systems can provide accurate mapping if they are properly integrated at the hardware and software levels. Unmanned Aerial Vehicles (UAVs) are emerging as a mobile mapping platform that can provide additional economical and practical advantages. However, such economical and practical requirements need navigation systems that can provide uninterrupted navigation solution. Hence, testing the performance characteristics of Micro-Electro-Mechanical Systems (MEMS) or low cost navigation sensors for various UAV applications is important research. This work focuses on studying the performance characteristics under different manoeuvres using inertial measurements integrated with single point positioning, Real-Time-Kinematic (RTK), and additional navigational aiding sensors. Furthermore, the performance of the inertial sensors is tested during Global Positioning System (GPS) signal outage.
APA, Harvard, Vancouver, ISO, and other styles
33

Roncari, Andrea, Alberto Bianchi, Fulvia Taddei, Claudio Marchetti, Enrico Schileo, and Giovanni Badiali. "Navigation in Orthognathic Surgery: 3D Accuracy." Facial Plastic Surgery 31, no. 05 (November 18, 2015): 463–73. http://dx.doi.org/10.1055/s-0035-1564716.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Ahmad, Abdel-Mehsen, Zouhair Bazzal, Roba Al Majzoub, and Ola Charanek. "3D Sensor-based Library Navigation System." Advances in Science, Technology and Engineering Systems Journal 2, no. 3 (2017): 967–73. http://dx.doi.org/10.25046/aj0203122.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Bergman, Jared P., Matthew J. Bovyn, Florence F. Doval, Abhimanyu Sharma, Manasa V. Gudheti, Steven P. Gross, Jun F. Allard, and Michael D. Vershinin. "Cargo navigation across 3D microtubule intersections." Proceedings of the National Academy of Sciences 115, no. 3 (January 2, 2018): 537–42. http://dx.doi.org/10.1073/pnas.1707936115.

Full text
Abstract:
The eukaryotic cell’s microtubule cytoskeleton is a complex 3D filament network. Microtubules cross at a wide variety of separation distances and angles. Prior studies in vivo and in vitro suggest that cargo transport is affected by intersection geometry. However, geometric complexity is not yet widely appreciated as a regulatory factor in its own right, and mechanisms that underlie this mode of regulation are not well understood. We have used our recently reported 3D microtubule manipulation system to build filament crossings de novo in a purified in vitro environment and used them to assay kinesin-1–driven model cargo navigation. We found that 3D microtubule network geometry indeed significantly influences cargo routing, and in particular that it is possible to bias a cargo to pass or switch just by changing either filament spacing or angle. Furthermore, we captured our experimental results in a model which accounts for full 3D geometry, stochastic motion of the cargo and associated motors, as well as motor force production and force-dependent behavior. We used a combination of experimental and theoretical analysis to establish the detailed mechanisms underlying cargo navigation at microtubule crossings.
APA, Harvard, Vancouver, ISO, and other styles
36

Camiciottoli, R., J. M. Corridoni, A. Del Bimbo, E. Vicario, and D. Lucarella. "3D navigation of geographic data sets." IEEE Multimedia 5, no. 2 (April 1998): 29–41. http://dx.doi.org/10.1109/93.682523.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Mason, Alexander, Renee Paulsen, Jason M. Babuska, Sharad Rajpal, Sigita Burneikiene, E. Lee Nelson, and Alan T. Villavicencio. "The accuracy of pedicle screw placement using intraoperative image guidance systems." Journal of Neurosurgery: Spine 20, no. 2 (February 2014): 196–203. http://dx.doi.org/10.3171/2013.11.spine13413.

Full text
Abstract:
Object Several retrospective studies have demonstrated higher accuracy rates and increased safety for navigated pedicle screw placement than for free-hand techniques; however, the accuracy differences between navigation systems have not been extensively studied. In some instances, 3D fluoroscopic navigation methods have been reported to be no more accurate than 2D navigation methods for pedicle screw placement. The authors of this study endeavored to identify whether 3D fluoroscopic navigation methods resulted in higher placement accuracy of pedicle screws. Methods A systematic analysis was conducted to examine pedicle screw insertion accuracy based on the use of 2D, 3D, and conventional fluoroscopic image guidance systems. A PubMed and MEDLINE database search was conducted to review the published literature that focused on the accuracy of pedicle screw placement using intraoperative, real-time fluoroscopic image guidance in spine fusion surgeries. The pedicle screw accuracy rates were segregated according to spinal level because each spinal region has individual anatomical and morphological variations. Descriptive statistics were used to compare the pedicle screw insertion accuracy rate differences among the navigation methods. Results A total of 30 studies were included in the analysis. The data were abstracted and analyzed for the following groups: 12 data sets that used conventional fluoroscopy, 8 data sets that used 2D fluoroscopic navigation, and 20 data sets that used 3D fluoroscopic navigation. These studies included 1973 patients in whom 9310 pedicle screws were inserted. With conventional fluoroscopy, 2532 of 3719 screws were inserted accurately (68.1% accuracy); with 2D fluoroscopic navigation, 1031 of 1223 screws were inserted accurately (84.3% accuracy); and with 3D fluoroscopic navigation, 4170 of 4368 screws were inserted accurately (95.5% accuracy). The accuracy rates when 3D was compared with 2D fluoroscopic navigation were also consistently higher throughout all individual spinal levels. Conclusions Three-dimensional fluoroscopic image guidance systems demonstrated a significantly higher pedicle screw placement accuracy than conventional fluoroscopy or 2D fluoroscopic image guidance methods.
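The pooled accuracy rates quoted in the abstract follow directly from the screw counts; a quick arithmetic check:

```python
def accuracy_pct(correct, total):
    """Pooled screw-placement accuracy as a percentage, one decimal place."""
    return round(100 * correct / total, 1)

# Counts pooled in the review: conventional fluoroscopy, 2D and 3D navigation.
rates = {
    'conventional': accuracy_pct(2532, 3719),  # 68.1
    '2d_nav': accuracy_pct(1031, 1223),        # 84.3
    '3d_nav': accuracy_pct(4170, 4368),        # 95.5
}
```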
APA, Harvard, Vancouver, ISO, and other styles
38

Sun, Yunlong, Lianwu Guan, Menghao Wu, Yanbin Gao, and Zhanyuan Chang. "Vehicular Navigation Based on the Fusion of 3D-RISS and Machine Learning Enhanced Visual Data in Challenging Environments." Electronics 9, no. 1 (January 20, 2020): 193. http://dx.doi.org/10.3390/electronics9010193.

Full text
Abstract:
Based on the 3D Reduced Inertial Sensor System (3D-RISS) and Machine Learning Enhanced Visual Data (MLEVD), an integrated vehicle navigation system is proposed in this paper. This work supports smooth vehicle navigation in demanding conditions such as outdoor satellite signal interference and indoor navigation. Firstly, a landmark is set up, and both its size and position are accurately measured. Secondly, the image containing the landmark information is captured quickly using machine learning. Thirdly, the template matching method and the Extended Kalman Filter (EKF) are used to correct the errors of the Inertial Navigation System (INS), which employs the 3D-RISS to reduce the overall cost while ensuring vehicular positioning accuracy. Finally, both outdoor and indoor experiments are conducted to verify the performance of the 3D-RISS/MLEVD integrated navigation technology. Results reveal that the proposed method can effectively reduce the accumulated error of the INS over time while keeping the positioning error within a few meters.
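The EKF correction step in this kind of INS/vision fusion reduces, in its simplest scalar form, to blending the INS-predicted position with a landmark-derived measurement in proportion to their variances. A one-dimensional linear sketch (a stand-in for the paper's full EKF; names and numbers are illustrative):

```python
def kalman_correct(x_pred, p_pred, z, r):
    """Scalar Kalman measurement update: blend the INS-predicted position
    x_pred (variance p_pred) with a landmark-derived measurement z
    (variance r). Returns the corrected state and its reduced variance."""
    k = p_pred / (p_pred + r)       # Kalman gain: trust ratio of the two sources
    x = x_pred + k * (z - x_pred)   # corrected position
    p = (1 - k) * p_pred            # corrected (always smaller) variance
    return x, p
```

With equal variances the update lands halfway between prediction and measurement, and each landmark sighting shrinks the position uncertainty, which is why intermittent visual fixes are enough to arrest INS drift.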
APA, Harvard, Vancouver, ISO, and other styles
39

Liu, Jasmine. "Action-Aware Vision Language Navigation (AAVLN): AI vision system based on cross-modal transformer for understanding and navigating dynamic environments." Applied and Computational Engineering 57, no. 1 (April 30, 2024): 40–55. http://dx.doi.org/10.54254/2755-2721/57/20241309.

Full text
Abstract:
Visually impaired individuals face great challenges in independently navigating dynamic environments because of their inability to fully comprehend the environment and the actions of surrounding people. Conventional navigation approaches like Simultaneous Localization And Mapping (SLAM) rely on complete scanned maps to navigate static, fixed environments. With Vision Language Navigation (VLN), agents can understand semantic information to extend navigation to similar environments. However, neither can accurately navigate dynamic environments containing human actions. To address this challenge, we propose a novel cross-modal transformer-based Action-Aware VLN system (AAVLN). AAVLN's novel cross-modal transformer structure allows the Agent Algorithm to understand natural language instructions and semantic information for navigating dynamic environments and recognizing human actions. For training, we use Reinforcement Learning in our action-based environment simulator, created by combining an existing simulator with our novel 3D human action generator. Our experimental results demonstrate the effectiveness of our approach, outperforming current methods on various metrics across challenging benchmarks. Our ablation studies also highlight that our Vision Transformer-based human action recognition module and cross-modal encoding increase dynamic navigation accuracy. We are currently constructing 3D models of real-world environments, including hospitals and schools, for further training of AAVLN. Our project will be combined with ChatGPT to improve natural language interactions. AAVLN will have numerous applications in robotics, AR, and other computer vision fields.
APA, Harvard, Vancouver, ISO, and other styles
40

Wang, Tsung-Wun, Han-Pang Huang, and Yu-Lin Zhao. "Vision-Guided Autonomous Robot Navigation in Realistic 3D Dynamic Scenarios." Applied Sciences 15, no. 5 (February 21, 2025): 2323. https://doi.org/10.3390/app15052323.

Full text
Abstract:
This paper presents a 3D vision-based autonomous navigation system for wheeled mobile robots equipped with an RGB-D camera. The system integrates SLAM (simultaneous localization and mapping), motion planning, and obstacle avoidance to operate in both static and dynamic environments. A real-time pipeline is developed to construct sparse and dense maps for precise localization and path planning. Navigation meshes (NavMeshes) derived from 3D reconstructions facilitate efficient A* path planning. Additionally, a dynamic “U-map” generated from depth data identifies obstacles, enabling rapid NavMesh updates for obstacle avoidance. The proposed system achieves real-time performance and robust navigation across diverse terrains, including uneven surfaces and ramps, offering a comprehensive solution for 3D vision-guided robotic navigation.
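The A* search such a system runs over NavMeshes can be illustrated on a 4-connected occupancy grid, a common simplified stand-in for a navigation mesh (the grid encoding and function are illustrative, not the paper's implementation):

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid (0 = free, 1 = blocked) with a
    Manhattan-distance heuristic. Returns the path as a list of (row, col)
    cells, or None if the goal is unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if node == goal:
            path = []
            while node is not None:  # walk parents back to the start
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = node[0] + dy, node[1] + dx
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                ng = g_cost[node] + 1
                if ng < g_cost.get((ny, nx), float('inf')):
                    g_cost[(ny, nx)] = ng
                    came_from[(ny, nx)] = node
                    heapq.heappush(open_heap, (ng + h((ny, nx)), (ny, nx)))
    return None
```

On a mesh the same search runs over polygon adjacency instead of grid cells, with edge lengths replacing the unit step cost.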
APA, Harvard, Vancouver, ISO, and other styles
41

Suri, Ikaasa, Sayahi Suthakaran, Bahie Ezzat, Juan Sebastian Arroyave Villada, Daniel Kwon, Lily Martin, James Hu, Kurt Yaeger, and Matthew Thomas Carr. "472 Complications of 3D-Navigated Spinal Procedures: A Systematic Review." Neurosurgery 70, Supplement_1 (April 2024): 144. http://dx.doi.org/10.1227/neu.0000000000002809_472.

Full text
Abstract:
INTRODUCTION: 3D surgical navigation has revolutionized spinal surgery. However, information on the overall usability of these tools is limited and inconsistent, especially with regard to operative time and radiation exposure. Complication rates across institutions and platforms have not been systematically examined, leading to a lack of quantitative understanding on surgical success rates. METHODS: This study used PRISMA guidelines and a PROSPERO-registered protocol to identify studies on FDA-approved 3D navigation systems in spinal procedures from Ovid MEDLINE(R), Embase, and Cochrane CENTRAL. Results were assessed by two independent reviewers using inclusion/exclusion criteria and risk of bias tools. RESULTS: 70 studies, totaling 3500 patients, were included. Complication and surgical success rates have remained constant since 2004, with overall rates of 5.18% and 93.74%, respectively. When segmented by spinal region, complication rates may be moderately positively correlated with frequency of cervical and thoracic procedures (r = 0.49, p = 0.40). 23 studies, totaling 1723 patients, compared the performance of 3D navigation to 2D fluoroscopy or freehand navigation. There was a significant difference of 6.47% between surgical success rates of the interventional and control groups (p = 0.01). However, there was no significant difference in radiation exposure or operative time between 3D navigated procedures and their controls. CONCLUSIONS: This study concludes that 3D navigation may have higher surgical success rates than 2D fluoroscopy and freehand navigation; increased surgeon comfort with riskier procedures on more complex segments of the spine may explain the slight increase in complication rates observed since 2004; and there is a need for improvement in 3D navigation tools to decrease operative time and radiation exposure.
APA, Harvard, Vancouver, ISO, and other styles
42

Nikoohemat, S., A. Diakité, S. Zlatanova, and G. Vosselman. "INDOOR 3D MODELING AND FLEXIBLE SPACE SUBDIVISION FROM POINT CLOUDS." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W5 (May 29, 2019): 285–92. http://dx.doi.org/10.5194/isprs-annals-iv-2-w5-285-2019.

Full text
Abstract:
Indoor navigation can be a tedious process in a complex and unknown environment. It gets more critical when first responders try to intervene in a large building after a disaster has occurred. For such cases, an accurate map of the building is among the best supports possible. Unfortunately, such a map is not always available, or is generally outdated and imprecise, leading to error-prone decisions. Thanks to advances in laser scanning, accurate 3D maps can be built in a relatively small amount of time using all sorts of laser scanners (stationary, mobile, drone), although the information they provide is generally an unstructured point cloud. While most existing approaches try to extensively process the point cloud in order to produce an accurate architectural model of the scanned building, similar to a Building Information Model (BIM), we have adopted a space-focused approach. This paper presents our framework, which starts from point clouds of complex indoor environments, performs advanced processing to identify the 3D structures critical to navigation and path planning, and provides fine-grained navigation networks that account for obstacles and the spatial accessibility of the navigating agents. The method involves generating a volumetric-wall vector model from the point cloud, identifying the obstacles, and extracting the navigable 3D spaces. Our work contributes a new approach to space subdivision without the need for laser scanner positions or viewpoints. Unlike 2D cell decomposition or binary space partitioning, this work introduces a space enclosure method to deal with 3D space extraction and non-Manhattan-World architecture. The results show that more than 90% of spaces are correctly extracted. The approach is tested on several real buildings and relies on the latest advances in indoor navigation.
APA, Harvard, Vancouver, ISO, and other styles
43

Dho, Yun-Sik, Sang Joon Park, Haneul Choi, Youngdeok Kim, Hyeong Cheol Moon, Kyung Min Kim, Ho Kang, et al. "Development of an inside-out augmented reality technique for neurosurgical navigation." Neurosurgical Focus 51, no. 2 (August 2021): E21. http://dx.doi.org/10.3171/2021.5.focus21184.

Full text
Abstract:
OBJECTIVE With the advancement of 3D modeling techniques and visualization devices, augmented reality (AR)–based navigation (AR navigation) is being developed actively. The authors developed a pilot model of their newly developed inside-out tracking AR navigation system. METHODS The inside-out AR navigation technique was developed based on the visual inertial odometry (VIO) algorithm. The Quick Response (QR) marker was created and used for the image feature–detection algorithm. Inside-out AR navigation works through the steps of visualization device recognition, marker recognition, AR implementation, and registration within the running environment. A virtual 3D patient model for AR rendering and a 3D-printed patient model for validating registration accuracy were created. Inside-out tracking was used for the registration. The registration accuracy was validated by using intuitive, visualization, and quantitative methods for identifying coordinates by matching errors. Fine-tuning and opacity-adjustment functions were developed. RESULTS ARKit-based inside-out AR navigation was developed. The fiducial marker of the AR model and those of the 3D-printed patient model were correctly overlapped at all locations without errors. The tumor and anatomical structures of AR navigation and the tumors and structures placed in the intracranial space of the 3D-printed patient model precisely overlapped. The registration accuracy was quantified using coordinates, and the average moving errors of the x-axis and y-axis were 0.52 ± 0.35 and 0.05 ± 0.16 mm, respectively. The gradients from the x-axis and y-axis were 0.35° and 1.02°, respectively. Application of the fine-tuning and opacity-adjustment functions was proven by the videos. CONCLUSIONS The authors developed a novel inside-out tracking–based AR navigation system and validated its registration accuracy. This technical system could be applied in the novel navigation system for patient-specific neurosurgery.
APA, Harvard, Vancouver, ISO, and other styles
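The registration-accuracy figures reported above (mean axis-wise moving errors with standard deviations) can in principle be reproduced from paired planned/measured marker coordinates. The sketch below is illustrative only; the function names and toy coordinates are assumptions, not the authors' code.

```python
import statistics

def axis_errors(planned, measured):
    """Per-axis deviations (mm) between planned and measured marker coordinates."""
    dx = [m[0] - p[0] for p, m in zip(planned, measured)]
    dy = [m[1] - p[1] for p, m in zip(planned, measured)]
    return dx, dy

def summarize(errors):
    """Mean and population standard deviation of a list of deviations."""
    return statistics.fmean(errors), statistics.pstdev(errors)

# Toy example: two fiducial markers, planned vs. measured (x, y) in mm.
dx, dy = axis_errors([(0.0, 0.0), (10.0, 5.0)], [(0.5, 0.1), (10.6, 5.0)])
mean_dx, sd_dx = summarize(dx)
```

Reporting `mean ± sd` per axis, as in the abstract, then follows directly from `summarize`.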
44

Li, Yuan. "Building Information Model for 3D Indoor Navigation in Emergency Response." Advanced Materials Research 368-373 (October 2011): 3837–40. http://dx.doi.org/10.4028/www.scientific.net/amr.368-373.3837.

Full text
Abstract:
In this paper, we discuss the possibility of using BIM for 3D indoor navigation in emergency response. We first analyze the functional requirements of 3D indoor emergency response, then illustrate the workflow of transforming BIM into a 3D, ontology-based indoor navigation model (3DOntoINM for short). Finally, we explain the algorithm and implementation issues.
APA, Harvard, Vancouver, ISO, and other styles
45

Saeedi, Sajad, Carl Thibault, Michael Trentini, and Howard Li. "3D Mapping for Autonomous Quadrotor Aircraft." Unmanned Systems 05, no. 03 (July 2017): 181–96. http://dx.doi.org/10.1142/s2301385017400064.

Full text
Abstract:
Autonomous navigation in global positioning system (GPS)-denied environments is one of the challenging problems in robotics. For small flying robots, autonomous navigation is even more challenging. These robots have limitations such as fast dynamics and limited sensor payload. To develop an autonomous robot, many challenges including two-dimensional (2D) and three-dimensional (3D) perception, path planning, exploration, and obstacle avoidance should be addressed in real-time and with limited resources. In this paper, a complete solution for autonomous navigation of a quadrotor rotorcraft is presented. The proposed solution includes 2D and 3D mapping with several autonomous behaviors such as target localization and displaying maps on multiple remote tablets. Multiple tests were performed in simulated and indoor/outdoor environments to show the effectiveness of the proposed solution.
APA, Harvard, Vancouver, ISO, and other styles
46

Tekavec, Jernej, and Anka Lisec. "3D Geometry-Based Indoor Network Extraction for Navigation Applications Using SFCGAL." ISPRS International Journal of Geo-Information 9, no. 7 (June 29, 2020): 417. http://dx.doi.org/10.3390/ijgi9070417.

Full text
Abstract:
This study focuses on indoor navigation network extraction for navigation applications based on available 3D building data, using SFCGAL (i.e., the Simple Features Computational Geometry Algorithms Library). Special attention is given to 3D cadastre and BIM (building information modelling) datasets, which have been used as data sources for 3D geometric indoor modelling. SFCGAL 3D functions are used to extract an indoor network, modelled in the form of indoor connectivity graphs based on the 3D geometries of indoor features. The extraction is performed by integrating extract-transform-load (ETL) software with a spatial database to support multiple data sources and provide access to SFCGAL functions. This integrated approach addresses the current lack of straightforward software support for complex 3D spatial analyses. Based on the developed methodology, we perform and discuss the extraction of an indoor navigation network from 3D cadastral and BIM data. The efficiency and performance of the network analyses were evaluated using processing and query-execution times. The results show that the proposed geometry-based methodology for extracting building navigation networks is efficient and can be used with various types of 3D geometric indoor data.
APA, Harvard, Vancouver, ISO, and other styles
47

Li, B. S., S. K. Wu, X. Li, J. H. Guo, B. L. Zhu, and W. G. Yang. "THE METHODOLOGY AND APPLICATION OF NEW THREE-DIMENSIONAL GUIDE SYSTEM AND INTELLIGENT SERVICE IN INTELLIGENT SCENIC SPOT: A CASE STUDY OF QIXING SCENIC SPOT IN GUILIN CITY, GUANGXI, CHINA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W10 (February 8, 2020): 1139–46. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w10-1139-2020.

Full text
Abstract:
Nowadays, 3D navigation systems and intelligent services are costly to deploy, and the accuracy of intelligent navigation neither meets tourists' needs nor is very practical. Based on the current state of smart scenic spot navigation-system construction and the actual needs of tourists, this study determines the main principles and methods of the tour guide system and intelligent services. Taking a specific scenic spot as the research object, the A* algorithm is used to compute paths. By optimizing the online geoprocessing service of the ArcGIS Server network, an appropriate data-interpolation method is used to improve the accuracy of the scenic spot's elevation data; combined with the characteristics of the three-dimensional symbol system, the 3D symbol design is optimized and a high-precision 3D navigation map model is established. Finally, the corresponding 3D tour guide system and scenic-area intelligent service are designed on this basis, and the principles and application flow of constructing an intelligent scenic-area navigation system are summarized. The three-dimensional map model and navigation system designed in this study reduce the risk to tourists during the tour, improve the quality and efficiency of the tour, provide a practical service-optimization solution for smart scenic spot three-dimensional navigation systems, and lower the cost of constructing and using a high-precision three-dimensional navigation system in the scenic spot.
APA, Harvard, Vancouver, ISO, and other styles
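The abstract above names the A* algorithm as the path solver for the tour guide system. A minimal, self-contained sketch of A* on a 4-connected walkability grid is given below; the grid encoding and function name are illustrative assumptions, not taken from the paper.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid: 0 = walkable cell, 1 = obstacle.

    Uses a Manhattan-distance heuristic, which is admissible for
    unit-cost 4-connected movement. Returns the path as a list of
    (row, col) cells, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # heuristic
    open_heap = [(h(start), 0, start, [start])]  # (f, g, node, path)
    seen = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                heapq.heappush(open_heap,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None
```

In a scenic-spot setting, the per-cell cost could additionally weight terrain slope derived from the interpolated elevation data mentioned in the abstract.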
48

Niu, Lei, Zhiyong Wang, Zhaoyu Lin, Yueying Zhang, Yingwei Yan, and Ziqi He. "Voxel-Based Navigation: A Systematic Review of Techniques, Applications, and Challenges." ISPRS International Journal of Geo-Information 13, no. 12 (December 19, 2024): 461. https://doi.org/10.3390/ijgi13120461.

Full text
Abstract:
In recent years, navigation has attracted widespread attention across various fields, such as geomatics, robotics, photogrammetry, and transportation. Modeling the navigation environment is a key step in building successful navigation services. While traditional navigation systems have relied solely on 2D data, advancements in 3D sensing technology have made more 3D data available, enabling more realistic environmental modeling. This paper primarily focuses on voxel-based navigation and reviews the existing literature that covers various aspects of using voxel data or models to support navigation. The paper first discusses key technologies related to voxel-based navigation, including voxel-based modeling, voxel segmentation, voxel-based analysis, and voxel storage and management. It then distinguishes and discusses indoor and outdoor navigation based on the application scenarios. Additionally, various issues related to voxel-based navigation are addressed. Finally, the paper presents several potential research opportunities that may be useful for researchers or companies in developing more advanced navigation systems for pedestrians, robots, and vehicles.
APA, Harvard, Vancouver, ISO, and other styles
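Voxel-based analysis, as surveyed above, typically reduces path planning to search over a 3D occupancy grid. The following sketch shows breadth-first search over a 6-connected voxel grid; the data layout (a set of occupied voxel coordinates) is an illustrative assumption, not a scheme from the review.

```python
from collections import deque

def voxel_bfs(occupied, start, goal, dims):
    """Shortest path by BFS over a 6-connected voxel grid.

    occupied: set of (x, y, z) voxels blocked by geometry.
    dims: grid extents (nx, ny, nz). Returns a voxel path or None.
    """
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    queue = deque([(start, [start])])
    visited = {start}
    while queue:
        voxel, path = queue.popleft()
        if voxel == goal:
            return path
        for dx, dy, dz in steps:
            nxt = (voxel[0] + dx, voxel[1] + dy, voxel[2] + dz)
            if all(0 <= nxt[i] < dims[i] for i in range(3)) \
                    and nxt not in occupied and nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None
```

Because BFS expands voxels in order of distance, the first path reaching the goal is shortest in voxel steps; agent size and clearance constraints would be handled by dilating the occupied set.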
49

Si, Jiawen, Chenglong Zhang, Ming Tian, Tengfei Jiang, Lei Zhang, Hongbo Yu, Jun Shi, and Xudong Wang. "Intraoral Condylectomy with 3D-Printed Cutting Guide versus with Surgical Navigation: An Accuracy and Effectiveness Comparison." Journal of Clinical Medicine 12, no. 11 (June 2, 2023): 3816. http://dx.doi.org/10.3390/jcm12113816.

Full text
Abstract:
This study compares the accuracy and effectiveness of our novel 3D-printed titanium cutting guides with intraoperative surgical navigation for performing intraoral condylectomy in patients with mandibular condylar osteochondroma (OC). A total of 21 patients with mandibular condylar OC underwent intraoral condylectomy with either 3D-printed cutting guides (cutting guide group) or surgical navigation (navigation group). The condylectomy accuracy in the two groups was determined by analyzing the three-dimensional (3D) discrepancies between the postoperative computed tomography (CT) images and the preoperative virtual surgical plan (VSP). Moreover, the improvement of mandibular symmetry in both groups was determined by evaluating the chin deviation, chin rotation, and mandibular asymmetry index (AI). The superimposition of the condylar osteotomy area showed that the postoperative results were very close to the VSP in both groups. The mean and maximum 3D deviations between the planned condylectomy and the actual result were 1.20 ± 0.60 mm and 2.36 ± 0.51 mm in the cutting guide group, and 1.33 ± 0.76 mm and 4.27 ± 1.99 mm in the navigation group. Moreover, facial symmetry was greatly improved in both groups, as indicated by significantly decreased chin deviation, chin rotation, and AI. In conclusion, our results show that both 3D-printed cutting-guide-assisted and surgical-navigation-assisted intraoral condylectomy have high accuracy and efficiency, with the cutting guide yielding relatively higher surgical accuracy. Moreover, our cutting guides are user-friendly and simple to apply, which makes them a promising option for everyday clinical practice.
APA, Harvard, Vancouver, ISO, and other styles
50

Sekine, Rikuto, Tetsuo Tomizawa, and Susumu Tarao. "Trial of Utilization of an Environmental Map Generated by a High-Precision 3D Scanner for a Mobile Robot." Journal of Robotics and Mechatronics 35, no. 6 (December 20, 2023): 1469–79. http://dx.doi.org/10.20965/jrm.2023.p1469.

Full text
Abstract:
In recent years, high-precision 3D environmental maps have attracted the attention of researchers in various fields and have been put to practical use. For the autonomous movement of mobile robots, it is common to create an environmental map in advance and use it for localization. In this study, to investigate the usefulness of 3D environmental maps, we scanned physical environments using two different simultaneous localization and mapping (SLAM) approaches, specifically a wearable 3D scanner and a 3D LiDAR mounted on a robot. We used the scan data to create 3D environmental maps consisting of 3D point clouds. Wearable 3D scanners can be used to generate high-density and high-precision 3D point-cloud maps. The application of high-precision maps to the field of autonomous navigation is expected to improve the accuracy of self-localization. Navigation experiments were conducted using a robot, which was equipped with the maps obtained from the two approaches described. Autonomous navigation was achieved in this manner, and the performance of the robot using each type of map was assessed by requiring it to halt at specific landmarks set along the route. The high-density colored environmental map generated from the wearable 3D scanner’s data enabled the robot to perform autonomous navigation easily with a high degree of accuracy, showing potential for usage in digital twin applications.
APA, Harvard, Vancouver, ISO, and other styles