
Journal articles on the topic 'Detection of road lane lines'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Detection of road lane lines.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Jung, Jiyoung, and Sung-Ho Bae. "Real-Time Road Lane Detection in Urban Areas Using LiDAR Data." Electronics 7, no. 11 (October 26, 2018): 276. http://dx.doi.org/10.3390/electronics7110276.

Full text
Abstract:
The generation of digital maps with lane-level resolution is rapidly becoming a necessity, as semi- or fully-autonomous driving vehicles are now commercially available. In this paper, we present a practical real-time working prototype for road lane detection using LiDAR data, which can be further extended to automatic lane-level map generation. Conventional lane detection methods are limited to simple road conditions and are not suitable for complex urban roads with various road signs on the ground. Given a 3D point cloud scanned by a 3D LiDAR sensor, we categorized the points of the drivable region and distinguished the points of the road signs on the ground. Then, we developed an expectation-maximization method to detect parallel lines and update the 3D line parameters in real time, as the probe vehicle equipped with the LiDAR sensor moved forward. The detected and recorded line parameters were integrated to build a lane-level digital map with the help of a GPS/INS sensor. The proposed system was tested to generate accurate lane-level maps of two complex urban routes. The experimental results showed that the proposed system was fast and practical in terms of effectively detecting road lines and generating lane-level maps.
APA, Harvard, Vancouver, ISO, and other styles
2

Hermosillo-Reynoso, Fernando, Deni Torres-Roman, Jayro Santiago-Paz, and Julio Ramirez-Pacheco. "A Novel Algorithm Based on the Pixel-Entropy for Automatic Detection of Number of Lanes, Lane Centers, and Lane Division Lines Formation." Entropy 20, no. 10 (September 21, 2018): 725. http://dx.doi.org/10.3390/e20100725.

Full text
Abstract:
Lane detection for traffic surveillance in intelligent transportation systems is a challenge for vision-based systems. In this paper, a novel pixel-entropy based algorithm for the automatic detection of the number of lanes and their centers, as well as the formation of their division lines is proposed. Using as input a video from a static camera, each pixel behavior in the gray color space is modeled by a time series; then, for a time period τ , its histogram followed by its entropy are calculated. Three different types of theoretical pixel-entropy behaviors can be distinguished: (1) the pixel-entropy at the lane center shows a high value; (2) the pixel-entropy at the lane division line shows a low value; and (3) a pixel not belonging to the road has an entropy value close to zero. From the road video, several small rectangle areas are captured, each with only a few full rows of pixels. For each pixel of these areas, the entropy is calculated, then for each area or row an entropy curve is produced, which, when smoothed, has as many local maxima as lanes and one more local minima than lane division lines. For the purpose of testing, several real traffic scenarios under different weather conditions with other moving objects were used. However, these background objects, which are out of road, were filtered out. Our algorithm, compared to others based on trajectories of vehicles, shows the following advantages: (1) the lowest computational time for lane detection (only 32 s with a traffic flow of one vehicle/s per-lane); and (2) better results under high traffic flow with congestion and vehicle occlusion. Instead of detecting road markings, it forms lane-dividing lines. Here, the entropies of Shannon and Tsallis were used, but the entropy of Tsallis for a selected q of a finite set achieved the best results.
APA, Harvard, Vancouver, ISO, and other styles
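
As an illustration of the pixel-entropy idea described in the abstract above (not the authors' code), the following Python sketch computes, for each pixel of a small image strip, the Shannon entropy of its grayscale values over a window of frames, then smooths one row's entropy curve and looks for local maxima (lane centers) and minima (division lines). The frame list, row index, and bin count are placeholder assumptions.

import numpy as np
from scipy.signal import argrelextrema, savgol_filter

def pixel_entropy(stack, bins=16):
    """Shannon entropy of each pixel's gray-level time series.
    stack: array of shape (T, H, W) with values in [0, 255]."""
    T, H, W = stack.shape
    ent = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            hist, _ = np.histogram(stack[:, i, j], bins=bins, range=(0, 255))
            p = hist / hist.sum()
            p = p[p > 0]
            ent[i, j] = -np.sum(p * np.log2(p))
    return ent

# 'frames' is assumed to be a list of grayscale frames from a static traffic camera.
# stack = np.stack(frames)                # (T, H, W)
# ent = pixel_entropy(stack)
# row = savgol_filter(ent[100, :], 31, 3)  # smooth one row of the strip
# lane_centers   = argrelextrema(row, np.greater, order=20)[0]
# division_lines = argrelextrema(row, np.less,    order=20)[0]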
3

He, Peng, and Feng Gao. "Study on Lane Detection Based on Computer Vision." Advanced Materials Research 765-767 (September 2013): 2229–32. http://dx.doi.org/10.4028/www.scientific.net/amr.765-767.2229.

Full text
Abstract:
Lane detection is a crucial component of automotive driver assistance systems aiming to increase the safety, convenience and efficiency of driving. This paper develops a vision-based algorithm for detecting road lanes by extracting edges and finding straight lines with an improved Hough transform. The experimental results indicate that this algorithm is effective and precise. Furthermore, this algorithm paves the way for the implementation of automotive driver assistance systems.
APA, Harvard, Vancouver, ISO, and other styles
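
For readers unfamiliar with the edge-plus-Hough pipeline referred to in the abstract above, a minimal OpenCV sketch of the general idea (not the paper's implementation; the input file name is a placeholder) looks like this:

import cv2
import numpy as np

img = cv2.imread("road.jpg")                       # hypothetical input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)           # suppress noise before edge detection
edges = cv2.Canny(blur, 50, 150)                   # extract edges

# Probabilistic Hough transform: returns line segments as (x1, y1, x2, y2)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=20)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imwrite("lanes.jpg", img)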
4

Farag, Wael. "Real-Time Detection of Road Lane-Lines for Autonomous Driving." Recent Advances in Computer Science and Communications 13, no. 2 (June 3, 2020): 265–74. http://dx.doi.org/10.2174/2213275912666190126095547.

Full text
Abstract:
Background: Enabling fast and reliable detection and tracking of lane lines for advanced driving assistance systems and self-driving cars. Methods: The proposed technique is mainly a pipeline of computer vision algorithms that augment each other and take in raw RGB images to produce the lane-line segments that represent the boundary of the road for the car. The main emphasis of the proposed technique is on simplicity and fast computation so that it can be embedded in the affordable CPUs employed by ADAS. Results: Each algorithm used is described in detail, implemented, and its performance evaluated using actual road images and videos captured by the front-mounted camera of the car. The performance of the whole pipeline is also tested and evaluated on real videos. Conclusion: The evaluation of the proposed technique shows that it reliably detects and tracks road boundaries under various conditions.
APA, Harvard, Vancouver, ISO, and other styles
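
One common way to turn raw Hough segments into the two lane-boundary lines mentioned in the abstract above is to split segments by slope sign and average each group. The sketch below is a generic illustration of that step, not Farag's pipeline; the slope threshold is an assumption.

import numpy as np

def boundary_lines(segments, min_slope=0.3):
    """segments: iterable of (x1, y1, x2, y2). Returns (left, right) as (slope, intercept)."""
    left, right = [], []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:
            continue                          # skip vertical segments
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        if m < -min_slope:                    # image y grows downward: the left boundary has negative slope
            left.append((m, b))
        elif m > min_slope:
            right.append((m, b))
    avg = lambda group: tuple(np.mean(group, axis=0)) if group else None
    return avg(left), avg(right)

# left_line, right_line = boundary_lines(lines[:, 0]) for HoughLinesP output, for example.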
5

Huang, Qiao, and Jinlong Liu. "Practical limitations of lane detection algorithm based on Hough transform in challenging scenarios." International Journal of Advanced Robotic Systems 18, no. 2 (March 1, 2021): 172988142110087. http://dx.doi.org/10.1177/17298814211008752.

Full text
Abstract:
The vision-based road lane detection technique plays a key role in driver assistance system. While existing lane recognition algorithms demonstrated over 90% detection rate, the validation test was usually conducted on limited scenarios. Significant gaps still exist when applied in real-life autonomous driving. The goal of this article was to identify these gaps and to suggest research directions that can bridge them. The straight lane detection algorithm based on linear Hough transform (HT) was used in this study as an example to evaluate the possible perception issues under challenging scenarios, including various road types, different weather conditions and shades, changed lighting conditions, and so on. The study found that the HT-based algorithm presented an acceptable detection rate in simple backgrounds, such as driving on a highway or conditions showing distinguishable contrast between lane boundaries and their surroundings. However, it failed to recognize road dividing lines under varied lighting conditions. The failure was attributed to the binarization process failing to extract lane features before detections. In addition, the existing HT-based algorithm would be interfered by lane-like interferences, such as guardrails, railways, bikeways, utility poles, pedestrian sidewalks, buildings and so on. Overall, all these findings support the need for further improvements of current road lane detection algorithms to be robust against interference and illumination variations. Moreover, the widely used algorithm has the potential to raise the lane boundary detection rate if an appropriate search range restriction and illumination classification process is added.
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Qingquan, Jian Zhou, Bijun Li, Yuan Guo, and Jinsheng Xiao. "Robust Lane-Detection Method for Low-Speed Environments." Sensors 18, no. 12 (December 4, 2018): 4274. http://dx.doi.org/10.3390/s18124274.

Full text
Abstract:
Vision-based lane-detection methods provide low-cost density information about roads for autonomous vehicles. In this paper, we propose a robust and efficient method to expand the application of these methods to cover low-speed environments. First, the reliable region near the vehicle is initialized and a series of rectangular detection regions are dynamically constructed along the road. Then, an improved symmetrical local threshold edge extraction is introduced to extract the edge points of the lane markings based on accurate marking width limitations. In order to meet real-time requirements, a novel Bresenham line voting space is proposed to improve the process of line segment detection. Combined with straight lines, polylines, and curves, the proposed geometric fitting method has the ability to adapt to various road shapes. Finally, different status vectors and Kalman filter transfer matrices are used to track the key points of the linear and nonlinear parts of the lane. The proposed method was tested on a public database and our autonomous platform. The experimental results show that the method is robust and efficient and can meet the real-time requirements of autonomous vehicles.
APA, Harvard, Vancouver, ISO, and other styles
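
The final step described in the abstract above, Kalman tracking of lane key points, can be illustrated with a generic constant-velocity filter for a single point; the matrices below are textbook choices, not the authors' state vectors or transfer matrices.

import numpy as np

class PointKalman:
    """Constant-velocity Kalman filter for one lane key point (x, y)."""
    def __init__(self, dt=1.0, q=1e-2, r=1.0):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)   # state transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)    # only position is measured
        self.Q = q * np.eye(4)                            # process noise
        self.R = r * np.eye(2)                            # measurement noise
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]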
7

Kumar H D*, Arun, and Prabhakar C J. "Detection and Tracking of Lane Crossing Vehicles in Traffic Video for Abnormality Analysis." International Journal of Engineering and Advanced Technology 10, no. 4 (April 30, 2021): 1–9. http://dx.doi.org/10.35940/ijeat.c2141.0410421.

Full text
Abstract:
In this paper, we present a novel approach for detection and tracking of lane crossing/illegal lane crossing vehicles in traffic video of urban highways. For that intention, an initial pace is performed that estimates the road region of the geometrical structure. After finding the road region, every vehicle is tracked in order to detect lane crossing vehicles according to the distance between lane lines and vehicle centre, it is followed by tracking of lane crossing vehicles based on model-based strategy. The proposed system has been evaluated using recall and precision metric, which are received using experiments carried on selected video sequences of GRAM-RTM dataset and publically available video sequences. The experimental results present that our method reaches the highest accuracy for detection of vehicles and tracking of lane crossing vehicles.
APA, Harvard, Vancouver, ISO, and other styles
8

Cao, Song, Song, Xiao, and Peng. "Lane Detection Algorithm for Intelligent Vehicles in Complex Road Conditions and Dynamic Environments." Sensors 19, no. 14 (July 18, 2019): 3166. http://dx.doi.org/10.3390/s19143166.

Full text
Abstract:
Lane detection is an important foundation in the development of intelligent vehicles. To address problems such as low detection accuracy of traditional methods and poor real-time performance of deep learning-based methodologies, a lane detection algorithm for intelligent vehicles in complex road conditions and dynamic environments was proposed. Firstly, converting the distorted image and using the superposition threshold algorithm for edge detection, an aerial view of the lane was obtained via region of interest extraction and inverse perspective transformation. Secondly, the random sample consensus algorithm was adopted to fit the curves of lane lines based on the third-order B-spline curve model, and fitting evaluation and curvature radius calculation were then carried out on the curve. Lastly, by using the road driving video under complex road conditions and the Tusimple dataset, simulation test experiments for lane detection algorithm were performed. The experimental results show that the average detection accuracy based on road driving video reached 98.49%, and the average processing time reached 21.5 ms. The average detection accuracy based on the Tusimple dataset reached 98.42%, and the average processing time reached 22.2 ms. Compared with traditional methods and deep learning-based methodologies, this lane detection algorithm had excellent accuracy and real-time performance, a high detection efficiency and a strong anti-interference ability. The accurate recognition rate and average processing time were significantly improved. The proposed algorithm is crucial in promoting the technological level of intelligent vehicle driving assistance and conducive to the further improvement of the driving safety of intelligent vehicles.
APA, Harvard, Vancouver, ISO, and other styles
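
The bird's-eye-view step and the curve fit described in the abstract above can be sketched generically with OpenCV and NumPy. The source/destination corner points are placeholders from an assumed camera calibration, and a quadratic polynomial stands in for the paper's third-order B-spline model.

import cv2
import numpy as np

def birds_eye(binary_edges, src_pts, dst_pts):
    """Inverse perspective mapping of an edge image to a top-down view."""
    M = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    h, w = binary_edges.shape[:2]
    return cv2.warpPerspective(binary_edges, M, (w, h))

def fit_lane(warped):
    """Fit x = a*y^2 + b*y + c through the nonzero lane pixels of the warped image."""
    ys, xs = np.nonzero(warped)
    return np.polyfit(ys, xs, 2)          # coefficients (a, b, c)

# src_pts/dst_pts would come from calibrating the road trapezoid (illustrative values):
# warped = birds_eye(edges, src_pts=[(560, 470), (720, 470), (1100, 690), (200, 690)],
#                    dst_pts=[(300, 0), (980, 0), (980, 720), (300, 720)])
# coeffs = fit_lane(warped)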
9

Fan, Chao, Li Long Hou, Shuai Di, and Jing Bo Xu. "Research on the Lane Detection Algorithm Based on Zoning Hough Transformation." Advanced Materials Research 490-495 (March 2012): 1862–66. http://dx.doi.org/10.4028/www.scientific.net/amr.490-495.1862.

Full text
Abstract:
In order to improve the adaptability of lane detection under complex conditions such as damaged lane lines, shadows, insufficient light, and rain, a lane detection algorithm based on the Zoning Hough Transform is proposed in this paper. The road images are processed by improved ±45° Sobel operators and the two-dimensional Otsu algorithm. To eliminate the interference of ambient noise and highlight the dominant position of the lane, the Zoning Hough Transform is used, which obtains the parameters and identifies the lane accurately. The experimental results show that the lane detection method can extract the lane marking parameters accurately even for markings that are badly broken or completely covered by shadow or rainwater, and that the algorithm has good robustness.
APA, Harvard, Vancouver, ISO, and other styles
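
To make the preprocessing in the abstract above concrete, the sketch below applies diagonal (±45°) Sobel-style kernels followed by an Otsu threshold. The kernels are a common textbook form, not necessarily the exact operators improved in the paper, and the standard one-dimensional Otsu method stands in for the two-dimensional variant.

import cv2
import numpy as np

# Diagonal difference kernels emphasizing edges at +45 and -45 degrees (textbook form).
k_p45 = np.array([[ 0,  1,  2],
                  [-1,  0,  1],
                  [-2, -1,  0]], dtype=np.float32)
k_m45 = np.array([[-2, -1,  0],
                  [-1,  0,  1],
                  [ 0,  1,  2]], dtype=np.float32)

gray = cv2.imread("road.jpg", cv2.IMREAD_GRAYSCALE)        # hypothetical input
resp = np.maximum(np.abs(cv2.filter2D(gray, cv2.CV_32F, k_p45)),
                  np.abs(cv2.filter2D(gray, cv2.CV_32F, k_m45)))
resp = cv2.convertScaleAbs(resp)

# Otsu's method picks the binarization threshold automatically.
_, binary = cv2.threshold(resp, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)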
10

Liu, Dongfang, Yaqin Wang, Tian Chen, and Eric T. Matson. "Accurate Lane Detection for Self-Driving Cars: An Approach Based on Color Filter Adjustment and K-Means Clustering Filter." International Journal of Semantic Computing 14, no. 01 (March 2020): 153–68. http://dx.doi.org/10.1142/s1793351x20500038.

Full text
Abstract:
Lane detection is a crucial factor for self-driving cars to achieve a fully autonomous mode. Due to its importance, lane detection has drawn wide attention in recent years for autonomous driving. One challenge for accurate lane detection is dealing with noise appearing in the input image, such as object shadows, brake marks, and broken lane lines. To address this challenge, we propose an effective road detection algorithm. We leverage the strength of color filters to find a rough localization of the lane marks and employ a K-means clustering filter to screen out the embedded noise. We use extensive experiments to verify the effectiveness of our method. The results indicate that our approach is robust to noise appearing in the input image, which improves the accuracy of lane detection.
APA, Harvard, Vancouver, ISO, and other styles
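
A generic version of the color-filter-plus-clustering idea in the abstract above is sketched below: white and yellow masks in HSV locate candidate lane pixels, and k-means over pixel coordinates lets small clusters (likely noise such as shadows or brake marks) be discarded. The thresholds, cluster count, and keep ratio are illustrative assumptions, not the paper's values.

import cv2
import numpy as np
from sklearn.cluster import KMeans

img = cv2.imread("road.jpg")                                    # hypothetical input
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

white  = cv2.inRange(hsv, (0, 0, 200), (180, 40, 255))          # bright, low-saturation pixels
yellow = cv2.inRange(hsv, (15, 80, 120), (35, 255, 255))        # yellow lane paint
mask = cv2.bitwise_or(white, yellow)

ys, xs = np.nonzero(mask)
pts = np.column_stack([xs, ys]).astype(float)
if len(pts) > 0:
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pts)
    # keep only clusters large enough to plausibly be lane marks
    keep = [k for k in range(4) if np.sum(labels == k) > 0.1 * len(pts)]
    lane_pts = pts[np.isin(labels, keep)]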
11

Li, Wei, Yue Guan, Liguo Chen, and Lining Sun. "Millimeter-Wave Radar and Machine Vision-Based Lane Recognition." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 05 (January 3, 2018): 1850015. http://dx.doi.org/10.1142/s0218001418500155.

Full text
Abstract:
A camera can sense the lane environment by extracting the lane lines, but such detection is limited to a short distance and is affected by illumination and other factors; radar can detect objects at a long distance but cannot detect lane conditions. This paper combined machine vision with millimeter-wave radar: the nearby, distinct lane line was extracted from images, while the radar obtained the motion trajectory information of distant vehicles, and the least-squares method was then used to fit curves to those trajectories in order to reconstruct the lane line information. Finally, in the stage of fusing the two segments of lane lines, the goodness of fit was applied to match the corresponding lane lines. For the areas between the two segments that neither camera nor radar can detect, we established a lane model, used a probabilistic neural network to select the corresponding lane model for matching, and then applied an approximate mathematical expression according to the selected lane model, thus obtaining the final information about the road ahead of the current vehicle.
APA, Harvard, Vancouver, ISO, and other styles
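
The least-squares fitting of a distant vehicle's radar trajectory and a goodness-of-fit check, as described in the abstract above, can be illustrated generically as follows; a quadratic fit and R^2 are common choices, and the actual lane model in the paper may differ.

import numpy as np

def fit_trajectory(xs, ys, degree=2):
    """Least-squares polynomial fit of a vehicle trajectory; returns coefficients and R^2."""
    coeffs = np.polyfit(xs, ys, degree)
    pred = np.polyval(coeffs, xs)
    ss_res = np.sum((ys - pred) ** 2)
    ss_tot = np.sum((ys - np.mean(ys)) ** 2)
    r2 = 1.0 - ss_res / ss_tot if ss_tot > 0 else 0.0
    return coeffs, r2

# xs, ys: longitudinal/lateral positions of a tracked vehicle from radar (placeholder values).
# coeffs, r2 = fit_trajectory(np.array([5, 10, 20, 35, 50]), np.array([0.1, 0.3, 0.9, 2.0, 3.6]))
# A high R^2 suggests the reconstructed curve is a trustworthy proxy for the distant lane line.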
12

Ren, Jianqiang, Yangzhou Chen, Le Xin, and Jianjun Shi. "Lane Detection in Video-Based Intelligent Transportation Monitoring via Fast Extracting and Clustering of Vehicle Motion Trajectories." Mathematical Problems in Engineering 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/156296.

Full text
Abstract:
Lane detection is a crucial process in video-based transportation monitoring systems. This paper proposes a novel method to detect the lane center via rapid extraction and high-accuracy clustering of vehicle motion trajectories. First, we use the activity map to automatically extract the road region, calibrate the dynamic camera, and set three virtual detection lines. Secondly, the three virtual detection lines and a local background model with traffic flow feedback are used to extract and group vehicle feature points per vehicle. Then, the feature point groups are described accurately by an edge-weighted dynamic graph and modified by a motion-similarity Kalman filter during sparse feature point tracking. After obtaining the vehicle trajectories, a rough k-means incremental clustering with Hausdorff distance is designed to realize rapid online extraction of the lane center with high accuracy. The use of rough sets effectively reduces the accuracy loss that results from trajectories that run irregularly. Experimental results prove that the proposed method can detect the lane center position efficiently, that the time affecting subsequent tasks can be reduced markedly, and that the safety of traffic surveillance systems can be enhanced significantly.
APA, Harvard, Vancouver, ISO, and other styles
13

Wang, Qidong, Zhenya Wei, Jiaen Wang, Wuwei Chen, and Naihan Wang. "Curve recognition algorithm based on edge point curvature voting." Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 234, no. 4 (August 1, 2019): 1006–19. http://dx.doi.org/10.1177/0954407019866975.

Full text
Abstract:
In this paper, a new curve-lane recognition algorithm is proposed. The algorithm uses edge point curvature voting to determine the region of interest based on near-vision straight-lane information. First, information is detected in the near-vision area regarding the straight lines to the left and right of the current lane. Near-vision lane-line extraction includes lane image filtering, as well as edge detection of the region of interest below the vanishing line. The vanishing point is positioned by determining the position of the edge point and distribution of the direction angle. In addition, the straight line is extracted based on the position of the vanishing point. The straight lines that are constructed for the current lane in this way are selected and used as supplementation, in combination with the lane model. Next, the road curvature range isometry is divided into multiple subdivision regions. The near-vision lane straight-line curvature parameters extending from each edge point in the region of interest are computed by combining the straight-line near-vision lane information with the curve lane model in the pixel coordinate system. Subsequently, voting and counting are carried out for the curvature regions of each edge point to which the corresponding curvature computing values belong. Finally, the counting maximum from the corresponding curvature regions of the straight lines located to the left and right of the current lane are searched for, and the curvature region is converted, to obtain the lane line corresponding to the curvature parameter values. Experimental results indicate that the proposed curve-lane recognition algorithm can effectively detect the curve lanes of different curvatures. The results also indicate that the proposed curve detection method is highly accurate, and the algorithm is very robust in different environments.
APA, Harvard, Vancouver, ISO, and other styles
14

XU, YUANYUAN, BIN KONG, HU WEI, and QIANG TIAN. "LANE-BASED DIRECTION MARKING RECOGNITION USING HU MOMENTS." International Journal of Information Acquisition 09, no. 03n04 (September 2013): 1350016. http://dx.doi.org/10.1142/s0219878913500162.

Full text
Abstract:
In intelligent vehicle systems, it is important to detect and identify road markings so that vehicles can follow traffic regulations. This paper proposes a method to recognize direction markings on the road surface, which is based on detected lanes and uses Hu moments. First of all, lane detection is based on the horizontal luminance difference: the RGB color image is converted to a luminance image, the horizontal luminance difference is calculated, candidate points of the lanes' edges are obtained, and the least-squares method is used to fit the lanes. Secondly, with the detected lines as a guide for the search for candidate markings, the method extracts the Hu moments of each candidate marking, calculates its Mahalanobis distance to every marking type, and classifies it as the type with the minimal distance to the candidate marking. The simulation results show that the lane detection method is more effective and time-efficient than Canny or Sobel edge detection, and that the direction-marking recognition method is effective and highly accurate.
APA, Harvard, Vancouver, ISO, and other styles
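
The recognition step in the abstract above, Hu moments compared by Mahalanobis distance, can be sketched as follows; the per-class mean vectors and the inverse covariance are assumed to come from a training set of marking templates and are not part of the original paper's code.

import cv2
import numpy as np
from scipy.spatial.distance import mahalanobis

def hu_features(binary_patch):
    """Seven Hu moment invariants of a binary candidate-marking patch (log-scaled for stability)."""
    hu = cv2.HuMoments(cv2.moments(binary_patch)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def classify(patch, class_means, inv_cov):
    """Assign the candidate to the marking type with minimal Mahalanobis distance."""
    f = hu_features(patch)
    dists = {name: mahalanobis(f, mu, inv_cov) for name, mu in class_means.items()}
    return min(dists, key=dists.get)

# class_means: dict mapping marking type (e.g. "left_arrow") to its mean Hu vector;
# inv_cov: inverse covariance estimated from training patches (both placeholders here).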
15

Jiao, Xinyu, Diange Yang, Kun Jiang, Chunlei Yu, Tuopu Wen, and Ruidong Yan. "Real-time lane detection and tracking for autonomous vehicle applications." Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 233, no. 9 (August 2019): 2301–11. http://dx.doi.org/10.1177/0954407019866989.

Full text
Abstract:
This article proposes an improved lane detection and tracking method for autonomous vehicle applications. In real applications, when the pose and position of the camera are changed, parameters and thresholds in the algorithms need fine adjustment. In order to improve adaptability to different perspective conditions, a width-adaptive lane detection method is proposed. As a useful reference to reduce noises, vanishing point is widely applied in lane detection studies. However, vanishing point detection based on original image consumes many calculation resources. In order to improve the calculation efficiency for real-time applications, we proposed a simplified vanishing point detection method. In the feature extraction step, a scan-line method is applied to detect lane ridge features, the width threshold of which is set automatically based on lane tracking. With clustering, validating, and model fitting, lane candidates are obtained from the basic ridge features. A lane-voted vanishing point is obtained by the simplified grid-based method, then applied to filter out noises. Finally, a multi-lane tracking Kalman filter is applied, the confirmed lines of which also provide adaptive width threshold for ridge feature extraction. Real-road experimental results based on our intelligent vehicle testbed proved the validity and robustness of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
16

Su, Jingxin, Ryuji Miyazaki, Toru Tamaki, and Kazufumi Kaneda. "3D Modeling of Lane Marks Using a Combination of Images and Mobile Mapping Data." International Journal of Automation Technology 12, no. 3 (May 1, 2018): 386–94. http://dx.doi.org/10.20965/ijat.2018.p0386.

Full text
Abstract:
When we drive a car, the white lines on the road show us where the lanes are. The lane marks act as a reference for where to steer the vehicle. Naturally, in the field of advanced driver-assistance systems and autonomous driving, lane-line detection has become a critical issue. In this research, we propose a fast and precise method that can create a three-dimensional point cloud model of lane marks. Our datasets are obtained by a vehicle-mounted mobile mapping system (MMS). The input datasets include point cloud data and color images generated by laser scanner and CCD camera. A line-based point cloud region growing method and image-based scan-line method are used to extract lane marks from the input. Given a set of mobile mapping data outputs, our approach takes advantage of all important clues from both the color image and point cloud data. The line-based point cloud region growing is used to identify boundary points, which guarantees a precise road surface region segmentation and boundary points extraction. The boundary points are converted into 2D geometry. The image-based scan line algorithm is designed specifically for environments where it is difficult to clearly identify lane marks. Therefore, we use the boundary points acquired previously to find the road surface region from the color image. The experiments show that the proposed approach is capable of precisely modeling lane marks using information from both images and point cloud data.
APA, Harvard, Vancouver, ISO, and other styles
17

Sheu, C. Y., F. Kurz, and P. Angelo. "AUTOMATIC 3D LANE MARKING RECONSTRUCTION USING MULTI-VIEW AERIAL IMAGERY." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-1 (September 26, 2018): 147–54. http://dx.doi.org/10.5194/isprs-annals-iv-1-147-2018.

Full text
Abstract:
The 3D information of road infrastructure is gaining importance with the development of autonomous driving. The exact absolute position and height of lane markings, for example, support lane-accurate localization. Several approaches have been proposed for the 3D reconstruction of line features from multi-view airborne optical imagery. However, standard appearance-based matching approaches for 3D reconstruction are hardly applicable to lane markings due to the similar color profile of all lane markings and the lack of texture in their neighboring areas. We present a workflow for 3D lane marking reconstruction without an explicit feature matching process using multi-view aerial imagery. The aim is to optimize the best 3D line location by minimizing the distance from its back projection to the detected 2D line in all the covering images. Firstly, the lane markings are automatically extracted from aerial images using standard line detection algorithms. By projecting these extracted lines onto the Digital Surface Model (DSM) generated by Semi-Global Matching (SGM), the approximate 3D line segments are generated. Starting from these approximations, the 3D lines are iteratively refined based on the detected 2D lines in the original images and the viewing geometry. The proposed approach relies on precise detection of 2D lines in image space and on prior knowledge of the approximate 3D line segments, and it depends heavily on image orientations. Nevertheless, it avoids the problem of non-textured neighborhoods and is not limited to lines of finite length. The theoretical precision of 3D reconstruction with the proposed framework is evaluated.
APA, Harvard, Vancouver, ISO, and other styles
18

Kim, JongBae. "Efficient Vanishing Point Detection for Driving Assistance Based on Visual Saliency Map and Image Segmentation from a Vehicle Black-Box Camera." Symmetry 11, no. 12 (December 7, 2019): 1492. http://dx.doi.org/10.3390/sym11121492.

Full text
Abstract:
Techniques for detecting a vanishing point (VP) which estimates the direction of a vehicle by analyzing its relationship with surrounding objects have gained considerable attention recently. VPs can be used to support safe vehicle driving in areas such as for autonomous driving, lane-departure avoidance, distance estimation, and road-area detection, by detecting points in which parallel extension lines of objects are concentrated at a single point in a 3D space. In this paper, we proposed a method of detecting the VP in real time for applications to intelligent safe-driving support systems. In order to support safe driving of autonomous vehicles, it is necessary to drive the vehicle with the VP in center of the road image in order to prevent the vehicle from moving out of the road area while driving. Accordingly, in order to detect the VP in the road image, a method of detecting a point where straight lines intersect in an area where edge directional feature information is concentrated is required. The visual attention model and image segmentation process are applied to quickly identify candidate VPs in the area where the edge directional feature-information is concentrated and the intensity contrast difference is large. In the proposed method, VPs are detected by analyzing the edges, visual-attention regions, linear components using the Hough transform, and image segmentation results in an input image. Our experimental results have shown that the proposed method could be applied to safe-driving support systems.
APA, Harvard, Vancouver, ISO, and other styles
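
One simple way to realize the "point where straight lines intersect" idea mentioned in the abstract above is to intersect pairs of Hough lines and take the densest intersection. The sketch below assumes lines given in (rho, theta) form as returned by cv2.HoughLines, and uses the median of intersections as a crude estimate rather than the paper's saliency-based method.

import numpy as np

def intersection(l1, l2):
    """Intersection of two lines in Hough (rho, theta) form, or None if near-parallel."""
    (r1, t1), (r2, t2) = l1, l2
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(A)) < 1e-6:
        return None
    return np.linalg.solve(A, np.array([r1, r2]))      # (x, y)

def vanishing_point(lines):
    """Median of pairwise intersections as a crude vanishing-point estimate."""
    pts = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            p = intersection(lines[i], lines[j])
            if p is not None:
                pts.append(p)
    return np.median(np.array(pts), axis=0) if pts else None

# lines would be e.g. [tuple(l[0]) for l in cv2.HoughLines(edges, 1, np.pi / 180, 120)]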
19

Wang, Zengcai, Xiaojin Wang, Lei Zhao, and Guoxin Zhang. "Vision-Based Lane Departure Detection Using a Stacked Sparse Autoencoder." Mathematical Problems in Engineering 2018 (September 16, 2018): 1–15. http://dx.doi.org/10.1155/2018/9837359.

Full text
Abstract:
This paper presents a lane departure detection approach that utilizes a stacked sparse autoencoder (SSAE) for vehicles driving on motorways or similar roads. Image preprocessing techniques are successfully executed in the initialization procedure to obtain robust region-of-interest extraction parts. Lane detection operations based on Hough transform with a polar angle constraint and a matching algorithm are then implemented for two-lane boundary extraction. The slopes and intercepts of lines are obtained by converting the two lanes from polar to Cartesian space. Lateral offsets are also computed as an important step of feature extraction in the image pixel coordinate without any intrinsic or extrinsic camera parameter. Subsequently, a softmax classifier is designed with the proposed SSAE. The slopes and intercepts of lines and lateral offsets are the feature inputs. A greedy, layer-wise method is employed based on the inputs to pretrain the weights of the entire deep network. Fine-tuning is conducted to determine the global optimal parameters by simultaneously altering all layer parameters. The outputs are three detection labels. Experimental results indicate that the proposed approach can detect lane departure robustly with a high detection rate. The efficiency of the proposed method is demonstrated on several real images.
APA, Harvard, Vancouver, ISO, and other styles
20

Liu, Jian Guo, Jun Luo, and Xi Li. "A Research of an Improved Method for Lane Detection in a High Light Condition." Advanced Materials Research 694-697 (May 2013): 1914–18. http://dx.doi.org/10.4028/www.scientific.net/amr.694-697.1914.

Full text
Abstract:
Most methods designed for normal lighting conditions are prone to errors under high-light conditions. An improved method is proposed in this paper to solve these problems. It includes a series of preprocessing steps for the road image, such as white balance, contrast enhancement, and edge enhancement. The method then performs image segmentation to increase identification efficiency and uses an improved Hough transform to recognize the lane-line parameters. Finally, a trapezoidal region of interest is established to achieve real-time dynamic extraction of lane-line parameters from continuous images. The identification results show that the improved method performs better under high-light conditions and acquires the parameters more accurately and efficiently.
APA, Harvard, Vancouver, ISO, and other styles
21

Li, Yunwu, Xiaojuan Wang, and Dexiong Liu. "3D Autonomous Navigation Line Extraction for Field Roads Based on Binocular Vision." Journal of Sensors 2019 (March 3, 2019): 1–16. http://dx.doi.org/10.1155/2019/6832109.

Full text
Abstract:
This paper proposes a 3D autonomous navigation line extraction method for field roads in hilly regions based on a low-cost binocular vision system. Accurate guide path detection of field roads is a prerequisite for the automatic driving of agricultural machines. First, considering the lack of lane lines, blurred boundaries, and complex surroundings of field roads in hilly regions, a modified image processing method was established to strengthen shadow identification and information fusion to better distinguish the road area from its surroundings. Second, based on nonobvious shape characteristics and small differences in the gray values of the field roads inside the image, the centroid points of the road area as its statistical feature was extracted and smoothed and then used as the geometric primitives of stereo matching. Finally, an epipolar constraint and a homography matrix were applied for accurate matching and 3D reconstruction to obtain the autonomous navigation line of the field roads. Experiments on the automatic driving of a carrier on field roads showed that on straight roads, multicurvature complex roads and undulating roads, the mean deviations between the actual midline of the road and the automatically traveled trajectory were 0.031 m, 0.069 m, and 0.105 m, respectively, with maximum deviations of 0.133, 0.195 m, and 0.216 m, respectively. These test results demonstrate that the proposed method is feasible for road identification and 3D navigation line acquisition.
APA, Harvard, Vancouver, ISO, and other styles
22

Yu, Hou Yun, and Wei Gong Zhang. "Visual Detection of Moving Vehicles Ahead Based on the Characteristics." Applied Mechanics and Materials 103 (September 2011): 165–69. http://dx.doi.org/10.4028/www.scientific.net/amm.103.165.

Full text
Abstract:
Machine vision perception technology is widely used in vehicle active safety systems. It provides timely and accurate information about the road and surrounding vehicles, of which detecting the moving vehicle ahead is one of the important tasks. A detection method that fuses the shadow underneath the vehicle with the symmetry of the vehicle's tail is presented in this paper. First, a region of interest is selected according to the lane lines. Then, the shadow is detected using a grayscale histogram within the region of interest, and a suspected vehicle area is obtained by expanding the shadow with an empirical proportion. Finally, the vehicle ahead is confirmed by calculating the symmetry of tail characteristics such as grayscale values, taillights, and edges. Experimental results prove that this method can effectively solve the practical problems of vehicle detection.
APA, Harvard, Vancouver, ISO, and other styles
23

Singh, P. P., and R. D. Garg. "Road Detection from Remote Sensing Images using Impervious Surface Characteristics: Review and Implication." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-8 (November 28, 2014): 955–59. http://dx.doi.org/10.5194/isprsarchives-xl-8-955-2014.

Full text
Abstract:
The extraction of road networks is an emerging area in information extraction from high-resolution satellite images (HRSI). It is also an interesting field that incorporates various tactics to obtain the road network. The process of road detection from remote sensing images is quite complex due to the presence of various sources of noise, such as vehicles, crossing lines, and toll bridges. A few small and large false road segments interrupt the extraction of road segments; this happens because of the similar spectral behavior of heterogeneous objects. To achieve a better level of accuracy, numerous factors play an important role, such as the spectral data of the satellite sensor and information related to the land surface. The interpretation therefore varies when processing images with different heuristic parameters. These parameters are tuned according to the road characteristics of the terrain in the satellite images. Several approaches, comprising single or hybrid methods, have been proposed and implemented to extract roads from HRSI, and hybrid approaches have improved the accuracy of road extraction in comparison to single approaches. Some characteristics related to impervious and non-impervious surfaces are used as salient features that help to restrict the extraction to the road area in a correct manner. These characteristics are also used to exploit spatial, spectral, and texture features to increase the accuracy of the classified results. Accordingly, the aforesaid characteristics have been utilized in combination with road spectral properties to extract the road network with improved accuracy; the road network obtained with these methodologies is quite accurate.
APA, Harvard, Vancouver, ISO, and other styles
24

Peng, Bao, Zhi-Bin Chen, Erkang Fu, and Zi-Chuan Yi. "The algorithm of nighttime pedestrian detection in intelligent surveillance for renewable energy power stations." Energy Exploration & Exploitation 38, no. 5 (April 22, 2020): 2019–36. http://dx.doi.org/10.1177/0144598720913964.

Full text
Abstract:
Intelligent surveillance is an important management method for the construction and operation of power stations such as wind and solar power plants. The identification and detection of equipment, facilities, personnel, and the behavior of personnel are key technologies for the ubiquitous electric Internet of Things. This paper proposes a video solution based on support vector machine and histogram of oriented gradients (HOG) methods for pedestrian safety problems that are common in night driving. First, a series of image preprocessing methods are used to optimize night images and detect lane lines. Second, the image is divided into intelligent regions to adapt to different road environments. Finally, the HOG and support vector machine methods are used to optimize the pedestrian image on a Linux system, which reduces the number of false alarms in pedestrian detection and the workload of the pedestrian detection algorithm. The test results show that the system can successfully detect pedestrians at night. With image preprocessing optimization, the correct rate of nighttime pedestrian detection can be significantly improved, reaching 92.4%. After the division into regions is optimized, the number of false alarms decreases significantly, and the average frame rate of the optimized video reaches 28 frames per second.
APA, Harvard, Vancouver, ISO, and other styles
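
The HOG-plus-SVM detector referred to in the abstract above is available out of the box in OpenCV with a pre-trained pedestrian model; the sketch below shows that generic usage. It is not the authors' optimized night-time system, and the restriction of detection to regions around the detected lane lines is only indicated by a comment.

import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("night_frame.jpg")          # hypothetical preprocessed night image
# In the described system, detection would be restricted to regions around the detected lane lines.
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), padding=(8, 8), scale=1.05)
for (x, y, w, h) in rects:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("pedestrians.jpg", frame)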
25

Crommelinck, S., B. Höfle, M. N. Koeva, M. Y. Yang, and G. Vosselman. "INTERACTIVE CADASTRAL BOUNDARY DELINEATION FROM UAV DATA." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2 (May 28, 2018): 81–88. http://dx.doi.org/10.5194/isprs-annals-iv-2-81-2018.

Full text
Abstract:
Unmanned aerial vehicles (UAV) are evolving as an alternative tool to acquire land tenure data. UAVs can capture geospatial data at high quality and resolution in a cost-effective, transparent and flexible manner, from which visible land parcel boundaries, i.e., cadastral boundaries are delineable. This delineation is to no extent automated, even though physical objects automatically retrievable through image analysis methods mark a large portion of cadastral boundaries. This study proposes (i) a methodology that automatically extracts and processes candidate cadastral boundary features from UAV data, and (ii) a procedure for a subsequent interactive delineation. Part (i) consists of two state-of-the-art computer vision methods, namely gPb contour detection and SLIC superpixels, as well as a classification part assigning costs to each outline according to local boundary knowledge. Part (ii) allows a user-guided delineation by calculating least-cost paths along previously extracted and weighted lines. The approach is tested on visible road outlines in two UAV datasets from Germany. Results show that all roads can be delineated comprehensively. Compared to manual delineation, the number of clicks per 100&amp;thinsp;m is reduced by up to 86&amp;thinsp;%, while obtaining a similar localization quality. The approach shows promising results to reduce the effort of manual delineation that is currently employed for indirect (cadastral) surveying.
APA, Harvard, Vancouver, ISO, and other styles
26

Rosas-Arias, Leonel, Jose Portillo-Portillo, Aldo Hernandez-Suarez, Jesus Olivares-Mercado, Gabriel Sanchez-Perez, Karina Toscano-Medina, Hector Perez-Meana, Ana Lucila Sandoval Orozco, and Luis Javier García Villalba. "Vehicle Counting in Video Sequences: An Incremental Subspace Learning Approach." Sensors 19, no. 13 (June 27, 2019): 2848. http://dx.doi.org/10.3390/s19132848.

Full text
Abstract:
The counting of vehicles plays an important role in measuring the behavior patterns of traffic flow in cities, as streets and avenues can get crowded easily. To address this problem, some Intelligent Transport Systems (ITSs) have been implemented in order to count vehicles with already established video surveillance infrastructure. With this in mind, in this paper, we present an on-line learning methodology for counting vehicles in video sequences based on Incremental Principal Component Analysis (Incremental PCA). This incremental learning method allows us to identify the maximum variability (i.e., motion detection) between a previous block of frames and the actual one by using only the first projected eigenvector. Once the projected image is obtained, we apply dynamic thresholding to perform image binarization. Then, a series of post-processing steps are applied to enhance the binary image containing the objects in motion. Finally, we count the number of vehicles by implementing a virtual detection line in each of the road lanes. These lines determine the instants where the vehicles pass completely through them. Results show that our proposed methodology is able to count vehicles with 96.6% accuracy at 26 frames per second on average—dealing with both camera jitter and sudden illumination changes caused by the environment and the camera auto exposure.
APA, Harvard, Vancouver, ISO, and other styles
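
The counting step based on virtual detection lines, as described in the abstract above, can be illustrated independently of the Incremental PCA front end: a vehicle is counted when its tracked centroid crosses a horizontal line placed in its lane. The centroid source and line position below are placeholders.

class VirtualLineCounter:
    """Counts objects whose centroids cross a horizontal virtual line (one counter per lane)."""
    def __init__(self, line_y):
        self.line_y = line_y
        self.count = 0
        self.prev = {}            # track_id -> last centroid y

    def update(self, centroids):
        """centroids: dict mapping track_id -> (x, y) for the current frame."""
        for tid, (x, y) in centroids.items():
            last = self.prev.get(tid)
            if last is not None and last < self.line_y <= y:   # crossed downward this frame
                self.count += 1
            self.prev[tid] = y
        return self.count

# counter = VirtualLineCounter(line_y=400)   # hypothetical line position for one lane
# counter.update({1: (320, 395)}); counter.update({1: (322, 405)})  # -> count becomes 1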
27

S, Manu. "Road Lane Marking Detection with Deep Learning." International Journal for Research in Applied Science and Engineering Technology 9, no. VIII (August 15, 2021): 708–18. http://dx.doi.org/10.22214/ijraset.2021.37463.

Full text
Abstract:
Road lane detection is an important factor for Advanced Driver Assistance Systems (ADAS). In this paper, we propose a lane detection technology that uses a deep convolutional neural network to extract lane marking features. Many conventional approaches detect the lane using edge, color, intensity and shape information. In addition, lane detection can be viewed as an image segmentation problem. However, most methods are sensitive to weather conditions and noise, and thus many traditional lane detection systems fail when the external environment varies significantly.
APA, Harvard, Vancouver, ISO, and other styles
28

Tan, Hui, Jian Feng Wang, Kun Zhang, and Sheng Min Cui. "Research on Lane Marking Lines Detection." Applied Mechanics and Materials 274 (January 2013): 634–37. http://dx.doi.org/10.4028/www.scientific.net/amm.274.634.

Full text
Abstract:
Traffic accidents now occur more and more frequently, and as a result intelligent vehicles are developing rapidly. Among the many research directions for intelligent vehicles, vision navigation has become a hot spot. An algorithm for detecting the left and right marking lines of the current lane is proposed in this paper. The algorithm combines edge detection and the Hough transform: it first detects the initial lane marking lines and then tracks the final target lines. Simulation results indicate that the algorithm can recognize the current lane's marking lines, making vehicle navigation precise and fast.
APA, Harvard, Vancouver, ISO, and other styles
29

Deng, Xi Xi, Xiao Nian Wang, and Jin Zhu. "Available Lane Detection Based on Radon Transform." Advanced Materials Research 1046 (October 2014): 415–24. http://dx.doi.org/10.4028/www.scientific.net/amr.1046.415.

Full text
Abstract:
To solve the lane detection problem in autonomous vehicle systems, this paper proposes a texture segmentation method based on perspective transformation. Road images are first captured by cameras installed on the vehicle, and a perspective transform is then applied to the road plane so that road and non-road texture information stands out effectively. After calculating the texture trend in the transformed image, the Radon transform can effectively distinguish the road from the non-road area and achieve texture region segmentation. Experiments prove that this method can be used for lane detection and effectively eliminates barriers and road borders.
APA, Harvard, Vancouver, ISO, and other styles
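
For orientation, the Radon transform step the abstract above relies on can be demonstrated with scikit-image: the transform's peak gives the dominant line orientation in the perspective-transformed (top-down) image. The thresholding and input are illustrative assumptions, not the paper's pipeline.

import numpy as np
from skimage.transform import radon

def dominant_orientation(binary_topdown):
    """Angle (degrees) of the strongest straight structure in a binary top-down road image."""
    angles = np.arange(0.0, 180.0, 1.0)
    sinogram = radon(binary_topdown.astype(float), theta=angles, circle=False)
    _, angle_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
    return angles[angle_idx]

# binary_topdown would be the perspective-transformed, thresholded road texture image.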
30

Shein, Vyacheslav Alexandrovich, and Alexander Olegovich Pak. "ROAD LANE LINE DETECTION WITH HOUGH TRANSFORM." Theoretical & Applied Science 92, no. 12 (December 30, 2020): 401–8. http://dx.doi.org/10.15863/tas.2020.12.92.77.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

NIE, YIMING, BIN DAI, XIANGJING AN, ZHENPING SUN, TAO WU, and HANGEN HE. "FAST LANE DETECTION USING DIRECTION KERNEL FUNCTION." International Journal of Wavelets, Multiresolution and Information Processing 10, no. 02 (March 2012): 1250017. http://dx.doi.org/10.1142/s0219691312500178.

Full text
Abstract:
Lane information is essential to highway intelligent vehicle applications, and lane markings are the most direct description of the lanes. Many vision methods have been proposed for lane marking detection, but in practice previous lane tracking systems still face problems such as shadows on the road, lighting changes, characters painted on the road, and discontinuous changes in road type. A direction kernel function is proposed for robust detection of the lanes. This method focuses on selecting points on the marking edges by classification; during classification, the vanishing point is selected and the parts of the lane markings form the lanes. The algorithm presented in this paper is shown to be both robust and fast by a large number of experiments in varied settings; moreover, the algorithm can extract the lanes even when parts of the lane markings are missing.
APA, Harvard, Vancouver, ISO, and other styles
32

Bai, Meng, Min Hua Li, and Ying Jun Lv. "A Lane Detection Method Based on Feature Point Segmented Fitting." Applied Mechanics and Materials 644-650 (September 2014): 990–93. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.990.

Full text
Abstract:
To detect lanes on structured roads, this paper presents a novel lane detection method based on a segmented feature-point fitting approach. Firstly, feature points are extracted from the edge image; a segmented fitting approach based on least squares is then used to obtain the line equations of the feature points, and a lane-recognition criterion for the fitted line equations is further proposed to detect the lane. Finally, a lane tracking method is proposed to detect the lane in video frames. Experimental results demonstrate that the proposed method can effectively detect lanes on structured roads.
APA, Harvard, Vancouver, ISO, and other styles
33

Jeong, Jinhan, Yook Hyun Yoon, and Jahng Hyon Park. "Reliable Road Scene Interpretation Based on ITOM with the Integrated Fusion of Vehicle and Lane Tracker in Dense Traffic Situation." Sensors 20, no. 9 (April 26, 2020): 2457. http://dx.doi.org/10.3390/s20092457.

Full text
Abstract:
Lane detection and tracking in a complex road environment is one of the most important research areas in highly automated driving systems. Studies on lane detection cover a variety of difficulties, such as shadowy situations, dimmed lane painting, and obstacles that prohibit lane feature detection. There are several hard cases in which lane candidate features are not easily extracted from image frames captured by a driving vehicle. We have carefully selected typical scenarios in which the extraction of lane candidate features can be easily corrupted by road vehicles and road markers that lead to degradations in the understanding of road scenes, resulting in difficult decision making. We introduce two main contributions to the interpretation of road scenes in dense traffic environments. First, to obtain robust road scene understanding, we designed a novel framework combining a lane tracker method integrated with a camera and a radar forward vehicle tracker system, which is especially useful in dense traffic situations. We introduce an image template occupancy matching method with the integrated vehicle tracker that makes it possible to avoid extracting irrelevant lane features caused by forward target vehicles and road markers. Second, we present robust multi-lane detection by a tracking algorithm that includes adjacent lanes as well as ego lanes. We perform a comprehensive experimental evaluation with a real dataset comprised of problematic road scenarios. Experimental results show that the proposed method is very reliable for multi-lane detection in the presented difficult situations.
APA, Harvard, Vancouver, ISO, and other styles
34

Nasution, Surya Michrandi, Emir Husni, Kuspriyanto Kuspriyanto, Rahadian Yusuf, and Rahmat Mulyawan. "Road Information Collector Using Smartphone for Measuring Road Width Based on Object and Lane Detection." International Journal of Interactive Mobile Technologies (iJIM) 14, no. 02 (February 10, 2020): 42. http://dx.doi.org/10.3991/ijim.v14i02.11530.

Full text
Abstract:
This paper proposes an Android smartphone application to collect road information. That information is extracted to obtain the road width, so that drivers can choose suitable alternate routes for their vehicle types. Object and lane detection are both used to obtain the road width when lane boundaries are detected in the road image; otherwise, the road width is obtained using a vanishing-point method. The average error rate for road width measurement using object and lane detection is 19.71%. Meanwhile, the average error rate when there is no lane boundary is in the range of 10-15%, 8-18%, and 10-19% for the various capturing sides. Reclassification of the road is performed once the road-width error rate is set. The accuracy of road category reclassification is in the range of 70-75% for the various sides.
APA, Harvard, Vancouver, ISO, and other styles
35

Lim, King Hann, Kah Phooi Seng, and Li-Minn Ang. "River Flow Lane Detection and Kalman Filtering-Based B-Spline Lane Tracking." International Journal of Vehicular Technology 2012 (November 5, 2012): 1–10. http://dx.doi.org/10.1155/2012/465819.

Full text
Abstract:
A novel lane detection technique using adaptive line segment and river flow method is proposed in this paper to estimate driving lane edges. A Kalman filtering-based B-spline tracking model is also presented to quickly predict lane boundaries in consecutive frames. Firstly, sky region and road shadows are removed by applying a regional dividing method and road region analysis, respectively. Next, the change of lane orientation is monitored in order to define an adaptive line segment separating the region into near and far fields. In the near field, a 1D Hough transform is used to approximate a pair of lane boundaries. Subsequently, river flow method is applied to obtain lane curvature in the far field. Once the lane boundaries are detected, a B-spline mathematical model is updated using a Kalman filter to continuously track the road edges. Simulation results show that the proposed lane detection and tracking method has good performance with low complexity.
APA, Harvard, Vancouver, ISO, and other styles
36

Gong, Xianyong, Fang Wu, Ruixing Xing, Jiawei Du, and Chengyi Liu. "LCBRG: A lane-level road cluster mining algorithm with bidirectional region growing." Open Geosciences 13, no. 1 (January 1, 2021): 835–50. http://dx.doi.org/10.1515/geo-2020-0271.

Full text
Abstract:
The lane-level road cluster is a highly representative phenomenon in road networks and is vital to spatial data mining, cartographic generalization, and data integration. In this article, a lane-level road cluster recognition method is proposed. First, the concept of the lane-level road cluster and our motivation are addressed and its spatial characteristics are given. Second, a region-growing cluster algorithm is defined to recognize lane-level road clusters, using constraints including distance and orientation. A novel moving distance (MD) metric is proposed to measure the distance between two lines, which can effectively handle non-uniformly distributed vertices, heterogeneous lengths, inharmonious spatial alignment, and complex shapes. Experiments demonstrate that the proposed method can effectively recognize lane-level road clusters in agreement with human spatial cognition.
APA, Harvard, Vancouver, ISO, and other styles
37

R. Palafox, Pablo, Johannes Betz, Felix Nobis, Konstantin Riedl, and Markus Lienkamp. "SemanticDepth: Fusing Semantic Segmentation and Monocular Depth Estimation for Enabling Autonomous Driving in Roads without Lane Lines." Sensors 19, no. 14 (July 22, 2019): 3224. http://dx.doi.org/10.3390/s19143224.

Full text
Abstract:
Typically, lane departure warning systems rely on lane lines being present on the road. However, in many scenarios, e.g., secondary roads or some streets in cities, lane lines are either not present or not sufficiently well signaled. In this work, we present a vision-based method to locate a vehicle within the road when no lane lines are present using only RGB images as input. To this end, we propose to fuse together the outputs of a semantic segmentation and a monocular depth estimation architecture to reconstruct locally a semantic 3D point cloud of the viewed scene. We only retain points belonging to the road and, additionally, to any kind of fences or walls that might be present right at the sides of the road. We then compute the width of the road at a certain point on the planned trajectory and, additionally, what we denote as the fence-to-fence distance. Our system is suited to any kind of motoring scenario and is especially useful when lane lines are not present on the road or do not signal the path correctly. The additional fence-to-fence distance computation is complementary to the road's width estimation. We quantitatively test our method on a set of images featuring streets of the city of Munich that contain a road-fence structure, so as to compare our two proposed variants, namely the road's width and the fence-to-fence distance computation. In addition, we also validate our system qualitatively on the Stuttgart sequence of the publicly available Cityscapes dataset, where no fences or walls are present at the sides of the road, thus demonstrating that our system can be deployed in a standard city-like environment. For the benefit of the community, we make our software open source.
APA, Harvard, Vancouver, ISO, and other styles
38

Devane, Vighnesh, Ganesh Sahane, Hritish Khairmode, and Gaurav Datkhile. "Lane Detection Techniques using Image Processing." ITM Web of Conferences 40 (2021): 03011. http://dx.doi.org/10.1051/itmconf/20214003011.

Full text
Abstract:
Lane detection is a developing technology that is implemented in vehicles to enable autonomous navigation. Most lane detection systems are designed for roads with proper structure relying on the existence of markings. The main shortcoming of these approaches is that they might give inaccurate results or not work at all in situations involving unclear markings or the absence of them. In this study one such approach for detecting lanes on an unmarked road is reviewed followed by an improved approach. Both the approaches are based on digital image processing techniques and purely work on vision or camera data. The main aim is to obtain a real time curve value to assist the driver/autonomous vehicle for taking required turns and not go off the road.
APA, Harvard, Vancouver, ISO, and other styles
39

Panchal, Omkar. "Road Sign Recognition and Lane Detection using CNN with OpenCV." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 30, 2021): 3238–43. http://dx.doi.org/10.22214/ijraset.2021.35752.

Full text
Abstract:
As a result of road traffic crashes, approximately 1.35 million people die each year, and between 40 and 70 million are seriously injured. Most of these accidents occur because of a lack of response time to instantaneous traffic events. To develop such a recognition and detection system for autonomous cars, it is important to monitor and guide the driver through real-time traffic events. This involves road sign recognition and road lane detection. In order to make the driving process safer and more efficient, a plan is made to design a driver-assistance system with road sign recognition and lane detection features. In this system we focus on two important aspects: road sign recognition and lane detection. The process of road sign recognition in a video can be broken into two main research areas: detection and classification using convolutional neural networks. Road signs are detected by analysing colour information, which can be red or blue, contained in the images, whereas in the classification phase the signs are classified according to their shapes and characteristics. Along with road sign recognition we also focus on road lane detection, which is a significant method in vision-based driver support systems and can be used for vehicle guidance and monitoring, road congestion avoidance, and crash avoidance.
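As a hedged illustration of the colour-analysis step described above, the snippet below masks red and blue regions in HSV space and returns candidate bounding boxes that a CNN classifier could then label; the exact HSV thresholds and area cut-off are assumptions, not values from the paper.

```python
import cv2
import numpy as np

def sign_candidates(bgr):
    # Pick out red and blue regions, the two colour families mentioned in the
    # abstract, as candidate sign locations for a downstream CNN classifier.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    red = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255)) | \
          cv2.inRange(hsv, (170, 100, 80), (180, 255, 255))   # red wraps around hue 0
    blue = cv2.inRange(hsv, (100, 100, 80), (130, 255, 255))
    mask = cv2.morphologyEx(red | blue, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Return bounding boxes big enough to plausibly be a sign.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 400]

# Each (x, y, w, h) box would then be cropped and passed to the CNN for classification.
```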
APA, Harvard, Vancouver, ISO, and other styles
40

López, A., J. Serrat, C. Cañero, F. Lumbreras, and T. Graf. "Robust lane markings detection and road geometry computation." International Journal of Automotive Technology 11, no. 3 (May 29, 2010): 395–407. http://dx.doi.org/10.1007/s12239-010-0049-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Pappalardo, Giuseppina, Salvatore Cafiso, Alessandro Di Graziano, and Alessandro Severino. "Decision Tree Method to Analyze the Performance of Lane Support Systems." Sustainability 13, no. 2 (January 16, 2021): 846. http://dx.doi.org/10.3390/su13020846.

Full text
Abstract:
Road departure is one of the main causes of single-vehicle and frontal crashes. By implementing lateral support systems, a significant share of these accidents can be avoided. Typical accidents normally occur due to unintentional lane departure, where the driver drifts towards and across the line identifying the edge of the lane. Lane Support Systems (LSS) use cameras to "read" the lines on the road and alert the driver if the car is approaching them. However, despite the assumed technology readiness, there is still much uncertainty regarding the requirements of vision systems for "reading" the road, and only limited results are available from in-field testing. In this framework, the paper presents an experimental test of LSS performance carried out on two-lane rural roads with different geometric alignments and road marking conditions. LSS faults, in daylight and dry pavement conditions, were detected on average in 2% of the road sections. A decision tree method was used to analyze the causes of the faults and the importance of the variables involved in the process. The fault probability increased in road sections with a radius of less than 200 m and in poor road marking conditions.
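To illustrate the method class only: a decision tree can be fitted to per-section features such as curve radius and marking quality to expose which variables drive LSS fault probability. The data, feature names, and thresholds below are entirely synthetic and hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical per-section features: curve radius [m] and a road-marking
# quality score (0 = poor .. 1 = good); label 1 means an LSS fault occurred.
X = np.array([[150, 0.3], [180, 0.5], [600, 0.9], [900, 0.8],
              [190, 0.2], [400, 0.7], [120, 0.4], [800, 0.6]])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["radius_m", "marking_quality"]))
print(tree.feature_importances_)   # which variable drives fault probability
```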
APA, Harvard, Vancouver, ISO, and other styles
42

Long, JianWu, ZeRan Yan, Lang Peng, and Tong Li. "The geometric attention-aware network for lane detection in complex road scenes." PLOS ONE 16, no. 7 (July 15, 2021): e0254521. http://dx.doi.org/10.1371/journal.pone.0254521.

Full text
Abstract:
Lane detection in complex road scenes is still a challenging task due to poor lighting conditions, interference from irrelevant road markings or signs, etc. To solve the problem of lane detection in various complex road scenes, we propose a geometric attention-aware network (GAAN) for lane detection. The proposed GAAN adopts a multi-task branch architecture, uses the attention information propagation (AIP) module to communicate between branches, and then uses the geometric attention-aware (GAA) module to complete feature fusion. To verify the lane detection performance of the proposed model, experiments were conducted on the CULane, TuSimple, and BDD100K datasets. The experimental results show that our method performs well compared with current state-of-the-art lane line detection networks.
APA, Harvard, Vancouver, ISO, and other styles
43

KANEKO, Alex Masuo, and Kenjiro YAMAMOTO. "1A1-G06 Road Lane Detection Method for Safe Driving: Lane Estimation by Chronologically Detected Road Feature Parameters." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2015 (2015): _1A1—G06_1—_1A1—G06_2. http://dx.doi.org/10.1299/jsmermd.2015._1a1-g06_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Xiao, Jinsheng, Wenxin Xiong, Yuan Yao, Liang Li, and Reinhard Klette. "Lane Detection Algorithm Based on Road Structure and Extended Kalman Filter." International Journal of Digital Crime and Forensics 12, no. 2 (April 2020): 1–20. http://dx.doi.org/10.4018/ijdcf.2020040101.

Full text
Abstract:
Lane detection still shows low accuracy and lacks robustness when road markings are interrupted by strong light or shadows, or are missing altogether. This article proposes a new algorithm using a model of road structure and an extended Kalman filter. The region of interest is set according to the vanishing point. First, an edge-detection operator is used to scan horizontal pixels and calculate edge-strength values; the corresponding straight line is detected from line parameters voted for by the edge points. The edge points and lane-mark candidates extracted above, together with other constraints, are used to identify potential lane boundary points. Finally, the lane parameters are estimated from the coordinates of the lane boundary points and updated by an extended Kalman filter to ensure stability and robustness. Results indicate that the proposed algorithm is robust for challenging road scenes with low computational complexity.
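As a simplified sketch of the filtering step (the paper uses an extended Kalman filter with a road-structure model; this linear version with an identity measurement model is only illustrative), per-frame lane-parameter measurements can be smoothed as follows.

```python
import numpy as np

class LaneKF:
    # Simplified linear Kalman filter over lane parameters
    # [lateral offset, heading angle, curvature].
    def __init__(self):
        self.x = np.zeros(3)                        # state estimate
        self.P = np.eye(3)                          # state covariance
        self.Q = np.diag([0.01, 0.005, 0.001])      # process noise
        self.R = np.diag([0.5, 0.05, 0.02])         # measurement noise

    def predict(self):
        self.P = self.P + self.Q                    # constant-parameter motion model

    def update(self, z):
        # z: lane parameters measured from the current frame's boundary points
        S = self.P + self.R
        K = self.P @ np.linalg.inv(S)               # Kalman gain
        self.x = self.x + K @ (np.asarray(z) - self.x)
        self.P = (np.eye(3) - K) @ self.P
        return self.x

kf = LaneKF()
kf.predict()
print(kf.update([0.3, 0.02, 0.001]))                # smoothed lane parameters
```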
APA, Harvard, Vancouver, ISO, and other styles
45

Kim, Dae-Hyun. "Lane Detection Method with Impulse Radio Ultra-Wideband Radar and Metal Lane Reflectors." Sensors 20, no. 1 (January 6, 2020): 324. http://dx.doi.org/10.3390/s20010324.

Full text
Abstract:
An advanced driver-assistance system (ADAS), based on lane detection technology, detects dangerous situations through various sensors and either warns the driver or takes over direct control of the vehicle. At present, cameras are commonly used for lane detection; however, their performance varies widely depending on the lighting conditions. Consequently, many studies have focused on using radar for lane detection. However, when using radar, it is difficult to distinguish between the plain road surface and painted lane markers, necessitating the use of radar reflectors for guidance. Previous studies have used long-range radars, which may receive interference signals from various objects, including other vehicles, pedestrians, and buildings, thereby hampering lane detection. Therefore, we propose a lane detection method that uses an impulse radio ultra-wideband radar with high range resolution and metal lane markers installed at regular intervals on the road. Lane detection and departure detection are realized using the periodically reflected signals as well as vehicle speed data as inputs. For verification, a field test was conducted by attaching the radar to a vehicle and installing metal lane markers on the road. Experimental scenarios were established by varying the position and movement of the vehicle, and it was demonstrated that the proposed method enables lane detection based on the measured data.
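One simple way to exploit periodically reflected signals plus vehicle speed, sketched here purely for illustration (the reflector spacing, peak-extraction step, and consistency rule are assumptions, not the paper's algorithm):

```python
import numpy as np

def reflector_rate_consistent(peak_times_s, speed_mps, spacing_m, tol=0.2):
    # Metal reflectors are installed at a fixed spacing along the lane, so at a
    # given speed their radar returns should repeat with period spacing/speed.
    # This checks whether the measured inter-peak intervals match that period,
    # which is one crude way to confirm the vehicle is still tracking the lane.
    expected = spacing_m / speed_mps
    intervals = np.diff(np.sort(peak_times_s))
    return bool(np.all(np.abs(intervals - expected) / expected < tol))

# 5 m spacing at 20 m/s -> one reflector return every 0.25 s.
print(reflector_rate_consistent([0.0, 0.26, 0.51, 0.74], 20.0, 5.0))   # True
```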
APA, Harvard, Vancouver, ISO, and other styles
46

Talib, Muhamad Lazim, and Suzaimah Ramli. "A Review of Multiple Edge Detection in Road Lane Detection Using Improved Hough Transform." Advanced Materials Research 1125 (October 2015): 541–45. http://dx.doi.org/10.4028/www.scientific.net/amr.1125.541.

Full text
Abstract:
A lane detection system is an important platform for a safe driving experience and has been investigated as a way to reduce traffic accidents, monitor driving efficiency, and track vehicle position, contributing to the development of autonomous navigation technology. The purpose of this study is to select the best edge detector for an improved Hough transform technique that detects road lanes. Canny, Sobel, and Prewitt edge detection are evaluated, and the best edge detector is selected using neural network techniques. An improved Hough transform is used to extract the features of a structured road: points near the adopted straight-line model are used to accelerate the computation and find the appropriate line, and prior knowledge is used to efficiently reduce the Hough search space, thereby increasing robustness and processing speed. Experiments give good results in detecting straight and gently curved highway lanes, even when the lanes are partly covered by shadows. The highway lane data were recorded in video format. Experiments were carried out using the selected edge detector in each scenario, and the highest detection accuracy was obtained with the intelligent, neural-network-selected edge detector. In this way, the most suitable edge detector can be chosen for particular highway lane scenarios.
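A minimal sketch of the edge-detection-plus-Hough pipeline that this review compares, assuming OpenCV and a dashcam-style frame; the thresholds and region of interest are illustrative choices, not the paper's tuned values.

```python
import cv2
import numpy as np

def hough_lane_lines(bgr):
    # Canny edges restricted to a lower-image region of interest, followed by a
    # probabilistic Hough transform to extract straight lane-line segments.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    h, w = edges.shape
    roi = np.zeros_like(edges)
    roi[h // 2:, :] = 255                 # keep only the road half of the frame
    edges = cv2.bitwise_and(edges, roi)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=100)
    return [] if lines is None else lines.reshape(-1, 4)   # rows of (x1, y1, x2, y2)

# frame = cv2.imread("highway_frame.jpg"); print(hough_lane_lines(frame)[:5])
```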
APA, Harvard, Vancouver, ISO, and other styles
47

JIANG, RUYI, REINHARD KLETTE, TOBI VAUDREY, and SHIGANG WANG. "CORRIDOR DETECTION AND TRACKING FOR VISION-BASED DRIVER ASSISTANCE SYSTEM." International Journal of Pattern Recognition and Artificial Intelligence 25, no. 02 (March 2011): 253–72. http://dx.doi.org/10.1142/s0218001411008567.

Full text
Abstract:
A significant component of driver assistance systems (DAS) is lane detection, which has been studied since the 1990s. However, improving and generalizing lane detection solutions proved to be a challenging task until recently. A (physical) lane is defined by road boundaries or various kinds of lane marks, and this is only partially applicable for modeling the space an ego-vehicle is able to drive in. This paper proposes the concept of a (virtual) corridor for modeling this space. A corridor depends on information available about the motion of the ego-vehicle, as well as about the (physical) lane. This paper also suggests a modified version of the Euclidean Distance Transform (EDT), named the Row Orientation Distance Transform (RODT), to facilitate the detection of corridor boundary points. Boundary selection and road patch extension are then applied as post-processing. Moreover, this paper discusses possible applications of corridors for driver assistance. Finally, experiments using images from highways and urban roads with some challenging road situations are presented, illustrating the effectiveness of the proposed corridor detection algorithm. A comparison of lanes and corridors on a public dataset is also provided.
APA, Harvard, Vancouver, ISO, and other styles
48

Liu, Wenbo, Fei Yan, Jiyong Zhang, and Tao Deng. "A Robust Lane Detection Model Using Vertical Spatial Features and Contextual Driving Information." Sensors 21, no. 3 (January 21, 2021): 708. http://dx.doi.org/10.3390/s21030708.

Full text
Abstract:
The quality of detected lane lines has a great influence on the driving decisions of unmanned vehicles. However, while an unmanned vehicle is driving, changes in the driving scene cause much trouble for lane detection algorithms. Unclear and occluded lane lines cannot be reliably detected by most existing lane detection models in many complex driving scenes, such as crowded scenes, poor lighting conditions, etc. In view of this, we propose a robust lane detection model using vertical spatial features and contextual driving information in complex driving scenes. The more effective use of contextual information and vertical spatial features enables the proposed model to detect unclear and occluded lane lines more robustly through two designed blocks: a feature merging block and an information exchange block. The feature merging block provides increased contextual information to the subsequent network, which enables the network to learn more feature details that help to detect unclear lane lines. The information exchange block is a novel block that combines the advantages of spatial convolution and dilated convolution to enhance information transfer between pixels. The addition of spatial information allows the network to better detect occluded lane lines. Experimental results show that our proposed model detects lane lines more robustly and precisely than state-of-the-art models in a variety of complex driving scenarios.
APA, Harvard, Vancouver, ISO, and other styles
49

He, Xin Sheng, Shi Shi Wang, Zhi Yong Cai, and Dong Yun Wang. "Study on Urban Expressway Lane Mark Detection in Special Conditions." Advanced Materials Research 305 (July 2011): 164–67. http://dx.doi.org/10.4028/www.scientific.net/amr.305.164.

Full text
Abstract:
Lane line detection under special conditions is a key and difficult problem for computer-vision-based lane departure warning systems. This article first performs image compression and grayscale conversion, establishes a reasonable region of interest, and removes non-road information from the image. We then perform probabilistic and statistical computations on the image pixels and draw the gray-level histogram. By analyzing the dynamic gray-level histogram, we identify the gray values of the lane lines and the road surface and automatically calculate a reasonable threshold to binarize and then denoise the images. Finally, we label the images to identify the lane lines and to establish the distance between the lane lines and the vehicle. The test results show that the algorithm presented in this paper not only detects lane lines accurately in real time, but is also widely applicable and provides a reference for the improvement of lane departure warning systems.
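A hedged sketch of the histogram-based thresholding idea described above, using Otsu's method as the automatic threshold and a median filter for denoising; this is a simplified stand-in for the paper's dynamic-histogram analysis, and all parameter values are assumptions.

```python
import cv2
import numpy as np

def binarize_lane_roi(gray_roi):
    # Build the grey-level histogram of the region of interest, note the
    # dominant road-surface grey value, then binarize with an automatically
    # chosen threshold (Otsu) and denoise the result.
    hist = cv2.calcHist([gray_roi], [0], None, [256], [0, 256]).ravel()
    road_peak = int(np.argmax(hist))                  # dominant road grey value
    thresh, binary = cv2.threshold(gray_roi, 0, 255,
                                   cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    denoised = cv2.medianBlur(binary, 5)              # remove salt-and-pepper noise
    return road_peak, thresh, denoised

# roi = cv2.imread("expressway.png", cv2.IMREAD_GRAYSCALE)[240:, :]
# print(binarize_lane_roi(roi)[:2])
```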
APA, Harvard, Vancouver, ISO, and other styles
50

Sheu, Jia-Shing, Chun-Kang Tsai, and Po-Tong Wang. "Driving Assistance System with Lane Change Detection." Advances in Technology Innovation 6, no. 3 (May 5, 2021): 137–45. http://dx.doi.org/10.46604/aiti.2021.7109.

Full text
Abstract:
In this study, a simple technology for a self-driving system called “driver assistance system” is developed based on embedded image identification. The system consists of a camera, a Raspberry Pi board, and OpenCV. The camera is used to capture lane images, and the image noise is overcome through color space conversion, grayscale, Otsu thresholding, binarization, erosion, and dilation. Subsequently, two horizontal lines parallel to the X-axis with a fixed range and interval are used to detect left and right lane lines. The intersection points between the left and right lane lines and the two horizontal lines can be obtained, and can be used to calculate the slopes of the left and right lanes. Finally, the slope change of the left and right lanes and the offset of the lane intersection are determined to detect the deviation. When the angle of lanes changes drastically, the driver receives a deviation warning. The results of this study suggest that the proposed algorithm is 1.96 times faster than the conventional algorithm.
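As an illustrative sketch of the two-scan-line step (not the authors' implementation), the snippet below intersects two image rows with a binarized lane mask and derives the left and right lane slopes from the intersection points; the row indices and the centre split are assumptions.

```python
import numpy as np

def lane_slopes(binary, y1, y2):
    # Intersect two horizontal scan lines (rows y1, y2) with the binarized lane
    # mask, take the outermost lane pixels left/right of the image centre, and
    # compute each lane line's slope from its two intersection points.
    cx = binary.shape[1] // 2
    def edges(row):
        xs = np.where(binary[row] > 0)[0]
        left, right = xs[xs < cx], xs[xs >= cx]
        return (left.max() if left.size else None,
                right.min() if right.size else None)
    (l1, r1), (l2, r2) = edges(y1), edges(y2)
    slope = lambda xa, xb: (y2 - y1) / (xb - xa) if None not in (xa, xb) and xb != xa else None
    return slope(l1, l2), slope(r1, r2)   # (left slope, right slope)

# A drastic change in these slopes, or a shift of their intersection point,
# would trigger the deviation warning described in the abstract.
```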
APA, Harvard, Vancouver, ISO, and other styles
