
Journal articles on the topic 'Sketch-based Image retrieval'


Consult the top 50 journal articles for your research on the topic 'Sketch-based Image retrieval.'


1

Dhole, Trupti, Urmila Shelake, Sagar Surwase, Preetam Joshi, and Dhananjay Bhosale. "Survey on Sketch based Image Retrieval." International Journal of Scientific Engineering and Research 4, no. 10 (2016): 46–49. https://doi.org/10.70729/ijser15979.

2

Deepika, Sivasankaran, Seena P. Sai, R. Rajesh, and Kanmani Madheswari. "Sketch Based Image Retrieval using Deep Learning Based Machine Learning." International Journal of Engineering and Advanced Technology (IJEAT) 10, no. 5 (2021): 79–86. https://doi.org/10.35940/ijeat.E2622.0610521.

Abstract:
Sketch-based image retrieval (SBIR) is a sub-domain of content-based image retrieval (CBIR) in which the user provides a drawing as input to retrieve images relevant to that drawing. The main challenge in SBIR is the subjectivity of the drawings, as retrieval relies entirely on the user's ability to express information in hand-drawn form. Since many existing SBIR models aim at using a single input sketch and retrieving photos based on that single sketch, our project aims to enable detection and extraction of multiple sketches given together as a single input sketch image. Features are extracted from the individual sketches using deep learning architectures such as VGG16 and classified by type using supervised machine learning with Support Vector Machines. Based on the class obtained, photos are retrieved from the database using an OpenCV-based library, CVLib, which finds the objects present in a photo. From the number of components obtained in each photo, a ranking function ranks the retrieved photos, which are then displayed to the user from the highest rank to the lowest. The system, consisting of VGG16 and SVM, provides 89% accuracy.
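The pipeline in the abstract above ends with a ranking function over per-photo object counts. As a rough illustration of that final step (not the authors' code), the ranking might look like the following Python sketch; the photo names, class labels, and detection counts are invented stand-ins for object-detector output:

```python
# Hedged sketch of the ranking step: after each input sketch is classified
# (e.g. from VGG16 features with an SVM), candidate photos are ranked by
# how many detected objects match the predicted sketch classes.
# All names and counts below are hypothetical examples.

def rank_photos(photo_detections, sketch_classes):
    """Rank photos by the number of detected objects that match the
    classes predicted for the input sketches (highest count first)."""
    def score(item):
        _, detected = item
        return sum(detected.get(cls, 0) for cls in sketch_classes)
    ranked = sorted(photo_detections.items(), key=score, reverse=True)
    return [photo for photo, _ in ranked]

detections = {
    "photo_a.jpg": {"cat": 2, "dog": 1},
    "photo_b.jpg": {"dog": 3},
    "photo_c.jpg": {"car": 1},
}
print(rank_photos(detections, ["dog"]))  # 'photo_b.jpg' ranks first
```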
4

Luo, Qing, Xiang Gao, Bo Jiang, Xueting Yan, Wanyuan Liu, and Junchao Ge. "A review of fine-grained sketch image retrieval based on deep learning." Mathematical Biosciences and Engineering 20, no. 12 (2023): 21186–210. http://dx.doi.org/10.3934/mbe.2023937.

Abstract:
Sketch image retrieval is an important branch of the image retrieval field, mainly relying on sketch images as queries for content search. The acquisition process of sketch images is relatively simple and in some scenarios, such as when it is impossible to obtain photos of real objects, it demonstrates its unique practical application value, attracting the attention of many researchers. Furthermore, traditional generalized sketch image retrieval has its limitations when it comes to practical applications; merely retrieving images from the same category may not adequately identify the specific target that the user desires. Consequently, fine-grained sketch image retrieval merits further exploration and study. This approach offers the potential for more precise and targeted image retrieval, making it a valuable area of investigation compared to traditional sketch image retrieval. Therefore, we comprehensively review the fine-grained sketch image retrieval technology based on deep learning and its applications and conduct an in-depth analysis and summary of research literature in recent years. We also provide a detailed introduction to three fine-grained sketch image retrieval datasets: Queen Mary University of London (QMUL) ShoeV2, ChairV2 and PKU Sketch Re-ID, and list common evaluation metrics in the sketch image retrieval field, while showcasing the best performance achieved for these datasets. Finally, we discuss the existing challenges, unresolved issues and potential research directions in this field, aiming to provide guidance and inspiration for future research.
5

Lei, Haopeng, Simin Chen, Mingwen Wang, Xiangjian He, Wenjing Jia, and Sibo Li. "A New Algorithm for Sketch-Based Fashion Image Retrieval Based on Cross-Domain Transformation." Wireless Communications and Mobile Computing 2021 (May 25, 2021): 1–14. http://dx.doi.org/10.1155/2021/5577735.

Abstract:
Due to the rise of e-commerce platforms, online shopping has become a trend. However, the current mainstream retrieval methods are still limited to using text or exemplar images as input. For huge commodity databases, it remains a long-standing unsolved problem for users to find the products they are interested in quickly. Different from the traditional text-based and exemplar-based image retrieval techniques, sketch-based image retrieval (SBIR) provides a more intuitive and natural way for users to specify their search needs. Due to the large cross-domain discrepancy between free-hand sketches and fashion images, retrieving fashion images by sketches is a significantly challenging task. In this work, we propose a new algorithm for sketch-based fashion image retrieval based on cross-domain transformation. In our approach, the sketch and photo are first transformed into the same domain. Then, the sketch domain similarity and the photo domain similarity are calculated, respectively, and fused to improve the retrieval accuracy of fashion images. Moreover, the existing fashion image datasets mostly contain photos only and rarely contain sketch-photo pairs. Thus, we contribute a fine-grained sketch-based fashion image retrieval dataset, which includes 36,074 sketch-photo pairs. Specifically, when retrieving on our Fashion Image dataset, our model ranks the correct match at top-1 with an accuracy of 96.6%, 92.1%, 91.0%, and 90.5% for clothes, pants, skirts, and shoes, respectively. Extensive experiments conducted on our dataset and two fine-grained instance-level datasets, i.e., QMUL-shoes and QMUL-chairs, show that our model has achieved a better performance than other existing methods.
6

Abdul Baqi, Huda Abdulaali, Ghazali Sulong, Siti Zaiton Mohd Hashim, and Zinah S.Abdul jabar. "Innovative Sketch Board Mining for Online image Retrieval." Modern Applied Science 11, no. 3 (2016): 13. http://dx.doi.org/10.5539/mas.v11n3p13.

Abstract:
Developing an accurate and efficient Sketch-Based Image Retrieval (SBIR) method for determining the resemblance between a user's query and an image stream has been a never-ending quest in the digital data communication era. The main challenge is to overcome the asymmetry between a binary sketch and a full-color image. We introduce a unique sketch board mining method to recover online web images. This conceptual image retrieval is performed by matching the sketch query with the relevant terminology of selected images. A systematic sequence is followed, including the user's sketch drawing, interpretation of its geometrical shape in conceptual form based on an annotation-metadata matching technique obtained automatically from Google engines, and indexing and clustering of the selected images via data mining. The sketch mining board, built in a dynamic drawing state, uses a set of features to generalize sketch board conceptualization at the semantic level. Images from the global repository are retrieved via a semantic match with the user's sketch query. Excellent retrieval of hand-drawn sketches is found, achieving a recall rate within 0.1 to 0.8 and a precision rate of 0.7 to 0.98. The proposed technique solves many problems that state-of-the-art SBIR methods suffer from (e.g., scaling, transport, imperfect sketches). Furthermore, it is demonstrated that the proposed technique allows us to exploit high-level features to search the web effectively and may constitute a basis for an efficient and precise image recovery tool.
7

Reddy, N. Raghu Ram, Gundreddy Suresh Reddy, and Dr M. Narayana. "Color Sketch Based Image Retrieval." International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering 03, no. 09 (2014): 12179–85. http://dx.doi.org/10.15662/ijareeie.2014.0309054.

8

Lei, Haopeng, Yugen Yi, Yuhua Li, Guoliang Luo, and Mingwen Wang. "A new clothing image retrieval algorithm based on sketch component segmentation in mobile visual sensors." International Journal of Distributed Sensor Networks 14, no. 11 (2018): 155014771881562. http://dx.doi.org/10.1177/1550147718815627.

Abstract:
Nowadays, state-of-the-art mobile visual sensor technology makes it easy to collect a great number of clothing images. Accordingly, there is an increasing demand for a new efficient method to retrieve clothing images by using mobile visual sensors. Different from traditional keyword-based and content-based image retrieval techniques, sketch-based image retrieval provides a more intuitive and natural way for users to specify their search needs. However, this is a challenging problem due to the large discrepancy between sketches and images. To tackle this problem, we present a new sketch-based clothing image retrieval algorithm based on sketch component segmentation. The proposed strategy is to first collect a large-scale set of clothing sketches and images and tag them with semantic component labels as the training dataset; we then employ a conditional random field model to train a classifier that segments the query sketch into different components. After that, several feature descriptors are fused to describe each component and capture the topological information. Finally, a dynamic component-weighting strategy is established to boost the effect of important components when measuring similarities. The approach is evaluated on a large, real-world clothing image dataset, and experimental results demonstrate the effectiveness and good performance of the proposed method.
9

Saavedra, Jose M., and Benjamin Bustos. "Sketch-based image retrieval using keyshapes." Multimedia Tools and Applications 73, no. 3 (2013): 2033–62. http://dx.doi.org/10.1007/s11042-013-1689-0.

10

Ikeda, Takashi, and Masafumi Hagiwara. "Content-Based Image Retrieval System Using Neural Networks." International Journal of Neural Systems 10, no. 05 (2000): 417–24. http://dx.doi.org/10.1142/s0129065700000326.

Abstract:
An effective image retrieval system is developed based on the use of neural networks (NNs). It takes advantage of the association ability of multilayer NNs as matching engines that calculate similarities between a user's drawn sketch and the stored images. The NNs memorize pixel information of every size-reduced image (thumbnail) in the learning phase. In the retrieval phase, pixel information of a user's rough sketch is input to the learned NNs, and they estimate the candidates. Thus the system can retrieve candidates quickly and correctly by utilizing the parallelism and association ability of NNs. In addition, the system has learning capability: it can automatically extract features of a user's drawn sketch during the retrieval phase and store them as additional information to improve performance. The software for querying, including efficient graphical user interfaces, has been implemented and tested. The effectiveness of the proposed system has been investigated through various experimental tests.
11

Gatti, Prajwal, Kshitij Parikh, Dhriti Prasanna Paul, Manish Gupta, and Anand Mishra. "Composite Sketch+Text Queries for Retrieving Objects with Elusive Names and Complex Interactions." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (2024): 1869–77. http://dx.doi.org/10.1609/aaai.v38i3.27956.

Abstract:
Non-native speakers with limited vocabulary often struggle to name specific objects despite being able to visualize them, e.g., people outside Australia searching for ‘numbats.’ Further, users may want to search for such elusive objects with difficult-to-sketch interactions, e.g., “numbat digging in the ground.” In such common but complex situations, users desire a search interface that accepts composite multimodal queries comprising hand-drawn sketches of “difficult-to-name but easy-to-draw” objects and text describing “difficult-to-sketch but easy-to-verbalize” object's attributes or interaction with the scene. This novel problem statement distinctly differs from the previously well-researched TBIR (text-based image retrieval) and SBIR (sketch-based image retrieval) problems. To study this under-explored task, we curate a dataset, CSTBIR (Composite Sketch+Text Based Image Retrieval), consisting of ~2M queries and 108K natural scene images. Further, as a solution to this problem, we propose a pretrained multimodal transformer-based baseline, STNet (Sketch+Text Network), that uses a hand-drawn sketch to localize relevant objects in the natural scene image, and encodes the text and image to perform image retrieval. In addition to contrastive learning, we propose multiple training objectives that improve the performance of our model. Extensive experiments show that our proposed method outperforms several state-of-the-art retrieval methods for text-only, sketch-only, and composite query modalities. We make the dataset and code available at: https://vl2g.github.io/projects/cstbir.
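The contrastive learning objective mentioned in the abstract above is a common ingredient in cross-modal retrieval. As a generic numpy illustration (not STNet's actual formulation), an InfoNCE-style batch loss over matched query and image embeddings can be sketched as:

```python
import numpy as np

# Illustrative InfoNCE-style contrastive loss: pull each query embedding
# toward its matching image embedding (the diagonal of the similarity
# matrix) and push it away from the other images in the batch.
# This is a generic sketch, not the paper's exact objective.

def contrastive_loss(queries, images, temperature=0.1):
    """queries, images: (n, d) L2-normalised embedding matrices where
    row i of each matrix forms a matching (query, image) pair."""
    logits = queries @ images.T / temperature        # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # matches on the diagonal
```

With perfectly matched embeddings the loss is near zero; shuffling the image rows so no pair matches drives it up, which is the gradient signal that aligns the two modalities.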
12

Christanti Mawardi, Viny, Yoferen Yoferen, and Stéphane Bressan. "Sketch-Based Image Retrieval with Histogram of Oriented Gradients and Hierarchical Centroid Methods." E3S Web of Conferences 188 (2020): 00026. http://dx.doi.org/10.1051/e3sconf/202018800026.

Abstract:
Searching images in a digital image dataset can be done with sketch-based image retrieval, which performs retrieval based on the similarity between the dataset images and an input sketch image. Preprocessing is done using Canny edge detection to detect the edges of the dataset images. Feature extraction is done using Histogram of Oriented Gradients and Hierarchical Centroid on the sketch image and all preprocessed dataset images. The feature distance between the sketch image and all dataset images is calculated with the Euclidean distance. The dataset images used in the test consist of 10 classes. The test results show that Histogram of Oriented Gradients, Hierarchical Centroid, and the combination of both methods, with low and high thresholds of 0.05 and 0.5, have average precision and recall values of 90.8% and 13.45%, 70% and 10.64%, and 91.4% and 13.58%, respectively. The average precision and recall values with low and high thresholds of 0.01 and 0.1, and of 0.3 and 0.7, are 87.2% and 13.19%, and 86.7% and 12.57%. The combination of the Histogram of Oriented Gradients and Hierarchical Centroid methods with low and high thresholds of 0.05 and 0.5 produces better retrieval results than using either method individually or using other low and high thresholds.
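The retrieval loop described in the abstract above (gradient-orientation features compared by Euclidean distance) can be sketched minimally with numpy. This is an illustrative simplification, not the paper's implementation: it uses a single global orientation histogram instead of cell-based HOG and omits the Canny preprocessing and Hierarchical Centroid descriptor:

```python
import numpy as np

# Minimal sketch, assuming grayscale images as 2-D numpy arrays:
# a global histogram-of-oriented-gradients descriptor, ranked by
# Euclidean distance to the query sketch.

def hog_descriptor(image, bins=9):
    gy, gx = np.gradient(image.astype(float))       # row- and column-gradients
    magnitude = np.hypot(gx, gy)
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned gradients
    hist, _ = np.histogram(orientation, bins=bins, range=(0, 180),
                           weights=magnitude)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def retrieve(sketch, dataset):
    """Return dataset keys sorted by Euclidean distance to the sketch."""
    q = hog_descriptor(sketch)
    dists = {name: np.linalg.norm(q - hog_descriptor(img))
             for name, img in dataset.items()}
    return sorted(dists, key=dists.get)

# Toy usage: a vertical step edge should retrieve the vertical-edge image first.
v = np.tile(np.repeat([0.0, 1.0], 8), (16, 1))  # vertical edge
h = v.T.copy()                                   # horizontal edge
print(retrieve(v, {"vertical": v, "horizontal": h}))  # 'vertical' ranks first
```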
13

Kaur, Bhupinder. "A Deep Learning Approach for Content-Based Image Retrieval." International Journal of Scientific Research in Engineering and Management 09, no. 06 (2025): 1–9. https://doi.org/10.55041/ijsrem50977.

Abstract:
Content-Based Image Retrieval (CBIR) aims to retrieve relevant images based on visual content rather than metadata, addressing the limitations of traditional retrieval methods. This study proposes a deep learning-based CBIR system utilizing Convolutional Neural Networks (CNNs) for automatic feature extraction. Leveraging the CIFAR-10 dataset, the system is evaluated against traditional handcrafted methods such as color histograms and color moments. Various retrieval paradigms (image-based, text-based, sketch-based, and conceptual layout) are analyzed for performance comparison. Experimental results demonstrate that CNN-based retrieval achieves over 85% accuracy, significantly outperforming traditional approaches. The system exhibits robustness to intra-class variation, occlusion, and background noise, establishing deep learning as a superior and scalable approach for large-scale CBIR applications. Keywords: Content-Based Image Retrieval, Deep Learning, CNN, Image Similarity, CIFAR-10, Feature Extraction, Image Retrieval Algorithms.
14

Xu, Yuxin, Yuyao Yan, Yiming Lin, Xi Yang, and Kaizhu Huang. "Sketch Based Image Retrieval for Architecture Images with Siamese Swin Transformer." Journal of Physics: Conference Series 2278, no. 1 (2022): 012035. http://dx.doi.org/10.1088/1742-6596/2278/1/012035.

Abstract:
Sketch-based image retrieval (SBIR) is an image retrieval task that takes a sketch as input and outputs colour images matching the sketch. Most recent SBIR methods utilise deep learning methods with complicated network designs, which are resource-intensive for practical use. This paper proposes a novel compact framework that takes the siamese network with image view angle information, targeting the SBIR task for architecture images. In particular, the proposed siamese network engages a compact SwinTiny transformer as the backbone encoder. View angle information of the architecture image is fed to the model to further improve search accuracy. To cope with the insufficient sketches issue, simulated building sketches are used in training, which are generated by a pre-trained edge extractor. Experiments show that our model achieves 0.859 top-one accuracy exceeding many baseline models for an architecture retrieval task.
15

Yang, Bo, Chen Wang, Xiaoshuang Ma, Beiping Song, Zhuang Liu, and Fangde Sun. "Zero-Shot Sketch-Based Remote-Sensing Image Retrieval Based on Multi-Level and Attention-Guided Tokenization." Remote Sensing 16, no. 10 (2024): 1653. http://dx.doi.org/10.3390/rs16101653.

Abstract:
Effectively and efficiently retrieving images from remote-sensing databases is a critical challenge in the realm of remote-sensing big data. Utilizing hand-drawn sketches as retrieval inputs offers intuitive and user-friendly advantages, yet the potential of multi-level feature integration from sketches remains underexplored, leading to suboptimal retrieval performance. To address this gap, our study introduces a novel zero-shot, sketch-based retrieval method for remote-sensing images, leveraging multi-level feature extraction, self-attention-guided tokenization and filtering, and cross-modality attention update. This approach employs only vision information and does not require semantic knowledge concerning the sketch and image. It starts by employing multi-level self-attention guided feature extraction to tokenize the query sketches, as well as self-attention feature extraction to tokenize the candidate images. It then employs cross-attention mechanisms to establish token correspondence between these two modalities, facilitating the computation of sketch-to-image similarity. Our method significantly outperforms existing sketch-based remote-sensing image retrieval techniques, as evidenced by tests on multiple datasets. Notably, it also exhibits robust zero-shot learning capabilities in handling unseen categories and strong domain adaptation capabilities in handling unseen novel remote-sensing data. The method’s scalability can be further enhanced by the pre-calculation of retrieval tokens for all candidate images in a database. This research underscores the significant potential of multi-level, attention-guided tokenization in cross-modal remote-sensing image retrieval. For broader accessibility and research facilitation, we have made the code and dataset used in this study publicly available online.
16

Gopu, Venkata Rama Muni Kumar, and Madhavi Dunna. "Zero-Shot Sketch-Based Image Retrieval Using StyleGen and Stacked Siamese Neural Networks." Journal of Imaging 10, no. 4 (2024): 79. http://dx.doi.org/10.3390/jimaging10040079.

Abstract:
Sketch-based image retrieval (SBIR) refers to a sub-class of content-based image retrieval problems where the input queries are ambiguous sketches and the retrieval repository is a database of natural images. In the zero-shot setup of SBIR, the query sketches are drawn from classes that do not match any of those that were used in model building. The SBIR task is extremely challenging as it is a cross-domain retrieval problem, unlike content-based image retrieval problems because sketches and images have a huge domain gap. In this work, we propose an elegant retrieval methodology, StyleGen, for generating fake candidate images that match the domain of the repository images, thus reducing the domain gap for retrieval tasks. The retrieval methodology makes use of a two-stage neural network architecture known as the stacked Siamese network, which is known to provide outstanding retrieval performance without losing the generalizability of the approach. Experimental studies on the image sketch datasets TU-Berlin Extended and Sketchy Extended, evaluated using the mean average precision (mAP) metric, demonstrate a marked performance improvement compared to the current state-of-the-art approaches in the domain.
17

Adimas, Adimas, and Suhendro Y. Irianto. "Image Sketch Based Criminal Face Recognition Using Content Based Image Retrieval." Scientific Journal of Informatics 8, no. 2 (2021): 176–82. http://dx.doi.org/10.15294/sji.v8i2.27865.

Abstract:
Purpose: Face recognition is a geometric space recording activity that allows it to be used to distinguish the features of a face. Therefore, facial recognition can be used to identify ID cards and ATM card PINs, and to search for individuals who have committed crimes, terrorists, and other criminals whose faces were not caught by Closed-Circuit Television (CCTV). Based on the face image database and by applying the Content-Based Image Retrieval (CBIR) method, committed crimes can be recognized from a face. Moreover, an image segmentation technique was carried out before CBIR was applied. This work tried to recognize an individual who committed crimes based on his or her face by using sketch facial images as a query. Methods: We used an image sketch as a query because CCTV might not have caught the face image. The research used no fewer than 1,000 facial images, both normal and abnormal faces (with obstacles). Findings: Experiments demonstrated good results in terms of precision and recall, which are 0.8 and 0.3 respectively, better than at least two previous works. The work demonstrates a precision of 80%, which means retrieval effectiveness is good enough. 75 queries were carried out in this work to compute the precision and recall of image retrieval. Novelty: Most face recognition researchers using CBIR employed an image as a query. Furthermore, previous work still rarely applied image segmentation as well as CBIR.
18

Zhang, Xianlin, Xueming Li, Xuewei Li, and Mengling Shen. "Better freehand sketch synthesis for sketch-based image retrieval: Beyond image edges." Neurocomputing 322 (December 2018): 38–46. http://dx.doi.org/10.1016/j.neucom.2018.09.047.

19

Dutta, Anjan, and Zeynep Akata. "Semantically Tied Paired Cycle Consistency for Any-Shot Sketch-Based Image Retrieval." International Journal of Computer Vision 128, no. 10-11 (2020): 2684–703. http://dx.doi.org/10.1007/s11263-020-01350-x.

Abstract:
Low-shot sketch-based image retrieval is an emerging task in computer vision, allowing to retrieve natural images relevant to hand-drawn sketch queries that are rarely seen during the training phase. Related prior works either require aligned sketch-image pairs that are costly to obtain or inefficient memory fusion layer for mapping the visual information to a semantic space. In this paper, we address any-shot, i.e. zero-shot and few-shot, sketch-based image retrieval (SBIR) tasks, where we introduce the few-shot setting for SBIR. For solving these tasks, we propose a semantically aligned paired cycle-consistent generative adversarial network (SEM-PCYC) for any-shot SBIR, where each branch of the generative adversarial network maps the visual information from sketch and image to a common semantic space via adversarial training. Each of these branches maintains cycle consistency that only requires supervision at the category level, and avoids the need of aligned sketch-image pairs. A classification criteria on the generators’ outputs ensures the visual to semantic space mapping to be class-specific. Furthermore, we propose to combine textual and hierarchical side information via an auto-encoder that selects discriminating side information within a same end-to-end model. Our results demonstrate a significant boost in any-shot SBIR performance over the state-of-the-art on the extended version of the challenging Sketchy, TU-Berlin and QuickDraw datasets.
20

Chandresh, Pratap Singh. "R-Tree Implementation of Image Databases." Signal & Image Processing : An International Journal (SIPIJ) 2, no. 4 (2019): 89–108. https://doi.org/10.5281/zenodo.3501853.

Abstract:
With the onslaught of multimedia in the recent past, there has been a tremendous increase in the use of images. A very good example is the web, on which most documents contain images. Beyond this, images are used in other applications such as weather forecasting, medical diagnosis, and police work. In the R-tree implementation of an image database, images are made available to the program, which then stores them in the database. The image database is represented using an R-tree, and the database is stored in a separate file. This R-tree implementation enables both updates and efficient retrieval of images from the hard disk [1][2][4]. We use the similarity-based retrieval feature to retrieve the required number of similar images requested by the user [3][5][6]. A distance-matrix approach is used to find the similarity of images [7]. The Sobel edge detection algorithm is used to form sketches. If a sketch of an image is entered for similarity-based retrieval, then sketches of the stored images are formed and compared with the input sketch using the distance-matrix approach [8][9].
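The abstract above uses Sobel edge detection to turn stored images into sketches before distance-matrix comparison. A minimal numpy illustration of that edge-detection step (a generic Sobel pass, not the paper's implementation):

```python
import numpy as np

# Illustrative Sobel edge detection for turning a grayscale image
# (2-D numpy array) into a binary edge "sketch". Generic sketch only.

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(image, kernel):
    """Naive 'valid' cross-correlation; the sign difference versus true
    convolution does not affect the gradient magnitude used below."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_sketch(image, threshold=1.0):
    gx = convolve2d(image, SOBEL_X)   # horizontal gradient
    gy = convolve2d(image, SOBEL_Y)   # vertical gradient
    magnitude = np.hypot(gx, gy)
    return (magnitude > threshold).astype(np.uint8)  # binary edge sketch
```

Running this on an image with a vertical step edge produces a sketch with edge pixels only along the step, which can then be compared against a query sketch with the distance-matrix approach the abstract describes.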
21

Li, Yi, and Wenzhao Li. "A survey of sketch-based image retrieval." Machine Vision and Applications 29, no. 7 (2018): 1083–100. http://dx.doi.org/10.1007/s00138-018-0953-8.

22

Prasad K, Durga, Manjunathachari K, and Giri Prasad M.N. "Orientation Feature Transform Model for Image Retrieval in Sketch Based Image Retrieval System." International Journal of Engineering & Technology 7, no. 2.24 (2018): 159. http://dx.doi.org/10.14419/ijet.v7i2.24.12022.

Abstract:
This paper focuses on image retrieval using a sketch-based image retrieval system. The low-complexity model for image representation has made sketch-based image retrieval (SBIR) an optimal selection for next-generation applications in low-resource environments. The SBIR approach uses a geometrical region representation to describe features and utilizes it for recognition. In the SBIR model, the represented features define the image. Towards improving SBIR recognition performance, this paper proposes a new invariant model using "orientation feature transform modeling". The approach enhances the invariance property and improves retrieval performance in the transformed domain. The experimental results illustrate the significance of invariant orientation feature representation in SBIR over conventional models.
23

More, Prof Rupal D., Rajashri Puranik, Purva Dusane, Sejal Bhawar, and Himanshu Sahu. "Sketch-Based Image Retrieval System for Criminal Records Using Deep Learning." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (2023): 4082–87. http://dx.doi.org/10.22214/ijraset.2023.52585.

Abstract:
Abstract: An overview of sketch-based image retrieval for criminal records, where the user provides a sketch as input to the system to retrieve relevant images from the database. Traditional methods of drawing a face sketch are still difficult and time consuming. This system is developed so that the identification of criminals is done faster than with the traditional method. Therefore, the paper presents a simple and effective deep learning framework in which the user can create a sketch of the suspect and match it against the database to get the relevant criminal images. It mainly uses Histogram of Oriented Gradients, Support Vector Machine, and deep convolutional neural network machine learning algorithms for face landmark estimation, feature extraction, and pattern matching. SBIR proves to be more efficient and faster at processing the criminal face in real time, as time plays an important role in immediate action in the crime branch.
24

Ge, Ce, Jingyu Wang, Qi Qi, Haifeng Sun, Tong Xu, and Jianxin Liao. "Scene-Level Sketch-Based Image Retrieval with Minimal Pairwise Supervision." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (2023): 650–57. http://dx.doi.org/10.1609/aaai.v37i1.25141.

Full text
Abstract:
The sketch-based image retrieval (SBIR) task has long been researched at the instance level, where both query sketches and candidate images are assumed to contain only one dominant object. This strong assumption constrains its application, especially with the increasingly popular intelligent terminals and human-computer interaction technology. In this work, a more general scene-level SBIR task is explored, where sketches and images can both contain multiple object instances. The new general task is extremely challenging due to several factors: (i) scene-level SBIR inherently shares sketch-specific difficulties with instance-level SBIR (e.g., sparsity, abstractness, and diversity), (ii) the cross-modal similarity is measured between two partially aligned domains (i.e., not all objects in images are drawn in scene sketches), and (iii) besides instance-level visual similarity, a more complex multi-dimensional scene-level feature matching problem is imposed (including appearance, semantics, layout, etc.). Addressing these challenges, a novel Conditional Graph Autoencoder model is proposed to deal with scene-level sketch-image retrieval. More importantly, the model can be trained with only pairwise supervision, which distinguishes our study from others in that elaborate instance-level annotations (for example, bounding boxes) are no longer required. Extensive experiments confirm the ability of our model to robustly retrieve multiple related objects at the scene level and exhibit superior performance over strong competitors.
APA, Harvard, Vancouver, ISO, and other styles
25

Guo, Yuanchen, Yun Cai, and Songhai Zhang. "Attentive Edgemap Fusion for Sketch-Based Image Retrieval." Journal of Computer-Aided Design & Computer Graphics 33, no. 6 (2021): 847–54. http://dx.doi.org/10.3724/sp.j.1089.2021.18589.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Thankachan, Sini. "MindCam: An Approach for Sketch Based Image Retrieval." International Journal of Information Systems and Computer Sciences 8, no. 2 (2019): 67–71. http://dx.doi.org/10.30534/ijiscs/2019/16822019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Ohashi, Gosuke, Yasutake Nagashima, Keita Mochizuki, and Yoshifumi Shimodaira. "Edge-based Image Retrieval Using a Rough Sketch." Journal of the Institute of Image Information and Television Engineers 56, no. 4 (2002): 653–58. http://dx.doi.org/10.3169/itej.56.653.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Wang, Shu, and Zhenjiang Miao. "Sketch-based image retrieval using hierarchical partial matching." Journal of Electronic Imaging 24, no. 4 (2015): 043010. http://dx.doi.org/10.1117/1.jei.24.4.043010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Zhu, Ming, Chun Chen, Nian Wang, Jun Tang, and Wenxia Bao. "Gradually focused fine-grained sketch-based image retrieval." PLOS ONE 14, no. 5 (2019): e0217168. http://dx.doi.org/10.1371/journal.pone.0217168.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Wang, Jingyu, Yu Zhao, Qi Qi, et al. "MindCamera: Interactive Sketch-Based Image Retrieval and Synthesis." IEEE Access 6 (2018): 3765–73. http://dx.doi.org/10.1109/access.2018.2796638.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Fu, Haiyan, Hanguang Zhao, Xiangwei Kong, and Xianbo Zhang. "BHoG: binary descriptor for sketch-based image retrieval." Multimedia Systems 22, no. 1 (2014): 127–36. http://dx.doi.org/10.1007/s00530-014-0406-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Zhang, Yuting, Xueming Qian, Xianglong Tan, Junwei Han, and Yuanyan Tang. "Sketch-Based Image Retrieval by Salient Contour Reinforcement." IEEE Transactions on Multimedia 18, no. 8 (2016): 1604–15. http://dx.doi.org/10.1109/tmm.2016.2568138.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Sheng, Jianqiang, Fei Wang, Baoquan Zhao, Junkun Jiang, Yu Yang, and Tie Cai. "Sketch-Based Image Retrieval Using Novel Edge Detector and Feature Descriptor." Wireless Communications and Mobile Computing 2022 (February 1, 2022): 1–12. http://dx.doi.org/10.1155/2022/4554911.

Full text
Abstract:
With the explosive increase of digital images, intelligent information retrieval systems have become an indispensable tool to facilitate users’ information seeking process. Although various kinds of techniques like keyword-/content-based methods have been extensively investigated, how to effectively retrieve relevant images from a large-scale database remains a very challenging task. Recently, with the wide availability of touch screen devices and their associated human-computer interaction technology, sketch-based image retrieval (SBIR) methods have attracted more and more attention. In contrast to keyword-based methods, SBIR allows users to flexibly manifest their information needs into sketches by drawing abstract outlines of an object/scene. Despite its ease and intuitiveness, it is still a nontrivial task to accurately extract and interpret the semantic information from sketches, largely because of the diverse drawing styles of different users. As a consequence, the performance of existing SBIR systems is still far from being satisfactory. In this paper, we introduce a novel sketch image edge feature extraction algorithm to tackle the challenges. Firstly, we propose a Gaussian blur-based multiscale edge extraction (GBME) algorithm to capture more comprehensive and detailed features by continuously superimposing the edge filtering results after Gaussian blur processing. Secondly, we devise a hybrid barycentric feature descriptor (RSB-HOG) that extracts HOG features by randomly sampling points on the edges of a sketch. In addition, we integrate the directional distribution of the barycenters of all sampling points into the feature descriptor and thus improve its representational capability in capturing the semantic information of contours. To examine the efficiency of our method, we carry out extensive experiments on the public Flickr15K dataset. The experimental results indicate that the proposed method is superior to existing peer SBIR systems in terms of retrieval accuracy.
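As a rough illustration of the multiscale idea behind GBME (a sketch under assumptions, not the authors' algorithm): blur the image at several Gaussian scales and superimpose the gradient-magnitude edge maps obtained at each scale, so that both fine and coarse contours contribute to the final edge image.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    """Normalized 1-D Gaussian kernel."""
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Separable Gaussian blur: filter rows, then columns (edge-replicated)."""
    k = gaussian_kernel1d(sigma)
    pad = len(k) // 2

    def conv1d(a):
        return np.convolve(np.pad(a, pad, mode="edge"), k, mode="valid")

    blurred = np.apply_along_axis(conv1d, 1, img.astype(np.float64))
    return np.apply_along_axis(conv1d, 0, blurred)

def multiscale_edges(img, sigmas=(1.0, 2.0, 4.0)):
    """Superimpose gradient-magnitude edge maps computed after
    Gaussian blurs of increasing scale."""
    acc = np.zeros(img.shape, dtype=np.float64)
    for s in sigmas:
        b = gaussian_blur(img, s)
        gy, gx = np.gradient(b)
        acc += np.hypot(gx, gy)
    return acc / len(sigmas)
```

The accumulated map responds strongly along object boundaries at any scale, which is the kind of comprehensive edge evidence the abstract attributes to GBME.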
APA, Harvard, Vancouver, ISO, and other styles
34

Habrat, Magdalena, and Mariusz Młynarczuk. "Object Retrieval in Microscopic Images of Rocks Using the Query by Sketch Method." Applied Sciences 10, no. 1 (2019): 278. http://dx.doi.org/10.3390/app10010278.

Full text
Abstract:
This paper presents a method for retrieving geological images, or their fragments, using the Query by Sketch method. The sketch can be created manually, for instance in a graphics editor, and may show the shape of objects or their distribution within an image. The query is then used to search the image database for the objects showing the greatest similarity. As an example of the proposed method, the detection of porosity in microscopic images of carbonate rock and sandstone is presented. The approach is founded on determining parameters of selected properties of the query image and of the images in the database, and on an analysis of the conformity of these parameters. Two methods are proposed: the first searches the image database for the object most similar with respect to the set criteria; the second searches, based on a sketch, for images that are similar in terms of object distribution (i.e., porosity). The presented research results confirm that database search using the query by sketch method is an interesting and modern approach and may constitute one of the functionalities of IT systems intended for use in the geology and mining industries.
APA, Harvard, Vancouver, ISO, and other styles
35

R., Dipika, and J. V. "A Sketch based Image Retrieval with Descriptor based on Constraints." International Journal of Computer Applications 146, no. 12 (2016): 7–11. http://dx.doi.org/10.5120/ijca2016910923.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Ge, Ce, Jingyu Wang, Qi Qi, Haifeng Sun, Tong Xu, and Jianxin Liao. "Semi-transductive Learning for Generalized Zero-Shot Sketch-Based Image Retrieval." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (2023): 7678–86. http://dx.doi.org/10.1609/aaai.v37i6.25931.

Full text
Abstract:
Sketch-based image retrieval (SBIR) is an attractive research area where freehand sketches are used as queries to retrieve relevant images. Existing solutions have advanced the task to the challenging zero-shot setting (ZS-SBIR), where the trained models are tested on new classes without seen data. However, they are prone to overfitting under a realistic scenario when the test data includes both seen and unseen classes. In this paper, we study generalized ZS-SBIR (GZS-SBIR) and propose a novel semi-transductive learning paradigm. Transductive learning is performed on the image modality to explore the potential data distribution within unseen classes, and zero-shot learning is performed on the sketch modality sharing the learned knowledge through a semi-heterogeneous architecture. A hybrid metric learning strategy is proposed to establish semantics-aware ranking property and calibrate the joint embedding space. Extensive experiments are conducted on two large-scale benchmarks and four evaluation metrics. The results show that our method is superior over the state-of-the-art competitors in the challenging GZS-SBIR task.
APA, Harvard, Vancouver, ISO, and other styles
37

Amarnadh, S., P. V. G. D. Reddy, and N. V. E. S. Murthy. "Perlustration on Image Processing under Free Hand Sketch Based Image Retrieval." EAI Endorsed Transactions on Internet of Things 4, no. 16 (2018): 159334. http://dx.doi.org/10.4108/eai.21-12-2018.159334.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Liu, Yujie, Changhong Dou, Qilu Zhao, Zongmin Li, and Hua Li. "Sketch Based Image Retrieval with Conditional Generative Adversarial Network." Journal of Computer-Aided Design & Computer Graphics 29, no. 12 (2017): 2336. http://dx.doi.org/10.3724/sp.j.1089.2017.16596.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Torabi Motlagh Fard, Mohammad Hossein, Nazean Jomhari, and Sri Devi Ravana. "Sketch Based Image Retrieval by Using Feature Extraction Technique." Journal of Computer Science & Computational Mathematics 6, no. 1 (2016): 21–24. http://dx.doi.org/10.20967/jcscm.2016.01.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Wang, Luo, Xueming Qian, Xingjun Zhang, and Xingsong Hou. "Sketch-Based Image Retrieval With Multi-Clustering Re-Ranking." IEEE Transactions on Circuits and Systems for Video Technology 30, no. 12 (2020): 4929–43. http://dx.doi.org/10.1109/tcsvt.2019.2959875.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Huang, Fei, Cheng Jin, Yuejie Zhang, Kangnian Weng, Tao Zhang, and Weiguo Fan. "Sketch-based image retrieval with deep visual semantic descriptor." Pattern Recognition 76 (April 2018): 537–48. http://dx.doi.org/10.1016/j.patcog.2017.11.032.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Hayashi, Takahiro, Atsushi Ishikawa, and Rikio Onai. "Landscape Image Retrieval with Query by Sketch and Icon." Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no. 1 (2007): 61–70. http://dx.doi.org/10.20965/jaciii.2007.p0061.

Full text
Abstract:
This paper reports a new method for retrieving landscape images using a sketch and icons as a query. In the proposed method, the user first sketches lines expressing the contours of landscape elements such as mountains and forests and attaches icons representing landscape elements to the sketch. Whether individual images in a database match the layout expressed by the sketch and icons is then judged with principal component analysis and pattern recognition. Experimental results confirm that, on average, the proportion of correct images ranked within the top 10 retrieval results is 80%.
APA, Harvard, Vancouver, ISO, and other styles
43

Saavedra, Jose M. "RST-SHELO: sketch-based image retrieval using sketch tokens and square root normalization." Multimedia Tools and Applications 76, no. 1 (2015): 931–51. http://dx.doi.org/10.1007/s11042-015-3076-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Sabry, Eman S., Salah Elagooz, Fathi E. Abd El-Samie, et al. "Sketch-Based Retrieval Approach Using Artificial Intelligence Algorithms for Deep Vision Feature Extraction." Axioms 11, no. 12 (2022): 663. http://dx.doi.org/10.3390/axioms11120663.

Full text
Abstract:
Since the onset of civilization, sketches have been used to portray our visual world, and they continue to do so in many different disciplines today. Establishing similarities between sketches is a crucial aspect of gathering forensic evidence in crimes at certain government agencies, in addition to satisfying users’ subjective requirements when searching and browsing for specific sorts of images (i.e., clip art images), especially with the proliferation of smartphones with touchscreens. With such a search, quickly and effectively drawing sketches and retrieving them from databases can occasionally be challenging when using keywords or categories. In some situations, drawing a few simple forms and searching for the image that way can be simpler than attempting to put the vision into words, which is not always possible. Modern techniques, such as Content-Based Image Retrieval (CBIR), may offer a more useful solution. The key engine of such techniques, which poses various challenges, is effective visual feature representation. Object edge feature detectors are commonly used to extract features from different sorts of images; however, they are computationally complex and therefore time-consuming, and difficult to implement with real-time response. Therefore, assessing and identifying alternative solutions from the vast array of methods is essential. The Scale-Invariant Feature Transform (SIFT) is a typical solution that has been used in most prevalent research studies; even for learning-based methods, SIFT is frequently used for comparison and assessment. However, SIFT has several downsides. Hence, this research is directed at the utilization of the handcrafted-feature-based Oriented FAST and Rotated BRIEF (ORB) descriptor to capture the visual features of sketched images and overcome SIFT’s limitations on small datasets. Handcrafted-feature-based algorithms, however, are generally unsuitable for large-scale sets of images. Efficient sketched-image retrieval is achieved based on content and on separating the features of the black line drawings from the background into precisely defined variables, with each variable encoded as a distinct dimension in this disentangled representation. For the representation of sketched images, this paper presents a Sketch-Based Image Retrieval (SBIR) system that uses the information-maximizing GAN (InfoGAN) model. Such a retrieval system is built on features acquired by the unsupervised InfoGAN model to satisfy users’ expectations for large-scale datasets. The challenges for the matching and retrieval of such images grow as drawing clarity declines. Finally, the ORB-based matching system is introduced and compared to the SIFT-based system, and the InfoGAN-based system is compared with state-of-the-art solutions, including SIFT, ORB, and a Convolutional Neural Network (CNN).
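ORB extraction itself requires a full implementation such as OpenCV's, but the matching stage the abstract refers to reduces to nearest-neighbour search under Hamming distance over binary descriptors, which can be sketched in pure NumPy (a hypothetical illustration, not the paper's code):

```python
import numpy as np

def hamming_match(desc_q, desc_db):
    """Brute-force nearest-neighbour matching of binary descriptors
    (e.g. 256-bit ORB descriptors packed as uint8 rows) by Hamming distance."""
    # XOR every query row against every database row, then popcount.
    x = desc_q[:, None, :] ^ desc_db[None, :, :]      # (Q, D, n_bytes)
    dist = np.unpackbits(x, axis=2).sum(axis=2)       # Hamming distances
    nn = dist.argmin(axis=1)                          # index of best match
    return nn, dist[np.arange(len(desc_q)), nn]
```

Because the distance is a bitwise XOR plus a popcount, this matcher is far cheaper per comparison than the Euclidean distances used for SIFT's floating-point descriptors, which is one reason ORB suits real-time retrieval.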
APA, Harvard, Vancouver, ISO, and other styles
45

Pillay, Karan Ravindran, and Omkar Upendra Khadilkar. "The Scalable Image Retrieval Systems and Applications." International Journal of Engineering and Computer Science 7, no. 11 (2018): 24406–8. http://dx.doi.org/10.18535/ijecs/v7i11.03.

Full text
Abstract:
Advances in information storage and image acquisition technologies have enabled the creation of enormous image datasets. In this scenario, it is necessary to develop appropriate information systems to efficiently manage these collections. The most common approaches use the so-called Content-Based Image Retrieval (CBIR) systems. Basically, these systems try to retrieve images similar to a user-defined specification or pattern (e.g., a shape sketch or example image). Their goal is to support image retrieval based on content properties (e.g., shape, color, texture), usually encoded into feature vectors. One of the main advantages of the CBIR approach is the possibility of an automatic retrieval process, instead of the traditional keyword-based approach, which usually requires very laborious and time-consuming prior annotation of database images. CBIR technology has been used in several applications such as fingerprint identification, biodiversity information systems, digital libraries, crime prevention, medicine, and historical research, among others.
APA, Harvard, Vancouver, ISO, and other styles
46

Pavithra, Narasimha Murthy, and Kumar Yeliyur Hanumanthaiah Sharath. "Novel hybrid generative adversarial network for synthesizing image from sketch." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 6 (2023): 6293–301. https://doi.org/10.11591/ijece.v13i6.pp6293-6301.

Full text
Abstract:
In the area of the sketch-based image retrieval process, there is a notable difference between retrieving matching images from a defined dataset and constructing a synthesized image. The former is comparatively easy, while the latter requires faster, more accurate, and more intelligent decision making by the processor. After reviewing open-ended research problems in existing approaches, the proposed scheme introduces a computational framework of a hybrid generative adversarial network (GAN) as a solution to the identified research problem. The model takes a query image as input, which is processed by a generator module running three different deep learning models: ResNet, MobileNet, and U-Net. The discriminator module processes the real images as well as the output from the generator. With a novel interactive communication between the generator and the discriminator, the proposed model offers optimal retrieval performance along with the inclusion of an optimizer. The study outcome shows significant performance improvement.
APA, Harvard, Vancouver, ISO, and other styles
47

Zhang, Zhaolong, Yuejie Zhang, Rui Feng, Tao Zhang, and Weiguo Fan. "Zero-Shot Sketch-Based Image Retrieval via Graph Convolution Network." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (2020): 12943–50. http://dx.doi.org/10.1609/aaai.v34i07.6993.

Full text
Abstract:
Zero-Shot Sketch-based Image Retrieval (ZS-SBIR) has been proposed recently, putting the traditional Sketch-based Image Retrieval (SBIR) under the setting of zero-shot learning. Dealing with both the challenges in SBIR and zero-shot learning makes it a more difficult task. Previous works mainly focus on utilizing one kind of information, i.e., the visual information or the semantic information. In this paper, we propose a SketchGCN model utilizing the graph convolution network, which simultaneously considers both the visual information and the semantic information. Thus, our model can effectively narrow the domain gap and transfer the knowledge. Furthermore, we generate the semantic information from the visual information using a Conditional Variational Autoencoder rather than only mapping it back from the visual space to the semantic space, which enhances the generalization ability of our model. Besides, feature loss, classification loss, and semantic loss are introduced to optimize our proposed SketchGCN model. Our model achieves good performance on the challenging Sketchy and TU-Berlin datasets.
APA, Harvard, Vancouver, ISO, and other styles
48

Tursun, Osman, Simon Denman, Sridha Sridharan, Ethan Goan, and Clinton Fookes. "An efficient framework for zero-shot sketch-based image retrieval." Pattern Recognition 126 (June 2022): 108528. http://dx.doi.org/10.1016/j.patcog.2022.108528.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Eitz, M., K. Hildebrand, T. Boubekeur, and M. Alexa. "Sketch-Based Image Retrieval: Benchmark and Bag-of-Features Descriptors." IEEE Transactions on Visualization and Computer Graphics 17, no. 11 (2011): 1624–36. http://dx.doi.org/10.1109/tvcg.2010.266.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Zhan, Shu, Jingjing Zhao, Yucheng Tang, and Zhenzhu Xie. "Face image retrieval: super-resolution based on sketch-photo transformation." Soft Computing 22, no. 4 (2016): 1351–60. http://dx.doi.org/10.1007/s00500-016-2427-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles