
Journal articles on the topic 'Image collection'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Image collection.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Riahi Samani, Zahra, and Mohsen Ebrahimi Moghaddam. "Image Collection Summarization Method Based on Semantic Hierarchies." AI 1, no. 2 (2020): 209–28. http://dx.doi.org/10.3390/ai1020014.

Abstract:
The size of internet image collections is increasing drastically. As a result, new techniques are required to facilitate users in browsing, navigating, and summarizing these large-volume collections. Image collection summarization methods present users with a set of exemplar images as the most representative ones from the initial image collection. In this study, an image collection summarization technique based on semantic hierarchies among images was introduced. In the proposed approach, images were mapped to the nodes of a pre-defined domain ontology; a semantic hierarchical classifier was used, which finally mapped images to different nodes of the ontology. We made a compromise between the degree of freedom of the classifier and the goodness of the summarization method. The summarization was done using a group of high-level features that provided a semantic measurement of information in images. Experimental outcomes indicated that the introduced image collection summarization method outperformed recent techniques for the summarization of image collections.
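The exemplar-selection idea described in this abstract can be illustrated with a generic greedy k-center heuristic. This is a hypothetical stand-in, not the authors' ontology-based method, and the feature vectors below are invented for illustration:

```python
# Greedy k-center selection: pick exemplars so that every image in the
# collection is close to some chosen representative. A generic
# summarization baseline, not the ontology-based method of the paper.

def summarize(features, k):
    """Return the indices of k exemplar feature vectors."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    chosen = [0]                                  # seed with the first image
    while len(chosen) < k:
        # add the image farthest from all current exemplars
        far = max(range(len(features)),
                  key=lambda i: min(dist(features[i], features[c])
                                    for c in chosen))
        chosen.append(far)
    return chosen

feats = [(0, 0), (0, 1), (10, 10), (10, 11), (5, 5)]
print(summarize(feats, 3))                        # three spread-out exemplars
```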
2

Guo, Feng Ying. "The Hardware Design and Implement of an Image Information Collection System." Advanced Materials Research 804 (September 2013): 211–15. http://dx.doi.org/10.4028/www.scientific.net/amr.804.211.

Abstract:
It introduces the hardware design and implementation of the image information collection system, which is one part of the fluorescent magnetic particle non-destructive detection system for cannonballs. The image information collection system includes the following modules: open collection card, initialize collection card, apply for EMS memory, collect image, write bitmap information header, show pictures, and close collection card. By collecting image information into the EMS memory and then processing it, the magnetic images on the cannonball are shown clearly on the computer, and cracks on the cannonball can easily be judged. The software of the image information collection system is developed in Visual C++.
3

Kim, Namhoon, Eunha Chun, and Eunju Ko. "Country of origin effects on brand image, brand evaluation, and purchase intention." International Marketing Review 34, no. 2 (2017): 254–71. http://dx.doi.org/10.1108/imr-03-2015-0071.

Abstract:
Purpose The purpose of this paper is to analyze how national stereotype, country of origin (COO), and fashion brand images influence consumers' brand evaluations and purchase intentions regarding fashion collections. Korean (Seoul) and overseas (New York and Paris) collections are compared and analyzed. Design/methodology/approach The authors conduct structural equation modeling and multi-group analysis using data collected from Seoul, New York, and Paris. Findings Consumers make higher brand evaluations and ultimately have stronger purchase intentions toward fashion collections from countries that have stronger COO and fashion brand images. In the context of fashion collections, COO image is greatly influenced by a nation's political-economic and cultural-artistic images. In addition, comparing the domestic Seoul fashion collection with the New York and Paris collections reveals that national stereotype images, COO images of fashion collections, and fashion brand images lead to different brand evaluations and purchase intentions. Originality/value The overarching value of the study is that it expands COO research, which has been limited to actual products. The results also provide a basic foundation for establishing marketing strategy based on COO image as a way to enhance the development and image of fashion collections.
4

Blatter, Jeremy. "Wellcome Library Moving Image and Sound Collection, (http://wellcomelibrary.org/about-us/about-the-collections/moving-image-and-sound-collection/)." Medical History 57, no. 3 (2013): 459–60. http://dx.doi.org/10.1017/mdh.2013.29.

5

Groom, Quentin, Mathias Dillen, Wouter Addink, et al. "Envisaging a global infrastructure to exploit the potential of digitised collections." Biodiversity Data Journal 11 (November 30, 2023): e109439. https://doi.org/10.3897/BDJ.11.e109439.

Abstract:
Tens of millions of images from biological collections have become available online over the last two decades. In parallel, there has been a dramatic increase in the capabilities of image analysis technologies, especially those involving machine learning and computer vision. While image analysis has become mainstream in consumer applications, it is still used only on an artisanal basis in the biological collections community, largely because the image corpora are dispersed. Yet, there is massive untapped potential for novel applications and research if images of collection objects could be made accessible in a single corpus. In this paper, we make the case for infrastructure that could support image analysis of collection objects. We show that such infrastructure is entirely feasible and well worth investing in.
6

Caspers, Max. "Image Recognition to Enhance the Value of Collections." Biodiversity Information Science and Standards 2 (June 13, 2018): e26320. http://dx.doi.org/10.3897/biss.2.26320.

Abstract:
Techniques for image recognition through machine learning have advanced rapidly over recent years, and applications using this technique are becoming increasingly common. Applications using image recognition have enormous potential not only for research, education, conservation and capacity-building but certainly also for collections management. Perhaps by now an even bigger challenge than the technological one is supplying content in the form of large amounts of validated images. With an estimated 44 million objects, the collection of Naturalis Biodiversity Center has plenty of physical source material. During a five-year digitization programme (2010–2015) at Naturalis, 4.4 million herbarium sheets were imaged, and since the start of the "Butterflies in Bags" project, 50,000 papered butterflies (out of more than 500,000) have been digitized and photographed by volunteers in a standardized manner. Still, there are large parts of our collection that are not digitized at specimen level, let alone imaged, but hold great potential for collections work. This poster presents a workflow for efficient scanning of insect drawers and automated segmentation of those images to "feed" deep learning-based image recognition with images of individual insects. It will also demonstrate how this will aid in enhancing the value of our collections. With proper expert validation early on in the process, the software could mature and become more independent in such a way that ultimately, it could be used by non-specialist professionals to identify the majority of common species. The technique would pinpoint anomalies based on self-learned patterns, both in unidentified and in already identified specimens, and link those back to the taxonomic specialist. Not only does image recognition aid taxonomy, it may also hold potential for conservation and management by, for example, detecting damaged specimens or managing space utilization of drawers.
8

Martin, William. "Satellite image collection optimization." Optical Engineering 41, no. 9 (2002): 2083. http://dx.doi.org/10.1117/1.1495856.

9

Legland, David, and Marie-Françoise Devaux. "ImageM: a user-friendly interface for the processing of multi-dimensional images with Matlab." F1000Research 10 (April 30, 2021): 333. http://dx.doi.org/10.12688/f1000research.51732.1.

Abstract:
Modern imaging devices provide a wealth of data often organized as images with many dimensions, such as 2D/3D, time and channel. Matlab is an efficient software solution for image processing, but it lacks many features facilitating the interactive interpretation of image data, such as a user-friendly image visualization, or the management of image meta-data (e.g. spatial calibration), thus limiting its application to bio-image analysis. The ImageM application proposes an integrated user interface that facilitates the processing and the analysis of multi-dimensional images within the Matlab environment. It provides a user-friendly visualization of multi-dimensional images, a collection of image processing algorithms and methods for analysis of images, the management of spatial calibration, and facilities for the analysis of multi-variate images. ImageM can also be run on the open source alternative software to Matlab, Octave. ImageM is freely distributed on GitHub: https://github.com/mattools/ImageM.
10

Turner, J. N., D. H. Szarowski, B. Roysam, and T. J. Holmes. "Light Microscopy Image Collection: Confocal, Widefield and Deconvolution." Microscopy and Microanalysis 3, S2 (1997): 371–72. http://dx.doi.org/10.1017/s1431927600008746.

Abstract:
Light microscopy instrumentation and biological applications continue to expand rapidly, and an important aspect of this expansion is three-dimensional (3-D) imaging and image analysis. For many biological specimens, optical sectioning, i.e. collecting images at a sequence of depths under controlled conditions, provides true 3-D data through the entire specimen. This is especially important for specimens whose thickness exceeds the depth-of-field of the microscope objective lens. This sequence of optical sections is the basis for 3-D image reconstruction, providing information from all three specimen dimensions instead of just the traditional two in the image plane of the microscope. The analysis of images in 3-D provides insights into the structure and function of biological specimens that are not available through other means. However, 3-D microscopy also presents additional choices for image collection: the first is whether to use widefield or confocal microscopy, and the second is whether to utilize digital deblurring or deconvolution methods.
11

Qu, Jingye, and Jiangping Chen. "An investigation of benchmark image collections: how different from digital libraries?" Electronic Library 37, no. 3 (2019): 401–18. http://dx.doi.org/10.1108/el-10-2018-0195.

Abstract:
Purpose This paper aims to introduce the construction methods, image organization, collection use and access of benchmark image collections to the digital library (DL) community. It aims to connect two distinct communities, DL researchers and image processing researchers, so that future image collections can be better constructed, organized and managed for both human and computer use. Design/methodology/approach Image collections are first identified through an extensive literature review of published journal articles and a web search. Then, a coding scheme focusing on image collections' creation, organization, access and use is developed. Next, three major benchmark image collections are analysed based on the proposed coding scheme. Finally, the characteristics of benchmark image collections are summarized and compared to DLs. Findings Although most of the image collections in DLs are carefully curated and organized using various metadata schemas based on an image's external features to facilitate human use, the benchmark image collections created for promoting image processing algorithms are annotated on an image's content down to the pixel level, which makes each image collection a more fine-grained, organized database appropriate for developing automatic techniques for classification, summarization, visualization and content-based retrieval. Research limitations/implications This paper overviews image collections by their application fields. The three most representative natural image collections in general areas are analysed in detail based on a homemade coding scheme, which could be further extended. Also, domain-specific image collections, such as medical image collections or collections for scientific purposes, are not covered. Practical implications This paper helps DLs with image collections understand how the benchmark image collections used by current image processing research are created, organized and managed. It informs multiple parties pertinent to image collections to collaborate on building, sustaining, enriching and providing access to image collections. Originality/value This paper is the first attempt to review and summarize benchmark image collections for DL managers and developers. The collection creation process and image organization used in these benchmark image collections open a new perspective to digital librarians for their future DL collection development.
12

Saba, Khalaf. "Pseudo-Subdural Collection on Ultrasound: A Mirror Image Artefact." Journal of Clinical Research and Reports 11, no. 4 (2022): 01–03. http://dx.doi.org/10.31579/2690-1919/270.

13

Pangallo, Matteo. "Shakespeare Quartos Archive. Image Collection." Renaissance and Reformation 42, no. 3 (2019): 170–73. http://dx.doi.org/10.7202/1066364ar.

14

Gu, Yi, Chaoli Wang, Jun Ma, Robert J. Nemiroff, David L. Kao, and Denis Parra. "Visualization and recommendation of large image collections toward effective sensemaking." Information Visualization 16, no. 1 (2016): 21–47. http://dx.doi.org/10.1177/1473871616630778.

Abstract:
In our daily lives, images are among the most commonly found data which we need to handle. We present iGraph, a graph-based approach for visual analytics of large image collections and their associated text information. Given such a collection, we compute the similarity between images, the distance between texts, and the connection between image and text to construct iGraph, a compound graph representation which encodes the underlying relationships among these images and texts. To enable effective visual navigation and comprehension of iGraph with tens of thousands of nodes and hundreds of millions of edges, we present a progressive solution that offers collection overview, node comparison, and visual recommendation. Our solution not only allows users to explore the entire collection with representative images and keywords but also supports detailed comparison for understanding and intuitive guidance for navigation. The visual exploration of iGraph is further enhanced with the implementation of bubble sets to highlight group memberships of nodes, suggestion of abnormal keywords or time periods based on text outlier detection, and comparison of four different recommendation solutions. For performance speedup, multiple graphics processing units and central processing units are utilized for processing and visualization in parallel. We experiment with two image collections and leverage a cluster driving a display wall of nearly 50 million pixels. We show the effectiveness of our approach by demonstrating experimental results and conducting a user study.
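The graph-construction step described above can be sketched as follows. This is a toy version assuming hand-made feature vectors and an arbitrary similarity threshold, not the paper's actual features or distance measures:

```python
# Minimal sketch of building an iGraph-style similarity graph:
# connect two images when their feature vectors are similar enough.
# Feature vectors and the 0.9 threshold are illustrative assumptions.
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def build_edges(features, threshold=0.9):
    """Return edges (i, j) for all sufficiently similar image pairs."""
    n = len(features)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if cosine(features[i], features[j]) >= threshold]

feats = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0)]
print(build_edges(feats))          # only the two near-identical images connect
```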
15

B.D.C.N, Prasad, M. Sailaja, and V. Suryanarayana. "Analysis on Content Based Image Retrieval Using Image Enhancement and Deep Learning Convolutional Neural Networks." ECS Transactions 107, no. 1 (2022): 19777–89. http://dx.doi.org/10.1149/10701.19777ecst.

Abstract:
"Content-based" means that the search analyzes the contents of an image rather than metadata such as keywords, tags, or image descriptions. The term "contents" in this sense can refer to colours, shapes, textures, or any other information that can be derived from the picture itself. CBIR is desirable because searches relying purely on metadata depend on the quality and completeness of annotations. The CBIR method is therefore commonly used for retrieving images from huge, unstructured image databases. Users are no longer satisfied with traditional information retrieval methods; moreover, ever more images are available to users with the advent of the web and transmission networks. Consequently, there is a continuous and significant output of digital images in many areas. Rapid access to these enormous picture collections, and retrieval of similar images from them, therefore presents major challenges and demands efficient techniques. The efficiency of a content-based image retrieval system depends on the feature representation and the similarity calculation. We therefore present a simple but powerful deep CNN-based image retrieval system built on feature extraction and classification. Promising results have been obtained from a range of empirical studies on a variety of CBIR tasks over the image database. Content-based image retrieval (CBIR) systems allow you to find images similar to a query image within a picture dataset. The best-known CBIR system is Google's search-by-image feature.
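The retrieval principle can be sketched with tiny hand-made colour histograms standing in for real extracted features (the paper uses CNN features instead; the data below is fabricated for illustration):

```python
# Minimal content-based retrieval sketch: describe each "image" by a
# colour histogram and return the database image closest to the query.
# Histograms are hand-made stand-ins for real extracted features.

def l1(a, b):
    """L1 (Manhattan) distance between two histograms."""
    return sum(abs(x - y) for x, y in zip(a, b))

def retrieve(query_hist, database):
    """Return the index of the most similar database histogram."""
    return min(range(len(database)), key=lambda i: l1(query_hist, database[i]))

db = [[0.8, 0.1, 0.1],   # mostly red
      [0.1, 0.8, 0.1],   # mostly green
      [0.1, 0.1, 0.8]]   # mostly blue
print(retrieve([0.7, 0.2, 0.1], db))   # a reddish query matches image 0
```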
16

Saleh, Emad Isa. "Image embedded metadata in cultural heritage digital collections on the web." Library Hi Tech 36, no. 2 (2018): 339–57. http://dx.doi.org/10.1108/lht-03-2017-0053.

Abstract:
Purpose The purpose of this paper is to investigate the availability of embedded metadata within images of digital cultural collections. It is designed to examine a proposed hypothesis that most digitally derived images of cultural resources are stripped of their metadata once they are placed on the web. Design/methodology/approach A sample of 603 images was selected randomly from four cultural portals which aggregate digitized cultural collections; then four steps in the data collection process took place to examine image metadata via a web-based tool and a Windows application. Findings The study revealed that 28.5 percent of the analyzed images contained metadata, that no links exist between image-embedded metadata and its metadata record or the pages of the websites analyzed, and that there is significant usage of the Extensible Metadata Platform to encode embedded metadata within the images. Practical implications The findings of the study may encourage heritage digital collection providers to reconsider their metadata preservation practices and policies to enrich the content of embedded metadata. In addition, it will raise awareness about the potential and value of embedded metadata in enhancing the findability and exchange of digital collections. Originality/value This study is groundbreaking in that it is one of the early studies, especially in the Arab world, which aim to recognize the use of image-embedded metadata within cultural heritage digital collections on the web.
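A yes/no check of the kind the study performs, asking whether an image file still carries an embedded XMP packet, can be sketched with a plain byte search, since XMP is serialized as an XML packet inside the file. The sample byte strings below are fabricated for illustration:

```python
# Sketch of detecting an embedded XMP metadata packet in an image file.
# XMP is stored as an XML packet inside the file, so for a simple
# presence check a byte search for the packet's root element suffices.

def has_xmp(data: bytes) -> bool:
    """True if the file bytes contain an XMP packet root element."""
    return b"<x:xmpmeta" in data

stripped = b"\xff\xd8\xff\xe0JFIF..."   # fabricated JPEG bytes, no XMP
tagged = b"\xff\xd8...<x:xmpmeta xmlns:x='adobe:ns:meta/'>...</x:xmpmeta>"
print(has_xmp(stripped), has_xmp(tagged))
```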
17

Chu, Boce, Feng Gao, Yingte Chai, et al. "Large-Area Full-Coverage Remote Sensing Image Collection Filtering Algorithm for Individual Demands." Sustainability 13, no. 23 (2021): 13475. http://dx.doi.org/10.3390/su132313475.

Abstract:
Remote sensing is the main technical means for urban researchers and planners to effectively observe targeted urban areas. Generally, it is difficult for a single image to cover a whole urban area, and one image cannot support the demands of urban planning tasks for spatial statistical analysis of a whole city. Therefore, people often manually find multiple images with complementary regions in an urban area on the premise of meeting the basic requirements for resolution, cloudiness, and timeliness. However, with the rapid increase of remote sensing satellites and data in recent years, time-consuming, low-performance manual filtering has become more and more unacceptable. Therefore, efficiently and automatically selecting an optimal image collection from massive image data to meet individual demands of whole-urban observation has become an urgent problem. To solve this problem, this paper proposes a large-area full-coverage remote sensing image collection filtering algorithm for individual demands (LFCF-ID). This algorithm achieves a new image filtering mode and solves the difficult problem of selecting a full-coverage remote sensing image collection from a vast amount of data. Additionally, this is the first study to achieve full-coverage image filtering that considers user preferences concerning spatial resolution, timeliness, and cloud percentage. The algorithm first quantitatively models demand indicators, such as cloudiness, timeliness, resolution, and coverage, and then coarsely filters the image collection according to the ranking of model scores to meet the different needs of different users for images. Then, relying on map gridding, the image collection is optimized for individual users with a genetic algorithm (GA), which can quickly remove redundant images from the image collection to produce the final filtering result according to the fitness score. The proposed method is compared with manual filtering and greedy retrieval to verify its computing speed and filtering effect. The experiments show that the proposed method has great speed advantages over traditional methods and exceeds the results of manual filtering in terms of filtering effect.
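The coverage requirement at the heart of such filtering can be sketched with a greedy set-cover pass over map grid cells. This is a simplified stand-in for the paper's GA-based refinement, and the image footprints below are invented:

```python
# Greedy stand-in for full-coverage image selection: keep picking the
# image whose footprint covers the most still-uncovered grid cells of
# the target area. The paper refines such a set with a genetic
# algorithm; footprints here are invented for illustration.

def cover(target_cells, footprints):
    """Return indices of images whose footprints jointly cover target_cells."""
    uncovered, picked = set(target_cells), []
    while uncovered:
        best = max(range(len(footprints)),
                   key=lambda i: len(uncovered & footprints[i]))
        if not uncovered & footprints[best]:
            break                     # remaining cells cannot be covered
        picked.append(best)
        uncovered -= footprints[best]
    return picked

cells = {(0, 0), (0, 1), (1, 0), (1, 1)}          # 2x2 target grid
imgs = [{(0, 0), (0, 1)}, {(1, 0)}, {(1, 0), (1, 1)}, {(0, 0)}]
print(cover(cells, imgs))             # two images suffice for full coverage
```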
18

Kesiman, Made Windu Antara, and Gede Aditra Pradnyana. "A Scheme Towards Automatic Word Indexation System for Balinese Palm Leaf Manuscripts." Journal of ICT Research and Applications 15, no. 2 (2021): 105–19. http://dx.doi.org/10.5614/itbj.ict.res.appl.2021.15.2.1.

Abstract:
This paper proposes an initial scheme towards the development of an automatic word indexation system for Balinese lontar (palm leaf manuscript) collections. The word indexation system scheme consists of a sub module for patch image extraction of text areas in lontars and a sub module for word image transliteration. This is the first word indexation system for lontar collections to be proposed. To detect parts of a lontar image that contain text, a Gabor filter is used to provide initial information about the presence of text texture in the image. An adaptive sliding patch algorithm for the extraction of patch images in lontars is also proposed. The word image transliteration sub module was built using the long short-term memory (LSTM) model. The results showed that the image patch extraction of text areas process succeeded in optimally detecting text areas in lontars and extracting the patch image in a suitable position. The proposed scheme successfully extracted between 20% to 40% of the keywords in lontars and thus can at least provide an initial description for prospective lontar readers of the content contained in a lontar collection or to find in which lontar collection certain keywords can be found.
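The Gabor filter used for the initial text-texture detection is a sinusoid modulated by a Gaussian envelope. A minimal sketch follows, with illustrative parameter values rather than those of the paper:

```python
# Sketch of a Gabor kernel of the kind used to flag text-like texture:
# a cosine carrier modulated by a Gaussian envelope. Parameter values
# (sigma, theta, wavelength) are illustrative, not the paper's.
import math

def gabor(x, y, sigma=2.0, theta=0.0, lam=4.0):
    """Real part of a Gabor filter evaluated at offset (x, y)."""
    # rotate coordinates by the filter orientation theta
    xr = x * math.cos(theta) + y * math.sin(theta)
    yr = -x * math.sin(theta) + y * math.cos(theta)
    envelope = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
    carrier = math.cos(2 * math.pi * xr / lam)
    return envelope * carrier

kernel = [[round(gabor(x, y), 3) for x in range(-2, 3)] for y in range(-2, 3)]
print(kernel[2])   # centre row of a 5x5 kernel
```

Convolving a page image with a bank of such kernels at several orientations gives high responses where stroke-like texture (text) is present, which is the kind of initial cue the proposed scheme relies on.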
19

Schwendy, Mischa, Ronald E. Unger, and Sapun H. Parekh. "EVICAN—a balanced dataset for algorithm development in cell and nucleus segmentation." Bioinformatics 36, no. 12 (2020): 3863–70. http://dx.doi.org/10.1093/bioinformatics/btaa225.

Abstract:
Motivation The use of deep learning for quantitative image analysis is increasing exponentially. However, training accurate, widely deployable deep learning algorithms requires a plethora of annotated (ground truth) data. Image collections must contain not only thousands of images to provide sufficient example objects (i.e. cells), but also an adequate degree of image heterogeneity. Results We present a new dataset, EVICAN (Expert Visual Cell Annotation), comprising partially annotated grayscale images of 30 different cell lines from multiple microscopes, contrast mechanisms and magnifications that is readily usable as training data for computer vision applications. With 4600 images and ∼26 000 segmented cells, our collection offers an unparalleled heterogeneous training dataset for cell biology deep learning application development. Availability and implementation The dataset is freely available (https://edmond.mpdl.mpg.de/imeji/collection/l45s16atmi6Aa4sI?q=). Using a Mask R-CNN implementation, we demonstrate automated segmentation of cells and nuclei from brightfield images with a mean average precision of 61.6% at a Jaccard index above 0.5.
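The Jaccard index threshold mentioned above is intersection-over-union between a predicted mask and the ground-truth mask; over pixel coordinate sets it reduces to a few lines (the pixel sets below are fabricated for illustration):

```python
# Jaccard index (intersection over union) between two segmentation
# masks represented as sets of pixel coordinates.

def jaccard(pred, truth):
    """IoU of two pixel sets; 1.0 when both are empty."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0
    return len(pred & truth) / len(pred | truth)

predicted = {(0, 0), (0, 1), (1, 1)}
actual = {(0, 1), (1, 1), (1, 0)}
print(jaccard(predicted, actual))   # 2 shared pixels / 4 total = 0.5
```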
20

Kalukin, Andrew, Satoshi Endo, Russell Crook, et al. "Image Collection Simulation Using High-Resolution Atmospheric Modeling." Remote Sensing 12, no. 19 (2020): 3214. http://dx.doi.org/10.3390/rs12193214.

Abstract:
A new method is described for simulating the passive remote sensing image collection of ground targets that includes effects from atmospheric physics and dynamics at fine spatial and temporal scales. The innovation in this research is the process of combining a high-resolution weather model with image collection simulation to attempt to account for heterogeneous and high-resolution atmospheric effects on image products. The atmosphere was modeled on a 3D voxel grid by a Large-Eddy Simulation (LES) driven by forcing data constrained by local ground-based and air-based observations. The spatial scale of the atmospheric model (10–100 m) came closer than conventional weather forecast scales (10–100 km) to approaching the scale of typical commercial multispectral imagery (2 m). This approach was demonstrated through a ground truth experiment conducted at the Department of Energy Atmospheric Radiation Measurement Southern Great Plains site. In this experiment, calibrated targets (colored spectral tarps) were placed on the ground, and the scene was imaged with WorldView-3 multispectral imagery at a resolution enabling the tarps to be visible in at least 9–12 image pixels. The image collection was simulated with Digital Imaging and Remote Sensing Image Generation (DIRSIG) software, using the 3D atmosphere from the LES model to generate a high-resolution cloud mask. The high-resolution atmospheric model-predicted cloud coverage was usually within 23% of the measured cloud cover. The simulated image products were comparable to the WorldView-3 satellite imagery in terms of the variations of cloud distributions and spectral properties of the ground targets in clear-sky regions, suggesting the potential utility of the proposed modeling framework in improving simulation capabilities, as well as testing and improving the operation of image collection processes.
21

Therrell, Grace. "More product, more process: metadata in digital image collections." Digital Library Perspectives 35, no. 1 (2019): 2–14. http://dx.doi.org/10.1108/dlp-06-2018-0018.

Abstract:
Purpose The purpose of this paper is to discuss the implications of current theories that advocate for minimal levels of description in digital collections. Specifically, this paper looks at the archival theory of “More Product, Less Process” and its encouragement of collection-level description. The purpose of the study was to analyze how levels of description impact resource retrieval. Design/methodology/approach This study analyzed 35 images from a New York Public Library (NYPL) digital collection present on the NYPL website and on Flickr. The methodology was designed to reflect users’ information seeking behavior for image collections. There were two research questions guiding this study: what are the descriptive terms used to describe items in digital collections? and what is the success rate of retrieving resources using assigned descriptive terms? Findings The results of this study revealed that the images from the NYPL collection were more difficult to find on the institution’s website as compared with Flickr. These findings suggest that lesser levels of description in digital collections hinder resource retrieval. Research limitations/implications These findings suggest that lesser description levels hurt the findability of resources. In the wake of theories such as “More Product, Less Process”, information professionals must find ways to assign metadata to individual materials in digital image collections. Originality/value Recent research concerning description levels of digital collections is several years old and focuses mostly on the usefulness of collection-level metadata as a supplement to or substitute for item-level metadata. Few, if any, studies exist that explore the implications of description levels on resource retrievability and findability. This study is also unique in that it discusses these implications in the context of less-is-more theories of archival processing.
APA, Harvard, Vancouver, ISO, and other styles
22

Yangwen, Zheng. "Chinese Collection 457: the Call for Global History." Bulletin of the John Rylands Library 91, no. 1 (2015): 35–44. http://dx.doi.org/10.7227/bjrl.91.1.3.

Full text
Abstract:
With the help of the Jesuits, the Qianlong emperor (often said to be China's Sun King in the long eighteenth century) built European palaces in the Garden of Perfect Brightness and commissioned a set of twenty images engraved on copper in Paris. The Second Anglo-Chinese Opium War in 1860 saw the destruction not only of the Garden but also of the images, of which there are only a few left in the world. The John Rylands set contains a coloured image which raises even more questions about the construction of the palaces and the after-life of the images. How did it travel from Paris to Beijing, and from Belgium to the John Rylands Library? This article probes the fascinating history of this image. It highlights the importance of Europeans in the making of Chinese history and calls for studies of China in Europe.
APA, Harvard, Vancouver, ISO, and other styles
23

Oktaviantina, Adek Dwi. "CITRAAN DALAM KUMPULAN PUISI ABDUL SALAM HS “MALAIKAT WARINGIN”." BEBASAN Jurnal Ilmiah Kebahasaan dan Kesastraan 6, no. 2 (2020): 137. http://dx.doi.org/10.26499/bebasan.v6i2.118.

Full text
Abstract:
Poetry is composed of beautifully arranged words that can be understood by readers. Poetry is therefore inseparable from the precise use of language, arranged aesthetically and creatively so that its meaning can be conveyed properly. Interpretive description of images is used as the technique of this research. Imagery, as a way of grasping meaning, is identified through the human senses. The method used is the interpretative qualitative research method. The formulation of the problem is how imagery appears in the poetry collection of Abdul Salam HS entitled "Malaikat Waringin". The purpose of this research is to describe the imagery in Abdul Salam HS's poetry collection. The analysis of the fifty poems contained in the poetry collection entitled "Malaikat Waringin" showed that there are 17 instances of visual imagery, 9 of auditory imagery, 4 of smell and taste imagery, and 6 of visualization imagery.
APA, Harvard, Vancouver, ISO, and other styles
24

Hindarto, Djarot, and Endah Tri Esti Handayani. "Revolution in Image Data Collection: CycleGAN as a Dataset Generator." Sinkron 9, no. 1 (2024): 444–54. http://dx.doi.org/10.33395/sinkron.v9i1.13211.

Full text
Abstract:
Computer vision, deep learning, and pattern recognition are just a few of the fields where image data collection has become crucial. The Cycle Generative Adversarial Network (CycleGAN) has become one of the most effective instruments in the recent revolution in image data collection. This research aims to understand the impact of CycleGAN on the collection of image datasets. CycleGAN, a variant of the Generative Adversarial Network model, has enabled unprecedented generation of image datasets. By employing adversarial learning between the generator and discriminator, CycleGAN can transform images from one domain to another without manual annotation. This makes it possible to generate image datasets quickly and efficiently for various purposes, from object recognition to data augmentation. One of the most fascinating features of CycleGAN is its capacity to alter an image's style and characteristics. Using CycleGAN to generate unique and diverse datasets helps deep learning models overcome differences in visual style. This is a significant development in understanding how machine learning models can comprehend visual art concepts. CycleGAN's use as a dataset generator has altered the landscape of image data collection. With its proficiency in generating diverse and unique datasets, CycleGAN has opened new doors in technological innovation and data science. This research investigates in greater detail how CycleGAN revolutionized the collection of image datasets and inspired previously unconceived applications.
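The mechanism that lets CycleGAN translate between domains without paired annotations is its cycle-consistency objective. Below is a minimal, illustrative sketch of that loss term; the toy generators G and F (a brightness shift and its inverse) are hypothetical stand-ins for the learned convolutional networks, not part of the cited work.

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F, lam=10.0):
    """L1 cycle-consistency term used alongside the adversarial losses:
    an image translated to the other domain and back should match itself."""
    forward = np.abs(F(G(x)) - x).mean()   # x -> G(x) -> F(G(x)) ~ x
    backward = np.abs(G(F(y)) - y).mean()  # y -> F(y) -> G(F(y)) ~ y
    return lam * (forward + backward)

# Toy "generators": a brightness shift and its inverse, standing in for
# the learned mappings between the two image domains.
G = lambda img: img + 0.1
F = lambda img: img - 0.1

x = np.random.rand(4, 32, 32, 3)  # batch from domain X
y = np.random.rand(4, 32, 32, 3)  # batch from domain Y
loss = cycle_consistency_loss(x, y, G, F)  # near zero: perfect inverses
```

In training, this term is minimized jointly with the two adversarial losses, which is what allows dataset generation without manually annotated image pairs.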
APA, Harvard, Vancouver, ISO, and other styles
25

Wang, Jiangning, Congtian Lin, Cuiping Bu, TianYu Xi, Zhaojun Wang, and Liqiang Ji. "The Practice of Deep Learning Methods in Biodiversity Information Collection." Biodiversity Information Science and Standards 3 (July 2, 2019): e37534. https://doi.org/10.3897/biss.3.37534.

Full text
Abstract:
Deep learning is a machine learning method based on the layers used in artificial neural networks. The breakthrough of deep learning in classification led to its rapid application in speech recognition, natural language understanding, and image processing and recognition. In the field of biodiversity informatics, deep learning efforts are being applied in rapid species identification and counts of individuals identified from image, audio, video, and other data types. However, deep learning methods hold great potential for application in all aspects of biodiversity informatics. We present a case study illustrating how to increase data collection quality and efficiency using well-established technology such as optical character recognition (OCR) and some image classification. Our goal is to extract image data from the scanned documents of various butterfly atlases; add species, specimen, collection, photograph, and other relevant information; and build a database of butterfly images. Information collection involves image annotation and text-based description input. Although the work of image annotation is simple, the process can be accelerated by deep learning-based target segmentation to make the selection step easier, such as changing a box selection to a double click. The process of information collection is complicated, involving input of species names, specimen collection, specimen description, and other information. Generally, there are many images in atlases, the text layout is rather messy, and overall OCR success is poor. Therefore, the measures we take are as follows: Step A: select a screenshot of the text and then call the OCR interface to generate the text material; Step B: proceed with natural language processing (NLP) related processing; Step C: perform manual operations on the results, and introduce the NLP function again in this process; Step D: submit the result.
The deep learning applications we integrated into our client tool include: target segmentation of the annotated image for automatic positioning, background removal, etc., to improve the quality of the image used for identification; making a preliminary judgment on various attributes of the labeled image and using the results to assist the automatic filling of relevant information in Step B, including species information, specimen attributes (specimen image, nature photo, hand drawing, etc.), and insect stage (egg, adult, etc.); and OCR in Step A. Some simple machine learning methods such as k-nearest neighbors can be used to automatically determine gender, pose, and so on, while complex information such as collection place, time, and collector can be analyzed by deep learning-based NLP methods in the future. In our information collection process, ten fields are required to submit a single record; of those, about 4-5 input fields can be handled by the AI assistant. It can thus be seen from the above process that deep learning has reduced the workload of manual information annotation by at least 30%. With improvements in accuracy, the era of using automatic information extraction robots to replace manual information annotation and collection is just around the corner.
APA, Harvard, Vancouver, ISO, and other styles
26

Vizoso, M. Teresa, and Carmen Quesada. "Catalogue of type specimens of fungi and lichens deposited in the Herbarium of the University of Granada (Spain)." Biodiversity Data Journal 3 (July 13, 2015): e5204. https://doi.org/10.3897/BDJ.3.e5204.

Full text
Abstract:
A catalogue of types from the Herbarium of the University of Granada has not previously been compiled. As a result, a search of these collections in order to compile digital images for preservation and publication yielded a large number of formerly unrecognized types. This dataset contains the specimen records from the catalogue of the nomenclatural types of fungi and lichens in the Herbarium of the University of Granada, Spain. These herbarium specimens are included in the GDA and GDAC collections, acronyms from Index Herbariorum (Thiers 2014). At this time, the type collection of fungi and lichens contains 88 type specimens of 49 nominal taxa, most from Agaricales and the genus Cortinarius, described from the western Mediterranean, mainly Spain, by the following authors: V.Antonin, J.Ballarà, A.Bidaud, G.F.Bills, M.Bon, C.Cano, M.Casares, G.Chevassut, M.Contu, F.Esteve-Raventós, R.Galán, L.Guzmán-Dávalos, R.Henry, E.Horak, R.Mahiques, G.Malençon, P.Moënne-Loccoz, G.Moreno, A.Ortega, F.Palazón, V.N.Suárez-Santiago, A.Vêzda, J.Vila, and M.Villareal. For each specimen, the locality, species name, observation date, collector, type status, related information, associated sequences, other catalogue numbers related to each type, and image URL are recorded. The dataset is associated with an image collection named "Colección de imágenes de los tipos nomenclaturales de hongos, líquenes, musgos y algas incluidos en el Herbario de la Universidad de Granada (GDA y GDAC)" (Vizoso and Quesada 2013), which is housed and accessible at the Global Biodiversity Information Facility in Spain (GBIF.ES) hosting and publishing service "Biodiversity Image Portal of Spanish Collections" and is also available on the institutional website of the Herbarium of the University of Granada (Vizoso 2014a, Vizoso 2014b).
That image collection contains 113 images, of which 56 correspond to the nomenclatural types of 49 taxa (47 fungi, 2 lichens); the rest of the images in the collection correspond to documents, specimens, or microscopy photographs included with the herbarium specimens of fungi, which complement and document the typification process.
APA, Harvard, Vancouver, ISO, and other styles
27

Martell, Charles. "Editorial: Collection Development: The JANUS Image." College & Research Libraries 46, no. 2 (1985): 109–10. http://dx.doi.org/10.5860/crl_46_02_109.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Dauter, Z. "Image-plate data collection in Hamburg." Acta Crystallographica Section A Foundations of Crystallography 49, s1 (1993): c12. http://dx.doi.org/10.1107/s010876737809964x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Toet, Alexander. "The TNO Multiband Image Data Collection." Data in Brief 15 (December 2017): 249–51. http://dx.doi.org/10.1016/j.dib.2017.09.038.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Kramer, Elsa F. "IUPUI image collection: a usability survey." OCLC Systems & Services: International digital library perspectives 21, no. 4 (2005): 346–59. http://dx.doi.org/10.1108/10650750510631712.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Rorvig, M. E., C. H. Turner, and J. Moncada. "The NASA Image Collection Visual Thesaurus." Journal of the American Society for Information Science 50, no. 9 (1999): 794–98. http://dx.doi.org/10.1002/(sici)1097-4571(1999)50:9<794::aid-asi8>3.0.co;2-y.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Tremblay, Serge. "Canadian Forces Image Collection Digitization Plan." Archiving Conference 5, no. 1 (2008): 114–19. http://dx.doi.org/10.2352/issn.2168-3204.2008.5.1.art00022.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Erwis, Else Liliani, Anwar Efendi, and Hartono. "Mother's Image in Joko Pinurbo's Poetry Collection Surat Kopi." Formosa Journal of Multidisciplinary Research 2, no. 12 (2023): 1871–92. http://dx.doi.org/10.55927/fjmr.v2i12.7400.

Full text
Abstract:
This study aims to discuss the image of mothers in the poetry collection Surat Kopi by Joko Pinurbo. The research method used is descriptive qualitative with a hermeneutic approach. The data of this research take the form of poetry quotations containing words or phrases that convey images of mothers, whether physical, psychological, or social. The data are obtained from the poems contained in the poetry collection Surat Kopi by Joko Pinurbo. Eighteen poems were used as objects, selected for their suitability to the issues raised in the study. The results of this study describe the image of a mother as a woman who has physical, psychological, and social aspects. The mother depicted is a figure who is strong in raising her children and who loves and cares about her children. The poet predominantly displays the psychological and social aspects of a mother in the poems.
APA, Harvard, Vancouver, ISO, and other styles
34

Decker, Peter, Axel Christian, and Willi E. R. Xylander. "VIRMISCO – The Virtual Microscope Slide Collection." ZooKeys 741 (March 7, 2018): 271–82. http://dx.doi.org/10.3897/zookeys.741.22284.

Full text
Abstract:
Digitisation allows scientists rapid access to research objects. For transparent to semi-transparent three-dimensional microscopic objects, such as microinvertebrates or small body parts of organisms, available databases are scarce. Most mounting media used for permanent microscope slides deteriorate after some years or decades, eventually leading to total damage and loss of the object. Restoration, however, is labour-intensive, and often the composition of the mounting medium is not known. Digital preservation of important material, especially types, is an urgent need. The Virtual Microscope Slide Collection (VIRMISCO) project has developed recommendations for taking microscopic image stacks of three-dimensional objects and for depositing and presenting such series of digital image files, or z-stacks, on an online platform. The core of VIRMISCO is an online viewer that enables the user to focus virtually through an object online as if using a real microscope. Additionally, VIRMISCO offers features such as searching, rotating, zooming, measuring, changing brightness or contrast, taking snapshots, leaving feedback, and downloading complete z-stacks as JPEG files or as a video file. The open-source system can be installed by any institution and linked to a common database, or images can be sent to the Senckenberg Museum of Natural History Görlitz. The benefits of VIRMISCO are the preservation of important or fragile material, the avoidance of loans, and service as a digital archive for image files; allowing identification by experts at a distance, providing reference libraries for taxonomic research or education, supplying image series as online supplementary material for publications, and providing digital vouchers of specimens used in molecular investigations are further relevant applications.
APA, Harvard, Vancouver, ISO, and other styles
35

Decker, Peter, Axel Christian, and Willi E.R. Xylander. "VIRMISCO – The Virtual Microscope Slide Collection." ZooKeys 741 (March 7, 2018): 271–82. https://doi.org/10.3897/zookeys.741.22284.

Full text
Abstract:
Digitisation allows scientists rapid access to research objects. For transparent to semi-transparent three-dimensional microscopic objects, such as microinvertebrates or small body parts of organisms, available databases are scarce. Most mounting media used for permanent microscope slides deteriorate after some years or decades, eventually leading to total damage and loss of the object. Restoration, however, is labour-intensive, and often the composition of the mounting medium is not known. Digital preservation of important material, especially types, is an urgent need. The Virtual Microscope Slide Collection (VIRMISCO) project has developed recommendations for taking microscopic image stacks of three-dimensional objects and for depositing and presenting such series of digital image files, or z-stacks, on an online platform. The core of VIRMISCO is an online viewer that enables the user to focus virtually through an object online as if using a real microscope. Additionally, VIRMISCO offers features such as searching, rotating, zooming, measuring, changing brightness or contrast, taking snapshots, leaving feedback, and downloading complete z-stacks as JPEG files or as a video file. The open-source system can be installed by any institution and linked to a common database, or images can be sent to the Senckenberg Museum of Natural History Görlitz. The benefits of VIRMISCO are the preservation of important or fragile material, the avoidance of loans, and service as a digital archive for image files; allowing identification by experts at a distance, providing reference libraries for taxonomic research or education, supplying image series as online supplementary material for publications, and providing digital vouchers of specimens used in molecular investigations are further relevant applications.
APA, Harvard, Vancouver, ISO, and other styles
36

Moskolaï, Waytehad Rose, Wahabou Abdou, Albert Dipanda, and Kolyang Kolyang. "A Workflow for Collecting and Preprocessing Sentinel-1 Images for Time Series Prediction Suitable for Deep Learning Algorithms." Geomatics 2, no. 4 (2022): 435–56. http://dx.doi.org/10.3390/geomatics2040024.

Full text
Abstract:
Satellite image time series are used for several applications, such as predictive analysis. New techniques such as deep learning (DL) algorithms generally require long sequences of data to perform well; however, the complexity of satellite image preprocessing tasks leads to a lack of preprocessed datasets. Moreover, conventional collection and preprocessing methods are time- and storage-consuming. In this paper, a workflow for collecting, preprocessing, and preparing Sentinel-1 images for use with DL algorithms is proposed. The process mainly consists of using scripts for the collection and preprocessing operations. The goal of this work is not only to provide the community with easily modifiable programs for image collection and batch preprocessing but also to publish a database of prepared images. The experimental results allowed the researchers to build three time series of Sentinel-1 images corresponding to three study areas, namely the Bouba Ndjida National Park, the Dja Biosphere Reserve, and the Wildlife Reserve of Togodo. A total of 628 images were processed using scripts based on the SNAP graph processing tool (GPT). To test the effectiveness of the proposed methodology, three DL models were trained on the Bouba Ndjida and Togodo images to predict the next occurrence in a sequence.
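The batch-preprocessing idea, scripts driving SNAP's gpt tool over many Sentinel-1 scenes, can be sketched as follows. The graph filename and parameter names here are hypothetical placeholders, not the authors' published scripts; only the "gpt <graph.xml> -P<name>=<value>" invocation style is standard SNAP usage.

```python
from pathlib import Path

# Hypothetical graph file; a common Sentinel-1 GRD chain would contain
# Apply-Orbit-File -> Calibration -> Speckle-Filter -> Terrain-Correction,
# which may differ from the authors' exact processing graph.
GRAPH = "s1_preprocessing_graph.xml"

def gpt_command(src: Path, out_dir: Path) -> list:
    """Build one SNAP gpt invocation for batch preprocessing of a scene."""
    target = out_dir / (src.stem + "_preprocessed.tif")
    return ["gpt", GRAPH, f"-Pinput={src}", f"-Poutput={target}"]

# Looping this over a download folder yields the batch behaviour described.
cmd = gpt_command(Path("S1A_IW_GRDH_20220101.zip"), Path("out"))
```

Each command list would then be passed to a process runner (e.g. subprocess.run) once per downloaded scene.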
APA, Harvard, Vancouver, ISO, and other styles
37

Hoogendoorn, S. P., H. J. Van Zuylen, M. Schreuder, B. Gorte, and G. Vosselman. "Microscopic Traffic Data Collection by Remote Sensing." Transportation Research Record: Journal of the Transportation Research Board 1855, no. 1 (2003): 121–28. http://dx.doi.org/10.3141/1855-15.

Full text
Abstract:
To gain insight into the behavior of drivers during congestion, and to develop and test theories and models that describe congested driving behavior, very detailed data are needed. A new data-collection system prototype is described for determining individual vehicle trajectories from sequences of digital aerial images. Software was developed to detect and track vehicles from image sequences. In addition to longitudinal and lateral position as a function of time, the system can determine vehicle length and width. Before vehicle detection and tracking can be achieved, the software handles correction for lens distortion, radiometric correction, and orthorectification of the image. The software was tested on data collected from a helicopter by a digital camera that gathered high-resolution monochrome images, covering 280 m of a Dutch motorway. From the test, it was concluded that the techniques for analyzing the digital images can be applied automatically without much problem. However, given the limited stability of the helicopter, only 210 m of the motorway could be used for vehicle detection and tracking. The resolution of the data collection was 22 cm. Weather conditions appear to have a significant influence on the reliability of the data: 98% of the vehicles could be detected and tracked automatically when conditions were good; this number dropped to 90% when the weather conditions worsened. Equipment for stabilizing the camera—gyroscopic mounting—and the use of color images can be applied to further improve the system.
APA, Harvard, Vancouver, ISO, and other styles
38

Jin, Shaojie, Ying Gao, Shoucai Jing, Fei Hui, Xiangmo Zhao, and Jianzhen Liu. "Traffic Flow Parameters Collection under Variable Illumination Based on Data Fusion." Journal of Advanced Transportation 2021 (August 15, 2021): 1–14. http://dx.doi.org/10.1155/2021/4592124.

Full text
Abstract:
Accurate traffic flow parameters are the supporting data for analyzing traffic flow characteristics. Vehicle detection from traffic surveillance pictures is a typical method for gathering traffic flow parameters in urban traffic scenes. Under the complicated lighting conditions of night, however, neither classical nor deep-learning-based image processing algorithms can provide adequate detection results. This study proposes a fusion technique that combines millimeter-wave radar data with image data to compensate for the shortcomings of image-based vehicle detection under complicated lighting and thus enable all-day parameter collection. The proposed method is based on an object detector named CenterNet. Taking this network as the cornerstone, we fused millimeter-wave radar data into it to improve the robustness of vehicle detection and reduce the time-consuming post-processing of traffic flow parameter collection. We collected a new dataset to train the proposed method, consisting of 1000 natural daytime images and 1000 simulated nighttime images with a total of 23,094 vehicles counted, where the simulated nighttime images were generated by a style translator named CycleGAN to reduce the labeling workload. Another four datasets of 2400 images containing 20,161 vehicles were collected to test the proposed method. The experimental results show that the proposed method has good adaptability and robustness in natural daytime and nighttime scenes.
APA, Harvard, Vancouver, ISO, and other styles
39

Hodosh, M., P. Young, and J. Hockenmaier. "Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics." Journal of Artificial Intelligence Research 47 (August 30, 2013): 853–99. http://dx.doi.org/10.1613/jair.3994.

Full text
Abstract:
The ability to associate images with natural language sentences that describe what is depicted in them is a hallmark of image understanding, and a prerequisite for applications such as sentence-based image search. In analogy to image search, we propose to frame sentence-based image annotation as the task of ranking a given pool of captions. We introduce a new benchmark collection for sentence-based image description and search, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. We introduce a number of systems that perform quite well on this task, even though they are only based on features that can be obtained with minimal supervision. Our results clearly indicate the importance of training on multiple captions per image, and of capturing syntactic (word order-based) and semantic features of these captions. We also perform an in-depth comparison of human and automatic evaluation metrics for this task, and propose strategies for collecting human judgments cheaply and on a very large scale, allowing us to augment our collection with additional relevance judgments of which captions describe which image. Our analysis shows that metrics that consider the ranked list of results for each query image or sentence are significantly more robust than metrics that are based on a single response per query. Moreover, our study suggests that the evaluation of ranking-based image description systems may be fully automated.
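Ranking-based image description of the kind described above is typically scored with recall@k and median rank over the 1-based ranks of the correct captions; a minimal sketch of those two metrics (the sample ranks are invented for illustration):

```python
def recall_at_k(ranks, k):
    """Fraction of queries whose correct caption appears in the top k."""
    return sum(r <= k for r in ranks) / len(ranks)

def median_rank(ranks):
    """Median of the 1-based ranks of the first correct result per query."""
    s = sorted(ranks)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

# ranks[i] = 1-based rank of the first correct caption for query image i
ranks = [1, 3, 2, 15, 1, 8, 4, 2]
r5 = recall_at_k(ranks, 5)   # 0.75
med = median_rank(ranks)     # 2.5
```

Metrics over the full ranked list, as the abstract notes, are more robust than scoring a single response per query.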
APA, Harvard, Vancouver, ISO, and other styles
40

Li, Q., Z. Zhang, W. Lu, J. Yang, Y. Ma, and W. Yao. "From pixels to patches: a cloud classification method based on bag of micro-structures." Atmospheric Measurement Techniques Discussions 8, no. 10 (2015): 10213–47. http://dx.doi.org/10.5194/amtd-8-10213-2015.

Full text
Abstract:
Abstract. Automatic cloud classification has attracted more and more attention with the increasing development of whole-sky imagers, but it is still a work in progress for ground-based cloud observation. This paper proposes a new cloud classification method, named bag of micro-structures (BoMS). This method treats an all-sky image as a collection of micro-structures mapped from image patches, rather than as a collection of pixels, and then constructs an image representation from a weighted histogram of micro-structures. Lastly, a support vector machine (SVM) classifier is applied to the image representation, because SVMs are well suited to sparse, high-dimensional feature spaces. Five different sky conditions are identified: cirriform, cumuliform, stratiform, clear sky, and mixed cloudiness, which often appears in all-sky images but is seldom addressed in the literature. BoMS is evaluated on a large dataset of 5000 all-sky images captured by a total-sky cloud imager located in Tibet (29.25° N, 88.88° E). BoMS achieves an accuracy of 90.9 % under 10-fold cross-validation, and it outperforms the state-of-the-art method with an increase of about 19 %. Furthermore, the influence of key parameters in BoMS is investigated to verify their robustness.
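As a rough illustration of the bag-of-micro-structures idea (representing an image as a histogram over patch-level codes rather than pixels), the sketch below quantizes each patch's mean intensity into a code. This is an assumption-laden toy version: the paper's actual micro-structures come from a richer patch mapping, and the histogram weighting and SVM training are omitted.

```python
import numpy as np

def boms_histogram(image, patch=4, n_bins=16):
    """Treat the image as a collection of patches: map each non-overlapping
    patch to a discrete code (here, a coarse quantization of its mean
    intensity) and return a normalized histogram over the codes."""
    h, w = image.shape
    codes = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            m = image[i:i + patch, j:j + patch].mean()
            codes.append(min(int(m * n_bins), n_bins - 1))
    hist = np.bincount(codes, minlength=n_bins).astype(float)
    return hist / hist.sum()  # normalized image representation

rng = np.random.default_rng(0)
sky = rng.random((32, 32))    # stand-in for a grayscale all-sky image
rep = boms_histogram(sky)     # 16-dimensional feature vector
```

An SVM classifier would then be trained on these histogram vectors, one per all-sky image.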
APA, Harvard, Vancouver, ISO, and other styles
41

Woodward-Greene, M. Jennifer, Jason M. Kinser, Tad S. Sonstegard, Johann Sölkner, Iosif I. Vaisman, and Curtis P. Van Tassell. "PreciseEdge raster RGB image segmentation algorithm reduces user input for livestock digital body measurements highly correlated to real-world measurements." PLOS ONE 17, no. 10 (2022): e0275821. http://dx.doi.org/10.1371/journal.pone.0275821.

Full text
Abstract:
Computer vision is a tool that could provide livestock producers with digital body measures and records that are important for animal health and production, namely body height and length, and chest girth. However, to build these tools, the scarcity of labeled training data sets with uniform images (pose, lighting) that also represent real-world livestock can be a challenge. Collecting images in a standard way, with manual image labeling, is the gold standard for creating such training data, but the time and cost can be prohibitive. We introduce the PreciseEdge image segmentation algorithm to address these issues by employing a standard image collection protocol with a semi-automated image labeling method and a highly precise image segmentation for automated body measurement extraction directly from each image. These elements, from image collection to extraction, are designed to work together to yield values highly correlated to real-world body measurements. PreciseEdge adds a brief preprocessing step inspired by chromakey to a modified GrabCut procedure to generate image masks for data extraction (body measurements) directly from the images. Three hundred RGB (red, green, blue) image samples were collected uniformly per the African Goat Improvement Network Image Collection Protocol (AGIN-ICP), which prescribes camera distance, poses, a blue backdrop, and a custom AGIN-ICP calibration sign. Images were taken in natural settings outdoors and in barns under high and low light, using a Ricoh digital camera producing JPG images (converted to PNG prior to processing). The rear and side AGIN-ICP poses were used for this study. The PreciseEdge and GrabCut image segmentation methods were compared for differences in the user input required to segment the images. The initial bounding-box image output was captured for visual comparison. The automated digital body measurements extracted were compared to manual measures for each method.
Both methods allow additional optional refinement (mouse strokes) to aid the segmentation algorithm. These optional mouse strokes were captured automatically and compared. Stroke count distributions for both methods were not normally distributed per Kolmogorov-Smirnov tests. Non-parametric Wilcoxon tests showed the distributions were different (p < 0.001) and the GrabCut stroke count was significantly higher (p = 5.115e-49), with a mean of 577.08 (std 248.45) versus 221.57 (std 149.45) for PreciseEdge. Digital body measures were highly correlated to manual height, length, and girth measures: (0.931, 0.943, 0.893) for PreciseEdge and (0.936, 0.944, 0.869) for GrabCut (Pearson correlation coefficient). PreciseEdge image segmentation produced masks yielding accurate digital body measurements highly correlated to manual, real-world measurements with over 38% less user input, offering an efficient, reliable, non-invasive alternative to hand-held direct measuring tools for livestock.
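The chromakey-inspired preprocessing step can be illustrated under the AGIN-ICP blue-backdrop assumption: pixels whose blue channel dominates red and green are treated as backdrop. This toy mask is only a sketch of the idea with an invented dominance threshold; PreciseEdge feeds such information into a modified GrabCut procedure, which is not reproduced here.

```python
import numpy as np

def chromakey_mask(rgb, blue_dominance=1.2):
    """Rough chromakey step: flag pixels whose blue channel dominates red
    and green as backdrop, leaving a foreground (animal) mask that could
    seed a GrabCut-style refinement."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    backdrop = (b > blue_dominance * r) & (b > blue_dominance * g)
    return ~backdrop  # True = probable foreground

# Synthetic frame: blue backdrop with a grey "animal" block in the middle.
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[..., 2] = 200                       # blue background
img[30:70, 30:70] = (120, 120, 120)     # grey foreground region
mask = chromakey_mask(img)              # boolean foreground mask
```

On real images the threshold would need tuning for lighting, which is one reason a refinement stage follows the chromakey step.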
APA, Harvard, Vancouver, ISO, and other styles
42

Sun, Ke Mei. "Wireless Image Transmission System." Applied Mechanics and Materials 336-338 (July 2013): 1661–64. http://dx.doi.org/10.4028/www.scientific.net/amm.336-338.1661.

Full text
Abstract:
A wireless image transmission system is proposed to overcome the drawbacks of wired video monitoring, namely the difficulty of deploying monitoring points and of installation in harsh environments. The core of the system is a TI C5000-series high-speed DSP processor and a radio-frequency wireless communication chip. By porting the JPEG compression algorithm to the DSP, the system performs image collection and compression on the DSP and uses a wireless transceiver module to transmit the image data. Because the wireless transceiver module replaces a wired transmitter and network as the image data transmission device, the system is low-cost, requires no network fees, and can capture and display terminal images in real time. It can be applied to community security and to rapidly deployed video monitoring in power systems, factories, banks, and other important departments where high performance is not required.
APA, Harvard, Vancouver, ISO, and other styles
43

Akhmetvaleev, R. R., I. A. Lackman, D. V. Popov, and M. V. Krasnoperov. "Image segmentation technique to support automatic marking of objects in endoscopic images." Informatization and communication, no. 2 (February 16, 2021): 146–54. http://dx.doi.org/10.34219/2078-8320-2021-12-2-146-154.

Full text
Abstract:
The aim of this study is to develop a method for the visual segmentation of various objects in endoscopic images, based on a collection of endoscopic images. The method was developed using a collection of images obtained by ENVD LLC on a contractual basis with medical organizations of the Republic of Bashkortostan, Russia. The collection consists of 70 endoscopic images recording clinical cases diagnosed in accordance with the Paris classification of gastrointestinal tumors. A number of machine vision operations were carried out, including image preprocessing, image sampling, and subsequent clustering for the purpose of image segmentation. Results: a technique for the analysis of endoscopic images was developed that makes it possible to obtain the contours of objects of interest to a specialist performing endoscopy. Conclusion: the developed solution speeds up and improves the procedure for marking endoscopic images, which in turn prepares a platform for further processing of endoscopic images, for example, the nosological classification of neoplasms.
APA, Harvard, Vancouver, ISO, and other styles
44

Liu, Weiping, Jennifer Fung, W. J. de Ruijter, Hans Chen, John W. Sedat, and David A. Agard. "Automated electron tomography: from data collection to image processing." Proceedings, annual meeting, Electron Microscopy Society of America 53 (August 13, 1995): 26–27. http://dx.doi.org/10.1017/s0424820100136507.

Full text
Abstract:
Electron tomography is a technique where many projections of an object are collected from the transmission electron microscope (TEM) and are then used to reconstruct the object in its entirety, allowing internal structure to be viewed. Although the 3-D structural information it provides is vital and no other 3-D imaging technique competes in its resolution range, electron tomography of amorphous structures has been exercised only sporadically over the last ten years. Its general lack of popularity can be attributed to the tediousness of the entire process, starting from the data collection, through image processing for reconstruction, and extending to the 3-D image analysis. We have been investing effort in automating all aspects of electron tomography. Our systems for data collection and tomographic image processing will be briefly described. To date, we have developed a second-generation automated data collection system based on an SGI workstation (Fig. 1) (the previous version used a MicroVAX). The computer takes full control of the microscope operations with its graphical menu-driven environment. This is made possible by the direct digital recording of images using the CCD camera.
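The tracking-and-correction loop that such automation replaces can be sketched abstractly. Here `acquire` and `measure_offset` are hypothetical stand-ins for the microscope and CCD interfaces (nothing in this sketch comes from the authors' actual system).

```python
def collect_tilt_series(tilts, acquire, measure_offset):
    # Automated loop: record a digital image at each tilt, measure how
    # far the feature has drifted from the image centre, and fold that
    # into the correction applied before the next exposure.
    images, correction = [], 0.0
    for angle in tilts:
        img = acquire(angle, correction)      # recorded digitally (CCD)
        correction += measure_offset(img)     # auto-tracking correction
        images.append(img)
    return images

# Toy stage that drifts 0.1 units per degree of tilt; each image here is
# just the measured offset of the feature from centre.
acquire = lambda angle, corr: 0.1 * angle - corr
series = collect_tilt_series(range(4), acquire, lambda img: img)
assert max(abs(p) for p in series) <= 0.1 + 1e-9   # drift stays bounded
```

Without the feedback term the offset would grow linearly with tilt; with it, each exposure stays within one step of centre, which is the point of automated re-centering.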
APA, Harvard, Vancouver, ISO, and other styles
45

MacKenzie, Adrian, and Anna Munster. "Platform Seeing: Image Ensembles and Their Invisualities." Theory, Culture & Society 36, no. 5 (2019): 3–22. http://dx.doi.org/10.1177/0263276419847508.

Full text
Abstract:
How can one ‘see’ the operationalization of contemporary visual culture, given the imperceptibility and apparent automation of so many processes and dimensions of visuality? Seeing – as a position from a singular mode of observation – has become problematic since many visual elements, techniques, and forms of observing are highly distributed through data practices of collection, analysis and prediction. Such practices are subtended by visual cultural techniques that are grounded in the development of image collections, image formatting and hardware design. In this article, we analyze recent transformations in forms of prediction and data analytics associated with spectacular performances of computation. We analyze how transformations in the collection and accumulation of images as ensembles by platforms have a qualitative and material effect on the emergent sociotechnicality of platform ‘life’ and ‘perception’. Reconstructing the visual transformations that allow artificial intelligence assemblages to operate allows some sense of their heteronomous materiality and contingency.
APA, Harvard, Vancouver, ISO, and other styles
46

Ørnager, Susanne, and Haakon Lund. "Billedindeksering og sociale medier." Nordisk Tidsskrift for Informationsvidenskab og Kulturformidling 8, no. 1 (2019): 2–21. http://dx.doi.org/10.7146/ntik.v8i1.115599.

Full text
Abstract:
This article focuses on the methodologies, organization, and communication of digital image collection research that utilizes social media content. "Image" is here understood as a cultural, conventional, and commercial (stock photo) representation. Two methodologies, PRISMA and Grounded Theory, are employed to examine, categorize, and analyze images and to comprehend how humans consider them. The literature review covers research since 2005, when major social media platforms emerged. It demonstrates that images on social media have not changed the overall direction of research into image indexing and retrieval, though new topics on crowdsourcing and tagging have emerged. A citation analysis includes an overview of co-citation maps that demonstrate the nexus between the image research literature and the journals in which it appears. The results point to new possibilities influencing the research by providing large image collections as new testbeds for improving or testing research hypotheses at a new scale.
APA, Harvard, Vancouver, ISO, and other styles
47

Agard, D. A., A. J. Koster, M. B. Braunfeld, and J. W. Sedat. "Automated data collection for electron tomography." Proceedings, annual meeting, Electron Microscopy Society of America 50, no. 2 (1992): 1044–45. http://dx.doi.org/10.1017/s0424820100129851.

Full text
Abstract:
Three-dimensional imaging has become an important addition to the variety of methods available for research on biological structures. Non-crystalline samples can be examined by high resolution electron tomography, which requires that projection data be collected over a large range of specimen tilts. Practical limitations of tomography are set by the large number of micrographs to be processed, and by the required (and tedious) recentering and refocusing of the object during data collection, especially for dose-sensitive specimens. With automated electron tomography a number of these problems can be overcome. First, the images are recorded directly in digital format, using a cooled slow-scan CCD camera, and, with automatic tracking and correction for image shift and focus variation, a pre-aligned dataset is obtained, with every image recorded under well-defined imaging conditions. At UCSF, we use intermediate voltage electron tomography to study higher-order chromatin structure. Of central interest is elucidating the higher-order arrangement of the 30 nm chromatin fiber within condensed chromosomes through several phases of the cell cycle and, in collaboration with Chris Woodcock, the structure of the 30 nm fiber.
APA, Harvard, Vancouver, ISO, and other styles
48

Yogaswara, Andrey Satwika, Disman Disman, Eeng Ahman, and Nugraha Nugraha. "Kinerja Dilihat dari Perspektif Kepemimpinan Militer dan Budaya Organisasi." Image : Jurnal Riset Manajemen 11, no. 2 (2023): 142–51. http://dx.doi.org/10.17509/image.2023.013.

Full text
Abstract:
This study aims to determine the role of military leadership and organizational culture in leaders' performance in the TNI AD Military Police unit. The approach used is descriptive verification with multiple regression methods. The population in the study was 69 POMDAM and DENPOM commanders throughout Indonesia. Data were collected with a questionnaire distributed via Google Forms in a single round (cross-sectional method). Military leadership variables are measured by task-oriented, relationship-oriented, change-oriented, and external dimensions. Organizational culture variables are measured using the dimensions of individual behavior, norms, dominant values, philosophy, applicable regulations, and organizational climate. Meanwhile, the leader's performance is measured according to the TNI Commander's regulations, covering quantity, quality, creativity, cooperation, initiative, and personal qualities. The findings show that military leadership and organizational culture significantly affect leadership performance, both partially and simultaneously. Military leadership, as a projection of the personality and character of a leader that makes subordinates do what is asked, can explain performance. The oath-based military culture includes commitment, values, and behavior, elements of the culture that play a role in improving the performance of personnel at every level of the organization.
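The multiple-regression step named in the abstract amounts to ordinary least squares with an intercept; the sketch below uses invented toy data and coefficients purely for illustration (the study's actual variables and estimates are not reproduced here).

```python
import numpy as np

# Toy data: two predictors (say, leadership and culture scores) and a
# performance score generated from known coefficients, noise-free.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [4.0, 2.0], [5.0, 4.0]])
y = 0.5 * X[:, 0] + 0.8 * X[:, 1] + 1.0

A = np.column_stack([np.ones(len(X)), X])        # prepend intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # ordinary least squares

assert np.allclose(coef, [1.0, 0.5, 0.8])        # intercept and both slopes recovered
```

"Partial" effects in the study correspond to the individual slope estimates; the "simultaneous" effect corresponds to the fit of the full model.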
APA, Harvard, Vancouver, ISO, and other styles
49

Bakış, Yasin, Xiaojun Wang, and Hank Bart. "Challenges in Curating 2D Multimedia Data in the Application of Machine Learning in Biodiversity Image Analysis." Biodiversity Information Science and Standards 5 (September 28, 2021): e75856. https://doi.org/10.3897/biss.5.75856.

Full text
Abstract:
Over 1 billion biodiversity collection specimens ranging from fungi to fish to fossils are housed in more than 1,600 natural history collections across the United States. The digitization of these specimens has risen significantly within the last few decades, and this is only likely to increase as the use of digitized data gains more importance every day. Numerous experiments with automated image analysis have proven the practicality and usefulness of digitized biodiversity images with computational techniques such as neural networks and image processing. However, most of the computational techniques used to analyze images of biodiversity collection specimens require good curation of this data. One of the challenges in curating multimedia data of biodiversity collection specimens is the quality of the multimedia objects, in our case two-dimensional images. To tackle the image quality problem, multimedia needs to be captured in a specific format and presented with appropriate descriptors. In this study we present an analysis of two image repositories, each consisting of 2D images of fish specimens from several institutions: the Integrated Digitized Biocollections (iDigBio) and the Great Lakes Invasives Network (GLIN). Approximately 70 thousand images from the GLIN repository and 450 thousand images from the iDigBio repository were processed and assessed for their suitability for use in neural network-based species identification and trait extraction applications. Our findings showed that images from the GLIN dataset were more suitable for image processing and machine learning purposes. Almost 40% of the species were represented with fewer than 10 images, while only 20% had more than 100 images per species. We identified and captured 20 metadata descriptors that define the quality and usability of an image.
According to the captured metadata information, 70% of the GLIN dataset images were found to be useful for further analysis according to the overall image quality score. Quality issues with the remaining images included curved specimens; non-fish objects such as tags, labels, and rocks that obstructed the view of the specimen; color, focus, and brightness issues; and folded, overlapping, or missing parts. We used both the web interface and the API (Application Programming Interface) for downloading images from iDigBio. We searched for all fish genera, families, and classes in three different searches with the images-only option selected. Then we combined all of the search results and removed duplicates. Our search of the iDigBio database for fish taxa returned approximately 450 thousand records with images. We narrowed this down to 90 thousand fish images with the aid of the multimedia metadata in the downloaded search results, excluding some non-fish images, fossil samples, X-ray and CT (computed tomography) scans, and several others. Only 44% of these 90 thousand images were found to be suitable for further analysis. In this study, we discovered some of the limitations of biodiversity image datasets and built an infrastructure for assessing the quality of biodiversity images for neural network analysis. Our experience with the fish images gathered from two different image repositories has enabled us to describe image quality metadata features. With the help of these metadata descriptors, one can simply create a dataset of the desired image quality for the purpose of analysis. Likewise, the availability of the metadata descriptors will help advance our understanding of quality issues, while helping data technicians, curators, and other digitization staff be more aware of multimedia quality.
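Building a dataset "of the desired image quality" from such metadata reduces to a threshold-plus-flags filter. The field names, records, and the 0.7 cutoff below are assumptions for illustration, not the study's actual 20 descriptors.

```python
# Hypothetical metadata records with a 0-1 overall quality score and
# defect flags, loosely mirroring the issues the abstract lists.
records = [
    {"id": "glin-001", "quality": 0.92, "defects": []},
    {"id": "glin-002", "quality": 0.55, "defects": ["curved_specimen"]},
    {"id": "idig-003", "quality": 0.81, "defects": ["label_obstruction"]},
    {"id": "idig-004", "quality": 0.88, "defects": []},
]

def usable(record, threshold=0.7):
    # Keep images whose overall quality clears the threshold and that
    # carry no defect flags that would confuse trait extraction.
    return record["quality"] >= threshold and not record["defects"]

dataset = [r["id"] for r in records if usable(r)]
assert dataset == ["glin-001", "idig-004"]
```

The same filter, run with a different threshold or defect whitelist, yields a different training set, which is the flexibility the metadata descriptors are meant to provide.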
APA, Harvard, Vancouver, ISO, and other styles
50

Pflugrath, J. W. "The finer things in X-ray diffraction data collection." Acta Crystallographica Section D Biological Crystallography 55, no. 10 (1999): 1718–25. http://dx.doi.org/10.1107/s090744499900935x.

Full text
Abstract:
X-ray diffraction images from two-dimensional position-sensitive detectors can be characterized as thick or thin, depending on whether the rotation-angle increment per image is greater than or less than the crystal mosaicity, respectively. The expectations and consequences of the processing of thick and thin images in terms of spatial overlap, saturated pixels, X-ray background and I/σ(I) are discussed. The d*TREK software suite for processing diffraction images is briefly introduced, and results from d*TREK are compared with those from another popular package.
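The thick/thin distinction reduces to a one-line comparison; a minimal sketch (angle values here are invented examples):

```python
def image_thickness(rotation_increment, mosaicity):
    # An image is "thick" when the rotation increment per image exceeds
    # the crystal mosaicity, and "thin" otherwise.
    return "thick" if rotation_increment > mosaicity else "thin"

assert image_thickness(1.0, 0.4) == "thick"  # 1.0 deg/image vs 0.4 deg mosaicity
assert image_thickness(0.1, 0.4) == "thin"   # fine slicing of the same reflection
```

Which regime a dataset falls into then drives the processing trade-offs the abstract lists: spatial overlap and saturation for thick images, background accumulation and I/σ(I) for thin ones.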
APA, Harvard, Vancouver, ISO, and other styles
