Academic literature on the topic 'Colorspaces'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Colorspaces.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Colorspaces"

1

Nardini, Pascal, Min Chen, Michael Böttinger, Gerik Scheuermann, and Roxana Bujack. "Automatic Improvement of Continuous Colormaps in Euclidean Colorspaces." Computer Graphics Forum 40, no. 3 (2021): 361–73. http://dx.doi.org/10.1111/cgf.14313.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ennis, Robert, Florian Schiller, Matteo Toscani, and Karl Gegenfurtner. "Hyperspectral database of fruits and vegetables (v1.1) - Calibrated data, colorspaces, masks, and MATLAB code." Journal of the Optical Society of America A 35, no. 4 (2018): 11. https://doi.org/10.5281/zenodo.2611806.

Full text
Abstract:
***Please note that we added a small change in the "readCompressedDAT.m" MATLAB function since some users experienced a bit of confusion. There were no errors in the spectral data, but the change makes it clearer to users which wavelengths correspond to the elements of the uncompressed hyperspectral data matrix. This should hopefully be much clearer now. We updated the version number to reflect this. All other data and files remain the same, so there is no need to use the older version.***

We have built a hyperspectral database of 42 fruits and vegetables and this is a permanent online repository for public access to the calibrated data and its representation in different colorspaces (RGB, LAB, LUV, DKL, LMS, XYZ, xyY). For more complete details, you are advised to check the official webpage for the database and to read the journal article. The database and some accompanying documentation for the Matlab functions can be found at http://www.allpsych.uni-giessen.de/GHIFVD (pronounced "gift"). The journal article that accompanies this repository is published in JOSA A: Ennis R., Schiller F., Toscani M., Gegenfurtner, K. (2018) "Hyperspectral database of fruits and vegetables", JOSA A, Vol. 35, No. 4, pp. B256-B266, and can be found here: https://www.osapublishing.org/josaa/abstract.cfm?uri=josaa-35-4-B256. Included here is the MATLAB code for opening the PCA compressed images.

Both the outside (skin) and inside of the objects were imaged. We used a Specim VNIR HS-CL-30-V8E-OEM mirror-scanning hyperspectral camera and took pictures at a spatial resolution of ~57 px/deg by 800 pixels at a wavelength resolution of ~1.12 nm. A stable, broadband illuminant was used. Images and software are freely available on our webserver (http://www.allpsych.uni-giessen.de/GHIFVD; pronounced "gift"). We performed two kinds of analyses on these images. First, when comparing the insides and outsides of the objects, we observed that the insides were lighter than the skins, and that the hues of the insides and skins were significantly correlated (circular correlation 0.638). Second, we compared the color distribution within each object to corresponding human color discrimination thresholds. We found a significant correlation (0.75) between the orientation of ellipses fit to the chromaticity distributions of our fruits and vegetables with the orientations of interpolated MacAdam discrimination ellipses. This indicates a relationship between sensory processing and the characteristics of environmental objects.

If for some reason you need access to the original RAW data from the camera sensor, that can be found here: https://doi.org/10.5281/zenodo.1186372. However, it is only posted for preservation purposes. You are not expected to work with the RAW data.
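
The repository's own loaders are the MATLAB helpers mentioned above (e.g. readCompressedDAT.m). Purely as a hedged, separate illustration of how calibrated reflectance of this kind is typically rendered into the listed colorspaces, the Python sketch below integrates spectra against CIE 1931 color matching functions to obtain XYZ and then applies the standard XYZ-to-sRGB matrix. The array names and shapes are assumptions, the CMFs and illuminant must be resampled to the camera's ~1.12 nm wavelength grid, and the sRGB step assumes a D65 white point.

```python
# Hedged sketch (not the authors' pipeline): render spectral reflectance to sRGB.
# `reflectance`, `illuminant`, and `cmf` are hypothetical arrays the caller supplies.
import numpy as np

def spectra_to_srgb(reflectance, illuminant, cmf):
    """reflectance: (H, W, B) reflectance in [0, 1]
    illuminant:  (B,)   illuminant spectral power distribution
    cmf:         (B, 3) CIE 1931 xbar, ybar, zbar at the same B wavelengths"""
    weighted = cmf * illuminant[:, None]               # light reaching the observer, per band
    k = 1.0 / np.sum(illuminant * cmf[:, 1])           # normalize so a perfect white has Y = 1
    xyz = k * np.tensordot(reflectance, weighted, 1)   # (H, W, 3) CIE XYZ

    # XYZ (D65 white) -> linear sRGB -> gamma-encoded sRGB (IEC 61966-2-1).
    m = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    rgb_lin = np.clip(xyz @ m.T, 0.0, 1.0)
    return np.where(rgb_lin <= 0.0031308,
                    12.92 * rgb_lin,
                    1.055 * rgb_lin ** (1 / 2.4) - 0.055)
```
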
APA, Harvard, Vancouver, ISO, and other styles
3

da Silva, C. C. V., K. Nogueira, H. N. Oliveira, and J. A. dos Santos. "TOWARDS OPEN-SET SEMANTIC SEGMENTATION OF AERIAL IMAGES." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-3/W2-2020 (October 29, 2020): 19–24. http://dx.doi.org/10.5194/isprs-annals-iv-3-w2-2020-19-2020.

Full text
Abstract:
Classical and, more recently, deep computer vision methods are optimized for visible spectrum images, commonly encoded in grayscale or RGB colorspaces acquired from smartphones or cameras. A less common source of images exploited in the remote sensing field is satellite and aerial imagery. However, the development of pattern recognition approaches for these data is relatively recent, mainly due to the limited availability of this type of image, as until recently they were used exclusively for military purposes. Access to aerial imagery, including spectral information, has been increasing mainly due to the low cost of drones, cheaper imaging satellite launches, and novel public datasets. Usually, remote sensing applications employ computer vision techniques strictly modeled for classification tasks in closed set scenarios. However, real-world tasks rarely fit into closed set contexts, frequently presenting previously unknown classes, characterizing them as open set scenarios. Focusing on this problem, this is the first paper to study and develop semantic segmentation techniques for open set scenarios applied to remote sensing images. The main contributions of this paper are: 1) a discussion of related works in open set semantic segmentation, showing evidence that these techniques can be adapted for open set remote sensing tasks; 2) the development and evaluation of a novel approach for open set semantic segmentation. Our method yielded competitive results when compared to closed set methods for the same dataset.
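
The paper's open-set segmentation networks are not reproduced here. As a hedged illustration of the basic open-set idea (pixels that fit none of the known classes should be rejected rather than forced into one), the sketch below thresholds per-pixel softmax confidence and relabels low-confidence pixels as unknown; the tensor shapes and the threshold value are assumptions.

```python
# Hedged open-set rejection sketch: pixels whose best known-class probability falls
# below a threshold are labeled "unknown" (-1). This is a generic baseline, not the
# method proposed in the paper; `logits` and `tau` are hypothetical.
import numpy as np

UNKNOWN = -1

def open_set_segmentation(logits, tau=0.7):
    """logits: (C, H, W) raw scores for C known classes -> (H, W) label map."""
    e = np.exp(logits - logits.max(axis=0, keepdims=True))   # stable softmax
    probs = e / e.sum(axis=0, keepdims=True)

    labels = probs.argmax(axis=0)        # closed-set prediction
    confidence = probs.max(axis=0)       # probability of the winning class
    labels[confidence < tau] = UNKNOWN   # reject uncertain pixels as unknown
    return labels
```
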
APA, Harvard, Vancouver, ISO, and other styles
4

Naik Dharavath, Haji. "Aiming for G7 Master Compliance through a Color Managed Workflow: Comparison of Compliance with Amplitude Modulated (AM) vs. Frequency Modulated (FM) Screening of Multicolor Digital Printing." Journal of Graphic Engineering and Design 12, no. 2 (2021): 5–19. http://dx.doi.org/10.24867/jged-2021-2-005.

Full text
Abstract:
The purpose of this research was to determine the influence of screening technology (AM vs. FM) on color reproduction aimed at G7 master compliance. The quality of digital color printing is determined by several influential factors: the screening method applied, the type of printing process, the ink (dry-toner or liquid-toner), the printer resolution, and the substrate (paper). For this research, only color printing attributes such as the G7 colors' hue and chroma, gray balance, and overall color deviation were analyzed to examine the significant differences between the two screening technologies (AM vs. FM). These are the color attributes that are monitored and managed for quality accuracy during printing. Printed colorimetry of each screening from the experiment was compared against the G7 ColorSpace GRACoL 2013 (CGATS21-2-CRPC6) reference in CIE L*a*b* space using the IDEAlliance (Chromix/Hutch Color) Curve 4.2.4 application with an X-Rite spectrophotometer on an i1iO table. The measured data of each screening were run through this application (Curve 4.2.4) and analyzed with its Verify Tool to determine pass/fail at the G7 master compliance levels using the G7 ColorSpace tolerances (G7 Grayscale, G7 Targeted, and G7 Colorspace). The analyzed data revealed that the printed colorimetric values of each screening match (are aligned with) the G7 master compliance reference/target colorimetric values at all three levels (G7 Grayscale, G7 Targeted, and G7 Colorspace). Therefore, the press run was passed by the Curve 4 application for both screening technologies tested.
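
The pass/fail verdicts in this study come from the Curve 4.2.4 Verify Tool and the official G7 ColorSpace tolerance sets. Purely as a hedged illustration of the underlying comparison, the sketch below scores measured CIELAB patches against reference colorimetry with CIE76 delta E; the patch values and the single tolerance number are invented examples.

```python
# Hedged illustration of the underlying colorimetric comparison: measured CIELAB
# patches vs. a reference characterization, scored with CIE76 delta E. The patch
# values and the tolerance are invented; real G7 verification applies the official
# tolerance sets inside tools such as Curve's Verify Tool.
import numpy as np

def delta_e76(lab_measured, lab_reference):
    """CIE76 delta E*ab (Euclidean distance in CIELAB) for arrays of shape (N, 3)."""
    diff = np.asarray(lab_measured, float) - np.asarray(lab_reference, float)
    return np.linalg.norm(diff, axis=-1)

# Three hypothetical (L*, a*, b*) patches from a press sheet vs. the reference aims.
measured  = [[55.2, -34.8, -50.1], [48.9, 74.0, -3.2], [89.1, -4.6, 93.0]]
reference = [[56.0, -37.0, -50.0], [48.0, 75.0, -4.0], [89.0, -5.0, 93.0]]

de = delta_e76(measured, reference)
tolerance = 3.0                                       # hypothetical per-patch limit
print(de.round(2), "pass" if np.all(de <= tolerance) else "fail")
```
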
APA, Harvard, Vancouver, ISO, and other styles
5

Naik Dharavath, Haji. "Aiming for G7 Master Compliance through a Color Managed Digital Printing Workflow (CMDPW): Comparison of Compliance with Output Device Profile (ODP) vs. Device Link Profile (DLP)." Journal of Graphic Engineering and Design 12, no. 1 (2021): 23–35. http://dx.doi.org/10.24867/jged-2021-1-023.

Full text
Abstract:
The purpose of this applied research was to determine the influence of a device link profile (DLP) on color reproduction aimed at G7 master compliance. The quality of digital color printing is determined by several influential factors: the screening method applied, the type of printing process, the ink (dry-toner or liquid-toner), the printer resolution, and the substrate (paper). For this research, only color printing attributes such as the G7 colors' hue and chroma, gray balance, and overall color deviation were analyzed to examine the significant differences between the two output profiles [Output Device Profile (ODP) vs. Device Link Profile (DLP)]. These are the color attributes that are monitored and managed for quality accuracy during printing. Printed colorimetry of each profile from the experiment was compared against the G7 ColorSpace GRACoL 2013 (CGATS21-2-CRPC6) reference in CIE L*a*b* space using the IDEAlliance (Chromix/Hutch Color) Curve 4.2.4 application with an X-Rite spectrophotometer on an i1iO table. The measured data of each profile were run through this application (Curve 4.2.4) and analyzed with its Verify Tool to determine pass/fail at the G7 master compliance levels using the G7 ColorSpace tolerances (G7 Grayscale, G7 Targeted, and G7 Colorspace). The analyzed data revealed that the printed colorimetric values of each profile match (are aligned with) the G7 master compliance reference/target colorimetric values at all three levels (G7 Grayscale, G7 Targeted, and G7 Colorspace). Therefore, the press run was passed by the Curve 4 application for both profiles tested toward G7 master compliance.
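
Pillow's ImageCms module (a littleCMS wrapper) can sketch the output-device-profile leg of this comparison: converting an image through a source profile and a destination press profile. The file names below are placeholders, and the device link route itself is not shown, since Pillow does not expose device link construction; a DLP simply collapses the source-to-PCS-to-destination chain into a single precomputed device-to-device transform, usually applied in the RIP.

```python
# Hedged sketch of an ODP-style conversion with Pillow's ImageCms (littleCMS wrapper).
# File names are placeholders; a device link profile (DLP) would replace the
# source -> PCS -> destination chain with one precomputed device-to-device transform,
# which is not shown here.
from PIL import Image, ImageCms

img = Image.open("press_page.tif").convert("RGB")         # placeholder input image
src = ImageCms.createProfile("sRGB")                      # source working space
dst = ImageCms.getOpenProfile("GRACoL2013_CRPC6.icc")     # placeholder output device profile

transform = ImageCms.buildTransform(
    src, dst, "RGB", "CMYK",
    renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC)
cmyk = ImageCms.applyTransform(img, transform)
cmyk.save("press_page_cmyk.tif")
```
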
APA, Harvard, Vancouver, ISO, and other styles
6

Yue, Lin, Hao Shen, Sen Wang, et al. "Exploring BCI Control in Smart Environments." ACM Transactions on Knowledge Discovery from Data 15, no. 5 (2021): 1–20. http://dx.doi.org/10.1145/3450449.

Full text
Abstract:
The brain–computer interface (BCI) control technology that utilizes motor imagery to perform the desired action instead of manual operation will be widely used in smart environments. However, most of the research lacks robust feature representation of multi-channel EEG series, resulting in low intention recognition accuracy. This article proposes an EEG2Image based Denoised-ConvNets (called EID) to enhance feature representation of the intention recognition task. Specifically, we perform signal decomposition, slicing, and image mapping to decrease the noise from the irrelevant frequency bands. After that, we construct the Denoised-ConvNets structure to learn the colorspace and spatial variations of image objects without cropping new training images precisely. Toward further utilizing the color and spatial transformation layers, the colorspace and colored area of image objects have been enhanced and enlarged, respectively. In the multi-classification scenario, extensive experiments on publicly available EEG datasets confirm that the proposed method has better performance than state-of-the-art methods.
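
The EID architecture itself is not reproduced here. As a hedged illustration of the general EEG-to-image idea mentioned in the abstract, the sketch below band-passes multi-channel EEG into three frequency bands and maps per-channel band power onto three image channels; the sampling rate, band edges, and slice length are assumptions.

```python
# Hedged sketch of mapping multi-channel EEG to an image-like tensor: band power in
# three frequency bands becomes three "color" channels. This is a generic
# illustration, not the EEG2Image/EID method; fs, band edges, and slice_len are
# assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def eeg_to_image(eeg, fs=250, slice_len=250,
                 bands=((4, 8), (8, 13), (13, 30))):      # theta, alpha, beta in Hz
    """eeg: (n_channels, n_samples) -> image-like array (n_channels, n_slices, n_bands)."""
    n_ch, n_samp = eeg.shape
    n_slices = n_samp // slice_len
    image = np.zeros((n_ch, n_slices, len(bands)))
    for bi, (lo, hi) in enumerate(bands):
        b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        filtered = filtfilt(b, a, eeg, axis=1)            # keep only this frequency band
        for s in range(n_slices):
            seg = filtered[:, s * slice_len:(s + 1) * slice_len]
            image[:, s, bi] = np.mean(seg ** 2, axis=1)   # mean band power per channel
    # Rescale to [0, 1] so the result can be handled like a 3-channel image.
    return (image - image.min()) / (image.max() - image.min() + 1e-12)
```
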
APA, Harvard, Vancouver, ISO, and other styles
7

Maurya, Shweta, and Vishal Shrivastava. "An Improved Novel Steganographic Technique For RGB And YCbCr Colorspace." IOSR Journal of Computer Engineering 16, no. 2 (2014): 155–57. http://dx.doi.org/10.9790/0661-1629155157.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Fairley, Iain, Anouska Mendzil, Michael Togneri, and Dominic Reeve. "The Use of Unmanned Aerial Systems to Map Intertidal Sediment." Remote Sensing 10, no. 12 (2018): 1918. http://dx.doi.org/10.3390/rs10121918.

Full text
Abstract:
This paper describes a new methodology to map intertidal sediment using a commercially available unmanned aerial system (UAS). A fixed-wing UAS was flown with both thermal and multispectral cameras over three study sites comprising sandy and muddy areas. Thermal signatures of sediment type were not observable in the recorded data and therefore only the multispectral results were used in the sediment classification. The multispectral camera consisted of a Red–Green–Blue (RGB) camera and four multispectral sensors covering the green, red, red edge and near-infrared bands. Statistically significant correlations (>99%) were noted between the multispectral reflectance and both moisture content and median grain size. The best correlation against median grain size was found with the near-infrared band. Three classification methodologies were tested to split the intertidal area into sand and mud: k-means clustering, artificial neural networks, and the random forest approach. Classification methodologies were tested with nine input subsets of the available data channels, including transforming the RGB colorspace to the Hue–Saturation–Value (HSV) colorspace. The classification approach that gave the best performance, based on the j-index, was when an artificial neural network was utilized with near-infrared reflectance and HSV color as input data. Classification performance ranged from good to excellent, with values of Youden’s j-index ranging from 0.6 to 0.97 depending on flight date and site.
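
As a hedged companion to the classification setup described above, the sketch below shows two generic ingredients named in the abstract: the RGB-to-HSV transform used as classifier input and Youden's J index used to score a binary sand/mud map. The arrays are placeholders, and the classifiers themselves (k-means, artificial neural network, random forest) are omitted.

```python
# Hedged sketch of two pieces named in the abstract: the RGB -> HSV transform used
# as classifier input, and Youden's J index used to score a binary sand/mud map.
# The arrays are placeholders; the classifiers themselves are omitted.
import numpy as np
from matplotlib.colors import rgb_to_hsv

def hsv_features(rgb, nir):
    """rgb: (H, W, 3) in [0, 1]; nir: (H, W). Returns an (H, W, 4) feature stack."""
    return np.dstack([rgb_to_hsv(rgb), nir])

def youden_j(y_true, y_pred):
    """Binary masks (1 = mud, 0 = sand): J = sensitivity + specificity - 1."""
    y_true, y_pred = np.asarray(y_true).ravel(), np.asarray(y_pred).ravel()
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn) + tn / (tn + fp) - 1.0
```
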
APA, Harvard, Vancouver, ISO, and other styles
9

Stauffer, Reto, and Achim Zeileis. "colorspace: A Python Toolbox for Manipulating and Assessing Colors and Palettes." Journal of Open Source Software 9, no. 102 (2024): 7120. http://dx.doi.org/10.21105/joss.07120.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kim, Young-Ju. "Multi-Mode Reconstruction of Subsampled Chrominance Information using Inter-Component Correlation in YCbCr Colorspace." Journal of the Korea Contents Association 8, no. 2 (2008): 74–82. http://dx.doi.org/10.5392/jkca.2008.8.2.074.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Colorspaces"

1

James, Stuart G. "Developing a flexible and expressive realtime polyphonic wave terrain synthesis instrument based on a visual and multidimensional methodology." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2005. https://ro.ecu.edu.au/theses/107.

Full text
Abstract:
The Jitter extended library for Max/MSP is distributed with a gamut of tools for the generation, processing, storage, and visual display of multidimensional data structures. With additional support for a wide range of media types, and the interaction between these mediums, the environment presents a perfect working ground for Wave Terrain Synthesis. This research details the practical development of a realtime Wave Terrain Synthesis instrument within the Max/MSP programming environment utilizing the Jitter extended library. Various graphical processing routines are explored in relation to their potential use for Wave Terrain Synthesis.
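
The instrument itself is built in Max/MSP with the Jitter library. Purely as a hedged illustration of the core Wave Terrain Synthesis idea, the NumPy sketch below reads a two-dimensional terrain function along an elliptical orbit to produce an audio signal; the terrain polynomial and orbit frequencies are textbook examples, not taken from the thesis.

```python
# Hedged NumPy sketch of the core Wave Terrain Synthesis idea: an audio signal is
# produced by reading a 2-D terrain z = f(x, y) along a time-varying orbit. The
# terrain polynomial and orbit frequencies are textbook examples, unrelated to the
# Max/MSP/Jitter instrument developed in the thesis.
import numpy as np

def wave_terrain(duration=1.0, sr=44100, fx=220.0, fy=331.0):
    t = np.arange(int(duration * sr)) / sr
    x = np.sin(2 * np.pi * fx * t)            # elliptical orbit over [-1, 1] x [-1, 1]
    y = np.cos(2 * np.pi * fy * t)
    z = (x - y) * (x - 1) * (x + 1) * (y - 1) * (y + 1)   # example terrain function
    return z / np.max(np.abs(z))              # normalize to [-1, 1] for audio output
```
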
APA, Harvard, Vancouver, ISO, and other styles
2

Lin, En-yu. "A VQ Encoding Based Motion Detection in HSV Colorspace." Master's thesis, Feng Chia University, 2009. http://ndltd.ncl.edu.tw/handle/10668379936592012468.

Full text
Abstract:
Master's thesis, Feng Chia University, Graduate Institute of Communications Engineering, academic year 97 (ROC calendar). Motion detection is one of the important topics in vision surveillance research. The purpose of object detection is to separate the foreground object from an image accurately. This thesis addresses two of the most important stages in background subtraction, background model construction and illumination variation, by building a background model based on the codebook model with a shadow suppression algorithm in HSV colorspace. In the background model, each pixel can have one or more codewords representing the background. Samples at each pixel are clustered into the set of codewords based on a color distortion and brightness constraint. Finally, during the foreground detection procedure, if an incoming pixel happens to be a member of a codeword's set, that codeword is updated accordingly; if not, the pixel is classified as foreground.
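
As a hedged, heavily simplified sketch of the codebook matching described above: each pixel keeps a set of codewords (a color vector with brightness bounds), an incoming pixel that matches a codeword on color distortion and brightness updates it, and an unmatched pixel is classified as foreground. The thresholds are assumptions, the distortion formula follows the widely used Kim et al. codebook model, and the thesis's HSV shadow-suppression stage is omitted.

```python
# Hedged, simplified sketch of per-pixel codebook matching for background
# subtraction. Each codeword stores a color vector and brightness bounds; an
# incoming color that matches on color distortion and brightness updates the
# codeword, otherwise the pixel is reported as foreground. Thresholds are
# assumptions, and the thesis's HSV shadow-suppression step is omitted.
import numpy as np

class Codeword:
    def __init__(self, color):
        self.v = np.asarray(color, dtype=float)      # mean color vector
        b = np.linalg.norm(self.v)
        self.i_lo, self.i_hi = 0.7 * b, 1.3 * b      # brightness bounds (assumed)
        self.n = 1

    def color_distortion(self, x):
        # Distance from x to the line spanned by v (Kim et al. codebook model).
        x = np.asarray(x, dtype=float)
        p2 = (x @ self.v) ** 2 / (self.v @ self.v + 1e-12)
        return np.sqrt(max((x @ x) - p2, 0.0))

    def matches(self, x, eps=10.0):
        return (self.color_distortion(x) <= eps
                and self.i_lo <= np.linalg.norm(x) <= self.i_hi)

    def update(self, x):
        self.n += 1
        self.v += (np.asarray(x, dtype=float) - self.v) / self.n   # running mean

def classify_pixel(codebook, x):
    """Return True if x is foreground; update the matching codeword otherwise."""
    for cw in codebook:
        if cw.matches(x):
            cw.update(x)
            return False
    return True          # no codeword matched: foreground
```
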
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Colorspaces"

1

Margulis, Dan. Photoshop LAB Color: The Canyon Conundrum and Other Adventures in the Most Powerful Colorspace. Pearson Education, Limited, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Margulis, Dan. Photoshop LAB Color: The Canyon Conundrum and Other Adventures in the Most Powerful Colorspace. Peachpit Press, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Photoshop LAB color: The canyon conundrum and other adventures in the most powerful colorspace. Peachpit Press, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Colorspaces"

1

Gowda, Shreyank N., and Chun Yuan. "StegColNet: Steganalysis Based on an Ensemble Colorspace Approach." In Lecture Notes in Computer Science. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-73973-7_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Afonso, Manya, Angelo Mencarelli, Gerrit Polder, Ron Wehrens, Dick Lensink, and Nanne Faber. "Detection of Tomato Flowers from Greenhouse Images Using Colorspace Transformations." In Progress in Artificial Intelligence. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30241-2_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Romero, Andrés, Michèle Gouiffès, and Lionel Lacassagne. "Covariance Descriptor Multiple Object Tracking and Re-identification with Colorspace Evaluation." In Computer Vision - ACCV 2012 Workshops. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37484-5_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Colorspaces"

1

Fujisawa, Takanori, and Masaaki Ikehara. "Color image coding based on linear combination of adaptive colorspaces." In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017. http://dx.doi.org/10.1109/icassp.2017.7952411.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Keunecke, Nils, and S. Hamidreza Kasaei. "Open-Ended Fine-Grained 3D Object Categorization by Combining Shape and Texture Features in Multiple Colorspaces." In 2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids). IEEE, 2021. http://dx.doi.org/10.1109/humanoids47582.2021.9555670.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Suryawanshi, Bhavana V., and Jaya H. Dewan. "Image compression using assorted colorspaces with Kekre, Slant and Walsh Wavelet Transforms in TTEVR codebook for vector quantization." In 2015 International Conference on Information Processing (ICIP). IEEE, 2015. http://dx.doi.org/10.1109/infop.2015.7489423.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Madane, Manisha S., and Sudeep D. Thepade. "Score level fusion based Multimodal biometric identification using Thepade's Sorted Ternary Block Truncation coding with varied proportions of Iris, Palmprint, Left Fingerprint & Right Fingerprint with assorted similarity measures & different Colorspaces." In 2016 International Conference on Automatic Control and Dynamic Optimization Techniques (ICACDOT). IEEE, 2016. http://dx.doi.org/10.1109/icacdot.2016.7877702.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Moreno-Noguer, F., A. Sanfeliu, and D. Samaras. "A Target Dependent Colorspace for Robust Tracking." In 18th International Conference on Pattern Recognition (ICPR'06). IEEE, 2006. http://dx.doi.org/10.1109/icpr.2006.182.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Maxwell, Skyler, Matthew Kilcher, Alexander Benasutti, et al. "Automated Detection of Colorspace Via Convolutional Neural Network." In 2018 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). IEEE, 2018. http://dx.doi.org/10.1109/aipr.2018.8707374.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zhu, Tong, Zhaoxia Yin, Wanli Lyu, Jiefei Zhang, and Bin Luo. "Imperceptible Adversarial Attack on S Channel of HSV Colorspace." In 2023 International Joint Conference on Neural Networks (IJCNN). IEEE, 2023. http://dx.doi.org/10.1109/ijcnn54540.2023.10191049.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sahagun, Mary Anne M. "Determination of Sweetness level of Muntingia Calabura using HSV Colorspace." In 2020 International Conference on Data Analytics for Business and Industry: Way Towards a Sustainable Economy (ICDABI). IEEE, 2020. http://dx.doi.org/10.1109/icdabi51230.2020.9325684.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lyu, Wanli, Xinming Sun, and Zhaoxia Yin. "Reversible Adversarial Attack based on Pixel Smoothing in HSV Colorspace." In ICIIT 2024: 2024 9th International Conference on Intelligent Information Technology. ACM, 2024. http://dx.doi.org/10.1145/3654522.3654532.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bergeles, C., G. Fagogenis, J. J. Abbott, and B. J. Nelson. "Tracking intraocular microdevices based on colorspace evaluation and statistical color/shape information." In 2009 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2009. http://dx.doi.org/10.1109/robot.2009.5152348.

Full text
APA, Harvard, Vancouver, ISO, and other styles