
Journal articles on the topic 'CMOS image sensor module'


Consult the top 50 journal articles for your research on the topic 'CMOS image sensor module.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Xu, Lei, and Wei Lu. "Research on Video Information Acquisition Module of OV7620." Advanced Materials Research 216 (March 2011): 29–33. http://dx.doi.org/10.4028/www.scientific.net/amr.216.29.

Abstract:
In order to meet real-time requirements for video information acquisition and processing, this paper introduces a hardware platform based on Altera's Cyclone-series EP1C12Q240C8 FPGA and describes the drive timing of the OV7620 CMOS image sensor in the Verilog HDL language to acquire video information. The system uses the SCCB programming model and establishes communication between the FPGA and the CMOS image sensor to control the sensor and acquire its signal. To meet operating requirements across different environments and needs, the corresponding registers and controller are configured within the CMOS image sensor. Experimental results show that this flexible control of the OV7620 CMOS image sensor provides a stable and reliable source of raw information for video monitoring and for industrial applications such as on-site monitoring.
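As a rough companion to this abstract, the sketch below shows the kind of register initialisation that SCCB implies, written for a Linux host in Python rather than for the paper's FPGA/Verilog design; the bus number and the register/value pairs are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: configuring an OV7620 over SCCB, which is close enough
# to I2C for smbus2 register writes. Register values are illustrative.
from smbus2 import SMBus

OV7620_ADDR = 0x21          # 7-bit SCCB address (0x42 write / 0x43 read)

INIT_SEQUENCE = [           # (register, value) pairs -- assumed settings
    (0x12, 0x80),           # COMA: soft reset
    (0x11, 0x01),           # CLKRC: pixel-clock prescaler
    (0x12, 0x24),           # COMA: enable AGC/AWB (illustrative value)
]

with SMBus(1) as bus:       # bus 1 is an assumption about the host wiring
    for reg, val in INIT_SEQUENCE:
        bus.write_byte_data(OV7620_ADDR, reg, val)
```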
2

Kim, Bioh, Thorsten Matthias, Gerald Kreindl, Viorel Dragoi, Markus Wimplinger, and Paul Lindner. "Advances in Wafer Level Processing and Integration for CIS Module Manufacturing." International Symposium on Microelectronics 2010, no. 1 (2010): 000378–84. http://dx.doi.org/10.4071/isom-2010-wa1-paper5.

Abstract:
This article presents advances in wafer-level processing and integration techniques for CMOS image sensor module manufacturing. CMOS image sensors gave birth to the low-cost, high-volume camera-phone market and are being adopted for various high-end applications. The backside-illumination technique has significant advantages over front-side illumination due to the separation of the optical path from the metal interconnects, and wafer bonding plays a key role in manufacturing backside-illuminated sensors. The cost-effective integration of miniaturized cameras into various handheld devices is being realized through the introduction of CMOS image sensor modules, or camera modules, manufactured with wafer-level processing and integration techniques. We developed various technologies enabling wafer-level processing and integration, such as (a) wafer-to-wafer permanent bonding with oxide or polymer layers for manufacturing backside-illuminated sensor wafers, (b) wafer-level lens molding and stacking based on UV imprint lithography for making wafer-level optics, (c) conformal coating of various photoresists within high-aspect-ratio through-silicon vias, and (d) advanced backside lithography for various metallization processes in wafer-level packaging. These techniques pave the way for the future growth of the digital imaging industry by improving the electrical and optical aspects of devices as well as module manufacturability.
3

Okada, Kei, Takeshi Morishita, Marika Hayashi, Masayuki Inaba, and Hirochika Inoue. "Design and Development of a Small Stereovision Sensor Module for Small Self-Contained Autonomous Robots." Journal of Robotics and Mechatronics 17, no. 3 (2005): 248–54. http://dx.doi.org/10.20965/jrm.2005.p0248.

Abstract:
We designed a small stereovision (SSV) sensor module for easily adding visual functions to a small robot. The SSV sensor module concept comprises 1) a vision sensor module containing a camera and a visual processor and 2) a connection to the robot system through a general-purpose interface. This design enables visual functions to be used like ordinary sensors, such as touch or ultrasonic sensors, by simply connecting to a general-purpose interface such as an I/O port or serial connector. We developed a prototype module with small mobile-phone CMOS image sensors and a 16-bit microprocessor. The 30×40 mm prototype is small enough to attach even to palm-top robots. Our module performs image processing including binarization, color extraction and labeling, and template matching. We developed self-contained robots, including a 2-DOF head robot, a humanoid robot, and a palm-top robot, and realized vision-based autonomous behavior.
4

Chungyong, Kim, and Kim Gyu-Sik. "Analog CMOS Image Sensor based Radon Counter." International Journal of Trend in Scientific Research and Development 2, no. 2 (2018): 54–59. https://doi.org/10.31142/ijtsrd8330.

Abstract:
Radon is an invisible, odorless, and chemically inactive radioactive gas produced by the decay of uranium ore. Various types of equipment and components have been proposed for use in effective radon detection. In this paper, we describe a radon detector that uses an analog CMOS image sensor module. Based on our studies, we believe that this system would be helpful in protecting many people from the dangers associated with radon exposure.
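The counting principle that such CMOS-based detectors rely on can be sketched in a few lines: radiation hits show up as bright specks on frames from a lens-capped sensor. The thresholds, frame count, and use of OpenCV below are illustrative assumptions, not the paper's circuit or calibration.

```python
# Hedged sketch: counting radiation-induced bright spots in dark frames
# from a lens-capped CMOS camera.
import cv2
import numpy as np

def count_hits(frame_gray, thresh=40, min_area=2):
    # Pixels struck by radiation appear as bright specks on a dark frame.
    _, mask = cv2.threshold(frame_gray, thresh, 255, cv2.THRESH_BINARY)
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    # Label 0 is the background; keep blobs above a minimal pixel area.
    return sum(1 for i in range(1, n_labels)
               if stats[i, cv2.CC_STAT_AREA] >= min_area)

cap = cv2.VideoCapture(0)          # lens-capped camera acting as detector
hits = 0
for _ in range(600):               # integrate over 600 frames (assumed)
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hits += count_hits(gray)
print("hit count:", hits)          # a calibration factor maps this to Bq/m^3
```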
5

Katayan, Riyas, Shwe Sin Win, Rui Qi Lim, and Kripesh Vaidyanathan. "Wireless Imaging Module Assembly and Integration for Capsule Endoscopic Applications." Advanced Materials Research 254 (May 2011): 62–65. http://dx.doi.org/10.4028/www.scientific.net/amr.254.62.

Abstract:
Various breakthroughs have recently been made in capsule endoscopy (CE). As the technology matures and gains clinical acceptance, it is time to explore the best ways of fabricating, packaging, and integrating CE devices. This paper presents the development of a compact high-resolution imaging module with a VGA CMOS sensor and an in-house RF-baseband IC for capsule endoscopic applications. The complete module, inclusive of the lens, measures 11.5 mm in diameter by 28 mm in length and was successfully designed, fabricated, and assembled. The 640x260 CMOS sensor, 0201 capacitors, resistors, and LEDs are assembled onto a rigid-flex PCB with processes such as reflow heating and highly accurate automated pick-and-place die placement. The optical imaging module is interfaced with an RF communication unit, consisting of a baseband IC and an antenna, to enable wireless transmission of dynamic image data to an external data-processing and visualization unit. Animal trials produced ex-vivo GI tissue images of superior quality in terms of color saturation, contrast, and resolution compared with currently available commercial capsule imaging devices.
6

Park, Min-Chul, Kyung-Joo Cheoi, and Takayuki Hamamoto. "A CMOS Digital Image Sensor with a Feature-Driven Attention Module." KIPS Transactions:PartB 15B, no. 3 (2008): 189–96. http://dx.doi.org/10.3745/kipstb.2008.15-b.3.189.

7

Xia, Peng, Wei Dong Hao, and Xiao Yu. "Robot System to Detect Line Based on ARM9." Applied Mechanics and Materials 16-19 (October 2009): 905–9. http://dx.doi.org/10.4028/www.scientific.net/amm.16-19.905.

Abstract:
In this paper, a new line-detecting vision system for a robot, based on a CMOS image sensor, was designed, and a new line-detection technique was proposed. The system includes a core module, an image-collection module, a robot-orientation module, a motor-drive module, a human-computer interaction module, and a power-supply module. The principle, function, and implementation of every module are introduced in detail. Experimental results show that the system has not only strong real-time capability and high accuracy but also outstanding stability.
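The abstract does not spell out the line-detection algorithm, so as a hedged stand-in, here is one common approach (Canny edges plus a probabilistic Hough transform) that a CMOS-camera line-following robot could use; all parameters are illustrative.

```python
# Hedged sketch: detect a guide line in a camera frame.
import cv2
import numpy as np

def detect_line(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)               # edge map
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=60, minLineLength=80, maxLineGap=10)
    if lines is None:
        return None
    # Pick the longest segment as the guide line.
    return max(lines[:, 0, :],
               key=lambda l: (l[2] - l[0]) ** 2 + (l[3] - l[1]) ** 2)
```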
8

Becker, Gabor Szedo, and Róbert Lovas. "Uniformity Correction of CMOS Image Sensor Modules for Machine Vision Cameras." Sensors 22, no. 24 (2022): 9733. http://dx.doi.org/10.3390/s22249733.

Abstract:
Flat-field correction (FFC) is commonly used in image signal processing (ISP) to improve the uniformity of image sensor pixels. Image sensor nonuniformity and lens system characteristics have been known to be temperature-dependent. Some machine vision applications, such as visual odometry and single-pixel airborne object tracking, are extremely sensitive to pixel-to-pixel sensitivity variations. Numerous cameras, especially in the fields of infrared imaging and staring cameras, use multiple calibration images to correct for nonuniformities. This paper characterizes the temperature and analog gain dependence of the dark signal nonuniformity (DSNU) and photoresponse nonuniformity (PRNU) of two contemporary global shutter CMOS image sensors for machine vision applications. An optimized hardware architecture is proposed to compensate for nonuniformities, with optional parametric lens shading correction (LSC). Three different performance configurations are outlined for different application areas, costs, and power requirements. For most commercial applications, the correction of LSC suffices. For both DSNU and PRNU, compensation with one or multiple calibration images, captured at different gain and temperature settings are considered. For more demanding applications, the effectiveness, external memory bandwidth, power consumption, implementation, and calibration complexity, as well as the camera manufacturability of different nonuniformity correction approaches were compared.
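For readers unfamiliar with flat-field correction, a minimal sketch of the core arithmetic follows: a dark frame removes the fixed-pattern offset (DSNU) and a flat frame equalises per-pixel response (PRNU and shading). The paper's hardware pipeline additionally interpolates calibration images across gain and temperature, which this sketch omits.

```python
# Hedged sketch of classic two-image flat-field correction.
import numpy as np

def flat_field_correct(raw, dark, flat, eps=1e-6):
    signal = raw.astype(np.float32) - dark        # remove fixed-pattern offset
    gain_map = flat.astype(np.float32) - dark     # per-pixel responsivity
    gain_map /= max(gain_map.mean(), eps)         # normalise to unit mean
    return signal / np.maximum(gain_map, eps)     # equalise pixel response
```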
9

Chambion, Bertrand, G. Moulin, S. Caplet, et al. "Curved CMOS Image Sensors: Packaging Issues, Applications and Roadmaps." Additional Conferences (Device Packaging, HiTEC, HiTEN, and CICMT) 2017, DPC (2017): 1–33. http://dx.doi.org/10.4071/2017dpc-tp3_presentation1.

Abstract:
In recent years, there has been increasing interest and demand in flexible electronics. A standard imaging system consists of an optical module (a set of lenses) and an image sensor. For wide-field-of-view applications, and due to the curved shape of lenses and mirrors, the image after propagation through the optical system is not flat but curved, i.e., off-axis light focuses in a curved manner. This problem is called Petzval field curvature aberration (Petzval FCA). It is generally fixed by additional complex lenses that "flatten" the image plane. We propose another approach, a hemispherical curved-sensor technology, which eliminates FCA directly at the sensor level and thus makes it possible to drastically simplify, and hence miniaturize, the optical system architecture. First, a brief state of the art on curved detectors is detailed for different application fields. The bendable capacities of hybrid detectors (including the interconnection layer) were fully investigated and tested in the past [1, 2]. Moreover, a hemispherically curved visible image sensor with better optical characteristics (image quality) was realized and patented by Sony in 2014 [3]. Recently, a tunable curving packaging technology with new optical possibilities was presented at the Electronic Components and Technology Conference 2016 [4]. CEA-LETI curving technologies are then explained, addressing fixed- and tunable-curvature packaging applications, including modeling and technical process steps. Characterization of curved-sensor prototypes has been performed to understand mechanical and electro-optical bending limits and is also presented in the paper. Based on an existing flat-sensor fisheye optical design, a curved focal plane is described, showing that it is possible to simplify the standard system from 14 lenses (11 types of optical glass), including 2 aspheric lenses, to only 9 lenses (−35%) and 3 types of optical glass, without aspheric surfaces. The benefits of a curved sensor fall into two categories: those related to the optical system design and those related to the quality of images produced by a camera with a curved sensor. Optical system: miniaturization of optical devices (volume, weight); simplification of the lens-alignment process (due to the reduced number of lenses); suppression of aspheric lenses; wide-field-of-view enhancement. Image quality: more homogeneous image quality (reduced image noise); similar or improved resolution and higher sensitivity; corrected distortion along the image edges. Finally, curved CMOS image sensor roadmaps and perspectives are discussed: from a market point of view, application surveys have been carried out for mass-market applications (mobile, consumer), photography, automotive, and more. From a technical standpoint, a curving-technology roadmap is proposed, led by application needs, covering single-chip, collective, and wafer-level processes.
10

Yamawaki, Akira, and Serikawa Seiichi. "A Wearable Supporting System for Visually Impaired People in Operating Capacitive Touchscreen." Applied Mechanics and Materials 103 (September 2011): 687–94. http://dx.doi.org/10.4028/www.scientific.net/amm.103.687.

Abstract:
We propose a wearable support system with a CMOS image sensor that helps visually impaired people operate capacitive touchscreens. The system attaches a lensless CMOS image sensor to the tip of the middle finger. The icons and buttons displayed on the touchscreen are replaced with color barcodes. When the CMOS image sensor directly touches the touchscreen surface, the color barcode is detected and decoded, and the decoded result is returned to the user through an interaction such as audio. The user then touches the button area around the color barcode with the forefinger to operate the target device. This system offers visually impaired people, who usually recognize materials by touch, a very easy and natural way of operating a touchscreen. No mechanical modification of the target device is needed; the modification can be made by changing its software. Since the color barcode is sensed by a lensless image sensor touching the touchscreen surface, each bar in the barcode is blurred, so we developed a simple image-processing method to handle this problem. We designed it as a hardware module to achieve a high-performance, low-power wearable device. An FPGA-based prototype demonstrates the hardware size and performance in an actual demonstration.
11

Wang, Tong, Geng Tian, and Cong Han. "Route Identification in Intelligent Traffic System Based on CMOS Image Sensor." Applied Mechanics and Materials 229-231 (November 2012): 1202–5. http://dx.doi.org/10.4028/www.scientific.net/amm.229-231.1202.

Abstract:
This paper introduces a method based on image-processing technology that allows a model car to identify its route on a white board. Our system uses an MC9S12XS128 as the control unit, a CMOS camera to capture data, and an H-bridge driver circuit to drive the motor. In addition, we use an ultrasonic sensor to measure distance and a wireless data-transmission module to protect the car from crashes. Finally, we establish an algorithm that controls the car and recognizes dotted lines in order to overtake the other model car in the speed-change area.
12

Ma, Qingxiang. "Prevention and control tracking system based on 3D face recognition." Journal of Computational Methods in Sciences and Engineering 24, no. 3 (2024): 1807–23. http://dx.doi.org/10.3233/jcm-230005.

Abstract:
A prevention and control tracking system based on three-dimensional (3D) face recognition was designed to improve the target-tracking accuracy of such systems. The TMS320DM6446 was selected as the chip of the ARM control module. The CMOS image-acquisition sensor of the image-acquisition module collected face images, which were transmitted to the 3D face recognition module. The 3D face recognition module used the Gabor wavelet algorithm to extract 3D facial contour features from the face image, and the LDA algorithm then recognized faces based on these features. The 3D face recognition results were compared with the faces in the face library to determine whether prevention and control tracking was necessary. When tracking was needed, the GPS tracking and positioning module embedded in the target's mobile device terminal was started and used to track the target. The tracking results were displayed to system users on a VGA display. The experimental results indicated that the designed system could accurately recognize faces and track targets based on the face-recognition results.
13

Sun, Senzhen, Guangyun Li, Yangjun Gao, and Li Wang. "Robust Dynamic Indoor Visible Light Positioning Method Based on CMOS Image Sensor." Photogrammetric Engineering & Remote Sensing 88, no. 5 (2022): 333–42. http://dx.doi.org/10.14358/pers.21-00077r3.

Abstract:
A real-time imaging recognition and positioning method based on a visible light communication (VLC) flat light source is proposed. The method images the VLC flat light source through the rolling-shutter effect of a complementary metal-oxide-semiconductor (CMOS) imaging sensor and obtains the outline of the light source's rectangular area. The light and dark stripe information in the image is processed with digital image-processing methods to realize light-source matching and recognition: an autocorrelation sequence is defined and used to obtain the identity of the light source, and the rectangular vertex coordinates of the flat light source enable high-precision visual positioning, with imaging assisted by an inertial-measurement-unit attitude sensor. A corresponding positioning module was developed for positioning tests. The test results indicate that the plane positioning error is less than 4.5 cm and the positioning frequency is greater than 10 Hz, providing a high-precision visual positioning solution for indoor positioning.
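A minimal sketch of the rolling-shutter decoding idea follows, assuming a grayscale region of interest over the flat light source; binarisation around the mean and a normalised autocorrelation stand in for the paper's stripe processing and identity matching.

```python
# Hedged sketch: recover the stripe pattern a rolling-shutter sensor
# records from a modulated LED panel, then characterise it by autocorrelation.
import numpy as np

def stripe_sequence(roi_gray):
    # Rolling-shutter rows expose at different times, so on/off modulation
    # appears as horizontal stripes: average each row, then binarise.
    rows = roi_gray.astype(np.float32).mean(axis=1)
    return (rows > rows.mean()).astype(np.int8)

def autocorr(seq):
    x = seq - seq.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    return ac / ac[0]                 # normalised autocorrelation sequence

# The lag of the first secondary peak of autocorr(stripe_sequence(roi))
# gives the stripe period, which maps to the LED's modulation / identity.
```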
14

Huang, Shih-Chia, Quoc-Viet Hoang, Trung-Hieu Le, et al. "An Advanced Noise Reduction and Edge Enhancement Algorithm." Sensors 21, no. 16 (2021): 5391. http://dx.doi.org/10.3390/s21165391.

Abstract:
Complementary metal-oxide-semiconductor (CMOS) image sensors can cause noise in images collected or transmitted in unfavorable environments, especially low-illumination scenarios. Numerous approaches have been developed to solve the problem of image noise removal. However, producing natural and high-quality denoised images remains a crucial challenge. To meet this challenge, we introduce a novel approach for image denoising with the following three main contributions. First, we devise a deep image prior-based module that can produce a noise-reduced image as well as a contrast-enhanced denoised one from a noisy input image. Second, the produced images are passed through a proposed image fusion (IF) module based on Laplacian pyramid decomposition to combine them and prevent noise amplification and color shift. Finally, we introduce a progressive refinement (PR) module, which adopts the summed-area tables to take advantage of spatially correlated information for edge and image quality enhancement. Qualitative and quantitative evaluations demonstrate the efficiency, superiority, and robustness of our proposed method.
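As background for the IF module, here is a minimal Laplacian-pyramid fusion sketch; the equal per-level weights are an assumption for illustration, whereas the paper fuses a denoised and a contrast-enhanced image with learned components.

```python
# Hedged sketch: fuse two images via Laplacian pyramids.
import cv2
import numpy as np

def lap_pyramid(img, levels=4):
    g = img.astype(np.float32)
    pyr = []
    for _ in range(levels):
        down = cv2.pyrDown(g)
        up = cv2.pyrUp(down, dstsize=(g.shape[1], g.shape[0]))
        pyr.append(g - up)            # band-pass (detail) layer
        g = down
    pyr.append(g)                     # low-pass residual
    return pyr

def fuse(img_a, img_b, levels=4):
    fused = [0.5 * a + 0.5 * b        # illustrative equal-weight blend
             for a, b in zip(lap_pyramid(img_a, levels),
                             lap_pyramid(img_b, levels))]
    out = fused[-1]
    for band in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(band.shape[1], band.shape[0])) + band
    return np.clip(out, 0, 255).astype(np.uint8)
```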
15

Zambrano, Benjamin, Sebastiano Strangio, Tommaso Rizzo, Esteban Garzón, Marco Lanuzza, and Giuseppe Iannaccone. "All-Analog Silicon Integration of Image Sensor and Neural Computing Engine for Image Classification." IEEE Access 10 (September 1, 2022): 94417–30. https://doi.org/10.1109/ACCESS.2022.3203394.

Abstract:
We have designed a fully-integrated analog CMOS cognitive image sensor based on a two-layer artificial neural network and targeted to low-resolution image classification. We have used a single poly 180 nm CMOS process technology, which includes process modules for realizing the building blocks of the CMOS image sensor. Our design includes all the analog sub-circuits required to perform the cognitive sensing task, from image sensing to output classification decision. The weights of the network are stored in single-poly floating-gate memory cells, using a single transistor per analog weight. This enables the classifier to be intrinsically reconfigurable, and to be trained for various classification problems, based on low-resolution images. As a case study, the classifier capability is tested using a low-resolution version of the MNIST dataset of handwritten digits. The circuit exhibits a classification accuracy of 87.8%, that is comparable to an equivalent software implementation operating in the digital domain with floating point data precision, with an average energy consumption of 6 nJ per inference, a latency of 22.5 μs and a throughput of up to 133.3 thousand inferences per second.
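A digital twin of the described two-layer network helps make the computation concrete; the layer sizes, activation, and random weights below are assumptions, standing in for the trained values the chip stores in its floating-gate cells.

```python
# Hedged sketch: the inference the analog array performs, in NumPy.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(64, 100))   # hidden layer: 10x10 input image, 64 units
W2 = rng.normal(size=(10, 64))    # output layer: 10 digit classes

def infer(img10x10):
    x = img10x10.reshape(-1) / 255.0
    h = np.tanh(W1 @ x)            # saturating, analog-friendly activation
    return int(np.argmax(W2 @ h))  # winner-take-all classification decision
```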
16

Zhang, Bing Yin, Mu Zheng Xiao, Zhi Jing Zhang, and Ting Hai Qin. "High-Resolution Real-Time Visual Inspection System for Micro Assembly." Applied Mechanics and Materials 870 (September 2017): 249–56. http://dx.doi.org/10.4028/www.scientific.net/amm.870.249.

Abstract:
For the precise assembly of miniature parts, precise inspection of part posture and real-time servo control of the assembly depend greatly on the performance of the visual inspection system. This paper proposes a high-resolution real-time visual inspection system for micro assembly. A CMOS image sensor and a high-speed digital signal processing chip were chosen for the design of the image-acquisition, image-processing, and image-display modules. High-accuracy display on a common display device was implemented with a video-encoding chip and an FPGA. Test results showed that the processing speed with preprocessing could reach 3.5 frames per second at 5-megapixel resolution, with little loss of display accuracy after threshold processing. A micro-parts assembly experiment and a high-accuracy peg-in-hole assembly experiment were performed to test the proposed system. This visual inspection system can be used for high-resolution real-time micro assembly and other real-time visual servo control.
17

Lee, Euncheol, Hyunsu Jun, Wonho Choi, et al. "CIS Band Noise Prediction Methodology Using Co-Simulation of Camera Module." Electronic Imaging 2020, no. 7 (2020): 328–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.7.iss-328.

Abstract:
This paper describes a CMOS image sensor (CIS) horizontal band noise reduction methodology that considers both on-chip and off-chip camera-module PCB design parameters. Horizontal band noise is a crucial issue for the high-quality cameras of modern smartphones. This paper discusses the mechanism of CIS horizontal band noise and proposes a solution based on optimizing design factors in the CIS and the camera module. The analog ground impedance and the bias voltage of the pixel-array transfer gate were found to be effective optimization parameters. Real experimental data prove that the proposed solution is instrumental in reducing horizontal band noise.
18

Guo, Ying, Zhaoshuo Tian, Zongjie Bi, Xiaohua Che, and Songlin Yin. "Research on Water Quality Chemical Oxygen Demand Detection Using Laser-Induced Fluorescence Image Processing." Sensors 25, no. 5 (2025): 1404. https://doi.org/10.3390/s25051404.

Abstract:
Chemical Oxygen Demand (COD) serves as a crucial metric for assessing the extent of water pollution attributable to organic substances. This study introduces an innovative approach for the detection of low-concentration COD in aqueous environments through the application of Laser-Induced Fluorescence (LIF) image processing. The technique employs an image sensor to capture fluorescence image data generated by organic compounds in water when excited by ultraviolet laser radiation. Subsequently, the COD value, indicative of the concentration of organic matter in the water, is derived via image processing techniques. Utilizing this methodology, an LIF image processing COD detection system has been developed. The system is primarily composed of a CMOS image sensor, an STM32 microprocessor, a laser emission module, and a display module. In this study, the system was employed to detect mixed solutions of sodium humate and glucose at varying concentrations, resulting in the acquisition of corresponding fluorescence images. By isolating color channels and processing the image data features, variations in RGB color characteristics were analyzed. The Partial Least Squares Regression (PLSR) analysis method was utilized to develop a predictive model for COD concentration values based on the average RGB color feature values from the characteristic regions of the fluorescence images. Within the COD concentration range of 0–12 mg/L, the system demonstrated a detection relative error of less than 10%. In summary, the system designed in this research, utilizing the LIF image processing method, exhibits high sensitivity, robust stability, miniaturization, and non-contact detection capabilities for low-concentration COD measurement. It is well-suited for rapid, real-time online water quality monitoring.
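The regression step maps mean RGB features to COD values; a minimal sketch with scikit-learn's PLSRegression follows, where the sample data, component count, and shapes are invented for illustration.

```python
# Hedged sketch: PLSR from mean RGB features to COD concentration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# X: one row of mean (R, G, B) values per fluorescence image (assumed data),
# y: reference COD concentrations in mg/L for the same samples.
X = np.array([[180.0, 120.0, 60.0],
              [150.0, 110.0, 58.0],
              [120.0,  95.0, 55.0],
              [100.0,  80.0, 52.0]])
y = np.array([2.0, 5.0, 8.0, 11.0])

model = PLSRegression(n_components=2).fit(X, y)
print(model.predict(np.array([[140.0, 100.0, 56.0]])))  # predicted COD, mg/L
```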
19

Lin, Sheng-Feng, and Cheng-Huan Chen. "Optical Design of Compact Space Autonomous Docking Instrument with CMOS Image Sensor and All Radiation Resistant Lens Elements." Applied Sciences 10, no. 15 (2020): 5302. http://dx.doi.org/10.3390/app10155302.

Abstract:
Built-in autonomous stereo-vision devices play a critical role in the autonomous docking instruments of space vehicles. Traditional stereo cameras for space autonomous docking use charge-coupled device (CCD) image sensors, whose size makes it difficult to reduce the overall dimensions. In addition, only the few outermost elements of the camera lens usually use radiation-resistant optical glass. In this paper, a complementary metal-oxide-semiconductor (CMOS) device is used as the image sensor, and radiation-resistant optical glass is introduced for all lens elements in order to make a compact and highly reliable space-grade instrument. Despite the limited available materials, a fixed-focus module with 7 lens elements and an overall length of 42 mm has been achieved, while meeting all the performance demands of the final vision-guided docking process.
20

Chen, Shuo, Jun Lei Song, Zi Min Yuan, Yang Liu, and Pei Pei Guo. "Diver Communication System Based on Underwater Optical Communication." Applied Mechanics and Materials 621 (August 2014): 259–63. http://dx.doi.org/10.4028/www.scientific.net/amm.621.259.

Abstract:
The underwater diver visible-light communication system, integrating information collection, transmission, and processing, realizes an optical communication device for divers' underwater wireless transmission and for underwater sensor networks. The front-end signal-acquisition module, capable of voice and image acquisition, uses a MEMS digital microphone and a high-performance CMOS camera to convert optical signals into digital ones. Source coding applies wavelet transforms, channel coding and decoding apply Turbo algorithms, and channel modulation and demodulation adopt PPM; compression, coding, and modulation run on TI's high-performance TMS320DM642 DSP platform to ensure the stability and reliability of data transmission. The back-end data-acquisition module uses a photomultiplier tube and its peripheral circuits to receive and convert optical signals. The display and storage modules are a TFT display and SD cards, providing data reception plus sound and image reproduction and storage.
21

Thomas, Dave, Jean Michailos, Nicolas Hotellier, et al. "Integration Aspects of the Implementation of Through Silicon Vias (TSV) for CMOS Image Sensors." Additional Conferences (Device Packaging, HiTEC, HiTEN, and CICMT) 2010, DPC (2010): 000539–56. http://dx.doi.org/10.4071/2010dpc-ta14.

Abstract:
One of the first device types to benefit from TSV implementation is the CMOS image sensor, an image-capture device designed to combine high image quality with a compact form factor that can be mass-produced at low cost. End markets include mobile phones, PDAs, and gaming consoles. STMicroelectronics is pioneering their production, based on ≤65 nm CMOS technology, at its 300 mm facility in Crolles. These sensors employ TSVs as part of a wafer-level package, allowing the camera module to be soldered directly to a phone PCB, thereby saving cost, space, and time to manufacture. SPTS's Versalis fxP system is being used to combine multiple TSV formation processes on one platform, including hard-mask deposition, hard-mask etching, TSV etching, partial PMD etching, dielectric liner deposition, and spacer etching to define the area for the metal contact. All processes are carried out on a silicon wafer bonded to a glass carrier, through which the final device is illuminated. We will present a TSV silicon etch process for 70 μm × 70 μm vias in a thinned 300 mm silicon wafer on glass carriers, with an etch-rate uniformity of ≤±1% and sidewall scalloping in the range of 80–210 nm. We will show that this process can be conveniently mixed in production with the various oxide etches. A PECVD dielectric liner deposited at <200 °C, having excellent coverage, thermal stability, and adhesion, combined with a breakdown voltage >8 MVcm−1 and leakage current <1E-7 Acm−2, will also be described. Process integration aspects will be discussed using high-resolution SEM images showing the key material interfaces in critical areas such as feature corners and sidewalls. Furthermore, the successful implementation of TSV technology in ST's CMOS image sensors will be demonstrated through a combination of electrical characteristics, parametric device data, and overall device performance and reliability.
22

Jeffrey Kuo, Chung-Feng, Wen-Chi Lo, Yan-Ru Huang, Hsin-Yang Tsai, Chi-Lung Lee, and Han-Cheng Wu. "Automated defect inspection system for CMOS image sensor with micro multi-layer non-spherical lens module." Journal of Manufacturing Systems 45 (October 2017): 248–59. http://dx.doi.org/10.1016/j.jmsy.2017.10.004.

23

Park, Cheonwi, Woo-Tae Kim, In-June Yeo, Moongu Jeon, and Byung-geun Lee. "An event detection module with a low-power, small-size CMOS image sensor with reference scaling." Analog Integrated Circuits and Signal Processing 99, no. 2 (2019): 359–69. http://dx.doi.org/10.1007/s10470-019-01421-1.

24

Xu, Li, and Dechun Zheng. "Data Acquisition and Performance Analysis of Image-Based Photonic Encoder Using Field-Programmable Gate Array (FPGA)." Journal of Nanoelectronics and Optoelectronics 18, no. 12 (2023): 1475–83. http://dx.doi.org/10.1166/jno.2023.3542.

Abstract:
With the continuous advancement of numerical control technology, the requirements on the position-detection resolution, precision, and size of photoelectric encoders in computer numerical control machine tools are increasingly stringent. In pursuit of high resolution and precision, this work investigates the principles of electronic subdivision and embedded hardware and designs a high-precision image-based photonic encoder using a field-programmable gate array (FPGA). The photonic encoder captures the pattern of a rotating code disk using a complementary metal-oxide-semiconductor (CMOS) image sensor. The encoder's core is the XC6SLX25T chip from the Spartan-6 series, with peripheral circuits comprising only A/D sampling and low-pass signal-processing circuits. The FPGA module handles the encoder's digital signal reception, waveform conversion, quadrature-frequency coarse-count calculation, fine-count subdivision calculation, and final position calculation. In experiments, the output signal of the photonic encoder contained considerable noise; after processing by the signal-processing module, the A- and B-phase signals were unaffected by the earlier interference and had a phase difference of 90°, meeting the requirements of the subsequent signal-processing modules. After fine-count subdivision processing, the waveform resolution within one cycle increases significantly, and after quadrupling the frequency, 30 subdivisions are performed within each cycle. Noise was introduced into the pattern positioning, i.e., patterns were positioned under different noise conditions. Experimental results show that an improved centroid algorithm helps further suppress noise and enhance measurement accuracy in the design of image-based photonic encoders.
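Since the abstract credits an improved centroid algorithm, a minimal sketch of plain centroid-based subpixel localisation is shown below for orientation; the background removal and noise gate are simple illustrative choices, not the paper's refinement.

```python
# Hedged sketch: intensity-weighted (centroid) subpixel localisation.
import numpy as np

def subpixel_centroid(profile):
    p = profile.astype(np.float64)
    p -= p.min()                       # crude background removal
    p[p < 0.1 * p.max()] = 0.0         # gate out low-level noise
    idx = np.arange(p.size)
    return (idx * p).sum() / p.sum()   # intensity-weighted peak position

line = np.array([0, 1, 2, 8, 15, 9, 3, 1, 0], dtype=float)
print(subpixel_centroid(line))         # ~4.1: the peak, with subpixel precision
```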
25

Chen, Xiaochuan, Xuan Feng, Yapeng Li, et al. "An Image Unmixing and Stitching Deep Learning Algorithm for In-Screen Fingerprint Recognition Application." Electronics 12, no. 18 (2023): 3768. http://dx.doi.org/10.3390/electronics12183768.

Abstract:
The market share of organic light-emitting diode (OLED) screens in consumer electronics has grown rapidly in recent years. To increase the screen-to-body ratio of OLED phones, under-screen or in-screen fingerprint recognition is a must-have option. Current commercial hardware schemes include adhesive, ultrasonic, and under-screen optical ones; no mature in-screen solution has been proposed. In this work, we designed and manufactured an OLED panel with an in-screen fingerprint recognition system for the first time, by integrating an active sensor array into the OLED panel. The sensor and display module share the same set of fabrication processes during manufacturing. Compared with the widely available commercial under-screen schemes, the proposed in-screen solution achieves a much larger functional area, better flexibility, and smaller thickness, while significantly reducing module cost. Because the integration leaves too short an optical distance, a point-light-source scheme, implemented by lighting up a single OLED pixel or several adjacent ones, has to be adopted instead of the conventional area-source scheme used in CMOS image sensor (CIS)-based solutions. We designed a pattern for the point light sources and developed an optical unmixing network model to realize the unmixing and stitching of the images obtained by each point light source within the same exposure time. After training, data verification of this network model shows that the deep-learning algorithm outputs a stitched image of large area and high quality, with FRR = 0.7% at FAR = 1:50k. Despite the poorer quality of the raw images and a much more complex algorithm compared with current commercial solutions, the proposed algorithm still obtains results comparable to peer studies, proving its effectiveness. The time required for fingerprint capture in our in-screen scheme is thus greatly reduced, overcoming one of the main obstacles to commercial application.
26

Zheng, Qi, Huihuang Wu, Haiyan Jiang, Jiejie Yang, and Yueming Gao. "Development of a Smartphone-Based Fluorescent Immunochromatographic Assay Strip Reader." Sensors 20, no. 16 (2020): 4521. http://dx.doi.org/10.3390/s20164521.

Abstract:
Fluorescence immunochromatographic assay (FICA) is a rapid immunoassay technique that has the characteristics of high precision and sensitivity. Although image FICA strip readers have the advantages of high portability and easy operation, the use of high-precision complementary metal oxide semiconductor (CMOS) image sensors leads to an increase in overall cost. Considering the popularity of CMOS image sensors in smartphones and their powerful processing functions, this work developed a smartphone-based FICA strip reader. An optical module suitable for the test strips with different fluorescent markers was designed by replacing the excitation light source and the light filter. An android smartphone was used for image acquisition and image denoising. Then, the test and control lines of the test strip image were recognized by the sliding window algorithm. Finally, the characteristic value of the strip image was calculated. A linear detection range from 10 to 5000 mIU/mL (R2 = 0.95) was obtained for human chorionic gonadotrophin with the maximum relative error less than 9.41%, and a linear detection range from 5 to 4000 pg/mL (R2 = 0.99) was obtained for aflatoxin B1, with the maximum relative error less than 12.71%. Therefore, the smartphone-based FICA strip reader had high portability, versatility, and accuracy.
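A minimal sketch of the sliding-window line search on a strip image follows: average each row across the strip, smooth with a sliding window, and keep bright local maxima (fluorescent lines emit light, so they appear bright). The window size and threshold are assumptions.

```python
# Hedged sketch: locate test/control lines along a strip image's axis.
import numpy as np

def find_lines(strip_gray, win=9):
    profile = strip_gray.astype(np.float32).mean(axis=1)   # one value per row
    smooth = np.convolve(profile, np.ones(win) / win, mode="same")
    thr = smooth.mean() + smooth.std()      # fluorescent lines are brighter
    return [i for i in range(1, len(smooth) - 1)
            if smooth[i] > thr
            and smooth[i] >= smooth[i - 1] and smooth[i] >= smooth[i + 1]]
    # returns row indices of candidate test and control lines
```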
27

Huang, J., R. He, X. Niu, et al. "Design of Nupix-A2, a Monolithic Active Pixel Sensor for heavy-ion physics." Journal of Instrumentation 18, no. 11 (2023): C11014. http://dx.doi.org/10.1088/1748-0221/18/11/c11014.

Abstract:
The High Intensity heavy-ion Accelerator Facility (HIAF) is being constructed to generate intense beams of primary and radioactive ions for a wide range of research fields. Hence, a Monolithic Active Pixel Sensor (MAPS) named Nupix-A2 has been developed in a 130-nm high-resistivity CMOS process. The Nupix-A2 can simultaneously measure the energy, arrival time, and position of particle hits. It consists of a 128 × 128 pixel array, a digital-to-analog converter array, and a digital control module. The Nupix-A2 can measure energy depositions from 300 e- to over 50 ke- and time durations from 13 μs to 140 μs. The sensor offers both a full readout mode and a fast readout mode. In full-readout mode, all pixels measure arrival time and energy, which is suitable for real-time beam monitoring and image reconstruction. In fast-readout mode, only the positions of particle hits are detected and read out. This paper presents the design of the Nupix-A2.
28

Wang, Ya Zhen, Ying Jun Chen, and Huang Ping. "Study on the Design of a Magnetic Started Wireless Capsule Endoscopy." Applied Mechanics and Materials 195-196 (August 2012): 864–67. http://dx.doi.org/10.4028/www.scientific.net/amm.195-196.864.

Abstract:
Based on the minimal CMOS image sensor OV6920, a magnetically controlled wireless capsule endoscope has been designed following the principles of miniaturization and low power consumption. The peripheral circuit of the OV6920 is designed, and with optical design the power consumption of the transmitting circuit is reduced according to the relationships between the series resistance and the voltage, current, and transmit power. The total size of the system is only φ13×29 mm, and the power consumption is about 100 mW with all modules connected. The clear images captured in the imaging experiments prove that the system design is feasible.
29

Segawa, M., M. Ono, S. Musha, Y. Kishimoto, and A. Ohashi. "A CMOS image sensor module applied for a digital still camera utilizing the TAB on glass (TOG) bonding method." IEEE Transactions on Advanced Packaging 22, no. 2 (1999): 160–65. http://dx.doi.org/10.1109/6040.763187.

30

Kumar, Santosh. "What is driving the TSV business: Market & Technology Trends." Additional Conferences (Device Packaging, HiTEC, HiTEN, and CICMT) 2019, DPC (2019): 000808–33. http://dx.doi.org/10.4071/2380-4491-2019-dpc-presentation_wp1_060.

Abstract:
TSV-interconnect-based 3D/2.5D packaging has gained significant attention since its introduction in FPGAs (for die partitioning) and HBM-integrated GPU modules (for gaming applications). The performance potential offered by this technology is unequalled by any other packaging platform today. High-end applications like deep learning, datacenter networking, AR/VR, and autonomous driving are becoming real, thereby pushing the limits of other current packaging platforms. Fueled by increasing bandwidth needs for moving data in cloud-computing and supercomputing applications, performance-driven markets have adopted 3D stacked technologies one after another. Imaging, the first market to adopt 3D integration, is propelling the market with an increasing number of sensors in smartphones and tablets, including 3D imaging. TSV-based products can be classified into three ranges: low-, middle-, and high-end. Middle- and high-end products such as CMOS image sensors, memory cubes, and interposers are based on a via-middle process. In low-end products we can also find via-middle TSVs (e.g., in Apple's fingerprint sensor), but for cost reasons the MEMS industry essentially uses a via-last process, which is cheaper than a via-middle process. TSV's penetration rate in low-end products will remain stable, with the main source of growth being RF filters in smartphone front-end modules, whose number keeps increasing to support the different frequency bands used in the 5G mobile communications protocol. This presentation will discuss the market and technology trends of TSV-based 3D/2.5D packaging.
31

Zoladz, M., P. Grybos, and K. Choręgiewicz. "Test measurements of an ASIC for X-ray material discrimination by using on-chip time domain integration and a CdTe sensor." Journal of Instrumentation 19, no. 03 (2024): C03033. http://dx.doi.org/10.1088/1748-0221/19/03/c03033.

Abstract:
Using time-domain integration (TDI) and a two-dimensional sensor is beneficial for X-ray imaging of moving objects. Applying on-chip instead of off-chip TDI decreases the required data throughput between an ASIC and the backend several times [1], which in turn allows us to significantly simplify the ASIC itself as well as the backend. We present the results of test measurements of an ASIC dedicated to X-ray material discrimination using on-chip TDI and a CdTe sensor. The main part of the ASIC is a 192 × 64 pixel matrix. The pixel size is 100 μm × 100 μm, so the chip size is approximately 6.7 mm × 2 cm. The chip is manufactured in a 130 nm CMOS technology with 8 metal layers. A single-pixel analog front-end consists of a charge-sensitive amplifier, a shaper, and three discriminators followed by counters. The test was conducted with a 0.75 mm thick CdTe sensor. First, we show the results of the discriminator offset correction, followed by an evaluation of the pixel analog front-end gain and noise conducted with single-energy radiation. Next, results for imaging of moving objects are obtained using an industrial X-ray machine for food inspection (continuous energy spectrum). This is carried out to present an example image and to evaluate the image signal-to-noise ratio for different energy discriminator thresholds, sensor bias voltages, detector module temperatures, and object speeds.
32

Fujimori, Noriyuki, Takatoshi Igarashi, Takahiro Shimohata, et al. "Development of Wafer Level Chip Size Packaging Process and its Application to the CMOS Image Sensor Module for Medical Device." IEEJ Transactions on Sensors and Micromachines 137, no. 2 (2017): 48–58. http://dx.doi.org/10.1541/ieejsmas.137.48.

33

Sun, Rong Chun, and Yan Piao. "Platform of Image Acquisition and Processing Based on DSP and FPGA." Applied Mechanics and Materials 457-458 (October 2013): 932–37. http://dx.doi.org/10.4028/www.scientific.net/amm.457-458.932.

Abstract:
Real-time image acquisition and processing systems are widely used. To speed up the development of related projects, an image acquisition and processing platform is presented. In the design, a TMS320C6416 DSP is used as the high-speed signal-processing core, and an XC3S700 FPGA provides embedded memory and connects the DSP to the CMOS image sensor. To reduce the burden on the CPU, EDMA is used for image data transfers. The main function modules of the platform cover acquisition, processing, USB communication, and VGA display. The platform can acquire and display images and perform image-processing operations such as geometric transforms, orthographic transforms, feature extraction, target recognition, and target tracking. Experimental results show that the design of the image acquisition and processing system is reasonable, feasible, and of practical value.
34

Purwanto and Aris Sunawar. "PENCEGAH KEBAKARAN AKIBAT PANAS PADA INSTALASI LISTRIK MENGGUNAKAN DETEKSI THERMAL CAMERA BERBASIS MICROPROSESOR." Journal of Electrical Vocational Education and Technology 2, no. 1 (2020): 33–38. http://dx.doi.org/10.21009/jevet.0021.06.

Abstract:
This study aims to produce heat-detection equipment for cables using the thermal-camera method, which becomes the output of a system that secures electrical equipment against fire hazards caused by excessive heat. The research used the laboratory experimental method of building a prototype: the equipment was designed first, a prototype was then built to that design, and the prototype was subsequently tested. The prototype heat-detection system uses a thermal-camera module controlled by an Arduino, with a CMOS camera as the image viewer and an MLX90614 thermal sensor as the temperature input; the Arduino output is fed into a computer system that combines the images from the digital camera and the thermal sensor to obtain a temperature-distribution map over the digital image. Each subsystem was tested individually before being assembled into the complete system, so that valid data were obtained on the capability of each subsystem. Based on the measurements and tests, it can be concluded that the tool was designed, built, and verified to detect temperature differences by displaying images; the temperature differences are obtained by combining the camera output with the thermal sensor, so the proposed system meets the research criteria and the research hypothesis can be accepted.
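A minimal host-side sketch of the described camera-plus-thermal-sensor combination follows, assuming an MLX90614 on I2C bus 1 and a USB CMOS camera; the register address and 0.02 K/LSB scale follow the MLX90614 datasheet, while the alarm threshold and wiring are invented examples.

```python
# Hedged sketch: stamp the MLX90614 object temperature onto a camera frame.
import cv2
from smbus2 import SMBus

MLX_ADDR, T_OBJ = 0x5A, 0x07               # default address, Tobj1 register

def read_object_temp_c(bus):
    raw = bus.read_word_data(MLX_ADDR, T_OBJ)   # 16-bit word, LSB first
    return raw * 0.02 - 273.15                  # 0.02 K per LSB, Kelvin -> C

cap = cv2.VideoCapture(0)
with SMBus(1) as bus:
    ok, frame = cap.read()
    if ok:
        t = read_object_temp_c(bus)
        color = (0, 0, 255) if t > 60.0 else (0, 255, 0)  # example alarm level
        cv2.putText(frame, f"{t:.1f} C", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, color, 2)
        cv2.imwrite("thermal_overlay.png", frame)
```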
35

Ruiz-Beltrán, Camilo A., Adrián Romero-Garcés, Martín González-García, Rebeca Marfil, and Antonio Bandera. "FPGA-Based CNN for Eye Detection in an Iris Recognition at a Distance System." Electronics 12, no. 22 (2023): 4713. http://dx.doi.org/10.3390/electronics12224713.

Abstract:
Neural networks are the state-of-the-art solution to image-processing tasks. Some of these networks are relatively simple, but the popular convolutional neural networks (CNNs) can consist of hundreds of layers. Unfortunately, the excellent recognition accuracy of CNNs comes at the cost of very high computational complexity, and one of the current challenges is managing the power, delay, and physical-size limitations of hardware solutions dedicated to accelerating their inference. In this paper, we describe the embedding of an eye-detection system on a Zynq XCZU4EV UltraScale+ multiprocessor system-on-chip (MPSoC). The eye detector is used in the framework of a remote iris-recognition system, which requires high-resolution images captured at high speed as input. Given the high rate of eye regions detected per second, it is also important that the detector outputs only eye images that are in focus, discarding all those seriously affected by defocus blur. In this proposal, the network is trained only with correctly focused eye images, to assess whether it can differentiate this pattern from that of out-of-focus eye images. Exploiting the neural network's ability to work with multi-channel input, the inputs to the CNN are the grey-level image and a high-pass-filtered version of the kind typically used to determine whether the iris is in focus. The complete system synthesizes other cores and implements the CNN using the so-called Deep Learning Processor Unit (DPU), the intellectual-property (IP) block released by AMD/Xilinx. Compared with previous hardware designs for FPGA-based CNNs, the DPU IP supports extensive deep-learning core functions, and developers can leverage DPUs to conveniently accelerate CNN inference. Experimental validation has been successfully carried out in a real-world scenario with walking subjects, demonstrating that it is possible to detect only eye images that are in focus. The prototype module includes a CMOS digital image sensor that provides 16-Mpixel images and outputs a stream of detected eyes as 640 × 480 images. The module correctly discards up to 95% of the eyes present in the input images as not being correctly focused.
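The two-channel input preparation can be sketched briefly; the Laplacian below is one common high-pass focus cue and is an assumption, as the paper does not fix the exact filter at this level of description.

```python
# Hedged sketch: build the CNN's two-channel input (grey + high-pass copy).
import cv2
import numpy as np

def make_cnn_input(gray_u8):
    highpass = cv2.Laplacian(gray_u8, cv2.CV_32F, ksize=3)  # focus cue
    g = gray_u8.astype(np.float32) / 255.0
    h = cv2.normalize(highpass, None, 0.0, 1.0, cv2.NORM_MINMAX)
    return np.stack([g, h], axis=-1)   # HxWx2 tensor for the detector
```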
36

Yang, Zhanglei, Haipeng Li, Mingbo Hong, Chen-Lin Zhang, Jiajun Li, and Shuaicheng Liu. "Single Image Rolling Shutter Removal with Diffusion Models." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 9 (2025): 9373–81. https://doi.org/10.1609/aaai.v39i9.33015.

Abstract:
We present RS-Diffusion, the first Diffusion-Models-based method for single-frame rolling shutter (RS) correction. RS artifacts compromise the visual quality of frames due to the row-wise exposure of CMOS sensors. Most previous methods have focused on multi-frame approaches, using temporal information from consecutive frames for motion rectification. However, few approaches address the more challenging but important single-frame RS correction. In this work, we present an "image-to-motion" framework via diffusion techniques, with a designed patch-attention module. In addition, we present the RS-Real dataset, comprising captured RS frames alongside their corresponding global shutter (GS) ground-truth pairs. The GS frames are corrected from the RS ones, guided by the corresponding inertial measurement unit (IMU) gyroscope data acquired during capture. Experiments show that RS-Diffusion surpasses previous single-frame RS methods, demonstrates the potential of diffusion-based approaches, and provides a valuable dataset for further research.
37

Arena, A., G. Cantatore, and M. Karuza. "Digital holographic interferometry method for tracking detector modules displacement." Journal of Instrumentation 18, no. 11 (2023): C11030. http://dx.doi.org/10.1088/1748-0221/18/11/c11030.

Abstract:
In high-energy particle-physics scattering experiments, the precision of the reconstructed particle tracks can be fundamental. For this reason, a method for detecting the displacement of tracking-detector modules was developed. The modules are silicon planes mounted on a frame and used in the MUonE project, which aims at a precision measurement of the scattering angle in elastic muon-electron scattering; from the scattering angle, the hadronic contribution to the anomalous magnetic moment of the muon is extracted. To achieve the desired accuracy, the position of the tracking-detector planes must be monitored, and the allowable relative displacements must be less than 10 μm. To meet the specifications and to monitor as large an area of the detector as possible, a digital holographic interferometer was developed, based on a novel lensless design in off-axis holographic geometry. Light from a fiber-coupled laser source is split by a fiber beam splitter, with one output used to illuminate the detector plane and the other used as the reference beam. The two beams produce an interference pattern on a CMOS image sensor. To obtain relative displacement information, successive images are superimposed on an initial reference image and reconstructed by solving the Rayleigh-Sommerfeld diffraction integral, taking into account the spherical wavefronts of the beams. The interference fringes that appear in the reconstructed holographic image provide a measure of the relative displacement of the detector plane with respect to its initial position. The performance of the reconstruction method was verified with the proposed setup at a real tracking station.
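For orientation, a minimal numerical reconstruction sketch follows, using the angular-spectrum method as a plane-wave simplification of the Rayleigh-Sommerfeld integral the paper solves (the real setup also accounts for the beams' spherical wavefronts); the wavelength, pixel pitch, and distance are placeholders.

```python
# Hedged sketch: back-propagate a recorded hologram in free space.
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)           # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)                    # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# hologram: float array from the CMOS sensor; reconstructed intensity:
# np.abs(angular_spectrum(hologram, 633e-9, 3.45e-6, 0.12)) ** 2
```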
38

Ayan, Chakraborty, Das Suvajeet, and Mondal Biplob. "Integrating Neural Network for Pest Detection in Controlled Environment Vertical Farm." Indian Journal of Science and Technology 15, no. 17 (2022): 829–38. https://doi.org/10.17485/IJST/v15i17.353.

Abstract:
Background: An integrated system for creating and maintaining a controlled environment ideal for a vertical-farming prototype is demonstrated. The optimal artificial-light requirements for different growth stages of tomato and chilli plants are studied in detail, and a CNN model-based method for the detection and classification of leaf disease is developed. Methods: The artificial environment, ensuring adequate artificial lighting, moisture, and minerals, was created by fitting various sensors and actuators to the plant beds, connected in a network through a cloud-based remote server. A CMOS image sensor module was used to monitor the various stages of plant growth. Findings: The duration and intensity requirements for germination, vegetation, and flowering of both tomato and chilli plants are relatively lower under artificial light than under sunlight. At the end of the fifth epoch, the developed convolutional neural network model for the detection and classification of leaf disease produced training and validation accuracies of 84.8% and 67.2%, respectively. Novelty: For different growth stages of tomato and chilli plants in north-eastern India, the optimal artificial-light requirement is studied by exposing the plants to different light intensities. The study was conducted during summer (May-June), when the average sun exposure in eastern India was ~130-190 hours. The captured images and generated data were used to monitor the status of the crops and to identify diseases with deep-learning models. A convolutional neural network (CNN) model-based method for the detection and classification of leaf disease is presented. Keywords: Convolutional Neural Network; Vertical Farming; Artificial Light; Pest Detection; Light Emitting Diode
APA, Harvard, Vancouver, ISO, and other styles
39

Wunderer, Cornelia B., Aschkan Allahgholi, Matthias Bayer, et al. "Detector developments at DESY." Journal of Synchrotron Radiation 23, no. 1 (2016): 111–17. http://dx.doi.org/10.1107/s1600577515022237.

Full text
Abstract:
With the increased brilliance of state-of-the-art synchrotron radiation sources and the advent of free-electron lasers (FELs) enabling revolutionary science with EUV to X-ray photons comes an urgent need for suitable photon imaging detectors. Requirements include high frame rates, very large dynamic range, single-photon sensitivity with a low probability of false positives, and (multi-)megapixel formats. At DESY, one ongoing development project – in collaboration with RAL/STFC, Elettra Sincrotrone Trieste, Diamond, and Pohang Accelerator Laboratory – is the CMOS-based soft X-ray imager PERCIVAL. PERCIVAL is a monolithic active-pixel sensor back-thinned to access its primary energy range of 250 eV to 1 keV with target efficiencies above 90%. According to preliminary specifications, the roughly 10 cm × 10 cm, 3.5k × 3.7k monolithic sensor will operate at frame rates up to 120 Hz (commensurate with most FELs) and use multiple gains within 27 µm pixels to measure from 1 to ∼100 000 simultaneously arriving photons (at 500 eV). DESY is also leading the development of the AGIPD, a high-speed detector based on hybrid pixel technology intended for use at the European XFEL. This system is being developed in collaboration with PSI, University of Hamburg, and University of Bonn. The AGIPD allows single-pulse imaging at 4.5 MHz frame rate into a 352-frame buffer, with a dynamic range allowing single-photon detection and detection of more than 10 000 photons at 12.4 keV in the same image. Modules of 65k pixels each are configured to make up (multi-)megapixel cameras. This review describes the AGIPD and the PERCIVAL concepts and systems, including some recent results and a summary of their current status. It also gives a short overview of other FEL-relevant developments where the Photon Science Detector Group at DESY is involved.
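To put the quoted figures in perspective, a back-of-envelope calculation (ours, with an assumed 2 bytes per raw pixel; the actual gain and encoding schemes differ) shows why PERCIVAL streams full frames at 120 Hz while AGIPD must buffer bursts in-pixel:

```python
BYTES_PER_PIXEL = 2  # illustrative assumption only

# PERCIVAL: ~3.5k x 3.7k monolithic sensor at up to 120 Hz.
percival_bytes_per_s = 3500 * 3700 * BYTES_PER_PIXEL * 120
print(f"PERCIVAL raw stream: ~{percival_bytes_per_s / 1e9:.1f} GB/s")  # ~3.1 GB/s

# AGIPD: 4.5 MHz single-pulse imaging into a 352-frame in-pixel buffer,
# since no off-chip link could stream megahertz frames in real time.
burst_duration_s = 352 / 4.5e6
print(f"AGIPD 352-frame burst: ~{burst_duration_s * 1e6:.0f} microseconds")  # ~78 us
```

Buffering a full burst in well under a millisecond and reading it out between pulse trains is what lets AGIPD keep up with the European XFEL's burst structure.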
APA, Harvard, Vancouver, ISO, and other styles
40

Shim, Dongha, and Jason Yi. "Ultra-wide-angle Wireless Endoscope with a Backend-camera-controller Architecture." International Journal of Electronics and Telecommunications 63, no. 1 (2017): 19–24. http://dx.doi.org/10.1515/eletel-2017-0003.

Full text
Abstract:
This paper presents a wireless endoscope with an ultra-wide FOV (Field of View) of 130° and HD resolution (1280×720 pixels). The proposed endoscope consists of a camera head, cable, camera controller, and wireless handle. The lens module, with a 150° AOV (Angle of View), is fabricated by plastic injection molding to reduce manufacturing costs. A serial CMOS image sensor using the MIPI (Mobile Industry Processor Interface) CSI-2 (Camera Serial Interface-2) interface physically separates the camera processor from the camera head. The camera head and the cable have a compact structure owing to the BCC (Backend-Camera-Controller) architecture: the camera head measures 8×8×26 mm and the camera controller 7×55 mm. The wireless handle supports UWB (Ultra-Wide-Band) or Wi-Fi communication for transmitting video data. The UWB link supports a maximum data transfer rate of ~37 Mbps, enough to transmit 1280×720-pixel video at a frame rate of 30 fps in the MJPEG (Motion JPEG) format. Although the Wi-Fi link provides a lower data transfer rate (max. ~8 Mbps), it has the advantage of flexible interoperability with various mobile devices. The latency of the UWB link is measured to be ~0.1 s; the Wi-Fi link has higher latency (~0.5 s) due to its lower data transfer rate. The proposed endoscope demonstrates the feasibility of a high-performance yet low-cost wireless endoscope using the BCC architecture. To the best of the authors' knowledge, the proposed endoscope has the largest FOV among all presently existing wireless endoscopes.
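A quick calculation (ours; it assumes 16-bit YUV 4:2:2 samples from the sensor, which the paper does not specify) shows why MJPEG compression is essential on the ~37 Mbps UWB link:

```python
WIDTH, HEIGHT, FPS = 1280, 720, 30
BITS_PER_PIXEL = 16          # assumption: YUV 4:2:2 output from the image sensor
UWB_LINK_MBPS = 37

raw_mbps = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL / 1e6
print(f"uncompressed video:   ~{raw_mbps:.0f} Mbps")                 # ~442 Mbps
print(f"required compression: ~{raw_mbps / UWB_LINK_MBPS:.0f}:1")    # ~12:1
```

A ~12:1 ratio is comfortably within MJPEG's range, and because MJPEG compresses each frame independently, it also keeps per-frame latency low, consistent with the ~0.1 s figure reported for the UWB link.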
APA, Harvard, Vancouver, ISO, and other styles
41

Hou, Chongyang, Shuye Zhang, Rui Liu, et al. "Boosting flexible electronics with integration of two‐dimensional materials." InfoMat 6, no. 7 (2024): e12555. https://doi.org/10.1002/inf2.12555.

Full text
Abstract:
This dataset includes original TIFF and PNG data from original research within the project EBEAM: 15 final composite figures, two schemes, seven tables, and the final PDF version of the article "Boosting flexible electronics with integration of two-dimensional materials".

Figure 1. Machine learning-assisted temperature-pressure electronic skin with decoupling capability (TPD e-skin) enables object recognition. (A) Principle of using machine learning to recognize objects via e-skin. (B) Structure of a one-dimensional convolutional neural network for TPD object recognition. (C) Breakthrough in grasping objects made from 15 different materials by prosthetics. (D) Temperature-pressure frequency waveforms generated by prosthetic grasping of 15 different materials, realized by neuromorphic coding. (E) Visualization of 15 samples of signals of different frequencies by t-distributed stochastic neighbor embedding (t-SNE). (F) Confusion matrices for 15 types of object recognition. (G) Cognitive outcome waveform during expiration. (H) Identification and waveform of grasping thermoplastic bottles.

Figure 2. Application scenarios for piezoresistive sensors based on polyetherimide (PET)/MXene designs. (A) Wireless transmission system for MXene-based sensor signals; a Bluetooth module transmits the signal produced in response to pressure on the sensor. (B) Use of the MXene-based sensor to detect the pressure of different chess pieces and thus locate them. (C) Pressure detection during the swing of a robotic arm. (D) Use of the brightness of an LED to reflect changes in the pressure applied to the sensor. (E) Application of the MXene-based sensor to the skin for Joule-heating experiments. (F) Temperature distribution of MXene-based sensors at different voltages. (G) Infrared thermal imaging of the MXene-based sensor at increasing voltage, corresponding to the test results in (F).

Figure 3. Arrayed flexible graphene thermal patches for patient skin temperature and hyperthermia monitoring. (A) Illustration of the process of detecting and sensing skin temperature using graphene patches. Skin temperature perception and auxiliary heating are achieved by integrating array-based sensor patches into medical patches; the measured temperature signal is processed by a readout circuit and transmitted to a phone via Bluetooth in real time, and the heating temperature of the graphene patches can be controlled from the phone. (B) Photos of a graphene capacitive sensor arranged in an 8×8 array. (C) Interface structure of the sensor array utilizing graphene as an electrode. (D) Partial enlarged view of a graphene sensor. (E) Enlarged image of the flexible-substrate functional area of the graphene patch in (B); active areas include a readout front end, an analog-to-digital converter, a microcontroller, a crystal oscillator, Bluetooth Low Energy (BLE), a DC/DC converter, and a battery. (F) Framework diagram of the wireless measurement and heating system for graphene patches, which allows precise temperature measurement by alternating between measurement and heating states.

Figure 4. Electronic skin, artificial retina, and electronic nose designed with machine learning algorithms. (A) Deep learning based on PdSe2 piezoresistive sensors for pulse recognition and temperature readout: deep-learning steps for converting resistance to temperature with the input of three distinct pulse signals; pulse temperature is detected from the pressure signals of pulse beats. (B) Temperature-pulse curves representing distinct pulse shapes. (C) Stabilization of training and validation losses at low values after 500 training cycles. (D) Temperature readout through deep learning with 98% accuracy. (E) MoSSe-based artificial retina with integrated sensing, storage, and computing functions. (F) Graphene-based artificial nose identifying four different volatile organic molecules.

Figure 5. (A-C) MoS2-based field-effect transistor employed for information encryption. (D) Image captured by the MoS2 image sensor. (E) Transfer characteristics of the MoS2 transistor, determining binary values from the intercept of the linear fit. (F) Histogram of the gate-voltage intercept, with blue denoting 0 and red denoting 1; the reference gate voltage is -1 V. (G) MoS2 transistor leakage-current curve in the on/off state. (H) Histogram of drain current for binary data: 0 when ID < 18 µA and 1 when ID > 18 µA. (I) PUF pattern. (J) Sequential steps of image encryption and decryption: the image captured by the sensor is encrypted with a PUF key and subsequently decrypted with the corresponding key.

Figure 6. Optical non-contact control system based on a PtTex-Si sensor array for human-machine interaction. (A) Scheme of a photomultiplier transistor. (B) Optical image of the sensor array. (C) Flowchart of shadow encoding and recognition; converting the photocurrent into a discrete signal enables instruction retrieval. (D) Process of encoding a photocurrent signal using the gradient approach. (E) Output displaying the encoding of shadows.

Figure 7. NbS2-MoS2-based neurally inspired optical sensor array for high-precision dynamic image recognition and single-point motion-trajectory extraction. (A) Schematic diagram of the NbS2-MoS2-based vision system, which consists of a 100-pixel NbS2-MoS2 optical sensing array. (B) Scheme and circuit diagram of the NbS2-MoS2 optical sensing array. (C) Optical micrograph of a 100-pixel sensor array and scheme of a NbS2-MoS2 phototransistor.

Figure 8. Supercapacitor woven bracelet based on the MXene coaxial structure for charging a watch. (A) CV curves of a single zinc-ion hybrid fiber supercapacitor (FSC) and of two supercapacitors connected in series and in parallel, respectively. (B) Plots of capacitance and energy of the supercapacitors versus length. (C) Bracelet woven from a 1.5 m coaxial FSC; the bracelet powers a watch and LEDs in a glove.

Figure 9. Three-dimensional motion detection using an MXene-based TENG. (A) Scheme of a 4×4 TENG sensor array; the motion trajectory of a finger positioned above it can be captured by the array. (B) Perception of linear finger motion above the sensor. (C) Capture of the finger motion trajectory when moving in a curved path. (D) Blind navigation by installing the sensor on a walking cane. (E) Photograph of the non-contact sensor. (F) Voltage signal output when the finger moves above the six planes (A-F).

Figure 10. Graphene human-robot interfaces empowered by machine learning based on graphene acoustic transducers. (A) Auditory and vocal capabilities of the robot empowered by the system; after training with convolutional neural networks, it can recognize different identities and emotional characteristics, allowing intelligent communication and responses. (B) Schematic representation of the graphene-PI-graphene structure formed through laser irradiation. (C) SEM image of graphene. (D) Human-robot interfaces attached to a robot. (E, F) TENG operating in microphone mode, detecting sound vibrations through surface-charge changes. (G) TENG in loudspeaker mode, generating acoustic waves through the thermoacoustic effect.

Figure 11. Spiking neural network (SNN) structure based on a hybrid 2D-CMOS microchip. (A) SNN structure: an image from the Modified National Institute of Standards and Technology (MNIST) database is flattened into a column vector feeding 784 input neurons, with pixel intensity encoded by the firing pattern of the input neurons; unsupervised training of the neurons connecting the input and excitation layers yields labeled trained neurons, whose firing patterns are passed to a decision block for feedback, allowing inference of the presented images. (B) Synaptic-connection evolution training conducted on 400 excitatory and 400 inhibitory layer neurons. (C) Confusion matrix, giving a visual representation of dataset accuracy. (D) 50 Monte Carlo simulations of the 400 excitatory and 400 inhibitory layer neurons of the SNN; after 50 iterations, the system accuracy reached 90%. (E) Schematic diagram of the neuron-synapse-neuron module circuit design based on h-BN. (F) SPICE-like simulation of synaptic signals from one-transistor-one-memristor cells. (G) Neuronal membrane potential simulated by SPICE simulation.

Figure 12. Comparison between the traditional cross-computing structure and the cyclic-logic computing scheme. (A) Cross-operation structure: the input and output memristors are in the same row and column. (B) Circular-logic computational structure: the state and inverted state of each cell are associated with the resistance states of the memristor's two circuits, and the state of each cell is determined by the states of its neighboring cells through a cyclic-logic calculation scheme. (C) Schematic diagram of the memristor-array design implementing the cyclic-logic operation scheme. (D) Design diagram of the cyclic-logic calculation scheme: the mapping scheme divides the input signal into two modes, calculation and writing; the state calculation of the cellular automaton is realized in calculation mode, whereas cell storage is realized in write mode.

Figure 13. In-memory computing design for a hybrid logic circuit with a MoS2-based transistor and memristor. (A, B) I-V curves for the memristors and the MoS2-based transistors, respectively. (C, D) Verification of simple NAND and AND logic operations using the memristor hybrid circuit. (E, F) Measurements of the NAND and AND logic operations. (G, H) Measurement of voltage deviation over 100 simulations using Vdd and VR.

Figure 14. Demonstration of circular-logic solutions for 1D cellular and elementary cellular automata. (A) Optical image of a 1D cellular automaton. (B) Circuit diagram of a 1D cellular automaton; the green dotted outline indicates a basic unit, each consisting of a memory resistor and an auxiliary memory resistor, with CAx and its inverse representing the resistance value and the inverted resistance value of the cellular automaton, respectively. (C) Rule-110 logical operation of the elementary cellular automaton (ECA), described as the corresponding circular-logic operation. (D) Time series in which the rule-110 operation triggers the signal. (E) Evolution of memory-resistor states under different conversion rules.

Figure 15. Schematic diagram of the structure of a memristor textile network with an Ag-MoS2-HfAlOx-CNT heterostructure. (A) E-textile memristor network with remodeled synapses in the upper layer and neuromorphic functions in the lower layer. (B) Unit device structure of the Ag-MoS2-HfAlOx-CNT heterostructure. (C) SEM images of the Ag-MoS2-HfAlOx-CNT heterostructure. (D) Artificial synaptic-function simulation using the reconfigurable memristor.

Scheme 1. Typical applications of 2D material-empowered flexible and wearable electronics. (1) Flexible and wearable electronics: (A) pulse temperature measurement; (B) rechargeable gloves. (2) Flexible energy storage and conversion: (C) e-fabric for charging; (D) use of mechanical and thermal energy to create sound.

Scheme 2. Current situation and future development trends of flexible electronics.

Table 1. Roles of 2D materials in flexible electronics.
Table 2. Two-dimensional material-based flexible sensors according to the physical signals they sense.
Table 3. Four types of 2D material-based tactile sensors.
Table 4. Roles of 2D materials in bioelectronic devices and their features.
Table 5. Two-dimensional material-based flexible solid-state supercapacitors.
Table 6. Flexible solid-state lithium batteries from two-dimensional materials.
Table 7. Comparison of the performance of TENGs before and after the incorporation of 2D materials.

Funding: National Key Research and Development Program (No. 2022YFE0124200); National Natural Science Foundation of China (No. U2241221); Natural Science Foundation of Shandong Province for Excellent Young Scholars (YQ2022041) and fund No. SKT2203 from the State Key Laboratories of Transducer Technology, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences; Major Scientific and Technological Innovation Project of Shandong Province (2021CXGC010603); NSFC (No. 52022037) and Taishan Scholars Project Special Funds (TSQN201812083); Foundation No. GZKF202107 of the State Key Laboratory of Biobased Material and Green Papermaking, Qilu University of Technology, Shandong Academy of Sciences; NSFC (No. 52071225); the National Science Center and the Czech Republic under the European Regional Development Fund (ERDF) "Institute of Environmental Technology - Excellent Research" (No. CZ.02.1.01/0.0/0.0/16_019/0000853); Sino-German Center for Research Promotion (SGC) (No. GZ 1400); European Union's Horizon Europe Research and Innovation Programme under grant agreement No. 101087143 (Electron Beam Emergent Additive Manufacturing, EBEAM).
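Figures 12-14 of this dataset revolve around implementing elementary-cellular-automaton (ECA) updates, in particular rule 110, with memristor circuits. As a purely software illustration of what that rule computes (our sketch; it says nothing about the memristor mapping itself):

```python
# Rule 110 elementary cellular automaton, the update rule referred to in
# Figure 14. Each cell's next state is a fixed Boolean function of its
# left neighbour, itself, and its right neighbour.
RULE = 110

def step(cells):
    """One synchronous update of a 1-D binary state (periodic boundary)."""
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

state = [0] * 31
state[15] = 1  # single seed cell
for _ in range(12):
    print("".join(".#"[c] for c in state))
    state = step(state)
```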
APA, Harvard, Vancouver, ISO, and other styles
42

Matsunaga, Yoshiyuki. "CMOS Image Sensor." Journal of the Institute of Image Information and Television Engineers 52, no. 8 (1998): 1171–72. http://dx.doi.org/10.3169/itej.52.1171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Matsunaga, Yoshiyuki, and Kazushige Ooi. "CMOS Image Sensor Dreams of Intelligent Sensors. Future in CMOS Image Sensor." Journal of the Institute of Image Information and Television Engineers 53, no. 2 (1999): 184–86. http://dx.doi.org/10.3169/itej.53.184.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Schanz, M., W. Brockherde, R. Hauschild, B. J. Hosticka, and M. Schwarz. "Smart CMOS image sensor arrays." IEEE Transactions on Electron Devices 44, no. 10 (1997): 1699–705. http://dx.doi.org/10.1109/16.628825.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Perenzoni, M., and L. Gonzo. "Solar-powered CMOS image sensor." Electronics Letters 46, no. 1 (2010): 77. http://dx.doi.org/10.1049/el.2010.2288.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Mendis, S., S. E. Kemeny, and E. R. Fossum. "CMOS active pixel image sensor." IEEE Transactions on Electron Devices 41, no. 3 (1994): 452–53. http://dx.doi.org/10.1109/16.275235.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Cathébras, G. "CMOS image sensor for spatiotemporal image acquisition." Journal of Electronic Imaging 15, no. 2 (2006): 020502. http://dx.doi.org/10.1117/1.2191767.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Li, Gongchen, Fangji Zhao, Zicheng Wang, and Zhe Chen. "The development and application of CMOS image sensor." Applied and Computational Engineering 7, no. 1 (2023): 767–77. http://dx.doi.org/10.54254/2755-2721/7/20230460.

Full text
Abstract:
This paper focuses on the development of complementary metal-oxide-semiconductor (CMOS) image sensors and their applications in the aerospace, medical, and automotive fields. First, representative events in their history and the contributions of several companies to CMOS image sensors are described. Subsequently, characteristics of CMOS image sensors in the relevant imaging fields are analyzed. To evaluate the performance of CMOS image sensors, single-event effects and electronic endoscope structures are analyzed, and active and passive rangefinder experiments are carried out. The results show that imaging based on CMOS sensors can fully meet the requirements of imaging applications in many fields.
APA, Harvard, Vancouver, ISO, and other styles
49

Santos, Patrick M., and Davies William De Lima Monteiro. "Intrinsically Self-Amplified CMOS Image Sensor." ECS Transactions 23, no. 1 (2009): 537–44. http://dx.doi.org/10.1149/1.3183761.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Carmo, J. P., M. F. Silva, R. P. Rocha, J. F. Ribeiro, L. M. Goncalves, and J. H. Correia. "Stereoscopic image sensor in CMOS technology." Procedia Engineering 25 (2011): 1277–80. http://dx.doi.org/10.1016/j.proeng.2011.12.315.

Full text
APA, Harvard, Vancouver, ISO, and other styles