Computational Photography
Toward Faithfulness: Color Reproduction
Color reproduction is a critical aspect of digital imaging and display technology, aiming to faithfully represent the original colors across different devices and media. Along with dynamic range, colorimetry plays a pivotal role in recreating a visual experience that closely matches human perception. Unlike common low-level vision tasks such as image denoising or enhancement, which map pixel values to better pixel values within the same signal space, color reproduction involves complex transformations between different signal spaces, including scene-referred (e.g., camera RAW) and display-referred (e.g., sRGB) representations.
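To make the scene-referred/display-referred distinction concrete, here is a minimal sketch of one such transformation: mapping linear, white-balanced RAW values to sRGB through a 3x3 color correction matrix and the standard sRGB transfer function. The identity matrix and gain used below are placeholders, not calibrated values from any real camera pipeline.

```python
import numpy as np

def raw_to_srgb(raw_rgb, ccm, gain=1.0):
    """Map white-balanced, demosaiced RAW pixels (scene-referred, linear)
    to display-referred sRGB via a 3x3 color correction matrix (CCM)
    followed by the piecewise sRGB encoding (IEC 61966-2-1)."""
    linear = np.clip(raw_rgb * gain @ ccm.T, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1 / 2.4) - 0.055)

# Identity CCM as a stand-in; real matrices are calibrated per camera.
pixel = np.array([[0.18, 0.18, 0.18]])  # 18% gray in linear light
out = raw_to_srgb(pixel, np.eye(3))
```

Note how the nonlinear encoding lifts linear 18% gray to roughly mid-tone in sRGB, which is why operating in the wrong signal space distorts color relationships.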
My main achievement along this direction is Multi-Spectral Image Color Reproduction (MSICR) (completed in mid-2023), a pioneering research project on the feasibility of color reproduction using multi-spectral imaging systems1 for flagship smartphones. [2025 Update: It is now a mainstream feature on flagship smartphones📱2,3.] The goal of MSICR is to produce highly faithful and accurate color representations from multi-spectral RAW signals, handling challenging cases where traditional RGB cameras fail to reproduce colors accurately due to limited spectral sensitivity, such as low-light conditions where only a few colors are present. Under three representative lighting conditions, our system achieved less than 1 degree of angular error in most cases and an average color difference of around 2 $\Delta E$, approaching the discrimination limit of human color perception. These results revealed the considerable potential of multi-spectral imaging systems for color matching and color constancy, and showed the importance of holistic system design for unlocking the full potential of both hardware and software components.
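The two metrics quoted above are standard in this field and simple to compute; the sketch below shows both, with toy input vectors that are illustrative only (they are not MSICR results).

```python
import numpy as np

def angular_error_deg(est, gt):
    """Angle in degrees between an estimated and a ground-truth
    color/illuminant vector -- the usual color-constancy metric."""
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB.
    Differences around 1-2 Delta E sit near the limit of what
    human observers can reliably distinguish."""
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2))

err = angular_error_deg(np.array([1.0, 1.0, 0.9]), np.array([1.0, 1.0, 1.0]))
de = delta_e76([50.0, 10.0, 10.0], [50.0, 11.0, 11.0])
```

CIE76 is shown for brevity; perceptually more uniform variants such as CIEDE2000 are often preferred in practice.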
Moving forward, I am excited to explore further applications of color reproduction, especially under the real-time constraints imposed by wearable devices such as AR/VR headsets, where high-quality color representation is essential for an immersive user experience. This will require advances in both multi-sensor integration and algorithmic solutions that complement each other effectively.
References
Toward Faithfulness: RAW Image Denoising
Noise is a fundamental obstacle in modern imaging systems, degrading visual quality and hampering machine perception tasks. My research confronts this challenge by developing RAW image denoising techniques with a strong emphasis on hardware-friendly design, ensuring solutions are both effective and practical for real-world deployment.
Beyond algorithm design, a major bottleneck for the entire research community is the need for costly, per-camera noise-profile calibration. To address this, we collected and released a novel benchmark dataset for realistic noise modeling. Based on this benchmark, we are organizing a denoising challenge1🏆 at the AIM workshop2 in conjunction with ICCV 2025. The goal of this initiative is to spur the development of camera-agnostic noise models. By fostering research that integrates physical priors, we aim to accelerate the creation of generalizable denoising solutions that benefit both academia and industry.
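As a sketch of the kind of physical prior involved, the widely used Poisson-Gaussian model describes RAW noise as signal-dependent shot noise plus read noise; the gain `K` and read-noise level `sigma_read` below are illustrative values, and it is exactly such parameters that per-camera calibration must estimate.

```python
import numpy as np

def simulate_raw_noise(clean, K=0.5, sigma_read=2.0, rng=None):
    """Poisson-Gaussian noise model for RAW signals: shot noise follows
    photon statistics (Poisson, scaled by system gain K) and read noise
    is approximated as zero-mean Gaussian. K and sigma_read are toy
    values; real pipelines calibrate them per camera and per ISO."""
    rng = np.random.default_rng() if rng is None else rng
    electrons = clean / K                      # digital numbers -> electrons
    shot = rng.poisson(electrons) * K          # shot noise, back to DN
    read = rng.normal(0.0, sigma_read, clean.shape)
    return shot + read

# A flat 64x64 patch at level 100: noise variance should be ~K*100 + sigma^2.
noisy = simulate_raw_noise(np.full((64, 64), 100.0),
                           rng=np.random.default_rng(0))
```

A camera-agnostic noise model would aim to predict such parameters (or the noise distribution directly) without this per-device measurement step.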
References
Toward Efficiency: Edge AI
My research focuses on unlocking the full potential of edge computing: delivering applications with lower latency, enhanced privacy, and minimal bandwidth requirements. To achieve this, I follow a holistic design philosophy that deeply integrates algorithmic innovations with hardware optimizations.
My current work involves developing ultra-efficient, learning-based models, including my research on learned LUTs, and deploying them for demanding edge-side applications like real-time RAW image processing and efficient video decoding.
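To illustrate why learned LUTs suit edge deployment, here is a minimal sketch of 1D LUT application with linear interpolation; the table below is a toy gamma curve, not a trained one, but the runtime cost is the same either way: an index plus an interpolation, with no network inference.

```python
import numpy as np

def apply_lut_1d(x, lut):
    """Apply a 1D lookup table with linear interpolation to values in
    [0, 1]. In a learned-LUT pipeline, training bakes the heavy model
    into `lut`, so on-device inference reduces to this lookup, which
    maps naturally onto ISP/DSP hardware."""
    n = len(lut) - 1
    idx = np.clip(x * n, 0.0, n - 1e-6)  # fractional table position
    lo = idx.astype(int)
    frac = idx - lo
    return (1 - frac) * lut[lo] + frac * lut[lo + 1]

# Toy "learned" table: a gamma-1/2.2 curve sampled at 17 points.
table = np.linspace(0.0, 1.0, 17) ** (1 / 2.2)
out = apply_lut_1d(np.array([0.0, 0.5, 1.0]), table)
```

The same index-and-interpolate pattern extends to 3D LUTs for full color transforms, at the cost of trilinear instead of linear interpolation.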
The ultimate goal is to push the boundaries of on-device AI, creating a virtuous cycle where novel algorithms are designed with hardware capabilities in mind, and hardware is inspired by the needs of next-generation models.