Guofu Xie
I am currently a senior researcher at
Baidu Inc., where I focus on computer vision, image processing, and vectorization techniques to improve Baidu's products and services.
I was a Postdoctoral Fellow in the
Department of Computer Science and Operations Research,
University of Montreal, working with Prof.
Derek Nowrouzezahrai. I obtained my Ph.D. in computer science from
State Key Laboratory of Computer Science,
Institute of Software,
Chinese Academy of Sciences in 2013. My Ph.D. advisor was Professor
Wencheng Wang. I received my B.S. in
Software Engineering from
Xiamen University in 2007.
From June 2010 to July 2013, I was a full-time research intern in the
Internet Graphics Group at
Microsoft Research Asia (MSRA). My mentor at MSRA was
Xin Sun. I also worked with
Xin Tong.
Contact me: guofu85 (at) gmail.com,
LinkedIn,
Google Scholar
Research Interests
- Computer graphics: photorealistic real-time rendering, vector representation/rendering
- Image processing: image fusion, image editing, and image stitching
- Computer vision: 3D reconstruction and convolutional neural networks
Professional Experience
July 2013 ~ August 2014
Postdoctoral Fellow, LIGUM, University of Montreal
June 2010 ~ July 2013
Research Intern, Internet Graphics Group, Microsoft Research Asia
Paper Reviewer
- SIGGRAPH 2016
- SIGGRAPH Asia 2014, 2015
- Computer Graphics Forum
- Eurographics Symposium on Rendering 2015
- High-Performance Graphics 2015
- Pacific Graphics 2013
- Graphics Interface 2014
- Computers & Graphics (Elsevier)
Selected Publications
Hierarchical Diffusion Curves for Accurate Automatic Image Vectorization
ACM Transactions on Graphics (ACM SIGGRAPH Asia 2014), Vol. 33, Issue 6, pp. 230:1-230:11
Diffusion curve primitives are a compact and powerful representation for vector images. While several vector image authoring tools leverage these representations, automatically and accurately vectorizing arbitrary raster images using diffusion curves remains a difficult problem. We automatically generate sparse diffusion curve vectorizations of raster images by fitting curves in the Laplacian domain. Our approach is fast, combines Laplacian and bilaplacian diffusion curve representations, and generates a hierarchical representation that accurately reconstructs both vector art and natural images. The key idea of our method is to trace curves in the Laplacian domain, which captures both sharp and smooth image features, across scales, more robustly than previous image- and gradient-domain fitting strategies. The sparse set of curves generated by our method accurately reconstructs images and often closely matches tediously hand-authored curve data. Also, our hierarchical curves are readily usable in all existing editing frameworks. We validate our method on a broad class of images, including natural images, synthesized images with turbulent multi-scale details, and traditional vector art, and we also illustrate simple multi-scale abstraction and color editing results.
@article{Guofu:2014:ImgVe,
author = {Xie, Guofu and Sun, Xin and Tong, Xin and Nowrouzezahrai, Derek},
title = {Hierarchical Diffusion Curves for Accurate Automatic Image Vectorization},
journal = {ACM Trans. Graph.},
volume = {33},
number = {6},
year = {2014},
pages = {230:1-230:11}
}
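To make the core idea concrete, here is a minimal Python sketch of detecting curve candidates in the Laplacian domain. It illustrates only the first step, not the paper's pipeline (no multi-scale hierarchy, bilaplacian diffusion, or curve fitting), and the threshold is an assumed placeholder.

import numpy as np
from scipy import ndimage

def laplacian_curve_candidates(image, rel_threshold=0.05):
    """Mask of pixels with salient Laplacian response (illustrative only).

    image: 2D float array in [0, 1]; rel_threshold: assumed cutoff.
    """
    lap = ndimage.laplace(image)                # discrete Laplacian of the raster
    mag = np.abs(lap)                           # both sharp edges and smooth shading respond here
    return mag > rel_threshold * mag.max()      # candidate pixels along which curves would be traced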
Line Segment Sampling with Blue-Noise Properties
ACM Transactions on Graphics (ACM SIGGRAPH 2013), Vol. 32, Issue 4, pp. 127:1-127:13
Line segment sampling has recently been adopted in many rendering algorithms for better handling of a wide range of effects such as motion blur, defocus blur and scattering media. A question naturally raised is how to generate line segment samples with good properties that can effectively reduce variance and aliasing artifacts observed in the rendering results. This paper studies this problem and presents a frequency analysis of line segment sampling. The analysis shows that the frequency content of a line segment sample is equivalent to the weighted frequency content of a point sample. The weight introduces anisotropy that smoothly changes among point samples, line segment samples and line samples according to the lengths of the samples. Line segment sampling thus makes it possible to achieve a balance between noise (point sampling) and aliasing (line sampling) under the same sampling rate. Based on the analysis, we propose a line segment sampling scheme to preserve blue-noise properties of samples which can significantly reduce noise and aliasing artifacts in reconstruction results. We demonstrate that our sampling scheme improves the quality of depth-of-field rendering, motion blur rendering, and temporal light field reconstruction.
@article{Xin:2013:LineSegmentSampling,
author = {Sun, Xin and Zhou, Kun and Guo, Jie and Xie, Guofu and Pan, Jingui and Wang, Wencheng and Guo, Baining},
title = {Line Segment Sampling with Blue-Noise Properties},
journal = {ACM Trans. Graph.},
volume = {32},
number = {4},
year = {2013},
pages = {127:1-127:13}
}
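As a rough illustration of line segment sampling, the toy Python sketch below accepts candidate segments only when their midpoints keep a minimum distance, which suppresses low-frequency clumping. The paper's scheme is derived from its frequency analysis and is anisotropic; this midpoint-only dart-throwing test is merely an assumed stand-in, not that method.

import numpy as np

def sample_segments(n, length=0.05, r_min=0.04, max_tries=10000, seed=0):
    """Dart-throw n line segments in the unit square (toy blue-noise flavor)."""
    rng = np.random.default_rng(seed)
    accepted = []                                    # list of (midpoint, angle)
    tries = 0
    while len(accepted) < n and tries < max_tries:
        tries += 1
        mid = rng.random(2)                          # candidate midpoint in [0,1)^2
        theta = rng.uniform(0.0, np.pi)              # candidate orientation
        if all(np.linalg.norm(mid - m) >= r_min for m, _ in accepted):
            accepted.append((mid, theta))            # keep only well-separated midpoints
    segs = []
    for mid, theta in accepted:                      # convert to endpoint pairs
        d = 0.5 * length * np.array([np.cos(theta), np.sin(theta)])
        segs.append((mid - d, mid + d))
    return segs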
Memory-Efficient Single-Pass GPU Rendering of Multifragment Effects
Wencheng Wang* and Guofu Xie* (* joint first authors)
IEEE Transactions on Visualization and Computer Graphics (IEEE TVCG 2013), Vol. 19, Issue 8, pp. 1307-1316
Rendering multi-fragment effects using GPUs is attractive for its high speed. However, the efficiency is seriously compromised, because ordering fragments on GPUs is not easy and the GPU's memory may not be large enough to store the whole scene geometry. Hitherto, existing methods have been unsuitable for large models or have required many passes for data transmission from CPU to GPU, resulting in a bottleneck for speedup. This paper presents a streaming method for accurate rendering of multi-fragment effects. It decomposes the model into parts and manages these in an efficient manner, guaranteeing that the parts can easily be ordered with respect to any viewpoint, and that each part can be rendered correctly on the GPU. Thus, we can transmit the model data part by part, and once a part has been loaded onto the GPU we immediately render it and composite its result with the results of the processed parts. In this way, we need only a single pass for data access with a very low bounded memory requirement. Moreover, we treat parts in packs for further acceleration. Results show that our method is much faster than existing methods, and can easily handle large models of any size.
@article{Guofu:2013:SinglePassPeeling,
author = {Wang, Wencheng and Xie, Guofu},
title = {Memory-Efficient Single-Pass GPU Rendering of Multifragment Effects},
journal = {IEEE Transactions on Visualization and Computer Graphics},
volume = {19},
number = {8},
year = {2013},
pages = {1307-1316}
}
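The sketch below is a small CPU simulation of the streaming pattern described above: each part is processed and composited immediately, so only one part is resident at a time. The fragment format and the front-to-back compositing are simplifications; the paper's GPU decomposition and pack handling are not modeled here.

import numpy as np

def composite_streaming(parts):
    """parts: list of fragment lists; a fragment is (depth, rgb, alpha).
    Assumes parts are depth-separable, as the paper's decomposition guarantees."""
    # order whole parts front-to-back, mimicking the view-dependent part ordering
    parts = sorted(parts, key=lambda p: min(f[0] for f in p))
    color = np.zeros(3)
    transmittance = 1.0
    for part in parts:                                       # one upload + render per part
        for depth, rgb, alpha in sorted(part, key=lambda f: f[0]):
            color += transmittance * alpha * np.asarray(rgb, dtype=float)
            transmittance *= 1.0 - alpha                     # blend with already-processed parts
    return color, transmittance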
Diffusion Curve Textures for Resolution Independent Texture Mapping
ACM Transactions on Graphics (ACM SIGGRAPH 2012), Vol. 31, Issue 4, pp. 74:1-74:9
We introduce a vector representation called diffusion curve textures for mapping diffusion curve images (DCI) onto arbitrary surfaces. In contrast to the original implicit representation of DCIs [Orzan et al. 2008], where determining a single texture value requires iterative computation of the entire DCI via the Poisson equation, diffusion curve textures provide an explicit representation from which the texture value at any point can be solved directly, while preserving the compactness and resolution independence of diffusion curves. This is achieved through a formulation of the DCI diffusion process in terms of Green's functions. This formulation furthermore allows the texture value of any rectangular region (e.g. pixel area) to be solved in closed form, which facilitates anti-aliasing. We develop a GPU algorithm that renders anti-aliased diffusion curve textures in real time, and demonstrate the effectiveness of this method through high quality renderings with detailed control curves and color variations.
@article{Guofu:2012:DiffusionCurveTexture,
author = {Sun, Xin and Xie, Guofu and Dong, Yue and Lin, Stephen and Xu, Weiwei and Wang, Wencheng and Tong, Xin and Guo, Baining},
title = {Diffusion Curve Textures for Resolution Independent Texture Mapping},
journal = {ACM Trans. Graph.},
volume = {31},
number = {4},
year = {2012},
pages = {74:1-74:9}
}
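To illustrate the explicit-evaluation idea, the Python sketch below sums Green's-function-weighted contributions from discretized curve samples to get the texture value at a single point, with no global Poisson solve. The 2D Laplace Green's function ln||x - s|| / (2*pi) is standard; the per-sample weights are a hypothetical stand-in for the paper's derived curve terms.

import numpy as np

def eval_point(x, sample_pts, weights):
    """x: (2,) query point; sample_pts: (N,2) points on curves;
    weights: (N,) precomputed source strengths (hypothetical)."""
    r = np.linalg.norm(sample_pts - x, axis=1)
    G = np.log(np.maximum(r, 1e-8)) / (2.0 * np.pi)  # clamp to avoid log(0)
    return float(G @ weights)                         # direct evaluation, no iterative diffusion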
Interactive Depth-of-Field Rendering with Secondary Rays
Journal of Computer Science and Technology, Vol. 28, Issue 3, pp. 490-498
This paper presents an efficient method to trace secondary rays in depth-of-field (DOF) rendering, which significantly enhances realism. To date, the effects of secondary rays have been little addressed in real-time/interactive DOF rendering, because secondary rays have less coherence than primary rays, making them very difficult to handle. We propose novel measures to cluster secondary rays, and take a virtual viewpoint to construct a layered image-based representation for the objects that would be intersected by each cluster of secondary rays. We can therefore exploit the coherence of secondary rays within the clusters to speed up tracing them in DOF rendering. Results show that we can interactively achieve DOF rendering effects with reflections or refractions on a commodity graphics card.
@article{Guofu:2013:DOF,
author = {Xie, Guofu and Sun, Xin and Wang, Wencheng},
title = {Interactive Depth-of-Field Rendering with Secondary Rays},
journal = {Journal of Computer Science and Technology},
volume = {28},
number = {3},
year = {2013},
pages = {490-498}
}
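As one illustrative ingredient, the sketch below groups secondary rays by direction with a plain spherical k-means so that each cluster could share a layered representation. This generic clustering is an assumption standing in for the paper's actual measures, which it does not reproduce.

import numpy as np

def cluster_ray_directions(dirs, k=4, iters=10, seed=0):
    """dirs: (N,3) unit direction vectors of secondary rays."""
    rng = np.random.default_rng(seed)
    centers = dirs[rng.choice(len(dirs), k, replace=False)]
    for _ in range(iters):
        # assign each ray to the most aligned center (maximum dot product)
        labels = np.argmax(dirs @ centers.T, axis=1)
        for j in range(k):
            members = dirs[labels == j]
            if len(members):
                c = members.mean(axis=0)
                centers[j] = c / np.linalg.norm(c)   # renormalize back to the unit sphere
    return labels, centers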
Single-Pass Data Access for Multi-Fragment Effects Rendering on GPUs (Chinese)
Chinese Journal of Computers, Vol. 34, Issue 3, pp. 473-481
Rendering of multi-fragment effects can be greatly accelerated on the GPU. However, existing methods must read the model data in more than one pass, due to the requirement of depth-ordering fragments and the architectural limitations of the GPU. This has been a bottleneck for rendering efficiency because of the limited transmission bandwidth from CPU to GPU. Although methods have been proposed that use CUDA to load the data only once, they cannot process large models due to the limited storage on the GPU. This paper proposes a new method for single-pass GPU rendering of multi-fragment effects. It first decomposes the 3D model into a set of convex polyhedra, and then, according to the viewpoint, determines the order in which the convex polyhedra are transmitted one by one to the GPU, guaranteeing the correct ordering of fragments. During this process, the new method immediately performs illumination computation and blends the rendering results of the transmitted convex polyhedra, greatly reducing the storage requirement. As a result, it can accommodate more shading parameters to enrich the rendering effects. Experimental results show that the new method can be faster than existing methods, even those using CUDA, and can conveniently handle large models.
@article{Guofu:2011:SinglePassPeeling,
author = {Xie, Guofu and Wang, Wencheng},
title = {Single-Pass Data Access for Multi-Fragment Effects Rendering on GPUs},
journal = {Chinese Journal of Computers},
volume = {34},
number = {3},
year = {2011},
pages = {473-481}
}
Efficient Search of Lightcuts by Spatial Clustering
SIGGRAPH Asia 2011 Sketches, pp. 26:1-26:2
Lightcuts is an efficient illumination method for scenes with many complex lights; it hierarchically clusters the lights in a light tree. However, when the light tree is large, traversing the tree to find suitable clusters of lights for illumination computation is very time-consuming. To address this, some methods exploit image coherence, letting neighboring pixels share clusters of lights and thus saving traversal cost in the light tree. However, as the image resolution is reduced, less and less coherence is available, so their acceleration degrades dramatically, and they may even decrease the rendering efficiency.
This sketch proposes exploiting spatial coherence to accelerate rendering with lightcuts. The intersection points between rays and the scene are clustered on the fly, and the points in each cluster search for their respective suitable clusters of lights starting from a common set of clusters, called a common cut. In this way, the traversal cost from the tree root to the common cuts is largely saved. Results show that our method can be faster than methods using image coherence and works stably across image resolutions; the more complex the lighting, the greater the acceleration our method obtains.
@inproceedings{Guangwei:2011:LightcutsSketch,
author = {Wang, Guangwei and Xie, Guofu and Wang, Wencheng},
title = {Efficient Search of Lightcuts by Spatial Clustering},
booktitle = {SIGGRAPH Asia 2011 Sketches},
year = {2011},
pages = {26:1-26:2}
}
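A schematic Python sketch of the common-cut idea follows: points in a spatial cluster start their individual cut refinement from one shared cut near the root instead of each descending from the root. The tree node layout, error bound, and refinement test are simplified stand-ins, not the real lightcuts criteria.

import math

class LightNode:
    def __init__(self, pos, intensity, radius, children=()):
        self.pos, self.intensity = pos, intensity
        self.radius = radius                 # bounding radius of the light cluster
        self.children = children

def error_bound(node, x):
    # crude bound: cluster intensity scaled by size over distance (illustrative only)
    d = math.dist(node.pos, x)
    return node.intensity * node.radius / max(d, 1e-6)

def refine_cut(x, cut, eps=0.01):
    """Refine a starting cut (list of nodes) for shading point x."""
    out, stack = [], list(cut)
    while stack:
        n = stack.pop()
        if n.children and error_bound(n, x) > eps:
            stack.extend(n.children)         # split this cluster further
        else:
            out.append(n)                    # keep as a representative light
    return out

# Shared-cut usage: compute a common cut once from the cluster's representative
# point, then refine it per point, skipping the descent from the root:
#   common = refine_cut(rep_point, [root], eps=0.05)
#   per_point_cuts = [refine_cut(p, common) for p in cluster_points]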
Enhancing Illumination Computation of Lightcuts with Spatial Coherence (Chinese)
Accepted to Chinese Journal of Computers
Lightcuts is an efficient rendering algorithm for scenes with many complex lights. To approximate the lights by the representative lights of a set of clusters (defined as a cut), lightcuts builds a binary light tree to progressively cluster the lights, reducing the number of lights to be processed during rendering. However, when dealing with a large number of complex lights, finding cuts in the light tree is still expensive. Some algorithms therefore exploit image coherence to accelerate cut computation and achieve good performance. This paper proposes an algorithm that exploits spatial coherence to reduce the cost of searching for cuts. The proposed method clusters geometry positions by their material and normal. For each cluster, it first computes the cut of the cluster's representative point, and then uses that cut as the initial cut when searching for the cuts of the other points in the cluster. Experimental results show that the new method significantly reduces the cost of finding cuts in the light tree, enhances illumination computation, and outperforms methods that use image coherence, with stable acceleration. In general, the more lights and the more complex their distribution, the greater the acceleration the new method achieves.
@inproceedings{Guangwei:2011:LightcutsVR,
author = {Wang, Guangwei and Xie, Guofu and Wang, Wencheng},
title = {Enhancing Illumination Computation of Lightcuts with Spatial Coherence},
booktitle = {ChinaVR 2011},
year = {2011}
}
Image Fusion Algorithm Based on Neighbors and Cousins Information in Nonsubsampled Contourlet Transform Domain
Xiaobo Qu, Guofu Xie, Jingwen Yan, Ziqian Zhu, Bengang Chen
Proceedings of the 2007 International Conference on Wavelet Analysis and Pattern Recognition, pp. 1797-1802
The nonsubsampled contourlet transform (NSCT) provides flexible multiresolution, anisotropic and directional expansion for images. Compared with the original contourlet transform, it is shift-invariant and can overcome the pseudo-Gibbs phenomena around singularities. In addition, NSCT coefficients depend on their neighboring coefficients in the local window and on their cousin coefficients in directional subbands. In this paper, region energy and cousin correlation are defined to represent the neighbor and cousin information, respectively. A salience measure, the combination of region energy and cousin correlation, is defined to obtain the fused coefficients in the high-frequency NSCT domain. First, the source images are decomposed into subimages via NSCT. Second, the salience measure is computed. Third, the salience-maximum rule and the average rule are employed to obtain the high-frequency and low-frequency coefficients, respectively. Finally, the fused image is reconstructed by the inverse NSCT. Experimental results show that the proposed algorithm outperforms wavelet-based and contourlet-transform-based fusion algorithms.
@inproceedings{Xiaobo:2007:Contourlet,
author = {Qu, Xiaobo and Xie, Guofu and Yan, Jingwen and Zhu, Ziqian and Chen, Bengang},
title = {Image Fusion Algorithm Based on Neighbors and Cousins Information in Nonsubsampled Contourlet Transform Domain},
booktitle = {Proceedings of the 2007 International Conference on Wavelet Analysis and Pattern Recognition},
year = {2007},
pages = {1797-1802}
}
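The fusion rules themselves are easy to show in isolation. In the Python sketch below, plain arrays stand in for NSCT subbands (NSCT is not available in common Python libraries), and the salience shown uses only region energy, omitting the cousin-correlation term, so it is a simplified stand-in for the paper's measure.

import numpy as np
from scipy import ndimage

def region_energy(coef, size=3):
    return ndimage.uniform_filter(coef ** 2, size)   # energy over a local window

def fuse_high(coef_a, coef_b):
    pick_a = region_energy(coef_a) >= region_energy(coef_b)
    return np.where(pick_a, coef_a, coef_b)          # salience-maximum rule (high frequency)

def fuse_low(coef_a, coef_b):
    return 0.5 * (coef_a + coef_b)                   # average rule (low frequency)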
A Novel Image Fusion Algorithm Based on Bandelet Transform
Xiaobo Qu, Jingwen Yan, Guofu Xie, Ziqian Zhu, Bengang Chen
Chinese Optics Letters, Vol. 5, Issue 10, pp. 569-572
A novel image fusion algorithm based on the bandelet transform is proposed. The bandelet transform takes advantage of the geometric regularity of image structure and efficiently represents sharp image transitions such as edges, which benefits image fusion. To reconstruct the fused image, the maximum rule is used to select the source images' geometric flow and bandelet coefficients. Experimental results indicate that the bandelet-based fusion algorithm represents edges and detailed information well, and outperforms wavelet-based and Laplacian-pyramid-based fusion algorithms, especially when the source images contain abundant texture and edges.
@article{Xiaobo:2007:Bandelet,
author = {Qu, Xiaobo and Yan, Jingwen and Xie, Guofu and Zhu, Ziqian and Chen, Bengang},
title = {A Novel Image Fusion Algorithm Based on Bandelet Transform},
journal = {Chinese Optics Letters},
volume = {5},
number = {10},
year = {2007},
pages = {569-572}
}