Chen Du

Ph.D. Student @ ECE Department, UC-San Diego

I'm a Ph.D. student in the Electrical and Computer Engineering (ECE) Department at the University of California, San Diego, where I am supervised by Prof. Truong Q. Nguyen at the Video Processing Lab. I received my M.S. degree from the ECE Department at UC San Diego and my B.S. degree from the Department of Electronic Engineering, Shanghai Jiao Tong University. My research interests are computer vision and machine learning.

Research

Video Analysis for Human Balance Evaluation

Center of pressure (COP) is an important measurement of postural and gait control in human biomechanical studies. Vision-based estimation of COP metrics offers a way to obtain these gold-standard metrics for detecting balance and gait problems. In this work, we propose an end-to-end framework that estimates the COP path length and COP positions from the 3D skeleton, using spatial-temporal features learned by graph convolutional networks. We propose two single-task models, one for each metric, as well as a multi-task approach that learns both metrics jointly. To facilitate this line of research, we also release a novel 3D skeleton dataset containing a wide variety of action patterns with synchronized COP labels. Experiments on this dataset validate that our framework achieves state-of-the-art accuracy for both COP path length and COP position estimation, and that the multi-task approach yields more accurate and robust COP path length estimates than the single-task model.
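At a high level, the framework pairs a graph-convolutional backbone over the skeleton with task-specific regression heads. The toy NumPy sketch below illustrates only that general shape; the joint count, layer sizes, pooling, and random weights are all placeholders and do not reproduce the actual architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy skeleton: 5 joints connected in a chain (real skeletons have ~25 joints).
num_joints, feat_in, feat_out = 5, 3, 8
A = np.zeros((num_joints, num_joints))
for i in range(num_joints - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
A_hat = A + np.eye(num_joints)                       # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))    # symmetric degree normalization
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

def graph_conv(X, W):
    """One graph convolution: aggregate neighbor features, project, ReLU."""
    return np.maximum(A_norm @ X @ W, 0.0)

# X: per-frame 3D joint coordinates (T frames, joints, xyz).
T = 10
X = rng.normal(size=(T, num_joints, feat_in))
W = rng.normal(size=(feat_in, feat_out)) * 0.1

H = np.stack([graph_conv(X[t], W) for t in range(T)])  # spatial conv per frame
z = H.mean(axis=(0, 1))                                # pool over time and joints

# Two heads share the backbone embedding z (the multi-task idea).
W_len = rng.normal(size=(feat_out, 1)) * 0.1   # COP path length head (scalar)
W_pos = rng.normal(size=(feat_out, 2)) * 0.1   # COP position head (ML, AP)
path_length = float(z @ W_len)
position = z @ W_pos
```

The shared backbone is what lets the two tasks regularize each other; in the single-task variants, each head would instead sit on its own backbone.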

Below are example results showing the predicted COP position vs. the ground truth from the force plate (ML: medial-lateral direction; AP: anterior-posterior direction; blue dot: COP position for each frame).

C. Du, S. Graham, S. Jin, C. Depp, and T. Nguyen. "Multi-task center-of-pressure metrics estimation from skeleton using graph convolutional network." International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2020. [IEEE] [dataset]

Video De-fencing

We present a novel video de-fencing system that provides a clear view of the fenced scenes in a video. To obtain accurate fence masks for de-fencing, we propose a fence segmentation method based on a fully convolutional neural network that significantly improves segmentation accuracy. Existing segmentation datasets do not provide accurate labels for fences, which restricts data-driven methods for fence segmentation. To overcome this problem, we collected and released a novel fence segmentation dataset with highly precise pixel-level ground truth. We then propose a content recovery algorithm based on optical flow that produces plausible de-fenced images and videos.
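The core of flow-based content recovery is that pixels occluded by the fence in one frame are usually visible in a neighboring frame, and optical flow tells us where. The minimal NumPy sketch below shows that idea with nearest-neighbor sampling; the actual algorithm in the paper handles sub-pixel warping, multiple neighbor frames, and cases where no neighbor reveals the pixel, none of which appear here.

```python
import numpy as np

def defence_frame(frame, mask, neighbor, flow):
    """Fill fence pixels in `frame` by sampling a flow-warped neighbor frame.

    frame, neighbor : (H, W) grayscale images
    mask            : (H, W) bool, True where the fence occludes the scene
    flow            : (H, W, 2) per-pixel (dy, dx) displacement frame -> neighbor
    """
    H, W = frame.shape
    ys, xs = np.nonzero(mask)
    out = frame.copy()
    # Follow the flow to the neighbor frame, rounding to the nearest pixel
    # and clamping to the image bounds.
    ny = np.clip(np.round(ys + flow[ys, xs, 0]).astype(int), 0, H - 1)
    nx = np.clip(np.round(xs + flow[ys, xs, 1]).astype(int), 0, W - 1)
    out[ys, xs] = neighbor[ny, nx]
    return out
```

With a fence mask from the segmentation network and flow from any off-the-shelf optical-flow estimator, this recovers the occluded background wherever it is visible in the neighbor frame.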

Below are example results on phone-captured videos (left: de-fenced videos; right: original videos).

C. Du, B. Kang, Z. Xu, J. Dai, and T. Nguyen. "Accurate and efficient video de-fencing using convolutional neural networks and temporal information." 2018 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2018. [IEEE] [arXiv] [dataset]

Contact

E-mail: c9du @ eng.ucsd.edu

LinkedIn GitHub