Associate Professor / Senior Engineer

Liu Cheng

  • Title: Associate Professor
  • Research Interests: domain-specific accelerator design, reconfigurable computing, fault-tolerant computing
  • Advisor Category: Master's supervisor
  • Email: liucheng@ict.ac.cn
  • Homepage: https://liu-cheng.github.io/

Biography

Liu Cheng is an Associate Professor and Master's supervisor at the Institute of Computing Technology, Chinese Academy of Sciences (ICT, CAS). His research focuses on domain-specific hardware accelerator design, FPGA-based reconfigurable computing, and fault-tolerant computing. He has published more than fifty papers in major conferences and journals in the EDA and chip design fields, including DAC, ICCAD, TCAD, and TVLSI. He currently leads two projects funded by the National Natural Science Foundation of China and participates in several national and provincial/ministerial key projects on edge intelligence chips, agile chip design, and heterogeneous computing. He received the Grand Prize for Science and Technology Achievement Transfer from the Beijing Branch of the Chinese Academy of Sciences in 2019.

Awards and Honors:

(1) Grand Prize for Science and Technology Achievement Transfer, Beijing Branch of the Chinese Academy of Sciences, 2019

Selected Publications:

Book Chapter:
[1] Hayden Kwok-Hay So and Cheng Liu. FPGA overlays. In FPGAs for Software Programmers, pp. 285-305. Springer, Cham, 2016.
Journals:
[1] Xiandong Zhao, Ying Wang, Cheng Liu, Cong Shi, Kaijie Tu, Lei Zhang, "Network Pruning for Bit-Serial Accelerators", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), 2022.
[2] Weiwei Chen, Ying Wang, Ying Xu, Chengsi Gao, Cheng Liu, Lei Zhang, "A Framework for Neural Network Architecture and Compiler Co-Optimization", in ACM Transactions on Embedded Computing Systems (TECS), 2022.
[3] Benjamin Chen Ming Choong, Tao Luo, Cheng Liu, Bingsheng He, Wei Zhang, and Joey Tianyi Zhou, "Hardware-software co-exploration with racetrack memory based in-memory computing for CNN inference in embedded systems", Journal of Systems Architecture (JSA), 2022.
[4] Wen Li, Ying Wang, Cheng Liu, Yintao He, Lian Liu, Huawei Li, Xiaowei Li, "On-line Fault Protection for ReRAM-based Neural Networks", IEEE Transactions on Computers (TC), 2022.
[5] Dawen Xu, Zhuangyu Feng, Cheng Liu, Li Li, Ying Wang, Huawei Li, Xiaowei Li, "Taming Process Variations in CNFET for Efficient Last Level Cache Design", IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2021.
[6] Cheng Liu, Cheng Chu, Dawen Xu, Ying Wang, Qianlong Wang, Huawei Li, Xiaowei Li, Kwang-Ting Cheng, "HyCA: A Hybrid Computing Architecture for Fault Tolerant Deep Learning", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), 2021.
[7] Dawen Xu, Meng He, Cheng Liu, Ying Wang, Long Cheng, Huawei Li, Xiaowei Li, Kwang-Ting Cheng, "R2F: A Remote Retraining Framework for AIoT Processors with Computing Errors", IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2021.
[8] Dawen Xu, Ziyang Zhu, Cheng Liu, Ying Wang, Shuang Zhao, Lei Zhang, Huaguo Liang, Huawei Li, Kwang-Ting Cheng, "Reliability Evaluation and Analysis of FPGA-based Neural Network Acceleration System", IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2021.
[9] Shengwen Liang, Ying Wang, Cheng Liu, Lei He, Huawei Li, Dawen Xu, Xiaowei Li, "EnGN: A High-Throughput and Energy-Efficient Accelerator for Large Graph Neural Networks", IEEE Transactions on Computers (TC), 2020. (Featured Paper of the Month)
[10] Dawen Xu, Cheng Liu, Ying Wang, Kaijie Tu, Bingsheng He, and Lei Zhang. "Accelerating Generative Neural Networks on Unmodified Deep Learning Processors-A Software Approach." IEEE Transactions on Computers (TC), 2020.
Conferences:
[1] Lei Dai, Ying Wang, Cheng Liu, Fuping Li, Huawei Li, Xiaowei Li, "Reexamining the CGRA Memory Sub-system for Higher Memory Utilization and Performance", The 40th IEEE International Conference on Computer Design (ICCD), October 2022.
[2] Cheng Liu, Zhen Gao, Siting Liu, Xuefei Ning, Huawei Li, Xiaowei Li, "Fault-Tolerant Deep Learning: A Hierarchical Perspective", The 40th IEEE VLSI Test Symposium (VTS), 2022.
[3] Xinghua Xue, Haitong Huang, Cheng Liu, Ying Wang, Tao Luo, Huawei Li, Xiaowei Li, "Winograd Convolution: A Perspective from Fault Tolerance", ACM/IEEE Design Automation Conference (DAC), 2022.
[4] Shengwen Liang, Ziming Yuan, Ying Wang, Cheng Liu, Huawei Li, Xiaowei Li, "VStore: In-Storage Graph Based Vector Search Accelerator", ACM/IEEE Design Automation Conference (DAC), 2022.
[5] Cangyuan Li, Ying Wang, Cheng Liu, Shengwen Liang, Huawei Li, Xiaowei Li, "GLIST: Towards In-Storage Graph Learning", USENIX Annual Technical Conference (ATC), 2021.
[6] Mengdi Wang, Bing Li, Ying Wang, Cheng Liu, Lei Zhang, "MT-DLA: An Efficient Multi-Task Deep Learning Accelerator Design", IEEE GLSVLSI, 2021. (Best Paper Award)
[7] Xiaohan Ma, Chang Si, Ying Wang, Cheng Liu, Lei Zhang, "NASA: Accelerating Neural Network Design with a NAS Processor", The 48th IEEE/ACM International Symposium on Computer Architecture (ISCA), 2021.
[8] Lei He, Cheng Liu, Ying Wang, Shengwen Liang, Huawei Li, Xiaowei Li, "GCiM: A Near-Data Processing Accelerator for Graph Construction", ACM/IEEE Design Automation Conference (DAC), 2021.
[9] Mengdi Wang, Ying Wang, Cheng Liu, Lei Zhang, "Network-on-Interposer Design for Agile Neural-Network Processor Chip Customization", ACM/IEEE Design Automation Conference (DAC), 2021.
[10] Yintao He, Ying Wang, Cheng Liu, Huawei Li, Xiaowei Li, "TARe: Task-Adaptive in-situ ReRAM Computing for Graph Learning", ACM/IEEE Design Automation Conference (DAC), 2021.
[11] Yuquan He, Ying Wang, Cheng Liu, Lei Zhang, "PicoVO: A Lightweight RGB-D Visual Odometry Targeting Resource-Constrained IoT Devices", The 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021.
[12] Dawen Xu, Cheng Chu, Cheng Liu, Qianlong Wang, Ying Wang, Lei Zhang, Huaguo Liang, Kwang-Ting Tim Cheng, "A Hybrid Computing Architecture for Fault-tolerant Deep Learning Accelerators", The 38th IEEE International Conference on Computer Design (ICCD), October 2020.
[13] Shengwen Liang, Cheng Liu, Ying Wang, Huawei Li, Xiaowei Li, "DeepBurning-GL: An Automated Framework for Generating Graph Neural Network Accelerators", IEEE/ACM International Conference on Computer-Aided Design (ICCAD), November 2020.
[14] Xiandong Zhao, Ying Wang, Cheng Liu, Cong Shi, Lei Zhang, "BitPruner: Network Pruning for Bit-Serial Accelerators", ACM/IEEE Design Automation Conference (DAC), 2020.
[15] Xiandong Zhao, Ying Wang, Xuyi Cai, Cheng Liu, Lei Zhang, "Linear Symmetric Quantization of Neural Networks for Low-precision Integer Hardware", International Conference on Learning Representations (ICLR), 2020.
[16] Dawen Xu, Kaijie Tu, Ying Wang, Cheng Liu, Bingsheng He, Huawei Li, "FCN-engine: Accelerating Deconvolutional Layers in Classic CNN Processors", IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2018.

Research Projects:

1. National Natural Science Foundation of China (NSFC) General Program, "Elastic Fault-Tolerance Techniques for Deep Learning Processors", 2022/1-2025/12
2. NSFC Young Scientists Fund, "FPGA-Based Energy-Efficient Domain-Specific Graph Computing Acceleration", 2020/1-2022/12
3. Key-supported project of the State Key Laboratory of Computer Architecture, "Automated Design of Fault-Tolerant Deep Learning Processors", 2021/6-2022/12
4. Ministry of Science and Technology National Key R&D Program, "A Unified Framework for Hyper-Heterogeneous Hardware-Software Collaborative Computing", 2022/10-2025/10
5. Sub-project of Project 191, "Edge-Intelligent UAV System", 2019/1-2020/6
6. CAS Science and Technology Service (STS) Program project, "Ultra-Compact Intelligent Computer", 2019/1-2019/12
7. Sub-project of the CAS Strategic Priority Research Program (Category C), "Open-Source Intelligent Edge Processor", 2020/1-2021/12