Biography
I was a research scientist and led the NLP research group [photo of JDExplore d-team] at JD Explore Academy, where I was a member of the Doctoral Management Trainee (DMT) program (a top-tier talent program at JD.com, Inc.).
I received my Ph.D. from The University of Sydney, supervised by Prof. Dacheng Tao (IEEE/ACM Fellow).
I was a research intern at Tencent AI Lab, advised by Dr. Zhaopeng Tu and Dr. Longyue Wang. I also worked at Cheetah Mobile as the main developer of the "realtime voice translator" [Demo].
I have published over 70 papers in NLP/AI venues, including ACL, EMNLP, ICLR, NeurIPS, ICML, AAAI, IEEE T-PAMI, and IEEE T-KDE; some of them have been applied to industrial products.
I have served as an Area/Session Chair of ACL, EMNLP, NAACL, AAAI, and SDM.
I have won many AI challenges, including the SuperGLUE and GLUE benchmarks, WMT 2022, IWSLT 2021, WMT 2020, and WMT 2019.
I have been appointed as a researcher at SIAS, Zhejiang University since 2023.
My research mainly focuses on deep learning for NLP, including large language model pretraining, language understanding, generation, and translation.
More recently, we have been working on the whole pipeline of LLM R&D, including efficient and sufficient training, alignment, evaluation, compression, multilinguality, multimodality, and much more.
I am always open to collaborations!
📣 NEWS: I have several FTE and internship positions on LLM research and development. Self-motivated candidates with experience in NLP and PLMs are welcome — reach out if you are interested in joining.
News
-
Dec. 2024: Two papers about {multimodal benchmark on high-resolution images and complex reasoning} of language models are accepted by AAAI 2025, congrats to my interns.
-
Nov. 2024: Three papers about {distillation for translation, jailbreak defence, translation evaluation} of language models are accepted by COLING 2025, congrats to my interns.
-
Oct. 2024: One survey about efficient training of large models is accepted by ACM Computing Surveys, check out the [paper]&[media coverage by 新智元].
-
Sept. 2024: One paper about mitigating reward hacking in RLHF is accepted by NeurIPS 2024, congrats to my intern Yuchun.
-
Sept. 2024: Four papers about {catastrophic forgetting, distillation for CodeGen, speech modality expansion, and watermark} of language models are accepted by EMNLP 2024, congrats to my interns and coauthors.
-
Sept. 2024: One paper about data augmentation for multilabel classification is accepted by IEEE Transactions on Multimedia.
-
Aug. 2024: One paper about understanding multimodal alignment for MLLM is accepted by ACM ToMM.
-
Jul. 2024: One paper about multimodal fusion for MLLM is accepted by ACM MM 2024.
-
Jul. 2024: One paper about orthogonal optimizer for MoE is accepted by ECAI 2024.
-
Dec. 2023: Invited to serve as the Area Chair for EMNLP 2024.
-
May 2024: 🎉 Ten papers about {alignment, in-context learning, compression, evaluation, safety, and downstream adaptations} of language models are accepted by ACL 2024.
-
Mar. 2024: One paper about sparse graph Transformer is accepted by Neural Networks.
-
Mar. 2024: Two papers about {prompt transfer tuning & sheared backpropagation tuning of foundation models} are accepted by IEEE Transactions on Knowledge and Data Engineering and CVPR 2024, respectively.
-
Feb. 2024: Two papers about {prompt bias in LMs and multimodal translation dataset} are accepted by COLING 2024.
-
Dec. 2023: Invited to serve as the Area Chair for NAACL 2024, ACL 2024, & EACL 2024.
-
Dec. 2023: Two papers about {Seq2Seq LM pretraining & alleviating exposure bias for DiffModel} are accepted by IEEE Transactions on Knowledge and Data Engineering and AAAI 2024, respectively.
-
Oct. 2023: One paper about lexical choice for non-autoregressive translation is accepted by Computer Speech and Language.
-
Oct. 2023: One paper about training LM with adaptive sharpness-aware optimizer is accepted by Neural Networks.
-
Oct. 2023: Five papers about {high (data & model) efficiency, cross-modal alignment in speech translation, LLM quantization, and ChatGPT for machine translation} are accepted by EMNLP 2023, congrats to my interns and coauthors.
-
Sept. 2023: One paper about parameter-efficient knowledge distillation is accepted by IEEE Transactions on Multimedia.
-
Jul. 2023: Three papers about {cross-modal contrastive learning, knowledge alignment, and federated optimizer} of model training are accepted by ECAI 2023, IEEE TASLP, and TPAMI, respectively.
-
May 2023: 🎉 Nine papers about {training, evaluation, robustness, and downstream adaptation} of large models are accepted by ACL 2023, including two oral papers and one best paper nomination, congrats to my interns and coauthors.
-
May 2023: Two papers about GNN sparse training and a healthcare dataset are accepted by IEEE Transactions on Neural Networks and Learning Systems and Information Fusion, respectively.
-
Apr. 2023: One paper about flatter optimization for fedML is accepted by ICML 2023 (oral).
-
Mar. 2023: We release reports to better understand and harness the power of ChatGPT on language understanding (NLU), machine translation (MT), and MT evaluation. Enjoy it!
-
Mar. 2023: 🥂 I lead the R&D of the Vega series Large Language Models (织女系列自然语言大模型), which won the 2022 Technology Golden Award ("京东集团技术金项奖", the highest tech award at JD.com, Inc.), see internal media coverage.
-
Feb. 2023: One paper about knowledge-grounded multiview learning is accepted by IEEE Transactions on Knowledge and Data Engineering, congrats to my intern Qihuang.
-
Jan. 2023: Invited to serve as the Session Chair for AAAI 2023.
-
Jan. 2023: One paper about federated learning is accepted by ICLR 2023.
-
Jan. 2023: One paper about dynamic contrastive distillation is accepted by IEEE Transactions on Multimedia, congrats to my intern Jun.
-
Nov. 2022: One paper about memory-efficient pipeline parallelism of mixture-of-experts is accepted by IPDPS 2023, congrats to my intern Zheng.
-
Nov. 2022: Invited talk at China National Computer Congress 2022 (CNCC'22), check out the schedule.
-
Nov. 2022: One paper about simultaneous translation is accepted by AAAI 2023, congrats to my intern Hexuan.
-
Oct. 2022: 🏆 Our Vega v2 got 1st place on one of the most difficult general language understanding leaderboards - SuperGLUE! Check out the tech report.
-
Oct. 2022: Invited talks on Towards Efficient NLP Foundation Models -- Pretrain, Downstream Adaptation, and Beyond at Nankai Univ. and the Univ. of Chinese Academy of Sciences.
-
Oct. 2022: Two papers are accepted by EMNLP 2022, congrats to my interns Qihuang and Shwai.
-
Sep. 2022: 📖 Co-authored "White Paper on Artificial Intelligence Generated Content" is published, check out the [Chinese version]&[media coverage].
-
Aug. 2022: Two papers are accepted by COLING 2022, congrats to my interns Changtong and Bing.
-
Aug. 2022: 🥂 Our project "Super Deep Learning of JD Explore Academy" won a Top 30 place in the 2022 SAIL (Superior AI Leader / 卓越人工智能引领者) Award at the World Artificial Intelligence Conference, see media coverage.
-
Jul. 2022: 🏆 Our Vega-MT ranked 1st (Chinese<=>English, German<=>English, Czech<=>English, English=>Russian), 2nd (Russian=>English, Japanese=>English), and 3rd (English=>Japanese) in General Translation Task in WMT 2022.
-
Apr. 2022: One paper is accepted by NAACL 2022.
-
Apr. 2022: One paper is accepted by SIGIR 2022, congrats to my interns Jun and Fei.
-
Mar. 2022: One paper is accepted by CVPR 2022.
-
Feb. 2022: Submitted my Ph.D. thesis "Neural Machine Translation with Fully Information Transformation", which studies sufficient (adequate translation) and efficient (fast translation) information transformation.
-
Feb. 2022: One paper is accepted by ACL 2022.
-
Jan. 2022: 🏆 Our Vega v1 got 1st place on The General Language Understanding Evaluation (GLUE) benchmark! Check out the [tech report]&[media coverage].
-
Dec. 2021: Invited to serve as the Area Chair for ACL 2022.
-
Dec. 2021: Our Vega (织女) achieved SOTA performance on two GLUE tasks, surpassing human performance.
-
Aug. 2021: Two papers are accepted by EMNLP 2021 and its findings.
-
Aug. 2021: We organized the course "Advanced Topics of AI" at the School of Gifted Young, USTC. I am the lecturer for the NLP part.
-
Jul. 2021: 🏆 Ranked 1st in the Swahili-English Speech Translation Task at IWSLT 2021.
-
Jul. 2021: Two papers are accepted by IWSLT 2021.
-
May 2021: Three papers are accepted by ACL 2021 and its findings.
-
Mar. 2021: Invited to serve as the Session Chair for SDM 2021.
-
Jan. 2021: Two papers are accepted by ICLR 2021.
-
Jan. 2021: One paper is accepted by ICASSP 2021.
-
Sept. 2020: One paper is accepted by COLING 2020.
-
Sept. 2020: Two papers are accepted by WMT 2020.
-
Sept. 2020: One paper is accepted by EMNLP 2020.
-
Aug. 2020: 🏆 Ranked 2nd in the German-English Chat Translation Task at WMT 2020.
-
Apr. 2020: One paper is accepted by ACL 2020.
-
Apr. 2019: 🏆 Ranked 1st in the Finnish-English News Translation Task at WMT 2019.
Publications
† indicates intern/student under my supervision. ✉️ means corresponding author.
-
On Efficient Training of Large-Scale Deep Learning Models: A Literature Review.
Li Shen, Yan Sun, Zhiyuan Yu, Liang Ding, Xinmei Tian, and Dacheng Tao.
arXiv preprint, 2023. & ACM Computing Surveys, 2024 (ACM CSUR 2024). (CORE Rank A*)
-
InfoRM: Mitigating Reward Hacking in RLHF via Information-Theoretic Reward Modeling.
†Yuchun Miao, Sen Zhang, Liang Ding, Rong Bao, Lefei Zhang, and Dacheng Tao.
The Annual Conference on Neural Information Processing Systems, 2024 (NeurIPS 2024). (CORE Rank A*)
-
Revisiting Catastrophic Forgetting in Large Language Model Tuning.
†Hongyu Li, Liang Ding✉️, Meng Fang, and Dacheng Tao.
Findings of The Conference on Empirical Methods in Natural Language Processing, 2024 (EMNLP 2024). (CORE Rank A*)
-
Learning from Imperfect Data: Towards Efficient Knowledge Distillation of Autoregressive Language Models for Text-to-SQL.
†Qihuang Zhong, Kunfeng Chen, Liang Ding, Juhua Liu, Bo Du, and Dacheng Tao.
Findings of The Conference on Empirical Methods in Natural Language Processing, 2024 (EMNLP 2024). (CORE Rank A*)
-
Self-Powered LLM Modality Expansion for Large Speech-Text Models.
Tengfei Yu, Xuebo Liu, Zhiyi Hou, Liang Ding, Dacheng Tao, and Min Zhang.
The Conference on Empirical Methods in Natural Language Processing, 2024 (EMNLP 2024). (CORE Rank A*)
-
Context-Aware Watermark with Semantic Balanced Green-red Lists for Large Language Models.
Yuxuan Guo, Zhiliang Tian, Yiping Song, Tianlun Liu, Liang Ding, and Dongsheng Li.
The Conference on Empirical Methods in Natural Language Processing, 2024 (EMNLP 2024). (CORE Rank A*)
-
SpliceMix: A Cross-scale and Semantic Blending Augmentation Strategy for Multi-label Image Classification.
Lei Wang, Yibing Zhan, Leilei Ma, Dapeng Tao, Liang Ding, and Chen Gong.
arXiv preprint, 2023. & IEEE Transactions on Multimedia, 2024 (TMM 2024). (CORE Rank A*)
-
Can Linguistic Knowledge Improve Multimodal Alignment in Vision-Language Pretraining?.
†Fei Wang, Liang Ding✉️, Jun Rao, Ye Liu, Li Shen, and Changxing Ding✉️.
arXiv preprint, 2023. & ACM Transactions on Multimedia Computing, Communications, and Applications, 2024 (ACM ToMM 2024). (CORE Rank B)
-
WisdoM: Improving Multimodal Sentiment Analysis by Fusing Contextual World Knowledge.
†Wenbin Wang, Liang Ding (co-first author), Li Shen, Yong Luo✉️, Han Hu, and Dacheng Tao.
arXiv preprint, 2024. & ACM Multimedia, 2024 (ACM MM 2024). (CORE Rank A*)
-
Diversifying the Mixture-of-Experts Representation for Language Models with Orthogonal Optimizer.
†Boan Liu, Liang Ding✉️, Li Shen, Keqin Peng, Yu Cao, Dazhao Cheng✉️, and Dacheng Tao.
arXiv preprint, 2023. & The European Conference on Artificial Intelligence, 2024 (ECAI 2024). (CORE Rank A)
-
Uncertainty Aware Learning for Language Model Alignment.
†Yikun Wang, Rui Zheng, Liang Ding✉️, Qi Zhang, Dahua Lin, and Dacheng Tao.
The Annual Meeting of the Association for Computational Linguistics, 2024 (ACL 2024). (CORE Rank A*)
-
Revisiting Demonstration Selection Strategies in In-Context Learning.
†Keqin Peng, Liang Ding✉️, Yancheng Yuan✉️, Xuebo Liu, Min Zhang, Yuanxin Ouyang, and Dacheng Tao.
The Annual Meeting of the Association for Computational Linguistics, 2024 (ACL 2024). (CORE Rank A*)
-
Revisiting Knowledge Distillation for Autoregressive Language Models.
†Qihuang Zhong, Liang Ding, Li Shen, Juhua Liu, Bo Du, and Dacheng Tao.
The Annual Meeting of the Association for Computational Linguistics, 2024 (ACL 2024). (CORE Rank A*)
-
DB-LLM: Accurate Dual-Binarization for Efficient LLMs.
†Hong Chen, Chengtao Lv, Liang Ding, Haotong Qin, Xiabin Zhou, Yifu Ding, Xuebo Liu, Min Zhang, Jinyang Guo, Xianglong Liu, and Dacheng Tao.
Findings of The Annual Meeting of the Association for Computational Linguistics, 2024 (ACL 2024). (CORE Rank A*)
-
OOP: Object-Oriented Programming Evaluation Benchmark for Large Language Models.
†Shuai Wang, Liang Ding✉️, Li Shen, Yong Luo✉️, Bo Du, and Dacheng Tao.
Findings of The Annual Meeting of the Association for Computational Linguistics, 2024 (ACL 2024). (CORE Rank A*)
-
Error Analysis Prompting Enables Human-Like Translation Evaluation in Large Language Models.
†Qingyu Lu, †Baopu Qiu, Liang Ding, Kanjian Zhang, Tom Kocmi, and Dacheng Tao.
Technical report & arXiv preprint, 2023. & Findings of The Annual Meeting of the Association for Computational Linguistics, 2024 (ACL 2024). (CORE Rank A*)
(🎁A present for the MT evaluation community to better understand and harness the powerful ChatGPT)
-
Mitigating Hallucinations in Large Vision-Language Models with Instruction Contrastive Decoding.
Xintong Wang, Jingheng Pan, Liang Ding✉️, and Chris Biemann✉️.
Findings of The Annual Meeting of the Association for Computational Linguistics, 2024 (ACL 2024). (CORE Rank A*)
-
ROSE Doesn't Do That: Boosting the Safety of Instruction-Tuned Large Language Models with Reverse Prompt Contrastive Decoding.
†Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, and Dacheng Tao.
Findings of The Annual Meeting of the Association for Computational Linguistics, 2024 (ACL 2024). (CORE Rank A*)
-
POMP: Probability-driven Meta-graph Prompter for LLMs in Low-resource Unsupervised Neural Machine Translation.
†Shilong Pan, Zhiliang Tian, Liang Ding, Zhen Huang, Zhihua Wen, and Dongsheng Li.
The Annual Meeting of the Association for Computational Linguistics, 2024 (ACL 2024). (CORE Rank A*)
-
Speech Sense Disambiguation: Tackling Homophone Ambiguity in End-to-End Speech Translation.
Tengfei Yu, Xuebo Liu, Liang Ding, Kehai Chen, Dacheng Tao, and Min Zhang.
The Annual Meeting of the Association for Computational Linguistics, 2024 (ACL 2024). (CORE Rank A*)
-
Exploring Sparsity in Graph Transformers.
Chuang Liu, Yibing Zhan, Xueqi Ma, Liang Ding, Dapeng Tao, Jia Wu, Wenbin Hu, and Bo Du.
arXiv preprint, 2023. & Neural Networks, 2024 (Neural Netw 2024). (CORE Rank A)
-
PANDA: Prompt Transfer Meets Knowledge Distillation for Efficient Model Adaptation.
†Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, and Dacheng Tao.
arXiv preprint, 2022. & IEEE Transactions on Knowledge and Data Engineering, 2024 (TKDE 2024). (CORE Rank A*)
-
Sheared Backpropagation for Finetuning Foundation Models.
Zhiyuan Yu, Li Shen, Liang Ding, Xinmei Tian, Yixin Chen, and Dacheng Tao.
IEEE/CVF Computer Vision and Pattern Recognition Conference, 2024. (CVPR 2024). (CORE Rank A*)
-
Take Care of Your Prompt Bias! Investigating and Mitigating Prompt Bias in Factual Knowledge Extraction.
†Ziyang Xu, †Keqin Peng, Liang Ding✉️, Dacheng Tao, and Xiliang Lu.
The International Conference on Computational Linguistics, 2024. (COLING 2024). (CORE Rank A)
-
3AM: An Ambiguity-Aware Multimodal Machine Translation Dataset.
Xinyu Ma, Xuebo Liu, Derek F. Wong, Jun Rao, Bei Li, Liang Ding, Lidia S. Chao, Dacheng Tao, and Min Zhang.
The International Conference on Computational Linguistics, 2024. (COLING 2024). (CORE Rank A)
-
E2S2: Encoding-Enhanced Sequence-to-Sequence Pretraining for Language Understanding and Generation.
†Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, and Dacheng Tao.
arXiv preprint, 2022. & IEEE Transactions on Knowledge and Data Engineering, 2024 (TKDE 2024). (CORE Rank A*)
-
Multi-step Denoising Scheduled Sampling: Towards Alleviating Exposure Bias for Diffusion Models.
Zhiyao Ren, Yibing Zhan, Liang Ding, Gaoang Wang, Chaoyue Wang, Zhongyi Fan, and Dacheng Tao.
The AAAI Conference on Artificial Intelligence, 2024. (AAAI 2024). (CORE Rank A*)
-
Widening The Bottleneck of Lexical Choice for Non-Autoregressive Translation.
Longyue Wang, Liang Ding✉️ (co-first author), Siyou Liu, and Zhaopeng Tu.
Computer Speech and Language, 2023 (Comput Speech Lang 2023). (CORE Rank A)
-
AdaSAM: Boosting Sharpness-Aware Minimization with Adaptive Learning Rate and Momentum for Training Deep Neural Networks.
Hao Sun, Li Shen, Qihuang Zhong, Liang Ding, Shixiang Chen, Jingwei Sun, Jing Li, Guangzhong Sun, and Dacheng Tao.
arXiv preprint, 2023. & Neural Networks, 2023 (Neural Netw 2023). (CORE Rank A)
-
Self-Evolution Learning for Mixup: Enhance Data Augmentation on Few-Shot Text Classification Tasks.
Haoqi Zheng, Qihuang Zhong, Liang Ding, Zhiliang Tian, Xin Niu, Changjian Wang, Dongsheng Li, and Dacheng Tao.
The Conference on Empirical Methods in Natural Language Processing, 2023 (EMNLP 2023). (CORE Rank A*)
-
Merging Experts into One: Improving Computational Efficiency of Mixture of Experts.
†Shwai He, Run-Ze Fan, Liang Ding✉️, Li Shen, Tianyi Zhou✉️, and Dacheng Tao.
The Conference on Empirical Methods in Natural Language Processing, 2023 (EMNLP 2023 oral). (CORE Rank A*)
-
PromptST: Abstract Prompt Learning for End-to-End Speech Translation.
†Tengfei Yu, Liang Ding, Xuebo Liu, Kehai Chen, Meishan Zhang, Dacheng Tao, and Min Zhang.
The Conference on Empirical Methods in Natural Language Processing, 2023 (EMNLP 2023). (CORE Rank A*)
-
Zero-shot Sharpness-Aware Quantization for Pre-trained Language Models.
Miaoxi Zhu, Qihuang Zhong, Li Shen, Liang Ding, Juhua Liu, Bo Du, and Dacheng Tao.
The Conference on Empirical Methods in Natural Language Processing, 2023 (EMNLP 2023). (CORE Rank A*)
-
Towards Making the Most of ChatGPT for Machine Translation.
†Keqin Peng, Liang Ding✉️, Qihuang Zhong, Li Shen, Xuebo Liu, Min Zhang, Yuanxin Ouyang, and Dacheng Tao.
Technical report & arXiv preprint, 2023. & Findings of the Conference on Empirical Methods in Natural Language Processing, 2023 (EMNLP 2023). (CORE Rank A*)
(🎁A present for the MT community to better understand and harness the powerful ChatGPT)
-
Using Self-Supervised Dual Constraint Contrastive Learning for Cross-modal Retrieval.
Xintong Wang, Xiaoyu Li, Liang Ding, Sanyuan Zhao, and Chris Biemann.
The European Conference on Artificial Intelligence, 2023 (ECAI 2023). (CORE Rank A)
-
Unified Instance and Knowledge Alignment Pretraining for Aspect-based Sentiment Analysis.
Juhua Liu, †Qihuang Zhong, Liang Ding, Hua Jin, Bo Du, and Dacheng Tao.
arXiv preprint, 2021. & IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2023 (TASLP 2023).
-
Efficient Federated Learning via Local Adaptive Amended Optimizer with Linear Speedup.
Yan Sun, Li Shen, Hao Sun, Liang Ding, and Dacheng Tao.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023 (TPAMI 2023). (CORE Rank A*)
-
Self-Evolution Learning for Discriminative Language Model Pretraining.
†Qihuang Zhong, Liang Ding (co-first author), Juhua Liu, Bo Du, and Dacheng Tao.
Findings of The Annual Meeting of the Association for Computational Linguistics, 2023 (ACL 2023). (CORE Rank A*)
-
Revisiting Token Dropping Strategy in Efficient BERT Pretraining.
†Qihuang Zhong, Liang Ding, Juhua Liu, Xuebo Liu, Min Zhang, Bo Du, and Dacheng Tao.
The Annual Meeting of the Association for Computational Linguistics, 2023 (ACL 2023). (CORE Rank A*)
-
Token-Level Self-Evolution Training for Sequence-to-Sequence Learning.
†Keqin Peng, Liang Ding (co-first author), Qihuang Zhong, Yuanxin Ouyang, Wenge Rong, Zhang Xiong, and Dacheng Tao.
The Annual Meeting of the Association for Computational Linguistics, 2023 (ACL 2023). (CORE Rank A*)
(best paper nomination)
-
PAD-Net: An Efficient Framework for Dynamic Networks.
†Shwai He, Liang Ding✉️, Daize Dong, Boan Liu, Fuqiang Yu, and Dacheng Tao.
arXiv preprint, 2022. & The Annual Meeting of the Association for Computational Linguistics, 2023 (ACL 2023). (CORE Rank A*)
-
Toward Human-Like Evaluation for Natural Language Generation with Error Analysis.
†Qingyu Lu, Liang Ding (co-first author), Liping Xie, Kanjian Zhang, Derek F. Wong, and Dacheng Tao.
arXiv preprint, 2022. & The Annual Meeting of the Association for Computational Linguistics, 2023 (ACL 2023 oral). (CORE Rank A*)
-
CASN: Class-Aware Score Network for Textual Adversarial Detection.
†Rong Bao, Rui Zheng, Liang Ding, Qi Zhang, and Dacheng Tao.
The Annual Meeting of the Association for Computational Linguistics, 2023 (ACL 2023). (CORE Rank A*)
-
Divide, Conquer, and Combine: Mixture of Semantic-Independent Experts for Zero-Shot Dialogue State Tracking.
†Qingyue Wang, Liang Ding, Yanan Cao, Yibing Zhan, Zheng Lin, Shi Wang, Dacheng Tao, and Li Guo.
The Annual Meeting of the Association for Computational Linguistics, 2023 (ACL 2023 oral). (CORE Rank A*)
-
Unsupervised Dense Retrieval with Relevance-Aware Contrastive Pre-Training.
†Yibin Lei, Liang Ding✉️, Yu Cao, Changtong Zan, Andrew Yates, and Dacheng Tao.
Findings of The Annual Meeting of the Association for Computational Linguistics, 2023 (ACL 2023). (CORE Rank A*)
-
TransGEC: Improving Grammatical Error Correction with Translationese.
Tao Fang, Xuebo Liu, Derek F. Wong, Runzhe Zhan, Liang Ding, Lidia S. Chao, Dacheng Tao, and Min Zhang.
Findings of The Annual Meeting of the Association for Computational Linguistics, 2023 (ACL 2023). (CORE Rank A*)
-
Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks.
†Chuang Liu, Xueqi Ma, Yibing Zhan, Liang Ding, Dapeng Tao, Bo Du, Wenbin Hu, and Danilo Mandic.
arXiv preprint, 2022. & IEEE Transactions on Neural Networks and Learning Systems, 2023 (TNNLS 2023). (CORE Rank A*)
-
A Perioperative Risk Assessment Dataset with Multi-View Data Based on Online Accelerated Pairwise Comparison.
Xinyao Li, Yibing Zhan, Yanhua Zhao, Yiqiang Wu, Liang Ding, Yuanyuan Li, Dapeng Tao, and Hua Jin.
Information Fusion, 2023 (Inf Fusion 2023). (CORE Rank B)
-
Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape.
Yan Sun, Li Shen, Shixiang Chen, Liang Ding, and Dacheng Tao.
International Conference on Machine Learning, 2023. (ICML 2023 oral). (CORE Rank A*)
-
OmniForce: On Human-Centered, Large Model Empowered and Cloud-Edge Collaborative AutoML System.
(JD Explore Academy) Chao Xue, Wei Liu, Shuai Xie, Zhenfang Wang, Jiaxing Li, Xuyang Peng, Liang Ding, Shanshan Zhao, Qiong Cao, Yibo Yang, Fengxiang He, Bohua Cai, Rongcheng Bian, Yiyan Zhao, Heliang Zheng, Xiangyang Liu, Dongkai Liu, Daqing Liu, Li Shen, Chang Li, Shijin Zhang, Yukang Zhang, Guanpu Chen, Shixiang Chen, Yibing Zhan, Jing Zhang, Chaoyue Wang, and Dacheng Tao.
System report & arXiv preprint, 2023.
-
Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT.
†Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, and Dacheng Tao.
Technical report & arXiv preprint, 2023.
(🎁A present for the NLU community to better understand and harness the powerful ChatGPT)
-
Gapformer: Graph Transformer with Graph Pooling for Node Classification.
†Chuang Liu, Yibing Zhan, Xueqi Ma, Liang Ding, Dapeng Tao, Jia Wu, and Wenbin Hu.
International Joint Conference on Artificial Intelligence, 2023 (IJCAI 2023). (CORE Rank A*)
-
Prompt-Learning for Cross-Lingual Relation Extraction.
†Chiaming Hsu, †Changtong Zan, Liang Ding✉️, Longyue Wang, Xiaoting Wang, Weifeng Liu, Fu Lin, and Wenbin Hu.
IEEE International Joint Conference on Neural Networks, 2023 (IJCNN 2023). (CORE Rank B)
-
Knowledge Graph Augmented Network Towards Multi-View Representation Learning for Aspect-based Sentiment Analysis.
†Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Hua Jin, and Dacheng Tao.
arXiv preprint, 2022. & IEEE Transactions on Knowledge and Data Engineering, 2023 (TKDE 2023). (CORE Rank A*)
-
Dynamic Contrastive Distillation for Image-Text Retrieval.
†Jun Rao, Liang Ding (co-first author), Shuhan Qi, Meng Fang, Yang Liu, Li Shen, and Dacheng Tao.
arXiv preprint, 2022. & IEEE Transactions on Multimedia, 2023 (TMM 2023). (CORE Rank A*)
-
FedSpeed: Larger Local Interval, Less Communication Round, and Higher Generalization Accuracy.
Yan Sun, Li Shen, Tiansheng Huang, Liang Ding, and Dacheng Tao.
The International Conference on Learning Representations, 2023. (ICLR 2023). (CORE Rank A*)
-
Improving Simultaneous Machine Translation with Monolingual Data.
†Hexuan Deng, Liang Ding, Xuebo Liu, Meishan Zhang, Dacheng Tao, and Min Zhang.
The AAAI Conference on Artificial Intelligence, 2023. (AAAI 2023). (CORE Rank A*)
-
MPipeMoE: Memory Efficient MoE for Pre-trained Models with Adaptive Pipeline Parallelism.
†Zheng Zhang, Donglin Yang, Yaqi Xia, Liang Ding, Dacheng Tao, Xiaobo Zhou, and Dazhao Cheng.
IEEE International Parallel & Distributed Processing Symposium, 2023. (IPDPS 2023). (CORE Rank A)
-
Parameter-Efficient and Student-Friendly Knowledge Distillation.
†Jun Rao, Xv Meng, Liang Ding, Shuhan Qi, Xuebo Liu, Min Zhang, and Dacheng Tao.
arXiv preprint, 2022. & IEEE Transactions on Multimedia, 2023 (TMM 2023). (CORE Rank A*)
-
Bag of Tricks for Effective Language Model Pretraining and Downstream Adaptation: A Case Study on GLUE.
†Qihuang Zhong, Liang Ding (co-first author), Keqin Peng, Juhua Liu, Bo Du, Li Shen, Yibing Zhan, and Dacheng Tao.
Technical report & arXiv preprint, 2023.
-
Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE.
†Qihuang Zhong, Liang Ding (co-first author), Yibing Zhan, Yu Qiao, Yonggang Wen, Li Shen, Juhua Liu, Baosheng Yu, Bo Du, Yixin Chen, Xinbo Gao, Chunyan Miao, Xiaoou Tang, and Dacheng Tao.
Technical report & arXiv preprint, 2022.
-
Original or Translated? On the Use of Parallel Data for Translation Quality Estimation.
†Baopu Qiu, Liang Ding, Di Wu, Lin Shang, Yibing Zhan, and Dacheng Tao.
arXiv preprint, 2022.
-
Improving Sharpness-Aware Minimization with Fisher Mask for Better Generalization on Language Models.
†Qihuang Zhong, Liang Ding, Li Shen, Peng Mi, Juhua Liu, Bo Du, and Dacheng Tao.
Findings of the Conference on Empirical Methods in Natural Language Processing, 2022 (EMNLP 2022). (CORE Rank A*)
-
SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters.
†Shwai He, Liang Ding✉️, Daize Dong, Miao Zhang, and Dacheng Tao.
Findings of the Conference on Empirical Methods in Natural Language Processing, 2022 (EMNLP 2022). (CORE Rank A*)
-
Vega-MT: The JD Explore Academy Translation System for WMT22.
†Changtong Zan, †Keqin Peng, Liang Ding✉️ (co-first author), Baopu Qiu, Boan Liu, Shwai He, Qingyu Lu, Zheng Zhang, Chuang Liu, Weifeng Liu, Yibing Zhan, and Dacheng Tao.
The Conference on Machine Translation, 2022 (WMT 2022).
(Across all constrained high-resource tracks, Vega-MT won 7 first places, 2 second places, and 1 third place w.r.t. BLEU, and 8 first places and 2 second places w.r.t. COMET.)
-
On the Complementarity between Pre-Training and Random-Initialization for Resource-Rich Machine Translation.
†Changtong Zan, Liang Ding✉️, Li Shen, Yu Cao, Weifeng Liu✉️, and Dacheng Tao.
The International Conference on Computational Linguistics, 2022 (COLING 2022). (CORE Rank A)
-
A Contrastive Cross-Channel Data Augmentation Framework for Aspect-based Sentiment Analysis.
†Bing Wang, Liang Ding✉️, Qihuang Zhong, Ximing Li, and Dacheng Tao.
The International Conference on Computational Linguistics, 2022 (COLING 2022). (CORE Rank A)
-
Bridging Cross-Lingual Gaps During Leveraging the Multilingual Sequence-to-Sequence Pretraining for Text Generation and Understanding.
†Changtong Zan, Liang Ding✉️, Li Shen, Yu Cao, Weifeng Liu✉️, and Dacheng Tao.
arXiv preprint, 2022.
-
BLISS: Robust Sequence-to-Sequence Learning via Self-Supervised Input Representation.
†Zheng Zhang, Liang Ding✉️, Dazhao Cheng✉️, Xuebo Liu, Min Zhang, and Dacheng Tao.
arXiv preprint, 2022.
-
Redistributing Low-Frequency Words: Making the Most of Monolingual Data in Non-Autoregressive Translation.
Liang Ding, Longyue Wang, Shuming Shi, Dacheng Tao, and Zhaopeng Tu.
The Annual Meeting of the Association for Computational Linguistics, 2022 (ACL 2022). (CORE Rank A*)
(Received a full meta-review score)
-
Where Does the Performance Improvement Come From? - A Reproducibility Concern about Image-Text Retrieval.
†Jun Rao, Fei Wang, Liang Ding✉️, Shuhan Qi, Yibing Zhan, Weifeng Liu✉️, and Dacheng Tao.
ACM Special Interest Group on Information Retrieval, 2022 (SIGIR 2022). (CORE Rank A*)
-
Interpretable Proof Generation via Iterative Backward Reasoning.
Hanhao Qu, Yu Cao, Jun Gao, Liang Ding, and Ruifeng Xu.
Annual Conference of the North American Chapter of the Association for Computational Linguistics, 2022 (NAACL 2022). (CORE Rank A)
-
Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning.
†Lin Zhang, Li Shen, Liang Ding, Dacheng Tao, and Lingyu Duan.
IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022 (CVPR 2022). (CORE Rank A*)
-
Improving Neural Machine Translation by Denoising Training.
Liang Ding, Keqin Peng, and Dacheng Tao.
arXiv preprint, 2022.
-
SLUA: A Super Lightweight Unsupervised Word Alignment Model via Cross-Lingual Contrastive Learning.
Di Wu, Liang Ding, Shuo Yang, and Dacheng Tao.
arXiv preprint, 2021. & The International Conference on Spoken Language Translation, 2022 (IWSLT 2022).
-
Improving Neural Machine Translation by Bidirectional Training.
Liang Ding, Di Wu, and Dacheng Tao.
The Conference on Empirical Methods in Natural Language Processing, 2021 (EMNLP 2021). (CORE Rank A*)
-
On the Complementarity between Pre-training and Back-Translation.
Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, Shuming Shi, and Zhaopeng Tu.
Findings of the Conference on Empirical Methods in Natural Language Processing, 2021 (EMNLP 2021). (CORE Rank A*)
-
The USYD-JD Speech Translation System for IWSLT2021.
Liang Ding, Di Wu, and Dacheng Tao.
The International Conference on Spoken Language Translation, 2021 (IWSLT 2021).
(Winning submission, out of 42 teams, for the Sw-En speech translation task, exceeding the 2nd place by more than 10 BLEU points)
-
Self-Guided Curriculum Learning for Neural Machine Translation.
Lei Zhou, Liang Ding, Kevin Duh, Shinji Watanabe, Ryohei Sasano, and Koichi Takeda.
The International Conference on Spoken Language Translation, 2021 (IWSLT 2021).
-
Rejuvenating Low-Frequency Words: Making the Most of Parallel Data in Non-Autoregressive Translation.
Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, Dacheng Tao, and Zhaopeng Tu.
The Annual Meeting of the Association for Computational Linguistics, 2021 (ACL 2021). (CORE Rank A*)
-
Progressive Multi-Granularity Training for Non-Autoregressive Translation.
Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, Dacheng Tao, and Zhaopeng Tu.
Findings of the Annual Meeting of the Association for Computational Linguistics, 2021 (ACL 2021). (CORE Rank A*)
-
On the Copying Behaviors of Pre-Training for Neural Machine Translation.
Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, Shuming Shi, and Zhaopeng Tu.
Findings of the Annual Meeting of the Association for Computational Linguistics, 2021 (ACL 2021). (CORE Rank A*)
-
Bridging the Gap between Clean Data Training and Real World Inference for Spoken Language Understanding.
Di Wu, Liang Ding, Yiren Chen, and Dacheng Tao.
arXiv preprint, 2021.
-
Understanding and Improving Lexical Choice in Non-Autoregressive Translation.
Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, Dacheng Tao, and Zhaopeng Tu.
The International Conference on Learning Representations, 2021 (ICLR 2021). (CORE Rank A*)
-
Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning.
Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, and Zhaopeng Tu.
The International Conference on Learning Representations, 2021 (ICLR 2021). (CORE Rank A*)
-
Towards Efficiently Diversifying Dialogue Generation via Embedding Augmentation.
Yu Cao, Liang Ding, Zhiliang Tian, and Meng Fang.
IEEE International Conference on Acoustics, Speech and Signal Processing, 2021 (ICASSP 2021). (CORE Rank B)
-
Context-Aware Cross-Attention for Non-Autoregressive Translation.
Liang Ding, Longyue Wang, Di Wu, Dacheng Tao, and Zhaopeng Tu.
The International Conference on Computational Linguistics, 2020 (COLING 2020). (CORE Rank A)
-
SlotRefine: A Fast Non-Autoregressive Model for Joint Intent Detection and Slot Filling.
Di Wu, Liang Ding, Fan Lu, and Jian Xie.
The Conference on Empirical Methods in Natural Language Processing, 2020 (EMNLP 2020). (CORE Rank A*)
-
Self-Attention with Cross-Lingual Position Representation.
Liang Ding, Longyue Wang, and Dacheng Tao.
The Annual Meeting of the Association for Computational Linguistics, 2020 (ACL 2020). (CORE Rank A*)
-
Zero-Shot Translation Quality Estimation with Explicit Cross-Lingual Patterns.
Lei Zhou, Liang Ding, and Koichi Takeda.
The Conference on Machine Translation, 2020 (WMT 2020).
-
Tencent AI Lab Machine Translation Systems for the WMT20 Chat Translation Task.
Longyue Wang, Zhaopeng Tu, Xing Wang, Li Ding, Liang Ding, and Shuming Shi.
The Conference on Machine Translation, 2020 (WMT 2020).
-
The University of Sydney's Machine Translation System for WMT19.
Liang Ding and Dacheng Tao.
The Conference on Machine Translation, 2019 (WMT 2019).
(Winning submission to the Fi-En news translation task, exceeding Microsoft Research by more than 1.1 BLEU points)
-
Recurrent Graph Syntax Encoder for Neural Machine Translation.
Liang Ding and Dacheng Tao.
arXiv preprint, 2019.
Competitions and Shared Tasks
-
SuperGLUE Benchmark, ranked 1st with an average score of 91.3 (since Oct. 8, 2022).
-
WMT 2022, ranked 1st on the Chinese<=>English, German<=>English, Czech<=>English, and English=>Russian general translation tasks, 2nd on Russian=>English and Japanese=>English, and 3rd on English=>Japanese.
-
GLUE Benchmark, ranked 1st with an average score of 91.3 (since Jan. 1, 2022).
-
IWSLT 2021, ranked 1st on the Swahili-English speech translation task.
-
WMT 2020, ranked 2nd on the German-to-English chat translation shared task.
-
Tencent AI Innovation Competition, ranked 3rd on the "input tips in human-computer interaction translation" problem.
-
WMT 2019, ranked 1st on the Finnish-to-English news translation shared task.
-
CWMT 2017, ranked 3rd on the Japanese-to-Chinese patent translation shared task.
Professional Services
- Area Chair: ACL (2022-2024)/ EMNLP (2023-2024)/ NAACL 2024/ EACL 2024/ ACL Rolling Review (ARR)
- Session Chair: AAAI 2023/ SDM 2021
- Conference Committee: ACL/ EMNLP/ COLING/ NAACL/ EACL/ AACL/ KDD/ SDM/ ICLR/ NeurIPS/ ICML/ AAAI/ IJCAI/ CVPR/ ICCV/ ECCV/ WACV etc.
- Journal Reviewer: Transactions of the Association for Computational Linguistics (TACL)/ IEEE Transactions on Neural Networks and Learning Systems (TNNLS)/ IEEE Transactions on Multimedia (TMM)/ IEEE Transactions on Knowledge and Data Engineering (TKDE)/ Computational Linguistics (CL)/ Artificial Intelligence (AI)/ Knowledge-Based Systems (KBS)/ ACM Transactions on the Web (Distinguished Reviewer)/ IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP)/ Natural Language Engineering/ ACM Transactions on Asian and Low-Resource Language Information Processing/ Neural Computation/ Neurocomputing/ Neural Networks (NN)/ Engineering Applications of Artificial Intelligence (EAAI) etc.
- Member: ACL (2020-)/ IEEE (2020-)
Group Members
I am fortunate to work with these brilliant students (both onsite and remote; see alumni here):
- Changtong Zan (PhD from China University of Petroleum, 07/2021-present)
- Qihuang Zhong (PhD from Wuhan University, 09/2021-present)
- Keqin Peng (PhD from Beihang University, 09/2022-present)
- Rong Bao (PhD from Fudan University, 09/2022-present)
- Ziyang Xu (Master from Wuhan University, 11/2022-present)
- Qingyu Lu (PhD from Southeast University, 05/2023-present)
- Baopu Qiu (Master from Nanjing University, 05/2023-present)
- Fei Wang (PhD from South China University of Technology, 09/2022-present)
- Shizhan Cai (MPhil from The University of Sydney, 07/2023-present)
Selected Awards
- 2023: Best Paper Nomination, ACL.
- 2022-23: Technology Golden Award (京东集团技术金项奖), the highest tech. award at JD.com, Inc.
- 2022: Distinguished Reviewer, ACM Transactions on the Web.
- 2022: Superior AI Leader Award, The World Artificial Intelligence Conference.
- 2018: Beijing Outstanding Graduate, The Education Committee of Beijing.
- 2013: National Scholarship, Ministry of Education of P.R. China.
*Last updated: December 2024.