Welcome to Tieyuan Chen's Homepage
👋 About Me
Hello! I am Tieyuan Chen, a third-year Ph.D. student (2023–present) at Shanghai Jiao Tong University, School of Electronic Information and Electrical Engineering (SEIEE), advised by Prof. Weiyao Lin. To date, I have published 4 first-author papers (7 papers in total) during my Ph.D. studies, at top-tier venues including T-PAMI (×2), ICLR (×2), NeurIPS (×2), and T-CSVT.
Previously, I received my B.Eng. degree from Sichuan University, College of Electronics and Information Engineering (CEIE) (2019–2023), ranking 1/29.
I was selected for the Joint PhD Program at Beijing Zhongguancun Academy (Sep. 2024 – June 2028).
Currently, I am a Research Intern at the AGI Center, Ant Research Institute (Mar. 2025 – present), working under the supervision of Jianguo Li, Tao Lin, Haoxing Chen, and Huabin Liu.
🔬 Research Interests
My research focuses on:
- 🎥 Video Understanding & Video Reasoning
- 🧠 Large Language Models (LLMs) & Multimodal LLMs (MLLMs)
- 🔗 Causal Reasoning and Event-level Modeling
📫 Feel free to reach out via email:
tieyuanchen@sjtu.edu.cn
🥇 Honors and Awards
- China National Scholarship (2021) – Top 1%
- China National Scholarship (2022) – Top 1%
- Sichuan University Comprehensive Special Scholarship (2022) – Top 0.1%
- Sichuan University Hundred Excellent Student (2022) – Top 0.2%
- Sichuan Province Outstanding Graduate (2023) – Top 3%
📄 First-Author Publications
**MECD: Unlocking Multi-Event Causal Discovery in Video Reasoning**
Tieyuan Chen, Huabin Liu, Tianyao He, Yihang Chen, Chaofan Gan, Xiao Ma, Cheng Zhong, Yang Zhang, Yingxue Wang, Hui Lin, Weiyao Lin
Conference on Neural Information Processing Systems (NeurIPS), 2024 (Spotlight, Top 2.4%)

**DND: Boosting Large Language Models with Dynamic Nested Depth**
Tieyuan Chen, Xiaodong Chen, Haoxing Chen, Zhenzhong Lan, Weiyao Lin, Jianguo Li
International Conference on Learning Representations (ICLR), 2026

**MECD+: Unlocking Event-Level Causal Graph Discovery for Video Reasoning**
Tieyuan Chen, Huabin Liu, Yi Wang, Yihang Chen, Tianyao He, Chaofan Gan, Huanyu He, Weiyao Lin
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)

**CSTA: Spatial-Temporal Causal Adaptive Learning for Exemplar-Free Video Class-Incremental Learning**
Tieyuan Chen, Huabin Liu, Chern Hong Lim, John See, Xing Gao, Junhui Hou, Weiyao Lin
IEEE Transactions on Circuits and Systems for Video Technology (TCSVT)
📝 Technical Reports
Since March 2025, I have been actively involved in research on AR-based LLMs, diffusion-based LLMs, and diffusion-based VLMs at inclusionAI. Below are the technical reports and open-source models I have contributed to during this period:
LLaDA2.0-Uni: Unifying Multimodal Understanding and Generation with Diffusion Large Language Model
LLaDA2.0-Uni is a unified discrete diffusion language model, achieving performance comparable to specialized vision-language models while enabling efficient inference and high-fidelity image generation.
Role: Contributor (Mask Token Reweighting Loss & Data Processing) | Apr. 2026
LLaDA-MoE: A Sparse MoE Diffusion Language Model
The first open-source Mixture-of-Experts (MoE) diffusion large language model, pre-trained from scratch on approximately 20 trillion tokens.
Role: Contributor (Megatron AI infra) | Oct. 2025
DND: Boosting Large Language Models with Dynamic Nested Depth
Improves LLM inference efficiency and reasoning capability by dynamically adjusting computation depth through a novel nested architecture.
Role: Independent First Author | Sep. 2025
Grove MoE: Towards Efficient and Superior MoE LLMs with Adjugate Experts
Proposes a novel MoE architecture utilizing adjugate experts to achieve better parameter efficiency and overall model performance.
Role: Core Contributor | Aug. 2025
🎓 Academic Service
Reviewer for Top Conferences
- NeurIPS (2025), ICLR (2025, 2026), ICML (2026), CVPR (2025, 2026), ICCV (2025), ECCV (2026), AAAI (2025), BMVC (2026)