Yen-Yu Chang
Master's Student at Stanford
yenyu [at]
Another Me
I love sports, especially basketball and table tennis.


Yen-Yu Chang is a master's student in the Electrical Engineering Department at Stanford University, working with Prof. Jure Leskovec and Prof. Pan Li. He earned his Bachelor's degree in Electrical Engineering from National Taiwan University, where he worked with Prof. Ho-Lin Chen, Prof. Shou-De Lin, and Prof. Hung-Yi Lee during his undergraduate studies. If you would like to learn more about him, please see his [ Résumé ] or contact him at yenyu [at]


  • Machine Learning / Deep Learning
  • Structure Learning / Graph Mining
  • Reinforcement Learning / Multiagent Systems


National Taiwan University (NTU)
B.S. in Electrical Engineering, 2018

Timeline & Experiences

2019 Sep. -
EE master's student @ Stanford
Deep Learning, Graph Mining, and Anomaly Detection
2019 Jul. - 2019 Sep.
Summer Research Intern @ Stanford
2014 - 2018
Undergraduate student & researcher @ NTU
Electrical Engineering department
Game Theory and Molecular Computing Laboratory & Speech Processing and Machine Learning Laboratory
Primary focus:
- Network creation games
- Price of anarchy (PoA)
- Speech enhancement
Computer Science department
Machine Discovery and Social Network Mining Laboratory
Primary focus:
- (Multiagent) Reinforcement learning
- Time series prediction
He was born.

Selected Publications

Inductive Representation Learning in Temporal Networks via Causal Anonymous Walks

In this paper, we propose Causal Anonymous Walks (CAWs) to inductively represent a temporal network. We further propose a neural-network model, CAW-N, to encode CAWs, and pair it with a CAW sampling strategy with constant memory and time cost to support online training and inference. CAW-N is evaluated on link prediction over 6 real temporal networks and uniformly outperforms previous SOTA methods by an average 15% AUC gain in the inductive setting. CAW-N also outperforms previous methods on 5 of the 6 networks in the transductive setting.

ICLR 2021 (Virtual)
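The idea behind anonymous walks (which CAWs extend with a causal, set-based relabeling) can be illustrated with a minimal sketch: each node in a walk is replaced by the index of its first occurrence, so walks over different nodes that share the same revisit pattern map to the same representation. The function name below is illustrative, not from the paper.

```python
def anonymize(walk):
    """Replace each node in a walk by the index of its first occurrence,
    so walks with the same revisit pattern share one representation."""
    first_seen = {}
    out = []
    for node in walk:
        if node not in first_seen:
            first_seen[node] = len(first_seen)
        out.append(first_seen[node])
    return out

# Two walks over disjoint node sets, same visit pattern:
print(anonymize(["a", "b", "a", "c"]))  # [0, 1, 0, 2]
print(anonymize(["x", "y", "x", "z"]))  # [0, 1, 0, 2]
```

This identity-hiding step is what makes walk-based representations transferable to unseen nodes, i.e., inductive.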
F-FADE: Frequency Factorization for Anomaly Detection in Edge Streams

We propose F-FADE, a new approach for detecting anomalies in edge streams, which uses a novel frequency-factorization technique to efficiently model the time-evolving distributions of frequencies of interactions between node pairs. Our experiments on one synthetic and six real-world dynamic networks show that F-FADE achieves state-of-the-art performance and can detect anomalies that previous methods are unable to find.

WSDM 2021 (Virtual)
A Regulation Enforcement Solution for Multi-agent Reinforcement Learning

In this paper, we propose a framework to solve the following problem: in a decentralized environment where not all agents are initially compliant with regulations, can we develop a mechanism such that it is in the self-interest of non-compliant agents to eventually comply? We use empirical game-theoretic analysis to justify our method.

AAMAS 2019 Montreal, QC
Designing Non-greedy Reinforcement Learning Agents with Diminishing Reward Shaping

This paper addresses an issue in multi-agent RL that arises when agents possess varying capabilities. We introduce a simple method to train non-greedy agents at nearly no extra cost. Our method achieves non-homogeneous equality, requires only local information, and is cost-effective, generalizable, and configurable.

AAAI/ACM conference on AI, Ethics, Society 2018 (Oral) New Orleans, LA
A Memory-Network Based Solution for Multivariate Time-Series Forecasting

Inspired by Memory Networks for question-answering tasks, we propose a deep learning model named Memory Time-series Network (MTNet) for time-series forecasting. Additionally, its attention mechanism makes MTNet interpretable.

ANS: Adaptive Network Scaling for Deep Rectifier Reinforcement Learning Models

This work provides a thorough study of how reward scaling affects the performance of deep reinforcement learning agents. We also propose an Adaptive Network Scaling framework that finds a suitable scale for the rewards during learning, leading to better performance. We conduct empirical studies to justify the solution.

Heterogeneous Star Celebrity Games

In this paper, we study heterogeneous star celebrity games. We prove that the PoA is upper bounded by O(n/β) for all heterogeneous star celebrity games. The bound is asymptotically tight even when restricted to the max celebrity game model and matches the upper bound on the star celebrity game model. We also show that this upper bound is tight for an extension of the bounded-distance network creation games.

[ pdf ]
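For readers unfamiliar with the price of anarchy (PoA), the standard game-theoretic definition compares the worst-case Nash equilibrium to the social optimum; the notation below is the textbook form, not taken from the paper:

```latex
\mathrm{PoA} \;=\; \frac{\max_{s \in \mathrm{NE}} \, \mathrm{cost}(s)}{\min_{s^{*} \in S} \, \mathrm{cost}(s^{*})}
```

A PoA bound of O(n/β) thus says that selfish network creation degrades social cost by at most that factor relative to a centrally optimal network.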

Honors & Awards


  • Ranked 19th (out of 4180) / KDD CUP - Main Track / 2018
  • Ranked 4th (out of 4180) / KDD CUP - Specialized Prize for long term prediction / 2018


  • Dean's List / National Taiwan University / 2016
  • Finalist (Top 30) / International Physics Olympiad Domestic Final / 2013