Timeline & Experiences
Game Theory and Molecular Computing Laboratory & Speech Processing and Machine Learning Laboratory
- Network creation games
- Price of anarchy (PoA)
- Speech enhancement
Computer Science Department
Machine Discovery and Social Network Mining Laboratory
- (Multiagent) Reinforcement learning
- Time series prediction
In this paper, we propose Causal Anonymous Walks (CAWs) to inductively represent a temporal network. We further propose a neural network model, CAW-N, to encode CAWs, and pair it with a CAW sampling strategy with constant memory and time cost to support online training and inference. CAW-N is evaluated on link prediction over 6 real temporal networks and uniformly outperforms previous SOTA methods, with an average AUC gain of 15% in the inductive setting. CAW-N also outperforms previous methods on 5 of the 6 networks in the transductive setting.
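The relabeling idea behind anonymous walks is to replace node identities with the index of each node's first appearance in the walk, so that walks are compared by shape rather than by identity. A minimal sketch of that step (an illustrative simplification; the paper's CAWs use a set-based, causality-aware variant of this idea):

```python
def anonymize_walk(walk):
    """Map each node in the walk to the index of its first appearance,
    discarding raw node identities (the anonymous-walk relabeling)."""
    first_seen = {}
    out = []
    for node in walk:
        if node not in first_seen:
            first_seen[node] = len(first_seen)  # next unused index
        out.append(first_seen[node])
    return out

print(anonymize_walk(["a", "b", "a", "c"]))  # → [0, 1, 0, 2]
```

Two walks visiting different nodes in the same pattern (e.g. `a→b→a` and `x→y→x`) map to the same anonymized sequence, which is what makes the representation inductive.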
We propose F-FADE, a new approach for detecting anomalies in edge streams, which uses a novel frequency-factorization technique to efficiently model the time-evolving distributions of interaction frequencies between node pairs. Our experiments on one synthetic and six real-world dynamic networks show that F-FADE achieves state-of-the-art performance and can detect anomalies that previous methods are unable to find.
In this paper, we propose a framework to solve the following problem: in a decentralized environment where not all agents initially comply with regulations, can we design a mechanism such that it is in the self-interest of non-compliant agents to become compliant? We use empirical game-theoretic analysis to justify our method.
This paper addresses an issue in multi-agent RL that arises when agents possess varying capabilities. We introduce a simple method to train non-greedy agents at nearly no extra cost. Our model achieves the following goals: non-homogeneous equality, reliance on local information only, cost-effectiveness, generalizability, and configurability.
Inspired by the Memory Network for question-answering tasks, we propose a deep-learning-based model named the Memory Time-series Network (MTNet) for time series forecasting. Additionally, the attention mechanism we designed makes MTNet interpretable.
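The interpretability mentioned above comes from inspecting attention weights over memory slots. A generic dot-product attention sketch (illustrative only; MTNet's exact architecture is not reproduced here, and the function name is hypothetical):

```python
import numpy as np

def attention(query, memory):
    """Dot-product attention over memory slots.

    query:  (d,) vector; memory: (n_slots, d) matrix.
    Returns the softmax weights (inspectable for interpretability)
    and the weighted context vector.
    """
    scores = memory @ query                   # one score per slot, shape (n_slots,)
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    context = weights @ memory                # convex combination of slots
    return weights, context
```

Because the weights form a probability distribution over past segments, plotting them shows which historical periods the forecast attended to.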
This work provides a thorough study of how reward scaling affects the performance of deep reinforcement learning agents. We also propose an Adaptive Network Scaling framework that finds a suitable scale for the rewards during learning to improve performance, and we conduct empirical studies to validate the approach.
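To illustrate why reward scale matters, a common baseline is to normalize rewards by a running estimate of their standard deviation before they reach the learner. This sketch is a hypothetical illustration using Welford's online algorithm, not the paper's Adaptive Network Scaling method:

```python
class RunningRewardScaler:
    """Rescale rewards by a running standard deviation (Welford's algorithm).
    Hypothetical baseline for illustration, NOT the paper's ANS framework."""

    def __init__(self, eps=1e-8):
        self.n = 0          # number of rewards seen
        self.mean = 0.0     # running mean
        self.m2 = 0.0       # running sum of squared deviations
        self.eps = eps      # guards against division by zero

    def scale(self, r):
        # Update running statistics with the new reward.
        self.n += 1
        delta = r - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (r - self.mean)
        # Divide by the running std (fall back to 1.0 before it is defined).
        std = (self.m2 / self.n) ** 0.5 if self.n > 1 else 1.0
        return r / (std + self.eps)
```

Normalizing this way keeps the magnitude of the learning signal roughly constant across environments whose raw rewards differ by orders of magnitude.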
In this paper, we study the problem of heterogeneous star celebrity games. We prove that the PoA is upper bounded by O(n/β) for all heterogeneous star celebrity games. This bound is asymptotically tight even when restricted to the max celebrity game model, and it matches the upper bound for the star celebrity game model. We also show that this upper bound is tight for an extension of the bounded-distance network creation games.
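For reference, the price of anarchy bounded above has the standard definition: the worst-case ratio between the social cost of a Nash equilibrium and the optimal social cost,

```latex
\mathrm{PoA} \;=\; \frac{\max_{s \in \mathcal{E}} \mathrm{SC}(s)}{\min_{s \in \mathcal{S}} \mathrm{SC}(s)},
```

where $\mathcal{S}$ is the set of all strategy profiles, $\mathcal{E} \subseteq \mathcal{S}$ the set of Nash equilibria, and $\mathrm{SC}$ the social cost; PoA $= 1$ means selfish play loses nothing relative to the optimum.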
Honors & Awards
- Ranked 19th (out of 4180) / KDD CUP - Main Track / 2018
- Ranked 4th (out of 4180) / KDD CUP - Specialized Prize for long term prediction / 2018
- Dean's List / National Taiwan University / 2016
- Finalist (Top 30) / International Physics Olympiad Domestic Final / 2013