Xinpeng Wang

I am a second-year PhD student at the Munich AI & NLP (MaiNLP) research lab at LMU Munich. My supervisor is Barbara Plank.

Previously, I completed my M.Sc. degree in Robotics, Cognition, Intelligence at the Technical University of Munich, where I was a student researcher at the Visual Computing & AI Lab working on indoor scene synthesis. I was also a teaching assistant for the course Introduction to Deep Learning (IN2346).

My research interests broadly lie in the fields of Deep Learning, Machine Learning, Computer Vision, and NLP.

Email  /  GitHub  /  Google Scholar  /  LinkedIn  /  Twitter  /  CV


Research


"My Answer is C": First-Token Probabilities Do Not Match Text Answers in Instruction-Tuned Language Models


Xinpeng Wang, Bolei Ma, Chengzhi Hu, Leon Weber-Genzel, Paul Röttger, Frauke Kreuter, Dirk Hovy, Barbara Plank
preprint, 2024
arxiv

We showed that first-token probabilities do not match the text answers of instruction-tuned language models.


ACTOR: Active Learning with Annotator-specific Classification Heads to Embrace Human Label Variation


Xinpeng Wang, Barbara Plank
EMNLP, 2023
arxiv

We proposed an active learning framework that uses a multi-head model to capture individual annotators. We designed different acquisition functions and showed that our active learning setup achieves performance comparable to full-scale training while saving up to 70% of the annotation budget.


How to Distill your BERT: An Empirical Study on the Impact of Weight Initialisation and Distillation Objectives


Xinpeng Wang, Leonie Weissweiler, Hinrich Schütze, Barbara Plank
ACL, 2023
arxiv / code

We showed that initialising the student model from lower teacher layers gives a significant performance improvement compared to using higher layers. We also studied the robustness of different distillation objectives under various initialisation choices.


Sceneformer: Indoor Scene Generation with Transformers


Xinpeng Wang, Chandan Yeshwanth, Matthias Nießner
3DV, 2021
oral
arxiv / video / code

We proposed a transformer model for indoor scene generation conditioned on room layout and text descriptions.




Projects

These include coursework and practical course projects.


Domain Specific Multi-Lingually Aligned Word Embeddings


Machine Learning for Natural Language Processing Applications
2021-07
report


Curiosity Driven Learning


Advanced Deep Learning in Robotics
2021-03
report

Evaluated and compared count-based and prediction-based curiosity-driven learning in different Atari game environments.

Teaching


Introduction to Deep Learning (IN2346)


SS 2020, WS 2020/2021
Teaching Assistant
website


Design and source code from Jon Barron's website