
Hongfei Xu (Lecturer)

Published: 2023-04-13

Hongfei Xu, male, born in July 1997, is a postgraduate supervisor at the School of Computer and Artificial Intelligence, Zhengzhou University (Software Engineering, Computer Technology, and Artificial Intelligence), and a member of the China Computer Federation Technical Committee on Natural Language Processing.

He obtained his PhD (summa cum laude) from Saarland University, Germany. His research focuses on natural language processing, specifically machine translation, neural model architectures, training techniques for neural models, and low-resource NLP.

He has served as a program committee member for conferences including ACL, EMNLP, NAACL, EACL, ARR, COLING, IJCAI, AAAI, CCMT, and IALP since 2020, and as an action editor/area chair for ARR (ACL, EMNLP, NAACL, EACL) since 2024. He served as a session chair at IJCAI 2020 and IALP 2021.

Requirements for students: self-motivated, creative, and responsive. Please get in touch in advance.

Projects

2024-2026, National Natural Science Foundation of China, Principal Investigator

2024-2025, China Postdoctoral Science Foundation, Principal Investigator

2023-2024, Natural Science Foundation of Henan Province, Principal Investigator

2023-, Automatic Chinese Text Correction (enterprise-funded), Principal Investigator

2021, German Federal Ministry for Education and Research (Cora4NLP subcontract), Principal Investigator

Education

Zhengzhou University, Communication Engineering, Bachelor

Zhengzhou University, Computer Technology, Master

Saarland University, Language Science and Technology, Ph.D.


Work Experience

German Research Center for Artificial Intelligence (DFKI), Junior Researcher

Huawei Noah's Ark Lab, Intern

School of Computer and Artificial Intelligence, Zhengzhou University, Lecturer


Research Interests

Machine Translation, Automatic Error Correction, Information Extraction, Language Resources and Evaluation, Natural Language Processing, Computational Intelligence

Publications

Hongfei Xu, Yang Song, Qiuhui Liu, Josef van Genabith, Deyi Xiong. Rewiring the Transformer with Depth-Wise LSTMs. LREC-COLING 2024. (CCF B)

Songhua Yang, Hanjia Zhao, Senbin Zhu, Guangyu Zhou, Hongfei Xu, Yuxiang Jia, Hongying Zan. Zhongjing: Enhancing the Chinese Medical Capabilities of Large Language Model through Expert Feedback and Real-world Multi-turn Dialogue. AAAI 2024. (CCF A)

Hongyang Chang, Hongfei Xu*, Josef van Genabith, Deyi Xiong, Hongying Zan. JoinER-BART: Joint Entity and Relation Extraction with Constrained Decoding, Representation Reuse and Fusion. TASLP 2023. (CCF B, CAS-2 top)

Xiaoying Wang, Lingling Mu, Hongfei Xu*. Improving Chinese-Centric Low-Resource Translation Using English-Centric Pivoted Parallel Data. IALP 2023.

Yifan Guo, Hongying Zan, Hongfei Xu*. Joint Modeling of Chinese Minority Language Translation Tasks. IALP 2023.

Tengxun Zhang, Hongfei Xu, Josef van Genabith, Deyi Xiong, Hongying Zan. NAPG: Non-Autoregressive Program Generation for Hybrid Tabular-Textual Question Answering. NLPCC 2023. (CCF C)

Songhua Yang, Tengxun Zhang, Hongfei Xu, Yuxiang Jia. Improving Aspect Sentiment Triplet Extraction with Perturbed Masking and Edge-Enhanced Sentiment Graph Attention Network. IJCNN 2023. (CCF C)

Jingyi Zhang, Gerard de Melo, Hongfei Xu, Kehai Chen. A Closer Look at Transformer Attention for Multilingual Translation. WMT 2023.

Wenjie Hao, Hongfei Xu, Deyi Xiong, Hongying Zan, Lingling Mu. ParaZh-22M: A Large-Scale Chinese Parabank via Machine Translation. COLING 2022. (CCF B)

Wenjie Hao, Hongfei Xu, Lingling Mu, Hongying Zan. Optimizing Deep Transformers for Chinese-Thai Low-Resource Translation. CCMT 2022.

Hongfei Xu, Qiuhui Liu, Josef van Genabith, Deyi Xiong, Meng Zhang. Multi-Head Highly Parallelized LSTM Decoder for Neural Machine Translation. ACL-IJCNLP 2021. (CCF A)

Hongfei Xu, Qiuhui Liu, Josef van Genabith, Deyi Xiong. Modeling Task-Aware MIMO Cardinality for Efficient Multilingual Neural Machine Translation. ACL-IJCNLP 2021. (CCF A)

Hongfei Xu, Josef van Genabith, Qiuhui Liu, Deyi Xiong. Probing Word Translations in the Transformer and Trading Decoder for Encoder Layers. NAACL-HLT 2021. (CCF B)

Hongfei Xu, Qiuhui Liu, Josef van Genabith, Deyi Xiong. Learning Hard Retrieval Decoder Attention for Transformers. EMNLP 2021 (Findings).

Zifa Gan, Hongfei Xu, Hongying Zan. Self-Supervised Curriculum Learning for Spelling Error Correction. EMNLP 2021. (CCF B)

Hongfei Xu, Josef van Genabith, Deyi Xiong, Qiuhui Liu, Jingyi Zhang. Learning Source Phrase Representations for Neural Machine Translation. ACL 2020. (CCF A)

Hongfei Xu, Qiuhui Liu, Josef van Genabith, Deyi Xiong, Jingyi Zhang. Lipschitz Constrained Parameter Initialization for Deep Transformers. ACL 2020. (CCF A)

Hongfei Xu, Josef van Genabith, Deyi Xiong, Qiuhui Liu. Dynamically Adjusting Transformer Batch Size by Monitoring Gradient Direction Change. ACL 2020. (CCF A)

Hongfei Xu, Deyi Xiong, Josef van Genabith, Qiuhui Liu. Efficient Context-Aware Neural Machine Translation with Layer-Wise Weighting and Input-Aware Gating. IJCAI-PRICAI 2020. (CCF A)

Santanu Pal, Hongfei Xu, Nico Herbig, Sudip Kumar Naskar, Antonio Krüger, Josef van Genabith. The Transference Architecture for Automatic Post Editing. COLING 2020. (CCF B)

Tongfeng Guan, Hongying Zan, Xiabing Zhou, Hongfei Xu, Kunli Zhang. CMeIE: Construction and Evaluation of Chinese Medical Information Extraction Dataset. NLPCC 2020. (CCF C)

Hongfei Xu, Qiuhui Liu, Josef van Genabith. UdS Submission for the WMT 19 Automatic Post-Editing Task. WMT 2019.

Santanu Pal, Hongfei Xu, Nico Herbig, Antonio Krüger, Josef van Genabith. USAAR-DFKI – The Transference Architecture for English–German Automatic Post-Editing. WMT 2019.

Qiuhui Liu, Kunli Zhang, Hongfei Xu, Shiwen Yu, Hongying Zan. Research on Automatic Recognition of Auxiliary “DE”. Acta Scientiarum Naturalium Universitatis Pekinensis, 2018.

Kunli Zhang, Hongfei Xu, Deyi Xiong, Qiuhui Liu, Hongying Zan. Improving Chinese-English Neural Machine Translation with Detected Usages of Function Words. NLPCC 2017. (CCF C)

Yuxiang Jia, Hongfei Xu, Hongying Zan. Neural Network Models for Selectional Preference Acquisition. Journal of Chinese Information Processing, 2017.

Hongying Zan, Hongfei Xu, Kunli Zhang, Zhifang Sui. The Construction of Internet Slang Dictionary and Its Analysis. Journal of Chinese Information Processing, 2016.

Contact

Email: hfxunlp@{foxmail.com,outlook.com,zzu.edu.cn}, WeChat: hfxunlp, Mobile: 18503893108, Office: Computing Center 2209A8.

Call for Papers

Large Language Models and AI-Generated Content

Discover Artificial Intelligence is part of the Discover journal series, which is committed to providing a streamlined submission process, rapid review and publication, and a high level of author service at every stage. It is an open access, community-focused journal publishing research covering all aspects of artificial intelligence in theory and application.

https://link.springer.com/collections/geafahjbjb

Recent years have witnessed rapid and remarkable progress in large language models (LLMs), e.g., ChatGPT, GPT-4, BARD, Claude, etc. The emerging LLMs not only revolutionize the field of natural language processing, but also have a transformative impact on AI, science, and society. On the one hand, LLMs are used as backbone models for generative AI, enabling AI-generated content (AIGC) in a variety of forms, e.g., text, images, video, and audio. On the other hand, challenges coexist with opportunities in LLMs. Due to the black-box nature of LLMs, the theoretical reasons behind emergent abilities, chain-of-thought capability, instruction generalization, etc., are not yet clear. Value alignment of LLMs, which addresses ethical concerns regarding different aspects of LLMs and makes them safe, is both a societal and a technological desideratum.

This Topical Collection aims to solicit articles on large language models and AI-generated content, whether produced by large language models or by other technologies. Topics of interest include, but are not limited to:

- Data processing and governance for large language models/AIGC

- Deep analysis of capabilities of large language models

- Interpretable large language models

- Neural architectures for large language models/AIGC

- Large multimodal models/AIGC

- Training and inference algorithms for large language models/AIGC

- Approaches to the alignment of large language models, e.g., RLHF

- Ethics issues of large language models/AIGC

- Various applications of LLMs in AIGC or for social good

- Automatic/Human Evaluations of LLMs/AIGC

Keywords: Large Language Models, AI-Generated Content, AI Alignment, Natural Language Processing, Ethics Issues of LLMs, LLM Evaluation, LLM Interpretability




