1 code implementation • EMNLP 2021 • Shitao Xiao, Zheng Liu, Yingxia Shao, Defu Lian, Xing Xie
In this work, we propose the Matching-oriented Product Quantization (MoPQ), where a novel objective Multinoulli Contrastive Loss (MCL) is formulated.
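The exact Multinoulli Contrastive Loss is defined in the paper; as a rough illustration of the softmax-contrastive family it belongs to, here is a minimal matching loss in plain Python (the `temperature` value and toy vectors are illustrative, not the paper's):

```python
import math

def contrastive_loss(query, candidates, pos_idx, temperature=0.1):
    """Softmax contrastive loss (sketch): the query should score highest
    against its positive candidate among all candidates."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    logits = [dot(query, c) / temperature for c in candidates]
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    return -math.log(exps[pos_idx] / sum(exps))

q = [1.0, 0.0]
cands = [[1.0, 0.0], [0.0, 1.0]]          # index 0 is the true match
loss_pos = contrastive_loss(q, cands, pos_idx=0)   # small: correct match
loss_neg = contrastive_loss(q, cands, pos_idx=1)   # large: wrong match
```

The loss shrinks as the designated positive aligns better with the query, which is the training signal driving the quantized embeddings.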
no code implementations • Findings (EMNLP) 2021 • Shuxian Bi, Chaozhuo Li, Xiao Han, Zheng Liu, Xing Xie, Haizhen Huang, Zengxuan Wen
As the fundamental basis of sponsored search, relevance modeling has attracted increasing attention due to its tremendous practical value.
no code implementations • 10 Jun 2024 • Zhiquan Tan, Lai Wei, Jindong Wang, Xing Xie, Weiran Huang
Large language models (LLMs) have achieved remarkable progress in linguistic tasks, necessitating robust evaluation frameworks to understand their capabilities and limitations.
no code implementations • 30 May 2024 • Shaohua Wang, Xing Xie, Yong Li, Danhuai Guo, Zhi Cai, Yu Liu, Yang Yue, Xiao Pan, Feng Lu, Huayi Wu, Zhipeng Gui, Zhiming Ding, Bolong Zheng, Fuzheng Zhang, Tao Qin, Jingyuan Wang, Chuang Tao, Zhengchao Chen, Hao Lu, Jiayi Li, Hongyang Chen, Peng Yue, Wenhao Yu, Yao Yao, Leilei Sun, Yong Zhang, Longbiao Chen, Xiaoping Du, Xiang Li, Xueying Zhang, Kun Qin, Zhaoya Gong, Weihua Dong, Xiaofeng Meng
This report focuses on spatial data intelligent large models, delving into the principles, methods, and cutting-edge applications of these models.
no code implementations • 24 May 2024 • Cheng Li, Damien Teney, Linyi Yang, Qingsong Wen, Xing Xie, Jindong Wang
Results show that for content moderation, our GPT-3.5-based models either match or outperform GPT-4 on the evaluated datasets.
1 code implementation • 13 May 2024 • Qi Chen, Xiubo Geng, Corby Rosset, Carolyn Buractaon, Jingwen Lu, Tao Shen, Kun Zhou, Chenyan Xiong, Yeyun Gong, Paul Bennett, Nick Craswell, Xing Xie, Fan Yang, Bryan Tower, Nikhil Rao, Anlei Dong, Wenqi Jiang, Zheng Liu, Mingqin Li, Chuanjie Liu, Zengzhong Li, Rangan Majumder, Jennifer Neville, Andy Oakley, Knut Magne Risvik, Harsha Vardhan Simhadri, Manik Varma, Yujing Wang, Linjun Yang, Mao Yang, Ce Zhang
Recent breakthroughs in large models have highlighted the critical significance of data scale, labels, and modalities.
no code implementations • 19 Apr 2024 • Pablo Biedma, Xiaoyuan Yi, Linus Huang, Maosong Sun, Xing Xie
Recent advancements in Large Language Models (LLMs) have revolutionized the AI field but also pose potential safety and ethical risks.
no code implementations • 18 Apr 2024 • Zhihao Xu, Ruixuan Huang, Xiting Wang, Fangzhao Wu, Jing Yao, Xing Xie
Even when successful, the harmfulness of their outputs cannot be guaranteed, leading to suspicions that these methods have not accurately identified the safety vulnerabilities of LLMs.
1 code implementation • 11 Mar 2024 • Jianxun Lian, Yuxuan Lei, Xu Huang, Jing Yao, Wei Xu, Xing Xie
This paper introduces RecAI, a practical toolkit designed to augment or even revolutionize recommender systems with the advanced capabilities of Large Language Models (LLMs).
no code implementations • 11 Mar 2024 • Hao Chen, Jindong Wang, Zihan Wang, Ran Tao, Hongxin Wei, Xing Xie, Masashi Sugiyama, Bhiksha Raj
Foundation models are usually pre-trained on large-scale datasets and then adapted to downstream tasks through tuning.
no code implementations • 8 Mar 2024 • Jio Oh, Soyeon Kim, Junseok Seo, Jindong Wang, Ruochen Xu, Xing Xie, Steven Euijong Whang
Our key idea is to construct questions using the database schema, records, and functional dependencies such that they can be automatically verified.
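As a sketch of why schema-grounded questions are automatically verifiable, consider this toy example (the table, field names, and question template are hypothetical, not taken from the paper):

```python
# Questions built from records are automatically verifiable, because the
# ground-truth answer can be read back off the table itself.
records = [
    {"id": 1, "name": "Alice", "dept": "Sales"},
    {"id": 2, "name": "Bob",   "dept": "HR"},
]

def make_question(record, key="name", target="dept"):
    """Hypothetical template: ask for `target` given `key`."""
    question = f"What is the {target} of the employee named {record[key]}?"
    answer = record[target]        # ground truth comes from the record itself
    return question, answer

def verify(model_answer, gold):
    """Check a model's free-text answer against the gold record value."""
    return model_answer.strip().lower() == gold.lower()

q, gold = make_question(records[0])
```

A functional dependency such as name → dept guarantees the generated question has a single correct answer, so no human labeling is needed.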
no code implementations • 8 Mar 2024 • Wensheng Lu, Jianxun Lian, Wei zhang, Guanghua Li, Mingyang Zhou, Hao Liao, Xing Xie
Inspired by the exceptional general intelligence of Large Language Models (LLMs), researchers have begun to explore their application in pioneering the next generation of recommender systems - systems that are conversational, explainable, and controllable.
no code implementations • 7 Mar 2024 • Xinpeng Wang, Shitong Duan, Xiaoyuan Yi, Jing Yao, Shanlin Zhou, Zhihua Wei, Peng Zhang, Dongkuan Xu, Maosong Sun, Xing Xie
Big models have achieved revolutionary breakthroughs in the field of AI, but they might also pose potential concerns.
1 code implementation • 7 Mar 2024 • Zihan Luo, Xiran Song, Hong Huang, Jianxun Lian, Chenhao Zhang, Jinqi Jiang, Xing Xie
To evaluate and enhance the graph understanding abilities of LLMs, in this paper, we propose a benchmark named GraphInstruct, which comprehensively includes 21 classical graph reasoning tasks, providing diverse graph generation pipelines and detailed reasoning steps.
1 code implementation • 6 Mar 2024 • Shitong Duan, Xiaoyuan Yi, Peng Zhang, Tun Lu, Xing Xie, Ning Gu
Large language models (LLMs) have revolutionized the role of AI, yet also pose potential risks of propagating unethical content.
no code implementations • 29 Feb 2024 • Xukun Liu, Zhiyuan Peng, Xiaoyuan Yi, Xing Xie, Lirong Xiang, Yuchen Liu, Dongkuan Xu
While achieving remarkable progress in a broad range of tasks, large language models (LLMs) remain significantly limited in properly using massive external tools.
1 code implementation • 29 Feb 2024 • Yuxuan Lei, Jianxun Lian, Jing Yao, Mingqi Wu, Defu Lian, Xing Xie
Our empirical studies demonstrate that fine-tuning embedding models on the dataset leads to remarkable improvements in a variety of retrieval tasks.
no code implementations • 26 Feb 2024 • Peiyan Zhang, Chaozhuo Li, Liying Kang, Feiran Huang, Senzhang Wang, Xing Xie, Sunghun Kim
Moreover, we show that the existing contrastive objective learns the low-frequency component of the augmentation graph, and propose a high-frequency component (HFC)-aware contrastive learning objective that makes the learned embeddings more distinctive.
2 code implementations • 23 Feb 2024 • Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Wei Ye, Jindong Wang, Xing Xie, Yue Zhang, Shikun Zhang
Automatic evaluation methods for large language models (LLMs) are hindered by data contamination, leading to inflated assessments of their effectiveness.
1 code implementation • 21 Feb 2024 • Kaijie Zhu, Jindong Wang, Qinlin Zhao, Ruochen Xu, Xing Xie
Our multifaceted analysis demonstrated the strong correlation between the basic abilities and an implicit Matthew effect on model size, i.e., larger models possess stronger correlations of the abilities.
1 code implementation • 2 Feb 2024 • Hao Chen, Jindong Wang, Lei Feng, Xiang Li, Yidong Wang, Xing Xie, Masashi Sugiyama, Rita Singh, Bhiksha Raj
Weakly supervised learning generally faces challenges in applicability to various scenarios with diverse weak supervision and in scalability due to the complexity of existing algorithms, thereby hindering the practical deployment.
no code implementations • 2 Feb 2024 • Hao Chen, Bhiksha Raj, Xing Xie, Jindong Wang
Large foundation models (LFMs) claim incredible performance.
no code implementations • 27 Jan 2024 • Pengjie Liu, Zhenghao Liu, Xiaoyuan Yi, Liner Yang, Shuo Wang, Yu Gu, Ge Yu, Xing Xie, Shuang-Hua Yang
It proposes a dual-view legal clue reasoning mechanism, which derives from two reasoning chains of judges: 1) Law Case Reasoning, which makes legal judgments according to the judgment experiences learned from analogy/confusing legal cases; 2) Legal Ground Reasoning, which lies in matching the legal clues between criminal cases and legal decisions.
1 code implementation • 12 Jan 2024 • Lei LI, Jianxun Lian, Xiao Zhou, Xing Xie
However, most existing retrieval models employ a single-round inference paradigm, which may not adequately capture the dynamic nature of user preferences and can get stuck in one area of the item space.
1 code implementation • 10 Jan 2024 • Lichao Sun, Yue Huang, Haoran Wang, Siyuan Wu, Qihui Zhang, Yuan Li, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bertie Vidgen, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, Joaquin Vanschoren, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao
This paper introduces TrustLLM, a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, established benchmark, evaluation, and analysis of trustworthiness for mainstream LLMs, and discussion of open challenges and future directions.
no code implementations • 2 Jan 2024 • Xixu Hu, Runkai Zheng, Jindong Wang, Cheuk Hang Leung, Qi Wu, Xing Xie
In this study, we address this gap by introducing SpecFormer, specifically designed to enhance ViTs' resilience against adversarial attacks, with support from carefully derived theoretical guarantees.
1 code implementation • 26 Dec 2023 • Linyi Yang, Shuibai Zhang, Zhuohao Yu, Guangsheng Bao, Yidong Wang, Jindong Wang, Ruochen Xu, Wei Ye, Xing Xie, Weizhu Chen, Yue Zhang
Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering.
1 code implementation • 21 Dec 2023 • Jingwei Yi, Yueqi Xie, Bin Zhu, Emre Kiciman, Guangzhong Sun, Xing Xie, Fangzhao Wu
Based on the evaluation, our work provides a key analysis of the underlying reason for the attack's success: LLMs cannot distinguish between instructions and external content, and they lack the awareness not to execute instructions found within external content.
no code implementations • 18 Dec 2023 • Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie
Through extensive experiments involving language and multi-modal models on semantic understanding, logical reasoning, and generation tasks, we demonstrate that both textual and visual EmotionPrompt can boost the performance of AI models while EmotionAttack can hinder it.
1 code implementation • 13 Dec 2023 • Kaijie Zhu, Qinlin Zhao, Hao Chen, Jindong Wang, Xing Xie
The evaluation of large language models (LLMs) is crucial to assess their performance and mitigate potential security risks.
1 code implementation • 13 Dec 2023 • Xinpeng Wang, Xiaoyuan Yi, Han Jiang, Shanlin Zhou, Zhihua Wei, Xing Xie
Warning: this paper includes model outputs showing offensive content.
1 code implementation • 11 Dec 2023 • Jiyan He, Weitao Feng, Yaosen Min, Jingwei Yi, Kunsheng Tang, Shuai Li, Jie Zhang, Kejiang Chen, Wenbo Zhou, Xing Xie, Weiming Zhang, Nenghai Yu, Shuxin Zheng
In this study, we aim to raise awareness of the dangers of AI misuse in science, and call for responsible AI development and use in this domain.
1 code implementation • 28 Nov 2023 • Yuhang Wang, Yanxu Zhu, Chao Kong, Shuyu Wei, Xiaoyuan Yi, Xing Xie, Jitao Sang
This benchmark serves as a valuable resource for cultural studies in LLMs, paving the way for more culturally aware and sensitive models.
no code implementations • 18 Nov 2023 • Yuxuan Lei, Jianxun Lian, Jing Yao, Xu Huang, Defu Lian, Xing Xie
Behavior alignment operates in the language space, representing user preferences and item information as text to learn the recommendation model's behavior; intention alignment works in the latent space of the recommendation model, using user and item representations to understand the model's behavior; hybrid alignment combines both language and latent spaces for alignment training.
no code implementations • 16 Nov 2023 • Jing Yao, Wei Xu, Jianxun Lian, Xiting Wang, Xiaoyuan Yi, Xing Xie
In this paper, we propose a general paradigm that augments LLMs with DOmain-specific KnowledgE to enhance their performance on practical applications, namely DOKE.
no code implementations • 15 Nov 2023 • Jing Yao, Xiaoyuan Yi, Xiting Wang, Yifan Gong, Xing Xie
The rapid advancement of Large Language Models (LLMs) has attracted much attention to value alignment for their responsible development.
1 code implementation • 26 Oct 2023 • Qinlin Zhao, Jindong Wang, Yixuan Zhang, Yiqiao Jin, Kaijie Zhu, Hao Chen, Xing Xie
We hope that the framework and environment can be a promising testbed to study competition that fosters understanding of society.
no code implementations • 26 Oct 2023 • Xiaoyuan Yi, Jing Yao, Xiting Wang, Xing Xie
Big models have greatly advanced AI's ability to understand, generate, and manipulate information and content, enabling numerous applications.
no code implementations • 25 Oct 2023 • Xiting Wang, Liming Jiang, Jose Hernandez-Orallo, David Stillwell, Luning Sun, Fang Luo, Xing Xie
Comprehensive and accurate evaluation of general-purpose AI systems such as large language models allows for effective mitigation of their risks and deepened understanding of their capabilities.
no code implementations • 20 Oct 2023 • Xu Huang, Jianxun Lian, Hao Wang, Defu Lian, Xing Xie
Recommendation systems effectively guide users in locating their desired information within extensive content repositories.
no code implementations • 17 Oct 2023 • Shitong Duan, Xiaoyuan Yi, Peng Zhang, Tun Lu, Xing Xie, Ning Gu
We discovered that most models are essentially misaligned, necessitating further ethical value alignment.
1 code implementation • 11 Oct 2023 • Cunxiang Wang, Xiaoze Liu, Yuanhao Yue, Xiangru Tang, Tianhang Zhang, Cheng Jiayang, Yunzhi Yao, Wenyang Gao, Xuming Hu, Zehan Qi, Yidong Wang, Linyi Yang, Jindong Wang, Xing Xie, Zheng Zhang, Yue Zhang
This survey addresses the crucial issue of factuality in Large Language Models (LLMs).
1 code implementation • 8 Oct 2023 • Wang Lu, Hao Yu, Jindong Wang, Damien Teney, Haohan Wang, Yiqiang Chen, Qiang Yang, Xing Xie, Xiangyang Ji
When personalized federated learning (FL) meets large foundation models, new challenges arise from various limitations in resources.
no code implementations • 1 Oct 2023 • Yachuan Liu, Liang Chen, Jindong Wang, Qiaozhu Mei, Xing Xie
We hope this initial work can shed light on future research of LLMs evaluation.
1 code implementation • 29 Sep 2023 • Kaijie Zhu, Jiaao Chen, Jindong Wang, Neil Zhenqiang Gong, Diyi Yang, Xing Xie
Moreover, DyVal-generated samples are not only evaluation sets, but also helpful data for fine-tuning to improve the performance of LLMs on existing benchmarks.
no code implementations • 29 Sep 2023 • Hao Chen, Jindong Wang, Ankit Shah, Ran Tao, Hongxin Wei, Xing Xie, Masashi Sugiyama, Bhiksha Raj
This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.
1 code implementation • NeurIPS 2023 • Hailin Zhang, Yujing Wang, Qi Chen, Ruiheng Chang, Ting Zhang, Ziming Miao, Yingyan Hou, Yang Ding, Xupeng Miao, Haonan Wang, Bochen Pang, Yuefeng Zhan, Hao Sun, Weiwei Deng, Qi Zhang, Fan Yang, Xing Xie, Mao Yang, Bin Cui
We empirically show that our model achieves better performance on the commonly used academic benchmarks MSMARCO Passage and Natural Questions, with comparable serving latency to dense retrieval solutions.
1 code implementation • 31 Aug 2023 • Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system.
no code implementations • 23 Aug 2023 • Jing Yao, Xiaoyuan Yi, Xiting Wang, Jindong Wang, Xing Xie
Big models, exemplified by Large Language Models (LLMs), are models typically pre-trained on massive data and comprised of enormous parameters, which not only obtain significantly improved performance across diverse tasks but also present emergent capabilities absent in smaller models.
no code implementations • 21 Aug 2023 • Peiyan Zhang, Haoyang Liu, Chaozhuo Li, Xing Xie, Sunghun Kim, Haohan Wang
Machine learning has demonstrated remarkable performance over finite datasets, yet whether the scores over the fixed benchmarks can sufficiently indicate the model's performance in the real world is still under discussion.
1 code implementation • ICCV 2023 • Sungwon Han, Sungwon Park, Fangzhao Wu, Sundong Kim, Bin Zhu, Xing Xie, Meeyoung Cha
Federated learning is used to train a shared model in a decentralized way without clients sharing private data with each other.
no code implementations • 5 Aug 2023 • Hao Wang, Jianxun Lian, Mingqi Wu, Haoxuan Li, Jiajun Fan, Wanyue Xu, Chaozhuo Li, Xing Xie
Sequential user modeling, a critical task in personalized recommender systems, focuses on predicting the next item a user would prefer, requiring a deep understanding of user behavior sequences.
no code implementations • 4 Aug 2023 • Wang Lu, Jindong Wang, Xinwei Sun, Yiqiang Chen, Xiangyang Ji, Qiang Yang, Xing Xie
We propose DIVERSIFY, a general framework, for OOD detection and generalization on dynamic distributions of time series.
no code implementations • 4 Aug 2023 • Juncheng Wang, Jindong Wang, Xixu Hu, Shujun Wang, Xing Xie
Empirical risk minimization (ERM) is a fundamental machine learning paradigm.
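For readers new to the term, ERM simply means minimizing the average loss over the training sample. A minimal sketch with a 1-D linear model and plain gradient descent (toy data, not from the paper):

```python
def empirical_risk(w, data):
    """Average squared loss over the sample -- the 'empirical risk'."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def erm_fit(data, lr=0.05, steps=200):
    """Minimize the empirical risk by plain gradient descent (1-D model)."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # toy data drawn from y = 2x
w = erm_fit(data)                            # converges near w = 2
```

Everything else in the paradigm (regularization, distribution shift, robustness) is framed relative to this basic objective.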
1 code implementation • ICCV 2023 • Kaijie Zhu, Jindong Wang, Xixu Hu, Xing Xie, Ge Yang
The core idea of RiFT is to exploit the redundant capacity for robustness by fine-tuning the adversarially trained model on its non-robust-critical module.
1 code implementation • 18 Jul 2023 • Sungwon Park, Sungwon Han, Fangzhao Wu, Sundong Kim, Bin Zhu, Xing Xie, Meeyoung Cha
Evaluations of real-world scenarios across multiple datasets show that the proposed method enhances the robustness of federated learning against model poisoning attacks.
no code implementations • 14 Jul 2023 • Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie
In addition to those deterministic tasks that can be automatically evaluated using existing metrics, we conducted a human study with 106 participants to assess the quality of generative tasks using both vanilla and emotional prompts.
1 code implementation • 6 Jul 2023 • Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications.
no code implementations • 25 Jun 2023 • Tao Qi, Fangzhao Wu, Lingjuan Lyu, Yongfeng Huang, Xing Xie
In this paper, instead of client uniform sampling, we propose a novel data uniform sampling strategy for federated learning (FedSampling), which can effectively improve the performance of federated learning especially when client data size distribution is highly imbalanced across clients.
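The contrast between client-uniform and data-uniform sampling can be sketched as follows (a toy illustration of the sampling idea only; the paper's actual FedSampling additionally estimates client data sizes in a privacy-preserving way):

```python
import random

def client_uniform_draw(client_sizes, rng):
    """Baseline: pick a client uniformly at random, regardless of data size."""
    return rng.randrange(len(client_sizes))

def data_uniform_draw(client_sizes, rng):
    """Data-uniform idea (sketch): pick clients in proportion to their data
    size, so every *sample* across the federation is equally likely."""
    total = sum(client_sizes)
    r = rng.randrange(total)
    for cid, n in enumerate(client_sizes):
        if r < n:
            return cid
        r -= n

sizes = [100, 10]                       # highly imbalanced clients
rng = random.Random(0)
draws = [data_uniform_draw(sizes, rng) for _ in range(10000)]
share_big = draws.count(0) / len(draws)     # approaches 100/110 ~ 0.91

rng2 = random.Random(0)
draws_cu = [client_uniform_draw(sizes, rng2) for _ in range(10000)]
share_big_cu = draws_cu.count(0) / len(draws_cu)  # approaches 0.5
```

Under client-uniform sampling the small client's data is heavily over-represented; data-uniform sampling removes that bias.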
1 code implementation • 17 Jun 2023 • Yuxi Feng, Xiaoyuan Yi, Laks V. S. Lakshmanan, Xing Xie
Self-training (ST) has come to fruition in language understanding tasks by producing pseudo labels, which reduces the labeling bottleneck of language model fine-tuning.
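The generic self-training loop the abstract refers to can be sketched as follows (a toy nearest-centroid "model" on 1-D data stands in for a language model; the threshold and data are illustrative):

```python
def self_train(labeled, unlabeled, threshold=0.8, rounds=3):
    """Minimal self-training sketch: fit on labeled data, pseudo-label
    confident unlabeled points, fold them into the labeled set, repeat."""
    labeled = list(labeled)
    pool = list(unlabeled)
    for _ in range(rounds):
        # "Train": compute class centroids from the current labeled set.
        groups = {}
        for x, y in labeled:
            groups.setdefault(y, []).append(x)
        cents = {y: sum(xs) / len(xs) for y, xs in groups.items()}
        # Pseudo-label: confidence = margin between the two nearest centroids.
        keep = []
        for x in pool:
            d = sorted((abs(x - c), y) for y, c in cents.items())
            confidence = d[1][0] - d[0][0]
            if confidence >= threshold:
                labeled.append((x, d[0][1]))   # fold in pseudo-labeled point
            else:
                keep.append(x)                 # too uncertain, try next round
        pool = keep
    return cents

cents = self_train([(0.0, "a"), (10.0, "b")], [1.0, 2.0, 8.5, 9.0])
```

The pseudo-labels let the model absorb unlabeled data, which is the labeling-bottleneck reduction the abstract describes.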
2 code implementations • 8 Jun 2023 • Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, Yue Zhang
To ensure the reliability of PandaLM, we collect a diverse human-annotated test dataset, where all contexts are generated by humans and labels are aligned with human preferences.
1 code implementation • 7 Jun 2023 • Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei Ye, Yue Zhang, Neil Zhenqiang Gong, Xing Xie
The increasing reliance on Large Language Models (LLMs) across academia and industry necessitates a comprehensive understanding of their robustness to prompts.
1 code implementation • 25 May 2023 • Xin Qin, Jindong Wang, Shuo Ma, Wang Lu, Yongchun Zhu, Xing Xie, Yiqiang Chen
With the constructed self-supervised learning task, DDLearn enlarges the data diversity and explores the latent activity properties.
1 code implementation • 23 May 2023 • Rui Li, Xu Chen, Chaozhuo Li, Yanming Shen, Jianan Zhao, Yujing Wang, Weihao Han, Hao Sun, Weiwei Deng, Qi Zhang, Xing Xie
Embedding models have shown great power in knowledge graph completion (KGC) task.
1 code implementation • 23 May 2023 • Peiyan Zhang, Yuchen Yan, Chaozhuo Li, Senzhang Wang, Xing Xie, Guojie Song, Sunghun Kim
Dynamic graph learning methods commonly suffer from the catastrophic forgetting problem, where knowledge learned for previous graphs is overwritten by updates for new graphs.
no code implementations • 22 May 2023 • Hao Chen, Ankit Shah, Jindong Wang, Ran Tao, Yidong Wang, Xing Xie, Masashi Sugiyama, Rita Singh, Bhiksha Raj
In this paper, we introduce imprecise label learning (ILL), a framework for the unification of learning with various imprecise label configurations.
Ranked #1 on Learning with noisy labels on mini WebVision 1.0
1 code implementation • 17 May 2023 • Wenjun Peng, Jingwei Yi, Fangzhao Wu, Shangxi Wu, Bin Zhu, Lingjuan Lyu, Binxing Jiao, Tong Xu, Guangzhong Sun, Xing Xie
Companies have begun to offer Embedding as a Service (EaaS) based on these LLMs, which can benefit various natural language processing (NLP) tasks for customers.
1 code implementation • 27 Apr 2023 • Yuntao Du, Jianxun Lian, Jing Yao, Xiting Wang, Mingqi Wu, Lu Chen, Yunjun Gao, Xing Xie
In recent decades, there have been significant advancements in latent embedding-based CF methods for improved accuracy, such as matrix factorization, neural collaborative filtering, and LightGCN.
1 code implementation • 4 Apr 2023 • Yidong Wang, Zhuohao Yu, Jindong Wang, Qiang Heng, Hao Chen, Wei Ye, Rui Xie, Xing Xie, Shikun Zhang
However, their performance on imbalanced dataset is relatively poor, where the distribution of classes in the training dataset is skewed, leading to poor performance in predicting minority classes.
1 code implementation • 17 Mar 2023 • Yidan Zhang, Ting Zhang, Dong Chen, Yujing Wang, Qi Chen, Xing Xie, Hao Sun, Weiwei Deng, Qi Zhang, Fan Yang, Mao Yang, Qingmin Liao, Baining Guo
While generative modeling has been ubiquitous in natural language processing and computer vision, its application to image retrieval remains unexplored.
1 code implementation • 15 Mar 2023 • Sungwon Han, Seungeon Lee, Fangzhao Wu, Sundong Kim, Chuhan Wu, Xiting Wang, Xing Xie, Meeyoung Cha
Algorithmic fairness has become an important machine learning problem, especially for mission-critical Web applications.
1 code implementation • 2 Mar 2023 • SeongKu Kang, Wonbin Kweon, Dongha Lee, Jianxun Lian, Xing Xie, Hwanjo Yu
Our work aims to transfer the ensemble knowledge of heterogeneous teachers to a lightweight student model using knowledge distillation (KD), to reduce the huge inference costs while retaining high accuracy.
1 code implementation • 27 Feb 2023 • Wang Lu, Xixu Hu, Jindong Wang, Xing Xie
Concretely, we design an attention-based adapter for the large model, CLIP, and the remaining operations depend merely on adapters.
1 code implementation • 22 Feb 2023 • Jindong Wang, Xixu Hu, Wenxin Hou, Hao Chen, Runkai Zheng, Yidong Wang, Linyi Yang, Haojun Huang, Wei Ye, Xiubo Geng, Binxin Jiao, Yue Zhang, Xing Xie
In this paper, we conduct a thorough evaluation of the robustness of ChatGPT from the adversarial and out-of-distribution (OOD) perspective.
4 code implementations • 26 Jan 2023 • Hao Chen, Ran Tao, Yue Fan, Yidong Wang, Jindong Wang, Bernt Schiele, Xing Xie, Bhiksha Raj, Marios Savvides
The critical challenge of Semi-Supervised Learning (SSL) is how to effectively leverage the limited labeled data and massive unlabeled data to improve the model's generalization performance.
1 code implementation • 21 Dec 2022 • Dongmin Hyun, Xiting Wang, Chanyoung Park, Xing Xie, Hwanjo Yu
We formulate the unsupervised summarization based on the Markov decision process with rewards representing the summary quality.
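As a toy illustration of reward-driven extractive summary construction (the reward here is a keyword-coverage stand-in, not the paper's learned quality reward):

```python
def summarize_greedy(sentences, reward, k=2):
    """Sketch: treat extractive summarization as sequential selection, where
    each action (picking a sentence) is scored by a summary-quality reward."""
    summary = []
    remaining = list(sentences)
    for _ in range(k):
        best = max(remaining, key=lambda s: reward(summary + [s]))
        summary.append(best)
        remaining.remove(best)
    return summary

keywords = {"model", "data"}          # toy stand-in for summary quality

def coverage(summary):
    """Reward = number of target keywords covered by the summary so far."""
    words = set(w for s in summary for w in s.lower().split())
    return len(words & keywords)

sents = ["The model works well.", "We like cats.", "Data is large."]
out = summarize_greedy(sents, coverage, k=2)
```

In the paper the selection is optimized over the full Markov decision process rather than greedily, but the state/action/reward framing is the same.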
1 code implementation • 16 Dec 2022 • Yuxi Feng, Xiaoyuan Yi, Xiting Wang, Laks V. S. Lakshmanan, Xing Xie
Augmented by only self-generated pseudo text, generation models over-emphasize exploitation of the previously learned space, suffering from a constrained generalization boundary.
1 code implementation • 30 Nov 2022 • Jing Yao, Zheng Liu, Junhan Yang, Zhicheng Dou, Xing Xie, Ji-Rong Wen
In the first stage, a lightweight CNN-based ad-hoc neighbor selector is deployed to filter useful neighbors for the matching task with a small computation cost.
no code implementations • 24 Nov 2022 • Yiqiao Jin, Xiting Wang, Yaru Hao, Yizhou Sun, Xing Xie
In this paper, we move towards combining large parametric models with non-parametric prototypical networks.
no code implementations • 20 Nov 2022 • Hao Chen, Yue Fan, Yidong Wang, Jindong Wang, Bernt Schiele, Xing Xie, Marios Savvides, Bhiksha Raj
While standard SSL assumes uniform data distribution, we consider a more realistic and challenging setting called imbalanced SSL, where imbalanced class distributions occur in both labeled and unlabeled data.
1 code implementation • 15 Nov 2022 • Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang, Hanmeng Liu, Jindong Wang, Xing Xie, Yue Zhang
Pre-trained language models (PLMs) are known to improve the generalization performance of natural language understanding models by leveraging large amounts of data during the pre-training phase.
1 code implementation • 14 Nov 2022 • Wenhao Li, Xiaoyuan Yi, Jinyi Hu, Maosong Sun, Xing Xie
In this work, we dig into the intrinsic mechanism of this problem and find that sparser attention values in the Transformer could improve diversity.
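One standard way to obtain sparser attention values is to replace softmax with sparsemax, which can assign exactly zero weight to low-scoring positions; the sketch below is an illustration of that general idea, not necessarily the mechanism used in the paper:

```python
import math

def softmax(z):
    """Standard softmax: always spreads mass over every position."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def sparsemax(z):
    """Sparsemax: Euclidean projection of the scores onto the simplex.
    Unlike softmax, it can output exact zeros."""
    zs = sorted(z, reverse=True)
    cssv, k, kcssv = 0.0, 0, 0.0
    for i, v in enumerate(zs, start=1):
        cssv += v
        if 1 + i * v > cssv:       # support condition for position i
            k, kcssv = i, cssv
    tau = (kcssv - 1) / k          # threshold subtracted from all scores
    return [max(v - tau, 0.0) for v in z]

soft = softmax([2.0, 1.0, -1.0])      # all entries strictly positive
sparse = sparsemax([2.0, 1.0, -1.0])  # mass concentrates, zeros appear
```

Concentrating attention on fewer positions is one concrete realization of the "sparser attention values" the abstract links to generation diversity.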
no code implementations • 10 Nov 2022 • Yueqi Xie, Weizhong Zhang, Renjie Pi, Fangzhao Wu, Qifeng Chen, Xing Xie, Sunghun Kim
Since at each round, the number of tunable parameters optimized on the server side equals the number of participating clients (thus independent of the model size), we are able to train a global model with massive parameters using only a small amount of proxy data (e.g., around one hundred samples).
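The parameter-count argument can be sketched with toy scalar "models": the server's only tunables are one weight per client, fit on a tiny proxy set (the data, learning rate, and 1-D setup here are all illustrative):

```python
def weighted_aggregate(client_params, weights):
    """Global model = weighted combination of the clients' (toy) models."""
    return sum(w * p for w, p in zip(weights, client_params))

def tune_weights_on_proxy(client_params, proxy, lr=0.01, steps=100):
    """Fit only the per-client weights -- one scalar each -- by gradient
    descent on a small proxy set (sketch of the server-side idea)."""
    n = len(client_params)
    w = [1.0 / n] * n                  # one tunable scalar per client
    for _ in range(steps):
        m = weighted_aggregate(client_params, w)
        grads = []
        for ci in client_params:
            # d/dw_i of the mean squared error on the proxy set
            g = sum(2 * (m * x - y) * (ci * x) for x, y in proxy) / len(proxy)
            grads.append(g)
        w = [wi - lr * g for wi, g in zip(w, grads)]
    return w

clients = [1.0, 4.0]                  # two clients' 1-D linear models y = w*x
proxy = [(1.0, 2.0), (2.0, 4.0)]      # tiny proxy set drawn from y = 2x
weights = tune_weights_on_proxy(clients, proxy)
global_w = weighted_aggregate(clients, weights)   # approaches 2
```

Because only two scalars are optimized, a handful of proxy samples is enough, regardless of how large each client model is.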
1 code implementation • 7 Nov 2022 • Wang Lu, Jindong Wang, Han Yu, Lei Huang, Xiang Zhang, Yiqiang Chen, Xing Xie
Firstly, Mixup cannot effectively identify the domain and class information that can be used for learning invariant representations.
2 code implementations • 26 Oct 2022 • Jianan Zhao, Meng Qu, Chaozhuo Li, Hao Yan, Qian Liu, Rui Li, Xing Xie, Jian Tang
In this paper, we propose an efficient and effective solution to learning on large text-attributed graphs by fusing graph structure and language learning with a variational Expectation-Maximization (EM) framework, called GLEM.
Ranked #1 on Node Property Prediction on ogbn-papers100M
no code implementations • 22 Oct 2022 • Jinyi Hu, Xiaoyuan Yi, Wenhao Li, Maosong Sun, Xing Xie
We demonstrate that TRACE could enhance the entanglement of each segment and preceding latent variables and deduce a non-zero lower bound of the KL term, providing a theoretical guarantee of generation diversity.
1 code implementation • 18 Oct 2022 • Zhoujin Tian, Chaozhuo Li, Shuo Ren, Zhiqiang Zuo, Zengxuan Wen, Xinyue Hu, Xiao Han, Haizhen Huang, Denvy Deng, Qi Zhang, Xing Xie
Bilingual lexicon induction induces the word translations by aligning independently trained word embeddings in two languages.
1 code implementation • 17 Oct 2022 • Jingwei Yi, Fangzhao Wu, Chuhan Wu, Xiaolong Huang, Binxing Jiao, Guangzhong Sun, Xing Xie
In this paper, we propose an effective query-aware webpage snippet extraction method named DeepQSE, aiming to select a few sentences which can best summarize the webpage content in the context of input query.
no code implementations • 17 Oct 2022 • Yiqi Wang, Chaozhuo Li, Wei Jin, Rui Li, Jianan Zhao, Jiliang Tang, Xing Xie
To bridge this gap, in this work we introduce the first test-time training framework for GNNs to enhance the model generalization capacity for the graph classification task.
1 code implementation • 13 Oct 2022 • Seungeon Lee, Xiting Wang, Sungwon Han, Xiaoyuan Yi, Xing Xie, Meeyoung Cha
We present SELOR, a framework for integrating self-explaining capabilities into a given deep model to achieve both high prediction performance and human precision.
no code implementations • 10 Oct 2022 • Zonghan Yang, Xiaoyuan Yi, Peng Li, Yang Liu, Xing Xie
Warning: this paper contains model outputs exhibiting offensiveness and biases.
1 code implementation • 15 Sep 2022 • Wang Lu, Jindong Wang, Xinwei Sun, Yiqiang Chen, Xing Xie
Time series classification is an important problem in the real world.
no code implementations • 1 Sep 2022 • Wang Lu, Jindong Wang, Yidong Wang, Xing Xie
For optimization, we utilize an adapted Mixup to generate an out-of-distribution dataset that can guide the preference direction and optimize with Pareto optimization.
1 code implementation • 18 Aug 2022 • Yi-Fan Zhang, Jindong Wang, Jian Liang, Zhang Zhang, Baosheng Yu, Liang Wang, DaCheng Tao, Xing Xie
Our bound motivates two strategies to reduce the gap: the first one is ensembling multiple classifiers to enrich the hypothesis space, then we propose effective gap estimation methods for guiding the selection of a better hypothesis for the target.
5 code implementations • 12 Aug 2022 • Yidong Wang, Hao Chen, Yue Fan, Wang Sun, Ran Tao, Wenxin Hou, RenJie Wang, Linyi Yang, Zhi Zhou, Lan-Zhe Guo, Heli Qi, Zhen Wu, Yu-Feng Li, Satoshi Nakamura, Wei Ye, Marios Savvides, Bhiksha Raj, Takahiro Shinozaki, Bernt Schiele, Jindong Wang, Xing Xie, Yue Zhang
We further provide the pre-trained versions of the state-of-the-art neural models for CV tasks to make the cost affordable for further tuning.
no code implementations • 3 Aug 2022 • Yivan Zhang, Jindong Wang, Xing Xie, Masashi Sugiyama
To formally analyze this issue, we provide a unique algebraic formulation of the combination shift problem based on the concepts of homomorphism, equivariance, and a refined definition of disentanglement.
no code implementations • 2 Aug 2022 • Yiding Zhang, Chaozhuo Li, Senzhang Wang, Jianxun Lian, Xing Xie
Graph-based collaborative filtering is capable of capturing the essential and abundant collaborative signals from high-order interactions, and has thus received increasing research interest.
1 code implementation • 25 Jul 2022 • Wang Lu, Jindong Wang, Haoliang Li, Yiqiang Chen, Xing Xie
Internal invariance means that the features can be learned with a single domain and the features capture intrinsic semantics of data, i.e., the property within a domain, which is agnostic to other domains.
1 code implementation • 19 Jul 2022 • Sungwon Han, Sungwon Park, Fangzhao Wu, Sundong Kim, Chuhan Wu, Xing Xie, Meeyoung Cha
This paper presents FedX, an unsupervised federated learning framework.
1 code implementation • NAACL 2022 • Jinyi Hu, Xiaoyuan Yi, Wenhao Li, Maosong Sun, Xing Xie
The past several years have witnessed Variational Auto-Encoder's superiority in various text generation tasks.
1 code implementation • 28 Jun 2022 • Xu Huang, Defu Lian, Jin Chen, Zheng Liu, Xing Xie, Enhong Chen
Deep recommender systems (DRS) are intensively applied in modern web services.
no code implementations • 26 Jun 2022 • Mengyan Zhang, Thanh Nguyen-Tang, Fangzhao Wu, Zhenyu He, Xing Xie, Cheng Soon Ong
We consider the problem of personalised news recommendation where each user consumes news in a sequential fashion.
1 code implementation • 26 Jun 2022 • Jiayan Guo, Peiyan Zhang, Chaozhuo Li, Xing Xie, Yan Zhang, Sunghun Kim
Session-based recommendation (SBR) aims to predict the user's next action based on the ongoing sessions.
1 code implementation • 26 Jun 2022 • Peiyan Zhang, Jiayan Guo, Chaozhuo Li, Yueqi Xie, Jaeboum Kim, Yan Zhang, Xing Xie, Haohan Wang, Sunghun Kim
Based on this observation, we intuitively propose to remove the GNN propagation part, while the readout module will take on more responsibility in the model reasoning process.
2 code implementations • 17 Jun 2022 • Yiqiang Chen, Wang Lu, Xin Qin, Jindong Wang, Xing Xie
Federated learning has attracted increasing attention to building models without accessing the raw user data, especially in healthcare.
1 code implementation • 7 Jun 2022 • Tao Qi, Fangzhao Wu, Chuhan Wu, Lingjuan Lyu, Tong Xu, Zhongliang Yang, Yongfeng Huang, Xing Xie
In order to learn a fair unified representation, we send it to each platform storing fairness-sensitive features and apply adversarial learning to remove bias from the unified representation inherited from the biased data.
1 code implementation • 6 Jun 2022 • Yujing Wang, Yingyan Hou, Haonan Wang, Ziming Miao, Shibin Wu, Hao Sun, Qi Chen, Yuqing Xia, Chengmin Chi, Guoshuai Zhao, Zheng Liu, Xing Xie, Hao Allen Sun, Weiwei Deng, Qi Zhang, Mao Yang
To this end, we propose Neural Corpus Indexer (NCI), a sequence-to-sequence network that generates relevant document identifiers directly for a designated query.
no code implementations • 1 Jun 2022 • Lanling Xu, Jianxun Lian, Wayne Xin Zhao, Ming Gong, Linjun Shou, Daxin Jiang, Xing Xie, Ji-Rong Wen
The learn-to-compare paradigm of contrastive representation learning (CRL), which compares positive samples with negative ones for representation learning, has achieved great success in a wide range of domains, including natural language processing, computer vision, information retrieval and graph learning.
no code implementations • 22 May 2022 • Jingwei Yi, Fangzhao Wu, Huishuai Zhang, Bin Zhu, Tao Qi, Guangzhong Sun, Xing Xie
Federated learning (FL) enables multiple clients to collaboratively train models without sharing their local data, and becomes an important privacy-preserving machine learning framework.
1 code implementation • 22 May 2022 • Xinyan Fan, Jianxun Lian, Wayne Xin Zhao, Zheng Liu, Chaozhuo Li, Xing Xie
We first extract distribution patterns from the item candidates.
1 code implementation • 18 May 2022 • Juyong Jiang, Peiyan Zhang, Yingtao Luo, Chaozhuo Li, Jae Boum Kim, Kai Zhang, Senzhang Wang, Xing Xie, Sunghun Kim
Sequential recommendation (SR) aims to model users' dynamic preferences from a series of interactions.
5 code implementations • 15 May 2022 • Yidong Wang, Hao Chen, Qiang Heng, Wenxin Hou, Yue Fan, Zhen Wu, Jindong Wang, Marios Savvides, Takahiro Shinozaki, Bhiksha Raj, Bernt Schiele, Xing Xie
Semi-supervised Learning (SSL) has witnessed great success owing to the impressive performances brought by various methods based on pseudo labeling and consistency regularization.
no code implementations • 21 Apr 2022 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie
In this paper, we propose a federated contrastive learning method named FedCL for privacy-preserving recommendation, which can exploit high-quality negative samples for effective model training with privacy well protected.
1 code implementation • 10 Apr 2022 • Tao Qi, Fangzhao Wu, Chuhan Wu, Peijie Sun, Le Wu, Xiting Wang, Yongfeng Huang, Xing Xie
To learn provider-fair representations from biased data, we employ provider-biased representations to inherit provider bias from data.
2 code implementations • 1 Apr 2022 • Shitao Xiao, Zheng Liu, Weihao Han, Jianjin Zhang, Defu Lian, Yeyun Gong, Qi Chen, Fan Yang, Hao Sun, Yingxia Shao, Denvy Deng, Qi Zhang, Xing Xie
We perform comprehensive explorations of how best to conduct knowledge distillation, which may provide useful insights for learning VQ-based ANN indexes.
no code implementations • 28 Feb 2022 • Junhan Yang, Zheng Liu, Shitao Xiao, Jianxun Lian, Lijun Wu, Defu Lian, Guangzhong Sun, Xing Xie
Instead of relying on annotation heuristics defined by humans, it leverages the sentence representation model itself and realizes the following iterative self-supervision process: on one hand, the improvement of sentence representation may contribute to the quality of data annotation; on the other hand, more effective data annotation helps to generate high-quality positive samples, which will further improve the current sentence representation model.
no code implementations • ACL 2022 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie
In this paper, we propose a very simple yet effective method named NoisyTune to help better finetune PLMs on downstream tasks by adding some noise to the parameters of PLMs before fine-tuning.
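The NoisyTune idea above can be sketched in a few lines; this is a minimal illustration, assuming the noise is uniform and scaled by each parameter tensor's own standard deviation, with the scale `lam` chosen for illustration only:

```python
import numpy as np

def noisy_tune(params, lam=0.15, seed=0):
    """Perturb each parameter tensor with uniform noise scaled by that
    tensor's standard deviation, before fine-tuning begins."""
    rng = np.random.default_rng(seed)
    noisy = {}
    for name, w in params.items():
        noise = rng.uniform(-lam, lam, size=w.shape) * w.std()
        noisy[name] = w + noise
    return noisy

# Hypothetical parameter dict standing in for a pre-trained model.
params = {"layer0.weight": np.ones((4, 4)), "layer0.bias": np.zeros(4)}
perturbed = noisy_tune(params)
```

Scaling by the per-tensor standard deviation keeps the perturbation proportionate to how spread out each matrix's values are, so no single layer is disturbed disproportionately.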
no code implementations • 16 Feb 2022 • Ruixuan Liu, Fangzhao Wu, Chuhan Wu, Yanlin Wang, Lingjuan Lyu, Hong Chen, Xing Xie
In this way, all the clients can participate in the model learning in FL, and the final model can be big and powerful enough.
1 code implementation • 16 Feb 2022 • Rui Li, Jianan Zhao, Chaozhuo Li, Di He, Yiqi Wang, Yuming Liu, Hao Sun, Senzhang Wang, Weiwei Deng, Yanming Shen, Xing Xie, Qi Zhang
The effectiveness of knowledge graph embedding (KGE) largely depends on the ability to model intrinsic relation patterns and mapping properties.
1 code implementation • 14 Feb 2022 • Jingwei Yi, Fangzhao Wu, Bin Zhu, Jing Yao, Zhulin Tao, Guangzhong Sun, Xing Xie
Our study reveals a critical security issue in existing federated news recommendation systems and calls for research efforts to address the issue.
no code implementations • 13 Feb 2022 • Jianjin Zhang, Zheng Liu, Weihao Han, Shitao Xiao, Ruicheng Zheng, Yingxia Shao, Hao Sun, Hanqing Zhu, Premkumar Srinivasan, Denvy Deng, Qi Zhang, Xing Xie
On the other hand, the capability of making high-CTR retrieval is optimized by learning to discriminate users' clicked ads from the entire corpus.
no code implementations • 10 Feb 2022 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie
However, existing general FL poisoning methods for degrading model performance are either ineffective or not concealed in poisoning federated recommender systems.
no code implementations • 10 Feb 2022 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yanlin Wang, Yuqing Yang, Yongfeng Huang, Xing Xie
To solve the game, we propose a platform negotiation method that simulates the bargaining among platforms and locally optimizes their policies via gradient descent.
no code implementations • 23 Jan 2022 • Chao Feng, Defu Lian, Xiting Wang, Zheng Liu, Xing Xie, Enhong Chen
Instead of searching the nearest neighbor for the query, we search the item with maximum inner product with query on the proximity graph.
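The graph search described above can be sketched as a greedy walk that maximizes the inner product rather than minimizing distance; this is a simplified hill-climbing sketch (the graph, vectors, and single-entry-point setup are illustrative assumptions):

```python
import numpy as np

def greedy_mips(graph, vectors, query, start):
    """Greedy walk on a proximity graph: repeatedly move to the neighbor
    with the largest inner product with the query; stop at a local
    maximum. `graph` maps node id -> list of neighbor ids."""
    current = start
    best = vectors[current] @ query
    while True:
        nxt = max(graph[current], key=lambda nb: vectors[nb] @ query)
        score = vectors[nxt] @ query
        if score <= best:
            return current, best
        current, best = nxt, score

# Toy 2-D item vectors and a hand-built proximity graph.
vectors = np.array([[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]])
graph = {0: [1], 1: [0, 2], 2: [1]}
node, score = greedy_mips(graph, vectors, np.array([0.0, 1.0]), start=0)
```

Real systems would use multiple entry points and a priority-queue beam rather than a single greedy path, but the scoring change (inner product in place of distance) is the essential difference.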
2 code implementations • 14 Jan 2022 • Shitao Xiao, Zheng Liu, Weihao Han, Jianjin Zhang, Yingxia Shao, Defu Lian, Chaozhuo Li, Hao Sun, Denvy Deng, Liangjie Zhang, Qi Zhang, Xing Xie
In this work, we tackle this problem with Bi-Granular Document Representation, where the lightweight sparse embeddings are indexed and standby in memory for coarse-grained candidate search, and the heavyweight dense embeddings are hosted in disk for fine-grained post verification.
no code implementations • 14 Dec 2021 • Yiqi Wang, Chaozhuo Li, Zheng Liu, Mingzheng Li, Jiliang Tang, Xing Xie, Lei Chen, Philip S. Yu
Thus, graph pre-training has the great potential to alleviate data sparsity in GNN-based recommendations.
no code implementations • 25 Oct 2021 • Jianan Zhao, Chaozhuo Li, Qianlong Wen, Yiqi Wang, Yuming Liu, Hao Sun, Xing Xie, Yanfang Ye
Existing graph transformer models typically adopt a fully-connected attention mechanism over the whole input graph, and thus suffer from severe scalability issues and are intractable to train when data is insufficient.
1 code implementation • 19 Oct 2021 • Yu Song, Jianxun Lian, Shuai Sun, Hong Huang, Yu Li, Hai Jin, Xing Xie
Then we propose a hierarchical CB (HCB) algorithm to explore users' interest in the hierarchy tree.
1 code implementation • 13 Sep 2021 • Yiqiao Jin, Xiting Wang, Ruichao Yang, Yizhou Sun, Wei Wang, Hao Liao, Xing Xie
The detection of fake news often requires sophisticated reasoning skills, such as logically combining information by considering word-level subtle clues.
1 code implementation • EMNLP 2021 • Jingwei Yi, Fangzhao Wu, Chuhan Wu, Ruixuan Liu, Guangzhong Sun, Xing Xie
However, the computation and communication cost of directly learning many existing news recommendation models in a federated way is unacceptable for user clients.
no code implementations • Findings (EMNLP) 2021 • Tao Qi, Fangzhao Wu, Chuhan Wu, Yongfeng Huang, Xing Xie
In this paper, we propose a unified news recommendation framework, which can utilize user data locally stored in user clients to train models and serve users in a privacy-preserving way.
no code implementations • 3 Sep 2021 • Chuhan Wu, Fangzhao Wu, Yang Yu, Tao Qi, Yongfeng Huang, Xing Xie
Two self-supervision tasks are incorporated in UserBERT for user model pre-training on unlabeled user behavior data to empower user modeling.
no code implementations • 30 Aug 2021 • Chuhan Wu, Fangzhao Wu, Lingjuan Lyu, Yongfeng Huang, Xing Xie
Instead of directly communicating the large models between clients and server, we propose an adaptive mutual distillation framework to reciprocally learn a student and a teacher model on each client, where only the student model is shared by different clients and updated collaboratively to reduce the communication cost.
9 code implementations • 20 Aug 2021 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie
In this way, Fastformer can achieve effective context modeling with linear complexity.
Ranked #1 on News Recommendation on MIND (using extra training data)
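The linear-complexity context modeling mentioned above rests on additive attention, which summarizes a whole sequence with a single learned scoring vector in O(N·d) time; the sketch below shows only this pooling step (the full Fastformer interleaves several such global vectors with element-wise products, and the weight vector here is an illustrative stand-in):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_pool(X, w):
    """Additive attention pooling: score each token with a learned
    vector w, then take the softmax-weighted sum of tokens --
    O(N*d) rather than the O(N^2*d) of pairwise self-attention."""
    alpha = softmax(X @ w)   # (N,) attention weights over tokens
    return alpha @ X         # (d,) global summary vector

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))          # 5 tokens, dimension 8
g = additive_pool(X, rng.standard_normal(8))
```

Because the weights are non-negative and sum to one, the pooled vector is a convex combination of the token vectors.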
no code implementations • 20 Aug 2021 • Chuhan Wu, Fangzhao Wu, Tao Qi, Binxing Jiao, Daxin Jiang, Yongfeng Huang, Xing Xie
We then sample token pairs based on their probability scores derived from the sketched attention matrix to generate different sparse attention index matrices for different attention heads.
no code implementations • 10 Aug 2021 • Yiqi Wang, Chaozhuo Li, Mingzheng Li, Wei Jin, Yuming Liu, Hao Sun, Xing Xie, Jiliang Tang
These methods often make recommendations based on the learned user and item embeddings.
1 code implementation • ACL 2021 • Xiang Ao, Xiting Wang, Ling Luo, Ying Qiao, Qing He, Xing Xie
To build up a benchmark for this problem, we publicize a large-scale dataset named PENS (PErsonalized News headlineS).
no code implementations • 16 Jun 2021 • Chuhan Wu, Fangzhao Wu, Yongfeng Huang, Xing Xie
Instead of following the conventional taxonomy of news recommendation methods, in this paper we propose a novel perspective to understand personalized news recommendation based on its core problems and the associated techniques and challenges.
no code implementations • ACL 2021 • Tao Qi, Fangzhao Wu, Chuhan Wu, Peiru Yang, Yang Yu, Xing Xie, Yongfeng Huang
Instead of a single user embedding, in our method each user is represented in a hierarchical interest tree to better capture their diverse and multi-grained interest in news.
1 code implementation • NeurIPS 2021 • Junhan Yang, Zheng Liu, Shitao Xiao, Chaozhuo Li, Defu Lian, Sanjay Agrawal, Amit Singh, Guangzhong Sun, Xing Xie
Representation learning on textual graphs aims to generate low-dimensional embeddings for the nodes based on the individual textual features and the neighbourhood information.
1 code implementation • 25 Apr 2021 • Chaozhuo Li, Bochen Pang, Yuming Liu, Hao Sun, Zheng Liu, Xing Xie, Tianqi Yang, Yanling Cui, Liangjie Zhang, Qi Zhang
Our motivation lies in incorporating the tremendous amount of unsupervised user behavior data from the historical search logs as the complementary graph to facilitate relevance modeling.
no code implementations • 22 Apr 2021 • Junhan Yang, Zheng Liu, Bowen Jin, Jianxun Lian, Defu Lian, Akshay Soni, Eun Yong Kang, Yajun Wang, Guangzhong Sun, Xing Xie
For the sake of efficient recommendation, conventional methods would generate user and advertisement embeddings independently with a siamese transformer encoder, such that approximate nearest neighbour search (ANN) can be leveraged.
2 code implementations • 16 Apr 2021 • Shitao Xiao, Zheng Liu, Yingxia Shao, Defu Lian, Xing Xie
In this work, we propose the Matching-oriented Product Quantization (MoPQ), where a novel objective Multinoulli Contrastive Loss (MCL) is formulated.
no code implementations • 15 Apr 2021 • Jingwei Yi, Fangzhao Wu, Chuhan Wu, Qifei Li, Guangzhong Sun, Xing Xie
The core of our method includes a bias representation module, a bias-aware user modeling module, and a bias-aware click prediction module.
1 code implementation • 18 Feb 2021 • Jianxun Lian, Iyad Batal, Zheng Liu, Akshay Soni, Eun Yong Kang, Yajun Wang, Xing Xie
User states in different channels are updated by an \emph{erase-and-add} paradigm with interest- and instance-level attention.
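The erase-and-add update above can be written as a one-line gated rule; this is a minimal sketch with made-up shapes and gate values (in the paper the gates come from interest- and instance-level attention):

```python
import numpy as np

def erase_and_add(memory, erase_gate, add_vec):
    """Erase-and-add update: each memory slot first forgets a gated
    fraction of its current state, then accumulates the new signal.
    Gate values lie in [0, 1]: 1 means fully erase, 0 means keep."""
    return memory * (1.0 - erase_gate) + add_vec

m = np.array([1.0, 2.0, 3.0])
updated = erase_and_add(m,
                        erase_gate=np.array([1.0, 0.5, 0.0]),
                        add_vec=np.array([0.1, 0.1, 0.1]))
# slot 0 is fully rewritten, slot 1 half-forgotten, slot 2 only accumulates
```

The design choice mirrors the forget/input gates of an LSTM cell, but applied per user-state channel.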
1 code implementation • 18 Feb 2021 • Shitao Xiao, Zheng Liu, Yingxia Shao, Tao Di, Xing Xie
Secondly, it improves the data efficiency of the training workflow, where non-informative data can be eliminated from encoding.
no code implementations • 9 Feb 2021 • Chuhan Wu, Fangzhao Wu, Yang Cao, Yongfeng Huang, Xing Xie
To incorporate high-order user-item interactions, we propose a user-item graph expansion method that can find neighboring users with co-interacted items and exchange their embeddings for expanding the local user-item graphs in a privacy-preserving way.
no code implementations • 12 Jan 2021 • Chuhan Wu, Fangzhao Wu, Yongfeng Huang, Xing Xie
The dwell time of news reading is an important clue for user interest modeling, since short reading dwell time usually indicates low and even negative interest.
no code implementations • ACCV 2020 • Yuanzhong Liu, Zhigang Tu, Liyu Lin, Xing Xie, and Qianqing Qin
In this paper, we exploit better ways to use motion information in a unified end-to-end trainable network architecture.
no code implementations • 3 Nov 2020 • Hao Liao, Qixin Liu, Kai Shu, Xing Xie
Yet, the popularity of social media also provides opportunities to better detect fake news.
Fake News Detection • Representation Learning • Social and Information Networks
no code implementations • NeurIPS 2020 • Binbin Jin, Defu Lian, Zheng Liu, Qi Liu, Jianhui Ma, Xing Xie, Enhong Chen
GAN-style recommenders (i.e., IRGAN) address the challenge by adversarially learning a generator and a discriminator, such that the generator produces increasingly difficult samples for the discriminator, accelerating the optimization of the discrimination objective.
2 code implementations • 21 Oct 2020 • Jiancan Wu, Xiang Wang, Fuli Feng, Xiangnan He, Liang Chen, Jianxun Lian, Xing Xie
In this work, we explore self-supervised learning on user-item graph, so as to improve the accuracy and robustness of GCNs for recommendation.
Ranked #4 on Collaborative Filtering on Yelp2018
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Chuhan Wu, Fangzhao Wu, Tao Qi, Jianxun Lian, Yongfeng Huang, Xing Xie
Motivated by pre-trained language models which are pre-trained on large-scale unlabeled corpora to empower many downstream tasks, in this paper we propose to pre-train user models from large-scale unlabeled user behavior data.
1 code implementation • 23 Jul 2020 • Chuhan Wu, Fangzhao Wu, Tao Di, Yongfeng Huang, Xing Xie
On each platform a local user model is used to learn user embeddings from the local user behaviors on that platform.
no code implementations • ACL 2020 • Heyuan Wang, Fangzhao Wu, Zheng Liu, Xing Xie
Existing studies generally represent each user as a single vector and then match the candidate news vector, which may lose fine-grained information for recommendation.
2 code implementations • ACL 2020 • Fangzhao Wu, Ying Qiao, Jiun-Hung Chen, Chuhan Wu, Tao Qi, Jianxun Lian, Danyang Liu, Xing Xie, Jianfeng Gao, Winnie Wu, Ming Zhou
News recommendation is an important technique for personalized news service.
1 code implementation • ACL 2020 • Linmei Hu, Siyong Xu, Chen Li, Cheng Yang, Chuan Shi, Nan Duan, Xing Xie, Ming Zhou
Furthermore, the learned representations are disentangled with latent preference factors by a neighborhood routing algorithm, which can enhance expressiveness and interpretability.
no code implementations • 30 Jun 2020 • Chuhan Wu, Fangzhao Wu, Xiting Wang, Yongfeng Huang, Xing Xie
In this paper, we propose a fairness-aware news recommendation approach with decomposed adversarial learning and orthogonality regularization, which can alleviate unfairness in news recommendation brought by the biases of sensitive user attributes.
1 code implementation • International World Wide Web Conference 2020 • Defu Lian, Haoyu Wang, Zheng Liu, Jianxun Lian, Enhong Chen, Xing Xie
On top of such a structure, LightRec will have an item represented as additive composition of B codewords, which are optimally selected from each of the codebooks.
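The additive composition above is easy to make concrete: an item is stored as B small integer codes, and its embedding is recovered as the sum of the selected codewords. A minimal sketch, with codebook sizes and codes chosen purely for illustration:

```python
import numpy as np

def compose_item(codebooks, codes):
    """Reconstruct an item embedding as the additive composition of one
    codeword per codebook: B small tables replace a full per-item
    embedding row, cutting memory from O(n*d) to O(B*K*d) + O(n*B)."""
    return sum(codebooks[b][codes[b]] for b in range(len(codebooks)))

rng = np.random.default_rng(0)
B, K, d = 4, 256, 32                       # 4 codebooks, 256 codewords each
codebooks = rng.standard_normal((B, K, d))
item_vec = compose_item(codebooks, codes=[3, 17, 250, 42])
```

With K = 256 each code fits in one byte, so an item costs B bytes instead of a d-dimensional float vector.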
2 code implementations • Findings of the Association for Computational Linguistics 2020 • Tao Qi, Fangzhao Wu, Chuhan Wu, Yongfeng Huang, Xing Xie
Extensive experiments on a real-world dataset show the effectiveness of our method in news recommendation model training with privacy protection.
no code implementations • 20 Mar 2020 • Suyu Ge, Fangzhao Wu, Chuhan Wu, Tao Qi, Yongfeng Huang, Xing Xie
Since the labeled data in different platforms usually has some differences in entity type and annotation criteria, instead of constraining different platforms to share the same model, we decompose the medical NER model in each platform into a shared module and a private module.
no code implementations • 28 Feb 2020 • Qingyu Guo, Fuzhen Zhuang, Chuan Qin, HengShu Zhu, Xing Xie, Hui Xiong, Qing He
On the one hand, we investigate the proposed algorithms by focusing on how the papers utilize the knowledge graph for accurate and explainable recommendation.
1 code implementation • 30 Jan 2020 • Jiancan Wu, Xiangnan He, Xiang Wang, Qifan Wang, Weijian Chen, Jianxun Lian, Xing Xie
The encoder projects users, items, and contexts into embedding vectors, which are passed to the GC layers that refine user and item embeddings with context-aware graph convolutions on user-item graph.
no code implementations • IJCNLP 2019 • Chuhan Wu, Fangzhao Wu, Mingxiao An, Tao Qi, Jianqiang Huang, Yongfeng Huang, Xing Xie
In the user representation module, we propose an attentive multi-view learning framework to learn unified representations of users from their heterogeneous behaviors such as search queries, clicked news and browsed webpages.
3 code implementations • IJCNLP 2019 • Chuhan Wu, Fangzhao Wu, Suyu Ge, Tao Qi, Yongfeng Huang, Xing Xie
The core of our approach is a news encoder and a user encoder.
no code implementations • IJCNLP 2019 • Chuhan Wu, Fangzhao Wu, Tao Qi, Suyu Ge, Yongfeng Huang, Xing Xie
In the review content-view, we propose to use a hierarchical model to first learn sentence representations from words, then learn review representations from sentences, and finally learn user/item representations from reviews.
1 code implementation • 25 Oct 2019 • Danyang Liu, Jianxun Lian, Shiyin Wang, Ying Qiao, Jiun-Hung Chen, Guangzhong Sun, Xing Xie
News articles usually contain knowledge entities such as celebrities or organizations.
no code implementations • 15 Jul 2019 • Zheng Liu, Yu Xing, Jianxun Lian, Defu Lian, Ziyao Li, Xing Xie
Our work is under anonymous review and will be released soon after the notification.
5 code implementations • 12 Jul 2019 • Chuhan Wu, Fangzhao Wu, Mingxiao An, Jianqiang Huang, Yongfeng Huang, Xing Xie
In the user encoder we learn the representations of users based on their browsed news and apply attention mechanism to select informative news for user representation learning.
Ranked #6 on News Recommendation on MIND
no code implementations • 12 Jul 2019 • Chuhan Wu, Fangzhao Wu, Mingxiao An, Jianqiang Huang, Yongfeng Huang, Xing Xie
Since different words and different news articles may have different informativeness for representing news and users, we propose to apply both word- and news-level attention mechanism to help our model attend to important words and news articles.
no code implementations • ACL 2019 • Chuhan Wu, Fangzhao Wu, Mingxiao An, Yongfeng Huang, Xing Xie
The core of our approach is a topic-aware news encoder and a user encoder.
1 code implementation • ACL 2019 • Mingxiao An, Fangzhao Wu, Chuhan Wu, Kun Zhang, Zheng Liu, Xing Xie
In this paper, we propose a neural news recommendation approach which can learn both long- and short-term user representations.
Ranked #7 on News Recommendation on MIND
no code implementations • ACL 2019 • Dehong Ma, Sujian Li, Fangzhao Wu, Xing Xie, Houfeng Wang
Aspect term extraction (ATE) aims at identifying all aspect terms in a sentence and is usually modeled as a sequence labeling problem.
Ranked #1 on Term Extraction on SemEval 2014 Task 4 Laptop
no code implementations • 24 Jun 2019 • Xiao Zhou, Danyang Liu, Jianxun Lian, Xing Xie
The success of recommender systems in modern online platforms is inseparable from the accurate capture of users' personal tastes.
1 code implementation • 4 Jun 2019 • Chanyoung Park, Donghyun Kim, Xing Xie, Hwanjo Yu
We also conduct extensive qualitative evaluations on the translation vectors learned by our proposed method to ascertain the benefit of adopting the translation mechanism for implicit feedback-based recommendations.
Ranked #1 on Recommendation Systems on Delicious
no code implementations • 1 Jun 2019 • Le Wu, Lei Chen, Yonghui Yang, Richang Hong, Yong Ge, Xing Xie, Meng Wang
We argue that the key challenge of this problem lies in discovering users' visual profiles for key frame recommendation, as most recommendation models would fail without any users' fine-grained image behavior.
no code implementations • 29 May 2019 • Xianchen Wang, Hongtao Liu, Peiyi Wang, Fangzhao Wu, Hongyan Xu, Wenjun Wang, Xing Xie
In this paper, we propose a hierarchical attention model fusing latent factor model for rating prediction with reviews, which can focus on important words and informative reviews.
5 code implementations • 29 May 2019 • Hongtao Liu, Fangzhao Wu, Wenjun Wang, Xianchen Wang, Pengfei Jiao, Chuhan Wu, Xing Xie
In this paper we propose a neural recommendation approach with personalized attention to learn personalized representations of users and items from reviews.
no code implementations • 27 May 2019 • Yu Yin, Zhenya Huang, Enhong Chen, Qi Liu, Fuzheng Zhang, Xing Xie, Guoping Hu
Then, we decide "what-to-write" by developing a GRU based network with the spotlight areas for transcribing the content accordingly.
1 code implementation • 26 Apr 2019 • Fangzhao Wu, Junxin Liu, Chuhan Wu, Yongfeng Huang, Xing Xie
Besides, the training data for CNER in many domains is usually insufficient, and annotating enough training data for CNER is very expensive and time-consuming.
Chinese Named Entity Recognition
no code implementations • 26 Apr 2019 • Junxin Liu, Fangzhao Wu, Chuhan Wu, Yongfeng Huang, Xing Xie
Luckily, the unlabeled data is usually easy to collect and many high-quality Chinese lexicons are off-the-shelf, both of which can provide useful information for CWS.
8 code implementations • 18 Mar 2019 • Hongwei Wang, Miao Zhao, Xing Xie, Wenjie Li, Minyi Guo
To alleviate the sparsity and cold-start problems of collaborative filtering based recommender systems, researchers and engineers usually collect attributes of users and items, and design delicate algorithms to exploit this additional information.
Ranked #1 on Click-Through Rate Prediction on Book-Crossing
3 code implementations • 23 Jan 2019 • Hongwei Wang, Fuzheng Zhang, Miao Zhao, Wenjie Li, Xing Xie, Minyi Guo
Collaborative filtering often suffers from sparsity and cold start problems in real recommendation scenarios, therefore, researchers and engineers usually use side information to address the issues and improve the performance of recommender systems.
2 code implementations • IJCAI 2019 • Zeping Yu, Jianxun Lian, Ahmad Mahmoody, Gongshen Liu, Xing Xie
User modeling is an essential task for online recommender systems.
Ranked #2 on Recommendation Systems on Amazon Product Data
7 code implementations • 1 Nov 2018 • Shu Wu, Yuyuan Tang, Yanqiao Zhu, Liang Wang, Xing Xie, Tieniu Tan
To obtain accurate item embeddings and take complex transitions of items into account, we propose a novel method, i.e., Session-based Recommendation with Graph Neural Networks (SR-GNN for brevity).
Ranked #1 on Session-Based Recommendations on Gowalla
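The complex item transitions mentioned above are captured by first turning each session into a small directed graph that a GNN then operates on; a minimal sketch of that graph-construction step (the session sequence is made up for illustration):

```python
def session_graph(session):
    """Build the directed item-transition graph of one session: an edge
    u -> v for every consecutive click pair. This per-session graph is
    what a GNN consumes in session-based recommendation."""
    nodes = sorted(set(session))
    edges = set()
    for u, v in zip(session, session[1:]):
        edges.add((u, v))
    return nodes, sorted(edges)

# A toy session of item ids; repeated items become a single node.
nodes, edges = session_graph([5, 2, 5, 7])
```

Representing the session as a graph, rather than a flat sequence, lets repeated items share one node and exposes back-and-forth transitions that a purely sequential model would flatten away.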
1 code implementation • WS 2018 • Chuhan Wu, Fangzhao Wu, Junxin Liu, Sixing Wu, Yongfeng Huang, Xing Xie
This paper describes our system for the first and third shared tasks of the third Social Media Mining for Health Applications (SMM4H) workshop, which aims to detect the tweets mentioning drug names and adverse drug reactions.
no code implementations • 9 Aug 2018 • Wen-Feng Cheng, Chao-Chung Wu, Ruihua Song, Jianlong Fu, Xing Xie, Jian-Yun Nie
This is one of the few attempts to generate poetry from images.
no code implementations • COLING 2018 • Sixing Wu, Dawei Zhang, Ying Li, Xing Xie, Zhonghai Wu
Recent years have witnessed a surge of interest on response generation for neural conversation systems.
no code implementations • 22 Jul 2018 • Lijun Yu, Dawei Zhang, Xiangqun Chen, Xing Xie
In this paper, we introduce MOBA-Slice, a time slice based evaluation framework of relative advantage between teams in MOBA games.
no code implementations • 11 Jul 2018 • Junxin Liu, Fangzhao Wu, Chuhan Wu, Yongfeng Huang, Xing Xie
The experimental results on two benchmark datasets validate that our approach can effectively improve the performance of Chinese word segmentation, especially when training data is insufficient.
1 code implementation • 3 Jun 2018 • Le Wu, Lei Chen, Richang Hong, Yanjie Fu, Xing Xie, Meng Wang
After that, we design a hierarchical attention network that naturally mirrors the hierarchical relationship (elements in each aspect level, and the aspect level) of users' latent interests with the identified key aspects.
19 code implementations • 14 Mar 2018 • Jianxun Lian, Xiaohuan Zhou, Fuzheng Zhang, Zhongxia Chen, Xing Xie, Guangzhong Sun
On the one hand, xDeepFM is able to learn certain bounded-degree feature interactions explicitly; on the other hand, it can learn arbitrary low- and high-order feature interactions implicitly.
Ranked #1 on Click-Through Rate Prediction on Dianping
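The explicit bounded-degree interactions mentioned above come from xDeepFM's compressed interaction network (CIN); the sketch below shows a single such layer in simplified form, with all shapes and weights purely illustrative:

```python
import numpy as np

def cin_layer(x_prev, x0, W):
    """One compressed-interaction step: take the element-wise (Hadamard)
    product of every pair of rows from the previous layer and the raw
    field embeddings, then compress the pairs with W into the next
    layer's feature maps -- interactions one explicit degree higher."""
    z = np.einsum('id,jd->ijd', x_prev, x0)   # (H_prev, m, d) pairwise products
    return np.einsum('hij,ijd->hd', W, z)     # (H_next, d) compressed maps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((3, 4))              # 3 fields, embedding dim 4
W = rng.standard_normal((2, 3, 3))            # compress 3x3 pairs into 2 maps
x1 = cin_layer(x0, x0, W)                     # degree-2 explicit interactions
```

Stacking k such layers yields explicit interactions of degree k+1, which is what "bounded-degree" refers to: the depth of the stack bounds the interaction order.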
9 code implementations • 9 Mar 2018 • Hongwei Wang, Fuzheng Zhang, Jialin Wang, Miao Zhao, Wenjie Li, Xing Xie, Minyi Guo
To address the sparsity and cold start problem of collaborative filtering, researchers usually make use of side information, such as social networks or item attributes, to improve recommendation performance.
Ranked #2 on Click-Through Rate Prediction on Book-Crossing
4 code implementations • 25 Jan 2018 • Hongwei Wang, Fuzheng Zhang, Xing Xie, Minyi Guo
To solve the above problems, in this paper, we propose a deep knowledge-aware network (DKN) that incorporates knowledge graph representation into news recommendation.
Ranked #5 on News Recommendation on MIND
1 code implementation • 3 Dec 2017 • Hongwei Wang, Fuzheng Zhang, Min Hou, Xing Xie, Minyi Guo, Qi Liu
First, due to the lack of explicit sentiment links in mainstream social networks, we establish a labeled heterogeneous sentiment dataset which consists of users' sentiment relation, social relation and profile knowledge by entity-level sentiment extraction method.
5 code implementations • 22 Nov 2017 • Hongwei Wang, Jia Wang, Jialin Wang, Miao Zhao, Wei-Nan Zhang, Fuzheng Zhang, Xing Xie, Minyi Guo
The goal of graph representation learning is to embed each vertex in a graph into a low-dimensional vector space.
Ranked #1 on Node Classification on Wikipedia
no code implementations • 8 Mar 2017 • Tianran Hu, Ruihua Song, Maya Abtahian, Philip Ding, Xing Xie, Jiebo Luo
We propose an approach that quantifies semantic differences in interpretations among different groups of people.
no code implementations • ACM SIGSPATIAL GIS 2010 2010 • Jing Yuan, Yu Zheng, Chengyang Zhang, Wenlei Xie, Xing Xie, Guangzhong Sun, Yan Huang
GPS-equipped taxis can be regarded as mobile sensors probing traffic flows on road surfaces, and taxi drivers are usually experienced in finding the fastest route to a destination based on their knowledge.