1 code implementation • ICML 2020 • Kimon Fountoulakis, Di Wang, Shenghao Yang
Local graph clustering and the closely related seed set expansion problem are primitives on graphs that are central to a wide range of analytic and learning tasks such as local clustering, community detection, node ranking, and feature inference.
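As background for this line of work, local clustering is commonly run with an approximate personalized PageRank "push" procedure, which only touches nodes near the seed. A minimal sketch, assuming a small undirected toy graph (the graph, teleport parameter, and tolerance here are illustrative, not the paper's algorithm):

```python
def ppr_push(adj, seed, alpha=0.15, eps=1e-4):
    """Approximate personalized PageRank via local push updates."""
    p, r = {}, {seed: 1.0}  # estimate and residual mass
    queue = [seed]
    while queue:
        u = queue.pop()
        deg = len(adj[u])
        if r.get(u, 0.0) < eps * deg:
            continue
        ru = r.pop(u)
        p[u] = p.get(u, 0.0) + alpha * ru      # keep an alpha fraction at u
        r[u] = (1 - alpha) * ru / 2            # lazy self-loop
        for v in adj[u]:                       # spread the rest to neighbors
            r[v] = r.get(v, 0.0) + (1 - alpha) * ru / (2 * deg)
            if r[v] >= eps * len(adj[v]):
                queue.append(v)
        if r[u] >= eps * deg:
            queue.append(u)
    return p

# two triangles joined by a single edge; seed inside the first triangle
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
p = ppr_push(adj, seed=0)
top = sorted(p, key=lambda u: p[u] / len(adj[u]), reverse=True)[:3]
```

Ranking nodes by degree-normalized mass recovers the seed's triangle as the local cluster.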
no code implementations • 30 May 2024 • Jia Li, Lijie Hu, Zhixian He, Jingfeng Zhang, Tianhang Zheng, Di Wang
With the advancement of image-to-image diffusion models guided by text, significant progress has been made in image editing.
no code implementations • 27 May 2024 • Meng Ding, Kaiyi Ji, Di Wang, Jinhui Xu
In this paper, we provide a general theoretical analysis of forgetting in the linear regression model via Stochastic Gradient Descent (SGD) applicable to both underparameterized and overparameterized regimes.
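The phenomenon being analyzed can be illustrated with a toy sequential-training experiment (illustrative only, not the paper's analysis): fit a linear model on task 1 with SGD, continue training on task 2, and measure how much the task-1 loss degrades.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(w_star, n=100, d=2):
    X = rng.normal(size=(n, d))
    return X, X @ w_star               # noiseless, realizable targets

def sgd(w, X, y, lr=0.05, epochs=50):
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w = w - lr * (xi @ w - yi) * xi   # squared-loss gradient step
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

X1, y1 = make_task(np.array([1.0, 0.0]))
X2, y2 = make_task(np.array([0.0, 1.0]))

w = sgd(np.zeros(2), X1, y1)
loss_before = mse(w, X1, y1)               # near zero after task 1
w = sgd(w, X2, y2)
forgetting = mse(w, X1, y1) - loss_before  # positive: task 2 overwrote task 1
```

With orthogonal task optima, continued SGD drives the weights toward the second task and the first task's risk rises, which is exactly the quantity a forgetting analysis bounds.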
no code implementations • 27 May 2024 • Jidong Jia, Pei Zhao, Di Wang
Voice activity detection (VAD) is the task of detecting speech in an audio stream, which is challenging due to numerous unseen noises and low signal-to-noise ratios in real environments.
no code implementations • 24 May 2024 • Keyuan Cheng, Muhammad Asif Ali, Shu Yang, Gang Lin, Yuxuan Zhai, Haoyang Fei, Ke Xu, Lu Yu, Lijie Hu, Di Wang
To address these issues, in this paper we propose a novel framework named RULE-KE, i.e., RULE-based Knowledge Editing, which serves as a cherry on top, augmenting the performance of all existing MQA methods under KE.
no code implementations • 24 May 2024 • Lijie Hu, Chenyang Ren, Zhengyu Hu, Cheng-Long Wang, Di Wang
Concept Bottleneck Models (CBMs) have garnered much attention for their ability to elucidate the prediction process through a human-understandable concept layer.
no code implementations • 23 May 2024 • Shuaipeng Li, Penghao Zhao, Hailin Zhang, Xingwu Sun, Hao Wu, Dian Jiao, Weiyan Wang, Chengjun Liu, Zheng Fang, Jinbao Xue, Yangyu Tao, Bin Cui, Di Wang
First, we establish the scaling law between batch sizes and optimal learning rates in the sign-of-gradient case, in which we prove that the optimal learning rate first rises and then falls as the batch size increases.
no code implementations • 16 May 2024 • Jiahuan Pei, Irene Viola, Haochen Huang, Junxiao Wang, Moonisa Ahsan, Fanghua Ye, Jiang Yiming, Yao Sai, Di Wang, Zhumin Chen, Pengjie Ren, Pablo Cesar
We present a demonstration of a multimodal fine-grained training assistant for LEGO brick assembly in a pilot XR environment.
1 code implementation • 16 May 2024 • Wentao Jiang, Jing Zhang, Di Wang, Qiming Zhang, Zengmao Wang, Bo Du
Experimental results in classification and dense prediction tasks show that LeMeViT has a significant $1.7\times$ speedup, fewer parameters, and competitive performance compared to the baseline models, and achieves a better trade-off between efficiency and performance.
1 code implementation • 14 May 2024 • Zhimin Li, Jianwei Zhang, Qin Lin, Jiangfeng Xiong, Yanxin Long, Xinchi Deng, Yingfang Zhang, Xingchao Liu, Minbin Huang, Zedong Xiao, Dayou Chen, Jiajun He, Jiahao Li, Wenyue Li, Chen Zhang, Rongwei Quan, Jianxiang Lu, Jiabin Huang, Xiaoyan Yuan, Xiaoxiao Zheng, Yixuan Li, Jihong Zhang, Chao Zhang, Meng Chen, Jie Liu, Zheng Fang, Weiyan Wang, Jinbao Xue, Yangyu Tao, Jianchen Zhu, Kai Liu, Sihuan Lin, Yifu Sun, Yun Li, Dongdong Wang, Mingtao Chen, Zhichao Hu, Xiao Xiao, Yan Chen, Yuhong Liu, Wei Liu, Di Wang, Yong Yang, Jie Jiang, Qinglin Lu
For fine-grained language understanding, we train a Multimodal Large Language Model to refine the captions of the images.
no code implementations • 3 May 2024 • Mudit Gaur, Amrit Singh Bedi, Di Wang, Vaneet Aggarwal
The current state-of-the-art theoretical analysis of Actor-Critic (AC) algorithms significantly lags in addressing the practical aspects of AC implementations.
no code implementations • 11 Apr 2024 • Kumar Avinava Dubey, Zhe Feng, Rahul Kidambi, Aranyak Mehta, Di Wang
We study an auction setting in which bidders bid for placement of their content within a summary generated by a large language model (LLM), e.g., an ad auction in which the display is a summary paragraph of multiple ads.
no code implementations • 30 Mar 2024 • Shu Yang, Jiayuan Su, Han Jiang, Mengdi Li, Keyuan Cheng, Muhammad Asif Ali, Lijie Hu, Di Wang
With the rise of large language models (LLMs), ensuring they embody the principles of being helpful, honest, and harmless (3H), known as Human Alignment, becomes crucial.
no code implementations • 30 Mar 2024 • Muhammad Asif Ali, ZhengPing Li, Shu Yang, Keyuan Cheng, Yang Cao, Tianhao Huang, Lijie Hu, Lu Yu, Di Wang
Large language models (LLMs) have shown exceptional abilities across a wide range of natural language processing tasks.
no code implementations • 30 Mar 2024 • Keyuan Cheng, Gang Lin, Haoyang Fei, Yuxuan Zhai, Lu Yu, Muhammad Asif Ali, Lijie Hu, Di Wang
Multi-hop question answering (MQA) under knowledge editing (KE) has garnered significant attention in the era of large language models.
1 code implementation • 20 Mar 2024 • Di Wang, Jing Zhang, Minqiang Xu, Lin Liu, Dongsheng Wang, Erzhong Gao, Chengxi Han, HaoNan Guo, Bo Du, DaCheng Tao, Liangpei Zhang
However, transferring the pretrained models to downstream tasks may encounter task discrepancy due to their formulation of pretraining as image classification or object discrimination tasks.
Ranked #1 on Semantic Segmentation on SpaceNet 1 (using extra training data)
no code implementations • 19 Mar 2024 • Cheng-Long Wang, Qi Li, Zihang Xiang, Yinzhi Cao, Di Wang
Our analysis, conducted across multiple unlearning benchmarks, reveals that these algorithms inconsistently fulfill their unlearning commitments due to two main issues: 1) unlearning new data can significantly affect the unlearning utility of previously requested data, and 2) approximate algorithms fail to ensure equitable unlearning utility across different groups.
no code implementations • 20 Feb 2024 • Zihang Xiang, Chenglong Wang, Di Wang
Recent works propose a generic private solution for the tuning process, yet a fundamental question still persists: is the current privacy bound for this solution tight?
1 code implementation • 19 Feb 2024 • Zihao Luo, Xilie Xu, Feng Liu, Yun Sing Koh, Di Wang, Jingfeng Zhang
To mitigate this issue, we propose Stable PrivateLoRA that adapts the LDM by minimizing the ratio of the adaptation loss to the MI gain, which implicitly rescales the gradient and thus stabilizes the optimization.
no code implementations • 17 Feb 2024 • Shu Yang, Muhammad Asif Ali, Cheng-Long Wang, Lijie Hu, Di Wang
Adapting large language models (LLMs) to new domains/tasks and enabling them to be efficient lifelong learners is a pivotal challenge.
no code implementations • 17 Feb 2024 • Shu Yang, Muhammad Asif Ali, Lu Yu, Lijie Hu, Di Wang
The increasing significance of large models and their multi-modal variants in societal information processing has ignited debates on social safety and ethics.
1 code implementation • 23 Jan 2024 • Shuai Liu, Di Wang, Quan Wang, Kai Huang
The NIV strategy can serve as a bridge between the classification and regression branches by calculating two types of statistics from the regression output to correct the classification confidence.
no code implementations • 19 Jan 2024 • Youming Tao, Cheng-Long Wang, Miao Pan, Dongxiao Yu, Xiuzhen Cheng, Di Wang
We start by giving a rigorous definition of \textit{exact} federated unlearning, which guarantees that the unlearned model is statistically indistinguishable from the one trained without the deleted data.
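For intuition about exactness (a generic sketch, not the paper's federated framework): with models that admit sufficient statistics, such as ridge regression, deleting a point by downdating those statistics yields the same model as retraining from scratch on the remaining data, up to floating-point error — which is precisely the indistinguishability that exact unlearning demands.

```python
import numpy as np

rng = np.random.default_rng(1)
X, y = rng.normal(size=(50, 3)), rng.normal(size=50)
lam = 0.1

def ridge(A, b):
    """Closed-form ridge regression."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

# maintain sufficient statistics, then "unlearn" sample i by subtraction
G, h = X.T @ X, X.T @ y
i = 7
G -= np.outer(X[i], X[i])
h -= X[i] * y[i]
w_unlearned = np.linalg.solve(G + lam * np.eye(3), h)

# the gold standard: retrain without the deleted point
w_retrained = ridge(np.delete(X, i, axis=0), np.delete(y, i))
```

The unlearned and retrained models coincide, so no statistical test can distinguish them.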
no code implementations • 18 Jan 2024 • Muhammad Asif Ali, Yan Hu, Jianbin Qin, Di Wang
In this paper, we propose InterlaCed Encoder NETworks (ICE-NET) for antonym vs. synonym distinction, which aim to capture and model the relation-specific properties of antonym and synonym pairs in order to improve classification performance.
no code implementations • 16 Jan 2024 • Xiaotong Liu, Jinxin Wang, Di Wang, Shao-Bo Lin
In this paper, we introduce a weighted spectral filter approach to reduce the condition number of the kernel matrix and then stabilize kernel interpolation.
no code implementations • 30 Dec 2023 • Linhao Xu, Lin Zhao, Xinxin Sun, Di Wang, Guangyu Li, Kedong Yan
The challenges posed by occlusion can be attributed to the following factors: 1) Data: The collection and annotation of occluded human pose samples are relatively challenging.
1 code implementation • 29 Dec 2023 • Zhongzhi Chen, Xingwu Sun, Xianfeng Jiao, Fengzong Lian, Zhanhui Kang, Di Wang, Cheng-Zhong Xu
We introduce Truth Forest, a method that enhances truthfulness in LLMs by uncovering hidden truth representations using multi-dimensional orthogonal probes.
no code implementations • 27 Dec 2023 • Chenyang Qiu, Guoshun Nan, Tianyu Xiong, Wendi Deng, Di Wang, Zhiyang Teng, Lijuan Sun, Qimei Cui, Xiaofeng Tao
This finding motivates us to present a novel method that aims to harden GCNs by automatically learning Latent Homophilic Structures over heterophilic graphs.
Ranked #3 on Node Classification on Actor
1 code implementation • 21 Dec 2023 • Zhixiang Su, Di Wang, Chunyan Miao, Lizhen Cui
To address this challenge, we propose the Anchoring Path Sentence Transformer (APST), introducing Anchoring Paths (APs) to alleviate the reliance on CPs.
1 code implementation • 19 Dec 2023 • Yubin Xiao, Di Wang, Boyang Li, Mingzhao Wang, Xuan Wu, Changliang Zhou, You Zhou
To the best of our knowledge, this study is the first to obtain NAR VRP solvers from AR ones through knowledge distillation.
1 code implementation • 17 Dec 2023 • Xiaoqi An, Lin Zhao, Chen Gong, Nannan Wang, Di Wang, Jian Yang
In this paper, we address the following question: "Only sparse human keypoint locations are detected for human pose estimation, is it really necessary to describe the whole image in a dense, high-resolution manner?"
1 code implementation • 12 Dec 2023 • Xiaochuan Li, Baoyu Fan, Runze Zhang, Liang Jin, Di Wang, Zhenhua Guo, YaQian Zhao, RenGang Li
Incorporating causal reasoning into visual content generation is significant.
no code implementations • 29 Nov 2023 • Lijie Hu, Yixin Liu, Ninghao Liu, Mengdi Huai, Lichao Sun, Di Wang
However, ViTs suffer from issues with explanation faithfulness, as their focal points are fragile to adversarial attacks and can be easily changed with even slight perturbations on the input image.
no code implementations • 29 Nov 2023 • Jia Li, Lijie Hu, Jingfeng Zhang, Tianhang Zheng, Hua Zhang, Di Wang
In this paper, we address the limitations of existing text-to-image diffusion models in generating demographically fair results when given human-related descriptions.
no code implementations • 12 Nov 2023 • Zihang Xiang, Tianhao Wang, Di Wang
In this study, we propose a solution that specifically addresses the issue of node-level privacy.
no code implementations • 25 Oct 2023 • Shao-Bo Lin, Xingping Sun, Di Wang
For radial basis function (RBF) kernel interpolation of scattered data, Schaback in 1995 proved that the attainable approximation error and the condition number of the underlying interpolation matrix cannot be made small simultaneously.
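Schaback's trade-off is easy to observe numerically: densifying the interpolation nodes (which lowers the attainable error) blows up the condition number of the Gaussian-kernel Gram matrix. A small illustration, with an arbitrary kernel width and node grids:

```python
import numpy as np

def gaussian_gram(x, width):
    """Gram matrix of the Gaussian RBF kernel on 1-D nodes x."""
    d = x[:, None] - x[None, :]
    return np.exp(-(d / width) ** 2)

cond_coarse = np.linalg.cond(gaussian_gram(np.linspace(0, 1, 6), 0.3))
cond_fine = np.linalg.cond(gaussian_gram(np.linspace(0, 1, 24), 0.3))
# denser nodes: smaller interpolation error but a far worse-conditioned matrix
```

The fine grid's Gram matrix is orders of magnitude worse conditioned, which is what makes naive kernel interpolation numerically unstable.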
1 code implementation • 19 Oct 2023 • Muhammad Asif Ali, Maha Alshmrani, Jianbin Qin, Yan Hu, Di Wang
Bilingual Lexical Induction (BLI) is a core challenge in NLP; it relies on the relative isomorphism of individual embedding spaces.
1 code implementation • 18 Oct 2023 • Muhammad Asif Ali, Yan Hu, Jianbin Qin, Di Wang
Automated construction of bilingual dictionaries using monolingual embedding spaces is a core challenge in machine translation.
no code implementations • 15 Oct 2023 • Hongjun Wu, Di Wang
The worst-case resource usage of a program can provide useful information for many software-engineering tasks, such as performance optimization and algorithmic-complexity-vulnerability discovery.
no code implementations • 12 Oct 2023 • Hanpu Shen, Cheng-Long Wang, Zihang Xiang, Yiming Ying, Di Wang
This paper focuses on the problem of Differentially Private Stochastic Optimization for (multi-layer) fully connected neural networks with a single output node.
no code implementations • 11 Oct 2023 • Liyang Zhu, Meng Ding, Vaneet Aggarwal, Jinhui Xu, Di Wang
To address these issues, we first consider the problem in the $\epsilon$ non-interactive LDP model and provide a lower bound of $\Omega(\frac{\sqrt{dk\log d}}{\sqrt{n}\epsilon})$ on the $\ell_2$-norm estimation error for sub-Gaussian data, where $n$ is the sample size and $d$ is the dimension of the space.
1 code implementation • 9 Oct 2023 • Shaopeng Fu, Di Wang
Adversarial training (AT) is a canonical method for enhancing the robustness of deep neural networks (DNNs).
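A minimal sketch of the AT loop on logistic regression with single-step (FGSM-style) inner perturbations — the data, perturbation budget, and step sizes are illustrative, and practical AT uses multi-step PGD on deep networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# two-Gaussian binary classification data
n = 200
X = np.vstack([rng.normal(-1, 1, size=(n, 2)), rng.normal(1, 1, size=(n, 2))])
y = np.array([0] * n + [1] * n)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

w, eps, lr = np.zeros(2), 0.2, 0.1
for _ in range(100):
    # inner step: perturb each input in the direction that increases its loss
    p = sigmoid(X @ w)
    grad_x = (p - y)[:, None] * w[None, :]   # d(loss)/d(x) for logistic loss
    X_adv = X + eps * np.sign(grad_x)
    # outer step: gradient descent on the adversarial examples
    p_adv = sigmoid(X_adv @ w)
    w -= lr * (X_adv.T @ (p_adv - y) / len(y))

acc = float(np.mean((sigmoid(X @ w) > 0.5) == y))
```

Training on worst-case perturbed inputs still yields a model that classifies the clean data well, while being less sensitive to small input shifts.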
no code implementations • 15 Sep 2023 • Jinyan Su, Terry Yue Zhuo, Jonibek Mansurov, Di Wang, Preslav Nakov
The spread of fake news has emerged as a critical challenge, undermining trust and posing threats to society.
no code implementations • 8 Sep 2023 • Di Wang, Xiaotong Liu, Shao-Bo Lin, Ding-Xuan Zhou
Data silos, mainly caused by privacy and interoperability, significantly constrain collaborations among different organizations with similar data for the same purpose.
no code implementations • 22 Aug 2023 • Di Wang, JinYuan Liu, Long Ma, Risheng Liu, Xin Fan
Both stages directly estimate the respective target deformation fields.
1 code implementation • 11 Aug 2023 • Xinyue Ma, Suyeon Jeong, Minjia Zhang, Di Wang, Jonghyun Choi, Myeongjae Jeon
Continual learning (CL) trains NN models incrementally from a continuous stream of tasks.
1 code implementation • 1 Aug 2023 • Yubin Xiao, Di Wang, Boyang Li, Huanhuan Chen, Wei Pang, Xuan Wu, Hao Li, Dong Xu, Yanchun Liang, You Zhou
The Traveling Salesman Problem (TSP) is a well-known combinatorial optimization problem with broad real-world applications.
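For readers new to the problem, the classic nearest-neighbor construction heuristic (a textbook baseline, not one of the learning-based solvers studied here) shows how approximate tours are built:

```python
import math

def nn_tour(points):
    """Greedy nearest-neighbor tour starting from city 0."""
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(points[tour[-1]], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

square = [(0, 0), (0, 1), (1, 1), (1, 0)]
length = tour_length(square, nn_tour(square))   # optimal square tour: 4.0
```

On general instances the greedy tour can be noticeably longer than optimal, which is the gap that learned solvers try to close.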
no code implementations • 18 Jun 2023 • Mudit Gaur, Amrit Singh Bedi, Di Wang, Vaneet Aggarwal
To achieve that, we propose a Natural Actor-Critic algorithm with 2-Layer critic parametrization (NAC2L).
no code implementations • 1 Jun 2023 • Yulian Wu, Xingyu Zhou, Sayak Ray Chowdhury, Di Wang
Under each framework, we consider both joint differential privacy (JDP) and local differential privacy (LDP) models.
no code implementations • 26 May 2023 • Puyu Wang, Yunwen Lei, Di Wang, Yiming Ying, Ding-Xuan Zhou
This sheds light on sufficient or necessary conditions for under-parameterized and over-parameterized NNs trained by GD to attain the desired risk rate of $O(1/\sqrt{n})$.
1 code implementation • 23 May 2023 • Jinyan Su, Terry Yue Zhuo, Di Wang, Preslav Nakov
One is called DetectLLM-LRR, which is fast and efficient, and the other is called DetectLLM-NPR, which is more accurate, but slower due to the need for perturbations.
1 code implementation • 17 May 2023 • Di Wang, JinYuan Liu, Risheng Liu, Xin Fan
Their common characteristic of seeking complementary cues from different source images motivates us to explore the collaborative relationship between Fusion and Salient object detection tasks on infrared and visible images via an Interactively Reinforced multi-task paradigm for the first time, termed IRFS.
no code implementations • 3 May 2023 • Tao Chen, Liang Lv, Di Wang, Jing Zhang, Yue Yang, Zeyang Zhao, Chen Wang, Xiaowei Guo, Hao Chen, Qingye Wang, Yufei Xu, Qiming Zhang, Bo Du, Liangpei Zhang, DaCheng Tao
With the world population rapidly increasing, transforming our agrifood systems to be more productive, efficient, safe, and sustainable is crucial to mitigate potential food shortages.
2 code implementations • NeurIPS 2023 • Di Wang, Jing Zhang, Bo Du, Minqiang Xu, Lin Liu, DaCheng Tao, Liangpei Zhang
In this study, we leverage SAM and existing RS object detection datasets to develop an efficient pipeline for generating a large-scale RS segmentation dataset, dubbed SAMRS.
1 code implementation • 23 Apr 2023 • Di Wang, Bo Du, Liangpei Zhang, DaCheng Tao
Recent neural architecture search (NAS) based approaches have made great progress in hyperspectral image (HSI) classification tasks.
2 code implementations • 19 Apr 2023 • Di Wang, Jing Zhang, Bo Du, Liangpei Zhang, DaCheng Tao
Hyperspectral image (HSI) classification is challenging due to spatial variability caused by complex imaging conditions.
1 code implementation • 15 Apr 2023 • Zihang Xiang, Tianhao Wang, WanYu Lin, Di Wang
In contrast, we leverage the random noise to construct an aggregation that effectively rejects many existing Byzantine attacks.
1 code implementation • 6 Apr 2023 • Cheng-Long Wang, Mengdi Huai, Di Wang
To extend machine unlearning to graph data, \textit{GraphEraser} has been proposed.
no code implementations • 31 Mar 2023 • Jinyan Su, Changhong Zhao, Di Wang
In this paper, we revisit the problem of Differentially Private Stochastic Convex Optimization (DP-SCO) in Euclidean and general $\ell_p^d$ spaces.
no code implementations • 20 Mar 2023 • Shu-Hao Yeh, Shuangyu Xie, Di Wang, Wei Yan, Dezhen Song
Here we propose a novel neural network-based approach that estimates the $\mathrm{K}$ matrix in real time so that pose estimation or scene reconstruction can be run at the camera's native resolution for the highest accuracy on mobile devices.
no code implementations • 8 Mar 2023 • Shao-Bo Lin, Di Wang, Ding-Xuan Zhou
These interesting findings show that the proposed sketching strategy is capable of fitting massive and noisy data on spheres.
no code implementations • 21 Feb 2023 • Di Wang, Yao Wang, Shaojie Tang, Shao-Bo Lin
The novelties of our research are as follows: 1) From a methodological perspective, we present a novel and scalable approach for generating DTRs by combining distributed learning with Q-learning.
1 code implementation • 20 Feb 2023 • Juexiao Zhou, Longxi Zhou, Di Wang, Xiaopeng Xu, Haoyang Li, Yuetan Chu, Wenkai Han, Xin Gao
However, there are few open-source frameworks for federated heterogeneous medical image analysis with personalization and privacy protection simultaneously without the demand to modify the existing model structures or to share any private data.
no code implementations • 16 Feb 2023 • Bhargav Ganguly, Yulian Wu, Di Wang, Vaneet Aggarwal
This improvement is a key to the significant regret improvement in quantum reinforcement learning.
no code implementations • 3 Feb 2023 • Santiago Balseiro, Rachitesh Kumar, Vahab Mirrokni, Balasubramanian Sivan, Di Wang
Given the inherent non-stationarity in an advertiser's value and also competing advertisers' values over time, a commonly used approach is to learn a target expenditure plan that specifies a target spend as a function of time, and then run a controller that tracks this plan.
no code implementations • 23 Jan 2023 • Yulian Wu, Chaowen Guan, Vaneet Aggarwal, Di Wang
In this paper, we study multi-armed bandits (MAB) and stochastic linear bandits (SLB) with heavy-tailed rewards and quantum reward oracle.
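For context, the classical (light-tailed, non-quantum) baseline here is UCB; heavy-tailed and quantum variants replace the empirical mean and the reward oracle, but the optimism-based exploration template is the same. A sketch of UCB1 on Bernoulli arms:

```python
import math
import random

random.seed(0)

def ucb1(means, horizon):
    """UCB1 on Bernoulli arms; returns how often each arm was pulled."""
    k = len(means)
    counts, sums = [0] * k, [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:                       # pull each arm once first
            arm = t - 1
        else:                            # optimism: mean + exploration bonus
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if random.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

counts = ucb1([0.2, 0.5, 0.8], horizon=2000)   # best arm is index 2
```

The exploration bonus shrinks as an arm accumulates pulls, so play concentrates on the best arm and regret grows only logarithmically.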
no code implementations • 22 Jan 2023 • Lijie Hu, Ivan Habernal, Lei Shen, Di Wang
In this paper, we provide the first systematic review of recent advances in DP deep learning models in NLP.
1 code implementation • 17 Jan 2023 • Yan Zhang, Zhong Ji, Di Wang, Yanwei Pang, Xuelong Li
(2) It limits the scale of negative sample pairs by employing the mini-batch based end-to-end training mechanism.
no code implementations • 11 Jan 2023 • Di Wang, Junzhi Shi, PingPing Wang, Shuo Zhuang, Hongyue Li
By comparison, the predictors built by the proposed loss-controlling approach are not limited to set predictors, and the loss function can be any measurable function without the monotone assumption.
no code implementations • 6 Jan 2023 • Di Wang, Ping Wang, Zhong Ji, Xiaojun Yang, Hongyue Li
Conformal prediction is a learning framework controlling prediction coverage of prediction sets, which can be built on any learning algorithm for point prediction.
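The framework's key mechanic can be sketched with split conformal prediction for regression (the point predictor and toy data are arbitrary): calibrate a residual quantile on held-out data, then form intervals with guaranteed marginal coverage.

```python
import math
import random

random.seed(0)

def draw(n):                      # toy data: y = 2x + Gaussian noise
    return [(x, 2 * x + random.gauss(0, 0.1))
            for x in (random.uniform(0, 1) for _ in range(n))]

train, calib, test = draw(200), draw(200), draw(200)

# any point predictor works; here, least squares through the origin
a = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

# conformity scores (absolute residuals) on the calibration split
scores = sorted(abs(y - a * x) for x, y in calib)
alpha = 0.1
idx = math.ceil((len(scores) + 1) * (1 - alpha)) - 1
q = scores[min(idx, len(scores) - 1)]

# prediction set for each test point is [a*x - q, a*x + q]
coverage = sum(a * x - q <= y <= a * x + q for x, y in test) / len(test)
```

Regardless of the predictor's quality, the intervals cover the true response with probability at least $1-\alpha$ marginally.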
1 code implementation • 4 Jan 2023 • Zhixiang Su, Di Wang, Chunyan Miao, Lizhen Cui
Recent studies on knowledge graphs (KGs) show that path-based methods empowered by pre-trained language models perform well in the provision of inductive and explainable relation predictions.
no code implementations • 31 Dec 2022 • Di Wang, Simon X. Yang
In this paper, a novel broad learning system with Takagi-Sugeno (TS) fuzzy subsystem is proposed for rapid identification of tobacco origin.
no code implementations • 30 Dec 2022 • Junren Chen, Michael K. Ng, Di Wang
Our central claim is that (near) minimax rates of estimation error are achievable merely from the quantized data produced by the proposed scheme.
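The flavor of such schemes can be seen in a generic dithered one-bit quantizer (a standard construction, not necessarily the paper's scheme): adding uniform dither before taking signs makes the transmitted bit an unbiased, rescaled proxy for the sample.

```python
import random

random.seed(0)

DELTA = 3.0   # dither range; must cover (almost all of) the data range

def quantize(x):
    """Transmit a single bit: the sign of the dithered sample."""
    return 1.0 if x + random.uniform(-DELTA, DELTA) >= 0 else -1.0

def estimate_mean(bits):
    # for |x| <= DELTA, E[sign(x + u)] = x / DELTA, so rescaling is unbiased
    return DELTA * sum(bits) / len(bits)

samples = [random.gauss(0.3, 0.5) for _ in range(200000)]
est = estimate_mean([quantize(x) for x in samples])   # close to the true mean 0.3
```

One bit per sample thus suffices for consistent mean estimation, at the price of a variance inflated by the dither range.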
no code implementations • 24 Dec 2022 • Di Wang, Simon X. Yang
As a common appearance defect of concrete bridges, cracks are important indices for bridge structure health assessment.
no code implementations • 23 Nov 2022 • Lijie Hu, Yixin Liu, Ninghao Liu, Mengdi Huai, Lichao Sun, Di Wang
Results show that SEAT is more stable against different perturbations and randomness while also keeping the explainability of attention, indicating that it is a more faithful explanation.
1 code implementation • 19 Nov 2022 • Di Wang, Long Ma, Risheng Liu, Xin Fan
To address the above limitations, we develop an efficient and compact enhancement network in collaboration with a high-level semantic-aware pretrained model, aiming to exploit its hierarchical feature representation as an auxiliary for the low-level underwater image enhancement.
no code implementations • 7 Oct 2022 • Hao Wang, WanYu Lin, Hao He, Di Wang, Chengzhi Mao, Muhan Zhang
Recent years have seen advances on principles and guidance relating to accountable and ethical use of artificial intelligence (AI) spring up around the globe.
no code implementations • 3 Oct 2022 • Meng Ding, Mingxi Lei, Yunwen Lei, Di Wang, Jinhui Xu
In this paper, we conduct a thorough analysis on the generalization of first-order (gradient-based) methods for the bilevel optimization problem.
no code implementations • 17 Sep 2022 • Jinyan Su, Jinhui Xu, Di Wang
In this paper, we study the problem of PAC learning halfspaces in the non-interactive local differential privacy model (NLDP).
no code implementations • 16 Sep 2022 • Yuan Qiu, Jinyan Liu, Di Wang
In the first part of the paper, we consider the case where the covariates are sub-Gaussian and the responses are heavy-tailed, having only finite fourth moments.
no code implementations • 29 Aug 2022 • Zhe Feng, Swati Padmanabhan, Di Wang
We contribute a simple online algorithm that achieves near-optimal regret in expectation while always respecting the specified RoS constraint when the input sequence of queries is i.i.d.
no code implementations • 10 Aug 2022 • Li Liu, Xiangeng Fang, Di Wang, Weijing Tang, Kevin He
Neural networks (deep learning) are modern models in artificial intelligence that have been exploited in survival analysis.
2 code implementations • 8 Aug 2022 • Di Wang, Qiming Zhang, Yufei Xu, Jing Zhang, Bo Du, DaCheng Tao, Liangpei Zhang
Large-scale vision foundation models have made significant progress in visual tasks on natural images, with vision transformers being the primary choice due to their good scalability and representation ability.
Ranked #1 on Aerial Scene Classification on AID (50% as trainset)
no code implementations • 22 Jul 2022 • Di Wang, Nicolas Honnorat, Peter T. Fox, Kerstin Ritter, Simon B. Eickhoff, Sudha Seshadri, Mohamad Habes
Deep neural networks currently provide the most advanced and accurate machine learning models to distinguish between structural MRI scans of subjects with Alzheimer's disease and healthy controls.
1 code implementation • 24 May 2022 • Di Wang, JinYuan Liu, Xin Fan, Risheng Liu
Moreover, to better fuse the registered infrared images and visible images, we present a feature Interaction Fusion Module (IFM) to adaptively select more meaningful features for fusion in the Dual-path Interaction Fusion Network (DIFN).
2 code implementations • 6 Apr 2022 • Di Wang, Jing Zhang, Bo Du, Gui-Song Xia, DaCheng Tao
To this end, we train different networks from scratch with the help of the largest RS scene recognition dataset up to now -- MillionAID, to obtain a series of RS pretrained backbones, including both convolutional neural networks (CNN) and vision transformers such as Swin and ViTAE, which have shown promising performance on computer vision tasks.
Ranked #1 on Aerial Scene Classification on UCM (80% as trainset)
no code implementations • 26 Feb 2022 • Junren Chen, Cheng-Long Wang, Michael K. Ng, Di Wang
In the heavy-tailed regime, while the rates of our estimators become essentially slower, these results are either the first in a 1-bit quantized and heavy-tailed setting or already improve on existing comparable results in some respects.
no code implementations • 10 Jan 2022 • Di Wang, Jinhui Xu
First, we study the case where the $\ell_2$ norm of the data has a bounded second-order moment.
no code implementations • 29 Dec 2021 • Yizhang Wang, Di Wang, You Zhou, Xiaofeng Zhang, Chai Quek
Furthermore, we divide all data points into different levels according to their local density and propose a unified clustering framework by combining the advantages of both DPC and DBSCAN.
no code implementations • 28 Nov 2021 • Fuxun Yu, Weishan Zhang, Zhuwei Qin, Zirui Xu, Di Wang, ChenChen Liu, Zhi Tian, Xiang Chen
Federated learning learns from scattered data by fusing collaborative models from local nodes.
no code implementations • 28 Nov 2021 • Fuxun Yu, Di Wang, Longfei Shangguan, Minjia Zhang, Xulong Tang, ChenChen Liu, Xiang Chen
With both scaling trends, new problems and challenges emerge in DL inference serving systems, which gradually trend towards Large-scale Deep learning Serving systems (LDS).
no code implementations • 15 Oct 2021 • Muhammad F. A. Chaudhary, Sarah E. Gerard, Di Wang, Gary E. Christensen, Christopher B. Cooper, Joyce D. Schroeder, Eric A. Hoffman, Joseph M. Reinhardt
Once trained, the framework can be used as a registration-free method for predicting local tissue expansion.
1 code implementation • 14 Oct 2021 • Soobee Lee, Minindu Weerakoon, Jonghyun Choi, Minjia Zhang, Di Wang, Myeongjae Jeon
In particular, in mobile and IoT devices, real-time data can be stored not just in high-speed RAMs but in internal storage devices as well, which offer significantly larger capacity than the RAMs.
1 code implementation • 25 Aug 2021 • Xuan Wu, Jizong Han, Di Wang, Pengyue Gao, Quanlong Cui, Liang Chen, Yanchun Liang, Han Huang, Heow Pueh Lee, Chunyan Miao, You Zhou, Chunguo Wu
While many Particle Swarm Optimization (PSO) algorithms only use fitness to assess the performance of particles, in this work, we adopt Surprisingly Popular Algorithm (SPA) as a complementary metric in addition to fitness.
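For reference, the fitness-only PSO baseline being extended looks like the following minimal sketch (SPA-based voting is not included; the inertia and acceleration coefficients are standard textbook values):

```python
import random

random.seed(0)

def pso(dim=2, swarm=20, iters=100):
    """Minimal PSO minimizing the sphere function sum(x_i^2)."""
    f = lambda x: sum(v * v for v in x)
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]               # per-particle best positions
    gbest = min(pos, key=f)[:]                # swarm-wide best position
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
            if f(pos[i]) < f(gbest):
                gbest = pos[i][:]
    return f(gbest)

best = pso()   # near zero on this easy unimodal objective
```

Because fitness is the only signal guiding `pbest`/`gbest`, any complementary metric (such as SPA votes) has to be injected into exactly these two update rules.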
1 code implementation • ACL 2021 • Xuepeng Wang, Li Zhao, Bing Liu, Tao Chen, Feng Zhang, Di Wang
In this paper, we propose a novel concept-based label embedding method that can explicitly represent the concept and model the sharing mechanism among classes for the hierarchical text classification.
1 code implementation • ACL 2021 • Shulin Liu, Tao Yang, Tianchi Yue, Feng Zhang, Di Wang
In this paper, we propose a Pre-trained masked Language model with Misspelled knowledgE (PLOME) for CSC, which jointly learns how to understand language and correct spelling errors.
no code implementations • 31 Jul 2021 • Jinyan Su, Lijie Hu, Di Wang
Specifically, we first show that under some mild assumptions on the loss functions, there is an algorithm whose output could achieve an upper bound of $\tilde{O}((\frac{1}{\sqrt{n}}+\frac{\sqrt{d\log \frac{1}{\delta}}}{n\epsilon})^\frac{\theta}{\theta-1})$ for $(\epsilon, \delta)$-DP when $\theta\geq 2$, here $n$ is the sample size and $d$ is the dimension of the space.
no code implementations • 23 Jul 2021 • Lijie Hu, Shuo Ni, Hanshen Xiao, Di Wang
To better understand the challenges arising from irregular data distribution, in this paper we provide the first study on the problem of DP-SCO with heavy-tailed data in the high dimensional space.
2 code implementations • 26 Jun 2021 • Di Wang, Bo Du, Liangpei Zhang
To tackle these problems, in this paper, different from previous approaches, we perform superpixel generation on intermediate features during network training to adaptively produce homogeneous regions, obtain graph structures, and further generate spatial descriptors, which serve as graph nodes.
1 code implementation • Findings (ACL) 2021 • Huanqin Wu, Wei Liu, Lei LI, Dan Nie, Tao Chen, Feng Zhang, Di Wang
Keyphrase Prediction (KP) task aims at predicting several keyphrases that can summarize the main idea of the given document.
no code implementations • 4 Jun 2021 • Youming Tao, Yulian Wu, Peng Zhao, Di Wang
Finally, we establish the lower bound to show that the instance-dependent regret of our improved algorithm is optimal.
no code implementations • 30 May 2021 • Li Chen, Richard Peng, Di Wang
Diffusion is a fundamental graph procedure and has been a basic building block in a wide range of theoretical and empirical applications such as graph partitioning and semi-supervised learning on graphs.
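The basic primitive referred to here, a lazy-random-walk diffusion, can be sketched in a few lines (the graph and step count are illustrative):

```python
def diffuse(adj, source, steps=10):
    """Lazy random walk: stay with prob 1/2, else move to a uniform neighbor."""
    p = {source: 1.0}
    for _ in range(steps):
        nxt = {}
        for u, mass in p.items():
            nxt[u] = nxt.get(u, 0.0) + mass / 2
            for v in adj[u]:
                nxt[v] = nxt.get(v, 0.0) + mass / (2 * len(adj[u]))
        p = nxt
    return p

# path graph 0-1-2-3, mass injected at one end
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
p = diffuse(adj, source=0)
```

Probability mass is conserved and spreads out from the source; graph partitioning methods read cluster structure off where the mass lingers.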
no code implementations • 13 Apr 2021 • Yang Li, Di Wang, José M. F. Moura
This task is challenging as models need not only to capture spatial dependency and temporal dependency within the data, but also to leverage useful auxiliary information for accurate predictions.
no code implementations • 20 Mar 2021 • Jihua Zhu, Di Wang, Jiaxi Mu, Huimin Lu, Zhiqiang Tian, Zhongyu Li
Under the NDT framework, this paper proposes a novel multi-view registration method, named 3D multi-view registration based on the normal distributions transform (3DMNDT), which integrates the K-means clustering and Lie algebra solver to achieve multi-view registration.
no code implementations • 14 Jan 2021 • Jan van den Brand, Yin Tat Lee, Yang P. Liu, Thatchaphol Saranurak, Aaron Sidford, Zhao Song, Di Wang
In the special case of the minimum cost flow problem on $n$-vertex $m$-edge graphs with integer polynomially-bounded costs and capacities we obtain a randomized method which solves the problem in $\tilde{O}(m+n^{1.5})$ time.
no code implementations • 15 Dec 2020 • Peixiang Zhong, Di Wang, Pengfei Li, Chen Zhang, Hao Wang, Chunyan Miao
Experimental results on two large-scale datasets support our hypothesis and show that our model can produce more accurate and commonsense-aware emotional responses and achieve better human ratings than state-of-the-art models that only specialize in one aspect.
no code implementations • 22 Nov 2020 • Fuxun Yu, Dimitrios Stamoulis, Di Wang, Dimitrios Lymberopoulos, Xiang Chen
This paper gives an overview of our ongoing work on the design space exploration of efficient deep neural networks (DNNs).
no code implementations • 11 Nov 2020 • Di Wang, Marco Gaboardi, Adam Smith, Jinhui Xu
In our second attempt, we show that for any $1$-Lipschitz generalized linear convex loss function, there is an $(\epsilon, \delta)$-LDP algorithm whose sample complexity for achieving error $\alpha$ is only linear in the dimensionality $p$.
1 code implementation • 7 Nov 2020 • Muhammad Hassan, Yan Wang, Di Wang, Daixi Li, Yanchun Liang, You Zhou, Dong Xu
We collected 100,000 shoeprints of subjects ranging from 7 to 80 years old and used the data to develop a deep learning end-to-end model ShoeNet to analyze age-related patterns and predict age.
no code implementations • 22 Oct 2020 • Di Wang, Jiahao Ding, Lijie Hu, Zejun Xie, Miao Pan, Jinhui Xu
To address this issue, we propose in this paper the first DP version of (Gradient) EM algorithm with statistical guarantees.
no code implementations • ICML 2020 • Di Wang, Hanshen Xiao, Srini Devadas, Jinhui Xu
For this case, we propose a method based on the sample-and-aggregate framework, which has an excess population risk of $\tilde{O}(\frac{d^3}{n\epsilon^4})$ (after omitting other factors), where $n$ is the sample size and $d$ is the dimensionality of the data.
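The sample-and-aggregate template itself (independent of this paper's estimator) is straightforward to sketch: split the data into disjoint blocks, solve each block separately, and privatize only the low-sensitivity aggregate. A mean-estimation toy where the clipping bound, block count, and Laplace mechanism are illustrative choices:

```python
import random

random.seed(0)

def laplace(scale):
    """One sample from the Laplace distribution with the given scale."""
    return random.expovariate(1 / scale) * random.choice([-1.0, 1.0])

def sample_and_aggregate(data, blocks, epsilon, clip=1.0):
    k = len(data) // blocks
    # solve the subproblem (here: a mean) independently on each block
    ests = [sum(data[j * k:(j + 1) * k]) / k for j in range(blocks)]
    agg = sum(max(-clip, min(clip, e)) for e in ests) / blocks
    # one individual affects one block, so sensitivity is 2 * clip / blocks
    return agg + laplace(2 * clip / (blocks * epsilon))

data = [random.gauss(0.5, 0.1) for _ in range(1000)]
est = sample_and_aggregate(data, blocks=50, epsilon=1.0)
```

Because each person's data lands in exactly one block, the aggregate's sensitivity shrinks with the number of blocks, so little noise is needed for $\epsilon$-DP.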
no code implementations • 19 Oct 2020 • Di Wang, Xiangyu Guo, Chaowen Guan, Shi Li, Jinhui Xu
To the best of our knowledge, this is the first work that studies and provides theoretical guarantees for the stochastic linear combination of non-linear regressions model.
no code implementations • 19 Oct 2020 • Di Wang, Xiangyu Guo, Shi Li, Jinhui Xu
In this paper, we study the problem of estimating latent variable models with arbitrarily corrupted samples in high dimensional space (i.e., $d \gg n$) where the underlying parameter is assumed to be sparse.
no code implementations • 16 Oct 2020 • Goran Zuzic, Di Wang, Aranyak Mehta, D. Sivakumar
In this paper, we focus on the AdWords problem, which is a classical online budgeted matching problem of both theoretical and practical significance.
1 code implementation • 8 Sep 2020 • Nanyu Li, Yujuan Si, Di Wang, Tong Liu, Jinrun Yu
In the VQ method, a set of dictionaries corresponding to segments of ECG beats is trained, and VQ codes are used to represent each heartbeat.
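The vector-quantization step described above can be sketched as follows. This is a generic illustration, not the paper's implementation: the `train_codebook` and `vq_encode` helpers are hypothetical names, and fixed-length beat segments stored as rows of a NumPy array are an assumption.

```python
import numpy as np

def vq_encode(segments, codebook):
    """Represent each fixed-length segment by the index of its nearest codeword."""
    d = np.linalg.norm(segments[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)

def train_codebook(segments, k, iters=20):
    """Learn k codewords: farthest-point initialization, then Lloyd refinement."""
    codebook = [segments[0]]
    for _ in range(1, k):
        # pick the segment farthest from all codewords chosen so far
        d = np.min(np.linalg.norm(
            segments[:, None, :] - np.array(codebook)[None, :, :], axis=2), axis=1)
        codebook.append(segments[int(d.argmax())])
    codebook = np.array(codebook, dtype=float)
    for _ in range(iters):
        codes = vq_encode(segments, codebook)
        for j in range(k):
            members = segments[codes == j]
            if len(members):
                codebook[j] = members.mean(axis=0)  # move codeword to cluster mean
    return codebook
```

Each heartbeat is then stored as a short sequence of integer codes rather than raw samples, which is what makes the representation compact.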
no code implementations • 15 Aug 2020 • Fuxun Yu, Weishan Zhang, Zhuwei Qin, Zirui Xu, Di Wang, ChenChen Liu, Zhi Tian, Xiang Chen
Specifically, we design a feature-oriented regulation method ({$\Psi$-Net}) to ensure explicit feature information allocation in different neural network structures.
no code implementations • 14 Aug 2020 • Fuxun Yu, ChenChen Liu, Di Wang, Yanzhi Wang, Xiang Chen
Based on the neural network attention mechanism, we propose a comprehensive dynamic optimization framework including (1) testing-phase channel and column feature map pruning, as well as (2) training-phase optimization by targeted dropout.
no code implementations • 24 Jun 2020 • Di Wang, David M Kahn, Jan Hoffmann
The effectiveness of the technique is evaluated by analyzing the sample complexity of discrete distributions and with a novel average-case estimation for deterministic programs that combines expected cost analysis with statistical methods.
Programming Languages
2 code implementations • 20 May 2020 • Kimon Fountoulakis, Di Wang, Shenghao Yang
Local graph clustering and the closely related seed set expansion problem are primitives on graphs that are central to a wide range of analytic and learning tasks such as local clustering, community detection, node ranking and feature inference.
no code implementations • 15 May 2020 • Tianhang Zheng, Di Wang, Baochun Li, Jinhui Xu
Based on our framework, we assess the Gaussian and Exponential mechanisms by comparing the magnitude of additive noise required by these mechanisms and the lower bounds (criteria).
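For orientation, the amount of additive noise a mechanism needs can be made concrete with the classical calibration formulas; these are the standard textbook bounds, not necessarily the criteria derived in the paper.

```python
import math

def gaussian_sigma(l2_sensitivity, eps, delta):
    """Noise std for the classical (eps, delta)-DP Gaussian mechanism
    (valid for eps <= 1): sigma = sqrt(2 ln(1.25/delta)) * Delta_2 / eps."""
    return math.sqrt(2 * math.log(1.25 / delta)) * l2_sensitivity / eps

def laplace_scale(l1_sensitivity, eps):
    """Noise scale for the pure eps-DP Laplace mechanism: b = Delta_1 / eps."""
    return l1_sensitivity / eps
```

Halving the privacy budget $\epsilon$ doubles the required noise magnitude in both cases, which is the kind of trade-off such lower-bound comparisons quantify.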
no code implementations • 27 Mar 2020 • Shao-Bo Lin, Di Wang, Ding-Xuan Zhou
This paper focuses on generalization performance analysis for distributed algorithms in the framework of learning theory.
no code implementations • 25 Nov 2019 • Saman Fahandezh-Saadi, Di Wang, Masayoshi Tomizuka
This paper presents a robust probabilistic point registration method for estimating the rigid transformation (i.e., the rotation matrix and translation vector) between two point cloud datasets.
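The rigid-transformation estimate at the core of such registration methods has a classical closed form, the Kabsch/Umeyama least-squares step; the probabilistic method in the paper is more robust than this, but the sketch below shows the baseline computation it builds on.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) aligning point set P onto Q:
    minimizes sum_i ||R @ P[i] + t - Q[i]||^2 (classical Kabsch step)."""
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p0).T @ (Q - q0)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q0 - R @ p0
    return R, t
```

Given exact correspondences this recovers the transformation in one SVD; robust probabilistic methods replace the hard correspondences with soft weights and iterate.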
1 code implementation • 17 Nov 2019 • Fuxun Yu, Di Wang, Yinpeng Chen, Nikolaos Karianakis, Tong Shen, Pei Yu, Dimitrios Lymberopoulos, Sidi Lu, Weisong Shi, Xiang Chen
In this work, we show that such adversarial-based methods can only reduce the domain style gap, but cannot address the domain content distribution gap that is shown to be important for object detectors.
no code implementations • NeurIPS 2019 • Yunus Esencayi, Marco Gaboardi, Shi Li, Di Wang
On the negative side, we show that the approximation ratio of any $\epsilon$-DP algorithm is lower bounded by $\Omega(\frac{1}{\sqrt{\epsilon}})$, even for instances on HST metrics with uniform facility cost, under the super-set output setting.
no code implementations • 1 Oct 2019 • Di Wang, Lijie Hu, Huanyu Zhang, Marco Gaboardi, Jinhui Xu
In the second part of the paper, we extend our idea to the problem of estimating non-linear regressions and show similar results as in GLMs for both multivariate Gaussian and sub-Gaussian cases.
no code implementations • 30 Sep 2019 • Wei Zhan, Liting Sun, Di Wang, Haojie Shi, Aubrey Clausse, Maximilian Naumann, Julius Kummerle, Hendrik Konigshof, Christoph Stiller, Arnaud de La Fortelle, Masayoshi Tomizuka
3) The driving behavior is highly interactive and complex with adversarial and cooperative motions of various traffic participants.
no code implementations • NeurIPS 2019 • Digvijay Boob, Saurabh Sawlani, Di Wang
As a special case of our result, we report a $1+\epsilon$ approximation algorithm for the densest subgraph problem which runs in time $O(md/\epsilon)$, where $m$ is the number of edges in the graph and $d$ is the maximum graph degree.
no code implementations • 25 Sep 2019 • Tianhang Zheng, Di Wang, Baochun Li, Jinhui Xu
We answer the above two questions by first demonstrating that Gaussian mechanism and Exponential mechanism are the (near) optimal options to certify the $\ell_2$ and $\ell_\infty$-normed robustness.
no code implementations • 25 Sep 2019 • Goran Zuzic, Di Wang, Aranyak Mehta, D. Sivakumar
To answer this question, we draw insights from classic results in game theory, analysis of algorithms, and online learning to introduce a novel framework.
1 code implementation • IJCNLP 2019 • Peixiang Zhong, Di Wang, Chunyan Miao
Messages in human conversations inherently convey emotions.
Ranked #8 on Emotion Recognition in Conversation on EC
no code implementations • 23 Sep 2019 • Yaping Zheng, Shiyi Chen, Xinni Zhang, Xiaofeng Zhang, Xiaofei Yang, Di Wang
Community detection has long been an important yet challenging task to analyze complex networks with a focus on detecting topological structures of graph data.
no code implementations • 10 Sep 2019 • Haidong Rong, Yangzihao Wang, Feihu Zhou, Junjie Zhai, Haiyang Wu, Rui Lan, Fan Li, Han Zhang, Yuekui Yang, Zhenyu Guo, Di Wang
We present Distributed Equivalent Substitution (DES) training, a novel distributed training framework for large-scale recommender systems with dynamic sparse features.
no code implementations • 6 Sep 2019 • Di Wang, Feiqing Huang, Jingyu Zhao, Guodong Li, Guangjian Tian
Autoregressive networks can achieve promising performance in many sequence modeling tasks with short-range dependence.
2 code implementations • 19 Jul 2019 • Shusen Liu, Di Wang, Dan Maljovec, Rushil Anirudh, Jayaraman J. Thiagarajan, Sam Ade Jacobs, Brian C. Van Essen, David Hysom, Jae-Seung Yeom, Jim Gaffney, Luc Peterson, Peter B. Robinson, Harsh Bhatia, Valerio Pascucci, Brian K. Spears, Peer-Timo Bremer
With the rapid adoption of machine learning techniques for large-scale applications in science and engineering comes the convergence of two grand challenges in visualization.
2 code implementations • 18 Jul 2019 • Peixiang Zhong, Di Wang, Chunyan Miao
Finally, investigations on the neuronal activities reveal important brain regions and inter-channel relations for EEG-based emotion recognition.
Ranked #1 on EEG Emotion Recognition on SEED-IV
1 code implementation • 1 Jul 2019 • Dimitrios Stamoulis, Ruizhou Ding, Di Wang, Dimitrios Lymberopoulos, Bodhi Priyantha, Jie Liu, Diana Marculescu
In this work, we reduce the NAS search cost to less than 3 hours, while achieving state-of-the-art image classification results under mobile latency constraints.
no code implementations • 5 Jun 2019 • Di Wang, Qi Wu, Wen Zhang
This paper takes a deep learning approach to understanding consumer credit risk when e-commerce platforms issue unsecured credit to finance customers' purchases.
no code implementations • 10 May 2019 • Dimitrios Stamoulis, Ruizhou Ding, Di Wang, Dimitrios Lymberopoulos, Bodhi Priyantha, Jie Liu, Diana Marculescu
Can we automatically design a Convolutional Network (ConvNet) with the highest image classification accuracy under the latency constraint of a mobile device?
9 code implementations • 5 Apr 2019 • Dimitrios Stamoulis, Ruizhou Ding, Di Wang, Dimitrios Lymberopoulos, Bodhi Priyantha, Jie Liu, Diana Marculescu
Can we automatically design a Convolutional Network (ConvNet) with the highest image classification accuracy under the runtime constraint of a mobile device?
Ranked #892 on Image Classification on oi
1 code implementation • NAACL 2019 • Chunting Zhou, Xuezhe Ma, Di Wang, Graham Neubig
Recent approaches to cross-lingual word embedding have generally been based on linear transformations between the sets of embedding vectors in the two languages.
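The linear-transformation baseline these approaches build on has a closed-form solution when the map is constrained to be orthogonal (the orthogonal Procrustes problem). The sketch below shows that standard solution; it is an illustration of the common supervised-mapping setup, not necessarily the method proposed in the paper.

```python
import numpy as np

def procrustes_map(X, Y):
    """Orthogonal matrix W minimizing ||X @ W.T - Y||_F, where rows of X and Y
    are embeddings of translation pairs in the source and target languages."""
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return U @ Vt
```

With a seed dictionary of word pairs, source embeddings are mapped into the target space as `X @ W.T`, and the orthogonality constraint preserves distances and angles within the source space.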
no code implementations • 18 Jan 2019 • Di Wang, Jinhui Xu
In this paper, we study the problem of estimating the covariance matrix under differential privacy, where the underlying covariance matrix is assumed to be sparse and of high dimensions.
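A simple way to see the two ingredients involved — privacy noise and sparsity — is the toy baseline below: perturb the empirical covariance with Gaussian noise, then hard-threshold small entries. This is explicitly not the paper's algorithm, and the row-norm bound $\|x_i\|_2 \le 1$ used to bound sensitivity is an assumption of the sketch.

```python
import numpy as np

def noisy_thresholded_covariance(X, eps, delta, threshold, seed=0):
    """Toy baseline: Gaussian-mechanism perturbation of the empirical covariance
    (L2 sensitivity <= 2/n under ||x_i||_2 <= 1), followed by hard thresholding
    to exploit the assumed sparsity of the true covariance."""
    n, d = X.shape
    cov = X.T @ X / n
    sigma = (2.0 / n) * np.sqrt(2 * np.log(1.25 / delta)) / eps
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(d, d))
    noisy = cov + (noise + noise.T) / 2          # symmetrize the perturbation
    noisy[np.abs(noisy) < threshold] = 0.0       # keep only large entries
    return noisy
```

The thresholding step is what lets the estimator benefit from sparsity: noise on the many truly-zero entries is discarded rather than accumulated.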
1 code implementation • 21 Dec 2018 • Thatchaphol Saranurak, Di Wang
Our result achieves both a nearly linear running time and a strong expander guarantee for clusters.
Data Structures and Algorithms
no code implementations • 17 Dec 2018 • Di Wang, Adam Smith, Jinhui Xu
For the case of \emph{generalized linear losses} (such as hinge and logistic losses), we give an LDP algorithm whose sample complexity is only linear in the dimensionality $p$ and quasipolynomial in other terms (the privacy parameters $\epsilon$ and $\delta$, and the desired excess risk $\alpha$).
no code implementations • NeurIPS 2018 • Di Wang, Marco Gaboardi, Jinhui Xu
In this paper, we revisit the Empirical Risk Minimization problem in the non-interactive local model of differential privacy.
1 code implementation • 17 Nov 2018 • Peixiang Zhong, Di Wang, Chunyan Miao
Affect conveys important implicit information in human communication.
4 code implementations • ACL 2019 • Zhiting Hu, Haoran Shi, Bowen Tan, Wentao Wang, Zichao Yang, Tiancheng Zhao, Junxian He, Lianhui Qin, Di Wang, Xuezhe Ma, Zhengzhong Liu, Xiaodan Liang, Wangrong Zhu, Devendra Singh Sachan, Eric P. Xing
The versatile toolkit also fosters technique sharing across different text generation tasks.
no code implementations • WS 2018 • Yuanhang Ren, Ye Du, Di Wang
Given a paragraph of an article and a corresponding query, instead of directly feeding the whole paragraph to the single BiDAF system, a sentence that most likely contains the answer to the query is first selected, which is done via a deep neural network based on TreeLSTM (Tai et al., 2015).
no code implementations • WS 2018 • Zhiting Hu, Zichao Yang, Tiancheng Zhao, Haoran Shi, Junxian He, Di Wang, Xuezhe Ma, Zhengzhong Liu, Xiaodan Liang, Lianhui Qin, Devendra Singh Chaplot, Bowen Tan, Xingjiang Yu, Eric Xing
The features make Texar particularly suitable for technique sharing and generalization across different text generation applications.
no code implementations • 29 Jun 2018 • Fandong Meng, Zhaopeng Tu, Yong Cheng, Haiyang Wu, Junjie Zhai, Yuekui Yang, Di Wang
Although attention-based Neural Machine Translation (NMT) has achieved remarkable progress in recent years, it still suffers from issues of repeating and dropping translations.
no code implementations • NeurIPS 2017 • Di Wang, Minwei Ye, Jinhui Xu
In this paper we study the differentially private Empirical Risk Minimization (ERM) problem in different settings.
no code implementations • NeurIPS 2018 • Di Wang, Marco Gaboardi, Jinhui Xu
In the case of constant or low dimensionality ($p \ll n$), we first show that if the ERM loss function is $(\infty, T)$-smooth, then we can avoid a dependence of the sample complexity, to achieve error $\alpha$, on the exponential of the dimensionality $p$ with base $1/\alpha$ (i.e., $\alpha^{-p}$), which answers a question in [smith 2017 interaction].
1 code implementation • 9 Feb 2018 • Di Wang, Jinhui Xu
In this paper, we revisit the large-scale constrained linear regression problem and propose faster methods based on some recent developments in sketching and optimization.
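The sketch-and-solve idea at the heart of such methods compresses a tall problem before solving it. The example below is a generic unconstrained illustration with a dense Gaussian sketch (the paper studies the constrained problem and faster sketches); the function name and sketch size are this sketch's assumptions.

```python
import numpy as np

def sketched_lstsq(A, b, sketch_rows, seed=0):
    """Sketch-and-solve: compress (A, b) with a Gaussian sketch S, then solve
    the much smaller problem min_x ||S A x - S b||_2 in place of the original."""
    rng = np.random.default_rng(seed)
    S = rng.normal(size=(sketch_rows, A.shape[0])) / np.sqrt(sketch_rows)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x
```

When `sketch_rows` is a modest multiple of the number of columns, the sketched solution approximates the full least-squares solution while the solve touches only the small compressed system.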
1 code implementation • EMNLP 2017 • Di Wang, Nebojsa Jojic, Chris Brockett, Eric Nyberg
We propose simple and flexible training and decoding methods for influencing output style and topic in neural encoder-decoder based language generation.
no code implementations • ICML 2017 • Di Wang, Kimon Fountoulakis, Monika Henzinger, Michael W. Mahoney, Satish Rao
As an application, we use our CRD Process to develop an improved local algorithm for graph clustering.
no code implementations • 19 Jun 2017 • Di Wang, Kimon Fountoulakis, Monika Henzinger, Michael W. Mahoney, Satish Rao
Thus, our CRD Process is the first local graph clustering algorithm that is not subject to the well-known quadratic Cheeger barrier.
no code implementations • LREC 2014 • Nancy Ide, James Pustejovsky, Christopher Cieri, Eric Nyberg, Di Wang, Keith Suderman, Marc Verhagen, Jonathan Wright
The Language Application (LAPPS) Grid project is establishing a framework that enables language service discovery, composition, and reuse and promotes sustainability, manageability, usability, and interoperability of natural language processing (NLP) components.
no code implementations • NeurIPS 2013 • Xiaoqin Zhang, Di Wang, Zhengyuan Zhou, Yi Ma
In this context, the state-of-the-art algorithms "RASL" and "TILT" can be viewed as two special cases of our work, and yet each performs only part of the function of our method.