Dataset Card for SciTLDR
Dataset Summary
SciTLDR: Extreme Summarization of Scientific Documents
SciTLDR is a new multi-target dataset of 5.4K TLDRs over 3.2K papers. SciTLDR contains both author-written and expert-derived TLDRs, where the latter are collected using a novel annotation protocol that produces high-quality summaries while minimizing annotation burden.
Supported Tasks and Leaderboards
summarization
Languages
English
Dataset Structure
SciTLDR is split into a 60/20/20 train/dev/test split. In each file, each line is a JSON object, formatted as follows:
{
"source":[
"sent0",
"sent1",
"sent2",
...
],
"source_labels":[binary list in which 1 is the oracle sentence],
"rouge_scores":[precomputed rouge-1 scores],
"paper_id":"PAPER-ID",
"target":[
"author-tldr",
"pr-tldr0",
"pr-tldr1",
...
],
"title":"TITLE"
}
The keys rouge_scores and source_labels are not required for any code to run; the precomputed ROUGE scores are provided for future research.
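For illustration, a minimal Python sketch of parsing one such line and recovering the oracle sentence from source_labels (the field values here are placeholders, not real data):

```python
import json

# An abbreviated line in the format described above (placeholder values).
line = json.dumps({
    "source": ["sent0", "sent1", "sent2"],
    "source_labels": [0, 1, 0],
    "rouge_scores": [0.24, 0.38, 0.20],
    "paper_id": "PAPER-ID",
    "target": ["author-tldr", "pr-tldr0"],
    "title": "TITLE",
})

example = json.loads(line)

# source_labels marks the oracle sentence with a 1.
oracle = [s for s, lab in zip(example["source"], example["source_labels"]) if lab == 1]
print(oracle)  # ['sent1']
```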
Data Instances
{
  "source": [
    "Mixed precision training (MPT) is becoming a practical technique to improve the speed and energy efficiency of training deep neural networks by leveraging the fast hardware support for IEEE half-precision floating point that is available in existing GPUs.",
    "MPT is typically used in combination with a technique called loss scaling, that works by scaling up the loss value up before the start of backpropagation in order to minimize the impact of numerical underflow on training.",
    "Unfortunately, existing methods make this loss scale value a hyperparameter that needs to be tuned per-model, and a single scale cannot be adapted to different layers at different training stages.",
    "We introduce a loss scaling-based training method called adaptive loss scaling that makes MPT easier and more practical to use, by removing the need to tune a model-specific loss scale hyperparameter.",
    "We achieve this by introducing layer-wise loss scale values which are automatically computed during training to deal with underflow more effectively than existing methods.",
    "We present experimental results on a variety of networks and tasks that show our approach can shorten the time to convergence and improve accuracy, compared with using the existing state-of-the-art MPT and single-precision floating point."
  ],
  "source_labels": [0, 0, 0, 1, 0, 0],
  "rouge_scores": [0.2399999958000001, 0.26086956082230633, 0.19999999531250012, 0.38095237636054424, 0.2051282003944774, 0.2978723360796741],
  "paper_id": "rJlnfaNYvB",
  "target": [
    "We devise adaptive loss scaling to improve mixed precision training that surpass the state-of-the-art results.",
    "Proposal for an adaptive loss scaling method during backpropagation for mix precision training where scale rate is decided automatically to reduce the underflow.",
    "The authors propose a method to train models in FP16 precision that adopts a more elaborate way to minimize underflow in every layer simultaneously and automatically."
  ],
  "title": "Adaptive Loss Scaling for Mixed Precision Training"
}
Data Fields
source: The Abstract, Introduction and Conclusion (AIC) or full text of the paper, with one sentence per line.
source_labels: Binary 0 or 1; 1 denotes the oracle sentence.
rouge_scores: Precomputed ROUGE baseline scores for each sentence.
paper_id: arXiv paper ID.
target: Multiple target summaries for each paper, one summary per line.
title: Title of the paper.
Data Splits
| | train | valid | test |
|---|---|---|---|
| SciTLDR-A | 1992 | 618 | 619 |
| SciTLDR-AIC | 1992 | 618 | 619 |
| SciTLDR-FullText | 1992 | 618 | 619 |
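As a quick sanity check, the per-split counts above match (approximately) the 60/20/20 split described earlier; the numbers below are taken directly from the table:

```python
train, valid, test = 1992, 618, 619
total = train + valid + test
print(total)  # 3229 papers
print(round(train / total, 2), round(valid / total, 2), round(test / total, 2))
# → 0.62 0.19 0.19
```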
Dataset Creation
[More Information Needed]
Curation Rationale
[More Information Needed]
Source Data
Initial Data Collection and Normalization
[More Information Needed]
Who are the source language producers?
[More Information Needed]
Annotations
Annotation process
Given the title and first 128 words of a reviewer comment about a paper, re-write the summary (if it exists) into a single sentence or an incomplete phrase. Summaries must be no more than one sentence. Most summaries are between 15 and 25 words. The average rewritten summary is 20 words long.
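The length guideline can be sketched as a simple word-count check; the summary string below is one of the rewritten TLDRs from the example instance above, used purely for illustration:

```python
# A rewritten TLDR from the dataset, checked against the 15-25 word guideline.
summary = ("We devise adaptive loss scaling to improve mixed precision "
           "training that surpass the state-of-the-art results.")
n_words = len(summary.split())
print(n_words)  # 15, within the 15-25 word guideline
```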
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
[More Information Needed]
Considerations for Using the Data
Social Impact of Dataset
To encourage further research in the area of extreme summarization of scientific documents.
Discussion of Biases
[More Information Needed]
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
[More Information Needed]
Licensing Information
Apache License 2.0
Citation Information
@article{cachola2020tldr,
  title={{TLDR}: Extreme Summarization of Scientific Documents},
  author={Isabel Cachola and Kyle Lo and Arman Cohan and Daniel S. Weld},
  journal={arXiv:2004.15011},
  year={2020},
}
Contributions
Thanks to @Bharat123rox for adding this dataset.