| repo | github_id | github_node_id | number | html_url | api_url | title | body | state | state_reason | locked | comments_count | labels | assignees | created_at | updated_at | closed_at | milestone_title | snapshot_id | extracted_at | author_login | author_id | author_node_id | author_type | author_site_admin |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers | 377,736,844 | MDU6SXNzdWUzNzc3MzY4NDQ= | 6 | https://github.com/huggingface/transformers/issues/6 | https://api.github.com/repos/huggingface/transformers/issues/6 | Failure during pytest (and solution for python3) | ```<br>foo@bar:~/foo/bar/pytorch-pretrained-BERT$ pytest -sv ./tests/<br>===================================================================================================================== test session starts =================================================================================================================... | closed | completed | false | 1 | [] | [] | 2018-11-06T08:23:29Z | 2018-11-07T23:43:42Z | 2018-11-07T23:43:42Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | dandelin | 3,676,247 | MDQ6VXNlcjM2NzYyNDc= | User | false |
huggingface/transformers | 377,698,378 | MDU6SXNzdWUzNzc2OTgzNzg= | 5 | https://github.com/huggingface/transformers/issues/5 | https://api.github.com/repos/huggingface/transformers/issues/5 | MRPC hyperparameters question | When describing how you reproduced the MRPC results, you say:<br>"Our test ran on a few seeds with the original implementation hyper-parameters gave evaluation results between 82 and 87."<br>and you link to the SQuAD hyperparameters (https://github.com/google-research/bert#squad).<br>Is the link a mistake? Or did you use t... | closed | completed | false | 5 | [] | [] | 2018-11-06T05:30:36Z | 2018-11-08T02:04:37Z | 2018-11-07T23:42:51Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | ethanjperez | 6,402,205 | MDQ6VXNlcjY0MDIyMDU= | User | false |
huggingface/transformers | 378,935,595 | MDU6SXNzdWUzNzg5MzU1OTU= | 9 | https://github.com/huggingface/transformers/issues/9 | https://api.github.com/repos/huggingface/transformers/issues/9 | Crash at the end of training | Hi, I tried running the Squad model this morning (on a single GPU with gradient accumulation over 3 steps) but after 3 hours of training, my job failed with the following output:<br>I was running the code, unmodified, from commit 3bfbc21376af691b912f3b6256bbeaf8e0046ba8<br>Is this an issue you know about?<br>```<br>11/08/2... | closed | completed | false | 2 | [] | [] | 2018-11-08T22:01:57Z | 2018-11-09T08:17:26Z | 2018-11-09T08:17:26Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | bkgoksel | 6,436,274 | MDQ6VXNlcjY0MzYyNzQ= | User | false |
huggingface/transformers | 379,422,090 | MDU6SXNzdWUzNzk0MjIwOTA= | 12 | https://github.com/huggingface/transformers/issues/12 | https://api.github.com/repos/huggingface/transformers/issues/12 | py2 code | if I convert code to python2 version of code, it can't converage ; Would you present py2 code? | closed | completed | false | 1 | [] | [] | 2018-11-10T13:23:31Z | 2018-11-10T15:06:35Z | 2018-11-10T15:06:35Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | antxiaojun | 44,923,827 | MDQ6VXNlcjQ0OTIzODI3 | User | false |
huggingface/transformers | 379,440,759 | MDU6SXNzdWUzNzk0NDA3NTk= | 13 | https://github.com/huggingface/transformers/issues/13 | https://api.github.com/repos/huggingface/transformers/issues/13 | Bug in run_classifier.py | If I am running only evaluation and not training, there are errors as tr_loss and nb_tr_steps are undefined. | closed | completed | false | 0 | [] | [] | 2018-11-10T17:16:01Z | 2018-11-10T17:49:15Z | 2018-11-10T17:45:28Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | rawatprateek | 32,642,916 | MDQ6VXNlcjMyNjQyOTE2 | User | false |
huggingface/transformers | 377,592,631 | MDU6SXNzdWUzNzc1OTI2MzE= | 3 | https://github.com/huggingface/transformers/issues/3 | https://api.github.com/repos/huggingface/transformers/issues/3 | run_squad questions | Thanks a lot for the port! I have some minor questions, for the run_squad file, I see two options for accumulating gradients, accumulate_gradients and gradient_accumulation_steps but it seems to me that it can be combined into one. The other one is for the global_step variable, seems we are only counting but not using ... | closed | completed | false | 15 | [] | ["thomwolf", "VictorSanh"] | 2018-11-05T21:35:51Z | 2018-11-12T13:59:43Z | 2018-11-07T22:37:09Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | ZhaoyueCheng | 3,590,333 | MDQ6VXNlcjM1OTAzMzM= | User | false |
huggingface/transformers | 380,271,134 | MDU6SXNzdWUzODAyNzExMzQ= | 15 | https://github.com/huggingface/transformers/issues/15 | https://api.github.com/repos/huggingface/transformers/issues/15 | activation function in BERTIntermediate | BERTConfig is not used for `BERTIntermediate`'s activation function. `intermediate_act_fn` is always `gelu`. Is this normal?<br>https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/modeling.py#L240 | closed | completed | false | 4 | [] | [] | 2018-11-13T15:09:33Z | 2018-11-13T15:18:30Z | 2018-11-13T15:17:39Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | lukovnikov | 1,732,910 | MDQ6VXNlcjE3MzI5MTA= | User | false |
huggingface/transformers | 380,555,132 | MDU6SXNzdWUzODA1NTUxMzI= | 19 | https://github.com/huggingface/transformers/issues/19 | https://api.github.com/repos/huggingface/transformers/issues/19 | will you push the pytorch code for the pre-training process? | Can you push the pytorch code for the pre-training process, such as MLM task, please?<br>I really want to study, but I can't understand tensorflow, it's so complex.<br>thanks!!! | closed | completed | false | 1 | [] | [] | 2018-11-14T06:30:59Z | 2018-11-17T21:55:41Z | 2018-11-17T21:55:41Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | koukoulala | 30,341,159 | MDQ6VXNlcjMwMzQxMTU5 | User | false |
huggingface/transformers | 381,387,717 | MDU6SXNzdWUzODEzODc3MTc= | 24 | https://github.com/huggingface/transformers/issues/24 | https://api.github.com/repos/huggingface/transformers/issues/24 | [Feature request] Port SQuAD 2.0 support | Recently the Google team added support for Squad 2.0:<br>https://github.com/google-research/bert/commit/60454702590a6c69bd45c5d4258c7e17b8a3e1da<br>Would be great to also have it available in the Pytorch version. | closed | completed | false | 1 | [] | [] | 2018-11-15T23:47:04Z | 2018-11-17T21:57:08Z | 2018-11-17T21:57:07Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | elyase | 1,175,888 | MDQ6VXNlcjExNzU4ODg= | User | false |
huggingface/transformers | 381,490,584 | MDU6SXNzdWUzODE0OTA1ODQ= | 25 | https://github.com/huggingface/transformers/issues/25 | https://api.github.com/repos/huggingface/transformers/issues/25 | can you push the run-pretraining and create_pretraining_data codes? | just want to study codes, don't need to have same pre-train performance. | closed | completed | false | 1 | [] | [] | 2018-11-16T08:15:33Z | 2018-11-17T21:57:19Z | 2018-11-17T21:57:19Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | koukoulala | 30,341,159 | MDQ6VXNlcjMwMzQxMTU5 | User | false |
huggingface/transformers | 381,835,436 | MDU6SXNzdWUzODE4MzU0MzY= | 28 | https://github.com/huggingface/transformers/issues/28 | https://api.github.com/repos/huggingface/transformers/issues/28 | speed is very slow | convert samples to features, is very slow | closed | completed | false | 2 | [] | [] | 2018-11-17T06:51:54Z | 2018-11-17T22:02:38Z | 2018-11-17T22:02:38Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | susht3 | 12,723,964 | MDQ6VXNlcjEyNzIzOTY0 | User | false |
huggingface/transformers | 381,250,921 | MDU6SXNzdWUzODEyNTA5MjE= | 23 | https://github.com/huggingface/transformers/issues/23 | https://api.github.com/repos/huggingface/transformers/issues/23 | ValueError while using --optimize_on_cpu | > Traceback (most recent call last): \| 1/87970 [00:00<8:35:35, 2.84it/s]<br>File "./run_squad.py", line 990, in <module><br>main()<br>File "./run_squad.py", line 922, in main<br>is_nan = set_optimizer_params_grad(param_optimizer, model.named_parameters(), test_nan=True)<br>File "./run_squad.py", line 691, in set_optimizer_params... | closed | completed | false | 3 | [] | [] | 2018-11-15T16:53:12Z | 2018-11-18T10:17:01Z | 2018-11-17T21:56:46Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | rsanjaykamath | 18,527,321 | MDQ6VXNlcjE4NTI3MzIx | User | false |
huggingface/transformers | 381,998,040 | MDU6SXNzdWUzODE5OTgwNDA= | 35 | https://github.com/huggingface/transformers/issues/35 | https://api.github.com/repos/huggingface/transformers/issues/35 | issues with accents on convert_ids_to_tokens() | Hello, the BertTokenizer seems loose accents when convert_ids_to_tokens() is used :<br>Example:<br>- original sentence: "great breakfasts in a nice furnished cafè, slightly bohemian."<br>- corresponding list of token produced : ['great', 'breakfast', '##s', 'in', 'a', 'nice', 'fur', '##nis', '##hed', 'cafe', ',', 'slightly... | closed | completed | false | 2 | [] | [] | 2018-11-18T20:41:24Z | 2018-11-19T08:39:56Z | 2018-11-19T08:39:56Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | perezjln | 5,373,778 | MDQ6VXNlcjUzNzM3Nzg= | User | false |
huggingface/transformers | 381,965,833 | MDU6SXNzdWUzODE5NjU4MzM= | 34 | https://github.com/huggingface/transformers/issues/34 | https://api.github.com/repos/huggingface/transformers/issues/34 | Can not find vocabulary file for Chinese model | After I convert the TF model to pytorch model, I run a classification task on a new Chinese dataset, but get this:<br>CUDA_VISIBLE_DEVICES=3 python run_classifier.py --task_name weibo --do_eval --do_train --bert_model chinese_L-12_H-768_A-12 --max_seq_length 128 --train_batch_size 32 --learning_rate 2e-5 --num_... | closed | completed | false | 5 | [] | [] | 2018-11-18T14:33:58Z | 2018-11-19T11:13:14Z | 2018-11-19T03:17:31Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | zlinao | 33,000,929 | MDQ6VXNlcjMzMDAwOTI5 | User | false |
huggingface/transformers | 382,489,751 | MDU6SXNzdWUzODI0ODk3NTE= | 41 | https://github.com/huggingface/transformers/issues/41 | https://api.github.com/repos/huggingface/transformers/issues/41 | Typo in README | I think I spotted a typo in the README file under the Usage header. There is a piece of code that uses `BertTokenizer` and the typo is on this line:<br>`tokenized_text = "Who was Jim Henson ? Jim Henson was a puppeteer"`<br>I think `tokenized_text` should be replaced with `text`, since the next line is<br>`tokenized_text =... | closed | completed | false | 1 | [] | [] | 2018-11-20T03:52:35Z | 2018-11-20T09:02:15Z | 2018-11-20T09:02:15Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | weiyumou | 9,312,916 | MDQ6VXNlcjkzMTI5MTY= | User | false |
huggingface/transformers | 382,300,869 | MDU6SXNzdWUzODIzMDA4Njk= | 39 | https://github.com/huggingface/transformers/issues/39 | https://api.github.com/repos/huggingface/transformers/issues/39 | Command-line interface Document Bug | There is a bug in README.md about Command-line interface:<br>`export BERT_BASE_DIR=chinese_L-12_H-768_A-12`<br>**Wrong:**<br>```<br>pytorch_pretrained_bert convert_tf_checkpoint_to_pytorch \<br>--tf_checkpoint_path $BERT_BASE_DIR/bert_model.ckpt.index \<br>--bert_config_file $BERT_BASE_DIR/bert_config.json \<br>--pytorch_... | closed | completed | false | 1 | [] | [] | 2018-11-19T16:42:56Z | 2018-11-20T09:03:06Z | 2018-11-20T09:03:06Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | delldu | 31,266,222 | MDQ6VXNlcjMxMjY2MjIy | User | false |
huggingface/transformers | 381,939,792 | MDU6SXNzdWUzODE5Mzk3OTI= | 33 | https://github.com/huggingface/transformers/issues/33 | https://api.github.com/repos/huggingface/transformers/issues/33 | [Bug report] Ineffective no_decay when using BERTAdam | https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py#L505-L508<br>With this code, all parameters are decayed because the condition "parameter_name in no_decay" will never be satisfied.<br>I've made a PR #32 to fix it. | closed | completed | false | 1 | [] | [] | 2018-11-18T08:28:52Z | 2018-11-20T09:07:58Z | 2018-11-20T09:07:58Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | xiaoda99 | 6,015,633 | MDQ6VXNlcjYwMTU2MzM= | User | false |
huggingface/transformers | 382,579,717 | MDU6SXNzdWUzODI1Nzk3MTc= | 45 | https://github.com/huggingface/transformers/issues/45 | https://api.github.com/repos/huggingface/transformers/issues/45 | Issue of `bert_model` arg in `run_classify.py` | Hi,<br>I am trying to understand the `bert_model` arg in `run_classify.py`. In the file, I can see<br>```<br>tokenizer = BertTokenizer.from_pretrained(args.bert_model)<br>```<br>where `bert_model` is expected to be the vocab text file of the model<br>However, I also see<br>```<br>model = BertForSequenceClassification.from_pretr... | closed | completed | false | 1 | [] | [] | 2018-11-20T09:48:09Z | 2018-11-20T13:07:14Z | 2018-11-20T13:07:14Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | llidev | 29,957,883 | MDQ6VXNlcjI5OTU3ODgz | User | false |
huggingface/transformers | 382,553,589 | MDU6SXNzdWUzODI1NTM1ODk= | 43 | https://github.com/huggingface/transformers/issues/43 | https://api.github.com/repos/huggingface/transformers/issues/43 | grad is None in squad example | Hi, guys, I try the `run_squad` example with<br>```<br>Traceback (most recent call last): \| 0/7331 [00:00<?, ?it/s]<br>File "examples/run_squad.py", line 973, in <m... | closed | completed | false | 2 | [] | [] | 2018-11-20T08:38:03Z | 2018-11-20T23:04:28Z | 2018-11-20T23:04:28Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | vpegasus | 22,723,154 | MDQ6VXNlcjIyNzIzMTU0 | User | false |
huggingface/transformers | 383,028,844 | MDU6SXNzdWUzODMwMjg4NDQ= | 49 | https://github.com/huggingface/transformers/issues/49 | https://api.github.com/repos/huggingface/transformers/issues/49 | Multilingual Issue | Dear authors,<br>I have two questions.<br>First, how can I use multilingual pre-trained BERT in pytorch?<br>Is it all download model to $BERT_BASE_DIR?<br>Second is tokenization issue.<br>For Chinese and Japanese, tokenizer may works, however, for Korean, it shows different result that I expected<br>```<br>import torch<br>from p... | closed | completed | false | 1 | [] | [] | 2018-11-21T09:32:32Z | 2018-11-21T09:39:42Z | 2018-11-21T09:39:41Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | hahmyg | 3,884,429 | MDQ6VXNlcjM4ODQ0Mjk= | User | false |
huggingface/transformers | 383,586,156 | MDU6SXNzdWUzODM1ODYxNTY= | 52 | https://github.com/huggingface/transformers/issues/52 | https://api.github.com/repos/huggingface/transformers/issues/52 | UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 3920: character maps to <undefined> | Installed pytorch-pretrained-BERT from source, Python 3.7, Windows 10<br>When I run the following snippet:<br>import torch<br>from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM<br># Load pre-trained model tokenizer (vocabulary)<br>tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')<br>... | closed | completed | false | 2 | [] | [] | 2018-11-22T15:42:08Z | 2018-11-23T11:21:57Z | 2018-11-23T11:21:56Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | superchthonic | 5,455,837 | MDQ6VXNlcjU0NTU4Mzc= | User | false |
huggingface/transformers | 384,044,666 | MDU6SXNzdWUzODQwNDQ2NjY= | 55 | https://github.com/huggingface/transformers/issues/55 | https://api.github.com/repos/huggingface/transformers/issues/55 | Loss calculation error | https://github.com/huggingface/pytorch-pretrained-BERT/blob/982339d82984466fde3b1466f657a03200aa2ffb/pytorch_pretrained_bert/modeling.py#L744<br>Got `ValueError: Expected target size (1, 30522), got torch.Size([1, 11])` at line 744 of `modeling.py`. I think the line should be changed to `masked_lm_loss = loss_fct(predi... | closed | completed | false | 3 | [] | [] | 2018-11-25T03:48:17Z | 2018-11-26T08:52:00Z | 2018-11-26T08:52:00Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | jwang-lp | 944,876 | MDQ6VXNlcjk0NDg3Ng== | User | false |
huggingface/transformers | 383,967,106 | MDU6SXNzdWUzODM5NjcxMDY= | 54 | https://github.com/huggingface/transformers/issues/54 | https://api.github.com/repos/huggingface/transformers/issues/54 | example in BertForSequenceClassification() conflicts with the api | Hi, firstly, admire u for the great job. but I encounter 2 problems when i use it:<br>**1**. `UnicodeDecodeError: 'gbk' codec can't decode byte 0x85 in position 4527: illegal multibyte sequence`,<br>same problem as ISSUE 52 when I excute the `BertTokenizer.from_pretrained('bert-base-uncased')`, but I successfully excute `... | closed | completed | false | 1 | [] | [] | 2018-11-24T07:27:50Z | 2018-11-26T08:54:47Z | 2018-11-26T08:54:47Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | labixiaoK | 24,908,364 | MDQ6VXNlcjI0OTA4MzY0 | User | false |
huggingface/transformers | 383,162,319 | MDU6SXNzdWUzODMxNjIzMTk= | 51 | https://github.com/huggingface/transformers/issues/51 | https://api.github.com/repos/huggingface/transformers/issues/51 | Missing options/arguments in run_squad.py for BERT Large | Thanks for the great code..However, the `run_squad.py` for BERT Large seems to not have the `vocab_file` and `bert_config_file` (or other) options/arguments. Did you push the latest version?<br>Also, it is looking for a pytorch model file (a bin file). Does it need to be there?<br>I also had to add this line to the file... | closed | completed | false | 1 | [] | [] | 2018-11-21T15:10:45Z | 2018-11-26T08:57:23Z | 2018-11-26T08:57:23Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | avisil | 43,005,718 | MDQ6VXNlcjQzMDA1NzE4 | User | false |
huggingface/transformers | 382,297,444 | MDU6SXNzdWUzODIyOTc0NDQ= | 38 | https://github.com/huggingface/transformers/issues/38 | https://api.github.com/repos/huggingface/transformers/issues/38 | truncated normal initializer | I have a reasonable truncated normal approximation. (Actually that is what tf does).<br>https://discuss.pytorch.org/t/implementing-truncated-normal-initializer/4778/16?u=ruotianluo | closed | completed | false | 2 | [] | [] | 2018-11-19T16:35:08Z | 2018-11-26T09:42:42Z | 2018-11-26T09:42:42Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | ruotianluo | 16,023,153 | MDQ6VXNlcjE2MDIzMTUz | User | false |
huggingface/transformers | 384,525,339 | MDU6SXNzdWUzODQ1MjUzMzk= | 57 | https://github.com/huggingface/transformers/issues/57 | https://api.github.com/repos/huggingface/transformers/issues/57 | Missing function convert_to_unicode in tokenization.py | The function _convert_to_unicode_ is not in tokenization.py but used to be there in v0.1.2. When fine tuning with run_classifier.py, you get an ImportError: cannot import name 'convert_to_unicode'.<br>https://github.com/huggingface/pytorch-pretrained-BERT/blob/ce37b8e4819142171b61558e64f7dcb0286e9937/examples/run_class... | closed | completed | false | 1 | [] | [] | 2018-11-26T21:50:15Z | 2018-11-26T22:33:47Z | 2018-11-26T22:33:47Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | ptrichel | 15,148,709 | MDQ6VXNlcjE1MTQ4NzA5 | User | false |
huggingface/transformers | 382,576,559 | MDU6SXNzdWUzODI1NzY1NTk= | 44 | https://github.com/huggingface/transformers/issues/44 | https://api.github.com/repos/huggingface/transformers/issues/44 | Race condition when prepare pretrained model in distributed training | Hi,<br>I launched two processes per node to run distributed run_classifier.py. However, I am occasionally get below error:<br>```<br>11/20/2018 09:31:48 - INFO - pytorch_pretrained_bert.file_utils - copying /tmp/tmpa25_y4es to cache at /root/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6... | closed | completed | false | 4 | [] | [] | 2018-11-20T09:40:25Z | 2018-11-27T09:16:02Z | 2018-11-26T09:23:03Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | llidev | 29,957,883 | MDQ6VXNlcjI5OTU3ODgz | User | false |
huggingface/transformers | 383,946,736 | MDU6SXNzdWUzODM5NDY3MzY= | 53 | https://github.com/huggingface/transformers/issues/53 | https://api.github.com/repos/huggingface/transformers/issues/53 | Multi-GPU training vs Distributed training | Hi,<br>I have a question about Multi-GPU vs Distributed training, probably unrelated to BERT itself.<br>I have a 4-GPU server, and was trying to run `run_classifier.py` in two ways:<br>(a) run single-node distributed training with 4 processes and minibatch of 32 each<br>(b) run Multi-GPU training with minibatch of 128, a... | closed | completed | false | 2 | [] | [] | 2018-11-24T00:49:45Z | 2018-11-27T09:22:06Z | 2018-11-26T09:03:23Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | llidev | 29,957,883 | MDQ6VXNlcjI5OTU3ODgz | User | false |
huggingface/transformers | 386,047,173 | MDU6SXNzdWUzODYwNDcxNzM= | 67 | https://github.com/huggingface/transformers/issues/67 | https://api.github.com/repos/huggingface/transformers/issues/67 | `TypeError: object of type 'NoneType' has no len()` when tuning on squad | When running the following command for tuning on squad, I am getting a petty error inside logger `TypeError: object of type 'NoneType' has no len()`. Any thoughts what could be the main cause of the problem?<br>Full log:<br>```<br>python3.6 examples/run_squad.py \<br>> --bert_model bert-base-uncased \<br>> --do_train ... | closed | completed | false | 1 | [] | [] | 2018-11-30T05:48:04Z | 2018-11-30T13:24:03Z | 2018-11-30T13:24:02Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | danyaljj | 2,441,454 | MDQ6VXNlcjI0NDE0NTQ= | User | false |
huggingface/transformers | 386,303,565 | MDU6SXNzdWUzODYzMDM1NjU= | 71 | https://github.com/huggingface/transformers/issues/71 | https://api.github.com/repos/huggingface/transformers/issues/71 | run_squad script gets stuck | Hello,<br>I am trying to run the squad fine tuning script, but it hangs after printing out a few predictions. I am attaching the log. Can you help take a look?<br>I am running the script on a machine with 8 M40s.<br>[bert_squad.log](https://github.com/huggingface/pytorch-pretrained-BERT/files/2634588/bert_squad.log)<br>... | closed | completed | false | 3 | [] | [] | 2018-11-30T18:39:54Z | 2018-11-30T20:53:04Z | 2018-11-30T19:47:07Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | samyam | 3,409,344 | MDQ6VXNlcjM0MDkzNDQ= | User | false |
huggingface/transformers | 384,276,059 | MDU6SXNzdWUzODQyNzYwNTk= | 56 | https://github.com/huggingface/transformers/issues/56 | https://api.github.com/repos/huggingface/transformers/issues/56 | [Feature request ] Add support for the new cased version of the multilingual model | https://github.com/google-research/bert/commit/332a68723c34062b8f58e5fec3e430db4563320a | closed | completed | false | 1 | [] | [] | 2018-11-26T10:56:18Z | 2018-11-30T22:28:49Z | 2018-11-30T22:28:32Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | elyase | 1,175,888 | MDQ6VXNlcjExNzU4ODg= | User | false |
huggingface/transformers | 385,304,675 | MDU6SXNzdWUzODUzMDQ2NzU= | 61 | https://github.com/huggingface/transformers/issues/61 | https://api.github.com/repos/huggingface/transformers/issues/61 | BERTConfigs in example usages in `modeling.py` are not OK (?) | Hi!<br>In the `config` definition https://github.com/huggingface/pytorch-pretrained-BERT/blob/21f0196412115876da1c38652d22d1f7a14b36ff/pytorch_pretrained_bert/modeling.py#L848<br>in the Example usage of `BertForSequenceClassification` in `modeling.py`, there's things I don't understand:<br>- `vocab_size` in not an accept... | closed | completed | false | 1 | [] | [] | 2018-11-28T14:53:01Z | 2018-11-30T22:29:24Z | 2018-11-30T22:29:24Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | davidefiocco | 4,547,987 | MDQ6VXNlcjQ1NDc5ODc= | User | false |
huggingface/transformers | 385,368,286 | MDU6SXNzdWUzODUzNjgyODY= | 62 | https://github.com/huggingface/transformers/issues/62 | https://api.github.com/repos/huggingface/transformers/issues/62 | Specify a model from a specific directory for extract_features.py | I have downloaded the model and vocab files into a specific location, using their original file names, so my directory for bert-base-cased contains:<br>```<br>bert-base-cased-vocab.txt<br>bert_config.json<br>pytorch_model.bin<br>```<br>But when I try to specify the directory which contains these files for the `--bert_model` par... | closed | completed | false | 4 | [] | [] | 2018-11-28T17:04:39Z | 2018-11-30T22:30:12Z | 2018-11-30T22:30:12Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | johann-petrak | 619,106 | MDQ6VXNlcjYxOTEwNg== | User | false |
huggingface/transformers | 386,055,987 | MDU6SXNzdWUzODYwNTU5ODc= | 68 | https://github.com/huggingface/transformers/issues/68 | https://api.github.com/repos/huggingface/transformers/issues/68 | Accuracy on classification task is lower than the official tensorflow version | Hi, I am running the same task with the same hyper parameters as the official Google Tensorflow implementation of BERT, however, I am getting around 1.5% lower accuracy. Can you please give any hint about the possible cause?<br>Thanks! | closed | completed | false | 2 | [] | [] | 2018-11-30T06:30:56Z | 2018-11-30T22:56:45Z | 2018-11-30T22:56:45Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | ejld | 31,990,860 | MDQ6VXNlcjMxOTkwODYw | User | false |
huggingface/transformers | 386,489,436 | MDU6SXNzdWUzODY0ODk0MzY= | 76 | https://github.com/huggingface/transformers/issues/76 | https://api.github.com/repos/huggingface/transformers/issues/76 | Wrong signature in model call in run_classifier.py example (?) | I think that<br>https://github.com/huggingface/pytorch-pretrained-BERT/blob/063be09b714bf4d2fbbc3de7f52c45b8bc6817eb/examples/run_classifier.py#L608<br>may well have a problem, as it's not consistent with<br>https://github.com/huggingface/pytorch-pretrained-BERT/blob/063be09b714bf4d2fbbc3de7f52c45b8bc6817eb/examples/run_cl... | closed | completed | false | 2 | [] | [] | 2018-12-01T19:34:40Z | 2018-12-02T12:02:34Z | 2018-12-02T12:02:34Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | davidefiocco | 4,547,987 | MDQ6VXNlcjQ1NDc5ODc= | User | false |
huggingface/transformers | 386,553,265 | MDU6SXNzdWUzODY1NTMyNjU= | 78 | https://github.com/huggingface/transformers/issues/78 | https://api.github.com/repos/huggingface/transformers/issues/78 | TypeError: object of type 'WindowsPath' has no len() | Hi, when I run "tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')", the error "TypeError: object of type 'WindowsPath' has no len()" occurs, what is the problem? Thank you for your excellent code! | closed | completed | false | 4 | [] | [] | 2018-12-02T12:03:51Z | 2018-12-02T15:30:43Z | 2018-12-02T15:30:43Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | Deep1994 | 24,366,782 | MDQ6VXNlcjI0MzY2Nzgy | User | false |
huggingface/transformers | 386,698,511 | MDU6SXNzdWUzODY2OTg1MTE= | 79 | https://github.com/huggingface/transformers/issues/79 | https://api.github.com/repos/huggingface/transformers/issues/79 | numpy.core._internal.AxisError: axis 1 is out of bounds for array of dimension 1 | hello, when I am running run_classifier.py with MRPC dataset, there seems to be an mistake. the mistake is as following:<br><img width="752" alt="default" src="https://user-images.githubusercontent.com/29532760/49360256-9de0e100-f713-11e8-9a5c-d9f2bc5331e6.PNG"><br>the mistake is happening when training is over and the mod... | closed | completed | false | 1 | [] | [] | 2018-12-03T07:56:56Z | 2018-12-03T08:37:11Z | 2018-12-03T08:37:11Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | A-Rain | 29,532,760 | MDQ6VXNlcjI5NTMyNzYw | User | false |
huggingface/transformers | 386,887,965 | MDU6SXNzdWUzODY4ODc5NjU= | 82 | https://github.com/huggingface/transformers/issues/82 | https://api.github.com/repos/huggingface/transformers/issues/82 | AttributeError: 'tuple' object has no attribute 'backward' | Traceback (most recent call last): \| 0/11 [00:00<?, ?it/s]<br>File "examples/run_classifier.py", line 637, in <module><br>main()<br>File "examples/run_classifier.py", line 558, in main<br>... | closed | completed | false | 2 | [] | [] | 2018-12-03T16:06:20Z | 2018-12-04T07:27:06Z | 2018-12-04T07:27:06Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | Qzsl123 | 23,257,340 | MDQ6VXNlcjIzMjU3MzQw | User | false |
huggingface/transformers | 386,988,878 | MDU6SXNzdWUzODY5ODg4Nzg= | 83 | https://github.com/huggingface/transformers/issues/83 | https://api.github.com/repos/huggingface/transformers/issues/83 | Error while runing example | Hi!<br>I have a problem when running the example, could you please give me a hint on what may I be doing wrong?<br>I use:<br>`PYTHONPATH=. python examples/run_classifier.py --task_name MNLI --do_train --do_eval --do_lower_case --data_dir ../GLUE-baselines/glue_data/MNLI/ --bert_model bert-base-uncased --max_seq_len 40 -... | closed | completed | false | 2 | [] | [] | 2018-12-03T20:21:12Z | 2018-12-05T00:12:48Z | 2018-12-05T00:12:48Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | chledowski | 24,462,884 | MDQ6VXNlcjI0NDYyODg0 | User | false |
huggingface/transformers | 387,286,653 | MDU6SXNzdWUzODcyODY2NTM= | 88 | https://github.com/huggingface/transformers/issues/88 | https://api.github.com/repos/huggingface/transformers/issues/88 | Error when calculating loss and running backward | I'm using the sentence classification example. I used my own dataset for emotionclassification (4 classes).<br>The hyper-parameters are as follows:<br><pre><br>args.max_seq_length = 100<br>args.do_train = True<br>args.do_eval = True<br>args.do_lower_case = True<br>args.train_batch_size = 32<br>args.eval_batch_size = 8<br>args.learning... | closed | completed | false | 2 | [] | [] | 2018-12-04T13:30:58Z | 2018-12-05T03:41:38Z | 2018-12-05T03:41:38Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | zhongpeixiang | 11,826,803 | MDQ6VXNlcjExODI2ODAz | User | false |
huggingface/transformers | 387,233,714 | MDU6SXNzdWUzODcyMzM3MTQ= | 86 | https://github.com/huggingface/transformers/issues/86 | https://api.github.com/repos/huggingface/transformers/issues/86 | code in run_squad.py line 263 | # Zero-pad up to the sequence length.<br>while len(input_ids) < max_seq_length:<br>input_ids.append(0)<br>input_mask.append(0)<br>segment_ids.append(0)<br>in segment_ids array,1 indicates token from passage and 0 indicate token form query.<br>when padding,why segment_ids filled with 0,which represents que... | closed | completed | false | 3 | [] | [] | 2018-12-04T11:08:09Z | 2018-12-06T01:30:36Z | 2018-12-06T01:30:36Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | xilinniao123 | 11,830,865 | MDQ6VXNlcjExODMwODY1 | User | false |
huggingface/transformers | 388,713,951 | MDU6SXNzdWUzODg3MTM5NTE= | 100 | https://github.com/huggingface/transformers/issues/100 | https://api.github.com/repos/huggingface/transformers/issues/100 | Squad dataset has multiple answers to a question. | https://github.com/huggingface/pytorch-pretrained-BERT/blob/3ba5470eb85464df62f324bea88e20da234c423f/examples/run_squad.py#L143<br>The confusing part here is that in line 146, only the first answer is considered, so I am wondering why is there a check for multiple answers before.<br>Also, SQuad dataset has multiple answe... | closed | completed | false | 2 | [] | [] | 2018-12-07T16:02:00Z | 2018-12-08T11:57:22Z | 2018-12-08T11:57:22Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | nischalhp | 1,147,533 | MDQ6VXNlcjExNDc1MzM= | User | false |
huggingface/transformers | 388,930,579 | MDU6SXNzdWUzODg5MzA1Nzk= | 104 | https://github.com/huggingface/transformers/issues/104 | https://api.github.com/repos/huggingface/transformers/issues/104 | BERT for classification example training files | Are there any example training files for `run_classifier.py`? | closed | completed | false | 1 | [] | [] | 2018-12-08T15:16:50Z | 2018-12-08T15:19:17Z | 2018-12-08T15:19:17Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | artemlos | 6,392,760 | MDQ6VXNlcjYzOTI3NjA= | User | false |
huggingface/transformers | 386,786,079 | MDU6SXNzdWUzODY3ODYwNzk= | 81 | https://github.com/huggingface/transformers/issues/81 | https://api.github.com/repos/huggingface/transformers/issues/81 | There is some problem in supporting continuously training | I change the run_classfifier.py in order to support continuously training. i save the model.state_dict() and the BertAdam optimizer.state_dict(), and I load them when start continuously training. However, After some epochs, the loss will increase little by little and finally end with a large loss value. I do not know t... | closed | completed | false | 1 | [] | [] | 2018-12-03T12:00:09Z | 2018-12-09T21:01:03Z | 2018-12-09T21:01:02Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | ZacharyWaseda | 16,608,767 | MDQ6VXNlcjE2NjA4NzY3 | User | false |
huggingface/transformers | 387,683,054 | MDU6SXNzdWUzODc2ODMwNTQ= | 89 | https://github.com/huggingface/transformers/issues/89 | https://api.github.com/repos/huggingface/transformers/issues/89 | bert-base-multilingual-cased - Text bigger than 512 | Hello,<br>I am trying to extract features from German text using bert-base-multilingual-cased. However, my text is bigger than 512 words.<br>Is there any way to use the pertained Bert for text greater than 512 words | closed | completed | false | 2 | [] | [] | 2018-12-05T10:11:21Z | 2018-12-09T21:04:53Z | 2018-12-09T21:04:53Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | agemagician | 6,087,313 | MDQ6VXNlcjYwODczMTM= | User | false |
huggingface/transformers | 388,994,586 | MDU6SXNzdWUzODg5OTQ1ODY= | 105 | https://github.com/huggingface/transformers/issues/105 | https://api.github.com/repos/huggingface/transformers/issues/105 | weights initialized two times | Hi,<br>I found that you initilized all weights twice:<br>The first one is in BertModel class:<br>https://github.com/huggingface/pytorch-pretrained-BERT/blob/3ba5470eb85464df62f324bea88e20da234c423f/pytorch_pretrained_bert/modeling.py#L586<br>And the second one is in classes of each tasks such as in BertForSequenceClass... | closed | completed | false | 2 | [] | [] | 2018-12-09T07:06:52Z | 2018-12-09T21:17:51Z | 2018-12-09T21:17:51Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | friskit-china | 2,494,883 | MDQ6VXNlcjI0OTQ4ODM= | User | false |
huggingface/transformers | 389,201,876 | MDU6SXNzdWUzODkyMDE4NzY= | 106 | https://github.com/huggingface/transformers/issues/106 | https://api.github.com/repos/huggingface/transformers/issues/106 | Picking max_sequence_length in run_classifier.py CoLA task | Is there an upper bound for the max_sequence_length parameter when using run_classifier.py with CoLA task?<br>When I tested with the default max_sequence_length of 128, everything worked good, but once I changed it to something else, eg 1024, it started the training and failed on the first iteration with the error show... | closed | completed | false | 2 | [] | [] | 2018-12-10T09:04:47Z | 2018-12-10T15:14:47Z | 2018-12-10T15:14:47Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | artemlos | 6,392,760 | MDQ6VXNlcjYzOTI3NjA= | User | false |
huggingface/transformers | 388,915,407 | MDU6SXNzdWUzODg5MTU0MDc= | 103 | https://github.com/huggingface/transformers/issues/103 | https://api.github.com/repos/huggingface/transformers/issues/103 | Words after tokenization replaced with # | Hello,<br>When training the bert-base-multilingual-cased model for Question and Answering, I see that the tokens look like this :<br>```tokens: [CLS] what is the ins ##ured _ name ? [SEP] versi ##cherung ##ss ##che ##in erg ##o hau ##srat ##versi ##cherung hr - sv 927 ##26 ##49 ##2 ```<br>Any idea why words are gettin... | closed | completed | false | 6 | [] | [] | 2018-12-08T11:56:57Z | 2018-12-11T13:32:37Z | 2018-12-11T10:33:23Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | nischalhp | 1,147,533 | MDQ6VXNlcjExNDc1MzM= | User | false |
huggingface/transformers | 389,846,897 | MDU6SXNzdWUzODk4NDY4OTc= | 114 | https://github.com/huggingface/transformers/issues/114 | https://api.github.com/repos/huggingface/transformers/issues/114 | What is the best dataset structure for BERT? | First I want to say thanks for setting up all this!<br>I am using BertForSequenceClassification and am wondering what the optimal way is to structure my sequences.<br>Right now my sequences are blog post which could be upwards to 400 words long.<br>Would it be better to split my blog posts in sentences and use the se... | closed | completed | false | 0 | [] | [] | 2018-12-11T16:28:00Z | 2018-12-11T20:57:45Z | 2018-12-11T20:57:45Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | wahlforss | 73,305 | MDQ6VXNlcjczMzA1 | User | false |
huggingface/transformers | 389,549,868 | MDU6SXNzdWUzODk1NDk4Njg= | 110 | https://github.com/huggingface/transformers/issues/110 | https://api.github.com/repos/huggingface/transformers/issues/110 | Pretrained Tokenizer Loading Fails: 'PosixPath' object has no attribute 'rfind' | I was trying to work through the toy tokenization example from the main README, and I hit an error on the step of loading in a pre-trained BERT tokenizer.<br>```<br>~/bert_transfer$ python3 test_tokenizer.py<br>Traceback (most recent call last):<br>File "test_tokenizer.py",... | closed | completed | false | 2 | [] | [] | 2018-12-11T00:48:11Z | 2018-12-13T11:16:27Z | 2018-12-11T10:28:47Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | decodyng | 5,902,855 | MDQ6VXNlcjU5MDI4NTU= | User | false |
huggingface/transformers | 390,793,183 | MDU6SXNzdWUzOTA3OTMxODM= | 117 | https://github.com/huggingface/transformers/issues/117 | https://api.github.com/repos/huggingface/transformers/issues/117 | logging.basicConfig overrides user logging | I think logging.basicConfig should not be called inside library code<br>check out this SO thread<br>https://stackoverflow.com/questions/27016870/how-should-logging-be-used-in-a-python-package | closed | completed | false | 1 | [] | [] | 2018-12-13T17:58:02Z | 2018-12-14T13:46:51Z | 2018-12-14T13:46:51Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | asafamr | 5,182,534 | MDQ6VXNlcjUxODI1MzQ= | User | false |
huggingface/transformers | 387,100,844 | MDU6SXNzdWUzODcxMDA4NDQ= | 85 | https://github.com/huggingface/transformers/issues/85 | https://api.github.com/repos/huggingface/transformers/issues/85 | How to use pre-trained SQUAD model? | After training squad, I have a model file in a local folder:<br>```<br>-rw-rw-r-- 1 khashab2 cs_danr 4.7M Nov 21 19:20 dev-v1.1.json<br>-rw-rw-r-- 1 khashab2 cs_danr 3.4K Nov 29 22:52 evaluate-v1.1.py<br>drwxrwsr-x 2 khashab2 cs_danr 10 Nov 30 14:57 out2<br>-rw-rw-r-- 1 khashab2 cs_danr 29M Nov 21 19:20 train-v1.1.json... | closed | completed | false | 1 | [] | [] | 2018-12-04T03:13:30Z | 2018-12-14T14:42:04Z | 2018-12-14T14:42:04Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | danyaljj | 2,441,454 | MDQ6VXNlcjI0NDE0NTQ= | User | false |
huggingface/transformers | 388,660,132 | MDU6SXNzdWUzODg2NjAxMzI= | 98 | https://github.com/huggingface/transformers/issues/98 | https://api.github.com/repos/huggingface/transformers/issues/98 | Problem about convert TF model and pretraining | First of all, Thank you for this great job. I use the official tensorflow implementation to pretrain on my corpus and then save the model. I want to convert this model to pytorch format and use it, but I got the error:<br>Traceback (most recent call last):<br>File "convert_tf_checkpoint_to_pytorch.py", line 105, in <mo... | closed | completed | false | 3 | [] | [] | 2018-12-07T13:42:59Z | 2018-12-14T14:42:40Z | 2018-12-14T14:42:40Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | zhezhaoa | 10,495,098 | MDQ6VXNlcjEwNDk1MDk4 | User | false |
huggingface/transformers | 389,950,888 | MDU6SXNzdWUzODk5NTA4ODg= | 115 | https://github.com/huggingface/transformers/issues/115 | https://api.github.com/repos/huggingface/transformers/issues/115 | How to run a saved model? | How can you run the model without training the model? If we already trained a model with run_classifer? | closed | completed | false | 2 | [] | [] | 2018-12-11T20:58:38Z | 2018-12-14T14:43:43Z | 2018-12-14T14:43:43Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | wahlforss | 73,305 | MDQ6VXNlcjczMzA1 | User | false |
huggingface/transformers | 391,402,013 | MDU6SXNzdWUzOTE0MDIwMTM= | 120 | https://github.com/huggingface/transformers/issues/120 | https://api.github.com/repos/huggingface/transformers/issues/120 | RuntimeError: Expected object of type torch.LongTensor but found type torch.cuda.LongTensor for argument #3 'index' | I am using part of your evaluation code, with slight modifications:<br>https://github.com/danyaljj/pytorch-pretrained-BERT/blob/92e22d710287db1b4aa4fda951714887878fa728/examples/daniel_run.py#L582-L616<br>Wondering if you have encountered the following error:<br>```<br>(env3.6) khashab2@gissing:/shared/shelley/khashab2/... | closed | completed | false | 1 | [] | [] | 2018-12-15T18:43:53Z | 2018-12-15T20:45:37Z | 2018-12-15T20:45:37Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | danyaljj | 2,441,454 | MDQ6VXNlcjI0NDE0NTQ= | User | false |
huggingface/transformers | 391,458,997 | MDU6SXNzdWUzOTE0NTg5OTc= | 121 | https://github.com/huggingface/transformers/issues/121 | https://api.github.com/repos/huggingface/transformers/issues/121 | High accuracy for CoLA task | I try to reproduce the CoLA results from the BERT paper (BERTBase, Single GPU).<br>Running the following command<br>```<br>python run_classifier.py \<br>--task_name cola \<br>--do_train \<br>--do_eval \<br>--do_lower_case \<br>--data_dir $GLUE_DIR/CoLA/ \<br>--bert_model bert-base-uncased \<br>--max_seq_length 128 \<br>-... | closed | completed | false | 2 | [] | [] | 2018-12-16T11:39:56Z | 2018-12-17T06:41:06Z | 2018-12-17T06:41:06Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | pfecht | 26,819,398 | MDQ6VXNlcjI2ODE5Mzk4 | User | false |
huggingface/transformers | 391,979,075 | MDU6SXNzdWUzOTE5NzkwNzU= | 123 | https://github.com/huggingface/transformers/issues/123 | https://api.github.com/repos/huggingface/transformers/issues/123 | big memory occupied | When I run the examples for MRPC, my program was always killed becaused of big memory occupied. Anyone encounter with this issue? | closed | completed | false | 1 | [] | [] | 2018-12-18T03:13:11Z | 2018-12-18T08:04:38Z | 2018-12-18T08:04:38Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | AIRobotZhang | 20,748,608 | MDQ6VXNlcjIwNzQ4NjA4 | User | false |
huggingface/transformers | 392,409,375 | MDU6SXNzdWUzOTI0MDkzNzU= | 129 | https://github.com/huggingface/transformers/issues/129 | https://api.github.com/repos/huggingface/transformers/issues/129 | BERT + CNN classifier doesn't work after migrating from 0.1.2 to 0.4.0 | I used BERT in a very simple sentence classification task:<br>in `__init__` I have<br>```python3<br>self.bert = BertModel(config)<br>self.cnn_classifier = CNNClassifier(self.config.hidden_size, intent_cls_num)<br>```<br>and in forward it's just<br>```python3<br>encoded_layers, _ = self.bert(input_ids, token_type_ids, attention_mask, o... | closed | completed | false | 2 | [] | [] | 2018-12-19T01:57:22Z | 2018-12-20T00:20:48Z | 2018-12-20T00:20:48Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | jwang-lp | 944,876 | MDQ6VXNlcjk0NDg3Ng== | User | false |
huggingface/transformers | 378,996,831 | MDU6SXNzdWUzNzg5OTY4MzE= | 10 | https://github.com/huggingface/transformers/issues/10 | https://api.github.com/repos/huggingface/transformers/issues/10 | Is there a plan to have a FP16 for GPU so to have larger batch size or longer text documents support ? | Is there a plan to have an FP16 for GPU so to have a larger batch size or longer text documents support? | closed | completed | false | 4 | [] | [] | 2018-11-09T02:23:34Z | 2018-12-20T18:42:11Z | 2018-11-12T16:06:47Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | howardhsu | 10,661,375 | MDQ6VXNlcjEwNjYxMzc1 | User | false |
huggingface/transformers | 393,058,463 | MDU6SXNzdWUzOTMwNTg0NjM= | 136 | https://github.com/huggingface/transformers/issues/136 | https://api.github.com/repos/huggingface/transformers/issues/136 | It's possible to avoid download the pretrained model? | When I run this code `model = BertModel.from_pretrained('bert-base-uncased')` , it would download a big file and sometimes that's very slow. Now I have download the model from [https://github.com/google-research/bert](url). So, It's possible to avoid download the pretrained model when I use pytorch-pretrained-BERT at ... | closed | completed | false | 3 | [] | [] | 2018-12-20T14:00:03Z | 2018-12-21T13:47:03Z | 2018-12-20T14:08:10Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | rxy1212 | 14,829,556 | MDQ6VXNlcjE0ODI5NTU2 | User | false |
huggingface/transformers | 394,064,499 | MDU6SXNzdWUzOTQwNjQ0OTk= | 147 | https://github.com/huggingface/transformers/issues/147 | https://api.github.com/repos/huggingface/transformers/issues/147 | Does the final hidden state contains the <CLS> for Squad2.0 | Recently I'm modifying the `run_squad.py` to run on CoQA. In the implementation of TensorFlow from Google, they use the probability on the first token of a context segment, where is the location of `<CLS>` to as the that of the question is unanswerable. So I try to modified the `run_squad.py` in your implementation as ... | closed | completed | false | 1 | [] | [] | 2018-12-26T02:05:34Z | 2018-12-26T02:48:04Z | 2018-12-26T02:48:04Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | SparkJiao | 16,469,472 | MDQ6VXNlcjE2NDY5NDcy | User | false |
huggingface/transformers | 394,310,682 | MDU6SXNzdWUzOTQzMTA2ODI= | 148 | https://github.com/huggingface/transformers/issues/148 | https://api.github.com/repos/huggingface/transformers/issues/148 | Embeddings from BERT for original tokens | I am trying out the `extract_features.py` example program. I noticed that a sentence gets split into tokens and the embeddings are generated. For example, if you had the sentence “Definitely not”, and the corresponding workpieces can be [“Def”, “##in”, “##ite”, “##ly”, “not”]. It then generates the embeddings for thes... | closed | completed | false | 1 | [] | [] | 2018-12-27T06:48:23Z | 2018-12-28T09:17:16Z | 2018-12-28T09:17:16Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | nihalnayak | 5,679,782 | MDQ6VXNlcjU2Nzk3ODI= | User | false |
huggingface/transformers | 393,876,320 | MDU6SXNzdWUzOTM4NzYzMjA= | 146 | https://github.com/huggingface/transformers/issues/146 | https://api.github.com/repos/huggingface/transformers/issues/146 | BertForQuestionAnswering: Predicting span on the question? | Hello,<br>I have a question regarding the `BertForQuestionAnswering` implementation. If I am not mistaken, for this model the sequence should be of the form `Question tokens [SEP] Passage tokens`. Therefore, the embedded representation computed by `BertModel` returns the states of both the question and the passage (a t... | closed | completed | false | 1 | [] | [] | 2018-12-24T12:51:49Z | 2018-12-28T09:20:49Z | 2018-12-28T09:20:49Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | valsworthen | 18,659,328 | MDQ6VXNlcjE4NjU5MzI4 | User | false |
huggingface/transformers | 393,167,784 | MDU6SXNzdWUzOTMxNjc3ODQ= | 139 | https://github.com/huggingface/transformers/issues/139 | https://api.github.com/repos/huggingface/transformers/issues/139 | Not able to use FP16 in pytorch-pretrained-BERT | I'm not able to work with FP16 for pytorch BERT code. Particularly for BertForSequenceClassification, which I tried and got the issue<br>**Runtime error: Expected scalar type object Half but got scalar type Float for argument #2 target**<br>when I enabled fp16.<br>Also when using<br>`logits = logits.half()<br>labels = labels.ha... | closed | completed | false | 0 | [] | [] | 2018-12-20T18:46:14Z | 2018-12-28T09:23:34Z | 2018-12-28T09:23:34Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | Ashish-Gupta03 | 7,694,700 | MDQ6VXNlcjc2OTQ3MDA= | User | false |
huggingface/transformers | 392,898,311 | MDU6SXNzdWUzOTI4OTgzMTE= | 132 | https://github.com/huggingface/transformers/issues/132 | https://api.github.com/repos/huggingface/transformers/issues/132 | NONE | | closed | completed | false | 0 | [] | [] | 2018-12-20T05:42:29Z | 2018-12-28T14:04:26Z | 2018-12-28T13:56:36Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | HuXiangkun | 6,700,036 | MDQ6VXNlcjY3MDAwMzY= | User | false |
huggingface/transformers | 394,865,030 | MDU6SXNzdWUzOTQ4NjUwMzA= | 154 | https://github.com/huggingface/transformers/issues/154 | https://api.github.com/repos/huggingface/transformers/issues/154 | the run_squad report "for training,each question should exactly have 1 answer" when I tried to fintune bert on squad2.0 | But some questions of train-v2.0.json are unanswerable. | closed | completed | false | 0 | [] | [] | 2018-12-30T11:33:29Z | 2018-12-30T11:48:50Z | 2018-12-30T11:48:50Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | zhaoguangxiang | 17,742,385 | MDQ6VXNlcjE3NzQyMzg1 | User | false |
huggingface/transformers | 391,564,653 | MDU6SXNzdWUzOTE1NjQ2NTM= | 122 | https://github.com/huggingface/transformers/issues/122 | https://api.github.com/repos/huggingface/transformers/issues/122 | _load_from_state_dict() takes 7 positional arguments but 8 were given | | closed | completed | false | 3 | [] | [] | 2018-12-17T05:38:40Z | 2019-01-07T11:46:27Z | 2019-01-07T11:46:27Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | guanlongtianzi | 10,386,366 | MDQ6VXNlcjEwMzg2MzY2 | User | false |
huggingface/transformers | 392,093,383 | MDU6SXNzdWUzOTIwOTMzODM= | 125 | https://github.com/huggingface/transformers/issues/125 | https://api.github.com/repos/huggingface/transformers/issues/125 | Warning/Assert when embedding sequences longer than positional embedding size | Hi team, love the work.<br>Just a feature suggestion: when running on GPU (presumably the CPU too), BERT will break when you try to run on sentences longer than 512 tokens (on bert-base).<br>This is because the position embedding matrix size is only 512 (or whatever else it is for the other bert models)<br>Could the to... | closed | completed | false | 2 | [] | [] | 2018-12-18T10:36:23Z | 2019-01-07T11:46:41Z | 2019-01-07T11:46:41Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | patrick-s-h-lewis | 15,031,366 | MDQ6VXNlcjE1MDMxMzY2 | User | false |
huggingface/transformers | 392,922,322 | MDU6SXNzdWUzOTI5MjIzMjI= | 133 | https://github.com/huggingface/transformers/issues/133 | https://api.github.com/repos/huggingface/transformers/issues/133 | lower accuracy on OMD(Obama-McCain Debate twitter sentiment dataset) | I run the classification task with BERT pretrianed model, but while it's much lower than other methods on OMD dataset, which has 2 labels. The final accuracy result is only 62% on binary classification task! | closed | completed | false | 3 | [] | [] | 2018-12-20T07:27:11Z | 2019-01-07T12:11:22Z | 2019-01-07T12:11:22Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | AIRobotZhang | 20,748,608 | MDQ6VXNlcjIwNzQ4NjA4 | User | false |
huggingface/transformers | 393,142,144 | MDU6SXNzdWUzOTMxNDIxNDQ= | 138 | https://github.com/huggingface/transformers/issues/138 | https://api.github.com/repos/huggingface/transformers/issues/138 | Problem loading finetuned model for squad | Hi,<br>i'm trying to load a fine tuned model for question answering which i trained with squad.py:<br>```<br>import torch<br>from pytorch_pretrained_bert import BertModel, BertForQuestionAnswering<br>from pytorch_pretrained_bert import modeling<br>config = modeling.BertConfig(attention_probs_dropout_prob=0.1, hidden_dropout_prob... | closed | completed | false | 4 | [] | [] | 2018-12-20T17:27:40Z | 2019-01-07T12:17:58Z | 2019-01-07T12:17:58Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | ni40in | 9,155,183 | MDQ6VXNlcjkxNTUxODM= | User | false |
huggingface/transformers | 393,167,870 | MDU6SXNzdWUzOTMxNjc4NzA= | 140 | https://github.com/huggingface/transformers/issues/140 | https://api.github.com/repos/huggingface/transformers/issues/140 | Not able to use FP16 in pytorch-pretrained-BERT. Getting error **Runtime error: Expected scalar type object Half but got scalar type Float for argument #2 target** | I'm not able to work with FP16 for pytorch BERT code. Particularly for BertForSequenceClassification, which I tried and got the issue<br>**Runtime error: Expected scalar type object Half but got scalar type Float for argument #2 target**<br>when I enabled fp16.<br>Also when using<br>`logits = logits.half()<br>labels = labels.ha... | closed | completed | false | 3 | [] | [] | 2018-12-20T18:46:30Z | 2019-01-07T12:18:36Z | 2019-01-07T12:18:36Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | Ashish-Gupta03 | 7,694,700 | MDQ6VXNlcjc2OTQ3MDA= | User | false |
huggingface/transformers | 393,365,633 | MDU6SXNzdWUzOTMzNjU2MzM= | 143 | https://github.com/huggingface/transformers/issues/143 | https://api.github.com/repos/huggingface/transformers/issues/143 | bug in init_bert_weights | hi ,<br>there is a bug in init_bert_weights().<br>the BERTLayerNorm has twice init, the first init is in the BERTLayerNorm module __init__(). the second init in init_bert_weights().<br>if you want to get pre-training model that is not from google model, the second init will lead to bad convergence in my experime... | closed | completed | false | 1 | [] | [] | 2018-12-21T08:29:40Z | 2019-01-07T12:18:49Z | 2019-01-07T12:18:49Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | mjc14 | 15,847,067 | MDQ6VXNlcjE1ODQ3MDY3 | User | false |
huggingface/transformers | 394,673,351 | MDU6SXNzdWUzOTQ2NzMzNTE= | 151 | https://github.com/huggingface/transformers/issues/151 | https://api.github.com/repos/huggingface/transformers/issues/151 | Using large model with fp16 enable causes the server down | I am using a server with Ubuntu 16.04 and 4 TITAN X GPUs. The server runs the base model with no problems. But it cannot run the large model with 32-bit float point, so I enabled fp16, and the server went down.<br>(When I successfully ran the base model, it consumes 8G GPU memory for each of the 4 GPUS. ) | closed | completed | false | 2 | [] | [] | 2018-12-28T16:32:05Z | 2019-01-07T12:24:34Z | 2019-01-07T12:24:34Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | hguan6 | 19,914,123 | MDQ6VXNlcjE5OTE0MTIz | User | false |
huggingface/transformers | 395,941,645 | MDU6SXNzdWUzOTU5NDE2NDU= | 164 | https://github.com/huggingface/transformers/issues/164 | https://api.github.com/repos/huggingface/transformers/issues/164 | pretrained model | is the pretrained model downloaded include word embedding?<br>I do not see any embedding in your code<br>please | closed | completed | false | 4 | [] | [] | 2019-01-04T14:20:49Z | 2019-01-07T12:28:07Z | 2019-01-07T12:28:07Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | minmummax | 25,759,762 | MDQ6VXNlcjI1NzU5NzYy | User | false |
huggingface/transformers | 396,141,181 | MDU6SXNzdWUzOTYxNDExODE= | 167 | https://github.com/huggingface/transformers/issues/167 | https://api.github.com/repos/huggingface/transformers/issues/167 | Question about hidden layers from pretained model | In the example shown to get hidden states https://github.com/huggingface/pytorch-pretrained-BERT#usage<br>I want to confirm - the final hidden layer corresponds to the last element of `encoded_layers`, right? | closed | completed | false | 1 | [] | [] | 2019-01-05T07:09:20Z | 2019-01-07T12:28:19Z | 2019-01-07T12:28:19Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | mvss80 | 5,709,876 | MDQ6VXNlcjU3MDk4NzY= | User | false |
huggingface/transformers | 396,232,776 | MDU6SXNzdWUzOTYyMzI3NzY= | 168 | https://github.com/huggingface/transformers/issues/168 | https://api.github.com/repos/huggingface/transformers/issues/168 | Cannot reproduce the result of run_squad 1.1 | I train 5 epochs with learning rate 5e-5, but my evaluation result is {'exact_match': 32.04351939451277, 'f1': 36.53574674513405}.<br>What is the problem? | closed | completed | false | 5 | [] | [] | 2019-01-06T06:34:47Z | 2019-01-07T12:30:56Z | 2019-01-07T12:30:56Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | hmt2014 | 9,130,751 | MDQ6VXNlcjkxMzA3NTE= | User | false |
huggingface/transformers | 396,375,768 | MDU6SXNzdWUzOTYzNzU3Njg= | 170 | https://github.com/huggingface/transformers/issues/170 | https://api.github.com/repos/huggingface/transformers/issues/170 | How to pretrain my own data with this pytorch code? | I wonder how to pretrain with my own data. | closed | completed | false | 6 | [] | [] | 2019-01-07T07:22:53Z | 2019-01-07T13:05:35Z | 2019-01-07T12:29:44Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | Gpwner | 19,349,207 | MDQ6VXNlcjE5MzQ5MjA3 | User | false |
huggingface/transformers | 394,870,891 | MDU6SXNzdWUzOTQ4NzA4OTE= | 155 | https://github.com/huggingface/transformers/issues/155 | https://api.github.com/repos/huggingface/transformers/issues/155 | Why not the mlm use the information of adjacent sentences? | I prepare two sentences for mlm predict the mask part:"Tom cant run fast. He [mask] his back a few years ago." The result of model (uncased base) is 'got'. That is meaningless. Obviously ,"hurt" is better.<br>I wander how to make mlm to use the information of adjacent sentences. | closed | completed | false | 3 | [] | [] | 2018-12-30T13:08:53Z | 2019-01-08T07:01:28Z | 2019-01-07T12:25:24Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | l126t | 21,979,549 | MDQ6VXNlcjIxOTc5NTQ5 | User | false |
huggingface/transformers | 396,776,254 | MDU6SXNzdWUzOTY3NzYyNTQ= | 173 | https://github.com/huggingface/transformers/issues/173 | https://api.github.com/repos/huggingface/transformers/issues/173 | What 's the mlm accuracy of pretrained model? | What 's the mlm accuracy of pretrained model? In my case, I find the scores of candidate in top 10 are very close,but most are not suitable. Is this the same prediction as Google's original project?<br>_Originally posted by @l126t in https://github.com/huggingface/pytorch-pretrained-BERT/issues/155#issuecomment-452195... | closed | completed | false | 1 | [] | [] | 2019-01-08T07:08:35Z | 2019-01-08T10:07:23Z | 2019-01-08T10:07:23Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | l126t | 21,979,549 | MDQ6VXNlcjIxOTc5NTQ5 | User | false |
huggingface/transformers | 398,588,638 | MDU6SXNzdWUzOTg1ODg2Mzg= | 188 | https://github.com/huggingface/transformers/issues/188 | https://api.github.com/repos/huggingface/transformers/issues/188 | Weight Decay Fix Original Paper | Hi There!
Is the weight decay fix from?
https://arxiv.org/abs/1711.05101
Thanks! | closed | completed | false | 1 | [] | [] | 2019-01-12T20:22:45Z | 2019-01-14T01:08:36Z | 2019-01-14T01:08:36Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | PetrochukM | 7,424,737 | MDQ6VXNlcjc0MjQ3Mzc= | User | false |
huggingface/transformers | 394,864,622 | MDU6SXNzdWUzOTQ4NjQ2MjI= | 153 | https://github.com/huggingface/transformers/issues/153 | https://api.github.com/repos/huggingface/transformers/issues/153 | Did you suport squad2.0 | What is the command to reproduce the results of squad2.0 reported in the BERT.
Thanks~ | closed | completed | false | 2 | [] | [] | 2018-12-30T11:25:55Z | 2019-01-14T09:03:51Z | 2019-01-14T09:03:50Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | zhaoguangxiang | 17,742,385 | MDQ6VXNlcjE3NzQyMzg1 | User | false |
huggingface/transformers | 397,703,107 | MDU6SXNzdWUzOTc3MDMxMDc= | 178 | https://github.com/huggingface/transformers/issues/178 | https://api.github.com/repos/huggingface/transformers/issues/178 | Can we use BERT for Punctuation Prediction? | Can we use the pre-trained BERT model for Punctuation Prediction for Conversational Speech? Let say punctuating an ASR output? | closed | completed | false | 1 | [] | [] | 2019-01-10T07:25:30Z | 2019-01-14T09:05:22Z | 2019-01-14T09:05:22Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | dalonlobo | 12,654,849 | MDQ6VXNlcjEyNjU0ODQ5 | User | false |
huggingface/transformers | 398,143,878 | MDU6SXNzdWUzOTgxNDM4Nzg= | 180 | https://github.com/huggingface/transformers/issues/180 | https://api.github.com/repos/huggingface/transformers/issues/180 | Weights not initialized from pretrained model | Thanks for your awesome work!
When I execute the following code for a named entity recognition tasks:
`model = BertForTokenClassification.from_pretrained("bert-base-uncased", num_labels=num_labels)`
Output the following information:
> Weights of BertForTokenClassification not initialized from pretrained model... | closed | completed | false | 3 | [] | [] | 2019-01-11T06:03:47Z | 2019-01-14T09:08:01Z | 2019-01-14T09:05:33Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | lemonhu | 22,219,073 | MDQ6VXNlcjIyMjE5MDcz | User | false |
huggingface/transformers | 398,148,589 | MDU6SXNzdWUzOTgxNDg1ODk= | 181 | https://github.com/huggingface/transformers/issues/181 | https://api.github.com/repos/huggingface/transformers/issues/181 | All about the training speed in classification job | I run the bert-base-uncased model with task 'mrpc' in ubuntu,nvidia p4000 8G.
It's a classification problem, and I use the default demo data.
But the training speed is about 2 batch every second. Any problem?
I think it maybe too slow, but can not find why. I have another task with 1300000 data costs 6 hours per ep... | closed | completed | false | 1 | [] | [] | 2019-01-11T06:27:39Z | 2019-01-14T09:09:04Z | 2019-01-14T09:09:04Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | zhusleep | 17,355,556 | MDQ6VXNlcjE3MzU1NTU2 | User | false |
huggingface/transformers | 398,208,606 | MDU6SXNzdWUzOTgyMDg2MDY= | 184 | https://github.com/huggingface/transformers/issues/184 | https://api.github.com/repos/huggingface/transformers/issues/184 | Python 3.5 + Torch 1.0 does not work | When running `run_lm_finetuning.py` to fine-tune language model with default settings (see command below), sometimes I could run successfully, but sometimes I received different errors like `RuntimeError: The size of tensor a must match the size of tensor b at non-singleton dimension 1`, `RuntimeError: Creating MTGP c... | closed | completed | false | 2 | [] | [] | 2019-01-11T09:43:43Z | 2019-01-14T09:10:03Z | 2019-01-14T09:10:02Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | yuhui-zh15 | 17,669,473 | MDQ6VXNlcjE3NjY5NDcz | User | false |
huggingface/transformers | 398,229,727 | MDU6SXNzdWUzOTgyMjk3Mjc= | 186 | https://github.com/huggingface/transformers/issues/186 | https://api.github.com/repos/huggingface/transformers/issues/186 | BertOnlyMLMHead is a duplicate of BertLMPredictionHead | https://github.com/huggingface/pytorch-pretrained-BERT/blob/35becc6d84f620c3da48db460d6fb900f2451782/pytorch_pretrained_bert/modeling.py#L387-L394
I don't understand how it is useful to wrap the BertLMPredictionHead class like that, perhaps it was forgotten in some refactoring ? I can do a PR if you confirm me it ca... | closed | completed | false | 1 | [] | [] | 2019-01-11T10:35:36Z | 2019-01-14T09:14:56Z | 2019-01-14T09:14:56Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | artemisart | 9,201,969 | MDQ6VXNlcjkyMDE5Njk= | User | false |
huggingface/transformers | 397,243,635 | MDU6SXNzdWUzOTcyNDM2MzU= | 175 | https://github.com/huggingface/transformers/issues/175 | https://api.github.com/repos/huggingface/transformers/issues/175 | RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1) | sir i was pretrained for our BERT-Base model for Multi-GPU training 8 GPUs. preprocessing succeed but next step training it shown error. in run_lm_finetuning.py.
--
`python3 run_lm_finetuning.py --bert_model bert-base-uncased --do_train --train_file vocab007.txt --output_dir models --num_train_epochs 5.0 --learning_r... | closed | completed | false | 11 | [] | [] | 2019-01-09T07:26:46Z | 2019-01-14T09:15:38Z | 2019-01-14T09:15:11Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | MuruganR96 | 35,978,784 | MDQ6VXNlcjM1OTc4Nzg0 | User | false |
huggingface/transformers | 398,771,339 | MDU6SXNzdWUzOTg3NzEzMzk= | 194 | https://github.com/huggingface/transformers/issues/194 | https://api.github.com/repos/huggingface/transformers/issues/194 | run_classifier.py doesn't save any configurations and I can't load the trained model. | closed | completed | false | 2 | [] | [] | 2019-01-14T07:16:07Z | 2019-01-14T09:19:59Z | 2019-01-14T09:19:59Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | anz2 | 24,385,276 | MDQ6VXNlcjI0Mzg1Mjc2 | User | false | |
huggingface/transformers | 381,872,071 | MDU6SXNzdWUzODE4NzIwNzE= | 30 | https://github.com/huggingface/transformers/issues/30 | https://api.github.com/repos/huggingface/transformers/issues/30 | [Feature request] Add example of finetuning the pretrained models on custom corpus | closed | completed | false | 2 | [] | [] | 2018-11-17T15:19:58Z | 2019-01-15T14:27:27Z | 2018-11-17T22:03:43Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | elyase | 1,175,888 | MDQ6VXNlcjExNzU4ODg= | User | false | |
huggingface/transformers | 397,673,308 | MDU6SXNzdWUzOTc2NzMzMDg= | 177 | https://github.com/huggingface/transformers/issues/177 | https://api.github.com/repos/huggingface/transformers/issues/177 | run_lm_finetuning.py does not define a do_lower_case argument | The file references `args.do_lower_case`, but doesn't have the corresponding `parser.add_argument` call.
As an aside, has anyone successfully applied LM fine-tuning for a downstream task (using this code, or maybe using the original tensorflow implementation)? I'm not even sure if the code will run in its current st... | closed | completed | false | 7 | [] | [] | 2019-01-10T05:01:17Z | 2019-01-15T14:34:15Z | 2019-01-14T09:04:46Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | nikitakit | 252,225 | MDQ6VXNlcjI1MjIyNQ== | User | false |
huggingface/transformers | 399,155,566 | MDU6SXNzdWUzOTkxNTU1NjY= | 196 | https://github.com/huggingface/transformers/issues/196 | https://api.github.com/repos/huggingface/transformers/issues/196 | TODO statement on Question/Answering Model | Has this been confirmed?
https://github.com/huggingface/pytorch-pretrained-BERT/blob/647c98353090ee411e1ef9016b2a458becfe36f9/pytorch_pretrained_bert/modeling.py#L1084 | closed | completed | false | 1 | [] | [] | 2019-01-15T01:56:48Z | 2019-01-16T12:23:14Z | 2019-01-16T12:23:14Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | phatlast96 | 10,504,024 | MDQ6VXNlcjEwNTA0MDI0 | User | false |
huggingface/transformers | 398,252,066 | MDU6SXNzdWUzOTgyNTIwNjY= | 187 | https://github.com/huggingface/transformers/issues/187 | https://api.github.com/repos/huggingface/transformers/issues/187 | issue is, that ##string will repeats at intermediate, it collapses all index for mask words | ```
----------------------------------> how much belan i havin my credit card and also debitcard
----------------------------------> ['how', 'much', 'belan', 'i', 'havin', 'my', 'credit', 'card', 'and', 'also', 'debitcard']
----------------------------------> ['**belan**', '**havin**']
-----------------------------... | closed | completed | false | 3 | [] | [] | 2019-01-11T11:35:06Z | 2019-01-18T09:07:34Z | 2019-01-14T09:16:36Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | MuruganR96 | 35,978,784 | MDQ6VXNlcjM1OTc4Nzg0 | User | false |
huggingface/transformers | 400,968,613 | MDU6SXNzdWU0MDA5Njg2MTM= | 209 | https://github.com/huggingface/transformers/issues/209 | https://api.github.com/repos/huggingface/transformers/issues/209 | Missing softmax in BertForQuestionAnswering after linear layer? | https://github.com/huggingface/pytorch-pretrained-BERT/blob/0a9d7c7edb20a3e82cfbb4b72515575543784823/pytorch_pretrained_bert/modeling.py#L1089-L1113
It seems there should be a softmax after the linear layer, or did I miss something? | closed | completed | false | 1 | [] | [] | 2019-01-19T06:55:30Z | 2019-01-19T08:26:35Z | 2019-01-19T08:26:35Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | jianyucai | 28,853,070 | MDQ6VXNlcjI4ODUzMDcw | User | false |
huggingface/transformers | 400,582,170 | MDU6SXNzdWU0MDA1ODIxNzA= | 204 | https://github.com/huggingface/transformers/issues/204 | https://api.github.com/repos/huggingface/transformers/issues/204 | Two to Three mask word prediction at the same sentence is very complex | Two to Three mask word prediction at the same sentence also very complex.
how to get good accuracy?
if i have to pretrained bert model and own dataset with **masked_lm_prob=0.25** (https://github.com/google-research/bert#pre-training-with-bert), what will happened?
Thanks. | closed | completed | false | 2 | [] | [] | 2019-01-18T05:52:40Z | 2019-01-22T16:51:09Z | 2019-01-22T16:50:03Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | MuruganR96 | 35,978,784 | MDQ6VXNlcjM1OTc4Nzg0 | User | false |
huggingface/transformers | 402,103,567 | MDU6SXNzdWU0MDIxMDM1Njc= | 219 | https://github.com/huggingface/transformers/issues/219 | https://api.github.com/repos/huggingface/transformers/issues/219 | How can I get the confidence score for the classification task | In evaluation step, it seems it only shows the predicted label for the data instance.
How can I get the confidence score for each class? | closed | completed | false | 1 | [] | [] | 2019-01-23T07:21:51Z | 2019-01-23T07:36:01Z | 2019-01-23T07:35:25Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | fenneccat | 22,452,009 | MDQ6VXNlcjIyNDUyMDA5 | User | false |
huggingface/transformers | 401,890,579 | MDU6SXNzdWU0MDE4OTA1Nzk= | 216 | https://github.com/huggingface/transformers/issues/216 | https://api.github.com/repos/huggingface/transformers/issues/216 | Training classifier does not work for more than two classes | I am trying to run a classifier on the AGN data which has four classes. I am using the following command to train and evaluate the classifier.
python examples/run_classifier.py \
--task_name agn \
--do_train \
--do_eval \
--do_lower_case \
--data_dir $GLUE_DIR/AGN/ \
--bert_model bert-base-uncased ... | closed | completed | false | 2 | [] | [] | 2019-01-22T18:14:52Z | 2019-01-23T13:38:42Z | 2019-01-23T13:38:42Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | satyakesav | 7,447,204 | MDQ6VXNlcjc0NDcyMDQ= | User | false |
huggingface/transformers | 400,544,254 | MDU6SXNzdWU0MDA1NDQyNTQ= | 203 | https://github.com/huggingface/transformers/issues/203 | https://api.github.com/repos/huggingface/transformers/issues/203 | Add some new layers from BertModel and then 'grad' error occurs | I wanna do the fine-tuning work by adding a textcnn on the base of BertModel. I write a new class and add two layers of conv (like a textcnn) basically on Embedding Layer. And then an error occurs, called "grad can be implicitly created only for scalar outputs" i search for the Internet and can't find a good solution t... | closed | completed | false | 2 | [] | [] | 2019-01-18T02:19:58Z | 2019-01-23T16:34:28Z | 2019-01-23T16:34:28Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | lhbrichard | 33,123,730 | MDQ6VXNlcjMzMTIzNzMw | User | false |
huggingface/transformers | 402,517,534 | MDU6SXNzdWU0MDI1MTc1MzQ= | 224 | https://github.com/huggingface/transformers/issues/224 | https://api.github.com/repos/huggingface/transformers/issues/224 | how to add new vocabulary? | for specific task, it is required to add new vocabulary for tokenizer.
It is ok that re-training for those vocabulary for me :)
Is it possible to add new vocabulary for tokenizer?
| closed | completed | false | 1 | [] | [] | 2019-01-24T02:42:38Z | 2019-01-24T05:13:11Z | 2019-01-24T05:13:10Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | hahmyg | 3,884,429 | MDQ6VXNlcjM4ODQ0Mjk= | User | false |
huggingface/transformers | 403,125,784 | MDU6SXNzdWU0MDMxMjU3ODQ= | 226 | https://github.com/huggingface/transformers/issues/226 | https://api.github.com/repos/huggingface/transformers/issues/226 | Logical error in the run_lm_finetuning? | Hi,
@thomwolf @nhatchan
@tholor @deepset-ai
Many thanks for amazing work with this repository =)
I maybe grossly wrong or just missed some line of the code somewhere, but it seems to me that there is a glaring issue in the overall logic of `examples/run_lm_finetuning.py` - I guess you never pre-trained the m... | closed | completed | false | 2 | [] | [] | 2019-01-25T11:51:02Z | 2019-01-25T14:35:21Z | 2019-01-25T14:35:21Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | snakers4 | 12,515,440 | MDQ6VXNlcjEyNTE1NDQw | User | false |
huggingface/transformers | 403,423,004 | MDU6SXNzdWU0MDM0MjMwMDQ= | 228 | https://github.com/huggingface/transformers/issues/228 | https://api.github.com/repos/huggingface/transformers/issues/228 | Freezing base transformer weights | As I understand, say if I'm doing a classification task, then the transformer weights, along with the top classification layer weights, are both trainable (i.e. `requires_grad=True`), correct? If so, is there a way to freeze the transformer weights, but only train the top layer? Is that a good idea in general when I ha... | closed | completed | false | 2 | [] | [] | 2019-01-26T09:09:36Z | 2019-01-26T09:45:04Z | 2019-01-26T09:45:04Z | null | 20260326T020023Z | 2026-03-26T02:00:23Z | ZhaofengWu | 11,954,789 | MDQ6VXNlcjExOTU0Nzg5 | User | false |
Transformers PR Slop Dataset
Normalized snapshots of issues, pull requests, comments, reviews, and linkage data from huggingface/transformers.
Files:
- issues.parquet
- pull_requests.parquet
- comments.parquet
- issue_comments.parquet (derived view of issue discussion comments)
- pr_comments.parquet (derived view of pull request discussion comments)
- pr_files.parquet
- pr_diffs.parquet
- reviews.parquet
- review_comments.parquet
- links.parquet
- events.parquet
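Each table is a standalone parquet file and can be read directly with pandas. A minimal sketch, assuming the files sit at the dataset root and that column names such as `title`, `body`, and `state` follow the issue schema shown in the preview above:

```python
import pandas as pd

# Load the issues table; `title`, `state`, `number`, and `created_at`
# are taken from the issue schema shown in the preview above.
issues = pd.read_parquet("issues.parquet")

# Quick sanity checks on the snapshot.
print(issues.shape)
print(issues["state"].value_counts())
print(issues[["number", "title", "created_at"]].head())
```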
Use:
- duplicate PR and issue analysis
- triage and ranking experiments
- eval set creation
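As one illustration of the duplicate-analysis use case, a rough duplicate-candidate pass over issue titles could look like the sketch below. TF-IDF cosine similarity and the 0.8 threshold are illustrative choices, not part of the dataset; the `title` and `number` columns are assumed from the issue schema.

```python
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

issues = pd.read_parquet("issues.parquet")
titles = issues["title"].fillna("").tolist()

# TF-IDF over titles: a deliberately simple near-duplicate signal.
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(titles)
sim = cosine_similarity(X)

# Keep only the upper triangle so each pair is reported once,
# then surface pairs above an arbitrary similarity threshold.
pairs = np.argwhere(np.triu(sim, k=1) > 0.8)
for i, j in pairs[:20]:
    print(issues.iloc[i]["number"], "<->", issues.iloc[j]["number"],
          round(float(sim[i, j]), 2))
```

Note this computes a dense pairwise matrix, which is fine for an eval-set experiment but will not scale to very large snapshots without approximate nearest-neighbor search.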
Notes:
- updated daily
- latest snapshot: 20260411T020033Z
- raw data only; no labels or moderation decisions
- PR metadata, file-level patch hunks, and full unified diffs are included
- full file contents for changed files are not included
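Since full unified diffs are included but file contents are not, patch-level analysis has to work from the diff text alone. A hypothetical join between the PR and diff tables, where the `number` join key and the `diff` text column are assumptions rather than documented schema:

```python
import pandas as pd

prs = pd.read_parquet("pull_requests.parquet")
diffs = pd.read_parquet("pr_diffs.parquet")

# Hypothetical schema: `number` identifies the PR in both tables and
# `diff` holds the unified diff text -- verify against the actual
# parquet columns before relying on this.
merged = prs.merge(diffs, on="number")

# Rough PR size proxy: line count of each unified diff.
merged["diff_lines"] = merged["diff"].fillna("").str.count("\n")
print(merged[["number", "diff_lines"]]
      .sort_values("diff_lines", ascending=False)
      .head())
```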