I'm trying to fine-tune the Facebook BART model, following this article, in order to classify text using my own dataset. I'm using the `Trainer` object to train:

```python
training_args = TrainingArguments(
    output_dir=model_directory,      # output directory
    num_train_epochs=1,              # total number of training epochs - 3
    per_device_train_batch_size=4,   # batch size per device during training - 16
    per_device_eval_batch_size=16,   # batch size for evaluation - 64
    warmup_steps=50,                 # number of warmup steps for learning rate scheduler - 500
    weight_decay=0.01,               # strength of weight decay
    logging_dir=model_logs,          # directory for storing logs
)

model = ….from_pretrained("facebook/bart-large-mnli")  # bart-large-mnli

trainer = Trainer(
    model=model,                          # the instantiated 🤗 Transformers model to be trained
    args=training_args,                   # training arguments, defined above
    compute_metrics=new_compute_metrics,  # a function to compute the metrics
    train_dataset=train_dataset,          # training dataset
    eval_dataset=val_dataset              # evaluation dataset
)
```

This is the tokenizer I used:

```python
from transformers import BartTokenizerFast
tokenizer = BartTokenizerFast.from_pretrained('facebook/bart-large-mnli')
```

But when I use `trainer.train()` it prints the following:

```
***** Running training *****
  ...
  Total train batch size (w. parallel, distributed & accumulation) = 16
```

followed by this error:

```
RuntimeError: Caught RuntimeError in replica 1 on device 1.
```
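The snippet above references `new_compute_metrics`, `train_dataset`, and `val_dataset` without defining them. The sketch below shows one plausible shape for those pieces, assuming the common pattern of wrapping the tokenizer's output in a `torch.utils.data.Dataset`; the class name, the `train_texts`/`train_labels` placeholders, and the accuracy-only metric are illustrative assumptions, not code from the original post.

```python
import numpy as np
import torch

# Hypothetical dataset wrapper: pairs the tokenizer's encodings with labels
# so that Trainer can index examples one at a time.
class ClassificationDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)

# train_texts/val_texts and train_labels/val_labels stand in for the
# question's own dataset, which is not shown.
train_encodings = tokenizer(train_texts, truncation=True, padding=True)
val_encodings = tokenizer(val_texts, truncation=True, padding=True)
train_dataset = ClassificationDataset(train_encodings, train_labels)
val_dataset = ClassificationDataset(val_encodings, val_labels)

def new_compute_metrics(eval_pred):
    # Trainer passes an EvalPrediction; unpacking yields (logits, labels).
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}
```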
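One detail the log does pin down: with `per_device_train_batch_size=4`, a total train batch size of 16 means four GPUs were visible, so the `Trainer` wrapped the model in `torch.nn.DataParallel`, and that wrapper is what produces the "Caught RuntimeError in replica 1 on device 1" framing around the underlying error. A quick way to check whether the failure is specific to the multi-GPU path is to make a single device visible before anything initializes CUDA; this is a debugging sketch under that assumption, not a fix taken from the post.

```python
import os

# Must run before torch/transformers touch CUDA (e.g. at the very top of the
# script). With one visible GPU, DataParallel is skipped and the underlying
# RuntimeError surfaces directly with a clearer traceback.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```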