Comments (6)
I remembered I have multi-GPU functionality for bonito:
https://github.com/kishwarshafin/bonito/blob/nanoporetech-master/bonito/basecaller_distributed.py
from bonito.
Hey @noncodo
Yes, it's possible and easy to add; see the patch below.
--- a/bonito/train.py
+++ b/bonito/train.py
@@ -69,6 +69,12 @@ def main(args):
         print("* error: Cannot use AMP: Apex package needs to be installed manually, See https://github.com/NVIDIA/apex")
         exit(1)
+    if args.multi_gpu:
+        from torch.nn import DataParallel
+        model = DataParallel(model)
+        model.stride = model.module.stride
+        model.alphabet = model.module.alphabet
+
     schedular = CosineAnnealingLR(optimizer, args.epochs * len(train_loader))
     log_interval = np.floor(len(train_dataset) / args.batch * 0.05)
@@ -80,7 +86,15 @@ def main(args):
             log_interval, model, device, train_loader, optimizer, epoch, use_amp=args.amp
         )
         test_loss, mean, median = test(model, device, test_loader)
+
+        if args.multi_gpu:
+            state = model.module.state_dict()
+        else:
+            state = model.state_dict()
+
+        # save the unwrapped model state so checkpoints load without DataParallel
-        torch.save(model.state_dict(), os.path.join(workdir, "weights_%s.tar" % epoch))
+        torch.save(state, os.path.join(workdir, "weights_%s.tar" % epoch))
+
         with open(os.path.join(workdir, 'training.csv'), 'a', newline='') as csvfile:
             csvw = csv.writer(csvfile, delimiter=',')
             if epoch == 1:
@@ -111,6 +125,7 @@ def argparser():
     parser.add_argument("--batch", default=32, type=int)
     parser.add_argument("--chunks", default=1000000, type=int)
     parser.add_argument("--validation_split", default=0.99, type=float)
+    parser.add_argument("--multi-gpu", action="store_true", default=False)
     parser.add_argument("--amp", action="store_true", default=False)
     parser.add_argument("-f", "--force", action="store_true", default=False)
     return parser
I found DataParallel could hang multi-GPU systems without NVLink/NVSwitch, so I haven't merged it yet. Guarding the import and use behind --multi-gpu is probably safe, though, so I'll look to get this into master.
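A minimal, CPU-safe sketch of the wrap-and-unwrap pattern the patch relies on (a toy nn.Linear stands in for bonito's actual network):

```python
import torch
from torch import nn

# Toy stand-in for the bonito model; any nn.Module behaves the same way here
model = nn.Linear(4, 2)
wrapped = nn.DataParallel(model)  # falls back to the plain module on CPU-only hosts

# Forward still works exactly as before wrapping
out = wrapped(torch.zeros(3, 4))

# Under DataParallel every state_dict key gains a "module." prefix,
# so checkpoints should be taken from the underlying module instead,
# keeping them loadable without DataParallel.
state = wrapped.module.state_dict()
```

This is why the patch saves `model.module.state_dict()` when `--multi-gpu` is set.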
Wonderful! I'll give 'er a spin on my NVLink-less system and report back. Beats the heck out of splitting fast5s into n batches.
Oh sorry @noncodo, I just realised you are after multi-GPU inference, not training!
That will be a little more complicated as the fast5 reader, decoder and fasta writer sit in different processes and the main loop is currently set up for a single consumer.
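For illustration only, a toy stdlib-multiprocessing sketch of what a multi-consumer loop could look like; the reader, decoder, and writer stages here are placeholders, not bonito's actual processes:

```python
from multiprocessing import Process, Queue

N_DECODERS = 2  # one consumer per GPU in a real setup

def reader(read_q):
    # stand-in for the fast5 reader process: produce dummy reads
    for i in range(8):
        read_q.put((f"read_{i}", "SIGNAL"))
    for _ in range(N_DECODERS):  # one stop sentinel per decoder
        read_q.put(None)

def decoder(read_q, out_q):
    # stand-in for a per-GPU basecalling consumer
    while (item := read_q.get()) is not None:
        name, signal = item
        out_q.put((name, signal.lower()))
    out_q.put(None)  # tell the writer this decoder is finished

def main():
    read_q, out_q = Queue(), Queue()
    Process(target=reader, args=(read_q,)).start()
    for _ in range(N_DECODERS):
        Process(target=decoder, args=(read_q, out_q)).start()
    # single writer (stand-in for the fasta writer) drains until all decoders stop
    results, done = {}, 0
    while done < N_DECODERS:
        item = out_q.get()
        if item is None:
            done += 1
        else:
            results[item[0]] = item[1]
    return results

if __name__ == "__main__":
    main()
```

The single-consumer assumption in the current loop is exactly the part the sentinel counting replaces here.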
@iiSeymour and @noncodo,
I have recently implemented multi-GPU support for HELEN (https://github.com/kishwarshafin/helen), both training and inference. You'd have to switch to DistributedDataParallel. The DataParallel implementation we had gave a little bit of a speedup but nothing much, whereas DistributedDataParallel gave us a big improvement.
The way bonito can do it is to use a dataloader that creates segments of data for each GPU to process, and each process can write its own fasta/fastq. Keep track of the names of the fasta/fastq files and concatenate them, or leave it to the user to cat them.
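The segment-per-GPU, file-per-process scheme described above could be sketched like this (hypothetical helper names, and dummy '!' quality strings since real scores aren't assumed):

```python
import os
import tempfile

def shard(items, rank, world_size):
    # contiguous segment of the read list for one GPU process
    # (hypothetical helper; bonito's dataloader would do this differently)
    per_rank = -(-len(items) // world_size)  # ceiling division
    return items[rank * per_rank:(rank + 1) * per_rank]

def run_rank(rank, world_size, reads, outdir):
    # each process "basecalls" its shard and writes its own fastq;
    # '!' is a dummy quality string, not a real score
    path = os.path.join(outdir, f"calls.rank{rank}.fastq")
    with open(path, "w") as fq:
        for name, seq in shard(reads, rank, world_size):
            fq.write(f"@{name}\n{seq}\n+\n{'!' * len(seq)}\n")
    return path

def merge(paths, merged_path):
    # the final "cat" step over the per-rank outputs
    with open(merged_path, "w") as out:
        for p in paths:
            with open(p) as f:
                out.write(f.read())

reads = [(f"read_{i}", "ACGT") for i in range(5)]
with tempfile.TemporaryDirectory() as d:
    paths = [run_rank(r, 2, reads, d) for r in range(2)]
    merge(paths, os.path.join(d, "merged.fastq"))
```

In a real DistributedDataParallel run, `rank` and `world_size` would come from the process group rather than being passed in by hand.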
I am not sure if Bonito is at that point in production; I think it is still not producing quality scores? At least the version we are working on doesn't. Let me know if you have any questions.
Distributed training script: https://github.com/kishwarshafin/helen/blob/master/helen/modules/python/models/train_distributed.py
Distributed inference script: https://github.com/kishwarshafin/helen/blob/master/helen/modules/python/models/predict_gpu.py
For performant multi-GPU inference, see https://github.com/nanoporetech/dorado