Training deep learning models is time-consuming, and you can easily spend a day on training alone. That is why most models are trained on a GPU, and doing so is the best practice in this area.
I personally prefer PyTorch for deep learning, and in this blog post I would like to share the helper functions I use for training.
Get the Default Device on the Machine
The first helper returns the default device on the machine. Sometimes I train on my personal laptop, which doesn’t have a GPU; this function lets me switch seamlessly based on where the model is being trained.
torch.cuda.is_available() is a built-in function provided by PyTorch that checks whether a GPU is available.
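Here is a minimal sketch of such a helper; the name get_default_device is my own assumption, so the exact snippet in the course notebook may differ.

```python
import torch

def get_default_device():
    """Return a CUDA device if a GPU is available, otherwise fall back to the CPU."""
    # get_default_device is an assumed name; the course notebook's version may differ
    if torch.cuda.is_available():
        return torch.device('cuda')
    return torch.device('cpu')

device = get_default_device()
```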
Move the Data to the Default Device
By default, data is stored in CPU memory, so here is a function that moves it to the GPU.
The function takes the data and a device as input.
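A minimal sketch of to_device; the recursion into lists/tuples and the non_blocking flag are assumptions about the details, following the common pattern, so they may differ from the original snippet.

```python
def to_device(data, device):
    """Move a tensor, or a list/tuple of tensors, to the chosen device."""
    if isinstance(data, (list, tuple)):
        # Recurse so that batches like (inputs, labels) are moved element-wise
        return [to_device(x, device) for x in data]
    return data.to(device, non_blocking=True)
```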
As an example of its usage, below I move the model (convNet()) to the default device (the GPU).
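Something along these lines; convNet is the model class referred to in the post, and get_default_device is the assumed helper sketched above.

```python
device = get_default_device()
model = to_device(convNet(), device)  # convNet() builds the model; to_device puts its parameters on the device
```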
Data to GPU
The above example shows how to move a single object to the GPU; it’s a bit trickier when it comes to the dataset itself. A GPU has a finite amount of memory, while most deep learning datasets run into gigabytes, so we need to strike a careful balance in how the data is handled.
yield to_device(b, self.device) loads each batch onto the device only when it is needed for training, so memory usage depends on the batch size rather than the full dataset; in other words, only the memory that is actually required gets used.
We wrap the PyTorch DataLoader in a DeviceDataLoader so it can be used like any other data loader. Here is the complete code block with usage.
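The sketch below follows the yield to_device(b, self.device) pattern described above; it is adapted from the course notebook, so names and details may differ slightly. The small TensorDataset in the usage example is a placeholder I added so the snippet is self-contained.

```python
from torch.utils.data import DataLoader, TensorDataset
import torch

class DeviceDataLoader():
    """Wrap a DataLoader and move each batch to the device as it is requested."""
    def __init__(self, dl, device):
        self.dl = dl
        self.device = device

    def __iter__(self):
        # Move one batch at a time, so only the current batch occupies GPU memory
        for b in self.dl:
            yield to_device(b, self.device)

    def __len__(self):
        # Number of batches in the underlying DataLoader
        return len(self.dl)

# Placeholder dataset for illustration: 256 samples of 784 features with 10 classes
train_ds = TensorDataset(torch.randn(256, 784), torch.randint(0, 10, (256,)))
train_loader = DeviceDataLoader(DataLoader(train_ds, batch_size=64, shuffle=True), device)

for xb, yb in train_loader:
    pass  # each batch arrives here already on the default device
```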
References:
Deep Learning with PyTorch course:
aakashns/04-feedforward-nn notebook on Jovian (jovian.ai)
As the author of the course mentions, these classes are derived from the fastai library.