Training a Classifier
This is it. You have seen how to define neural networks, compute loss and make updates to the weights of the network.
Now you might be thinking,
What about data?
Generally, when you have to deal with image, text, audio or video data, you can use standard Python packages that load data into a NumPy array. Then you can convert this array into a torch.*Tensor (a minimal example of this round trip follows the list below).
- For images, packages such as Pillow and OpenCV are useful
- For audio, packages such as scipy and librosa
- For text, either raw Python or Cython based loading, or NLTK and SpaCy are useful
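For example, here is a minimal sketch of that NumPy-to-Tensor round trip; the random array below merely stands in for data decoded by one of the packages above:

import numpy as np
import torch

# placeholder for an image decoded by e.g. Pillow or OpenCV: H x W x C, float32
arr = np.random.rand(32, 32, 3).astype(np.float32)

t = torch.from_numpy(arr)   # zero-copy: the tensor shares memory with arr
t = t.permute(2, 0, 1)      # reorder to C x H x W, PyTorch's image layout
print(t.shape)              # torch.Size([3, 32, 32])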
Specifically for vision, we have created a package called torchvision, which provides data loaders for common datasets such as ImageNet, CIFAR10 and MNIST, and data transforms for images, via torchvision.datasets and torch.utils.data.DataLoader. This provides a huge convenience and avoids writing boilerplate code.
For this tutorial, we will use the CIFAR10 dataset. It has the classes: 'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'. The images in CIFAR-10 are of size 3x32x32, i.e. 3-channel color images of 32x32 pixels in size.
[Figure: sample CIFAR10 images from the ten classes]

Training an image classifier
We will do the following steps in order:

1. Load and normalize the CIFAR10 training and test datasets using torchvision
2. Define a Convolutional Neural Network
3. Define a loss function
4. Train the network on the training data
5. Test the network on the test data
1. Load and normalize CIFAR10
Using torchvision, it's extremely easy to load CIFAR10.
import torch
import torchvision
import torchvision.transforms as transforms
The outputs of torchvision datasets are PILImage images of range [0, 1]. We transform them to Tensors of normalized range [-1, 1].
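To see the arithmetic: Normalize computes output = (input - mean) / std per channel, so with mean = std = 0.5 the endpoints 0 and 1 map to -1 and 1. A quick check (illustrative, not part of the tutorial code):

import torch
import torchvision.transforms as transforms

norm = transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
# one 2x2 "image" per channel, filled with 0.0, 0.5 and 1.0 respectively
x = torch.stack([torch.full((2, 2), v) for v in (0.0, 0.5, 1.0)])
print(norm(x)[:, 0, 0])  # tensor([-1., 0., 1.])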
Note

If running on Windows and you get a BrokenPipeError, try setting the num_workers argument of torch.utils.data.DataLoader() to 0.
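Concretely, that means constructing the loader with num_workers=0 instead of 2 (using the trainset defined in the snippet below):

trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=0)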
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

batch_size = 4

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
                                          shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
                                         shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz
Extracting ./data/cifar-10-python.tar.gz to ./data
Files already downloaded and verified
Let us show some of the training images, for fun.
import matplotlib.pyplot as plt
import numpy as np

# functions to show an image


def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()


# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join(f'{classes[labels[j]]:5s}' for j in range(batch_size)))
frog plane deer car
2. Define a Convolutional Neural Network
Copy the neural network from the Neural Networks section before and modify it to take 3-channel images (instead of the 1-channel images it was originally defined for).
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


net = Net()
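The 16 * 5 * 5 input size of fc1 follows from tracing the spatial dimensions: a 32x32 input becomes 28x28 after the first 5x5 convolution, 14x14 after pooling, 10x10 after the second convolution, and 5x5 after the final pooling, with 16 channels. A quick sanity check with a dummy batch (a sketch, not part of the original tutorial):

dummy = torch.randn(1, 3, 32, 32)   # one fake CIFAR10-sized image
print(net(dummy).shape)             # torch.Size([1, 10]), one score per class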
3. Define a loss function and optimizer
Let’s use a Classification Cross-Entropy loss and SGD with momentum.
import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
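As a reminder of the interface (an illustrative aside, using made-up scores and labels): nn.CrossEntropyLoss expects raw, unnormalized scores from the network and integer class indices as targets, applying the softmax internally.

logits = torch.randn(4, 10)            # fake scores for a batch of 4 images
targets = torch.tensor([3, 0, 9, 1])   # fake ground-truth class indices
print(criterion(logits, targets))      # a single scalar loss tensor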
4. Train the network
This is when things start to get interesting. We simply have to loop over our data iterator, feed the inputs to the network, and optimize.
for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}')
            running_loss = 0.0

print('Finished Training')
[1,  2000] loss: 2.144
[1,  4000] loss: 1.835
[1,  6000] loss: 1.677
[1,  8000] loss: 1.573
[1, 10000] loss: 1.526
[1, 12000] loss: 1.447
[2,  2000] loss: 1.405
[2,  4000] loss: 1.363
[2,  6000] loss: 1.341
[2,  8000] loss: 1.340
[2, 10000] loss: 1.315
[2, 12000] loss: 1.281
Finished Training
Let’s quickly save our trained model:
PATH = './cifar_net.pth'
torch.save(net.state_dict(), PATH)
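The state_dict is just a mapping from parameter names to tensors; a quick peek (illustrative, not a tutorial step):

for name, param in net.state_dict().items():
    print(name, tuple(param.shape))
# conv1.weight (6, 3, 5, 5)
# conv1.bias (6,)
# ...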
See Saving and Loading Models for more details on saving PyTorch models.
5. Test the network on the test data
We have trained the network for 2 passes over the training dataset. But we need to check if the network has learnt anything at all.
We will check this by predicting the class label that the neural network outputs, and checking it against the ground-truth. If the prediction is correct, we add the sample to the list of correct predictions.
Okay, first step. Let us display an image from the test set to get familiar.
dataiter = iter(testloader)
images, labels = next(dataiter)

# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join(f'{classes[labels[j]]:5s}' for j in range(4)))
GroundTruth: cat ship ship plane
Next, let's load back in our saved model (note: saving and re-loading the model wasn't necessary here, we only did it to illustrate how to do so):
net = Net()
net.load_state_dict(torch.load(PATH))
<All keys matched successfully>
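As an aside (not a step in this tutorial): if you later load a checkpoint saved on a GPU machine onto a CPU-only one, torch.load takes a map_location argument to remap the storages:

state_dict = torch.load(PATH, map_location=torch.device('cpu'))
net.load_state_dict(state_dict)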
Okay, now let us see what the neural network thinks these examples above are:
outputs = net(images)
The outputs are energies for the 10 classes. The higher the energy for a class, the more the network thinks that the image is of the particular class. So, let's get the index of the highest energy:
_, predicted = torch.max(outputs, 1)

print('Predicted: ', ' '.join(f'{classes[predicted[j]]:5s}' for j in range(4)))
Predicted: cat ship truck ship
The results seem pretty good.
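As an aside, torch.max over dim=1 returns both the maximum values and their indices; since only the indices are used here, torch.argmax(outputs, 1) would be equivalent. A tiny illustration with made-up scores:

scores = torch.tensor([[0.1, 2.0, -1.0],
                       [1.5, 0.3, 0.2]])
values, idx = torch.max(scores, 1)
print(values)                   # tensor([2.0000, 1.5000])
print(idx)                      # tensor([1, 0])
print(torch.argmax(scores, 1))  # tensor([1, 0])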
Let us look at how the network performs on the whole dataset.
correct = 0
total = 0
# since we're not training, we don't need to calculate the gradients for our outputs
with torch.no_grad():
    for data in testloader:
        images, labels = data
        # calculate outputs by running images through the network
        outputs = net(images)
        # the class with the highest energy is what we choose as prediction
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f'Accuracy of the network on the 10000 test images: {100 * correct // total} %')
Accuracy of the network on the 10000 test images: 54 %
That looks way better than chance, which is 10% accuracy (randomly picking a class out of 10 classes). Seems like the network learnt something.
Hmmm, what are the classes that performed well, and the classes that did not perform well:
# prepare to count predictions for each class
correct_pred = {classname: 0 for classname in classes}
total_pred = {classname: 0 for classname in classes}

# again no gradients needed
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predictions = torch.max(outputs, 1)
        # collect the correct predictions for each class
        for label, prediction in zip(labels, predictions):
            if label == prediction:
                correct_pred[classes[label]] += 1
            total_pred[classes[label]] += 1

# print accuracy for each class
for classname, correct_count in correct_pred.items():
    accuracy = 100 * float(correct_count) / total_pred[classname]
    print(f'Accuracy for class: {classname:5s} is {accuracy:.1f} %')
Accuracy for class: plane is 37.9 %
Accuracy for class: car   is 62.2 %
Accuracy for class: bird  is 45.6 %
Accuracy for class: cat   is 29.2 %
Accuracy for class: deer  is 50.3 %
Accuracy for class: dog   is 45.9 %
Accuracy for class: frog  is 60.1 %
Accuracy for class: horse is 70.3 %
Accuracy for class: ship  is 82.9 %
Accuracy for class: truck is 63.1 %
Okay, so what next?
How do we run these neural networks on the GPU?
Training on GPU
Just like how you transfer a Tensor onto the GPU, you transfer the neural net onto the GPU.
Let’s first define our device as the first visible cuda device if we haveCUDA available:
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Assuming that we are on a CUDA machine, this should print a CUDA device:
print(device)
cuda:0
The rest of this section assumes that device is a CUDA device.
Then these methods will recursively go over all modules and convert theirparameters and buffers to CUDA tensors:
net.to(device)
Remember that you will have to send the inputs and targets at every stepto the GPU too:
inputs, labels = data[0].to(device), data[1].to(device)
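Putting it together, the inner loop of the training code from step 4 would look like this on GPU (a sketch, assuming the net, criterion, optimizer and trainloader defined above):

net.to(device)

for epoch in range(2):
    for data in trainloader:
        # move the batch to the same device as the model
        inputs, labels = data[0].to(device), data[1].to(device)

        optimizer.zero_grad()
        loss = criterion(net(inputs), labels)
        loss.backward()
        optimizer.step()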
Why don't I notice a MASSIVE speedup compared to CPU? Because your network is really small.
Exercise: Try increasing the width of your network (argument 2 of the first nn.Conv2d, and argument 1 of the second nn.Conv2d; they need to be the same number), and see what kind of speedup you get.
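For instance, widening the intermediate feature maps from 6 channels to 64 (an arbitrary illustrative value, and a hypothetical class name) would look like this; everything after conv2 stays as before:

class WiderNet(nn.Module):  # hypothetical name, for this exercise only
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 64, 5)    # argument 2 widened: 6 -> 64
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(64, 16, 5)   # argument 1 must match: 64
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)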
Goals achieved:

- Understand PyTorch's Tensor library and neural networks at a high level.
- Train a small neural network to classify images.
Training on multiple GPUs
If you want to see even more MASSIVE speedup using all of your GPUs,please check out Optional: Data Parallelism.
Where do I go next?
- Train neural nets to play video games
- Train a face generator using Generative Adversarial Networks
- Train a word-level language model using Recurrent LSTM networks
- Discuss PyTorch on the Forums