Binary Classification Using PyTorch: Defining a Network
Dr. James McCaffrey of Microsoft Research tackles how to define a network in the second of a series of four articles that present a complete end-to-end production-quality example of binary classification using a PyTorch neural network, including a full Python code sample and data files.
The goal of a binary classification problem is to predict an output value that can be one of just two possible discrete values, such as “male” or “female.”
This article is the second in a series of four articles that present a complete end-to-end production-quality example of binary classification using a PyTorch neural network (see the first article about preparing data here).
The example problem is to predict if a banknote (think euro or dollar bill) is authentic or a forgery based on four predictor variables extracted from a digital image of the banknote.
The process of creating a PyTorch neural network binary classifier consists of six steps:
- Prepare the training and test data
- Implement a Dataset object to serve up the data
- Design and implement a neural network
- Write code to train the network
- Write code to evaluate the model (the trained network)
- Write code to save and use the model to make predictions for new previously unseen data
Each of the six steps is fairly complicated, and the steps are tightly coupled, which adds to the difficulty. This article covers the third step.
A good way to see where this series of articles is headed is to take a look at the screenshot of the demo program in Figure 1. The demo begins by creating Dataset and DataLoader objects which have been designed to work with the well-known Banknote Authentication data. Next, the demo creates a 4-(8-8)-1 deep neural network. Then the demo prepares training by setting up a loss function (binary cross entropy), a training optimizer function (stochastic gradient descent), and parameters for training (learning rate and max epochs).
The demo trains the neural network for 100 epochs using batches of 10 items at a time. An epoch is one complete pass through the training data. For example, if there were 2,000 training data items and training was performed using batches of 50 items at a time, one epoch would consist of processing 40 batches of data. During training, the demo computes and displays a measure of the current error. Because the error slowly decreases, training is succeeding.
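The epoch-and-batch arithmetic described above can be sketched directly; the counts come from the article, but the variable names are mine:

```python
# batches per epoch = number of training items / batch size
num_train_items = 2000   # the hypothetical example from the text
batch_size = 50
batches_per_epoch = num_train_items // batch_size   # 40 batches per epoch

# the demo itself uses 1,097 training items in batches of 10,
# so the final batch of each epoch is smaller than the others
demo_items = 1097
demo_batch = 10
full_batches, remainder = divmod(demo_items, demo_batch)  # 109 full batches, 7 items left over
```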
After training the network, the demo program computes the classification accuracy of the model on the training data (99.09 percent correct) and on the test data (99.27 percent correct). Because the two accuracy values are similar, it is likely that model overfitting has not occurred. After evaluating the trained model, the demo program saves the model using the state dictionary approach, which is the most common of three standard techniques.
The demo concludes by using the trained model to make a prediction. The four normalized input predictor values are (0.22, 0.09, -0.28, 0.16). The computed output value is 0.277069, which is less than 0.5, and therefore the prediction is class 0, which in turn means an authentic banknote.
This article assumes you have an intermediate or better familiarity with a C-family programming language, preferably Python, but doesn’t assume you know very much about PyTorch. The complete source code for the demo program, and the two data files used, are available in the download that accompanies this article. All normal error checking code has been omitted to keep the main ideas as clear as possible.
To run the demo program, you must have Python and PyTorch installed on your machine. The demo programs were developed on Windows 10 using the Anaconda 2020.02 64-bit distribution (which contains Python 3.7.6) and PyTorch version 1.6.0 for CPU installed via pip. You can find detailed step-by-step installation instructions for this configuration in my blog post here.
The Banknote Authentication Data
The raw Banknote Authentication data looks like:
3.6216, 8.6661, -2.8073, -0.44699, 0
4.5459, 8.1674, -2.4586, -1.46210, 0
. . .
-2.5419, -0.65804, 2.6842, 1.1952, 1
The raw data can be found online at banknote authentication Data Set. The goal is to predict the value in the fifth column (0 = authentic banknote, 1 = forged banknote) using the four predictor values. There are a total of 1,372 data items. The raw data was prepared in the following way. First, all four raw numeric predictor values were normalized by dividing by 20 so they’re all between -1.0 and +1.0. Next, 1-based ID values from 1 to 1372 were added so that items can be tracked. Next, a utility program split the data into a training data file with 1,097 randomly selected items (80 percent of the 1,372 items) and a test data file with 275 items (the other 20 percent).
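The k=20 normalization and the 80/20 split described above can be sketched in a few lines; the raw values are the first data line shown earlier, and the function and variable names are mine rather than part of the demo:

```python
def normalize_k20(raw_row):
  # divide each raw predictor value by 20 so results lie in [-1.0, +1.0]
  return [v / 20.0 for v in raw_row]

raw = [3.6216, 8.6661, -2.8073, -0.44699]   # first data item, predictors only
norm = normalize_k20(raw)
assert all(-1.0 <= v <= 1.0 for v in norm)

# the 80/20 train/test split sizes reported in the text
total = 1372
n_train = int(0.80 * total)   # 1097 items
n_test = total - n_train      # 275 items
```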
After the structure of the training and test files was established, I coded a PyTorch Dataset class to read data into memory and serve the data up in batches using a PyTorch DataLoader object. You can find the article that explains how to create Dataset objects and use them with DataLoader objects here.
The Overall Program Structure
The overall structure of the PyTorch binary classification program, with a few minor edits to save space, is shown in Listing 1. I indent my Python programs using two spaces rather than the more common four spaces as a matter of personal preference.
Listing 1: The Structure of the Demo Program
# banknote_bnn.py
# PyTorch 1.6.0-CPU Anaconda3-2020.02
# Python 3.7.6 Windows 10

import numpy as np
import torch as T
device = T.device("cpu")

# IDs 0001 to 1372 added
# data has been k=20 normalized (all four columns)
# ID  variance  skewness  kurtosis  entropy  class
# (0 = authentic, 1 = forgery)  # verified
# train: 1097 items (80%), test: 275 items (20%)

class BanknoteDataset(T.utils.data.Dataset):
  def __init__(self, src_file, num_rows=None): . . .
  def __len__(self): . . .
  def __getitem__(self, idx): . . .

# ----------------------------------------------------

def accuracy(model, ds): . . .

# ----------------------------------------------------

class Net(T.nn.Module):
  def __init__(self): . . .
  def forward(self, x): . . .

# ----------------------------------------------------

def main():
  # 0. get started
  print("Banknote authentication using PyTorch ")
  T.manual_seed(1)
  np.random.seed(1)

  # 1. create Dataset and DataLoader objects
  # 2. create neural network
  # 3. train network
  # 4. evaluate model
  # 5. save model
  # 6. make a prediction

  print("End Banknote demo ")

if __name__ == "__main__":
  main()
It’s important to document the versions of Python and PyTorch being used because both systems are under continuous development. Dealing with versioning incompatibilities is a significant headache when working with PyTorch and is something you should not underestimate.
I like to use “T” as the top-level alias for the torch package. Most of my colleagues don’t use a top-level alias and spell out “torch” dozens of times per program. Also, I use the full form of sub-packages rather than supplying aliases such as “import torch.nn.functional as functional.” In my opinion, using the full form is easier to understand and less error-prone than using many aliases.
The demo program defines a program-scope CPU device object. I usually develop my PyTorch programs on a desktop CPU machine. After I get that version working, converting to a CUDA GPU system only requires changing the global device object to T.device("cuda") plus a minor amount of debugging.
The demo program defines just one helper method, accuracy(). All of the rest of the program control logic is contained in a single main() function. It is possible to define other helper functions such as train_net(), evaluate_model(), and save_model(), but in my opinion this modularization approach unexpectedly makes the program more difficult to understand rather than easier to understand.
Defining a Neural Network for Binary Classification
The first step when designing a PyTorch neural network class is to determine its architecture. The number of input nodes is determined by the number of predictor values, four in the case of the Banknote Authentication data. Although there are several design alternatives for the output layer, by far the most common is to use a single output node, where the value of the node is coerced to between 0.0 and 1.0. Then a computed output value that is less than 0.5 corresponds to class 0 (authentic banknote for the demo data) and a computed output value that is greater than 0.5 corresponds to class 1 (forgery). This design assumes that the class-to-predict is encoded as 0 or 1 in the training data, rather than -1 or +1 as is used by some other machine learning binary classification techniques such as averaged perceptron.
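The 0.5 threshold rule described above can be expressed as a tiny helper; the function name is mine and is not part of the demo program:

```python
def output_to_class(p):
  # p is the network's sigmoid output, a value between 0.0 and 1.0
  # class 0 = authentic banknote, class 1 = forgery
  return 0 if p < 0.5 else 1

# the demo's prediction from earlier in the article:
# computed output 0.277069 is less than 0.5, so class 0 (authentic)
print(output_to_class(0.277069))
```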
The demo network uses two hidden layers, each with eight nodes, resulting in a 4-(8-8)-1 network. The number of hidden layers and the number of nodes in each layer are hyperparameters. Their values must be determined by trial and error guided by experience. The term “AutoML” is sometimes used for any system that programmatically, to some extent, tries to determine good hyperparameter values.
More hidden layers and more hidden nodes are not always better. The Universal Approximation Theorem (sometimes called the Cybenko Theorem) says, loosely, that for any neural architecture with multiple hidden layers, there is an equivalent architecture that has just one hidden layer. For example, a neural network that has two hidden layers with 5 nodes each is roughly equivalent to a network that has one hidden layer with 25 nodes.
The definition of class Net is shown in Listing 2. In general, most of my colleagues and I use the term “network” or “net” to describe a neural network before it’s been trained, and the term “model” to describe a neural network after it’s been trained.
Listing 2: Class Net Definition
class Net(T.nn.Module):
  def __init__(self):
    super(Net, self).__init__()
    self.hid1 = T.nn.Linear(4, 8)  # 4-(8-8)-1
    self.hid2 = T.nn.Linear(8, 8)
    self.oupt = T.nn.Linear(8, 1)

    T.nn.init.xavier_uniform_(self.hid1.weight)
    T.nn.init.zeros_(self.hid1.bias)
    T.nn.init.xavier_uniform_(self.hid2.weight)
    T.nn.init.zeros_(self.hid2.bias)
    T.nn.init.xavier_uniform_(self.oupt.weight)
    T.nn.init.zeros_(self.oupt.bias)

  def forward(self, x):
    z = T.tanh(self.hid1(x))
    z = T.tanh(self.hid2(z))
    z = T.sigmoid(self.oupt(z))
    return z
The Net class inherits from torch.nn.Module which provides much of the complex behind-the-scenes functionality. The most common structure for a binary classification network is to define the network layers and their associated weights and biases in the __init__() method, and the input-output computations in the forward() method.
The __init__() Method
The __init__() method begins by defining the demo network’s three layers of nodes:
def __init__(self):
  super(Net, self).__init__()
  self.hid1 = T.nn.Linear(4, 8)  # 4-(8-8)-1
  self.hid2 = T.nn.Linear(8, 8)
  self.oupt = T.nn.Linear(8, 1)
The first statement invokes the __init__() constructor method of the Module class from which the Net class is derived. The next three statements define the two hidden layers and the single output layer. Notice that you don’t explicitly define an input layer because no processing takes place on the input values.
The Linear() class defines a fully connected network layer. You can loosely think of each of the three layers as three standalone functions (they’re actually class objects). Therefore the order in which you define the layers doesn’t matter. In other words, defining the three layers in this order:
self.hid2 = T.nn.Linear(8, 8)  # hidden 2
self.oupt = T.nn.Linear(8, 1)  # output
self.hid1 = T.nn.Linear(4, 8)  # hidden 1
has no effect on how the network computes its output. However, it makes sense to define the network's layers in the order in which they're used when computing an output value.
The demo program initializes the network’s weights and biases like so:
T.nn.init.xavier_uniform_(self.hid1.weight)
T.nn.init.zeros_(self.hid1.bias)
T.nn.init.xavier_uniform_(self.hid2.weight)
T.nn.init.zeros_(self.hid2.bias)
T.nn.init.xavier_uniform_(self.oupt.weight)
T.nn.init.zeros_(self.oupt.bias)
If a neural network with one hidden layer has ni input nodes, nh hidden nodes, and no output nodes, there are (ni * nh) weights connecting the input nodes to the hidden nodes, and there are (nh * no) weights connecting the hidden nodes to the output nodes. Each hidden node and each output node has a special weight called a bias, so there’d be (nh + no) biases. For example, a 4-5-3 neural network has (4 * 5) + (5 * 3) = 35 weights and (5 + 3) = 8 biases. Therefore, the demo network has (4 * 8) + (8 * 8) + (8 * 1) = 104 weights and (8 + 8 + 1) = 17 biases.
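The weight and bias counting above generalizes to any stack of fully connected layers. Here is a minimal sketch of the arithmetic; the function name is mine:

```python
def count_params(layer_sizes):
  # layer_sizes lists node counts from input to output, e.g. [4, 8, 8, 1]
  # weights: each adjacent pair of layers contributes (prev * next) weights
  weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
  # biases: one bias per non-input node
  biases = sum(layer_sizes[1:])
  return weights, biases

print(count_params([4, 5, 3]))     # (35, 8)   -- the 4-5-3 example
print(count_params([4, 8, 8, 1]))  # (104, 17) -- the demo's 4-(8-8)-1 network
```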
Each layer has a set of weights which connect it to the previous layer. In other words, self.hid1.weight is a matrix of weights from the input nodes to the nodes in the hid1 layer, self.hid2.weight is a matrix of weights from the hid1 nodes to the hid2 nodes, and self.oupt.weight is a matrix of weights from the hid2 nodes to the output nodes.