Presented by: Jan Lau, Anas Mahdi, Will Thibault, Jiwon Yang
Introduction:
Face recognition is a technology that maps a face to a specific identity. The process involves two tasks: (1) identifying and classifying a face as a certain identity, and (2) verifying whether two faces map to the same identity. A loss function measures how well a model's predictions fit the given data; in face recognition, loss functions are used to train convolutional neural networks (CNNs) to produce discriminative features. The softmax function converts a vector of class scores into probabilities that range between 0 and 1 and sum to 1, and cross-entropy loss is the negative log of the probability assigned to the true class. Combining the two in the last fully connected layer of the CNN yields the softmax loss function:
L_1 = -\log \frac{e^{w_y^T x}}{e^{w_y^T x} + \sum_{k \neq y} e^{w_k^T x}}   [1]
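To make the combination concrete, the following is a minimal NumPy sketch of the softmax loss above. The scores and class index are illustrative values, not from the paper.

```python
import numpy as np

def softmax_loss(logits, y):
    # logits: the scores w_k^T x for each class k; y: index of the true class.
    # Subtracting the max before exponentiating is a standard stability trick.
    shifted = logits - np.max(logits)
    probs = np.exp(shifted) / np.sum(np.exp(shifted))  # softmax probabilities
    return -np.log(probs[y])  # cross-entropy of the true class's probability

logits = np.array([2.0, 1.0, 0.1])  # hypothetical class scores
loss = softmax_loss(logits, 0)      # loss when class 0 is the true identity
```

Lowering the score of the true class (or raising a competitor's score) shrinks the softmax probability of the true class and increases the loss, which is the lever the loss variants below manipulate.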
Specifically for face recognition, L1 is modified so that the weights and features are normalized, which makes w_y^T x equal to cos(theta_y), and a scale parameter s represents the magnitude of w_y^T x:
L_2 = -\log \frac{e^{s \cos\theta_y}}{e^{s \cos\theta_y} + \sum_{k \neq y} e^{s \cos\theta_k}}   [1]
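A minimal sketch of this normalized variant, assuming both the class weights and the feature are L2-normalized so their dot products become cosines; the value s = 30 and the toy weights are illustrative choices, not taken from the paper.

```python
import numpy as np

def cosine_softmax_loss(W, x, y, s=30.0):
    # W: (K, d) class weight matrix; x: (d,) feature; y: true class index.
    # Normalizing rows of W and x turns w_k^T x into cos(theta_k), scaled by s.
    W_n = W / np.linalg.norm(W, axis=1, keepdims=True)
    x_n = x / np.linalg.norm(x)
    logits = s * (W_n @ x_n)        # s * cos(theta_k) for each class k
    logits = logits - logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[y])

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 8))          # 5 hypothetical identities, 8-dim features
x = W[0] + 0.1 * rng.standard_normal(8)  # feature roughly aligned with identity 0
loss_true = cosine_softmax_loss(W, x, 0)
```

Because only the angles survive normalization, training must separate identities by direction rather than by feature magnitude, which is why this form is preferred for face recognition.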
This function is crucial in face recognition because it is used to enhance feature discrimination. While there are many variations of the softmax loss function, they all build on the structure of the equation above; some of these variations are discussed in detail in later sections.
In this paper, the authors first identify that reducing the softmax probability is key to improving feature discrimination, and they design two search spaces: one sampled randomly and one explored with a reward-guided method. They then evaluate the resulting Random-Softmax and Search-Softmax approaches against other face recognition algorithms on nine popular face recognition benchmarks.