Module deep_learning_playground::neural_network::activate_functions

Functions

identity

identity generates the identity function \( \text{id}(x)=x \) for Array2 where \(x\in\mathbb{R}\).
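
A minimal sketch of such a function, assuming an `ndarray::Array2<f64>` input; the signature is illustrative and may differ from the crate's actual API:

```rust
use ndarray::Array2;

/// Identity activation: returns the input unchanged.
/// (Sketch only; the crate's actual signature may differ.)
fn identity(x: Array2<f64>) -> Array2<f64> {
    x
}
```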

rectified_linear_unit

rectified_linear_unit generates the Rectified Linear Unit (ReLU) function \[ \text{ReLU}(x)=\begin{cases} x & (x\gt 0) \\ 0 & (\text{otherwise}) \end{cases} \] for Array2 where \(x\in\mathbb{R}\).
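
A minimal sketch under the same assumption (`ndarray::Array2<f64>`; signature illustrative), applying the element-wise maximum with zero:

```rust
use ndarray::Array2;

/// ReLU activation: element-wise max(x, 0).
/// (Sketch only; the crate's actual signature may differ.)
fn rectified_linear_unit(x: &Array2<f64>) -> Array2<f64> {
    x.mapv(|v| v.max(0.0))
}
```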

sigmoid

sigmoid generates the sigmoid function \[ S(x)=\dfrac{1}{1+\exp(-x)} \] for Array2 where \(x\in\mathbb{R}\).
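
A minimal sketch, again assuming an `ndarray::Array2<f64>` input (signature illustrative):

```rust
use ndarray::Array2;

/// Sigmoid activation: element-wise 1 / (1 + exp(-x)).
/// (Sketch only; the crate's actual signature may differ.)
fn sigmoid(x: &Array2<f64>) -> Array2<f64> {
    x.mapv(|v| 1.0 / (1.0 + (-v).exp()))
}
```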

softmax

softmax generates the softmax function: \[ \text{SoftMax}(\boldsymbol{x})=\left( \dfrac{\exp(x_1)}{\displaystyle\sum^n_{j=1}\exp(x_j)}, \dfrac{\exp(x_2)}{\displaystyle\sum^n_{j=1}\exp(x_j)}, \cdots, \dfrac{\exp(x_n)}{\displaystyle\sum^n_{j=1}\exp(x_j)} \right) \] for Array2 where \(\boldsymbol{x}=\left(x_1,\cdots,x_n\right)^T\in\mathbb{R}^{n\times 1}\). To prevent overflow, the computation actually follows the equivalent form: \[ \begin{array}{lll} \dfrac{\exp(x_i)}{\displaystyle\sum^n_{j=1}\exp(x_j)}&=&\dfrac{C\exp(x_i)}{C\displaystyle\sum^n_{j=1}\exp(x_j)}\\ &=&\dfrac{\exp(x_i+\log C)}{\displaystyle\sum^n_{j=1}\exp(x_j+\log C)}\\ &=&\dfrac{\exp(x_i+C')}{\displaystyle\sum^n_{j=1}\exp(x_j+C')} \end{array} \] Since this identity holds for any constant \(C'\), we choose \(C'=-x_{\text{max}}\) where \(x_{\text{max}}=\max_i x_i\), so that every exponent \(x_i+C'\leq 0\) and \(\exp\) cannot overflow.
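
A minimal sketch of the overflow-safe computation described above, assuming an \(n\times 1\) `ndarray::Array2<f64>` input (signature illustrative): subtract the maximum entry before exponentiating, then normalize by the sum.

```rust
use ndarray::Array2;

/// Softmax activation for an n x 1 column vector.
/// Subtracting the maximum entry (C' = -x_max) keeps every exponent
/// non-positive, preventing overflow without changing the result.
/// (Sketch only; the crate's actual signature may differ.)
fn softmax(x: &Array2<f64>) -> Array2<f64> {
    let x_max = x.fold(f64::NEG_INFINITY, |acc, &v| acc.max(v));
    let exps = x.mapv(|v| (v - x_max).exp());
    let sum = exps.sum();
    exps / sum
}
```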