# Deep Learning Dictionary - Lightweight Crash Course

Deep Learning Course 1 of 7 - Level: Beginner

## Sigmoid Activation Function - Deep Learning Dictionary


In a neural network, an activation function applies a nonlinear transformation to the output of a layer.

One activation function, called $$\text{sigmoid}$$, maps any real-valued input to a value in the open interval $$(0,1)$$. For a given input $$x$$, we define

$$\text{sigmoid}(x)=\frac{e^{x}}{e^{x}+1}$$
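As a sketch, the formula above can be implemented directly. Note that $$\frac{e^{x}}{e^{x}+1}$$ is algebraically equivalent to $$\frac{1}{1+e^{-x}}$$; the version below picks whichever form avoids computing $$e^{x}$$ for large $$|x|$$, which would otherwise overflow.

```python
import math

def sigmoid(x):
    """Numerically stable sigmoid, equal to e^x / (e^x + 1)."""
    if x >= 0:
        # For non-negative x, e^(-x) <= 1, so no overflow risk
        return 1.0 / (1.0 + math.exp(-x))
    # For negative x, e^x <= 1, so compute that form instead
    z = math.exp(x)
    return z / (1.0 + z)
```

For example, `sigmoid(0)` evaluates to `0.5`, and large positive or negative inputs approach `1` and `0` respectively.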

The table below summarizes how sigmoid transforms its input.

| Input | Sigmoid output |
| --- | --- |
| Large negative values | A value very close to $$0$$ |
| Large positive values | A value very close to $$1$$ |
| Values relatively close to $$0$$ | An intermediate value near $$0.5$$ |

When sigmoid is used as an activation function following a layer in a neural network, it accepts the weighted sum of outputs from the previous layer and transforms this sum to a value between $$0$$ and $$1$$.
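This step can be sketched in plain Python. The layer outputs, weights, and bias below are hypothetical values chosen only for illustration, assuming the stable formulation $$\frac{1}{1+e^{-x}}$$ of sigmoid.

```python
import math

def sigmoid(x):
    # Equivalent to e^x / (e^x + 1)
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical values for illustration: outputs from the previous
# layer, one node's incoming weights, and its bias term
prev_outputs = [0.2, -1.3, 0.8]
weights = [0.5, -0.6, 1.1]
bias = 0.1

# Weighted sum of the previous layer's outputs, plus bias
weighted_sum = sum(w * o for w, o in zip(weights, prev_outputs)) + bias

# Activated output always lies in the interval (0, 1)
activated = sigmoid(weighted_sum)
```

Whatever the magnitude of the weighted sum, the activated output is squashed into $$(0,1)$$.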

Intuitively, we can think of a given node's activated output as being "more activated" the closer it is to the upper limit $$1$$, and similarly "less activated" the closer it is to $$0$$.

We can therefore interpret $$\text{sigmoid}$$'s output as the probability of activation for a given node.
