“All models are wrong, but some are useful”
– George Box
Logistic regression is used to predict binary outcomes and is characterized by the logistic function, also written as the sigmoid $\sigma$:
$$
\begin{align*}
\text{logistic}(z) = \sigma(z) &= \frac{e^z}{e^z + 1} \\
&= \frac{1}{1 + e^{-z}}
\end{align*}
$$
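As a quick sanity check, here is a minimal NumPy sketch of the logistic function; the helper name `sigmoid` and the test values are my own choices, not anything from the text.

```python
import numpy as np

def sigmoid(z):
    """Logistic (sigmoid) function: 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

# The two algebraic forms above agree numerically.
z = np.linspace(-5, 5, 11)
assert np.allclose(sigmoid(z), np.exp(z) / (np.exp(z) + 1))
```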
Cost Function:
Let $\mathbf{X}$ be the observed variables, $\beta_0$ and $\boldsymbol{\beta}$ the fitted parameters, $z = \beta_0 + \boldsymbol{\beta}^t\mathbf{X}$, and $\sigma(z)$ the predicted value, also written $\hat{y}$, with $y$ being the true label. Letting the labels be $\mathbf{y} \in \{0,1\}$ allows some simplification in the equations; other conventions use $\mathbf{y} \in \{-1,1\}$ with slightly more verbose notation.
$$
\begin{align}
\mathcal{L}(\mathbf{y}|\beta_0,\boldsymbol{\beta}) &= \prod_{i=1}^N \hat{y}_i^{y_i} (1 - \hat{y}_i)^{1-y_i} \\
-\log p(\mathbf{y}|\beta_0,\boldsymbol{\beta}) &= -\sum_{i=1}^N y_i \log(\hat{y}_i) + (1-y_i) \log (1-\hat{y}_i)
\end{align}
$$
We can use maximum likelihood to estimate the parameters. The likelihood of the labels $\mathcal{L}(\mathbf{y})$ given our data $\mathbf{X}$ depends on the weights $(\beta_0,\boldsymbol{\beta})$ and is the quantity maximized in equation 1 above. Intuitively, if every label is perfectly predicted each factor in the product is 1; if a prediction is exactly wrong its factor is 0; and for $\hat{y}$ strictly between 0 and 1 the likelihood falls somewhere in between.
$$
\begin{align}
-\log p(\mathbf{y}|\beta_0,\boldsymbol{\beta}) &= -\sum_{i=1}^N y_i \log(\hat{y}_i) + (1-y_i) \log (1-\hat{y}_i) \\
\text{Error}(\beta_0,\boldsymbol{\beta}) &= -\sum_{i=1}^N y_i \log(\hat{y}_i) + (1-y_i) \log (1-\hat{y}_i)
\end{align}
$$
Equation 4 above is also known as the cross-entropy loss; by minimizing it we obtain our best estimate of the logistic regression coefficients.
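As a small illustration, here is a NumPy sketch of equation 4; the helper name `cross_entropy`, the clipping constant `eps` (added to avoid $\log(0)$), and the example predictions are my own choices.

```python
import numpy as np

def cross_entropy(y, y_hat, eps=1e-12):
    """Negative log-likelihood of equation 4, summed over the N samples."""
    y_hat = np.clip(y_hat, eps, 1 - eps)  # keep log() away from 0
    return -np.sum(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

y = np.array([1, 0, 1])
print(cross_entropy(y, np.array([0.99, 0.01, 0.99])))  # near-perfect predictions -> small loss
print(cross_entropy(y, np.array([0.01, 0.99, 0.01])))  # exactly wrong predictions -> large loss
```

The two printed values show the behaviour described above: the closer the predicted probabilities are to the labels, the smaller the cross entropy.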
Derivatives for Optimization:
To optimize equation 4 we need its derivative with respect to the parameters, which in turn relies on the derivative of the sigmoid. Conveniently, that derivative has the form $\sigma (z) (1-\sigma(z))$, as can be seen below
$$
\begin{align*}
\sigma (z) &= \frac{1}{1 + e^{-z}} = \frac{e^z}{e^z + 1} \\
\frac{d \sigma (z)}{dz} &= \frac{e^z(e^z + 1) - e^z e^z}{(e^z + 1)^2} \\
&= \frac{e^z}{(e^z + 1)^2} \\
&= \frac{e^z}{e^z + 1} \left(1 - \frac{e^z}{e^z + 1}\right) \\
&= \sigma (z) (1-\sigma(z))
\end{align*}
$$
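A quick numerical check of this identity using a central finite difference; the step size `h` and the test points are arbitrary choices of mine, and `sigmoid` is the same helper sketched earlier.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Compare d(sigma)/dz against sigma(z) * (1 - sigma(z)).
z = np.linspace(-4, 4, 9)
h = 1e-6
numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)  # central difference
analytic = sigmoid(z) * (1 - sigmoid(z))
assert np.allclose(numeric, analytic, atol=1e-8)
```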
Putting everything together into our cost function, we can derive the gradient of the loss $J(\boldsymbol{\beta})$,
where we fold $\beta_0$ into the vector $\boldsymbol{\beta}$ and extend $\mathbf{X}$ with an additional column of 1s.
$$
\require{cancel}
\begin{align*}
\text{Error}(\boldsymbol{\beta}) &= -\sum_{i=1}^N y_i \log(\hat{y}_i) + (1-y_i) \log (1-\hat{y}_i) \\
J(\boldsymbol{\beta}) &= - \sum_{i=1}^N y_i \log(\hat{y}_i) + (1-y_i) \log (1-\hat{y}_i) \\
J(\boldsymbol{\beta}) &= - \sum_{i=1}^N y_i \log(\sigma(\boldsymbol{\beta}^t\mathbf{X}_i)) + (1-y_i) \log (1-\sigma(\boldsymbol{\beta}^t\mathbf{X}_i)) \\
\frac{dJ(\boldsymbol{\beta})}{d\boldsymbol{\beta}} &= - \frac{d}{d\boldsymbol{\beta}} \sum_{i=1}^N y_i \log(\sigma(\boldsymbol{\beta}^t\mathbf{X}_i)) + (1-y_i) \log (1-\sigma(\boldsymbol{\beta}^t\mathbf{X}_i)) \\
&= - \sum_{i=1}^N y_i\frac{1}{\sigma(\boldsymbol{\beta}^t\mathbf{X}_i)}\sigma(\boldsymbol{\beta}^t\mathbf{X}_i)(1-\sigma(\boldsymbol{\beta}^t\mathbf{X}_i))\mathbf{X}_i - (1 - y_i) \frac{1}{1 - \sigma(\boldsymbol{\beta}^t\mathbf{X}_i)}\sigma(\boldsymbol{\beta}^t\mathbf{X}_i)(1 - \sigma(\boldsymbol{\beta}^t\mathbf{X}_i))\mathbf{X}_i \\
&= - \sum_{i=1}^N y_i\frac{1}{\cancel{\sigma(\boldsymbol{\beta}^t\mathbf{X}_i)}}\cancel{\sigma(\boldsymbol{\beta}^t\mathbf{X}_i)}(1-\sigma(\boldsymbol{\beta}^t\mathbf{X}_i))\mathbf{X}_i - (1 - y_i) \frac{1}{\cancel{1 - \sigma(\boldsymbol{\beta}^t\mathbf{X}_i)}}\sigma(\boldsymbol{\beta}^t\mathbf{X}_i)\cancel{(1 - \sigma(\boldsymbol{\beta}^t\mathbf{X}_i))}\mathbf{X}_i \\
&= - \sum_{i=1}^N y_i(1-\sigma(\boldsymbol{\beta}^t\mathbf{X}_i))\mathbf{X}_i - (1 - y_i) \sigma(\boldsymbol{\beta}^t\mathbf{X}_i)\mathbf{X}_i \\
&= - \sum_{i=1}^N y_i\mathbf{X}_i - y_i\sigma(\boldsymbol{\beta}^t\mathbf{X}_i)\mathbf{X}_i - \sigma(\boldsymbol{\beta}^t\mathbf{X}_i)\mathbf{X}_i + y_i\sigma(\boldsymbol{\beta}^t\mathbf{X}_i)\mathbf{X}_i \\
&= - \sum_{i=1}^N y_i\mathbf{X}_i - \sigma(\boldsymbol{\beta}^t\mathbf{X}_i)\mathbf{X}_i \\
&= - \sum_{i=1}^N (y_i - \sigma(\boldsymbol{\beta}^t\mathbf{X}_i))\mathbf{X}_i \\
&= \sum_{i=1}^N (\sigma(\boldsymbol{\beta}^t\mathbf{X}_i) - y_i)\mathbf{X}_i
\end{align*}
$$
This means our update rule for gradient descent using a learning rate of $\alpha$ would be
$$
\boldsymbol{\beta}_{new} := \boldsymbol{\beta}_{old} - \alpha \sum_{i=1}^N (\sigma(\boldsymbol{\beta}_{old}^t\mathbf{X}_i) - y_i)\mathbf{X}_i
$$
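Putting the gradient and the update rule together, here is a minimal batch gradient descent sketch in NumPy; the function name `fit_logistic` and the default values for `alpha` and `n_iters` are illustrative assumptions rather than anything prescribed above.

```python
import numpy as np

def fit_logistic(X, y, alpha=0.01, n_iters=1000):
    """Batch gradient descent for logistic regression.

    X is assumed to already include a leading column of 1s, so beta[0]
    plays the role of the intercept beta_0.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        y_hat = 1.0 / (1.0 + np.exp(-(X @ beta)))  # sigma(beta^t X_i) for every sample
        grad = X.T @ (y_hat - y)                   # sum_i (sigma(beta^t X_i) - y_i) X_i
        beta -= alpha * grad                       # gradient descent step
    return beta
```

Writing the gradient as `X.T @ (y_hat - y)` vectorizes the sum over $i$ from the derivation above, and prepending the column of 1s (for example with `np.hstack`) folds $\beta_0$ into $\boldsymbol{\beta}$ as described earlier.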
If the data are linearly separable, or if the columns of $\mathbf{X}$ are correlated, the gradients can be unstable and the fitted values of $\boldsymbol{\beta}$ can become large; such a model tends not to generalize well, being overtrained on the data that we have.
We can favor a simpler model with smaller $\boldsymbol{\beta}$ by placing a constraint on $\boldsymbol{\beta}$ such that we pay a cost of $\frac{\lambda}{2}||\boldsymbol{\beta}||_2^2$, where $||\boldsymbol{\beta}||_2 = \sqrt{\sum_{i=1}^k\beta_i^2}$ is the $L_2$ norm, or Euclidean norm, of the vector.
Regularizing with the squared $L_2$ norm is generally called Tikhonov regularization; in the linear regression setting it is also known as ridge regression.
There is also the $L_1$ norm $||\boldsymbol{\beta}||_1 = \sum_{i=1}^k|\beta_i|$, also known as the taxicab or Manhattan norm, which is used in the lasso and in extensions such as the elastic net (implemented in packages like glmnet). In practice, for logistic regression we constrain the size of the parameters by adding $\frac{\lambda}{2}\sum_{i=1}^k\beta_i^2$ to the cost:
$$
\begin{align*}
J(\boldsymbol{\beta}) &= - \sum_{i=1}^N y_i \log(\hat{y}_i) + (1-y_i) \log (1-\hat{y}_i) + \frac{\lambda}{2}\sum_{i=1}^k\beta_i^2 \\
\frac{dJ(\boldsymbol{\beta})}{d\boldsymbol{\beta}} &= \sum_{i=1}^N (\sigma(\boldsymbol{\beta}^t\mathbf{X}_i) - y_i)\mathbf{X}_i + \lambda\boldsymbol{\beta}
\end{align*}
$$
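In code, the only change to the earlier sketch is the extra $\lambda\boldsymbol{\beta}$ term in the gradient; the function name and the default `lam` are again illustrative, and a common refinement not shown here is to leave the intercept term unpenalized.

```python
import numpy as np

def fit_logistic_l2(X, y, alpha=0.01, lam=0.1, n_iters=1000):
    """Gradient descent with an L2 penalty of (lam / 2) * sum(beta_j ** 2)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        y_hat = 1.0 / (1.0 + np.exp(-(X @ beta)))
        grad = X.T @ (y_hat - y) + lam * beta  # extra lambda * beta term from the penalty
        beta -= alpha * grad
    return beta
```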