Dice Loss is designed to maximise the Dice Coefficient during training by minimising its complement, one minus the Dice Coefficient.

  • Minimising Dice Loss encourages greater overlap between predicted and true masks.
  • It inherently addresses class imbalance.

$$ \text{Dice Loss} = 1 - \text{Dice Coefficient} $$
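As a concrete illustration, here is a minimal NumPy sketch of a soft Dice loss for binary masks (the function name, the element-wise soft formulation, and the smoothing constant `eps` are my own choices, not fixed by the formula above):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss: 1 - Dice Coefficient.

    pred:   predicted probabilities in [0, 1]
    target: ground-truth binary mask, same shape as pred
    eps:    small constant to keep the ratio finite on empty masks
    """
    intersection = np.sum(pred * target)
    dice = (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)
    return 1.0 - dice
```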

Binary Cross-Entropy Loss (BCE) measures the error between predicted probabilities and actual binary labels for each pixel.

  • It is used for pixel-wise classification tasks, which is how segmentation can be viewed (each pixel is classified as foreground or background).

$$ BCE = -\frac{1}{N}\sum_{i=1}^{N}\left[ y_i \log(\hat{y}_i) + (1 - y_i)\log(1 - \hat{y}_i) \right] $$
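Translated directly into NumPy (the probability clipping is my addition, to keep the logarithms finite):

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy averaged over all N pixels.

    pred:   predicted probabilities in [0, 1]
    target: binary labels (0 or 1), same shape as pred
    """
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
```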

Combined Loss (BCE + Dice Loss) leverages the strengths of BCE Loss and Dice Loss.

  • BCE guides the model to learn pixel-wise classification accurately, pushing predicted probabilities towards 0 for background and 1 for foreground. It's good for overall learning!
  • Dice Loss directly optimises for overlap, which is crucial for segmentation performance, especially for small or imbalanced foreground objects. It helps the model prioritise getting the object boundaries right without being overwhelmed by the background.

$$ \text{Combined Loss} = BCE + \text{Dice Loss} $$
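Assuming the `dice_loss` and `bce_loss` sketches above are in scope, the combined loss is just their sum (some implementations add a weighting factor between the two terms; the unweighted sum matches the formula here):

```python
def combined_loss(pred, target):
    """Unweighted BCE + Dice, as in the formula above."""
    return bce_loss(pred, target) + dice_loss(pred, target)
```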

 

Tversky Loss extends Dice Loss by introducing two coefficients, 𝛼 and 𝛽, which control the relative importance of False Positives (FP) and False Negatives (FN).

  • By weighting FP and FN differently, it becomes especially useful for imbalanced segmentation tasks, where either precision or recall needs to be emphasised.

$$ \text{Tversky Index} = \frac{TP}{TP + \alpha FP + \beta FN} $$
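For soft (probabilistic) predictions, TP, FP, and FN can be accumulated element-wise. A minimal sketch, where setting α = β = 0.5 recovers the Dice case:

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.5, beta=0.5, eps=1e-7):
    """1 - Tversky Index; alpha penalises FP, beta penalises FN.

    alpha = beta = 0.5 reduces to the Dice loss.
    """
    tp = np.sum(pred * target)
    fp = np.sum(pred * (1 - target))
    fn = np.sum((1 - pred) * target)
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky
```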

 

Focal Tversky Loss combines the ideas of Tversky Loss and Focal Loss.

  • Focal Loss was introduced to address the problem of class imbalance by down-weighting the loss contribution from easy examples (well-classified instances) and focusing more on hard examples (misclassified or difficult-to-classify instances).

$$ \text{Focal Tversky Loss} = (1 - \text{Tversky Index})^{\gamma} = \left(1 - \frac{TP + \epsilon}{TP + \alpha FP + \beta FN + \epsilon}\right)^{\gamma} $$
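Building on the `tversky_loss` sketch above, the focal variant simply raises it to the power γ (γ > 1 shifts weight towards hard examples, i.e. those with a low Tversky Index):

```python
def focal_tversky_loss(pred, target, alpha=0.5, beta=0.5, gamma=1.0, eps=1e-7):
    """(1 - Tversky Index)^gamma; gamma = 1 recovers plain Tversky loss."""
    return tversky_loss(pred, target, alpha, beta, eps) ** gamma
```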


Confusion matrix

                     Predicted Positive      Predicted Negative
Actual Positive      True Positive (TP)      False Negative (FN)
Actual Negative      False Positive (FP)     True Negative (TN)
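
The four cells can be counted directly from hard (0/1) predictions; a small NumPy sketch (array names are my own):

```python
import numpy as np

def confusion_counts(pred, target):
    """Count TP, FP, FN, TN from binary arrays of 0s and 1s."""
    tp = int(np.sum((pred == 1) & (target == 1)))
    fp = int(np.sum((pred == 1) & (target == 0)))
    fn = int(np.sum((pred == 0) & (target == 1)))
    tn = int(np.sum((pred == 0) & (target == 0)))
    return tp, fp, fn, tn
```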
  • Accuracy measures the proportion of correctly classified instances among the total number of instances:

$$  Accuracy=\frac{TP+TN}{TP+TN+FP+FN} $$

  • Precision measures the proportion of true positives among all positive predictions made by the model.
    • How many of the instances predicted as positive were actually positive?

$$  Precision=\frac{TP}{TP+FP} $$

  • Recall (also known as Sensitivity or True Positive Rate) measures how well the model captures actual positive cases.
    • How many of the actual positive instances did the model correctly identify?

$$  Recall=\frac{TP}{TP+FN} $$

  • F1-Measure (also known as F1-Score) is the harmonic mean of precision and recall, providing a single score that balances both metrics.
    • How well does the model balance both precision and recall?

$$ \text{F1-Measure} = 2 \times \frac{Precision \times Recall}{Precision + Recall} $$

  • Dice Coefficient  (also known as Dice Similarity Coefficient or DSC) measures the overlap between the predicted segmentation mask and the ground truth mask. It ranges from 0 (no overlap) to 1 (perfect overlap).

$$ Dice Coefficient=\frac{2\times \left | A\cap B\right |}{\left | A\right |+\left | B\right |}=\frac{2\times TP}{2\times TP+FP+FN} $$

  • Jaccard Index (also called Intersection over Union, IoU) measures the size of the intersection divided by the size of the union of the predicted and ground truth masks.

$$ Jaccard=\frac{\left | A\cap B\right |}{\left | A\cup B\right |}=\frac{TP}{TP+FP+FN} $$
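
All six metrics above reduce to arithmetic on the confusion-matrix counts. A minimal sketch, assuming the `confusion_counts` helper from the table section (note that for binary masks the Dice Coefficient is algebraically identical to the F1-Measure):

```python
def classification_metrics(tp, fp, fn, tn, eps=1e-7):
    """Compute the metrics above from raw confusion-matrix counts.

    eps guards against division by zero (my addition).
    """
    accuracy  = (tp + tn) / (tp + tn + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall    = tp / (tp + fn + eps)
    f1        = 2 * precision * recall / (precision + recall + eps)
    dice      = 2 * tp / (2 * tp + fp + fn + eps)  # equals F1 for binary masks
    jaccard   = tp / (tp + fp + fn + eps)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "f1": f1, "dice": dice, "jaccard": jaccard}
```

For example, TP = 8, FP = 2, FN = 2 gives precision = recall = 0.8, and therefore F1 = Dice = 0.8, while Jaccard = 8/12 ≈ 0.67, illustrating that IoU is always the stricter of the two overlap scores.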

 
