# Regularization in Machine Learning

Shawn Jin · 2021-09-13

A central problem in machine learning is how to make an algorithm that will perform well not just on the training data, but also on new inputs. Many strategies used in machine learning are explicitly designed to reduce the test error, possibly at the expense of increased training error. These strategies are known collectively as regularization. Regularization is any modification we make to a learning algorithm that is intended to reduce its generalization error but not its training error.
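As a concrete illustration of that trade-off, here is a minimal sketch (my own example, not from the original post) using NumPy and a synthetic dataset: weight decay (L2 regularization) adds a penalty λ‖w‖² to the least-squares objective, and as λ grows the training error typically rises while the test error improves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy linear target with many irrelevant features,
# so the unregularized fit can easily overfit the small training set.
n_train, n_test, n_features = 30, 200, 25
w_true = np.zeros(n_features)
w_true[:3] = [2.0, -1.0, 0.5]          # only 3 features actually matter

X_train = rng.normal(size=(n_train, n_features))
X_test = rng.normal(size=(n_test, n_features))
y_train = X_train @ w_true + rng.normal(scale=0.5, size=n_train)
y_test = X_test @ w_true + rng.normal(scale=0.5, size=n_test)

def fit_ridge(X, y, lam):
    """Least squares with an L2 penalty: w = (X^T X + lam*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def mse(X, y, w):
    return np.mean((X @ w - y) ** 2)

for lam in [0.0, 1.0, 10.0]:
    w = fit_ridge(X_train, y_train, lam)
    print(f"lambda={lam:5.1f}  "
          f"train MSE={mse(X_train, y_train, w):.3f}  "
          f"test MSE={mse(X_test, y_test, w):.3f}")
```

With λ = 0 the model fits the training set almost perfectly but generalizes poorly; increasing λ makes the training error slightly worse and the test error better, which is exactly the kind of modification the definition above describes.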

Tag: Machine Learning
Updated: 2021-09-15 20:43:56