一直是阵雨🌦️
This blog is no longer updated. The latest posts are now on the new blog at zhengyua.cn/new_blog; migrating the old content over for compatibility is not planned for now.
Applying Univariate Linear Regression with Gradient Descent

Regression
- Univariate linear regression: regression analysis is used to build an equation that models the relationship between two or …
Week 9: Density Estimation (Anomaly Detection)

- Problem Motivation
- Density Estimation Algorithm
- Anomaly detection example
- Gaussian Distribution (i.e., the normal distribution)

The formula for the Gaussian density is:

$$ p(x) = \frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) $$

- Gaussian distribution example
- Parameter estimation
- Algorithm
- Anomaly …
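As a companion to the outline above, here is a minimal numpy sketch of the density-estimation recipe it describes: estimate a per-feature mean and variance, compute $p(x)$ as the product of the Gaussian formula over features, and flag examples whose density falls below a threshold $\epsilon$. The function names and the value of `epsilon` are illustrative assumptions, not from the original notes.

```python
import numpy as np

def estimate_gaussian(X):
    """Per-feature mean and variance of the training set X (m x n)."""
    return X.mean(axis=0), X.var(axis=0)

def gaussian_density(X, mu, sigma2):
    """p(x) under independent per-feature Gaussians: the product of the
    density formula above applied to each feature."""
    p = np.exp(-((X - mu) ** 2) / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    return p.prod(axis=1)

# Flag examples whose density falls below epsilon as anomalies.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
mu, sigma2 = estimate_gaussian(X)
p = gaussian_density(X, mu, sigma2)
epsilon = 0.01   # hypothetical threshold; in practice chosen on a labeled CV set
anomalies = p < epsilon
```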
Clustering
- K-Means Algorithm
- Optimization Objective
- Random Initialization
- Choosing the Number of Clusters

Motivation
- Motivation I: Data Compression
- Motivation II: Visualization

Principal Component Analysis (PCA)
- PCA Problem Formulation
- PCA Algorithm

Applying PCA
- Reconstruction from Compressed Representation
- Choosing the Number of Principal Components
- Advice for Applying PCA
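To make the two main algorithms in this outline concrete, here is a hedged numpy sketch: plain K-means (random initialization from the training examples, then alternating assignment and centroid-update steps) and a tiny SVD-based PCA projection. All names, defaults, and the empty-cluster handling are illustrative choices, not taken from the original notes.

```python
import numpy as np

def kmeans(X, K, n_iters=100, seed=0):
    """Plain K-means: alternate the assignment and centroid-update steps."""
    rng = np.random.default_rng(seed)
    # Random initialization: pick K distinct training examples as centroids.
    centroids = X[rng.choice(len(X), size=K, replace=False)].astype(float)
    for _ in range(n_iters):
        # Assignment step: index of the nearest centroid (squared distance).
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Update step: move each centroid to the mean of its assigned points;
        # keep the old centroid if its cluster went empty.
        new_centroids = centroids.copy()
        for k in range(K):
            members = X[labels == k]
            if len(members):
                new_centroids[k] = members.mean(axis=0)
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

def pca(X, k):
    """Project mean-centered data onto its top-k principal components (SVD)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]
```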
Large Margin Classification
- Optimization Objective
- Large Margin Intuition
- Mathematics Behind Large Margin Classification

Kernels
- Kernels I
- Kernels II
- Using An SVM
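The outline above doesn't fix a library, but as one possible concrete instance of "Using An SVM" with a Gaussian (RBF) kernel, here is a short scikit-learn sketch (scikit-learn is my assumption, not named in the notes); the dataset and the C and gamma values are arbitrary illustrations.

```python
from sklearn.datasets import make_moons
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# A toy non-linearly-separable dataset.
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

# C trades off margin width against training errors; gamma sets the reach
# of the Gaussian kernel (larger gamma => narrower similarity "bumps").
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma=1.0))
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy
```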
Linear Regression

Cost Function

Hypothesis: $h_\theta(x) = \theta_0 + \theta_1 x_1 + \dots$, i.e. $h_\theta(x) = \theta^T x$.

$$ J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)})-y^{(i)}\right)^2 $$

$$ \frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)})-y^{(i)}\right)x_j^{(i)} $$

Gradient descent algorithm

repeat until convergence {
$$ \theta_j := \theta_j - \frac{\alpha}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)})-y^{(i)}\right)x_j^{(i)} $$
} (updating all $\theta_j$ simultaneously)

Feature scaling and mean normalization

$$ x_i := \frac{x_i-\mu_i}{s_i} $$

where $\mu_i$ is the average of all the values for feature $i$ and $s_i$ is its standard deviation.

Learning rate: if $\alpha$ is too small, convergence is slow; if $\alpha$ is too large, $J(\theta)$ may not decrease on every iteration and thus may not converge.
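To make the update rule and the scaling step concrete, here is a minimal numpy sketch of batch gradient descent on the squared-error cost, with mean normalization applied first. The helper names and the toy data are illustrative, not from the original notes.

```python
import numpy as np

def scale_features(X):
    """Mean normalization: x_i := (x_i - mu_i) / s_i for each feature."""
    mu, s = X.mean(axis=0), X.std(axis=0)
    return (X - mu) / s, mu, s

def gradient_descent(X, y, alpha=0.1, n_iters=500):
    """Batch gradient descent on J(theta) = (1/2m) * sum (h(x) - y)^2."""
    m = len(y)
    Xb = np.c_[np.ones(m), X]               # prepend x_0 = 1 for theta_0
    theta = np.zeros(Xb.shape[1])
    for _ in range(n_iters):
        grad = Xb.T @ (Xb @ theta - y) / m  # (1/m) * sum (h(x) - y) * x_j
        theta -= alpha * grad               # simultaneous update of all theta_j
    return theta

# Toy usage: recover y ~= 3 + 2x after scaling the single feature.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))
y = 3 + 2 * X[:, 0] + rng.normal(scale=0.5, size=50)
Xs, mu, s = scale_features(X)
theta = gradient_descent(Xs, y)
```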