
Always a rain shower 🌦️

This blog is no longer updated; the latest content is now published on the new blog at zhengyua.cn/new_blog. There are no plans to migrate the old posts or keep them compatible for now.

Machine Learning (Andrew Ng) Notes - Week 9

Week 9: Density Estimation (anomaly detection). Topics: Problem Motivation; Density Estimation Algorithm; Anomaly detection example; Gaussian Distribution (the normal distribution). The formula for the Gaussian density is: $$ p(x) = \frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) $$ Gaussian distribution example; Parameter estimation; Algorithm; Anomaly …
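The excerpt stops at the density formula, so here is a minimal NumPy sketch of the anomaly-detection recipe it describes: estimate $\mu$ and $\sigma^2$ per feature from mostly-normal training data, compute $p(x)$ as a product of per-feature Gaussian densities, and flag examples with $p(x) < \varepsilon$. The function names, the toy data, and the threshold `epsilon` are illustrative assumptions, not taken from the notes.

```python
import numpy as np

def estimate_gaussian(X):
    # Maximum-likelihood parameter estimation: per-feature mean and variance.
    return X.mean(axis=0), X.var(axis=0)

def gaussian_density(X, mu, sigma2):
    # p(x) modeled as a product of independent univariate Gaussians.
    coef = 1.0 / np.sqrt(2.0 * np.pi * sigma2)
    expo = np.exp(-((X - mu) ** 2) / (2.0 * sigma2))
    return np.prod(coef * expo, axis=1)

# Toy data (assumed): fit on mostly-normal examples, then score new points.
X_train = np.array([[1.0, 2.0], [1.1, 2.1], [0.9, 1.9]])
mu, sigma2 = estimate_gaussian(X_train)
X_new = np.array([[1.05, 2.0], [3.0, 5.0]])
p = gaussian_density(X_new, mu, sigma2)
epsilon = 1e-3  # in the course, epsilon is chosen via F1 on a labeled CV set
print(p < epsilon)  # [False  True]: the second point is flagged as anomalous
```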

Machine Learning (Andrew Ng) Notes - Week 8

Clustering: K-Means Algorithm; Optimization Objective; Random Initialization; Choosing the Number of Clusters. Motivation I: Data Compression; Motivation II: Visualization. Principal Component Analysis (PCA): PCA Problem Formulation; PCA Algorithm; Applying PCA; Reconstruction from Compressed Representation; Choosing the Number of Principal Components; Advice for Applying PCA.
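Since the outline names both K-means (with random initialization) and PCA with reconstruction, here is a minimal NumPy sketch of the two algorithms. The function names and the convergence check are my own choices, and the K-means loop assumes no cluster ends up empty.

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    # Random initialization: pick k distinct training examples as centroids.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Cluster-assignment step: nearest centroid by squared distance.
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Move-centroid step: mean of the points assigned to each cluster
        # (assumes every cluster keeps at least one point).
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

def pca(X, n_components):
    # PCA via SVD of the mean-centered data: rows of Vt are principal axes.
    X_centered = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    Z = X_centered @ Vt[:n_components].T                # compressed representation
    X_approx = Z @ Vt[:n_components] + X.mean(axis=0)   # reconstruction
    return Z, X_approx
```

The course derives the principal axes from the covariance matrix of the data; taking the SVD of the mean-centered data directly, as above, yields the same axes.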

Machine Learning (Andrew Ng) Notes - Week 1 to Week 5 Summary

Linear Regression

Hypothesis: $h_\theta(x)=\theta_0+\theta_1x+\dots$, i.e. $h_\theta(x)=\theta^Tx$.

Cost function: $J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)})-y^{(i)}\right)^2$, whose partial derivatives are $\frac{\partial{J(\theta)}}{\partial{\theta_j}}=\frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)})-y^{(i)}\right)x_j^{(i)}$.

Gradient descent algorithm: repeat until convergence { $\theta_j := \theta_j - \frac{\alpha}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)})-y^{(i)}\right)x_j^{(i)}$ }.

Feature scaling and mean normalization: $x_i := \frac{x_i-\mu_i}{s_i}$, where $\mu_i$ is the average of all the values for feature $i$ and $s_i$ is the standard deviation.

Learning rate: if $\alpha$ is too small, convergence is slow; if $\alpha$ is too large, $J(\theta)$ may not decrease on every iteration and thus may not converge.
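A minimal NumPy sketch tying these pieces together: mean-normalize the features, then run batch gradient descent on the squared-error cost above. The function names, the toy data, and the hyperparameter values are illustrative assumptions.

```python
import numpy as np

def feature_scale(X):
    # Mean normalization: x_i <- (x_i - mu_i) / s_i, per feature.
    mu, s = X.mean(axis=0), X.std(axis=0)
    return (X - mu) / s, mu, s

def gradient_descent(X, y, alpha=0.1, n_iters=1000):
    # Batch gradient descent on J(theta) = (1/2m) * sum((h(x) - y)^2).
    m, n = X.shape
    Xb = np.hstack([np.ones((m, 1)), X])  # prepend x_0 = 1 for the intercept
    theta = np.zeros(n + 1)
    for _ in range(n_iters):
        grad = Xb.T @ (Xb @ theta - y) / m  # vectorized dJ/dtheta_j
        theta -= alpha * grad               # simultaneous update of all theta_j
    return theta

# Usage (assumed data): y is roughly 3 + 2x with a little noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))
y = 3 + 2 * X[:, 0] + rng.normal(0, 0.1, size=50)
X_scaled, mu, s = feature_scale(X)
theta = gradient_descent(X_scaled, y)
print(theta)  # intercept and slope, in the scaled feature space
```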