
Comments (1)

ssfc commented on August 11, 2024

Neural Networks and Deep Learning
-----------------------------------------------------Week 1-------------------------------------------------------
W1-1: Welcome; AI is new electricity; (2019-5-2)
W1-2: What is a neural network; housing price prediction; (2019-5-2)
W1-3: Supervised Learning with Neural Networks; supervised learning; neural network examples; structured data/unstructured data; (2019-5-2)
W1-4: Why is Deep Learning taking off; scale drives deep learning progress; (2019-5-2)
W1-5: About this Course; courses in this specialization; outline of this course; (2019-5-2)
W1-6: Course Resources; course resources; (2019-5-2)
W1-7: Geoffrey Hinton interview; (2019-5-2)

-----------------------------------------------------Week 2-------------------------------------------------------
W2-1: Binary Classification; binary classification; notation; (2019-5-3)
W2-2: Logistic Regression; logistic regression; (2019-5-3)
W2-3: Logistic Regression Cost Function; logistic regression cost function; (2019-5-3)
W2-4: Gradient Descent; gradient descent; (2019-5-3)
W2-5: Derivatives; intuition about derivatives; (2019-5-3)
W2-6: More Derivative Examples; intuition about derivatives; more derivative examples; (2019-5-3)
W2-7: Computation graph; computation graph; (2019-5-4)
W2-8: Derivatives with a Computation Graph; computing derivatives; (2019-5-4)
W2-9: Logistic Regression Gradient Descent; logistic regression recap; logistic regression derivative; (2019-5-4)
W2-10: Gradient Descent on m Examples; logistic regression on m examples; (2019-5-4)
W2-11: Vectorization; what is vectorization; (2019-5-4)
W2-12: More Vectorization Examples; neural network programming guideline; vectors and matrix valued functions; logistic regression derivatives; (2019-5-4)
W2-13: Vectorizing Logistic Regression; vectorizing logistic regression; (2019-5-4)
W2-14: Vectorizing Logistic Regression's Gradient Output; vectorizing logistic regression; implementing logistic regression; (2019-5-5)
W2-15: Broadcasting in Python; broadcasting example; general principle; (2019-5-5)
W2-16: A note on python/numpy vectors; python/numpy vectors; (2019-5-5)
W2-17: Quick tour of Jupyter/iPython Notebooks; (2019-5-5)
W2-18: Explanation of logistic regression cost function (optional); logistic regression cost function; cost on m examples; (2019-5-5)
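
Not part of the original log: a minimal numpy sketch of the vectorized logistic-regression gradient step from W2-13/W2-14 above, relying on the broadcasting behaviour covered in W2-15. Shapes follow the course convention (X is n_x by m); all names are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_step(w, b, X, Y, lr=0.01):
    """One vectorized gradient-descent step. X: (n_x, m), Y: (1, m), w: (n_x, 1)."""
    m = X.shape[1]
    A = sigmoid(w.T @ X + b)      # (1, m); scalar b is broadcast across the m columns
    dZ = A - Y                    # (1, m)
    dw = (X @ dZ.T) / m           # (n_x, 1)
    db = np.sum(dZ) / m
    cost = -np.mean(Y * np.log(A) + (1 - Y) * np.log(1 - A))
    return w - lr * dw, b - lr * db, cost

# toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 5))
Y = (rng.random((1, 5)) > 0.5).astype(float)
w, b = np.zeros((3, 1)), 0.0
w, b, cost = logistic_step(w, b, X, Y)
print(cost)
```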

-----------------------------------------------------Week 3-------------------------------------------------------
W3-1: Neural Networks Overview; what is a neural network; (2019-5-10)
W3-2: Neural Network Representation; neural network representation; (2019-5-10)
W3-3: Computing a Neural Network's Output; neural network representation; neural network representation learning; (2019-5-10)
W3-4: Vectorizing across multiple examples; (2019-5-10)
W3-5: Explanation for Vectorized Implementation; justification for vectorized implementation; recap of vectorizing across multiple examples; (2019-5-11)
W3-6: Activation functions; pros and cons of activation functions; (2019-5-11)
W3-7: Why do you need non-linear activation functions; activation function; (2019-5-11)
W3-8: Derivatives of activation functions; sigmoid activation function; tanh activation function; ReLU and leaky ReLU; (2019-5-11)
W3-9: Gradient descent for Neural Networks; formula for computing derivatives; (2019-5-11)
W3-10: Backpropagation intuition (optional); computing gradients; neural network gradients; summary of gradient descent; (2019-5-11)
W3-11: Random Initialization; what happens if you initialize weights to zero; random initialization; (2019-5-11)
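
Not from the log: a small numpy illustration of the activation functions and their derivatives listed under W3-6 and W3-8 above (sigmoid, tanh, ReLU, leaky ReLU), written as plain functions.

```python
import numpy as np

def sigmoid(z):        return 1.0 / (1.0 + np.exp(-z))
def d_sigmoid(z):      s = sigmoid(z); return s * (1 - s)

def d_tanh(z):         return 1.0 - np.tanh(z) ** 2          # tanh itself is np.tanh

def relu(z):           return np.maximum(0, z)
def d_relu(z):         return (z > 0).astype(float)

def leaky_relu(z, a=0.01):   return np.where(z > 0, z, a * z)
def d_leaky_relu(z, a=0.01): return np.where(z > 0, 1.0, a)

z = np.linspace(-3, 3, 7)
print(relu(z), d_relu(z))
```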

-----------------------------------------------------Week 4-------------------------------------------------------
W4-1: Deep L-layer neural network; what is a deep neural network; deep neural network notation; (2019-5-11)
W4-2: Forward Propagation in a Deep Network; (2019-5-12)
W4-3: Getting your matrix dimensions right; parameters wl and bl; vectorized implementation; (2019-5-12)
W4-4: Why deep representations; intuition about deep representation; circuit theory and deep learning; (2019-5-12)
W4-5: Building blocks of deep neural networks; forward and backward functions; (2019-5-12)
W4-6: Forward and Backward Propagation; forward propagation for layer l; backward propagation for layer l; summary; (2019-5-12)
W4-7: Parameters vs Hyperparameters; what are hyperparameters; applied deep learning is a very empirical process; (2019-5-12)
W4-8: What does this have to do with the brain; forward and backward propagation; (2019-5-12)
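
A sketch (mine, not from the log) of the L-layer forward-propagation loop from W4-2/W4-6, using the shape convention from W4-3 (W[l] is (n_l, n_{l-1}), b[l] is (n_l, 1)); ReLU in the hidden layers and a sigmoid output are assumptions for a binary classifier.

```python
import numpy as np

def relu(z):    return np.maximum(0, z)
def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))

def forward(X, params, L):
    """X: (n_0, m). params holds W1..WL and b1..bL with W[l] of shape (n_l, n_{l-1})."""
    A = X
    caches = []
    for l in range(1, L + 1):
        Z = params[f"W{l}"] @ A + params[f"b{l}"]   # b[l] is broadcast over the m columns
        A = sigmoid(Z) if l == L else relu(Z)
        caches.append((Z, A))
    return A, caches

# toy 3-layer network: 4 -> 5 -> 3 -> 1
rng = np.random.default_rng(1)
layer_dims = [4, 5, 3, 1]
params = {}
for l in range(1, len(layer_dims)):
    params[f"W{l}"] = rng.normal(size=(layer_dims[l], layer_dims[l - 1])) * 0.01
    params[f"b{l}"] = np.zeros((layer_dims[l], 1))
AL, _ = forward(rng.normal(size=(4, 6)), params, L=3)
print(AL.shape)   # (1, 6)
```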


-------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------
Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
-----------------------------------------------------Week 1-------------------------------------------------------
W1-1: Train / Dev / Test sets; applied ML is a highly iterative process; train/dev/test sets; mismatched train/test distribution; (2019-5-13)
W1-2: Bias / Variance; bias and variance; high bias and high variance; (2019-5-13)
W1-3: Basic Recipe for Machine Learning; (2019-5-13)
W1-4: Regularization; logistic regression; neural network; (2019-5-13)
W1-5: Why regularization reduces overfitting; how does regularization prevent overfitting; (2019-5-13)
W1-6: Dropout Regularization; implementing dropout; making predictions at test time; (2019-5-13)
W1-7: Understanding Dropout; why does dropout work; (2019-5-13)
W1-8: Other regularization methods; data augmentation; early stopping; (2019-5-13)
W1-9: Normalizing inputs; normalizing training sets; why normalize inputs; (2019-5-13)
W1-10: Vanishing / Exploding gradients; (2019-5-13)
W1-11: Weight Initialization for Deep Networks; single neuron example; (2019-5-13)
W1-12: Numerical approximation of gradients; checking your derivative computation; (2019-5-13)
W1-13: Gradient checking; gradient check for a neural network; gradient checking (grad check); (2019-5-13)
W1-14: Gradient Checking Implementation Notes; (2019-5-13)
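
A minimal sketch of inverted dropout as described under W1-6/W1-7 above: a keep_prob mask is applied during training only, and activations are scaled by 1/keep_prob so nothing changes at test time. Function and parameter names are my own.

```python
import numpy as np

def dropout_forward(A, keep_prob=0.8, training=True, rng=np.random.default_rng(0)):
    """Inverted dropout on an activation matrix A. At test time, return A unchanged."""
    if not training:
        return A, None
    mask = (rng.random(A.shape) < keep_prob).astype(A.dtype)
    A_dropped = A * mask / keep_prob        # scale up so the expected activation is unchanged
    return A_dropped, mask

A = np.ones((3, 4))
A_train, mask = dropout_forward(A, keep_prob=0.5)
A_test, _ = dropout_forward(A, training=False)
print(A_train, A_test, sep="\n")
```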

-----------------------------------------------------Week 2-------------------------------------------------------
W2-1: Mini-batch gradient descent; batch vs mini-batch; mini-batch gradient descent; (2019-5-15)
W2-2: Understanding mini-batch gradient descent; training with mini batch gradient descent; choosing your mini-batch size; (2019-5-15)
W2-3: Exponentially weighted averages; temperature in London; exponentially weighted averages; (2019-5-15)
W2-4: Understanding exponentially weighted averages; exponentially weighted averages; implementing exponentially weighted averages; (2019-5-15)
W2-5: Bias correction in exponentially weighted averages; bias correction; (2019-5-15)
W2-6: Gradient descent with momentum; gradient descent example; implementation details; (2019-5-15)
W2-7: RMSprop; RMSprop; (2019-5-15)
W2-8: Adam optimization algorithm; hyperparameters choice; (2019-5-15)
W2-9: Learning rate decay; other learning rate decay methods; (2019-5-15)
W2-10: The problem of local optima; local optima in neural networks; problem of plateaus; (2019-5-15)
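
A compact sketch of the Adam update from W2-8, tying together the exponentially weighted averages (W2-3/W2-4), bias correction (W2-5), momentum (W2-6) and RMSprop (W2-7) entries above; the default hyperparameters here are the commonly cited ones, not quoted from the log.

```python
import numpy as np

def adam_update(w, dw, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step for a parameter array w with gradient dw (step counter t starts at 1)."""
    m = beta1 * m + (1 - beta1) * dw            # momentum-style EWA of gradients
    v = beta2 * v + (1 - beta2) * dw ** 2       # RMSprop-style EWA of squared gradients
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# toy usage: minimize f(w) = ||w||^2, whose gradient is 2w
w = np.array([1.0, -2.0]); m = np.zeros_like(w); v = np.zeros_like(w)
for t in range(1, 201):
    w, m, v = adam_update(w, 2 * w, m, v, t, lr=0.05)
print(w)   # close to [0, 0]
```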

-----------------------------------------------------Week 3-------------------------------------------------------
W3-1: Tuning process; hyperparameters; try random values, don’t use a grid; coarse to fine; (2019-5-16)
W3-2: Using an appropriate scale to pick hyperparameters; picking hyperparameters at random; appropriate scale for hyperparameters; hyperparameters for exponentially weighted averages; (2019-5-16)
W3-3: Hyperparameters tuning in practice: Pandas vs. Caviar; re-test hyperparameters occasionally; babysitting one model; training many models in parallel; (2019-5-17)
W3-4: Normalizing activations in a network; normalizing inputs to speed up learning; implementing batch norm; (2019-5-17)
W3-5: Fitting Batch Norm into a neural network; adding batch norm to a network; working with mini-batches; implementing gradient descent; (2019-5-17)
W3-6: Why does Batch Norm work; learning on shifting input distribution; why this is a problem with neural networks; batch norm as regularization; (2019-5-17)
W3-7: Batch Norm at test time; batch norm at test time; (2019-5-17)
W3-8: Softmax Regression; recognizing cats, dogs and baby chicks; softmax layer; (2019-5-17)
W3-9: Training a softmax classifier; understanding softmax; loss function; gradient descent with softmax; (2019-5-17)
W3-10: Deep learning frameworks; deep learning frameworks; (2019-5-17)
W3-11: TensorFlow; motivating problem; code example; (2019-5-17)
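
A short sketch of the softmax layer and its cross-entropy loss from W3-8/W3-9 above; the max-subtraction for numerical stability is my addition rather than something taken from the log.

```python
import numpy as np

def softmax(Z):
    """Z: (C, m) logits -> column-wise probabilities that sum to 1."""
    Z = Z - Z.max(axis=0, keepdims=True)   # stability shift; does not change the result
    expZ = np.exp(Z)
    return expZ / expZ.sum(axis=0, keepdims=True)

def cross_entropy(A, Y):
    """A: (C, m) predicted probabilities, Y: (C, m) one-hot labels."""
    m = Y.shape[1]
    return -np.sum(Y * np.log(A + 1e-12)) / m

Z = np.array([[2.0, 0.5], [1.0, 0.1], [0.1, 3.0]])   # 3 classes, 2 examples
Y = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
A = softmax(Z)
print(A.sum(axis=0), cross_entropy(A, Y))
```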


-------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------
Structuring Machine Learning Projects
-----------------------------------------------------Week 1-------------------------------------------------------
W1-1: Why ML Strategy; motivating example; (2019-5-18)
W1-2: Orthogonalization; TV tuning example; chains of assumptions in ML; (2019-5-18)
W1-3: Single number evaluation metric; using a single number evaluation metric; another example; (2019-5-18)
W1-4: Satisficing and Optimizing metric; another cat classification example; (2019-5-18)
W1-5: Train/dev/test distributions; cat classification dev/test sets; true story; guideline; (2019-5-18)
W1-6: Size of the dev and test sets; old way of splitting data; size of test set; (2019-5-18)
W1-7: When to change dev/test sets and metrics; cat dataset examples; orthogonalization for cat pictures, anti-porn; another example; (2019-5-18)
W1-8: Why human-level performance; why compare to human-level performance; (2019-5-18)
W1-9: Avoidable bias; cat classification example; (2019-5-18)
W1-10: Understanding human-level performance; human-level error as a proxy for Bayes error; error analysis example; (2019-5-18)
W1-11: Surpassing human-level performance; problems where ML significantly surpasses human-level performance; (2019-5-19)
W1-12: Improving your model performance; the two fundamental assumptions of supervised learning; reducing bias and variance; (2019-5-19)

-----------------------------------------------------Week 2-------------------------------------------------------
W2-1: Carrying out error analysis; look at dev examples to evaluate ideas; evaluate multiple ideas in parallel; (2019-5-19)
W2-2: Cleaning up incorrectly labeled data; incorrectly labeled examples; error analysis; correcting incorrect dev/test set examples; (2019-5-19)
W2-3: Build your first system quickly, then iterate; speech recognition example; (2019-5-19)
W2-4: Training and testing on different distributions; cat app example; speech recognition example; (2019-5-19)
W2-5: Bias and Variance with mismatched data distributions; cat classifier example; bias/variance on mismatched training and dev/test sets; more general formulation; (2019-5-19)
W2-6: Addressing data mismatch; addressing data mismatch; artificial data synthesis; (2019-5-19)
W2-7: Transfer learning; transfer learning; when transfer learning makes sense; (2019-5-19)
W2-8: Multi-task learning; simplified autonomous driving example; neural network architecture; when multi-task learning makes sense; (2019-5-20)
W2-9: What is end-to-end deep learning; what is end-to-end learning; face recognition; more examples; (2019-5-20)
W2-10: Whether to use end-to-end deep learning; pros and cons of end-to-end deep learning; applying end-to-end deep learning; (2019-5-20)


-------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------
Convolutional Neural Networks
-----------------------------------------------------Week 1-------------------------------------------------------
W1-1: Computer Vision; computer vision problems; deep learning on large images; (2019-5-20)
W1-2: Edge Detection Example; computer vision problem; vertical edge detection; (2019-5-21)
W1-3: More Edge Detection; vertical edge detection examples; learning to detect edges; (2019-5-21)
W1-4: Padding; valid and same convolutions; (2019-5-21)
W1-5: Strided Convolutions; strided convolution; summary of convolutions; cross-correlation vs convolution; (2019-5-21)
W1-6: Convolutions Over Volume; convolutions on RGB images; multiple filters; (2019-5-21)
W1-7: One Layer of a Convolutional Network; example of a layer; number of parameters in one layer; summary of notation; (2019-5-22)
W1-8: Simple Convolutional Network Example; example ConvNet; types of layer in a convolutional network; (2019-5-22)
W1-9: Pooling Layers; pooling layer, max pooling; pooling layer, average pooling; summary of pooling; (2019-5-22)
W1-10: CNN Example; neural network example; (2019-5-22)
W1-11: Why Convolutions; putting it together; (2019-5-22)
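
A naive single-channel convolution sketch (mine, not from the log) matching the padding/stride arithmetic in W1-4/W1-5 above: the output size is floor((n + 2p - f)/s) + 1, and, as W1-5 notes, the deep-learning "convolution" is technically cross-correlation (no filter flip). Real frameworks do this far more efficiently.

```python
import numpy as np

def conv2d(image, kernel, pad=0, stride=1):
    """Single-channel 'convolution' (cross-correlation, no flip). image: (n, n), kernel: (f, f)."""
    n, f = image.shape[0], kernel.shape[0]
    out = (n + 2 * pad - f) // stride + 1
    x = np.pad(image, pad)
    result = np.zeros((out, out))
    for i in range(out):
        for j in range(out):
            patch = x[i * stride:i * stride + f, j * stride:j * stride + f]
            result[i, j] = np.sum(patch * kernel)
    return result

img = np.arange(36, dtype=float).reshape(6, 6)
vertical_edge = np.array([[1., 0., -1.]] * 3)       # 3x3 vertical-edge filter from W1-2
print(conv2d(img, vertical_edge).shape)             # (4, 4): (6 - 3)/1 + 1
print(conv2d(img, vertical_edge, pad=1).shape)      # (6, 6): a "same" convolution
```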

-----------------------------------------------------Week 2-------------------------------------------------------
W2-1: Why look at case studies; outline; (2019-5-23)
W2-2: Classic Networks; LeNet-5; AlexNet; VGG-16; (2019-5-23)
W2-3: ResNets; residual block; (2019-5-23)
W2-4: Why ResNets Work; why do residual networks work; ResNet; (2019-5-23)
W2-5: Networks in Networks and 1x1 Convolutions; what does a 1x1 convolution do; (2019-5-23)
W2-6: Inception Network Motivation; motivation for inception network; the problem of computational cost; using 1x1 convolutions; (2019-5-23)
W2-7: Inception Network; inception module; inception network; (2019-5-23)
W2-8: Using Open-Source Implementation; (2019-5-23)
W2-9: Transfer Learning; transfer learning; (2019-5-23)
W2-10: Data Augmentation; common augmentation method; color shifting; implementing distortions during training; (2019-5-23)
W2-11: State of Computer Vision; data vs hand-engineering; tips for doing well on benchmarks/winning competitions; use open source code; (2019-5-23)

-----------------------------------------------------Week 3-------------------------------------------------------
W3-1: Object Localization; what are localization and detection; classification with localization; defining the target label y; (2019-5-24)
W3-2: Landmark Detection; landmark detection; (2019-5-24)
W3-3: Object Detection; car detection example; sliding windows detection; (2019-5-24)
W3-4: Convolutional Implementation of Sliding Windows; turning FC layer into convolutional layers; convolution implementation of sliding windows; (2019-5-24)
W3-5: Bounding Box Predictions; output accurate bounding boxes; YOLO algorithm; specify the bounding boxes; (2019-5-24)
W3-6: Intersection Over Union; evaluating object localization; (2019-5-24)
W3-7: Non-max Suppression; non-max suppression example; non-max suppression algorithm; (2019-5-24)
W3-8: Anchor Boxes; overlapping object; anchor box algorithm; another box example; (2019-5-24)
W3-9: YOLO Algorithm; training; (2019-5-24)
W3-10: (Optional) Region Proposals; region proposal, R-CNN; faster algorithms; (2019-5-25)
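
A sketch of intersection over union (W3-6) and the greedy non-max suppression loop (W3-7) from the entries above; the (x1, y1, x2, y2) box format and the threshold value are assumptions of mine.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union for boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Greedily keep the highest-scoring box, drop boxes that overlap it too much, repeat."""
    order = list(np.argsort(scores)[::-1])
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
print(non_max_suppression(boxes, scores=np.array([0.9, 0.8, 0.7])))   # [0, 2]
```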

-----------------------------------------------------Week 4-------------------------------------------------------
W4-1: What is face recognition; face recognition; face verification vs face recognition; (2019-5-25)
W4-2: One Shot Learning; learning a similarity function; (2019-5-25)
W4-3: Siamese Network; Siamese network; goal of learning; (2019-5-25)
W4-4: Triplet Loss; learning objective; loss function; choosing the triplets A, P, N; (2019-5-25)
W4-5: Face Verification and Binary Classification; learning the similarity function; face verification supervised learning; (2019-5-25)
W4-6: What is neural style transfer; neural style transfer; (2019-5-25)
W4-7: What are deep ConvNets learning; visualizing what a deep network is learning; visualizing deep layers; visualizing deep layers, Layer 3; (2019-5-25)
W4-8: Cost Function; neural style transfer cost function; find the generated image G; (2019-5-25)
W4-9: Content Cost Function; (2019-5-25)
W4-10: Style Cost Function; meaning of the style of an image; intuition about style of an image; style matrix; style cost function; (2019-5-25)
W4-11: 1D and 3D Generalizations; convolutions in 2D and 1D; 3D data; (2019-5-25)
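
The triplet loss from W4-4 above, written as a short numpy function over embeddings f(A), f(P), f(N) with margin alpha: max(||f(A)-f(P)||^2 - ||f(A)-f(N)||^2 + alpha, 0). The embedding vectors in the usage line are invented purely for illustration.

```python
import numpy as np

def triplet_loss(f_a, f_p, f_n, alpha=0.2):
    """Batch triplet loss: rows are embeddings of anchor, positive, negative examples."""
    pos = np.sum((f_a - f_p) ** 2, axis=-1)
    neg = np.sum((f_a - f_n) ** 2, axis=-1)
    return np.mean(np.maximum(pos - neg + alpha, 0.0))

rng = np.random.default_rng(0)
anchor, positive, negative = (rng.normal(size=(4, 128)) for _ in range(3))
print(triplet_loss(anchor, positive, negative))
```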


-------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------
Sequence Models
-----------------------------------------------------Week 1-------------------------------------------------------
W1-1: Why sequence models; examples of sequence data; (2019-5-26)
W1-2: Notation; motivating example; representing words; (2019-5-26)
W1-3: Recurrent Neural Network Model; why not a standard network; recurrent neural network; forward propagation; simplified RNN notation; (2019-5-26)
W1-4: Backpropagation through time; forward propagation and back propagation; (2019-5-26)
W1-5: Different types of RNNs; examples of sequence data; examples of RNN architectures; summary of RNN types; (2019-5-26)
W1-6: Language model and sequence generation; what is language modeling; language modelling with an RNN; RNN model; (2019-5-26)
W1-7: Sampling novel sequences; sampling a sequence from a trained RNN; character-level language model; sequence generation; (2019-5-26)
W1-8: Vanishing gradients with RNNs; vanishing gradients with RNNs; (2019-5-26)
W1-9: Gated Recurrent Unit (GRU); RNN unit; GRU (simplified); full GRU; (2019-5-26)
W1-10: Long Short Term Memory (LSTM); GRU and LSTM; LSTM in pictures; (2019-5-26)
W1-11: Bidirectional RNN; getting information from the future; bidirectional RNN (BRNN); (2019-5-26)
W1-12: Deep RNNs; Deep RNN examples; (2019-5-26)
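
A sketch of a single RNN forward step in the notation from W1-3 above, a<t> = tanh(Waa a<t-1> + Wax x<t> + ba) and y<t> = softmax(Wya a<t> + by); the dimensions and parameter names below are chosen by me for the example.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def rnn_step(x_t, a_prev, params):
    """One time step: x_t (n_x, m), a_prev (n_a, m) -> a_t (n_a, m), y_t (n_y, m)."""
    Waa, Wax, Wya, ba, by = (params[k] for k in ("Waa", "Wax", "Wya", "ba", "by"))
    a_t = np.tanh(Waa @ a_prev + Wax @ x_t + ba)
    y_t = softmax(Wya @ a_t + by)
    return a_t, y_t

rng = np.random.default_rng(0)
n_x, n_a, n_y, m = 3, 5, 2, 4
params = {"Waa": rng.normal(size=(n_a, n_a)), "Wax": rng.normal(size=(n_a, n_x)),
          "Wya": rng.normal(size=(n_y, n_a)), "ba": np.zeros((n_a, 1)), "by": np.zeros((n_y, 1))}
a, y = rnn_step(rng.normal(size=(n_x, m)), np.zeros((n_a, m)), params)
print(a.shape, y.shape)   # (5, 4) (2, 4)
```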

-----------------------------------------------------Week 2-------------------------------------------------------
W2-1: Word Representation; word representation; featurized representation, word embedding; visualizing word embedding; (2019-5-28)
W2-2: Using word embeddings; named entity recognition example; transfer learning and word embeddings; relation to face encoding; (2019-5-28)
W2-3: Properties of word embeddings; analogies; analogies using word vectors; cosine similarity; (2019-5-28)
W2-4: Embedding matrix; embedding matrix; (2019-5-28)
W2-5: Learning word embeddings; neural language model; other context/target pairs; (2019-5-28)
W2-6: Word2Vec; skip-grams; model; problems with softmax classification; (2019-5-28)
W2-7: Negative Sampling; defining a new learning problem; model; selecting negative examples; (2019-5-28)
W2-8: GloVe word vectors; global vectors for word representation; model; a note on the featurization view of word embeddings; (2019-5-28)
W2-9: Sentiment Classification; sentiment classification problem; RNN for sentiment classification; (2019-5-29)
W2-10: Debiasing word embeddings; the problem of bias in word embeddings; addressing bias in word embeddings; (2019-5-29)
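
A sketch of cosine similarity and the man:woman :: king:? analogy lookup from W2-3 above; the toy 2-D "embeddings" are invented solely so the example runs, they are not real word vectors.

```python
import numpy as np

def cosine_similarity(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# toy embeddings, invented for illustration only
E = {"man":   np.array([1.0, 0.0]),
     "woman": np.array([1.0, 1.0]),
     "king":  np.array([3.0, 0.2]),
     "queen": np.array([3.0, 1.2]),
     "apple": np.array([-1.0, 0.5])}

# analogy: man -> woman as king -> ?  (argmax cosine similarity with e_woman - e_man + e_king)
target = E["woman"] - E["man"] + E["king"]
best = max((w for w in E if w not in ("man", "woman", "king")),
           key=lambda w: cosine_similarity(E[w], target))
print(best)   # queen
```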

-----------------------------------------------------Week 3-------------------------------------------------------
W3-1: Basic Models; sequence to sequence model; image captioning; (2019-5-30)
W3-2: Picking the most likely sentence; machine translation as building a conditional language model; finding the most likely translation; why not a greedy search; (2019-5-30)
W3-3: Beam Search; beam search algorithm; beam search (B=3); (2019-5-30)
W3-4: Refinements to Beam Search; length normalization; beam search discussion; (2019-5-30)
W3-5: Error analysis in beam search; example; error analysis on beam search; error analysis process; (2019-5-30)
W3-6: Bleu Score; evaluating machine translation; Bleu score on bigrams; Bleu score on unigrams; Bleu details; (2019-5-30)
W3-7: Attention Model Intuition; the problem of long sequences; attention model intuition; (2019-5-31)
W3-8: Attention Model; computing attention; attention examples; (2019-5-31)
W3-9: Speech recognition; speech recognition problem; attention model for speech recognition; CTC cost for speech recognition; (2019-5-31)
W3-10: Trigger Word Detection; what is trigger word detection; trigger word detection algorithm; (2019-5-31)
W3-11: Conclusion and thank you; specialization outline; (2019-5-31)

