burton2000 / cs231n-2017
Completed the CS231n 2017 spring assignments from Stanford University.
Hi, I am also doing assignment 3 of CS231n, yet I didn't get the correct numbers for the function rnn_backward. My code looks like this:
for t in reversed(range(T)):
    dh[:, t, :] = dprev_h + dh[:, t, :]
    dx[:, t, :], dprev_h, dWx_temp, dWh_temp, db_temp = rnn_step_backward(dh[:, t, :], cache[t])
    dWx += dWx_temp
    dWh += dWh_temp
    db += db_temp
My cache is also a list of tuples of numpy arrays, and everything else is the same, yet I still don't get the correct numbers. May I ask what the reason for this weird result could be?
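For comparison, here is a minimal self-contained sketch of a vanilla tanh RNN backward pass. This is my own reconstruction under the standard assignment API, not the official solution; rnn_step_forward and rnn_step_backward are reimplemented here. The loop quoted above looks right, so the usual culprits are forgetting to initialize dprev_h to zeros before the loop, or a bug inside rnn_step_backward itself:

```python
import numpy as np

def rnn_step_forward(x, prev_h, Wx, Wh, b):
    # One tanh RNN step, matching the assignment's API (my reconstruction).
    next_h = np.tanh(x @ Wx + prev_h @ Wh + b)
    cache = (x, prev_h, Wx, Wh, next_h)
    return next_h, cache

def rnn_step_backward(dnext_h, cache):
    x, prev_h, Wx, Wh, next_h = cache
    dtanh = dnext_h * (1.0 - next_h ** 2)  # backprop through tanh
    dx = dtanh @ Wx.T
    dprev_h = dtanh @ Wh.T
    dWx = x.T @ dtanh
    dWh = prev_h.T @ dtanh
    db = dtanh.sum(axis=0)
    return dx, dprev_h, dWx, dWh, db

def rnn_backward(dh, cache):
    # dh has shape (N, T, H): upstream gradients at every time step.
    N, T, H = dh.shape
    D = cache[0][0].shape[1]
    dx = np.zeros((N, T, D))
    dWx = np.zeros((D, H))
    dWh = np.zeros((H, H))
    db = np.zeros(H)
    dprev_h = np.zeros((N, H))  # must start at zero before the loop
    for t in reversed(range(T)):
        # Gradient into step t: upstream dh plus the carry from step t + 1.
        dcurrent = dh[:, t, :] + dprev_h
        dx[:, t, :], dprev_h, dWx_t, dWh_t, db_t = rnn_step_backward(dcurrent, cache[t])
        dWx += dWx_t
        dWh += dWh_t
        db += db_t
    dh0 = dprev_h
    return dx, dh0, dWx, dWh, db
```

A quick numerical gradient check against this implementation is the fastest way to localize where your version diverges.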
How do I add my own .jpg file and check the caption the algorithm outputs for it?
Hello!
First of all, nice work.
Secondly, this might be a bug in the original assignment but:
AFAIK logits are the scores after applying softmax.
For the Vanilla GAN (the first one you implemented), the lines
logits_fake = D(fake_images)
and
logits_real = D(2 * (real_data - 0.5)).type(dtype)
in the function run_a_gan (the one that trains the GAN) bother me, since those are scores and not logits...
It does work though (generated images are not that bad).
So what do you think? Is it a bug in the original assignment, or am I missing something here? :)
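For what it's worth, the usual convention is the opposite: "logits" are the raw pre-sigmoid (or pre-softmax) scores, so the discriminator's raw outputs are logits, and a with-logits loss applies the sigmoid internally. A small numpy sketch (my own, not from the assignment; the numbers are made up) showing that a numerically stable with-logits BCE equals sigmoid followed by plain BCE:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_with_logits(logits, targets):
    # Numerically stable BCE on raw scores, the same trick libraries like
    # PyTorch use internally: max(x, 0) - x*z + log(1 + exp(-|x|)).
    return np.mean(np.maximum(logits, 0) - logits * targets
                   + np.log1p(np.exp(-np.abs(logits))))

def bce(probs, targets):
    # Plain BCE on probabilities (i.e. after sigmoid has been applied).
    return np.mean(-targets * np.log(probs) - (1 - targets) * np.log(1 - probs))

logits = np.array([-2.0, 0.5, 3.0])   # made-up discriminator scores
targets = np.array([0.0, 1.0, 1.0])   # made-up real/fake labels

# Feeding raw scores to a with-logits loss gives the same answer as
# sigmoid followed by plain BCE, so calling D's outputs "logits" is consistent.
assert np.allclose(bce_with_logits(logits, targets), bce(sigmoid(logits), targets))
```

So the variable names in run_a_gan are consistent with the standard terminology, which is probably why training works fine.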
Hi, thanks for sharing your solution.
But I have a question about the loss definition in neural_net.py, lines 117-118:
correct_class_scores = np.choose(y, shift_scores.T) # Size N vector
loss = -correct_class_scores + np.log(np.sum(np.exp(shift_scores), axis=1))
Why is correct_class_scores added to the loss? Thanks!
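Note the leading minus sign: the correct-class score is subtracted, not added, which makes the expression exactly -log softmax of the correct class. A small runnable sketch with made-up numbers:

```python
import numpy as np

# Toy (already shifted) scores for N=2 samples, C=3 classes;
# y holds the correct class index for each sample.
shift_scores = np.array([[1.0, 2.0, 0.5],
                         [0.2, 0.1, 3.0]])
y = np.array([1, 2])

# np.choose(y, shift_scores.T) picks shift_scores[i, y[i]] for each row i.
correct_class_scores = np.choose(y, shift_scores.T)

# Per-sample loss: -s_y + log(sum_j exp(s_j)) = -log softmax_y.
# The correct-class score enters with a minus sign.
loss = -correct_class_scores + np.log(np.sum(np.exp(shift_scores), axis=1))

# Same numbers via explicit softmax probabilities.
probs = np.exp(shift_scores) / np.sum(np.exp(shift_scores), axis=1, keepdims=True)
assert np.allclose(loss, -np.log(probs[np.arange(2), y]))
```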
Hello! First of all, many thanks for your brilliant answers to these assignments. They really helped me a lot!
I would like to ask a question about neural_net.py. When db1 is calculated, I noticed that 2 * reg * b1 hasn't been added; db2 has the same problem. The original code is as below.
# Backprop dRelu1 to calculate db1.
db1 = dRelu1 * 1
grads['b1'] = np.sum(db1, axis=0)
However, according to the forward process, when calculating the loss we have added reg * np.sum(b1 * b1). The original code is as below.
loss += reg * (np.sum(W1*W1) + np.sum(W2*W2) + np.sum(b1*b1) + np.sum(b2*b2))
Is this part missing during backpropagation? I added 2 * reg * b1 to my code, and the relative error of the analytic gradients for b1 and b2 is still less than 1e-8.
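For reference, the gradient of the reg * np.sum(b1 * b1) term really is 2 * reg * b1, so if biases are regularized in the forward pass, that term belongs in db1. A quick numerical check (my own toy sketch, not the assignment code); since biases are usually near zero, omitting the term often still passes a loose gradient check:

```python
import numpy as np

np.random.seed(1)
reg = 0.05
b1 = 0.01 * np.random.randn(4)   # biases are typically near zero

def reg_loss(b):
    # The bias part of the regularization term from the forward pass.
    return reg * np.sum(b * b)

# Central-difference numerical gradient of the regularization term.
eps = 1e-6
num_grad = np.array([
    (reg_loss(b1 + eps * np.eye(4)[i]) - reg_loss(b1 - eps * np.eye(4)[i])) / (2 * eps)
    for i in range(4)
])

# The analytic gradient is 2 * reg * b1, so backprop should add this to db1.
assert np.allclose(num_grad, 2 * reg * b1, atol=1e-8)
```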
I'm starting this course just by watching it on YouTube, so I don't know of a way to verify my work. Do you know if your solutions are correct? (Not to suggest that they aren't; I'm just curious.)
Hey, I have a question about k_nearest_neighbor.py, line 139: why does X_squared need a new axis while Y_squared doesn't?
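The reason is broadcasting: X_squared has one entry per test point and Y_squared one per training point, so one of them needs an extra axis to form the (num_test, num_train) grid. A toy sketch (the shapes are made up; the idea matches the standard no-loop distance computation):

```python
import numpy as np

np.random.seed(0)
X = np.random.randn(3, 5)   # 3 "test" points
Y = np.random.randn(4, 5)   # 4 "train" points

X_squared = np.sum(X ** 2, axis=1)   # shape (3,)
Y_squared = np.sum(Y ** 2, axis=1)   # shape (4,)

# X_squared[:, np.newaxis] has shape (3, 1); adding the (4,) Y_squared then
# broadcasts to (3, 4), one entry per (test, train) pair. Without the new
# axis, a (3,) + (4,) sum would not broadcast at all.
dists = np.sqrt(X_squared[:, np.newaxis] + Y_squared - 2 * X @ Y.T)

# Spot-check one entry against the direct definition.
assert np.isclose(dists[1, 2], np.linalg.norm(X[1] - Y[2]))
```

Swapping the roles (new axis on Y_squared instead) would just transpose the result, so the choice follows the convention that rows index test points.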
I'm not able to get the results given in the paper when using VGG-19 (style transfer assignment).