
sciml / scimlbook

1.8K stars · 67 watchers · 329 forks · 112.37 MB

Parallel Computing and Scientific Machine Learning (SciML): Methods and Applications (MIT 18.337J/6.338J)

Home Page: https://book.sciml.ai/

Languages: HTML 96.99%, Julia 0.46%, CSS 0.78%, JavaScript 1.77%, Smarty 0.01%
Topics: differential-equations, scientific-machine-learning, neural-networks, numerical-methods, gpu-computing, stiff-equations, lecture-notes, performance-engineering, parallelism, scientific-simulators

scimlbook's Introduction

Parallel Computing and Scientific Machine Learning (SciML): Methods and Applications


This book is a compilation of lecture notes from the MIT Course 18.337J/6.338J: Parallel Computing and Scientific Machine Learning. Links to the old notes https://mitmath.github.io/18337 will redirect here.

This repository is meant to be a living document, continuously updated with the latest methods from the field of scientific machine learning and the latest techniques for high-performance computing.

To view this book, go to book.sciml.ai.

Editing the SciML Book

This is a Franklin.jl site. Much of the material originated as Julia Markdown documents (*.jmd). Each of these documents is Weave.jl-ed with a custom template, and the resulting HTML is inserted into a corresponding markdown file. Updating the files in _weave automatically updates the webpages.
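For contributors, the core of that step looks roughly like the following (a minimal sketch; the real build wraps Weave.jl with the site's custom template, and the file path is just one example from _weave):

using Weave
# Weave a Julia Markdown document; doctype "github" emits markdown with the
# evaluated code results inlined, which Franklin then renders to HTML.
weave("_weave/lecture03/sciml.jmd"; doctype = "github")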

The theme is adapted from Tufte CSS.

scimlbook's People

Contributors

00krishna, alanedelman, amarkpayne, anandijain, arnostrouwen, bowenszhu, chrisrackauckas, chunjiw, claforte, dpsanders, fercook, georgesterpu, gregoirepourtier, gustavdelius, jvaverka, lalitchauhan56, lmilechin, moelf, oxinabox, pcatach, pitmonticone, rishikesh2338, sciemon, simeonschaub, stephanie-fu, vchuravy, vincentmeijer, vleon1234, wi11dey, wkirgsn


scimlbook's Issues

Empty training data

Hi Chris (@ChrisRackauckas),

I am trying to follow your tutorial on YouTube; it's really nice and helpful. I got a bit lost at data = Iterators.repeated((), 5000). In the Hooke's law example, I was expecting the training data to be (position, force) pairs (I am quite new to NNs). Could you explain a bit? Thanks!
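For context, a minimal sketch of why the empty tuples work, assuming the older implicit-parameters Flux API used in the tutorial (NN, x_train, and y_train here are hypothetical stand-ins): Flux.train! splats each element of data into the loss, so a zero-argument loss that closes over the data is paired with empty tuples, and repeated((), 5000) simply means "take 5000 gradient steps".

using Flux
loss() = sum(abs2, NN(x_train) .- y_train)  # the loss closes over the data itself
data = Iterators.repeated((), 5000)         # 5000 iterations, no per-batch data
Flux.train!(loss, Flux.params(NN), data, ADAM(0.1))  # calls loss() each step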

VS Code / Atom

Just curious: which one will you be using for the next iteration of the class, VS Code or Atom?

Chapter enumeration

It seems that Ch. 12 went missing.
Furthermore, it would be nice to have the chapter number displayed together with the heading at the top (or to indicate the current chapter in some other way) to make navigation easier (maybe this is a Weave issue).

Other than that, great course and thanks a lot for sharing!!

Faster Feedback Loop

Make it easier for readers to submit new issues detailing any mistakes, typos, or problems navigating the site.

  • Add link to a standard issue template
  • Add template for new issues to include
    • page source
    • mistake, typo, or problem
    • suggested correction

Chapter 5 has a lot of code with no obvious point

Everything from "We can make this faster by preallocating the cache vector"
to just before "And let's get the mean of the trajectory for each of the parameters."
hardly speeds anything up, so one doesn't pick up the point of any of it.
I feel my time was wasted looking at all this code.

Also, all the parallel speedups that follow are unimpressive -- downright discouraging.
Chapter 5 does make a few good points:

  • threads having their own stack
  • parallel code can introduce slowdowns
  • parallelism pays off only on big problems

but otherwise the examples are sadly just horrible.
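For reference, the pattern that span of Chapter 5 is demonstrating, reduced to a minimal sketch (hypothetical function, not the book's code): the in-place version reuses a preallocated buffer, avoiding a fresh allocation on every call, which only pays off noticeably once the problem is large enough.

using BenchmarkTools
function double!(out, x)            # writes into a preallocated buffer
    @inbounds for i in eachindex(x)
        out[i] = 2x[i]
    end
    return out
end
x = rand(10^6); out = similar(x)
@btime double!($out, $x)            # zero allocations per call
@btime 2 .* $x                      # allocates a new array on every call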

Images

This is used as a hack to store images for the Weave files.

search bar

I was trying to find that picture you used in a number of presentations to explain stiff dynamics, and I couldn't find it.

Two small issues

In 03: SciML Intro, I tried out the code snippets and examples. I came across the following issues, which should probably be corrected:

  1. The code sequence
using InteractiveUtils
@which Dense(10,32,tanh)

produced the following output on my Windows 11 machine with Julia 1.8.3:

julia> using InteractiveUtils

julia> @which Dense(10,32,tanh)
Dense(in::Integer, out::Integer, σ; kw...) in Flux at C:\Users\Andreas\.julia\packages\Flux\ZdbJr\src\deprecations.jl:63

which differs from the present text (note also the deprecations.jl file in the path)

  2. The code sequence
f = Dense(32,32,tanh)
f(rand(32))

generated no errors:

julia> f = Dense(32,32,tanh)
Dense(32 => 32, tanh)  # 1_056 parameters

julia> f(rand(32))
32-element Vector{Float64}:
  0.9062770976213805
  0.8425807828837493
  0.26792840252578065
  0.666140647367476
  0.4405276873148863
  0.19332285187403578
 -0.13799441384013086
  0.7189609454578357
  0.2718110166518423
 -0.6121470269007114
  0.1182030593450979
 -0.5633580380133035
  0.10511116299410357
  ⋮
 -0.6040242147542774
  0.2563146309890773
 -0.6861582833270895
 -0.8277344376601083
 -0.6012511813196918
  0.27191114186414983
 -0.01659445287046109
  0.15963767325232198
  0.6874323041110563
 -0.7743081464822815
 -0.11817069631320432
 -0.43434795650833513

Due date holdover from Fall 2019?

There's a line in the Final Project section that says "Final project topics must be declared by October 18th with a 1 page extended abstract." The commit history shows this line has been there since Fall 2019, and it conflicts with the October 30th due date for the project proposal given earlier in the README.

CITATION.cff file

Hi,

I have found this (live) book very useful and would like to cite it in my Master's research essay. It would be great if there were a CITATION.cff file so that I knew how I should cite this work.

Thank you kindly!
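For reference, a minimal CITATION.cff sketch (the field values below are placeholders, not confirmed citation metadata for this book):

cff-version: 1.2.0
message: "If you use this book, please cite it as below."
title: "Parallel Computing and Scientific Machine Learning (SciML): Methods and Applications"
authors:
  - family-names: Rackauckas
    given-names: Christopher
url: "https://book.sciml.ai/"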

Clarify Navigation: Chapter Number + Title

Breaking out a separate request from #48:

it would be nice to have the number of a Chapter displayed together with the heading at the top (or indicate the current chapter in some other way) to make navigation easier (maybe this is an issue of Weave).

This would indeed improve the ability to navigate the site, since currently the only clue as to which chapter a note belongs to is in the URL, e.g., book.sciml.ai/notes/<chapter number>/.

Indexing into Float64 calculation

In this line in Lecture 2, you calculate val = A[i,j] + B[i,j], which should come out as a Float64. In the next line, you index into val, the Float64, which somehow works without erroring.
https://github.com/mitmath/18337/blob/cdd7b2078048d83ff1180f7c8832ff2efb3ad058/lecture2/optimizing.jmd#L157

The function compiles and runs correctly; however, I don't know whether the indexing is necessary or just left over from a previous version of the function. In addition, at least on my machine, @btime reports it as faster with the indexing removed.
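For context, a quick demonstration of why the indexing doesn't error: Julia numbers support getindex and behave like zero-dimensional containers, so val[1] just returns val.

julia> val = 1.5 + 2.5
4.0

julia> val[1]    # scalars are indexable; this is a no-op, not an error
4.0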

Fix spelling in Chapter 14

I am enjoying reading the book, which builds up the necessary mathematical background. Thanks for the content.
While reading, I found some mistakes that are worth correcting.
Correction
I think this in chapter 14:
What ML can learning from SciComp: Stability of Convolutional RNNs
should be

What ML can learn from SciComp: Stability of Convolutional RNNs

Redefinition of `A`

This section

struct A
    x
end
(a::A)(y) = a.x + y
a = A(2)
a(3)

currently renders with the error ERROR: cannot declare A constant; it already has a value. Since these exact lines work fine on their own, I suspect it may be an issue either with Weave or with a definition in another section (there's an A in section 2 as well).
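For context, a minimal reproduction of that error in a fresh session, consistent with an earlier non-const global A (such as the one in section 2) still being in scope when the struct is defined:

julia> A = rand(2, 2);  # an earlier, non-const definition of A

julia> struct A
           x
       end
ERROR: cannot declare A constant; it already has a value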

Also, thanks for the book; excited to catch up on what's happening in SciML!

missing log in https://book.sciml.ai/notes/16/ equation (2)

Suggest Correction
A clear and concise description of what corrections are needed.

Identify

  • Chapter number
  • Section (lectures, notes, link, etc.)

Suggested Correction
A clear and concise description of what would correct this issue.

Screenshots
If applicable, add screenshots to help explain.

Additional context
Add any other context about the problem here.

What is E in Chapter 9 - Adaptive Time Stepping?

I'm assuming $\text{E}$ here is the residual function value, i.e. $g(u_i) = \text{E}$. Is this correct? And is it scalar or in vector form?

Does the relative tolerance $\tau_r$ ever take into account the scaling of $u$? For example, consider a hydraulic problem with pressure values on the order of 1e7 and valve displacements on the order of 1e-6. If $\text{E}$ is in vector form, is $q$ computed as norm( E ./ (tau_r*u + tau_a) )?
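For reference, one common convention (a sketch of the standard Hairer-style scaled norm, not necessarily the book's exact definition): E is the vector of componentwise local error estimates, each component is scaled by its own tolerance, and the step is accepted when q <= 1, so large-magnitude components like the 1e7 pressures get their own scale through tau_r*|u_i|.

# E, u, tau_a, tau_r as in the question above
q = sqrt(sum(abs2, E ./ (tau_a .+ tau_r .* abs.(u))) / length(u))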

Fix Typo: Chpt. 15, Hamiltonian MC

Suggest Correction
Pretty sure eq. (4) should read \dot{p} = -dH/dx; currently the left-hand side of this equation is missing (making it look like dH/dp = -dH/dx).
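For reference, Hamilton's equations in full, which the corrected eq. (4) matches:

$$ \dot{x} = \frac{\partial H}{\partial p}, \qquad \dot{p} = -\frac{\partial H}{\partial x} $$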

Identify
Chapter 15, the section on Hamiltonian Monte Carlo.

derivation mistake in note 11

Suggestion
derivation mistake on ODE adjoints

Identification
Notes 11

Correction

$$ \int_{t_{0}}^{T}\lambda^{\ast}\left(s^{\prime}-f_{u}s-f_{p}\right)dt =\int_{t_{0}}^{T}\lambda^{\ast}s^{\prime}dt-\int_{t_{0}}^{T}\lambda^{\ast}\left(f_{u}s-f_{p}\right)dt $$

should be

$$ \int_{t_{0}}^{T}\lambda^{\ast}\left(s^{\prime}-f_{u}s-f_{p}\right)dt =\int_{t_{0}}^{T}\lambda^{\ast}s^{\prime}dt-\int_{t_{0}}^{T}\lambda^{\ast}\left(f_{u}s+f_{p}\right)dt $$

and that affects the subsequent derivations.


Deprecated Flux fields - Lecture 3

Flux has been updated since the contents of lecture 3 were created. The notes should be updated to reflect the new field names, etc. (e.g., for x of type Dense, x.W and x.b are deprecated in favor of x.weight and x.bias).
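For reference, a quick sketch of the renamed fields on a current Flux version:

using Flux
d = Dense(10 => 32, tanh)
d.weight   # the 32×10 weight matrix (previously d.W)
d.bias     # the length-32 bias vector (previously d.b)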

Flux not precompiling when running Julia v1.10

Suggestion

Flux is not precompiling when run with Julia 1.10.

Identification

Lecture 03

Getting Started with Machine Learning: Adding Flux
As well as other cells which are dependent on Flux

Correction

I'm not actually sure exactly what the solution to the problem is, but the correct behaviour is for Flux to import properly and not throw an error 😉


Chapter 8: The title "Array of Structs" should probably be "Struct of Arrays" instead

Suggestion

The subtitle reads "Array of Structs Representation", but the section describes using a struct-of-arrays representation instead. The title should probably be changed to "Struct of Arrays Representation".
Identification

  • Chapter 8
  • Section: "Array of Structs Representation"

Correction

The subtitle reads "Array of Structs Representation", but the first sentence is "Instead of thinking about a vector of dual numbers, thus we can instead think of dual numbers with vectors for the components", which describes a struct-of-arrays (SoA) representation. Also, the \hdots in the equation just after this paragraph is not rendering in my browser (Firefox 116.0.1).
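For context, a minimal sketch of the struct-of-arrays dual number the section is describing (simplified; the names are illustrative, not necessarily the chapter's): the value stays a scalar while the partials for all input directions are packed into one vector, instead of holding a vector of scalar duals.

using StaticArrays
struct MultiDual{N,T}
    val::T                  # one primal value
    partials::SVector{N,T}  # N partial derivatives, one per input direction
end
Base.:+(a::MultiDual{N,T}, b::MultiDual{N,T}) where {N,T} =
    MultiDual{N,T}(a.val + b.val, a.partials + b.partials)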

Two code errors in Lecture 3 Notes (probably due to Flux library changes)

Suggest Correction

  1. Line 378 in https://github.com/SciML/SciMLBook/blob/master/_weave/lecture03/sciml.jmd
NN[1].weights # The W matrix of the first layer

produces ERROR: type Dense has no field weights:

I think the line should read

NN[1].weight # The W matrix of the first layer

(weight instead of weights)

  2. Line 384, also in https://github.com/SciML/SciMLBook/blob/master/_weave/lecture03/sciml.jmd
p = params(NN)

produces ERROR: UndefVarError: params not defined

I don't think Flux exposes params via the using Flux command any more, so I think the line should now read

p = Flux.params(NN)

Both errors were produced running the code from https://book.sciml.ai/notes/03/.

Possible mistake in lecture 5 code

Hi,

I think there is a mistake in lecture 5 notes, in the "Multithreaded Parameter Searches" section, in the following code:

const _u_cache_threads = [Vector{typeof(@SVector([1.0,0.0,0.0]))}(undef,1000) for i in 1:Threads.nthreads()]
function compute_trajectory_mean5(u0,p)
  # u is automatically captured
  solve_system_save!(_u_cache_threads[Threads.threadid()],lorenz,u0,p,1000);
  mean(_u_cache)
end
@btime compute_trajectory_mean5(@SVector([1.0,0.0,0.0]),p)

After solving the system, the iterations are stored in _u_cache_threads[Threads.threadid()], but the mean is then computed on _u_cache, which, as far as I understand, is no longer used at this point in the notes.
It can be deceiving, since serial_out - threaded_out is still all 0s; but if you check the output of either serial_out or threaded_out, the means for different ps all have the same value.

I think it should be mean(_u_cache_threads[Threads.threadid()]); is that correct?
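For reference, a sketch of that fix applied to the quoted function, assuming the lecture's surrounding definitions of solve_system_save!, lorenz, and _u_cache_threads:

function compute_trajectory_mean5(u0, p)
    cache = _u_cache_threads[Threads.threadid()]  # this thread's own buffer
    solve_system_save!(cache, lorenz, u0, p, 1000)
    mean(cache)  # take the mean of the buffer we just filled, not _u_cache
end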

(I'm not actually taking the course; I'm not an MIT student, I'm studying from the lecture notes in this repo.)

Giacomo Randazzo
