parcels's Issues

Field gradient calculator

Getting the spatial gradient of a field with arbitrary lat/long spacings is non-trivial, since the sample distance in the x direction varies with latitude.

I'm currently coding a static method that takes a field and calculates this gradient using central differences, with forward/backward differences at the edges and between NaN/real values.

I suggest that each Field object have a .gradient(time) method that calls this static method to return an equivalent gradient field, which can then be stored as a new Field instance if required. Any other ideas?
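
For illustration, a minimal numpy sketch of the central/one-sided difference idea with the latitude-dependent zonal spacing (names and signature are illustrative, not the proposed Field API, and the NaN-boundary handling mentioned above is not included):

import numpy as np

R_EARTH = 6371000.0  # metres

def field_gradient(data, lat, lon):
    """Gradient of a 2D field on a lat/lon grid, in field units per metre.

    data is shaped (lat, lon); lat and lon are 1D coordinate arrays in degrees.
    np.gradient uses central differences in the interior and one-sided
    differences at the edges.
    """
    # Metres per degree: constant meridionally, shrinking with cos(lat) zonally
    dy_m = R_EARTH * np.pi / 180.0
    dx_m = dy_m * np.cos(np.deg2rad(lat))[:, np.newaxis]

    dFdlat, dFdlon = np.gradient(data, lat, lon)  # per degree, axis order (lat, lon)
    return dFdlon / dx_m, dFdlat / dy_m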

calling pset.add in JIT stops particle advection in output

Hit a very strange bug in JIT mode: as soon as pset.add is called, particles are no longer advected in the animation or in the output file.

See the example_delay_start_pset_add branch for an example of this behaviour.

Interestingly, adding debugging print statements to AdvectionRK4 does indicate that the particles are still advected within the C code.

My best guess at what's happening is that the Python ParticleSet no longer gets updated after a call to pset.add, so there is no transfer of data from the C code back to the Python timeleaps loop. Perhaps a memory allocation issue?
No idea, though, how to fix it...

Accidental Merger

I accidentally clicked the wrong button in GitHub Desktop and merged my mask_off_grid_particles branch into master! Very sorry. I am trying to undo this and apologize for any inconvenience this causes.

Depth-less netCDF fields

Hi gang,
As I'm now importing fields from more new ocean/ecosystem models, I'm coming across the issue of fields that have no depth dimension, i.e. that just represent some variable integrated through the water column at a lat/long position. I suspect that we will use these kinds of fields more and more going forward: habitat maps, fields that give the starting densities of particles (another feature I'd like to try and code up), and perhaps even our own internal particle density field that we have discussed using to simulate local interactions.

Correct me if I'm wrong, but from what I can tell at the moment, field.py expects four-dimensional datasets (even though we only advect in 2D), structured as [time, depth, lat, lon].
From field.py:
# Pre-allocate grid data before reading files into buffer
data = np.empty((time.size, 1, lat.size, lon.size), dtype=np.float32)
tidx = 0
for tslice, dset in zip(timeslices, datasets):
    data[tidx:, 0, :, :] = dset[dimensions['data']][:, 0, :, :]
    tidx += tslice.size
It appears that we expect four dimensions but only take the top depth slice anyway.

A quick fix for me is to add a depth dimension of length 1 to these "2d" netCDFs, but perhaps we want to allow PARCELS to deal with such fields explicitly?
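
For reference, a minimal sketch of the quick fix mentioned above: insert a singleton depth axis after reading, so downstream code sees the expected [time, depth, lat, lon] shape. The filename and variable name here are purely illustrative:

import numpy as np
from netCDF4 import Dataset

# Hypothetical depth-less file with a variable shaped (time, lat, lon)
with Dataset("column_integrated_habitat.nc") as nc:   # illustrative filename
    var = nc.variables["habitat"][:]                  # (time, lat, lon)

# Insert a depth axis of length 1: shape becomes (time, 1, lat, lon)
var4d = var[:, np.newaxis, :, :]
print(var4d.shape)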

Implement vertical velocities

Currently, parcels can only advect in 2D (horizontal) flow. However, for many applications particles will need to be tracked in 3D.

This is quite a critical piece of functionality; once we have it, it will allow much more uptake in the physical oceanographic community.

This will require an extension of the advection functions, but should otherwise be relatively straightforward?
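
As a rough illustration (not the parcels kernel API), a plain-Python sketch of what adding the vertical dimension to a forward-Euler advection step could look like; u, v and w are hypothetical interpolator callables returning velocities in m/s:

import numpy as np

def advect_euler_3d(lon, lat, depth, time, dt, u, v, w):
    """One forward-Euler step extended with a vertical velocity w.

    u, v, w are hypothetical callables evaluated at (time, depth, lat, lon);
    lon/lat are in degrees, depth in metres.
    """
    deg_per_m = 1.0 / (6371000.0 * np.pi / 180.0)
    lon = lon + u(time, depth, lat, lon) * dt * deg_per_m / np.cos(np.deg2rad(lat))
    lat = lat + v(time, depth, lat, lon) * dt * deg_per_m
    depth = depth + w(time, depth, lat, lon) * dt
    return lon, lat, depth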

Lagrangian Diffusion

While a random walk approximates Eulerian diffusion in a discretised framework when diffusivity is uniform, things are not so trivial when the diffusion field is spatially variable. Getting these kernels right is a critical part of the tuna project milestone (and of any other application that wishes to diffuse particles), so I've been working away on this (which is why I've been quiet for the last week or so!)

I will be finishing up a PR soon that uses the gradient and density additions from #67 and #62.
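
For context, a one-dimensional sketch of the standard correction for spatially variable diffusivity (often attributed to Visser, 1997): a deterministic drift term proportional to the diffusivity gradient, with the diffusivity sampled at the offset location. K and dKdx are hypothetical callables, and this is not the kernel going into the PR, just the underlying idea:

import numpy as np

def diffuse_1d(x, dt, K, dKdx, rng=np.random):
    """One corrected random-walk step with spatially variable diffusivity K(x).

    The drift dKdx(x)*dt and the evaluation of K at x + 0.5*drift keep the
    walk consistent with the Eulerian diffusion equation when K varies.
    """
    drift = dKdx(x) * dt
    R = rng.standard_normal()  # zero mean, unit variance
    return x + drift + R * np.sqrt(2.0 * K(x + 0.5 * drift) * dt)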

grid.from_netcdf does not efficiently allocate memory for models with many time snapshots

The filebuffer currently used does not appear to be very efficient when a model has a large number of time snapshots.

Take for example the Globcurrent data. Running memory_profiler on it shows that if all files are used (i.e. 'U': "examples/GlobCurrent_example_data/20*.nc") then the total memory footprint of the grid is 12.5 MiB.

However, if only the January data is read in (i.e. 'U': "examples/GlobCurrent_example_data/200201*.nc"), then the total memory footprint of the grid is 2.5 MiB.

This becomes more problematic with much larger grids. For example, the 0.1 degree global OFES grid (3600 * 1800 grid points in each snapshot) takes 70 MiB for one snapshot, but if the filebuffer is given 10 snapshots it requires 512 MiB. And if it is given 73 snapshots (a year's worth, at 3-day intervals), the memory footprint of the grid is 5,075 MiB.

For Parcels to be useful for long experiments on large grids, it needs to be smarter with memory allocation. For example, the CMS always has only three consecutive snapshots of the velocity fields in memory, and shifts them through as time runs forward.
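
A minimal sketch of that CMS-style idea, assuming a hypothetical load_snapshot(tidx) callable that reads a single (lat, lon) slab from disk; only three slabs are ever resident, whatever the number of snapshots on disk:

class RollingFieldBuffer:
    """Keep only three consecutive time snapshots in memory."""

    def __init__(self, load_snapshot, start_idx=0):
        self.load = load_snapshot
        self.idx = start_idx
        self.slabs = [self.load(start_idx + i) for i in range(3)]

    def advance(self):
        """Shift the window one snapshot forward, reading exactly one new slab."""
        self.idx += 1
        self.slabs = self.slabs[1:] + [self.load(self.idx + 2)]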

new feature: plot velocities in full vector form

I wrote a new method that utilizes the Basemap.quiver module to produce vector plots of the velocity data. The usage of Basemap necessitates that mpl_toolkits be added to the requirements list. The method, called show_velocity(), belongs to the ParticleSet class and relies on the temporal_interpolate_fullfield method to find the velocities over the domain at any arbitrary time. It produces plots of the normalized velocity vector field colored by speed.

show_velocity() has the following arguments:
t: datetime or timedelta object that specifies time at which to plot velocity
land: boolean that is true iff land should be drawn on plot
latN, latS, lonW, lonE: floats that delineate the geographic domain to be plotted.

The user-specified domain is projected onto the velocity grid (using a new nearest_index function). This feature allows the user to be ignorant of the precise values of the velocity grid's lat/lon coordinates. If no domain is specified by the user, the method defaults to plotting the full domain. The method also plots the current locations of particles in the ParticleSet. Using the GlobCurrent example data set, after instantiating a grid and pset, calling pset.show_velocity(t=datetime(2002, 1, 2), land=True) produces the following sample output:
[image: globcur_sample]
By alternating between advecting the particles and saving velocity plots, one can create very useful animations:
[animation: animation_fast]

The new code is over on my fork along with a file called example_vector_plots.py that is located in the example folder. This file demonstrates the features of the new method with the MovingEddies and the GlobCurrent example datasets. If you are interested in including this feature, the master branch on my fork is ready to be merged with the main fork. I am looking forward to hearing any suggestions or feedback!

Allow for 'looping' of velocity grid files

One feature that oceanographers might want when they use PARCELS is the ability to 'loop' velocity fields, i.e. to go back to data[:,0] after reaching data[:,-1]. This allows them to run particles over much longer time scales than they have velocity data for.

Probably easiest to do this with an extra option to the .execute call.
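
The wrap-around itself is just a modulo on the lookup time; a minimal sketch (function name and units are illustrative only):

def looped_time(t, t0, period):
    """Map an arbitrary simulation time onto the available data window.

    t0 is the first time in the velocity data and period the total span
    covered by the files; times beyond the last snapshot wrap back to t0.
    """
    return t0 + (t - t0) % period

# e.g. with one year of data, day 400 samples the velocities of day 35
print(looped_time(400 * 86400.0, 0.0, 365 * 86400.0) / 86400.0)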

API for time-coupling / complex sub-timestepping

Various features in parcels need to adhere to a specific sub-timestepping, such as file I/O, animation and deriving particle densities. This becomes quite complex once we start having more than one particle set in a simulation, say a predator-prey pair with different file I/O frequencies, a global animation frequency and additional synchronisation between the sets (lockstepping). We need to (at least conceptually) define an API that allows users to specify these things easily and concisely, especially the co-execution of two particle sets in some lockstepping.

One suggestion would be to set/store the file I/O frequency on the ParticleFile object, and similarly devise a Viewer class of some kind that performs live animations and stores a single global animation frequency. Similarly, diagnostic density fields might define their own update frequency, but I am open to suggestions for how we should define things for the co-execution of two ParticleSets.

Once these components all have their own time-stepping frequency, a sort of event registry framework might be used to allow the runtime (currently ParticleSet.execute()) to derive the appropriate interval and invoke the corresponding event, i.e. file I/O, once that interval has elapsed.

As always, any thoughts and suggestions are very welcome.

Add capacity to run in time-backward mode

For the GMD manuscript, we will need functionality to run particles in time-backward mode. As for the integration, this can be done by simply negating the velocity fields. The trick is to also handle the input fields in reverse order.
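
A minimal numpy sketch of those two steps, assuming a (time, depth, lat, lon) velocity array and a monotonically increasing time axis (names are illustrative, not a proposed API):

import numpy as np

def reverse_field_in_time(data, time):
    """Prepare a velocity array for backward-in-time tracking.

    Negating the velocities reverses the direction of integration; flipping
    the time axis makes the snapshots arrive in reverse chronological order,
    relabelled so that the reversed times still increase.
    """
    data_bwd = -data[::-1, ...]
    time_bwd = time[-1] - time[::-1]
    return data_bwd, time_bwd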

issues running AdvectionRK4 with JIT particles

In the ipython tutorial, using JITParticles with AdvectionRK4 works fine for me with the first NEMO formatted example. At the end of the tutorial, I tried changing the netCDF example's pset from Particle datatype to JITParticle datatype. If I try to execute the AdvectionRK4 method now, the kernel stalls after printing the Compiled JITParticleAdvectionRK4 ==> ... output. I converted the notebook to a python file and repeated the test. My kernel stalled again after printing the compiled line. Any thoughts on what might be going on here? Is this particular to my local setup/installation or perhaps already known to be an issue? Thanks in advance for your help.

Time-varying tracer fields in plotting script

The visualisation script currently cannot handle time-varying tracer fields. Fixing this will require interpolation between the individual frames of tracer data. Since this can be quite a computationally intense process, depending on your tracer field, we might also want to start thinking about creating permanent output, i.e. plot and video files, to avoid having to recompute this.

Add along-trajectory values of 'other' fields to netcdf output

When a user adds another field, such as temperature, salinity, pressure or anything else, it would be good if, by default, the interpolated values at each of the particle locations were added to the netCDF file. This is often very useful information for post-processing analysis.

recent hot fix of field interpolator broke field plotting

The recent hot fix, designed to address the out-of-bounds problem #85, modified the interpolator1D() function in field.py. In particular, the hot fix introduced some new conditionals. In the case where x is None and y is None, interpolator1D() attempts to return the variable val without first assigning it. I assume that the val = f0 + (f1 - f0) * ((time - t0) / (t1 - t0)) line was just forgotten for this case. Without that line, plotting fields crashes (e.g. in the ipython tutorial). I submitted a pull request #90 with this fix to the original hot fix.

KernelOp doesn't work in scipy with RK45

While working on Issue #80, I found a strange bug where Kernel additions (AdvectionRK45 + UpdateP) don't work in scipy mode when using the AdvectionRK45 kernel.

The AdvectionRK4 and AdvectionEE kernels work, and everything works in JIT; out of the six possible permutations of modes and advection kernels, it's only RK45/scipy that doesn't work.

I've created a branch fix_scipyRK45KernelOps (commit ae88f19) to highlight this error. @mlange05, could you have a look at it when you have time?

What to do if particles are on land?

We need to decide how to treat particles that are on land. @Abobie has worked hard on the boundaries branch, where particles are 'pushed back' using ghost points when they come very close to land. However, there can still be instances where particles end up on land, for example when the timestep is so large that particles step over multiple grid cells in one timestep, or if random-walk diffusion is added.

My thinking is that at the end of the kernel, we need a function that, for each timestep, assesses whether a particle is on land (and whether it has gone out of the domain, see #47). If that function returns True, the user should have a few options:

  1. Remove the particle (to simulate beaching)
  2. Try computing the particle position again (essentially redrawing the random number in the diffusion; this should probably be the default behaviour)
  3. Move the particle to the closest ocean point (might be difficult/expensive to compute, though)

@Jacketless, you have been thinking about this too for your diffusion steps, right? Any comments?

And @mlange05 what do you think about a general test at the end of each particle timestep to accept/reject the move in that timestep? Would that be feasible?
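
To make the accept/reject idea concrete, a minimal sketch of what such an end-of-timestep check could look like; on_land and redraw are hypothetical callables (e.g. a landmask lookup or a NaN check on the interpolated velocities, and a re-draw of the diffusion step), not existing parcels functions:

def end_of_step_check(new_pos, on_land, redraw, max_retries=10):
    """Accept or reject a particle move at the end of a timestep.

    Returns the accepted (lon, lat) position, or None to signal that the
    particle should be removed (option 1: beaching). Option 2 is handled
    by redrawing the step up to max_retries times.
    """
    pos = new_pos
    for _ in range(max_retries):
        if not on_land(*pos):
            return pos
        pos = redraw()  # recompute the step with a fresh random number
    return None         # give up: treat the particle as beached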

Update setup.py script

The setup.py script in the repo root is fairly outdated. Now that parcels is becoming bigger and installation is becoming more subtle, we should update this script.

Things that setup.py should do

  • check for proper versions of netcdf etc
  • try to install matplotlib and Basemap (and fail gracefully)
  • pip install requirements
  • run the pull_data script (with the option not to do this if on low-bandwidth, perhaps by asking users whether they want to download the auxiliary data)
  • set the PYTHONPATH to include parcels

Multi-file parsing in grid.py does not seem to work

It seems that the multi-file parsing in grid.py does not work. Even when multiple files are read in the .from_netcdf classmethod, only the last one is returned.

I've made a new branch with a very simple test function to show the issue:

erik:~/Codes/PARCELScode] python tests/test_multi_filename.py 
Generating NEMO grid output with basename: multi_filename0
Generating NEMO grid output with basename: multi_filename1
Generating NEMO grid output with basename: multi_filename2
Generating NEMO grid output with basename: multi_filename3
Generating NEMO grid output with basename: multi_filename4
Generating NEMO grid output with basename: multi_filename5
Grid.time as computed in .from_netcdf [      0.   86400.  172800.  259200.  345600.  432000.]
Grid.time as returned by .from_netcdf [ 432000.]

Moving eddies test velocity field looks different depending on grid size

I noticed a while ago that the moving eddies test works well for its default grid size, but when you change the grid size the entire fields change; see the pictures below. I thought I'd just fix it myself, but still haven't, so I thought I might as well post an issue in case anyone else wants to have a look at it before I get around to it. Or maybe someone already knows what's causing this?
[image: eddiesu1] U field with default grid size, (200, 350)
[image: eddiesu2] U field with different grid size, (100, 200)

Efficient code for computing particle densities

One thing PARCELS will need to be able to do is efficiently compute particle densities, given the positions of N particles. As we don't want to solve an N-body problem, we'll need some smart data structure.

I chatted to @pwolfram today, and he suggested k-d trees: https://en.wikipedia.org/wiki/K-d_tree
This does indeed seem like what we might need.

No huge rush, and perhaps we don't even need this for the GMD milestone. But I wanted to open an issue on it anyway, so that we know this might be the way forward

Check that particles remain within grid domain

Currently, there is no check on whether particles stay within the grid domain. For example, setting the advection time in test_peninsula to time = 24 * 3600. * 100 will cause the particles to be moved well beyond the domain boundary

Now, it is not clear what the intended behaviour should be when a particle reaches the domain boundary. Should it stop? Should it 'die'?
And what about global domains that are periodic (i.e. where the western edge directly connects to the eastern edge, as on a globe)? CMS uses an option here to flag whether the domain is periodic or not.

Raising the issue now so that we can think about this

RK4 time dependence

I noticed that when taking a step with the RK4 method in particle.py, it uses the velocities at different points in the grid as it should, but always at the starting time of the step. In Python I could easily adjust it to take velocities at t=time+dt/2 or t=time+dt where it should, but JIT still uses only time as an argument when calling temporal_interpolation_linear.
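
For reference, a minimal sketch of the classic RK4 step showing which time each stage should use (u and v are hypothetical interpolators returning velocities already converted to degrees per second; this is not the particle.py code itself):

def rk4_step(lon, lat, time, dt, u, v):
    """Classic RK4 step: intermediate stages at time + dt/2, final at time + dt."""
    u1 = u(time, lat, lon);            v1 = v(time, lat, lon)
    lon1, lat1 = lon + 0.5*dt*u1, lat + 0.5*dt*v1
    u2 = u(time + 0.5*dt, lat1, lon1); v2 = v(time + 0.5*dt, lat1, lon1)
    lon2, lat2 = lon + 0.5*dt*u2, lat + 0.5*dt*v2
    u3 = u(time + 0.5*dt, lat2, lon2); v3 = v(time + 0.5*dt, lat2, lon2)
    lon3, lat3 = lon + dt*u3, lat + dt*v3
    u4 = u(time + dt, lat3, lon3);     v4 = v(time + dt, lat3, lon3)
    lon_new = lon + dt / 6.0 * (u1 + 2*u2 + 2*u3 + u4)
    lat_new = lat + dt / 6.0 * (v1 + 2*v2 + 2*v3 + v4)
    return lon_new, lat_new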

Also, I'm still trying to get used to using github... Is there an easy way for me to push just the changes I made in particle.py? I'm currently working in the numerics branch I made, and I have things there that I don't want to push along with it yet. I'm not even sure if I should have added the analytical eddies test to the branch or just kept it locally to play with...

Find a place to host large (GBs) example ocean data sets

We need to find a place to host a few of the very large data sets that are often used to run the particles in more realistic ocean simulations. For example, one month of OFAM data is almost 30GB.
Options include the Imperial Box service, some sort of Amazon EC2 storage, or another server somewhere

Data needs to be accessible from outside the firewall and without credentials. Also, it would ideally support LDAP, so that downloading can be scripted.

reference to peninsula case needs updating

The reference to the original peninsula testcase (in the comments of peninsula.py) should be:

"""Grid representing the flow field around an idealised peninsula.

The original test description can be found in Fig. 2.2.3 in:
North, E. W., Gallego, A., Petitgas, P. (Eds). 2009. Manual of recommended practices 
for modelling physical - biological interactions during fish early life. 
ICES Cooperative Research Report No. 295. 111 pp.
http://archimer.ifremer.fr/doc/00157/26792/24888.pdf"""

Allow circumpolar/periodic movement for particles on global grids

On a global grid, a particle that exits on the eastern boundary should reappear on the western boundary; i.e. the grid should allow periodic motion.

Easiest to do this with the %360 operator, but I'm not sure if this should be implemented throughout the entire code by default, or only if the grid is predefined to be global and/or circumpolar.

And while this is relatively straightforward for the zonal dimension (east-west), it is much harder in the north-south direction: particles will need to be transferred across the North Pole (there is no ocean at the South Pole, so no need to worry there ;-)
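
The zonal wrap really is just a modulo; a minimal sketch (the function name and the choice of western edge are illustrative only):

def wrap_lon(lon, west=-180.0):
    """Wrap a longitude into [west, west + 360), e.g. [-180, 180) or [0, 360)."""
    return west + (lon - west) % 360.0

print(wrap_lon(185.0))          # -175.0
print(wrap_lon(-5.0, west=0.0)) #  355.0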

Interpolation at coastlines

I just started looking into the problem with boundary values, and I don't really remember/understand what the problem is... I tried setting the top-right quarter of the peninsula grid to NaN. I used a grid size of (20, 20) and let it run for a bit longer (36 hours), and got the results displayed in the picture below (the background is the U field). Now, could you please explain to me again what the problem is here? I was under the impression that if some values in the grid were NaN it would mess up the entire grid, but I think the results look pretty good. It even manages to do some nice interpolation when the particles are in a grid cell between NaN and non-NaN values, like the cyan particle.
[image: interpolation]

Initiate particleset particle positions from field

Last week I roughly coded up the ability to define the initial starting positions of the particles in a particle set from a distribution specified in one of the grid fields. I'd like to clean that code up and commit it. Does anyone have alternative ideas about how to do this?
At present, users specify the name of the field that they wish to use in place of the usual starting lat/lon numerical arguments, and the ParticleSet then tries to distribute the requested number of particles accordingly.

Parcels crashes if too many files open

The code doesn't seem to work when too many files are open at the moment.

Do a git pull on the latest parcels-examples code, and then run python examples/test_globcurrent.py. This will give the netcdf error message RuntimeError: Too many open files

The problem is that we currently read in all data before we start running particles; this can become an enormous amount of data very quickly (the total size of the GlobCurrent data set is > 200GB)

Ideal is to read in data 'on the fly', I guess. But this might require (yet again) a large rewrite of the code?

A simpler fix that might go a long way for now is to close files after we're done with them?

pset.add and netcdf output; alternative output option in plain csv?

As I've started to work more with Parcels for my own research, I've run into a serious problem with the netcdf output: it doesn't work as soon as you start adding/deleting particles...

The netcdf library assumes that there is only one dimension that can vary (which we use for time), and that all other dimensions need to be set on initialisation of the file.
This means you can't add particles to the file, or remove them, during a .execute if you also want to write them away.

Now, in CMS we give the option to output in simple csv/ascii format for exactly this reason; i.e. every time a .write is called, we would simply add all particle locations and variables to the end of a txt file. That file will be ordered in time, rather than ordered by particle (see the sketch after the list below). So we will need to:

  1. provide an alternative write routine in ascii
  2. give every particle a unique ID (so that it can be tracked within the file)
  3. create a tool to postprocess the time-ordered file into a more convenient format for analysis after one is done with particle tracking
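
A minimal sketch of item 1, assuming particles carry hypothetical .id, .lon and .lat attributes; because rows are simply appended per write call, adding or deleting particles between writes poses no problem:

import csv

def write_particles_csv(path, time, particles):
    """Append one time-ordered block of particle records to a csv file."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for p in particles:
            writer.writerow([time, p.id, p.lon, p.lat])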

Parallel execution

Now that we are homing in on what Parcels will be and how one uses it, we should start considering parallel execution. I propose that we aim to add automatic MPI parallelism following the key ideas outlined in the workshop. Ultimately the user should only be required to execute mpiexec -n num_proc python my_parcels_script.py. The key ingredients for this would be:

  • Grids need to compute a parallel partitioning in parallel.
  • Particle serialisation is required to send small particle groups to neighbouring partitions via MPI.
  • We need to detect once a particle enters a "halo" region to trigger sending. This avoids global communications, which otherwise would impede scalability.

Ideally we would implement this for both modes equally, but depending on feasibility I would prioritise C/JIT. A pre-requisite for the particle serialisation would be more dynamic data structures for the particle sets, since we would need to add and remove particles frequently on local processes.

requirements.txt also needs py

While trying to get the peninsula testcase running, I found that I also needed to pip install py. I'm not sure what the minimum required version is, though, so I couldn't add it myself directly.

Odd field sampling

I've been getting some strange results when sampling from non-U/V forcing fields, and I'm not sure where they are coming from.

I have a Field object with data that I have checked is > 0 in all cases. Occasionally, however, when a kernel samples this field at continuous [time, lon, lat] values, I'm given a negative number by the interpolator. This can still occur when I remove the RK4 temporal interpolation and use discrete time.

Is there a way that the RectBivariateSpline function can somehow be returning negative numbers? Perhaps it is interpolating with null cell values present at some points in the field...

'scipy' mode argument not causing moving_eddies to run in pure python

I'm calling np.random functions in a RandomWalk function, as discussed in issue #27, and trying to use "pure python" mode for the moment as these functions are not implemented in the compiler.

However, when from the terminal I run
pythonw2.7 tests/test_moving_eddies.py scipy --grid 200 200 -v -p 50
I still get the error
File "<ast>", line 260, in AdvectionRK4RandomWalk
NameError: global name 'np' is not defined

which does not make sense, because I believed that calling mode 'scipy' did not even generate an AdvectionRK4RandomWalk C function, let alone fail to understand the np shorthand for numpy.
If I force the default mode to be scipy rather than jit, I get the same error without the scipy command-line argument, so this must be something else...

Probability Distribution Objects

To keep things polymorphic for some of the planned behavioural functions I will be coding, it's going to be useful to have objects representing different types of distributions that can be passed as arguments. These distributions can be used to randomly draw values for things like step-lengths, turning angles, behavioural switching etc. and take parameters that have been calculated from quantitative studies on real observed behaviour.

Scipy has full PDF objects (like this), so eventually we could use these, but I guess this would involve writing lots of conversion code for the JIT side of things (see issue #27)?

For the moment though, is there an issue with me writing a simple container object that just holds the 'name' of the distribution (so that it can be found and called from the simpler numpy library), along with whatever parameters it's going to take?
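
A minimal sketch of such a container, holding only the name of a numpy.random sampler plus its parameters (class and attribute names are illustrative, not a proposed API):

import numpy as np

class Distribution:
    """Container for the name of a numpy.random sampler and its parameters.

    Example: Distribution('normal', loc=0.0, scale=2.5) draws from
    np.random.normal(loc=0.0, scale=2.5).
    """
    def __init__(self, name, **params):
        self.name = name
        self.params = params

    def draw(self, size=None):
        return getattr(np.random, self.name)(size=size, **self.params)

# e.g. step lengths from a power distribution, turning angles from a normal
step_lengths = Distribution('power', a=1.5)
print(step_lengths.draw(size=3))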

RK4/Jit problem in GlobCurrent data

Simple advection now works for scipy in the GlobCurrent data, but somewhat surprisingly, JIT doesn't work at all. The velocities are way off, and particles simply get swept outside the domain very quickly.

I have made a test script in the parcels-examples/tree/globcurrent_support branch, but because we don't have Travis enabled on the parcels-examples repository, I had to do it in the __main__.

So you'll see that python examples/test_globcurrent.py works for scipy, but not for jit

Re-organise tests and examples

Now that we have a separate examples sub-repo we should start caring about the way we test internal functionality and how we demonstrate it by separating low-level feature and unit tests from full-scale example setups. The current test setups are all small examples and should thus be migrated to the examples repo eventually. In their place we require a whole range of additional low-level feature tests that cover the basic functionalities individually, for example:

  • Grid and field initialisation from file using different formats
  • Internally provided kernels, such as AdvectionRK4/EE, should be tested against known solutions
  • Various constructs of our internal JIT kernel language
  • Particle trajectory and data output

As we add more and more features and cover a wider range of formats, this list should grow, while the examples repo will be used to maintain a useful set of setups that demonstrate complete scenarios.

JIT cos and pi

When adjusting RK to work differently in the latitudinal and longitudinal directions, I'm using np.cos and np.pi; it would be nice to have these translated to C by the JIT.

Bug in field object time concatenation

I believe I've found a bug in the time concatenation from multiple files when initialising a Field object from Field.from_netcdf().

When individual netcdf files contain single time slices (i.e. the time dimension is of length one) the code works, but when each netcdf contains multiple time-slices (e.g. perhaps one month of data), the line
data[tidx:, 0, :, :] = filebuffer.data[:, :, :]
throws an error when attempting to assign the time slices into the full array for the entire time series. I'm not quite sure why it works in the case of single time indices, but it does!

I have a working example, and a fix, but it requires some example netCDF files. What is the best way to demonstrate this bug?
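
For discussion, a sketch of one plausible fix (an assumption on my part, not necessarily the fix in the working example above): bound the left-hand time slice to the size of the current file's block, rather than using the open-ended data[tidx:, ...], which only happens to broadcast when each file holds a single time slice:

import numpy as np

def concatenate_time_slices(filebuffers, timeslices, nlat, nlon):
    """Hypothetical sketch: assign each file's block into a bounded time slice.

    filebuffers is an iterable of objects whose .data is shaped (time, lat, lon);
    timeslices gives the time values contained in each file.
    """
    ntime = sum(ts.size for ts in timeslices)
    data = np.empty((ntime, 1, nlat, nlon), dtype=np.float32)
    tidx = 0
    for tslice, filebuffer in zip(timeslices, filebuffers):
        data[tidx:tidx + tslice.size, 0, :, :] = filebuffer.data[:, :, :]
        tidx += tslice.size
    return data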

Improving out-of-bounds check: meta-data on location

Following the merging of PR #122, there is still the outstanding issue that in JIT mode, OutOfBounds errors do not propagate the exact sampling location that created the error, as SciPy mode does, but only the location of the particle at the time. Adding that would require dynamically allocating memory for meta-information.

This would be a key feature for allowing periodic boundary conditions on the different sides of the domain: we want particles that exit on the right (east) to enter on the left (west), but particles that exit on the left have to enter on the right.

Include numpy probability distribution sampling functions in code generator

It would be great to be able to use the numpy.random distribution functions for random walks and other behaviours, but I believe they are not yet supported in the JIT code generator. In particular random.uniform, random.normal and random.power would be pretty sweet to be able to use.

(Not sure if raising an issue is appropriate without having discussed first, but I guess this is one way to get to know our github workflow!)

Allow default values of particle variables to be variables themselves

In the notebook tutorial, we highlight the use case of a custom ParticleClass for calculating distance travelled. However, this currently still requires an __init__ statement, as the prev_lon Variable cannot be defaulted to lon within the declaration of prev_lon (and the same for prev_lat).

Ideally we would be able to do something like

class DistParticle(JITParticle):
    distance = Variable('distance', dtype=np.float32, default=0) 
    prev_lon = Variable('prev_lon', dtype=np.float32, default='lon')
    prev_lat = Variable('prev_lat', dtype=np.float32, default='lat')

While this is almost possible with attrgetter (via default=attrgetter('lon')), this currently fails with the error

/Users/erik/Codes/PARCELScode/parcels/particle.py in __init__(self, name, dtype, default)
     12         self.name = name
     13         self.dtype = dtype
---> 14         self.default = self.dtype(default)
     15 
     16     def __get__(self, instance, cls):

TypeError: float() argument must be a string or a number
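
For discussion, a hypothetical sketch (not the actual parcels code) of how the descriptor shown in the traceback could defer callable defaults such as attrgetter('lon') instead of passing them through dtype() at declaration time:

from operator import attrgetter
import numpy as np

class Variable(object):
    """Sketch: only coerce plain defaults, defer callables to first use."""

    def __init__(self, name, dtype=np.float32, default=0):
        self.name = name
        self.dtype = dtype
        # Callable defaults (e.g. attrgetter('lon')) are stored as-is
        self.default = default if callable(default) else self.dtype(default)

    def initial(self, particle):
        # Hypothetical hook, called once per particle: resolve attrgetter-style
        # defaults against the particle instance here
        return self.default(particle) if callable(self.default) else self.default

prev_lon = Variable('prev_lon', dtype=np.float32, default=attrgetter('lon'))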
