
xhydro's Issues

Problem importing ravenpy

Generic Issue

With the current setup (xhydro 0.3.0, git clone of master then "pip install -e ."), pydantic 2.5.2 gets installed along with xarray 2023.10.1.

However, ravenpy needs pydantic >=1.10.8,<2.0 and xarray >=2022.12.0,<2023.9.9.

When I try to install ravenpy, import ravenpy just returns a pile of pydantic errors, such as TypeError: @validator cannot be applied to fields with a schema of str.

If we can arrange for some continuity between these requirements, we will be able to manage these packages better. I do not know how to fix this kind of problem, so I defer to the experts!

Hydrological modelling via classes or via a dictionary

Addressing a Problem?

Currently, hydrological modelling is handled by a function qsim = run_hydrological_model() that takes a model_config dictionary as input and returns streamflow. This works fine for a simple GR4J-style model, but becomes very complicated very quickly for a more complex model such as Hydrotel or Raven, where a good portion of the parameters and features hide in configuration files.

The location and names of the relevant files (weather inputs, outputs) depend on information spread across a few CSV files.

Potential Solution

Whatever we decide, in Hydrotel's case model_config will need to contain entries such as simulation_options or output_options to allow reading and modifying the CSV files.

Solution 1: Continue with the dictionary approach
The current list of functions is not sufficient; we will definitely need to code additional ones.

  1. qsim = run_hydrological_model(model_config, return_outputs=True) to run the model and return streamflow.
  2. ds_in = get_inputs(model_config) to locate the right input files.
  3. qsim = get_streamflow(model_config) to locate the right file and return streamflow, after the model has been run.

In short, model_config remains a required input everywhere. One concern is that it could be difficult to keep a single generic function, since some models might require additional arguments.
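To make the comparison concrete, here is a minimal sketch of the dictionary-based API with a trivial toy "model"; the helper names mirror the list above, but the implementation is a hypothetical stand-in, not xhydro's actual code.

```python
# Toy sketch of Solution 1 (hypothetical stand-in, not xhydro's real API):
# every helper receives the same model_config dictionary.
def get_inputs(model_config):
    """Locate the input data described by model_config."""
    return model_config["inputs"]

def run_hydrological_model(model_config, return_outputs=True):
    """Run the (toy) model and optionally return simulated streamflow."""
    gain = model_config["parameters"]["gain"]
    qsim = [x * gain for x in get_inputs(model_config)]
    model_config["_last_run"] = qsim  # stand-in for output files on disk
    return qsim if return_outputs else None

def get_streamflow(model_config):
    """Retrieve streamflow from the last run (stand-in for reading files)."""
    return model_config["_last_run"]

model_config = {"model_name": "toy", "parameters": {"gain": 2.0}, "inputs": [1.0, 2.0]}
qsim = run_hydrological_model(model_config)
```

Note how model_config has to travel through every call, which is the main friction point of this solution.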

Solution 2: Implement a class with a predefined set of methods
I see no issue with keeping the model_config approach here. Once the model is initialized, we could have a set of methods similar to Solution 1:

  1. model = HydrologicalModel(model="Hydrotel", model_config=model_config)
  2. qsim = model.run(return_outputs=True) to run the model and return streamflow.
  3. ds_in = model.get_inputs() to locate the right input files.
  4. qsim = model.get_streamflow() to locate the right file and return streamflow, after the model has been run.

Here, model_config is only used once, since its attributes are attached to the class during __init__(). That potentially simplifies the calls to the other methods. It also makes it easier to have parameter lists that differ from one hydrological model to another for methods such as .get_inputs(), although we will probably want to avoid that as much as possible to keep complexity down...
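For comparison, the class-based approach could look roughly like this (again a toy sketch; the attribute handling and method bodies are assumptions, only the method names come from the list above):

```python
# Toy sketch of Solution 2: model_config is consumed once in __init__
# and its entries become attributes of the model object.
class HydrologicalModel:
    def __init__(self, model, model_config):
        self.model = model
        for key, value in model_config.items():
            setattr(self, key, value)
        self._qsim = None

    def get_inputs(self):
        """Locate the input data (here, simply stored in an attribute)."""
        return self.inputs

    def run(self, return_outputs=True):
        """Run the (toy) model and optionally return simulated streamflow."""
        self._qsim = [x * self.parameters["gain"] for x in self.get_inputs()]
        return self._qsim if return_outputs else None

    def get_streamflow(self):
        """Retrieve streamflow from the last run."""
        return self._qsim

model = HydrologicalModel("Hydrotel", {"parameters": {"gain": 2.0}, "inputs": [1.0, 2.0]})
qsim = model.run()
```

After __init__, none of the methods need model_config passed in again, which is the main ergonomic difference from Solution 1.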

Contribution

  • I would be willing/able to open a Pull Request to contribute this feature.

xscen imports

Addressing a Problem?

xscen already has several features that could be reused directly in xhydro, notably for computing indicators (excluding the more advanced frequency-based indicators) and the functions needed for hydroclimatic analyses (climatological_mean, compute_deltas, ensemble_stats, generate_weights, produce_horizon).

Potential Solution

Rather than copy-pasting the code, I propose importing the relevant xscen functions and exposing them in __init__, which means we could, for example, call xhydro.ensemble_stats directly without worrying about the fact that the code lives in another library. The machinery to carry the documentation over to ReadTheDocs also exists.

Additional context

I created a branch so you can see what it could look like: https://github.com/hydrologie/xhydro/tree/indicators/xhydro

Contribution

  • I would be willing/able to open a Pull Request to contribute this feature.

Add necessary hooks to link PRs, Issues and users in documentation

Addressing a Problem?

Links following the format (:pull:`number`) are used throughout a few files, such as HISTORY.rst, to link to previous PRs, Issues, and users. However, the proper Sphinx hooks need to be implemented for them to work.
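One possible implementation (a sketch; the exact URL templates and link captions would need checking against how HISTORY.rst uses the roles) is Sphinx's built-in extlinks extension, configured in conf.py:

```python
# Sketch of conf.py additions using sphinx.ext.extlinks so that
# :pull:`123`, :issue:`45` and :user:`name` resolve to GitHub URLs.
extensions = ["sphinx.ext.extlinks"]

extlinks = {
    "pull": ("https://github.com/hydrologie/xhydro/pull/%s", "PR/%s"),
    "issue": ("https://github.com/hydrologie/xhydro/issues/%s", "GH/%s"),
    "user": ("https://github.com/%s", "@%s"),
}
```

Each entry maps a role name to (URL template, caption template), where %s is replaced by the role's argument.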

Potential Solution

No response

Additional context

No response

Contribution

  • I would be willing/able to open a Pull Request to contribute this feature.

Where to host the data for tests and documentation

Context

For tests and documentation in xhydro, we will need to host some simulated hydrological data somewhere. For now, we anticipate needing at minimum:

  1. One of Hydrotel's 15 regions for a few time steps, to illustrate optimal interpolation. @Mayetea
  2. A small region, but with a long time series and several climate simulations, to illustrate climate change statistics.

A first option would be to put the data in an existing repo, raven-testdata or xclim-testdata, but neither seems like an ideal match.

A second option would be to create an xhydro-testdata repo, either here or under Ouranosinc.

Finally, a third option would be to host this data on xdatasets. However, I do not believe hosting small test datasets is within xdatasets' scope? Given their size, I do not think hosting the complete datasets is realistic either. @sebastienlanglois

Edit: Unless advised otherwise, putting this data directly in xhydro would be ill-advised.

values does not display the right thing for indicators

Setup Information

  • xhydro version: 0.2
  • Python version:
  • Operating System:

Description

I extract a station (I tested 023422 and 090605) and compute an indicator (I tested min and max); the values displayed by
ds_4fa.streamflow_min_annual and ds_4fa.streamflow_min_annual.values are different.

Steps To Reproduce

import xdatasets as xd
import xhydro as xh

ds = xd.Query(
    **{
        "datasets": {
            "deh": {
                "id": ["023422"],
                "variables": ["streamflow"],
                "spatial_agg": ["watershed"],
            }
        },
    }
).data.squeeze().load()

ds["id"].attrs["cf_role"] = "timeseries_id"
ds["streamflow"].attrs = {
    "long_name": "Streamflow",
    "units": "m3 s-1",
    "standard_name": "water_volume_transport_in_river_channel",
    "cell_methods": "time: mean",
}

ds_4fa = xh.indicators.get_yearly_op(
    ds, op="min", missing="pct", missing_options={"tolerance": 0.15}
)

ds_4fa.streamflow_min_annual

ds_4fa.streamflow_min_annual.values

Additional context

No response

Contribution

  • I would be willing/able to open a Pull Request to address this bug.

Using / installing xhydro (or part of it) with pip

Setup Information

On our compute clusters, installing via conda is impossible (and we are potentially not the only ones in this situation). xhydro is therefore currently unusable for us because of ESMPY. Could ESMPY be distributed as a wheel? Or could it be taken out of __init__, with the functions that require ESMPY gathered in a single module?

Context

No response

Add the xhydro planning diagram

Addressing a Problem?

I created a diagram of what we are trying to accomplish with xhydro. This diagram should be included somewhere here.

Potential Solution

A page in the documentation, updated before each new release.

Additional context

No response

Contribution

  • I would be willing/able to open a Pull Request to contribute this feature.

Standardizing and sharing n-dimensional array hydrometric datasets as inputs for XHydro

Addressing a Problem?

While netCDF/zarr and CF standards are widely used for storing and exchanging n-dimensional arrays in the climate sciences, there is presently no comparable standard or specification for n-dimensional hydrometric data (WaterML exists, but it consists of XML files and still requires a lot of processing to use with a modern Python stack).

Furthermore, as we report to diverse organizations, each entity already has its own unique methods for organizing and sharing hydrometric data (e.g., miranda for Ouranos).

To foster collaboration, facilitate development, enable rigorous testing with real data, and enhance the reproducibility of studies conducted with xhydro, substantial benefits can be gained by standardizing hydrometric data and making it universally accessible on the internet through open-source means wherever feasible.

More specifically, this would involve:

  1. Establishing a specification (using xarray) for representing hydrometric data in n-dimensional arrays.
  2. Implementing a continuous updating process for hydrological datasets through DataOps practices.
  3. Storing the data in high-performance servers, accessible to all stakeholders, such as a cloud-based data lake.
  4. Creating a data catalog, leveraging tools like Intake, to facilitate easy querying of the catalog and retrieval of requested data using lazy loading or in-memory.
  5. Enabling the execution of common operations (e.g., filters, selections, geospatial operations) to enhance data analysis capabilities and productivity.
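As a rough illustration of point 1, the kind of layout being discussed might look like the following (a plain-Python description, not the actual xdatasets schema; the attribute names follow CF conventions where possible and are otherwise assumptions):

```python
# Sketch of an n-d hydrometric layout: station id and time as shared
# dimensions, CF-style attributes on each variable, and coordinates
# carrying location, data source and spatial aggregation.
hydrometric_spec = {
    "dims": ["id", "time"],
    "coords": ["id", "time", "latitude", "longitude", "source"],
    "variables": {
        "streamflow": {
            "units": "m3 s-1",
            "standard_name": "water_volume_transport_in_river_channel",
            "cell_methods": "time: mean",   # temporal aggregation
            "spatial_agg": "watershed",     # point (station) or polygon (basin)
        },
    },
}
```

The same skeleton extends naturally to water levels or basin-scale weather variables by adding entries under "variables".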

While it may appear as a significant undertaking, I have already dedicated several months to implementing a solution, drawing upon the advancements achieved in PAVICS/PAVICS-Hydro. I am excited to present what I have so far and seek valuable feedback from experts in the field.

Potential Solution

Here is a simplified overview of the solution currently being developed, which follows an approach to accessing large-scale climate data similar to the one described in this GitHub issue:
[architecture diagram]

  1. Data sources: We have implemented multiple data pipelines for the continuous acquisition of data (mostly daily).
  2. Data lakes: The data is stored in Wasabisys high-bandwidth data lakes. Unlike other cloud providers, data in Wasabisys can be extracted from the cloud (either to a local machine or to other cloud providers) at no cost (no egress or API request fees), which offers a significant advantage in making the data truly accessible and open source.
  3. Data catalog: Intake is utilized to create a data catalog that abstracts all configuration and data access drivers for users, ensuring seamless data access for the end user. Currently, the data catalog resides here (docs). Thanks to the integration of intake plugins, we can directly reference datasets from various sources, including the Planetary Computer and the majority of datasets available in PAVICS from its THREDDS Data Server.
  4. Xdatasets: While it is possible to access data directly through the catalog, the xdatasets library (docs) provides additional capabilities to perform common operations on multiple sites and datasets simultaneously, such as selection, filtering, spatial selection, resampling (spatial and temporal), data averaging, and more. It becomes a one-stop shop for hydrometric dataset access and preparation. (Note for Ouranos's team: initially we employed clisops for certain operations (e.g., weighted spatial averaging); however, due to occasional instability and slow performance attributed to xesmf, we ultimately switched to xagg. Our tests have shown that xagg yields the same results for weighted spatial averaging, but with significantly improved speed and stability.)

Here is an example from an actual study that we are working on right now. The requirements are:

  • Streamflow data from DEH
  • Only select hydrological regions 03 to 06
  • Start date is 1970-01-01
  • At least 15 years of data
  • Natural regime or influenced (daily) only

This can be achieved with the following query, leveraging xdatasets's capabilities:
[screenshot of the query]

Below is the list of retrieved data, which can be easily viewed:
[screenshot of the resulting dataset]

The hydrometric data specification presented above is the result of extensive deliberation and collaboration with @TC-FF, drawing on our real-world experience with this kind of data. Through this process, we have determined that this format can represent a wide range of hydrometric data types (flow rates, water levels, basin-scale or station-specific weather data) at various time intervals, with different temporal aggregations (maximum, minimum, mean, sum, etc.) and spatial aggregations (such as a point (outlet or station) or a polygon (basin)), and that it includes information about the data source. We are seeking feedback on the proposed data specification for representing hydrometric datasets, including suggestions for improved variable naming, adherence to conventions, and potential modifications to the data model itself. This could include, for example, adding timezone info, time bounds, etc. Your input on these aspects would be greatly appreciated.

Also note that we intend to have approximately 20,000 daily-updated gauged basins in xdatasets, with precomputed climate variables at each basin (temperature, precipitation, radiation, dew point, SWE, etc.) from different sources (ERA5, ERA5-Land, Daymet, etc.), by the end of July. To retrieve the additional variables, one will simply need to include them in the query. The majority of basins are located in North America, with additional regions worldwide used for training deep learning algorithms. For this, we are building on the work already accomplished in HYSETS and CARAVAN, but our focus is on making it operational and easily queryable.

Additional context

There are many more details to be shared regarding the various components of the presented solution. Additionally, xdatasets offers a broader range of capabilities (such as working directly with climate datasets like ERA5) than the simple example presented here, with even more ambitious plans on the roadmap. However, considering the length of this post, I will conclude here to let you absorb the details. If you have any questions or suggestions, please don't hesitate to reach out.

Contribution

  • I would be willing/able to open a Pull Request to contribute this feature.

return_period displays the frequency

Setup Information

  • xhydro version: 0.2
  • Python version:
  • Operating System:

Description

[screenshot]
We ask for return periods, but what is displayed in the return_period dimension are frequencies.

Steps To Reproduce

No response

Additional context

No response

Contribution

  • I would be willing/able to open a Pull Request to address this bug.

Add a ReadTheDocs workflow

Addressing a Problem?

The current documentation is hosted on Github Pages, but ReadTheDocs would be preferable.

Additional context

Original conversation in #11 (comment)

Contribution

  • I would be willing/able to open a Pull Request to contribute this feature.

Document the Translation process

Addressing a Problem?

We should have some documentation under CONTRIBUTING.rst (or on its own page) that explains how to generate and edit the .po files needed for the French translations.

Potential Solution

I can help with the more generalized steps (i.e.: project creation and generation of the initial .po files), but it would be great if @TC-FF could briefly document how best to use the poedit tool.

Additional context

https://www.gnu.org/software/gettext/manual/html_node/PO-Files.html
https://poedit.net/
https://userbase.kde.org/Lokalize (a KDE-based application)

Contribution

  • I would be willing/able to open a Pull Request to contribute this feature.

Bug in the local_frequency_analysis notebook

Setup Information

  • xhydro version: 0.3.3
  • Python version: 3.11.8
  • Operating System: Windows 10

Description

I get an error at this line:

ds = xd.Query(
    **{
        "datasets": {
            "deh": {
                "id": ["020*"],
                "regulated": ["Natural"],
                "variables": ["streamflow"],
            }
        },
        "time": {
            "start": "1970-01-01",
            "minimum_duration": (15 * 365, "d"),
        },
    }
).data.squeeze().load()

The error:


ValidationError                           Traceback (most recent call last)
Cell In[2], line 1
----> 1 ds = xd.Query(
      2     **{
      3         "datasets": {
      4             "deh": {
      5                 "id": ["020*"],
      6                 "regulated": ["Natural"],
      7                 "variables": ["streamflow"],
      8             }
      9         },
     10         "time": {"start": "1970-01-01",
     11                  "minimum_duration": (15 * 365, "d")},
     12
     13 ).data.squeeze().load()
     15 # This dataset lacks some of the aforementioned attributes, so we need to add them.
     16 ds["id"].attrs["cf_role"] = "timeseries_id"

File ~\Anaconda3\envs\xhydro-dev\Lib\site-packages\xdatasets\core.py:122, in Query.__init__(self, datasets, space, time, catalog_path)
119 self.space = self._resolve_space_params(**space)
120 self.time = self._resolve_time_params(**time)
--> 122 self.load_query(datasets=self.datasets, space=self.space, time=self.time)

File ~\Anaconda3\envs\xhydro-dev\Lib\site-packages\xdatasets\core.py:256, in Query.load_query(self, datasets, space, time)
253 except:
254 pass
--> 256 ds_one = self._process_one_dataset(
257 dataset_name=dataset_name,
258 variables=variables_name,
259 space=space,
260 time=time,
261 **kwargs,
262 )
263 dsets.append(ds_one)
265 try:
266 # Try naively merging datasets into single dataset

File ~\Anaconda3\envs\xhydro-dev\Lib\site-packages\xdatasets\core.py:299, in Query._process_one_dataset(self, dataset_name, variables, space, time, **kwargs)
296 dataset_category = "user-provided"
298 elif isinstance(dataset_name, str):
--> 299 dataset_category = [
300 category
301 for category in self.catalog._entries.keys()
302 for name in self.catalog[category]._entries.keys()
303 if name == dataset_name
304 ][0]
306 if dataset_category in ["atmosphere"]:
307 with warnings.catch_warnings():

File ~\Anaconda3\envs\xhydro-dev\Lib\site-packages\xdatasets\core.py:302, in <listcomp>(.0)
296 dataset_category = "user-provided"
298 elif isinstance(dataset_name, str):
299 dataset_category = [
300 category
301 for category in self.catalog._entries.keys()
--> 302 for name in self.catalog[category]._entries.keys()
303 if name == dataset_name
304 ][0]
306 if dataset_category in ["atmosphere"]:
307 with warnings.catch_warnings():

File ~\Anaconda3\envs\xhydro-dev\Lib\site-packages\intake\catalog\base.py:472, in Catalog.__getitem__(self, key)
463 """Return a catalog entry by name.
464
465 Can also use attribute syntax, like cat.entry_name, or
(...)
468 cat['name1', 'name2']
469 """
470 if not isinstance(key, list) and key in self:
471 # triggers reload_on_change
--> 472 s = self._get_entry(key)
473 if s.container == "catalog":
474 s.name = key

File ~\Anaconda3\envs\xhydro-dev\Lib\site-packages\intake\catalog\utils.py:43, in reload_on_change.<locals>.wrapper(self, *args, **kwargs)
40 @functools.wraps(f)
41 def wrapper(self, *args, **kwargs):
42 self.reload()
---> 43 return f(self, *args, **kwargs)

File ~\Anaconda3\envs\xhydro-dev\Lib\site-packages\intake\catalog\base.py:355, in Catalog._get_entry(self, name)
353 ups = [up for name, up in self.user_parameters.items() if name not in up_names]
354 entry._user_parameters = ups + (entry._user_parameters or [])
--> 355 return entry()

File ~\Anaconda3\envs\xhydro-dev\Lib\site-packages\intake\catalog\entry.py:60, in CatalogEntry.__call__(self, persist, **kwargs)
58 def __call__(self, persist=None, **kwargs):
59 """Instantiate DataSource with given user arguments"""
---> 60 s = self.get(**kwargs)
61 s._entry = self
62 s._passed_kwargs = list(kwargs)

File ~\Anaconda3\envs\xhydro-dev\Lib\site-packages\intake\catalog\local.py:313, in LocalCatalogEntry.get(self, **user_parameters)
310 return self._default_source
312 plugin, open_args = self._create_open_args(user_parameters)
--> 313 data_source = plugin(**open_args)
314 data_source.catalog_object = self._catalog
315 data_source.name = self.name

File ~\Anaconda3\envs\xhydro-dev\Lib\site-packages\intake\catalog\local.py:613, in YAMLFileCatalog.__init__(self, path, text, autoreload, **kwargs)
611 self.filesystem = kwargs.pop("fs", None)
612 self.access = "name" not in kwargs
--> 613 super(YAMLFileCatalog, self).__init__(**kwargs)

File ~\Anaconda3\envs\xhydro-dev\Lib\site-packages\intake\catalog\base.py:128, in Catalog.__init__(self, entries, name, description, metadata, ttl, getenv, getshell, persist_mode, storage_options, user_parameters)
126 self.updated = time.time()
127 self._entries = entries if entries is not None else self._make_entries_container()
--> 128 self.force_reload()

File ~\Anaconda3\envs\xhydro-dev\Lib\site-packages\intake\catalog\base.py:186, in Catalog.force_reload(self)
184 """Imperative reload data now"""
185 self.updated = time.time()
--> 186 self._load()

File ~\Anaconda3\envs\xhydro-dev\Lib\site-packages\intake\catalog\local.py:648, in YAMLFileCatalog._load(self, reload)
646 logger.warning("Use of '!template' deprecated - fixing")
647 text = text.replace("!template ", "")
--> 648 self.parse(text)

File ~\Anaconda3\envs\xhydro-dev\Lib\site-packages\intake\catalog\local.py:728, in YAMLFileCatalog.parse(self, text)
726 result = CatalogParser(data, context=context, getenv=self.getenv, getshell=self.getshell)
727 if result.errors:
--> 728 raise exceptions.ValidationError(
729 "Catalog '{}' has validation errors:\n\n{}"
730 "".format(self.path, "\n".join(result.errors)),
731 result.errors,
732 )
734 cfg = result.data
736 self._entries = {}

ValidationError: Catalog 'C:/Users/maied01/AppData/Local/Temp/catalogs//hydrology.yaml' has validation errors:

("missing 'module'", {'module': 'intake_xarray'})

Steps To Reproduce

No response

Additional context

I created a conda environment by following the steps in the procedure, and launched a Jupyter Notebook using Anaconda Navigator.


Contribution

  • I would be willing/able to open a Pull Request to address this bug.

README

Addressing a Problem?

It would be good to flesh out the README on the repo's landing page (https://github.com/hydrologie/xhydro/blob/main/README.rst). It does not need to be exhaustive; it would simply welcome new contributors by describing the different sections of the repo. We could also add a link to a document that explains the project in detail.

Potential Solution

No response

Additional context

No response

Contribution

  • I would be willing/able to open a Pull Request to contribute this feature.

Pydantic update is breaking xscen and xhydro

Setup Information

  • xhydro version: 0.3.0
  • Python version: 3.9 to 3.11
  • Operating System: Ubuntu

Description

Pydantic has just made a new release today (2023-12-22), and since then we are getting errors coming from xscen:
[screenshot of the error]

Steps To Reproduce

Install xscen or xhydro in a new environment with pydantic's newest release (v2.5.3).

Additional context

@Zeitsperre
@RondeauG

Contribution

  • I would be willing/able to open a Pull Request to address this bug.

Adding the notebooks under Usage instead of at the root level

Setup Information

  • xhydro version: not important
  • Python version:
  • Operating System:

Context

We are currently adding the notebooks at the root level of our documentation, rather than under a section such as Usage in Sphinx. To keep things cleaner and prepare for the addition of other notebooks, could we move the notebooks under the Usage section?

@Zeitsperre @RondeauG

Update the installation instructions

Addressing a Problem?

Several small details in our installation instructions and in "Contributing" are either wrong or not precise enough, including the use of conda (where mamba would be preferable) and when (and for which use cases) to run pip install -e . vs pip install xhydro.

Potential Solution

No response

Additional context

No response

Contribution

  • I would be willing/able to open a Pull Request to contribute this feature.

Geospatial operations for hydrological analysis

Addressing a Problem?

Hydrology relies on geospatial operations, encompassing tasks such as watershed delineation and the extraction of physiographic variables at the watershed scale. PAVICS-Hydro has implemented various functionalities in ravenpy to execute these operations.

It would be interesting to integrate some of these features into xhydro by leveraging the work done in ravenpy, while also adding new functionality.

Potential Solution

The solution would include the functionalities already present in ravenpy, plus the following:
Watershed Delineation

  1. Support concurrent delineation of multiple watersheds simultaneously.
  2. Enable access to official watershed polygons (shapefile/geojson/geoparquet) from authoritative sources (DEH, HYDAT, USGS, HQ, etc.), implemented collaboratively with xdatasets.

Physiographic Variable (or others) Extraction

  1. Support simultaneous extraction of physiographic variables across multiple watersheds.
  2. Facilitate the extraction of variables present in STAC catalogs (e.g., Planetary Computer).
  3. Accommodate cases where users employ their own rasters for extraction.
  4. Implement extraction using pixel weighting rather than an "all_touched" approach, as this can significantly impact the final results; implemented collaboratively with xdatasets.
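To illustrate point 4, here is a toy comparison (pure Python, made-up numbers) of an "all_touched" mean versus an area-weighted mean over three pixels partially covered by a watershed:

```python
# Each tuple: (pixel value, fraction of the pixel's area inside the watershed)
pixels = [
    (10.0, 1.0),   # fully inside
    (20.0, 0.5),   # half covered
    (30.0, 0.1),   # barely touched
]

# "all_touched": every intersected pixel counts fully, which biases the mean
# toward pixels that barely overlap the watershed
all_touched_mean = sum(v for v, _ in pixels) / len(pixels)

# weighted: each pixel contributes in proportion to its covered area
weighted_mean = sum(v * w for v, w in pixels) / sum(w for _, w in pixels)
```

Here all_touched_mean is 20.0 while weighted_mean is 14.375, showing how much the edge pixels can distort an unweighted extraction.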

Additional context

No response

Contribution

  • I would be willing/able to open a Pull Request to contribute this feature.

Cleaner environment

Generic Issue

  • xhydro version: 0.1.5

environment.yml currently has many dependencies that should either be removed or moved to environment-docs (such as the Sphinx ones). environment.yml should be kept as lean as possible, then populated as we add functions.

We also have 5 files where dependencies are listed; I think we can get rid of a few of them.

Additional context

Original conversation in #11 (comment)
