dhi-gras / budyko-qgis

QGIS interface to the Budyko Hydrological Model, maintained by DTU Environment
License: GNU General Public License v3.0
Due to thouska/spotpy#200, the calibration searches for the worst model performance instead of the best, because of a sign issue in the likelihood function. I have submitted a pull request; otherwise the code works with Spotpy 1.13.10 (although it would of course be better to use the newest version...).
I also looked into making the algorithm use the objective function from spotpy_multiobjective.py (it actually uses the default RMSE, which amounts to the same thing, except that it produces a positive number), but because the "negative likelihood" is hardcoded and inconsistent, a robust workaround seems hard. Apart from this issue, the calibration implementation in QGIS runs smoothly with the test data, except for a few tweaks to the default calibration settings (which I will also update in the code).
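The sign issue can be illustrated without spotpy itself: if the sampler maximizes its objective, an error measure such as RMSE must be negated before being returned, or the sampler will chase the worst parameter set. A minimal sketch (the function names and data are illustrative, not spotpy's API):

```python
import math

def rmse(simulated, observed):
    """Root-mean-square error: lower is better."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed))
                     / len(observed))

def objective(simulated, observed):
    """Value for a *maximizing* sampler: negate RMSE so a better fit
    yields a larger objective. Dropping the minus sign is exactly the
    sign issue described above."""
    return -rmse(simulated, observed)

obs  = [1.0, 2.0, 3.0]
good = [1.1, 2.0, 2.9]   # close fit
bad  = [3.0, 0.0, 5.0]   # poor fit

assert objective(good, obs) > objective(bad, obs)
```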
The steps (from @KittelC's Budyko guide) are:
OBS: Before running TauDEM algorithms, ensure that your no-data value is a float (e.g. -32768) and not “nan” or similar. TauDEM does read the GDAL no-data tag (or similar) in the tiff metadata, but is in general unable to deal with non-float values. This can yield absurd results (particularly if you have ocean cells in your DEM).
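As a sketch of that cleanup (the -32768 value and the tiny DEM array are illustrative), NaN cells can be replaced by a float no-data value with NumPy before the raster is written out and handed to TauDEM:

```python
import numpy as np

NODATA = -32768.0  # float no-data value that TauDEM can handle

# Illustrative DEM tile: NaN marks ocean / missing cells.
dem = np.array([[101.5, np.nan],
                [ 99.0, 102.3]], dtype=np.float32)

# Replace NaN with the float no-data value.
dem = np.where(np.isnan(dem), NODATA, dem)

assert not np.isnan(dem).any()
assert dem[0, 1] == NODATA
```

Remember to also set the same value in the raster's GDAL no-data tag when writing the GeoTIFF, so TauDEM recognizes it.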
From the QGIS processing toolbox run Pit Remove (TauDEM – Basic Grid Analysis tools – Pit Remove)
From the QGIS processing toolbox run Flow directions (TauDEM – Basic Grid Analysis tools – D8 Flow Directions)
From the QGIS processing toolbox run Contributing area (TauDEM – Basic Grid Analysis tools – D8 Contributing Area). Please note that the color stretch of the map is not adjusted automatically and needs to be adjusted manually in order to see the result.
Load or generate the requested outlet points in a point shapefile. Make sure the outlet points are located on top of rivers. Re-run step 8. You can create a new shapefile from a table of coordinates using Layer – Add Layer – Add Delimited Text Layer. Reprojection can be done with right-click – Save As and then changing the projection.
From the QGIS processing toolbox run Stream definition (TauDEM – Stream Network Analysis Tools – Stream Definition by threshold)
From the QGIS processing toolbox run Stream reach and watershed (TauDEM – Stream Network Analysis Tools – Stream Reach and Watershed)
Polygonize the watershed file using the processing toolbox – GDAL Conversion – Polygonize. The output field name for the polygons is “DN”.
Dissolve (merge) polygons with the same DN using processing toolbox – QGIS geoalgorithms – Vector geometry tools – Dissolve. Make sure you uncheck “Dissolve all”.
Replace stream and catchment IDs with continuous IDs (IDs must start at 0). To do this, proceed in the following steps:
a) Make a copy of the stream reaches file (in QGIS, right-click – export), retaining only the LINKNO field of the attribute table
b) Add a new ID field to the attribute table. From the processing toolbox use QGIS – vector table tools – add autoincremental field
c) Join the attribute tables of the stream reach file and the file generated in a) and b) (called the id file from here onwards) using QGIS – vector general tools – join attribute tables. Use “LINKNO” as the join field in both tables.
d) Using the field calculator, replace the original “LINKNO” field with the id field in the joined table
e) Join the attribute tables of the stream reach file and the id file again, using “DSLINKNO” as the join field in the stream reach file and “LINKNO” in the id file.
f) Using the field calculator, replace the original “DSLINKNO” field with the id field in the joined table, for all values except -1 (use an if statement)
g) Proceed the same way with “USLINKNO1” and “USLINKNO2”.
h) Join the attribute tables of the watershed file and the id file (using “DN” and “LINKNO”) and replace the “DN” field with the id field in the joined table.
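The renumbering in a)–h) can be sketched in plain Python (the field names follow TauDEM's attribute table; the sample link numbers are made up): build a LINKNO → 0-based ID map, then remap the downstream and upstream links while leaving the “no link” value -1 untouched.

```python
# Illustrative stream-reach records with TauDEM field names.
reaches = [
    {"LINKNO": 12, "DSLINKNO": 37, "USLINKNO1": -1, "USLINKNO2": -1},
    {"LINKNO": 37, "DSLINKNO": -1, "USLINKNO1": 12, "USLINKNO2": 54},
    {"LINKNO": 54, "DSLINKNO": 37, "USLINKNO1": -1, "USLINKNO2": -1},
]

# a)-b): continuous 0-based IDs, one per original LINKNO.
id_map = {r["LINKNO"]: i for i, r in enumerate(reaches)}

def remap(value):
    """c)-g): replace a link number by its new ID; keep -1 (no link)."""
    return id_map[value] if value != -1 else -1

for r in reaches:
    for field in ("LINKNO", "DSLINKNO", "USLINKNO1", "USLINKNO2"):
        r[field] = remap(r[field])

assert [r["LINKNO"] for r in reaches] == [0, 1, 2]
assert reaches[1]["DSLINKNO"] == -1   # the outlet reach stays -1
```

The same id_map would be applied to the watershed polygons' “DN” field, as in step h).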
All watershed delineation commands can be scripted as shown in the example script by Cécile (StreamNet_Edit_Taudem.py) – the script Stream_Subbasin_Cleanup.py shows how 0-length reaches can be removed and the reaches and subbasins re-numbered automatically.
Go to Raster-Zonal Statistics-Zonal statistics and add a column with the mean elevation to the subcatchment shape file attribute table.
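What Zonal Statistics computes for the mean elevation can be sketched with NumPy (the zone and DEM arrays are illustrative): average the DEM cells falling in each subcatchment ID.

```python
import numpy as np

# Illustrative rasters on the same grid: subcatchment IDs
# (e.g. from the polygonized watershed raster) and the DEM.
zones = np.array([[0, 0, 1],
                  [0, 1, 1]])
dem   = np.array([[100., 110., 200.],
                  [ 90., 220., 210.]])

# Mean elevation per zone: sum of DEM values in each zone
# divided by the number of cells in that zone.
sums      = np.bincount(zones.ravel(), weights=dem.ravel())
counts    = np.bincount(zones.ravel())
mean_elev = sums / counts

assert np.allclose(mean_elev, [100.0, 210.0])
```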
Open the attribute table and click on the field calculator button. Calculate a new field containing the catchment area, using the $area operator.
Reproject the subcatchment shapefile into latitude-longitude coordinates. Right-click on layer, save as and pick the EPSG 4326 projection.
Create a point shapefile containing the centroids of the subcatchments using Vector-Geometry Tools-Polygon centroids and save the result into a file, e.g. centroids_ll.shp.
Add columns with latitude and longitude to the centroid shape file attribute table, using Vector-Geometry Tools-Export/Add geometry columns. Or using the field calculator on the attribute table and the $x and $y operators.
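For reference, the centroid point whose $x and $y are exported above follows the standard area-weighted (shoelace) centroid formula for a polygon. A minimal sketch with an illustrative unit square in lon/lat:

```python
def polygon_centroid(ring):
    """Area-weighted centroid of a simple polygon given as a closed
    ring of (x, y) vertices (first vertex repeated at the end)."""
    a = cx = cy = 0.0
    for (x0, y0), (x1, y1) in zip(ring, ring[1:]):
        cross = x0 * y1 - x1 * y0  # shoelace term for this edge
        a  += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6.0 * a), cy / (6.0 * a)

# Unit square: centroid should be (0.5, 0.5).
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.0, 0.0)]
assert polygon_centroid(square) == (0.5, 0.5)
```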