
geoclimate's Introduction

Geoclimate


A geospatial processing toolbox for environmental and climate studies


GeoClimate documentation is available at https://github.com/orbisgis/geoclimate/wiki

Note:

The documentation is in line with the current GeoClimate source code.

If you are using a released version of GeoClimate, please read the documentation packaged with that version.

geoclimate's People

Contributors

arfon, danielskatz, ebocher, elbeejay, elsw56, ernlt, franetibe, gpetit, j3r3m1, maxcollombin, spalominos


geoclimate's Issues

Correlation tables for the upper scale...

Some IProcesses have a correlationTable allocated to the upper-scale object, which cannot work as is. This should be checked and fixed in all processes affected by this problem.

Processing chain fails when inputs sharing a name do not hold the same value

The same IProcess (scalesRelations) is used several times in the mapper called "CreateScalesOfAnalysis", but this process does not use the same input names each time for the input called columnIdUp (sometimes it refers to id_block, sometimes to id_rsu). @SPalominos @ebocher how could we manage this problem (using the same input twice but with different values)?

Take into account null road type

When data is transformed from OSM to the GeoClimate model, it is possible to obtain a road type with a null value.
The inputDataFormatting process does not check for that.

@j3r3m1

Do not substring column names

The unweightedOperationFromLowerScale and weightedAggregatedStatistics processes receive a set of column names to be selected to run the aggregate function. Currently, only the first three letters of each column name are used to build the indicator name. I think we should keep the whole name.

OSM in Nozay (INSEE code 91458)

I downloaded the OSM data for the city of Nozay (Essonne) and, when I wanted to compute the RSUs, I got an error saying that the column "the_geom" was not found in the "rail" table.
The reason is that there is no rail, and it seems that when H2 saves an empty table in GeoJSON format, the column names are not preserved when we load it back (the table loses both its rows and its columns...).
If there is no opportunity to change the H2 behaviour, what could we do? Add a test on the loaded input data that creates an empty table with the right column names when the table is empty?
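The guard suggested above could look like the following sketch, using Python's sqlite3 as a stand-in for H2 (the helper name and TEXT column type are assumptions for illustration; the expected rail columns come from the data formatting spec below):

```python
import sqlite3

# expected columns for the rail table, per the generic data formatting spec
EXPECTED_RAIL_COLUMNS = ["the_geom", "id_rail", "id_source", "type", "zindex"]

def ensure_table_columns(con, table, expected):
    # If the table came back missing (or column-less) after an empty-GeoJSON
    # round trip, recreate it with the expected column names so that later
    # queries such as SELECT the_geom FROM rail do not fail.
    cols = [row[1] for row in con.execute(f"PRAGMA table_info({table})")]
    if not cols:
        ddl = ", ".join(f"{c} TEXT" for c in expected)
        con.execute(f"CREATE TABLE {table} ({ddl})")
    return [row[1] for row in con.execute(f"PRAGMA table_info({table})")]

con = sqlite3.connect(":memory:")
columns = ensure_table_columns(con, "rail", EXPECTED_RAIL_COLUMNS)
```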

Different use for indicators

The indicators may be used for LCZ identification, for TEB, for urban typology classification, or for other uses.
In order to run only the needed processes in the RSU indicator processing chain, a solution could be to assign a list of numbers to each process (for example 1 for LCZ, 2 for TEB, etc.) and then launch the calculation only if the user asks for that number.
This solution would require laying out the entire processing chain as a scheme, in order to set the numbers of all indicators that are used as intermediate indicators in other indicator calculations.
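A minimal sketch of this numbering idea (the process names and their use-case assignments below are purely illustrative, not GeoClimate's actual mapping):

```python
# Use cases from the issue: 1 = LCZ, 2 = TEB, 3 = urban typology.
USE_CASES = {"LCZ": 1, "TEB": 2, "URBAN_TYPOLOGY": 3}

# Hypothetical tagging: which use cases need each indicator process.
PROCESS_USES = {
    "rsu_area": {1, 3},
    "rsu_projected_facade_area_distribution": {1, 2},
    "rsu_ground_sky_view_factor": {1},
}

def processes_for(use_case):
    # Launch only the processes whose tag set contains the requested number.
    target = USE_CASES[use_case]
    return sorted(p for p, uses in PROCESS_USES.items() if target in uses)
```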

ALERT: bug with the latest H2 (1.4.200)

I am not able to isolate it, but the latest H2 (1.4.200) introduces an NIO exception when a large process is run:

Exception in thread "Thread-0" java.lang.AssertionError
	at org.h2.mvstore.MVMap.rewrite(MVMap.java:616)
	at org.h2.mvstore.MVMap.rewritePage(MVMap.java:775)
	at org.h2.mvstore.MVMap.rewrite(MVMap.java:734)
	at org.h2.mvstore.MVMap.rewrite(MVMap.java:748)
	at org.h2.mvstore.MVMap.rewrite(MVMap.java:748)
	at org.h2.mvstore.MVMap.rewrite(MVMap.java:710)
	at org.h2.mvstore.MVStore.compactRewrite(MVStore.java:2137)
	at org.h2.mvstore.MVStore.rewriteChunks(MVStore.java:2026)
	at org.h2.mvstore.MVStore.compact(MVStore.java:2007)
	at org.h2.mvstore.MVStore.compactFile(MVStore.java:1968)
	at org.h2.mvstore.MVStore.closeStore(MVStore.java:1170)
	at org.h2.mvstore.MVStore.close(MVStore.java:1133)
	at org.h2.mvstore.db.MVTableEngine$Store.close(MVTableEngine.java:404)
	at org.h2.engine.Database.closeOpenFilesAndUnlock(Database.java:1545)
	at org.h2.engine.Database.closeImpl(Database.java:1454)
	at org.h2.engine.Database.close(Database.java:1373)
	at org.h2.engine.OnExitDatabaseCloser.onShutdown(OnExitDatabaseCloser.java:85)
	at org.h2.engine.OnExitDatabaseCloser.run(OnExitDatabaseCloser.java:114)
	Suppressed: org.h2.message.DbException: GeneralError
		at org.h2.message.DbException.convert(DbException.java:352)
		at org.h2.mvstore.db.MVTableEngine$1.uncaughtException(MVTableEngine.java:93)
		at org.h2.mvstore.MVStore.handleException(MVStore.java:2877)
		at org.h2.mvstore.MVStore.writeInBackground(MVStore.java:2813)
		at org.h2.mvstore.MVStore$BackgroundWriterThread.run(MVStore.java:3290)
		Suppressed: java.lang.AssertionError
			at org.h2.mvstore.MVMap.rewrite(MVMap.java:616)
			at org.h2.mvstore.MVMap.rewritePage(MVMap.java:775)
			at org.h2.mvstore.MVMap.rewrite(MVMap.java:734)
			at org.h2.mvstore.MVMap.rewrite(MVMap.java:748)
			at org.h2.mvstore.MVMap.rewrite(MVMap.java:748)
			at org.h2.mvstore.MVMap.rewrite(MVMap.java:748)
			at org.h2.mvstore.MVMap.rewrite(MVMap.java:710)
			at org.h2.mvstore.MVStore.compactRewrite(MVStore.java:2137)
			at org.h2.mvstore.MVStore.rewriteChunks(MVStore.java:2026)
			at org.h2.mvstore.MVStore.doMaintenance(MVStore.java:2844)
			at org.h2.mvstore.MVStore.writeInBackground(MVStore.java:2788)
			... 1 more
	Caused by: java.sql.SQLException: GeneralError
		... 5 more
	[CIRCULAR REFERENCE:java.lang.AssertionError]

Please execute this unit test

@Test
void testOSMGeoclimateChain() {
    String directory = "./target/geoclimate_chain"
    File dirFile = new File(directory)
    dirFile.delete()
    dirFile.mkdir()
    H2GIS datasource = H2GIS.open(dirFile.absolutePath + File.separator + "geoclimate_chain_db;AUTO_SERVER=TRUE")
    IProcess process = ProcessingChain.GeoclimateChain.OSMGeoIndicators()
    if (process.execute(datasource: datasource, placeName: "Vannes")) {
        IProcess saveTables = ProcessingChain.DataUtils.saveTablesAsFiles()
        saveTables.execute([inputTableNames: process.getResults().values(),
                            directory: directory, datasource: datasource])
    }
}

Needs investigation.

Create table data type

Replace the DOUBLE type in CREATE TABLE statements with DOUBLE PRECISION to be compliant with PostgreSQL.
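A sketch of such a rewrite (this helper is illustrative, not GeoClimate code): replace bare DOUBLE column types with the standard DOUBLE PRECISION, which both H2 and PostgreSQL accept.

```python
import re

def to_portable_ddl(ddl):
    # Rewrite DOUBLE -> DOUBLE PRECISION, leaving existing
    # "DOUBLE PRECISION" occurrences untouched (negative lookahead).
    return re.sub(r"\bDOUBLE\b(?!\s+PRECISION)", "DOUBLE PRECISION", ddl,
                  flags=re.IGNORECASE)

portable = to_portable_ddl("CREATE TABLE rsu_indicators (id_rsu INTEGER, area DOUBLE)")
```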

prefixName issue

In the SpatialUnits IProcesses, outputTableName has been replaced by prefixName, but outputTableName is still documented as a parameter in the Javadoc. It should be removed.

The prefixName is used to build the outputTableName as follows:

String outputTableName = prefixName+"_"+UUID.randomUUID().toString().replaceAll("-","_")

I thought the prefixName was useful to distinguish two calculations run by one user who needs the same IProcess with different input data. I also assumed that a base name was automatically defined in the script (for example the script name) and that the user would have the opportunity to prepend the prefixName to this base name. My proposal would be:

String outputTableName = prefixName+"_baseNameOfIprocess"

With the current code, the output name is pretty ugly (because of the UUID) and hard for the user to identify.
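The two naming strategies can be contrasted in a short sketch (function names and the "createBlocks" base name are illustrative):

```python
import uuid

def output_table_name_current(prefix_name):
    # Current behaviour: prefix + random UUID -- unique but unreadable.
    return prefix_name + "_" + str(uuid.uuid4()).replace("-", "_")

def output_table_name_proposed(prefix_name, process_base_name):
    # Proposed behaviour: prefix + the IProcess base name -- readable,
    # but uniqueness then rests on the user choosing distinct prefixes.
    return prefix_name + "_" + process_base_name
```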

Replace update in geoclimate/geoIndicators

Currently, IProcesses such as "buildingMinimumBuildingSpacing" or "buildingMinimumRoadDistance" use an update query at the end to set a default value when the field is null...

An alternative that could save time would be to define a default value when creating the table:
CREATE TABLE $OutputTable($field DOUBLE DEFAULT 0, ...)
However, this solution would mean getting rid of the "inputFields" arguments in each IProcess, since we know neither their types nor their potential default values, etc.

A third solution would be to get rid of these "inputFields" altogether. In that case, the results of the processes would be gathered only once all processes have finished.
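The DEFAULT-at-creation idea can be demonstrated with Python's sqlite3 as a stand-in for H2 (column-level DEFAULT is standard SQL; the table and column names mirror the indicators mentioned above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The DEFAULT clause replaces the trailing "UPDATE ... WHERE field IS NULL".
con.execute("""CREATE TABLE building_indicators (
                   id_build INTEGER,
                   building_minimum_building_spacing DOUBLE DEFAULT 0)""")
# Insert a building for which no spacing value was computed.
con.execute("INSERT INTO building_indicators (id_build) VALUES (1)")
spacing = con.execute(
    "SELECT building_minimum_building_spacing FROM building_indicators"
).fetchone()[0]
```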

Road initialization BDTopo

Trouble with the prepare-data IProcesses. I tried to use the BDTopo version 2016 for one IRIS of the city of Nantes, and some roads whose "largeur" field initially has a value end up with a null width after the data preparation.

It seems that the problem comes from the nature of the roads ("Route empierrée"), which is not taken into account in our code (I did not check, but maybe the nature values have changed since 2016)?

A list of slow processes that must be optimized

  • Building interactions properties

  • Building road distance
    Buffering on roads takes a long time. Not sure if this will have a major impact on distance calculation...

  • RSU projected facade area distribution

  • RSU ground sky view factor (use the latest ST_GeneratePoints function)

  • createScalesRelations

  • Perkins Skill Score (new version) seems very slow

Indexes created twice

Indexes are created twice because of the following query, which can carry a different id_name along a "mapper"...

create index if not exists id_name on...
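One possible fix, sketched below (the naming convention is an assumption, not GeoClimate code): derive the index name from the table and column, so that CREATE INDEX IF NOT EXISTS stays idempotent per table instead of depending on whatever id_name the mapper happens to carry.

```python
def index_ddl(table, column):
    # Deterministic index name derived from table + column: running the
    # same statement twice is a no-op, and two different tables never
    # collide on a shared literal name.
    name = f"idx_{table}_{column}".lower()
    return f"CREATE INDEX IF NOT EXISTS {name} ON {table}({column})"
```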

Change test names

We should add the prefix "test" to the name of each process test in the test files.

Metadata files

For each output file, generate some metadata: a README or a PDF document that describes all the variables stored in the tables and output files.

LCZ type names

We use VARCHAR to store the LCZ type of an area. Should we switch to integers (as in WUDAPT, which uses 1, 2, 3, etc. for built types and 101, 102, etc. for natural types)?

building_likelihood_large_building

For the building_likelihood_large_building calculation, I need building_number_building_neighbors.
I could recalculate it, but that is probably costly (it is not a direct calculation).

The alternative is to assume that we already know the number of neighbors of each building and simply create the process from there. This means that if users do not have this information, they will have to create a mapper connecting several IProcesses.

RSU in Fontainebleau (INSEE 77186)

The RSUs created for the commune of Fontainebleau are strange. Some of them are defined outside the commune limits, and they do not even intersect the other RSUs containing the buildings (they look like islands in an empty ocean...).
We have probably forgotten one of the last steps of the RSU creation, the one that selects the RSUs located within the commune zone?

Connect mappers

We have finished coding the first mapper (to prepare the BDTopo data).
The objective is to create a mapper that integrates the full chain of processes (from BDTopo import to indicator calculation).
We have decided to separate the main tasks into submappers as follows:

  • data preparation: prepareBDTopoData is coded
  • create the spatial units: we should code the corresponding mapper
  • calculate some of the indicators: we should code the corresponding mapper

To do this job, I have two questions:

  • should we store each mapper in a separate file, as is done for prepareBDTopo, or is it possible to code several mappers in the same file (for example all those that concern data preparation - OSM and BDTopo - all those that concern spatial unit preparation, and all those that concern indicator calculation)?
  • is it already possible to connect several mappers?

Edit: I also have a third question: is the mapper appropriate for computing parallel chains? For example, in the case of spatial unit creation:
-->P1-->P2-->
----->P3------>

Trouble with mappers

I still have a problem with the mapper createScalesOfAnalysis.
I get a null table in the IProcess createScalesRelations between blocks and buildings. It seems that the mapper input inputLowerScaleTableName is null, whereas it is properly written as a string input in my test.
Could this problem come from the fact that I use createScalesRelations several times, so that the input is defined several times? I do not get it. @SPalominos I am available for debugging if you have time.

Modify the uid name

To get a unique name for intermediate tables, we should replace the time-based scheme with a random UUID:
UUID.randomUUID().toString()
(then replace the "-" with "_" or remove them)

Still a problem with z data

The RSU creation works now, but there is still a problem with the block creation (when executing the "osmGeoIndicatorsFromTestFiles" processing chain using PR #143). The problem again comes from a union of data whose z values are NaN.

The error message is the following:
[main] INFO org.orbisgis.geoindicators.Geoindicators - Building spatial clusters...
[main] INFO gui.class org.h2gis.network.functions.GraphCreator - Loading graph into memory...
[main] INFO gui.class org.h2gis.network.functions.GraphCreator - 0.089 seconds
[main] INFO gui.class org.h2gis.network.functions.ST_ConnectedComponents - Calculating connected components...
[main] INFO gui.class org.h2gis.network.functions.ST_ConnectedComponents - 0.038 seconds
[main] INFO gui.class org.h2gis.network.functions.ST_ConnectedComponents - Storing node connected components...
[main] INFO gui.class org.h2gis.network.functions.ST_ConnectedComponents - 0.286 seconds
[main] INFO gui.class org.h2gis.network.functions.ST_ConnectedComponents - Storing edge connected components...
[main] INFO gui.class org.h2gis.network.functions.ST_ConnectedComponents - 0.198 seconds
[main] INFO org.orbisgis.geoindicators.Geoindicators - Merging spatial clusters...
[main] ERROR org.orbisgis.processmanager.Process - Error while executing the process.
Exception calling user-defined function: "union(MULTIPOLYGON (((516872.94233278395 5277759.794728824, 516880.52041002514 5277762.039496896, 516888.17325328063 5277764.39563172, 516887.7939132827 5277765.728210717, 516854.78013486054 5277756.0744980145, 516864.9842706023 5277759.104829133, 516865.5144383675 5277757.550405747, 516872.94233278395 5277759.794728824)), ((516865.5144383675 5277757.550405747, 516866.045572383 5277755.662565364, 516873.5485633215 5277757.907106766, 516872.94233278395 5277759.794728824, 516865.5144383675 5277757.550405747)), ((516872.94233278395 5277759.794728824, 516873.5485633215 5277757.907106766, 516881.126642796 5277760.151875611, 516880.52041002514 5277762.039496896, 516872.94233278395 5277759.794728824)), ((516880.52041002514 5277762.039496896, 516881.126642796 5277760.151875611, 516888.7047164715 5277762.396654348, 516888.47653194936 5277763.396251952, 516888.17325328063 5277764.39563172, 516880.52041002514 5277762.039496896)))): found non-noded intersection between LINESTRING ( 516887.7939132827 5277765.728210717, 516854.78013486054 5277756.0744980145 ) and LINESTRING ( 516864.9842706023 5277759.104829133, 516865.5144383675 5277757.550405747 ) [ (516864.9986896651, 5277759.062553216, NaN) ]"; SQL statement:

        CREATE INDEX ON spatial_clustersccd79a52_f37b_47f2_b27b_cf521b207259_NODE_CC(NODE_ID); [90105-199]

geoindicator names

For all geospatial indicator names (IProcesses), remove the part of the name corresponding to the scale (for example rsu_area --> area), since each indicator is already in a class corresponding to its scale...

List of the tests to be written

Below are listed the tests that have to be written concerning the data.
These tests are not unit tests, since they are not based on a generic data set. They are simply executed to check that the process outputs are error-free. If a test shows errors, the following processes cannot be executed.

Everybody is welcome to feed this list.

Output of the OSM / BD Topo preparation part

For buildings

  • only POLYGONS are accepted
  • null TYPE is not allowed
  • the TYPE has to have an equivalent in the abstract list
  • HEIGHT_WALL has to be lower than 1000
  • NB_LEV has to be lower than 200
  • check if the expected fields are present
  • Consider the possibility to have no objects in the table

For roads

  • only LINESTRINGS are accepted
  • null TYPE is not allowed
  • the TYPE has to have an equivalent in the abstract list
  • WIDTH has to be lower than 100
  • check if the expected fields are present
  • Consider the possibility to have no objects in the table

For rails

  • only LINESTRINGS are accepted
  • null TYPE is not allowed
  • the TYPE has to have an equivalent in the abstract list
  • check if the expected fields are present
  • Consider the possibility to have no objects in the table

For hydrographic area

  • only POLYGONS are accepted
  • check if the expected fields are present
  • Consider the possibility to have no objects in the table

For vegetation area

  • only POLYGONS are accepted
  • null TYPE is not allowed
  • the TYPE has to have an equivalent in the abstract list
  • check if the expected fields are present
  • Consider the possibility to have no objects in the table

Output of the generic data formatting part

For the table buildings

  • check if the expected fields are present (THE_GEOM, ID_BUILD, ID_SOURCE, HEIGHT_WALL, HEIGHT_ROOF, NB_LEV, TYPE, MAIN_USE, ZINDEX, ID_ZONE)

Check if the following fields have null values (in theory it's not possible. If so, warning!)

  • TYPE
  • HEIGHT_WALL
  • HEIGHT_ROOF
  • NB_LEV

For the table roads

  • check if the expected fields are present (THE_GEOM, ID_ROAD, ID_SOURCE, WIDTH, TYPE, SURFACE, SIDEWALK, ZINDEX)

Check if the following fields have null values (in theory it's not possible. If so, warning!)

  • TYPE
  • WIDTH

For the table rails

  • check if the expected fields are present (THE_GEOM, ID_RAIL, ID_SOURCE, TYPE, ZINDEX)

Check if the following fields have null values (in theory it's not possible. If so, warning!)

  • TYPE

For the table hydro

  • check if the expected fields are present (THE_GEOM, ID_HYDRO, ID_SOURCE)

For the table veget

  • check if the expected fields are present (THE_GEOM, ID_VEGET, ID_SOURCE, TYPE, HEIGHT_CLASS)

Check if the following fields have null values (in theory it's not possible. If so, warning!)

  • TYPE
  • HEIGHT_CLASS

Building preparation from OSM data takes a very long time

I applied the test "osmGeoIndicatorsFromApi", which calls the processing chain coded in "PrepareOSM.prepareOSMDefaultConfig()". With the INSEE code given by Erwan (for the La-Roche-Bernard city - #whatacity), it works perfectly, but when I try the Paris commune (OK, maybe my mistake), the processing chain spends far too much time (more than 3 h; I had to stop the process) at the step that logs:
"Buildings preparation starts" (right after "Transform OSM data to GIS tables.")

@ELSW56 @ebocher any idea to speed up this step (I think it is the IProcess called "defineBuildingScript" that takes so much time)?

Problem solved but not identified

Compiling the netCompacity branch was tough; it did not work directly. Given the modifications I made to make it work, I have two assumptions, both a bit strange:

  • either the problem came from the order of the tests relative to the order of the processes in the process file; the order was not preserved, which would make the tests fail...
  • or the string that identifies the IProcess (at the beginning of each IProcess) was the same for holeAreaDensity and netCompacity, which could affect the correct behaviour of the code...

Building passive volume

It seems there is an error in the definition of the building passive volume: the formula must take into account only the free facades (party walls are not considered)

... and in the code here, that is not the case:

else if(operation=="building_passive_volume_ratio") {

Not sure, but to me this indicator is so complex that we should extract it into a specific script.

@j3r3m1 @ebocher

Relationships between spatial units

For the calculation of some of the geospatial variables, we need the correlation table between spatial units as input (for example building and block, building and RSU, block and RSU).
To create those tables, I see two solutions: create a new file that could be called "units_relations", or add an IProcess in the spatial_unit file that could be called "unitRelationCalculation".
What is your opinion on this point?

Update the use of spatial index

The syntax for the creation of (spatial) indexes is not aligned with PostgreSQL/PostGIS (see h2database/h2database#2171).
So when the problem is resolved on the H2/H2GIS side, we will have to update the .sql (and maybe .groovy) files in GeoClimate accordingly.

@ebocher @SPalominos @j3r3m1 @ELSW56

Merging the geometries: long runtime

For information, it seems that the operation logging "Merging the geometries..." is also slow (I have run it for several territories, and it seems even longer than the "Buildings preparation starts" step, which lasted more than 3 hours for Paris without completing).
The associated IProcess is SpatialUnits.createBlocks(). Maybe it is due to the buffer used, I do not know...

Trouble chaining processes when mapping

Sometimes IProcesses need several indicators as inputs, and these indicators should be in the same table. Since the indicator and the geometry id are the only columns an IProcess preserves, we need to perform some "join" operations on the tables. Mapping would thus be complicated unless we add an "inputField" to the inputs of each IProcess.

Trouble with extra column and buildLCZ

When I want to build the LCZ from a table containing a column other than the_geom and id_rsu, a duplicate error appears (when some of the indicators are joined).

IProcess not at the right scale

The "unweightedOperationFromLowerScale" IProcess should be a parent of the current BlockIndicator where it is located, since it can summarize information from building to block, building to RSU, or block to RSU scale.

The "geometryProperties" IProcess should also be in a parent directory, since it applies identically at each scale.

References for indicators

We do not yet describe the indicators we calculate well. Should we add the reference sources?

For example for buildingConcavity:
References:

  • Bocher, E., Petit, G., Bernard, J., & Palominos, S. (2018). A geoprocessing framework to compute urban indicators: The MApUCE tools chain. Urban Climate, 24, 153-174.
  • Adolphe, L., 2001. A simplified model of urban morphology: application to an analysis of the environmental performance of cities. Environ. Plann. B. Plann. Des. 28, 183–200.
  • Atelier Parisien d’URbanisme (APUR), A. P., 2007. Consommations d'énergie et émissions de gaz à effet de serre liées au chauffage des résidences principales parisiennes. Technical Report. Atelier Parisien d’URbanisme (APUR).

Or at least one of the references (Bocher's, for example)? Or full references only for new indicators, adding only the Bocher reference at the beginning of each script for the others?

ID_ZONE in RSU table ?

I wonder what the best moment is to add the ID_ZONE to the RSU table.
Two issues:

  • it is quite complicated to add it at the beginning of the GeoClimate processing chain, since we would have to act differently for OSM and BDTopo.
  • it seems to be useful only to feed a general table that could gather the results for several cities?

Thus, in my opinion, the ID_ZONE should be integrated into the RSU table only at the feeding step (in the "loop"), since we will process city by city.

geoindicator operation names

Insert op.toUpperCase() in the tests in order to let the user use lower case or a mix of lower and upper case.
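The normalisation idea sketched in Python (the set of supported operation names below is illustrative, not GeoClimate's actual list):

```python
SUPPORTED_OPERATIONS = {"AVG", "SUM", "STD", "GEOM_AVG"}

def normalize_operation(op):
    # Upper-case the user input before matching, so "avg", "Avg" and "AVG"
    # are all accepted as the same operation.
    op_up = op.upper()
    if op_up not in SUPPORTED_OPERATIONS:
        raise ValueError(f"unsupported operation: {op}")
    return op_up
```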

Prefix in prepare data

At the end of the prepare-data step, the table names are fixed. If we run several communes, there is a risk of conflict. We must therefore add a prefix name for the bdtopo and osm processes in prepare data.

Block_hole_area

Could the block_hole_area indicator be calculated directly from a mapper of IProcesses (ST_AREA(ST_HOLES(the_geom))), or should we define a dedicated IProcess?
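For reference, a pure-Python sketch of what ST_AREA(ST_HOLES(the_geom)) computes, i.e. the total area of a polygon's interior rings via the shoelace formula (the helper names are illustrative):

```python
def ring_area(ring):
    # Shoelace formula: absolute value of the signed area of a closed ring,
    # given as a list of (x, y) vertices (last edge wraps to the first vertex).
    s = 0.0
    for (x1, y1), (x2, y2) in zip(ring, ring[1:] + ring[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def block_hole_area(holes):
    # Sum the areas of all interior rings (holes) of a block polygon.
    return sum(ring_area(h) for h in holes)

# A single unit-square hole.
area = block_hole_area([[(0, 0), (1, 0), (1, 1), (0, 1)]])
```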
