
gwlucastrig / tinfour

144 stars · 17 watchers · 34 forks · 26.71 MB

Delaunay and Constrained Delaunay Triangulations in Java, providing high-performance utilities for modeling surfaces with support for Lidar LAS files, Digital Elevation Models (DEM), finite element analysis, path planning, natural neighbor interpolation, and other applications of Triangulated Irregular Networks (TIN)

License: Apache License 2.0

Batchfile 0.10% Java 99.90%
lidar java digital-elevation-model delaunay triangulation delaunay-triangulation gis computational-geometry hydrography bathymetry

tinfour's Introduction

Tinfour

High-Performance 2D Delaunay Triangulation and Related Utilities Written in Java

Notice

The Tinfour compiled binary files (Jar files) are available from Sonatype's Central Repository search facility or from the Maven Central Repository

We are migrating some of our documentation to a separate project. Visit TinfourDocs to check it out.

Delaunay Triangulation

The Delaunay Triangulation defines an optimal form for organizing unstructured or semi-random sample points into a triangular mesh. That optimality makes the Delaunay Triangulation a useful tool for interpolation, grid construction, and surface analysis.

Surface Models using TINs

Tinfour

Tinfour is a software library written in Java that provides tools for constructing and applying Triangulated Irregular Networks (TINs) that conform to the Delaunay criterion. Because it is intended to process large data sets, the implementation gives a great deal of attention to performance and memory use. On a conventional laptop, Tinfour is capable of processing sample points at a rate of better than one million points per second.

The Tinfour source code includes extensive documentation. This project also includes an informal paper that describes the uses, algorithms, and implementation of the software with enough detail to support potential developers who may wish to contribute code or employ Tinfour in their own work. For more details, see Data Elements and Algorithms for the Tinfour Library. If you would like to discuss the Tinfour project or tell us about your own work, feel free to visit The Tinfour Discussion Page.

The Tinfour Viewer

When someone first sees a project like Tinfour, they might reasonably ask that most thorny of questions: "What is it good for?" To answer that question, this library includes a simple demonstration application called Tinfour Viewer that allows the user to exercise the major functions of the Tinfour library. Using Tinfour Viewer, the user can explore data sets ranging in size from just a few points up to the millions.

Here's a screenshot from the Tinfour Viewer showing a collection of Lidar elevation data collected over a section of Interstate highway in the U.S. Northeast.

Lidar over Guilford, CT

The Tinfour Viewer application is intended to show how the Tinfour library could be integrated into a full-featured GIS application or other analysis tool. It's a simple implementation with a minimum of features. Instructions for setting up and running the Tinfour Viewer application are provided at the wiki page Tinfour Execution from the Command Line. Our wiki page attempts to simplify the process of running the Tinfour demonstration applications as much as possible. It also explains some of the nuances of the launch procedures and provides the details you will need to set up a command window and run the command-line variations for all the various Tinfour applications.

To run the Tinfour software, you must have Java installed on your system. If you do not have Java installed on your computer, you may download an installer for free from Oracle Corporation, Java Downloads.

Sources of Data

Lidar is a system for collecting surface elevation using laser measuring devices mounted on low-flying aircraft. It's pretty amazing technology. There are excellent sources of Lidar data to be had for free; you might start at Free LiDAR Data Sources or the USGS 3D Elevation Program. The Commonwealth of Pennsylvania was one of the first states to collect and post a comprehensive survey of lidar data, and they did the job right... Their site includes not just lidar data, but the supporting breakline files (Shapefiles), multi-spectral imagery, and project metadata (including Dewberry reports). Visit this excellent resource at PAMAP Lidar Elevation Data.

If you just want to download a single Lidar file and view it, we recommend PAMAP Tile 41001330PAS, which can be found at ftp://pamap.pasda.psu.edu/pamap_lidar/cycle1/LAS/South/2006/40000000/41001330PAS.zip. At 36.7 megabytes, the PAMAP file isn't dainty. But it does contain interesting land features and sufficient detail to exercise the major functions of the viewer.

A short demo

Recently, we found an earlier Delaunay triangulation project by "The Mad Creator" (Bill Dwyer) that provided a four-line demo. It was such an elegant way of introducing the package that we decided to include one of our own.

public static void main(String[] args) throws Exception {
    IncrementalTin tin = new IncrementalTin(1.0);
    List<Vertex> vertexList = TestVertices.makeRandomVertices(100, 0);
    tin.add(vertexList, null);
    TinRenderingUtility.drawTin(tin, 500, 500, new File("tin.png"));
}

Does Tinfour require external project dependencies?

The core Tinfour module has no external dependencies. All it requires is the standard Java API. Thus, you can integrate the core classes into your own applications without adding unnecessary object code to your software.

The associated, extended-functionality modules do depend on object code from external projects. These include modules that can read data from Geographic Information System (GIS) sources (Shapefiles and airborne Lidar LAS files) and those that perform advanced mathematical and statistical analysis. These modules and dependencies are described in the Tinfour wiki page Tinfour Builds and Dependencies.

What version of Java is required for Tinfour?

Tinfour is compiled under Java 11 or higher.

Configuring Tinfour in an IDE

In terms of its software and package organization, Tinfour has a relatively simple structure, so opening it in an Integrated Development Environment (IDE) is straightforward. The major Java IDEs (Netbeans, Eclipse, and IntelliJ) all support direct access to Maven projects. If you have one of these IDEs, you can simply load the Tinfour project and run with it. All work fine. More hints and background information on configuring Tinfour for use in an IDE are included in the Tinfour wiki page Tinfour Builds and Dependencies.

Current Work

Development work on the Constrained Conforming Delaunay Triangulation is now complete.

Development work for the next release of Tinfour will focus on the introduction of Delaunay Refinement. Delaunay Refinement is a technique for improving the quality of the triangles formed by a Delaunay Triangulation through the introduction of synthetic vertices at well-chosen positions. Refinement techniques are particularly useful in areas near the boundaries of constraints or near the perimeter of a triangulation. These areas are often prone to the formation of "skinny" triangles (triangles with two small angles and one very large angle).

Currently, we are investigating the use of Ruppert's Algorithm as a refinement technique, though other refinement techniques do exist (such as Chew's Second Delaunay Refinement Algorithm).

For more detail about the Tinfour project development plans, see the Tinfour Project Status and Roadmap page.

Our Companion Project

Visit the Gridfour Software Project to learn more about our companion software project dedicated to creating open-source software tools for raster (grid) data sets.

Conclusion

Finally, the whole point of working on a project like Tinfour is to see it employed to do something useful. To that end, I welcome ideas, requests, and recommendations for analysis tools and applications that would benefit the open source and scientific communities. Got something to say? You can contact the Tinfour project at [email protected]

tinfour's People

Contributors

dependabot[bot] · gwlucastrig · micycle1


tinfour's Issues

Overlapping Constraints

Hi Gary,
First of all, thanks for the amazing project. I am currently using it to triangulate ENC/S-57 data in my thesis.

One annoyance I often meet during my triangulations is a failing triangulation (usually in step 2 of IncrementalTin.processConstraint, h or c == null). I assume this happens because I added another constraint which, unfortunately, overlaps another one somewhere.

I built my own logic which checks for intersections before adding constraints to Tinfour; however, the robustness of this approach is sometimes not sufficient.

Can you estimate whether it would be possible to add support for overlapping constraints, and how much work it would require? I am not too deep into the Delaunay algorithm business, which is why it is hard for me to tell.

If you say that it is possible with a fair amount of work, then it would be great to have that feature. Maybe I could help with that.

Use of term "adaptive" in GWR bandwidth selection is misleading

The use of the term "adaptive" for the GWR bandwidth selection is inconsistent with its use in most of the published literature. Recommend changing the enumeration value in BandwidthSelectionMethod to be something like OptimalAICc which is what that particular technique really does. "Adaptive" is a broader term, but it looks like it's often used in association with the bi-square kernel, cross validation methods of bandwidth selection, and others, none of which Tinfour implements. So there's lots of room for confusion.

Also, technically, the Proportional specification that is currently implemented is an adaptive method.

Investigation: storage for QuadEdge

I want to investigate the possibility of a custom storage format for QuadEdge. Creating a TIN takes some time; loading a stored one could take less.

No action is required. I will try to make some tests based on the semi-virtual implementation.

RFC: Proposal to Implement Limited Voronoi Diagram Class

I have a need for a Voronoi Diagram capability and am planning to implement a new class to construct Voronoi Diagram structures from an instance of a Tinfour IncrementalTin. The IncrementalTin classes implement Delaunay Triangulations and can be used to produce Voronoi diagrams in a relatively direct manner. The proposed class would accept a populated IncrementalTin as input and would use it to build a Voronoi diagram.

I am seeking comments from potential users who have features they would like to see supported or have suggestions regarding the implementation.

The preliminary name for the new class is "LimitedVoronoi". The implementation will be limited in two senses.

First off, it will be limited to a finite domain. I do not intend to attempt to make the implementation cover an infinite plane as a true Voronoi would. The calling routine will have the option of either using the bounds of the IncrementalTin or of supplying its own. The domain of the resulting structure will always be a rectangular area.

Second, the LimitedVoronoi class will be limited in the range of functions it offers. Some of the more difficult operations will be left for future development. Most notably, it will not support an "add vertex" or "remove vertex" operation. These may be added in the future, but will not be part of the initial release.

Please feel free to post to this issue if you have relevant comments.

Thanks.

Incorrect computation of descriptive statistics for GWR

Although the GWR regression coefficients computation appears to be okay, it looks like the calculation of descriptive statistics is incorrect. The problem may be due to an incorrect calculation of the "hat" matrix. In any case, calculations of effective degrees of freedom, which are based on the hat matrix, are coming out too large. So confidence intervals will be too small. Also, I am unable to match the computed AICc criterion with the results from the GWR4 program (unfortunately, GWR4 doesn't appear to be available on the web anymore, so I am using an old version, 4.0.8, which has issues of its own).

NPE: TIN::addWithInsertOrAppend

An NPE is thrown while adding vertices in both implementations (Incremental + SemiVirtual).

NPE is thrown at:

n2.setForward(c.getForward());

Code to reproduce:

        List<Vertex> vertexList = new ArrayList<>(9);

        vertexList.add(new Vertex(182.55700774327852, -44.80335783492774, Double.NaN, 1));
        vertexList.add(new Vertex(125.61337371205445, -45.553850029595196, Double.NaN, 2));
        vertexList.add(new Vertex(130.5337093362808, -4.28630821921659, Double.NaN, 3));
        vertexList.add(new Vertex(108.32916380139068, 13.0071487640962, Double.NaN, 4));
        vertexList.add(new Vertex(136.4296682029902, -8.87822886342019, Double.NaN, 5));
        vertexList.add(new Vertex(138.5582786850251, -10.536044095326556, Double.NaN, 6));
        vertexList.add(new Vertex(142.72869756363542, -13.78407091786698, Double.NaN, 7));
        vertexList.add(new Vertex(156.62775913006791, -24.609008673509535, Double.NaN, 8));
        vertexList.add(new Vertex(125.32986550155634, -11.496184864434124, Double.NaN, 9));
        
        IIncrementalTin tin = new IncrementalTin(1e-8);
        tin.add(vertexList, null);

Issue:
I think the problem is in the bootstrap method, which selects a triangle with almost-collinear edges, as can be seen in the attached screenshot.
A similar issue can be reproduced when the TIN is bootstrapped from vertices that are too close together. Such vertices will be merged into a group in the next step, creating an invalid TIN.

Proposed solution:
Add a validity check to the bootstrap method: ignore degenerate triangles according to the TIN thresholds.

Screenshot with edges (RED) after NPE is thrown. ORANGE = vertices.
ss-tin-boot-npe

Enhance Shapefile Reader to handle UTF-8 Code Points

The Shapefile Reader / DBF file reader should be able to detect code page files (CPG file extension) and adjust reading to handle non-ASCII cases.

Currently, the reader expects String inputs to be either ASCII or ISO-8859-1 (it doesn't distinguish between them). It should probably be able to handle at least UTF-8, and in ISO-8859-1 mode it should perhaps make sure that it screens out invalid characters.

Not sure I would bother with all the many variations. Just get these major ones likely to be used by Esri ArcGIS users. Something like Windows 1252 could wait until somebody identifies an actual data set that uses it.

Investigate adopting maven build process -- suggestions welcome

I am investigating adopting the maven build process (using pom.xml files) for Tinfour.

However, one of my highest priorities in the effort is to preserve the ability of people who do not have maven installations to still build and run Tinfour software for their own use.

This is unfamiliar territory to me and there are a few aspects of the Tinfour software collection that don't seem to fit well into the model established by the maven community. I'm encountering a steep learning curve for this effort. So I cannot promise speedy results.

On the other hand, moving to maven has a lot of positive aspects, most notably that it will take care of a problem with dependencies that has bothered me from the beginning. Tinfour depends on two external jar files (Apache Commons Math and laszip4j). I include these jars in the distribution. But, of course, that means that potential users and developers have no way of being sure about the origin of these files or whether they are the official releases from their respective projects. Moving to maven provides a way of handling these dependencies in an "industry best practices" sort of way.

Perf: SemiVirtualIncrementalTin::processConstraint set 'searchEdge'. Correct?

I've improved SemiVirtualIncrementalTin::processConstraint by setting searchEdge = e0 in the last line of the method.
Is it correct? I haven't experienced any issues.

I use a modified Hilbert Sort for constraint sorting; I'm sorting by the first vertex of each constraint.

My test data set:
Vertices: ~80M - non-unique
Constraints: 1.8M - all polygons; every vertex is a member of a polygon
Time to finish: under 6 minutes

Idea for making bootstrap more robust

Had a pathological input data set of 60,000 input vertices that were all collinear. Clearly, it is impossible to build a Delaunay Triangulation from such a data set. The bootstrap utility (which attempts to build the first triangle for the TIN) required several minutes to run while it tried an exhaustive search of the input set for a set of vertices that would form a valid triangle.

I have an idea for an improvement. Currently the bootstrap utility makes a quick attempt at creating the initial triangle and then tries an exhaustive attempt. In between, we could do something else (a sketch follows the list):

  1. Perform a linear regression on the input set and fit it with a straight line.
  2. If the R-squared value is too close to 1.0, the input set is all collinear, and no triangulation is possible. Reject it.
  3. If the R-squared value is too close to zero, then the input set is basically symmetric (random data, data arranged in a grid, data arranged in a circle, etc.). Proceed to the exhaustive search.
  4. Otherwise, run through the input samples. Find the vertex farthest from the line. Find two vertices that are relatively close to the line and located relatively far apart from each other. Perhaps you can compute the coordinate in the direction of the line for each vertex and use the ones with the minimum and maximum values... If the three vertices selected in this manner form a valid triangle (I suspect that they usually will), use them for the bootstrap. Otherwise, proceed to the exhaustive search.
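
For illustration, here is a minimal sketch of the step-4 selection heuristic. It assumes Tinfour's Vertex class with getX()/getY() accessors and a non-degenerate input list; it is not the actual bootstrap code.

    import java.util.List;
    import org.tinfour.common.Vertex; // package name assumed; adjust to your version

    public class BootstrapHeuristic {
        /**
         * Picks three candidates: the two extremes along the fitted line and the
         * vertex farthest from it. Returns null if the set is too small or collinear.
         */
        public static Vertex[] pickCandidates(List<Vertex> samples) {
            if (samples.size() < 3) {
                return null;
            }
            double mx = 0, my = 0;
            for (Vertex v : samples) {
                mx += v.getX();
                my += v.getY();
            }
            mx /= samples.size();
            my /= samples.size();

            // covariance terms; the principal axis of the 2x2 covariance
            // matrix gives the direction of the fitted line
            double sxx = 0, sxy = 0, syy = 0;
            for (Vertex v : samples) {
                double dx = v.getX() - mx, dy = v.getY() - my;
                sxx += dx * dx;
                sxy += dx * dy;
                syy += dy * dy;
            }
            double theta = 0.5 * Math.atan2(2.0 * sxy, sxx - syy);
            double ux = Math.cos(theta), uy = Math.sin(theta);

            Vertex vFar = null, vMin = null, vMax = null;
            double dFar = -1;
            double tMin = Double.POSITIVE_INFINITY, tMax = Double.NEGATIVE_INFINITY;
            for (Vertex v : samples) {
                double dx = v.getX() - mx, dy = v.getY() - my;
                double t = dx * ux + dy * uy;           // coordinate along the line
                double d = Math.abs(dx * uy - dy * ux); // perpendicular distance
                if (d > dFar) { dFar = d; vFar = v; }
                if (t < tMin) { tMin = t; vMin = v; }
                if (t > tMax) { tMax = t; vMax = v; }
            }
            if (dFar == 0) {
                return null; // all samples collinear: no triangulation is possible
            }
            return new Vertex[] {vMin, vMax, vFar};
        }
    }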

Perf: minor change SemiVirtualIncrementalTin::remove

I suggest skipping Ear creation when nEar = 3.
*) compute nRemoved edges in step 1: cavitation
*) the check for nEar = 3 is moved before step 2
*) if nRemoved = 3, skip step 2: ear creation

Suggested improvement:

        // step 1: Cavitation
        //         remove vertex and create a polygonal cavity
        //         eliminating all connecting edges
        n1 = n0.getForward();

        int nRemoved = 0;
        do {
            n2 = n1.getForward();
            n3 = n2.getForwardFromDual();
            //n2 is to be deleted.  set the forward edge
            //of n2 to point to n3.
            n1.setForward(n3);
            n1 = n3;
            //if (n2.getB() == null) {
            //   vertexIsOnPerimeter = true;
            //}
            nRemoved++;
            edgePool.deallocateEdge(n2);
        } while (!n2.equals(n0.getDual()));

        n0 = n1;
        n1 = n0.getForward();
        if (nRemoved == 3) {
            // the removal of the vertex resulted in a single triangle
            // which is already Delaunay.  The cavitation process should
            // have reset the links.  So the removal operation is done.
            setSearchEdgeAfterRemoval(n1);
            return true;
        }


        // Step 2 -- Ear Creation
        //           Create a set of Devillers Ears around
        //           the polygonal cavity.
        int nEar = 0;
        n1 = n0.getForward();
        SemiVirtualEdge pStart = n0;
        SemiVirtualDevillersEar firstEar = new SemiVirtualDevillersEar(nEar, null, n1, n0);
        SemiVirtualDevillersEar priorEar = firstEar;
        SemiVirtualDevillersEar nextEar;
        firstEar.computeScore(geoOp, vRemove);

        nEar = 1;
        do {
            n0 = n1;
            n1 = n1.getForward();
            SemiVirtualDevillersEar ear = new SemiVirtualDevillersEar(nEar, priorEar, n1, n0); //NOPMD
            ear.computeScore(geoOp, vRemove);
            priorEar = ear;
            nEar++;
        } while (!n1.equals(pStart));
        priorEar.next = firstEar;
        firstEar.prior = priorEar;

RFE: increase QuadEdge.CONSTRAINT_INDEX_MAX to 2^24

I suggest increasing CONSTRAINT_INDEX_MAX to 2^24. The current value is 2^20.

  /**
   * The maximum value of a constraint index based on the space
   * allocated for its storage. This is a value of 2^24.
   * In practice this value is larger than the available
   * memory on many contemporary computers would allow.
   */
  public static final int CONSTRAINT_INDEX_MAX = 1 << 24;

  /**
   * A mask that can be anded with the QuadEdgePartner's
   * index field to extract the constraint index,
   * equivalent to the 24 low-order bits.
   */
  public static final int CONSTRAINT_INDEX_MASK = CONSTRAINT_INDEX_MAX - 1;

Investigate unification of standard and semi-virtual TIN classes

Currently, Tinfour includes two packages that use different versions of the IQuadEdge interface. The standard package uses QuadEdge, while the semivirtual package uses SemiVirtualEdge. The QuadEdge implementation is faster and simpler, but uses about twice as much memory as the SemiVirtualEdge implementation. Both implementations have their places depending on the needs of an application.

The problem with the current implementation is that the classes in the two packages are substantially duplicates of each other. Therefore, this Issue entry proposes an investigation into unifying the two sets of classes as much as possible.

Both QuadEdge and SemiVirtualEdge implement IQuadEdge, so the obvious approach for the unification would be to refactor the standard and semi-virtual classes so that they operate on the generic interface rather than on the specific classes that they use. Ordinarily, this approach would be "good coding practice" (or just plain common sense). But Tinfour has unique performance issues due to the large number of samples in some of the data sets it processes.

So far, efforts to change the code to use the generic-interface approach have encountered a performance penalty. When linking together edges using the setForward() and setReverse() methods, there comes a point where it becomes necessary to specifically cast the IQuadEdge references to either SemiVirtualEdge or QuadEdge (depending on the implementation). This is a classic case of Java "downcasting". Because Java must specifically check object references at run time to verify that they represent instances of the relevant class, downcasting slows down program execution (this is in contrast to the case of "upcasting" where a specific instance is converted to a generic interface... since upcasting references can be verified and resolved at compile time, they carry no performance penalty). In tests so far, the "generic" implementations have incurred a 20 to 30 percent performance hit due to downcasting overhead. So avoiding downcasting is not just a fatuous preference.
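
A schematic illustration of where the downcast appears (not actual Tinfour code; the linking pattern follows the description above):

    // Generic linking code must downcast the interface references; the JVM
    // verifies each cast at run time, which is where the overhead comes from.
    void linkEdges(IQuadEdge a, IQuadEdge b) {
        QuadEdge qa = (QuadEdge) a; // checked downcast: run-time cost
        QuadEdge qb = (QuadEdge) b; // checked downcast: run-time cost
        qa.setForward(qb);
    }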

Therefore I am currently investigating ways to organize the classes so the impact of downcasting is minimized. Doing so will involve some reorganization of the class-to-package assignments and general clean up of interfaces. More details will follow depending on the success or failure of the effort.

Code cleanup: use tin.getThresholds() where possible

Use tin.getThresholds() where possible. Some IDEs have "Find usages" on the constructor: Thresholds::Thresholds(final double nominalPointSpacing)

Usages were found in the constructors of the following classes (not all listed):
SemiVirtualNeighborEdgeLocator
NeighborEdgeLocator
TriangularFacetInterpolator
NaturalNeighborInterpolator

Standard:

walker = new StochasticLawsonsWalk(nominalPointSpacing);

= new StochasticLawsonsWalk(nominalPointSpacing);

walker = new StochasticLawsonsWalk(nominalPointSpacing);
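
A sketch of the proposed cleanup pattern; whether StochasticLawsonsWalk accepts a Thresholds argument is an assumption here:

    // proposed (sketch): reuse the TIN's existing Thresholds instead of
    // rebuilding them from nominalPointSpacing (constructor signature assumed)
    Thresholds thresholds = tin.getThresholds();
    walker = new StochasticLawsonsWalk(thresholds);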

Implement ability to create contours from Delaunay Triangulation

I am developing a capability to create contours from Delaunay Triangulations. The end goal would be to create both isoline features and nested polygon features that could be used to represent analysis products from Tinfour applications.

I welcome suggestions for features, API elements, or general approach.

The algorithm I am using is similar to the "meandering triangles algorithm". The contours would be developed directly from the triangulation (rather than using an intermediate step of creating a grid). The requirements for the implementation are complicated somewhat in that I wish to handle the special case where contours pass exactly through vertices.

One use of the contouring feature would be to make it an output option for the Simple Volumetric Model (SVM). I've attached a picture of my work so far (created from data for the Lake of the Arbuckles in Oklahoma). The picture was created from a sample of about 100 thousand soundings. The contouring logic required about 0.23 seconds to complete.

Test

Changes to IConstraint

The original design of the IConstraint interface's method getConstraintWithNewGeometry has a flaw. I propose to change the input type of the method. This method currently accepts an Iterable as its input. I propose to use a List. Although the proposed signature will be less general than the old, it has a distinct performance advantage. Recent testing revealed that the performance disadvantage of the current approach may be much worse than I anticipated when I designed the interface.

This change may have an impact on existing implementations, though I suspect that in the majority of cases it will be transparent.

Explanation

Internally, the Tinfour constraint classes use an instance of ArrayList to store vertices. The process of adding elements to an ArrayList is more efficient if an application can supply the ArrayList constructor with the number of elements that will be added. This approach is particularly important if the number of elements is large and the elements must be added one at a time rather than through a call to addAll(). In the constraint implementations, vertices are added one at a time because the process screens out duplicate vertices (which would be toxic to the TIN algorithms).

When getConstraintWithNewGeometry creates a new constraint instance, it creates an ArrayList and populates it with an ordered collection of vertices specified as an argument of type Iterable. Internally, the existing implementations loop over the iterator, adding the vertices one at a time. Because the input type is an Iterable, there is no way to know the number of vertices to be added a priori, so the capacity of the ArrayList cannot be specified beforehand. Thus the internal array for the collection must be re-allocated multiple times while building the list.
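
For illustration, a sketch of the proposed List-based path; the isDuplicate helper is hypothetical and stands in for the constraint classes' duplicate screening:

    // with a List input, capacity is known a priori, so the backing
    // array is allocated exactly once
    ArrayList<Vertex> vertices = new ArrayList<>(inputList.size());
    for (Vertex v : inputList) {
        if (!isDuplicate(vertices, v)) { // hypothetical duplicate screen
            vertices.add(v);             // never triggers an internal re-allocation
        }
    }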

During testing for recent changes to ConstraintLoader class, I realized that the one-at-a-time approach was quite slow due to the multiple re-allocations of the list. I introduced new constructors for the IConstraint implementation classes that accepted a list of vertices and, thus, could ensure the capacity of the list before adding points. For a moderately large set of constraints (derived from a Shapefile giving a worldwide map of nations), the load time was reduced from an average of 13.3 seconds down to 0.8 seconds.

The getConstraintWithNewGeometry method is called for each constraint that is added to a TIN. So for applications which add many constraints to a TIN, this change should provide a welcome improvement to performance.

ISE: Internal failure, constraint not added

*) List of polygon constraints in meters. nominalPointSpacing = 10.0m
*) 1 constraint has 2 vertices, at indices 78 and 79, with a distance of 1e-9. They are correctly merged into a VertexMergerGroup when added as vertices.
*) during processConstraint, the VertexMergerGroup is not recognized for the vertices at indices 78 and 79.
*) processConstraint throws an ISE.
https://github.com/gwlucastrig/Tinfour/blob/master/src/main/java/tinfour/semivirtual/SemiVirtualIncrementalTin.java#L1979

Everything works correctly when I process the failing polygon constraint separately.

I still have to prepare a minimal test case.

IncrementalTIN: infinite loop in processConstraint

cvList.add(iSegment + 1, b);

The above line keeps adding the same two alternating vertices (Vertex@6135, Vertex@6131) to cvList and incrementing nSegments. More info / a sample will follow.

82 = {Vertex@6135} " 0: x=24.03662014933414, y=-55.7899709286193, z=-25.385998"
83 = {Vertex@6131} " 0: x=23.708504632432934, y=-55.53547969058978, z=-25.625874"
84 = {Vertex@6135} " 0: x=24.03662014933414, y=-55.7899709286193, z=-25.385998"
85 = {Vertex@6131} " 0: x=23.708504632432934, y=-55.53547969058978, z=-25.625874"
86 = {Vertex@6135} " 0: x=24.03662014933414, y=-55.7899709286193, z=-25.385998"
87 = {Vertex@6131} " 0: x=23.708504632432934, y=-55.53547969058978, z=-25.625874"

Perf: extend Vertex status with BIT_TIN_MEMBER

I suggest extending the Vertex status with a new flag, "TIN MEMBER". It would be set after inserting a Vertex into the TIN, and would be used in 'addWithInsertOrAppend' to skip calling 'walker.findAnEdgeFromEnclosingTriangleInternal', because it is known that the Vertex is already inserted.

I use Hilbert Sort to improve performance.

  1. construct Vertex List from all vertices including from Constraints
  2. sort vertices with Hilbert Sort
  3. add vertices to TIN
  4. add constraints to TIN
  • here we can skip inserting already known vertices
  • constraint vertices cannot be sorted with Hilbert Sort
class Vertex {
    public static final int BIT_TIN_MEMBER = 0x04;
    ...
    public boolean isTINMember() {
        return (status & BIT_TIN_MEMBER) != 0;
    }

    public void addStatus(int status) {
        this.status |= (byte) status;
    }
}

private boolean addWithInsertOrAppend(final Vertex v) {
    if (v.isTINMember()) {
        return true; // already inserted; skip the walker search
    }
    v.addStatus(Vertex.BIT_TIN_MEMBER);
    ...
}

replace
m.setStatus(Vertex.BIT_SYNTHETIC | Vertex.BIT_CONSTRAINT);
with
m.addStatus(Vertex.BIT_SYNTHETIC | Vertex.BIT_CONSTRAINT);

Improve Tinfour's treatment of binary jar files under Git

Currently, the Tinfour project includes binary jar files in its Git-based code tree. While this approach has important advantages, it is not consistent with the design philosophy of Git. In particular, Git maintains the versioning history of these jar files. While the versioning history of code is valuable, the versioning history of binaries produced from that code is worse than useless. Consequently, pull requests and storage requirements for local copies of the project have gotten to be much larger than they should be.

I propose to remove the binaries from the main code distribution and use the BFG-repo-cleaner utility to remove the corresponding artifacts from the history.

I will, however, be putting the appropriate binaries into the Release feature of Github.

The original design choice was motivated by a desire to support people who wanted to look at the code or run the TinfourViewer tool without making a big investment in time climbing the learning curve for software build tools like maven and gradle or IDE's like Netbeans and Eclipse (all of these things are wonderful, but only if you already happen to have them installed on your system and have already mastered the art of using them). I still wish to support this goal. Thus I will be bundling compiled jars in the Releases. I will also revise the instructions in the main project "readme" file accordingly.

Bug: Linear+Polygon constraint return invalid bounds

Method IConstraint::getBounds is not used in the project. But for both implementations (Linear + Polygon) it returns an invalid value: the bounds always cover the coordinate x=0, y=0.

private final Rectangle2D bounds = new Rectangle2D.Double();

  • x=0, y=0, w=0, h=0
  • Rectangle2D::add doesn't check whether the Rectangle2D was initialized with a valid coordinate.

Suggested fix:
public void add(Vertex v) {
    if (v.getX() == x && v.getY() == y) {
        return; // ignore duplicate points
    }
    v.setConstraintMember(true);
    x = v.getX();
    y = v.getY();
    if (list.isEmpty()) {
        bounds.setRect(v.getX(), v.getY(), 0.0, 0.0);
    } else {
        bounds.add(v.getX(), v.getY());
    }
    list.add(v);
}

Arithmetic error in orientation calculation in GeometricOperations

In the extended-precision arithmetic used for cases where a triangle has a very small magnitude area (is close to the degenerate case), the calculation has a coding error in the last term. This error can result in a significant violation of the Delaunay criterion when removing a point from the triangular mesh. The Natural Neighbor Interpolation, which depends on the Delaunay criterion, will potentially give errors of very large magnitude.

RFE: add support for writing constraints

There is already a ConstraintLoader class; a matching writer could look like this:

    public static void writeConstraintFile(final Path path, final List<IConstraint> list) throws IOException {
        try (BufferedWriter w = Files.newBufferedWriter(path)) {
            for (IConstraint constraint : list) {
                List<Vertex> vertices = constraint.getVertices();
                w.write(String.format(Locale.ENGLISH, "%d\r\n", vertices.size()));
                for (Vertex vertex : vertices) {
                    w.write(String.format(Locale.ENGLISH, "%s,%s,%s\r\n", vertex.x, vertex.y, vertex.getZ()));
                }
            }
        }
    }
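
A hypothetical usage, assuming the TIN exposes its constraints as a List via getConstraints():

    // write every constraint registered with the TIN (sketch)
    writeConstraintFile(Paths.get("constraints.txt"), tin.getConstraints());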

Add Lidar file access methods for Variable Length Records

When it opens a Lidar file, the LasFileReader class reads the headers for variable length records (VLR). It provides an access method to get a list of those records, but no access methods to get their content.

Please extend the class to correct this (rather silly) omission.

Note: I propose to provide a method that will return an array of bytes giving the content of the VLR. Currently, no handling is in place for the LAS file specification's "extended variable length record" format (EVLR). Because the content of an EVLR can be so large (potentially multiple gigabytes), reading it straight into memory is probably not the best solution. Also, I have no sample files that contain EVLRs for testing purposes. So extended variable length records will not be included in this issue.

RFE: use Thresholds instead of nominalPointSpacing

• StochasticLawsonsWalk
• Interpolators call IIncrementalTin::getNominalPointSpacing
...
It may be necessary to add getThresholds to IIncrementalTin.
The Thresholds class is immutable, so it can be shared across all instances.

Class TriangleCount
uses GeometricOperations with default Thresholds (nominalPointSpacing = 1.0). Maybe this is a bug.

How can I get each triangle?

IncrementalTin tin = new IncrementalTin(1.0);
List<Vertex> vertexList = loadPointSet(newlineList);
tin.add(vertexList, null);

TinRenderingUtility.drawTin(tin, 500, 500, new File("tin.png"));

I need the three coordinates of each triangle, not a picture. What should I do?
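
One possible answer, sketched under the assumption that your Tinfour version provides the org.tinfour.utils.TriangleCollector utility and the SimpleTriangle class (API names may differ in older versions):

    import org.tinfour.common.SimpleTriangle;
    import org.tinfour.utils.TriangleCollector;

    // visit each triangle of the TIN and print its three vertices
    TriangleCollector.visitSimpleTriangles(tin, (SimpleTriangle t) -> {
        System.out.println(t.getVertexA() + ", " + t.getVertexB() + ", " + t.getVertexC());
    });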

Investigation: custom Vertex storage

I want to investigate the possibility of introducing a custom storage format for Vertex lists.
My idea is to have Vertex storage that is accessed by integer index. It would have a performance impact but would decrease memory demand.
For example, my vertex data could use 2x int for XY and 1x short for the Z value, stored in one-dimensional arrays. No memory overhead for Vertex as a Java object.

No action required.

RFE: replace xxxPinwheel with one class

QuadEdgePinwheel and SemiVirtualPinwheel can be replaced with one common implementation. The only difference is that QuadEdgePinwheel unnecessarily uses the more specific QuadEdge fields instead of the IQuadEdge interface.

Should IntegrityCheck match restoreConformity?

Method restoreConformity was modified to use delaunayThreshold instead of 0.0 to check the condition for skipping conformity restoration, but IntegrityCheck was not modified. I think that IntegrityCheck should use delaunayThreshold too. This issue exists in both the Standard & SemiVirtual implementations.

SemiVirtualIncrementalTin

double h = geoOp.inCircle(a, b, c, d);

SemiVirtualIntegrityCheck

double h = geoOp.inCircle(a, b, c, d);
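
A sketch of the proposed alignment; the getDelaunayThreshold() accessor is an assumption about the Thresholds API:

    // proposed IntegrityCheck test (sketch): tolerate in-circle values that
    // fall within the same delaunayThreshold used by restoreConformity
    double h = geoOp.inCircle(a, b, c, d);
    if (h > thresholds.getDelaunayThreshold()) {
        // report a Delaunay violation
    }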

Natural Neighbor Interpolator fails at borders of problematic TIN

The Natural Neighbor interpolation can fail near the borders of a mesh when vertices are positioned problematically near the borders of the TIN. For example, the following test set was contributed by a user who was having problems with interpolation. It gives rise to a mesh which has very thin triangles located along three edges of the mesh. Those triangles lead to the computation of circumcircles with very large radius values. In these cases, numerical issues lead to incorrect interpolations.

index, x,y,z
0,  120.83333333333333, 398.6666666666667,  28.0
1,  204.16666666666666, 351.3333333333333,  23.0
2,  292.49999999999994, 398.66666666666663, 18.0
3,  375.8333333333333,  351.3333333333333,   4.0
4,  464.16666666666663, 398.66666666666663,  0.0
5,  204.16666666666666, 256.6666666666667,  12.0
6,  287.5,              209.33333333333331,  5.0
7,  375.8333333333333,  256.66666666666663,  9.0
8,  287.5,              114.66666666666666,  0.0

The following images show the layout of the vertices and the resulting interpolation. Vertices 2 and 5 are just inside the line segments that would be formed by Vertices 0,4 and Vertices 0,8. The resulting triangles 2,4,0 and 5,8,0 have very small areas and are almost degenerate. In the picture below, the triangles are so flattened that their three points appear to be lying on a single line. Numerical issues result.

NNI_Problem_Wireframe

Here are the color-coded results from interpolation. Numerical issues lead to erratic behavior.
NNI_Problem

Extra metadata flags not extracted for query from LAS Zip (LAZ) file

When the Tinfour Viewer reads a LAZ-formatted file (compressed LAS), it does not report GPS time, return status, and other elements that are not normally loaded from the file when the vertices are first read. This feature works for ordinary LAS files, but does not work for LAZ files because the tinfour-viewer code cannot randomly access a zipped file.

Until this shortcoming is fixed, the only way a user can inspect GPS time is by unzipping the LAZ file (using the laszip utility) and converting it to a standard LAS file.

SemiVirtualEdgePage::deallocateEdge should reset constraint

SemiVirtualEdgePage::deallocateEdge should reset the constraint. Otherwise, when an edge is reused, it can have invalid constraint flags/index. I don't have a test to reproduce this issue; maybe it cannot occur in the current algorithm. But it should still be fixed for correctness.

The following line is missing:
if (constraints != null) constraints[offset >> 1] = 0;

Stack overflow: IncrementalTin.floodFillConstrainedRegionsRecursion

I get a stack overflow while processing a "hipped roof". When line [1] is commented out, the code works without a stack overflow. Tomorrow I will do more investigation. Black is the polygon constraint; orange lines are linear constraints.

public static void main(String[] args) {
        double nominalPointSpacing = 0.01;
        IIncrementalTin tin = new IncrementalTin(nominalPointSpacing);
        List<IConstraint> constraintList = new ArrayList<>();

        Vertex v0 = new Vertex(-5.254566785879433, -10.120986399240792, Double.NaN);
        Vertex v1 = new Vertex(11.397855226881802, -0.2510828208178282, Double.NaN);
        Vertex v2 = new Vertex(5.243457765551284, 10.120986400172114, Double.NaN);
        Vertex v3 = new Vertex(-11.397855226648971, 0.2683861115947366, Double.NaN);

        PolygonConstraint polygon = new PolygonConstraint(v0, v1, v2, v3);
        constraintList.add(polygon);

        Vertex v4 = new Vertex(4.862010033102706, 2.8856616728007793, Double.NaN);
        Vertex v5 = new Vertex(-4.867160049267113, -2.8776928577572107, Double.NaN);

        constraintList.add(new LinearConstraint(v4, v5));
        constraintList.add(new LinearConstraint(v4, v2));
        constraintList.add(new LinearConstraint(v4, v1));
        constraintList.add(new LinearConstraint(v5, v0));
        constraintList.add(new LinearConstraint(v5, v3)); // [1] COMMENT THIS LINE

        constraintList.forEach(IPolyline::complete);

        tin.addConstraints(constraintList, false);
    }

ss-stack-overflow

SemiVirtualTIN: NPE - invalid reuse/load edge during construction

SemiVirtualTIN throws an NPE during TIN construction due to invalid modifying/loading of the search edge. The SearchEdge is always valid (vertices A != null and B != null). But in 7 of 21,000 TIN creations it throws an NPE because the search edge is loaded with an "outline" edge. I'm working on locating the problem.
So far I've replaced all edge reuses with the creation of a new instance. There is no exception anymore after the modification. The Standard TIN is correct.

Problem loading geographic coordinates from CSV file, constraints from Shapefile

The VertexLoader class has a bug when reading geographic coordinates from a text file (CSV, whitespace delimited, etc.). If the column header is labeled with latitude and longitude, the fields will be improperly recognized. The problem occurs at about line 600 where s.toUpperCase().startsWith("lat"). Clearly, this test can never be true. Please use toLowerCase instead.

In TinfourViewer, if the Vertex data is given in geographic coordinates, it will be scaled to a Cartesian (x/y) coordinate system so that the geometry is correctly represented (sort of a weak coordinate projection). Unfortunately, if the constraints are then loaded from a Shapefile also given in geographic coordinates, the ConstraintLoader does not perform a conversion. Also, the ModelFromText class doesn't override or implement the geo2xy() method, so it won't do the proper conversion either. Recommend fixing the ModelFromText class (see the ModelFromLas class for an example).

The code for addressing this issue is already part of the ConstraintLoader. See the following code:

ConstraintLoader conLoader = new ConstraintLoader();
if (vertexLoader.isSourceInGeographicCoordinates()) {
    conLoader.setGeographic(geoScaleX, geoScaleY, geoOffsetX, geoOffsetY);
}

NPE: processing polygon constraint from #15

This is a separate issue for the NPE. Sample code is attached to #15 - Internal failure, constraint not added.
Please remove/comment the data row with ID 26, as said in the comment.

Sorry that I don't provide a stack trace. I have a modified version with custom debug information, so line numbers will not match.

Implement Constrained Delaunay Triangulation

Many terrain modeling applications depend on constrained edges to represent "discontinuity" features such as rivers, roads, lakes, cliffs, and escarpments. There are a number of non-geophysical applications that would benefit from a Constrained Delaunay Triangulation as well.

Infinite/almost infinite adding vertices to constraint

New vertices are added without end to a PolygonConstraint in SemiVirtualIncrementalTin::processConstraint. As can be seen, there are only 2 repeating Vertex instances in this range.

19900 = {Vertex@8798} " 0: x=-673.6734854805982, y=922.2082379390486, z=211.81076"
19901 = {Vertex@8793} " 0: x=-673.673485453357, y=922.2080292212777, z=211.81073"
19902 = {Vertex@8798} " 0: x=-673.6734854805982, y=922.2082379390486, z=211.81076"
19903 = {Vertex@8793} " 0: x=-673.673485453357, y=922.2080292212777, z=211.81073"
19904 = {Vertex@8798} " 0: x=-673.6734854805982, y=922.2082379390486, z=211.81076"
19905 = {Vertex@8793} " 0: x=-673.673485453357, y=922.2080292212777, z=211.81073"
19906 = {Vertex@8798} " 0: x=-673.6734854805982, y=922.2082379390486, z=211.81076"

CRITICAL: Invalid TIN when adding vertices

An invalid TIN is created after adding the vertex with id=4. An edge is created between vertices 0 and 4 instead of between 3 and 4.
This issue is reproducible in both implementations. Test code is attached. The distance given is between the edge and the vertex.

!!! ERROR !!! Vertex 3: x=34.00562309702486, y=160.76445998375877, z=0.0 is near to edge: 18 21 <-- ( 4, 0) --> 13, distance=5.453774778801268E-15

ss-invalid-tin

test.zip

Missing some triangles

There are some missing triangles when I reconstruct triangles with the same method as used in countTriangles.
white areas - missing triangles
red - valid triangles
green line - polygon constraint
cyan - vertices from the TIN

image
I still have to create a minimal sample to reproduce this issue.

The method isPointInsideTin() is not thread safe

This issue identifies two shortcomings in the Tinfour library.

The first shortcoming has to do with the interface IIncrementalTin. IIncrementalTin defines a method called isPointInsideTin() that determines if a pair of coordinates is inside a TIN. It is not thread safe. The method should be deprecated and a new API should take its place.

An instance of IIncrementalTin (IncrementalTin or SemiVirtualIncrementalTin) operates in two distinct phases. The first phase, when an application builds the TIN (adding vertices and constraints), is necessarily a serial process. But once the TIN is complete, it can be accessed on a read-only basis by multiple threads without the need to worry about synchronization. This feature is very efficient. For example, the TinfourViewer demonstration application can expedite its interpolation of grids to support raster images by launching up to 4 threads at the same time.
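
For illustration, a minimal sketch of the two-phase pattern, assuming Tinfour's NaturalNeighborInterpolator; each thread gets its own interpolator over the shared, completed TIN:

    // phase 1: serial construction (vertexList prepared elsewhere)
    IncrementalTin tin = new IncrementalTin(1.0);
    tin.add(vertexList, null);

    // phase 2: read-only queries from multiple threads; one interpolator
    // instance per thread, no synchronization required
    Runnable task = () -> {
        NaturalNeighborInterpolator nni = new NaturalNeighborInterpolator(tin);
        double z = nni.interpolate(50.0, 50.0, null); // null: use vertex z values
        System.out.println("z = " + z);
    };
    new Thread(task).start();
    new Thread(task).start();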

However, isPointInsideTin needs to be replaced by a method in a separate class that will permit multi-threading. The logical candidate for this use is the neighbor-edge-locator implementations (INeighborEdgeLocator)

Which brings us to the second shortcoming. Neighbor Edge Locator is a terrible name for an API element. I'm not sure what I was thinking when I wrote it. I will be replacing it with something better. Perhaps IncrementalTinNavigator? I am looking for a name that basically describes what the class does. A name that says "I am the class you want to use for looking things up once you've constructed a TIN". It will include methods for the following operations (a sketch follows the list):

  1. Is a point specified by a coordinate pair inside the bounds of the Delaunay Triangulation?
  2. Get the nearest edge to a specified point
  3. Get the nearest vertex to a specified point
  4. Get a SimpleTriangle instance that contains a specified point
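
A hypothetical sketch of how such a class might look; the name and signatures are illustrative only, not a committed API:

    public interface IncrementalTinNavigator {
        boolean isPointInsideTin(double x, double y);             // operation 1
        IQuadEdge getNearestEdge(double x, double y);             // operation 2
        Vertex getNearestVertex(double x, double y);              // operation 3
        SimpleTriangle getContainingTriangle(double x, double y); // operation 4
    }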

Naming suggestions are welcome.
