
bezier's People

Contributors

andrzej3393, anntzer, behackl, brycestansfield, dependabot[bot], dhermes, kianmeng, kiki007, sharonahermes

bezier's Issues

Fallback to algebraic when geometric fails?

Consider quadratic curves given by the following points:

>>> import numpy as np
>>> 
>>> F = float.fromhex
>>> 
>>> nodes1 = np.asfortranarray([
...     [F('0x1.002f11833164ap-1'), F('-0x1.32718972d77a1p-1')],
...     [F('0x1.c516e980c0ce0p-2'), F('-0x1.5e002a95d165ep-1')],
...     [F('0x1.89092ee6e6df4p-2'), F('-0x1.8640e302433dfp-1')],
... ])
>>> nodes2 = np.asfortranarray([
...     [F('0x1.fbb8cfd966f05p-2'), F('-0x1.325bc2f4e012cp-1')],
...     [F('0x1.c6cd0e74ae3ecp-2'), F('-0x1.614b060a21ebap-1')],
...     [F('0x1.8c4a283f3c04dp-2'), F('-0x1.8192083f28f11p-1')],
... ])
>>>
>>>
>>> known_s1 = F('0x1.3c07c30226a3cp-2')
>>> known_t1 = F('0x1.30f6f2bdde113p-2')
>>>
>>> known_s2 = F('0x1.7d5b269eb3fecp-1')
>>> known_t2 = F('0x1.8700df9eac93bp-1')

Close-ish visual inspection shows two intersections (the Bézout bound for a pair of quadratics allows at most 4):

>>> import matplotlib.pyplot as plt
>>> import seaborn
>>> 
>>> import bezier
>>> 
>>> seaborn.set()
>>> 
>>> curve1 = bezier.Curve.from_nodes(nodes1, _copy=False)
>>> curve2 = bezier.Curve.from_nodes(nodes2, _copy=False)
>>> 
>>> ax = curve1.plot(256)
>>> _ = curve2.plot(256, ax=ax)
>>> _ = ax.axis('scaled')
>>> ax.figure.show()

[figure_1: the two quadratic curves plotted together]


However, geometric intersection fails due to too many pairs of subdivided curves that have overlapping bounding boxes:

>>> from bezier._intersection_helpers import all_intersections
>>> from bezier._intersection_helpers import IntersectionStrategy
>>> 
>>> candidates = [(curve1, curve2)]
>>> 
>>> intersections_geom = all_intersections(
...     candidates, strategy=IntersectionStrategy.geometric)
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File ".../bezier/_intersection_helpers.py", line 1501, in all_intersections
    return _all_intersections_geometric(candidates)
  File ".../bezier/_intersection_helpers.py", line 1419, in _all_intersections_geometric
    raise NotImplementedError(msg)
NotImplementedError: The number of candidate intersections is too high.
29 accepted pairs gives 116 candidate pairs.

but algebraic intersection has no problem finding the two intersections:

>>> intersections_alg = all_intersections(
...     candidates, strategy=IntersectionStrategy.algebraic)
>>> len(intersections_alg)
2
>>> (intersections_alg[0].s, intersections_alg[0].t)
(0.30862335873247104, 0.29781703266006965)
>>> (intersections_alg[0].s - known_s1) / np.spacing(known_s1)
-398.0
>>> (intersections_alg[0].t - known_t1) / np.spacing(known_t1)
-417.0
>>>
>>> (intersections_alg[1].s, intersections_alg[1].t)
(0.74483605086606985, 0.76367853938999475)
>>> (intersections_alg[1].s - known_s2) / np.spacing(known_s2)
20.0
>>> (intersections_alg[1].t - known_t2) / np.spacing(known_t2)
23.0

$ git log -1 --pretty=%H
fb863038bc6b3effaf8a401e51e69b18bbf4b0ae

Stop referring to "left" and "right" curves in an intersection

They are really just the first and second inputs; the concept of orientation is overloaded (it comes from the "left" and "right" entries in the pair (first, second)).

This gets even more confusing when the "interior" or "left" edge is used during Surface intersection.

Get better sense of code coverage for Fortran code

Related to #9

To provide test coverage for these extensions, we can use gcov. Per a blog post, the extension needs to be compiled with coverage hooks:

$ export CFLAGS="-coverage"
$ python setup.py build_ext --inplace

(This should produce .gcno files in the build/ subdirectories; running the tests will then produce .gcda files.) This how-to may be useful.

It may be possible to use gcovr to produce a report that can be combined with our other coverage reports.
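
As a rough sketch of the invocation (the gcovr flags here are assumptions taken from its documentation, not something tried in this repo yet):

$ pip install gcovr
$ gcovr --root . --print-summary --xml -o coverage.xml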

Fix false positive for `Curve.locate()`

Taking the self-crossing curve:

>>> import numpy as np
>>> import bezier
>>> curve = bezier.Curve.from_nodes(np.array([
...     [ 0.0 , 2.0  ],
...     [-1.0 , 0.0  ],
...     [ 1.0 , 1.0  ],
...     [-0.75, 1.625],
... ]))

[figure_1-1: the self-crossing cubic curve]

we get a false-positive for the self-intersection:

>>> point = np.array([[-0.25, 1.375]])
>>> curve.locate(point)
0.59999999999999998

In reality, the point is hit at two parameter values, equally spaced around s = 0.5:

>>> s1 = 0.5 - np.sqrt(5.0) / 6.0
>>> s2 = 0.5 + np.sqrt(5.0) / 6.0
>>> curve.evaluate(s1)
array([[-0.25 ,  1.375]])
>>> curve.evaluate(s2)
array([[-0.25 ,  1.375]])
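
As a sanity check that 0.6 really is spurious, here is the cubic evaluated at the reported parameter via a direct Bernstein sum (a minimal sketch, not library code):

s = 0.6
b = ((1 - s) ** 3, 3 * s * (1 - s) ** 2, 3 * s ** 2 * (1 - s), s ** 3)
x = b[0] * 0.0 + b[1] * -1.0 + b[2] * 1.0 + b[3] * -0.75
y = b[0] * 2.0 + b[1] * 0.0 + b[2] * 1.0 + b[3] * 1.625
print(x, y)  # approximately -0.018 0.911, nowhere near (-0.25, 1.375)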

Algebraic intersection fails curves 1 and 13 (ID: 10) in PyPy

We have the curves

>>> import numpy as np
>>>
>>> nodes1 = np.asfortranarray([
...     [0.0, 0.0],
...     [0.5, 1.0],
...     [1.0, 0.0],
... ])
>>> nodes13 = np.asfortranarray([
...     [0.0 ,  0.0],
...     [0.25,  2.0],
...     [0.5 , -2.0],
...     [0.75,  2.0],
...     [1.0 ,  0.0]
... ])

Visual inspection shows 4 intersections:

>>> import matplotlib.pyplot as plt
>>> import seaborn
>>>
>>> import bezier
>>>
>>> seaborn.set()
>>>
>>> curve1 = bezier.Curve.from_nodes(nodes1, _copy=False)
>>> curve13 = bezier.Curve.from_nodes(nodes13, _copy=False)
>>>
>>> ax = curve1.plot(256)
>>> _ = curve13.plot(256, ax=ax)
>>> _ = ax.axis('scaled')
>>> ax.figure.show()

[figure_1: curve 1 and curve 13, with 4 visible intersections]


When using CPython (2.7 here, but any version works), there are 4 intersections, as expected:

Python 2.7.13 (default, Jan  5 2017, 18:58:25) 
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from bezier._intersection_helpers import all_intersections
>>> from bezier._intersection_helpers import IntersectionStrategy
>>>
>>> candidates = [(curve1, curve13)]
>>>
>>> intersections = all_intersections(
...     candidates, strategy=IntersectionStrategy.algebraic)
>>> for intersection in intersections:
...     print((intersection.s, intersection.t))
... 
(0.0, 0.0)
(0.3110177634953864, 0.3110177634953864)
(0.68898223650461365, 0.68898223650461365)
(1.0, 1.0)

However, using PyPy, there are only 3:

Python 2.7.12 (aff251e543859ce4508159dd9f1a82a2f553de00, Nov 12 2016, 08:50:18)
[PyPy 5.6.0 with GCC 6.2.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>> # imports, set-up of curve1 and curve13, etc.
>>>> intersections = all_intersections(
....     candidates, strategy=IntersectionStrategy.algebraic)
>>>> for intersection in intersections:
....     print((intersection.s, intersection.t))
....
(0.0, 0.0)
(0.31101776349538629, 0.31101776349538635)
(1.0, 1.0)

$ git log -1 --pretty=%H
ccfd43f0b594188d26df0910fe1c46827048435c

Add support for arbitrary dimension in Surface.is_valid

Currently only 2D is supported.

Right now, we just take the 2x2 Jacobian J = J(s, t) and convert it into the determinant polynomial j(s, t) = det(J) and then make sure that polynomial always takes the same sign on the reference triangle. In the 2D case, if the surface is degree n, the components of J are degree n - 1 hence j(s, t) is degree 2(n - 1).

However, in order for a surface to be "valid", (I think) we only care that the Jacobian J has full rank throughout (the inverse function theorem doesn't really cover the non-square case; I'm drawing a blank right now). So if the dimension is d, then J is d x 2, and full rank means it'll have rank 2. Instead of checking the rank of J, we can apply our determinant trick to J^T J (which is 2 x 2). In this case, each component of J^T J is degree 2(n - 1), hence the degree of jj(s, t) = det(J^T J) is 4(n - 1).
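
As a quick symbolic check of the degree claim (a sketch using sympy and a made-up quadratic map into 3D; not the library's code):

import sympy

s, t = sympy.symbols('s, t')
# Hypothetical degree n = 2 surface in dimension d = 3.
J = sympy.Matrix([s**2 + t, s + t**2, s * t]).jacobian([s, t])  # 3 x 2
jj = (J.T * J).det()
print(sympy.Poly(sympy.expand(jj), s, t).total_degree())  # 4 == 4 * (n - 1)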

reverse evaluate

I just discovered this package the other day while making and debugging cubic-bezier animations from CSS. I'd like to request or propose a feature. Take this example.

You have an animation that runs for 1s and animates an element from 100 to 200 height, according to some curve. At which time exactly does the element reach 150 height? So basically: at which parameter along the curve does y take a certain value?

This can be figured out easily enough by calling evaluate on the curve at small intervals, with more than sufficient precision, but I'm wondering if a mathematically exact solution is possible.
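
For what it's worth, an exact answer is available for the height coordinate alone: y(t) is a cubic polynomial in t, so converting its Bernstein form to the power basis reduces the question to cubic root-finding. A minimal sketch (the control values and target here are made up):

import numpy as np

y0, y1, y2, y3 = 100.0, 100.0, 200.0, 200.0  # control values, 100 -> 200
target = 150.0

# Power-basis coefficients of y(t) - target, highest degree first.
coeffs = [
    -y0 + 3.0 * y1 - 3.0 * y2 + y3,
    3.0 * y0 - 6.0 * y1 + 3.0 * y2,
    -3.0 * y0 + 3.0 * y1,
    y0 - target,
]
roots = np.roots(coeffs)
real_roots = roots[np.abs(roots.imag) < 1e-12].real
# The parameter(s) in [0, 1] where y == 150 (here ~0.5):
print(real_roots[(0.0 <= real_roots) & (real_roots <= 1.0)])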

locate_point() uses base_x/base_y incorrectly

For example, consider:

>>> import numpy as np
>>> import bezier
>>>
>>> nodes = np.asfortranarray([
...     [0.5, 0.5],
...     [0.0, 0.5],
...     [0.5, 0.0],
... ])
>>> surface = bezier.Surface(
...     nodes,
...     degree=1,
...     base_x=0.5,
...     base_y=0.5,
...     width=-0.5,
...     _copy=False
... )
>>> point = surface.evaluate_cartesian(0.25, 0.5)
>>> point
array([[ 0.375,  0.25 ]])

At 953985e, putting a breakpoint at the line

s_approx, t_approx = _mean_centroid(candidates)

we have

(Pdb) s_approx, t_approx = _mean_centroid(candidates)
(Pdb) (s_approx, t_approx)
(0.375, 0.25)
(Pdb)
(Pdb) surface.evaluate_cartesian(0.375, 0.25)
array([[ 0.3125,  0.375 ]])
(Pdb) x_val
0.375
(Pdb) y_val
0.25

i.e. the answer is wrong for surface because locate_point() assumes the default base_x / base_y, which this surface doesn't use. But the wrong answer gets bailed out by Newton's method:

(Pdb) (s, t) = newton_refine(surface._nodes, surface._degree, x_val, y_val, s_approx, t_approx)
(Pdb) (s, t)
(0.25, 0.5)
(Pdb) surface.evaluate_cartesian(0.25, 0.5)
array([[ 0.375,  0.25 ]])
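
For reference, a hypothetical helper sketching the affine relation that locate_point() needs to respect (assuming the base/width semantics suggested by reprs elsewhere in this repo like <Surface (degree=2, dimension=1, base=(0.9375, 0), width=0.0625)>):

def sub_to_parent(s_sub, t_sub, base_x, base_y, width):
    # Map a parameter on a re-based sub-triangle back onto the original
    # reference triangle (hypothetical; mirrors start/end for curves).
    return (base_x + width * s_sub, base_y + width * t_sub)

With the defaults base_x = base_y = 0.0 and width = 1.0 this is the identity, which is why the incorrect usage goes unnoticed for surfaces that haven't been re-based.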

Speed up unit tests for subdivision checks

Running py.test with durations shows the culprits:

$ git log -1 --pretty=%H
61c8c4bee9c0b29b129c5d829e5aed262c453566
$ tox -e py27 -- --durations=6
...
1.45s call     tests/test_surface.py::TestSurface::test_subdivide_on_the_fly
0.99s call     tests/test_surface.py::TestSurface::test_subdivide_quartic_check_evaluate
0.30s call     tests/test_surface.py::TestSurface::test_subdivide_quadratic_check_evaluate
0.28s call     tests/test_surface.py::TestSurface::test_subdivide_cubic_check_evaluate
0.27s call     tests/test_surface.py::TestSurface::test_subdivide_line_check_evaluate
0.01s call     tests/test_surface.py::TestSurface::test__compute_valid_invalid_linear
...

Main Culprit

Surface.evaluate_multi() is slow, in particular when we use 561 points within the reference triangle.
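
For context, 561 is a triangular number (33 * 34 / 2 == 561), consistent with a parameter lattice having 33 points per side; a sketch of generating such a lattice (assuming that is indeed the layout the tests use):

import numpy as np

n = 33
params = [
    (i / (n - 1.0), j / (n - 1.0))
    for i in range(n)
    for j in range(n - i)
]
param_vals = np.asfortranarray(params)
print(param_vals.shape)  # (561, 2)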

Move coverage threshold back to 100%

Changed here: ccda2dc

The coverage is in a kind of funky state from 6626348 until the coverage drop.

So this issue is as much about adding unit test coverage for all of that code as it is about moving the threshold back to 100%.

Determine if possible to "lift" symbols from `libgfortran` into my object files

I don't think it is, but it'd be nice to do that so that libbezier.a could be "complete" without any dependencies.

As a much less appealing alternative, I could also go out of my way to avoid using the libgfortran "standard library" (not sure what to call it). Right now, very little of it is used:

$ git log -1 --pretty=%H
8078351ea7fe7ef72f035ecb5793351713d191cf
$
$ strings .nox/doctest/lib/python3.6/site-packages/bezier/lib/libbezier.a | \
> grep -i fort \
> | sort | uniq
_gfortran_internal_pack
_gfortran_internal_unpack
_gfortran_maxval_r8
_gfortran_minval_r8
GNU Fortran2008 5.4.0 20160609 -mtune=generic -march=x86-64 ...

For now, my "solution" to this is documenting the accidental transitive dependency (7779bee and 8078351).

Link to the "current" CircleCI build and Coveralls build when formatting README for PyPI

This has been done manually for the first four releases (only partially in some, since Coveralls was only added between 0.1.1 and 0.2.0). For example, in 0.2.0 we have the CircleCI badge pointing to:

https://circleci.com/gh/dhermes/bezier/231

and the Coveralls badge pointing to

https://coveralls.io/builds/9283484

The tag build on CircleCI can definitely retrieve the current build ID (via CIRCLE_BUILD_URL) and can probably capture the coveralls output as well, e.g. (from the linked build):

Coverage submitted!
Job #69.1
https://coveralls.io/jobs/21103603

(which is actually part of https://coveralls.io/builds/9283463, which came from the commit to master that was exactly the same as the 0.2.0 tag)


It's also worth noting that I hardcoded the badges. The first was added manually to this project since CircleCI doesn't host it statically anywhere.

Make Windows wheel building much more robust

Currently this is done on my Windows partition; I have no idea what I did in the past to get MinGW installed (or where). But I have built a wheel via:

python2.7 setup.py build --compiler=mingw32
python2.7 setup.py bdist_wheel 

I uploaded it to PyPI for 0.4.0, but this may have been a big mistake, since there was a warning about the ABI when running bdist_wheel.

Add "doctest lint"

This would be a helper that makes sure all doctest blocks are publicly exposed. Currently (as of 6f0420b) _2x2_det and _jacobian_det contain doctests that do not get run. (This will be fixed later today, i.e. they'll be added to algorithm-helpers.rst.)

Document (better) how tangent curves going opposite directions are classified

The classification is done here.

The four cases can be summarized succinctly:

[figure_1: the four tangent-intersection configurations]

========================================
Tangent Vector on Curve 1: [[-1.  0.]]
Curvature on Curve 1: 2.0
Center of Curvature: [ 1.5  -0.25]
Tangent Vector on Curve 2: [[ 1.  0.]]
Curvature on Curve 2: 2.0
Center of Curvature: [ 1.5   0.75]
========================================
Tangent Vector on Curve 1: [[ 1.  0.]]
Curvature on Curve 1: -2.0
Center of Curvature: [ 1.5  -0.25]
Tangent Vector on Curve 2: [[-1.  0.]]
Curvature on Curve 2: -2.0
Center of Curvature: [ 1.5   0.75]
========================================
Tangent Vector on Curve 1: [[-1.  0.]]
Curvature on Curve 1: 4.0
Center of Curvature: [ 1.5   0.25]
Tangent Vector on Curve 2: [[ 3.  0.]]
Curvature on Curve 2: -0.444444444444
Center of Curvature: [ 1.5  -1.75]
========================================
Tangent Vector on Curve 1: [[ 1.  0.]]
Curvature on Curve 1: -4.0
Center of Curvature: [ 1.5   0.25]
Tangent Vector on Curve 2: [[-3.  0.]]
Curvature on Curve 2: 0.444444444444
Center of Curvature: [ 1.5  -1.75]

Made via this gist


At

$ git log -1 --pretty=%H
09550abcf0efeb239b17f9871ccac93fcd1bcd4a

Remove all usage of monomial basis in bezier._algebraic_intersection

By re-phrasing all computations as eigenvalue problems / rank computations, the code will be

  • more stable (converting from Bernstein basis is unstable)
  • more "Fortran"-izable (the numpy polynomial procedures are a not-so-thin wrapper around e.g. dgeev)

This is actually related to #38, as the code in locate_point() does not generalize nicely off of [0, 1] due to the use of normalize_polynomial().
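
For reference, a tiny sketch of the eigenvalue re-phrasing: the roots of a monic polynomial are the eigenvalues of its companion matrix, which is essentially what numpy.roots sets up before handing off to LAPACK:

import numpy as np

# p(t) = t^2 - 3 t + 2 = (t - 1)(t - 2)
companion = np.array([
    [3.0, -2.0],
    [1.0,  0.0],
])
print(np.linalg.eigvals(companion))  # 2.0 and 1.0 (order may vary)

The goal would be to build the analogous matrices directly from Bernstein coefficients, skipping the unstable change of basis.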

Surface.is_valid / polynomial_sign() fails when Jacobian is 0 at a corner node

For example, the following invalid surface

>>> import numpy as np
>>> import bezier
>>> surface = bezier.Surface.from_nodes(np.array([
...     [0.0 , 0.0  ],
...     [0.5 , 0.5  ],
...     [1.0 , 0.625],
...     [0.0 , 0.5  ],
...     [0.5 , 0.5  ],
...     [0.25, 1.0  ],
... ]))
>>>
>>> import seaborn
>>> ax = surface.plot(256)
>>> _ = ax.axis('scaled')
>>> ax.figure.show()

[figure_1: the (invalid) quadratic surface]

This is given by

x(s, t) = (4 s + t^2) / 4
y(s, t) = -(3 s^2 + 8 s t - 8 s - 8 t) / 8
J(s, t) = det(d PHI) = x_s y_t - x_t y_s
        = (3 s t - 8 s + 4 t^2 - 4 t + 8) / 8
        = (16 L1^2 + 8 (2 L1 L2) + 0 L2^2 +
          12 (2 L1 L3) + 7 (2 L2 L3) + 16 L3^2) / 16

which attains the following extreme values (note the zero at the corner (s, t) = (1, 0)):

J(0, 0)   = 1
J(1, 0)   = 0
J(0, 1/2) = 7/8
J(0, 1)   = 1

However, our check fails to conclude it is invalid (fails to make any conclusion):

>>> surface.is_valid
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "bezier/surface.py", line 1004, in is_valid
    self._is_valid = self._compute_valid()
  File "bezier/surface.py", line 919, in _compute_valid
    poly_sign = _surface_helpers.polynomial_sign(jac_poly)
  File "bezier/_surface_helpers.py", line 298, in polynomial_sign
    _MAX_POLY_SUBDIVISIONS)
ValueError: ('Did not reach a conclusion after max subdivisions', 5)

After 5 iterations, the 0 is still around in just one of the subdivided surfaces (out of 4^5 = 1024):

ipdb> undecided
[<Surface (degree=2, dimension=1, base=(0.9375, 0), width=0.0625)>]
ipdb> undecided[0]._nodes
array([[ 0.0625    ],
       [ 0.03125   ],
       [ 0.        ],
       [ 0.05786133],
       [ 0.02734375],
       [ 0.05517578]])
ipdb> [v.hex() for v in undecided[0]._nodes.flatten()]
['0x1.0000000000000p-4',
 '0x1.0000000000000p-5',
 '0x0.0p+0',
 '0x1.da00000000000p-5',
 '0x1.c000000000000p-6',
 '0x1.c400000000000p-5']

Even after 30 iterations (by tweaking _surface_helpers._MAX_POLY_SUBDIVISIONS = 30) the 0 is still around (in just one out of 4^30 = 1,152,921,504,606,846,976 ~= 1.152922e+18):

ipdb> undecided
[<Surface (degree=2, dimension=1, base=(1, 0), width=1.86265e-09)>]
ipdb> undecided[0].base_x
0.9999999981373549
ipdb> undecided[0].base_x.hex()
'0x1.fffffff000000p-1'
ipdb> undecided[0].base_y
0.0
ipdb> undecided[0]._nodes
array([[  1.86264515e-09],
       [  9.31322575e-10],
       [  0.00000000e+00],
       [  1.74622983e-09],
       [  8.14907253e-10],
       [  1.62981451e-09]])
ipdb> [v.hex() for v in undecided[0]._nodes.flatten()]
['0x1.0000000000000p-29',
 '0x1.0000000000000p-30',
 '0x0.0p+0',
 '0x1.dffffffd00000p-30',
 '0x1.c000000000000p-31',
 '0x1.c000000200000p-30']

At

$ git log -1 --pretty=%H
feea7087f89554e12b803792ab3c653f4d8f5ab0

Detect bad parameterizations

E.g. with curves, this leads to a larger-than-expected loss of accuracy when doing intersections. As a (somewhat contrived) example encountered in the wild:

>>> import numpy as np
>>> import bezier
>>> nodes1 = np.asfortranarray([
...     [2.125, 2.8125],
...     [2.125, 2.9375],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=1, _copy=False)
>>> nodes2 = np.asfortranarray([
...     [2.1015625  , 2.875],
...     [2.16015625 , 2.875],
...     [2.220703125, 2.875],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2, _copy=False)
>>> from bezier import _intersection_helpers
>>> I, = _intersection_helpers.all_intersections([(curve1, curve2)])
>>> I.s.hex()
'0x1.0000000000020p-1'
>>> I.t.hex()
'0x1.983e62b67ad27p-3'
>>> # expected 4 sqrt(57) - 30, done in high precision
>>> expected_t = float.fromhex('0x1.983e62b67adeep-3')
>>> np.abs((expected_t - I.t) / np.spacing(expected_t))
199.0

This comes from the bad parameterization of curve2:

x(s) = (s^2 + 60 s + 1076) / 512
y(s) = 23 / 8
==>
f(x, y) = (23 - 8 y)^2 / 64
---
x(s   - 30) = 11 / 32 + s^2 / 512
x(s i - 30) = 11 / 32 - s^2 / 512

which should instead be

x(s) = (61 s + 1076) / 512
y(s) = 23 / 8
==>
f(x, y) = (23 - 8 y) / 8

[figure_1: curve1 and curve2]


Same root, but non-"bad" parameterization

However, moving the y-values so that curve2 is no longer badly parameterized doesn't fix things:

>>> nodes3 = np.asfortranarray([
...     [2.1015625  , 2.8125],
...     [2.16015625 , 2.875 ],
...     [2.220703125, 2.9375],
... ])
>>> curve3 = bezier.Curve(nodes3, degree=2, _copy=False)
>>> I2, = _intersection_helpers.all_intersections([(curve1, curve3)])
>>> I2.s.hex()
'0x1.983e62b67ada7p-3'
>>> I2.t.hex()
'0x1.983e62b67ad27p-3'
>>> np.abs((expected_t - I2.t) / np.spacing(expected_t))
199.0
>>> # Now the s-value is ALSO expected to be 4 sqrt(57) - 30
>>> np.abs((expected_t - I2.s) / np.spacing(expected_t))
71.0

[figure_1: curve1 and curve3]

NOTE: curve3 lies on the algebraic curve given by x = (256 y^2 + 480 y + 929) / 2048, so the quadratic parameterization is appropriate.


Newton's Method

What happens if we keep going with Newton's method on the "bad" intersection:

>>> st_values = [(I.s, I.t)]
>>> for _ in range(2):
...     st_values.append(
...         _intersection_helpers.newton_refine(
...             st_values[-1][0], nodes1, st_values[-1][1], nodes2))
...
>>> st_values
[(0.5000000000000036, 0.19933774108299326),
 (0.5, 0.19933774108300079),
 (0.5, 0.19933774108300079)]
>>> len(set(st_values))
2
>>> st_values[-1][0].hex()
'0x1.0000000000000p-1'
>>> st_values[-1][1].hex()
'0x1.983e62b67ae36p-3'
>>> np.abs((st_values[-1][1] - expected_t) / np.spacing(expected_t))
72.0

Newton's method terminates because there is a fairly large band where B1(s) = B2(t) numerically (with parameterizations):

x1(s) = 17 / 8 = 1088 / 512 = 2.125
x2(t) = (t^2 + 60 t + 1076) / 512
y1(s) = (2 s + 45) / 16
y2(t) = 23 / 8 = 46 / 16

This is due to the relative size mismatch in the coefficients of t^2 + 60 t + 1076. Centering a band of 401 ULPs around expected_t (200 on either side), we find that >= 135 of them evaluate x2(t) to exactly 2.125 using 5 different methods. Using the Bernstein basis and de Casteljau algorithm, 196 of them evaluate to 2.125. Using Horner's method, an entire contiguous band of 68 ULPs on either side of expected_t evaluate to exactly 2.125.

In particular:

>>> val = float.fromhex('0x1.983e62b67ae36p-3')
>>> de_cast00 = 269.0 / 128.0 * (1.0 - val) + 553.0 / 256.0 * val
>>> de_cast01 = 553.0 / 256.0 * (1.0 - val) + 1137.0 / 512.0 * val
>>> de_cast = de_cast00 * (1.0 - val) + de_cast01 * val
>>> de_cast
2.125
>>> de_cast.hex()
'0x1.1000000000000p+1'

And it is the same for the non-"bad" one:

>>> st_values = [(I2.s, I2.t)]
>>> for _ in range(2):
...     st_values.append(
...         _intersection_helpers.newton_refine(
...             st_values[-1][0], nodes1, st_values[-1][1], nodes3))
...
>>> st_values
[(0.19933774108299682, 0.19933774108299326),
 (0.19933774108300079, 0.19933774108300079),
 (0.19933774108300079, 0.19933774108300079)]
>>> len(set(st_values))
2
>>> st_values[-1][0].hex()
'0x1.983e62b67ae36p-3'
>>> st_values[-1][1].hex()
'0x1.983e62b67ae36p-3'
>>> np.abs((st_values[-1][1] - expected_t) / np.spacing(expected_t))
72.0

Since x3(t) = x2(t) and y3(t) = (2 t + 45) / 16 = y1(t), once s = t, we are subject to the same issues as in the intersection of curve1 and curve2.


"Exact" values

However, this may just be an issue with polynomials, or with Newton's method: even computing the roots of s^2 + 60 s - 12 via the quadratic formula yields a decent amount of error (unless cancellation is avoided):

>>> a, b, c = 1.0, 60.0, -12.0
>>> sq_discrim = np.sqrt(b * b - 4.0 * a * c)
>>> v1 = (-b + sq_discrim) / (2.0 * a)
>>> v1.hex()
'0x1.983e62b67ae00p-3'
>>> np.abs((expected_t - v1) / np.spacing(expected_t))
18.0
>>> v2 = (2.0 * c) / (-b - sq_discrim)
>>> v2.hex()
'0x1.983e62b67adeep-3'
>>> np.abs((expected_t - v2) / np.spacing(expected_t))
0.0
>>> _, v3 = np.sort(np.roots([a, b, c]))
>>> v3.hex()
'0x1.983e62b67adeep-3'
>>> np.abs((expected_t - v3) / np.spacing(expected_t))
0.0
>>> _, v4 = np.sort(np.polynomial.polynomial.polyroots([c, b, a]))
>>> v4.hex()
'0x1.983e62b67ae00p-3'
>>> np.abs((expected_t - v4) / np.spacing(expected_t))
18.0

Algebraic

Though the algebraic approach doesn't (necessarily) suffer from the same issue:

>>> from bezier import curve
>>> ALGEBRAIC = curve.IntersectionStrategy.algebraic
>>> I3, = _intersection_helpers.all_intersections(
...     [(curve1, curve2)], strategy=ALGEBRAIC)
>>> I3.s.hex()
'0x1.0000000000000p-1'
>>> I3.t.hex()
'0x1.983e62b67ae00p-3'
>>> np.abs((expected_t - I3.t) / np.spacing(expected_t))
18.0
>>>
>>> I4, = _intersection_helpers.all_intersections(
...     [(curve1, curve3)], strategy=ALGEBRAIC)
>>> I4.s.hex()
'0x1.983e62b67ae00p-3'
>>> I4.t.hex()
'0x1.983e62b67ae00p-3'
>>> I4.s == I4.t == I3.t
True

Determine if it's more stable to use de Casteljau or to form Bernstein basis

To be more explicit, the question is between one of two approaches (a sketch of both follows below):

  • Bernstein basis: First compute A = (1 - s)^2, B = 2 * s * (1 - s), C = s^2 and then use them to compute A * v0 + B * v1 + C * v2
  • de Casteljau: First compute intermediates A = (1 - s) * v0 + s * v1, B = (1 - s) * v1 + s * v2 and then combine them (1 - s) * A + s * B

Relevant changeset: 8ec6fdf
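
A minimal sketch of the two candidates for a quadratic in one coordinate (generic values, not tied to the changeset):

def bernstein_basis(v0, v1, v2, s):
    # Form the basis values first, then combine with the nodes.
    a, b, c = (1.0 - s) ** 2, 2.0 * s * (1.0 - s), s * s
    return a * v0 + b * v1 + c * v2

def de_casteljau(v0, v1, v2, s):
    # Form the intermediate nodes first, then combine those.
    a = (1.0 - s) * v0 + s * v1
    b = (1.0 - s) * v1 + s * v2
    return (1.0 - s) * a + s * b

# Identical in exact arithmetic; the rounding behavior is the question.
print(bernstein_basis(1.0, -2.0, 4.0, 0.75))  # 1.5625
print(de_casteljau(1.0, -2.0, 4.0, 0.75))  # 1.5625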

Makefile should work for all Python versions

Currently it just defaults to the current Python (and I have only verified it works on Linux with Python 2.7)

The issue was the `mv _speedup.so` step, not version pinning in the Makefile. (I had an issue on my machine where pyenv puts a gfortran shim ahead of /usr/bin/gfortran on the PATH, which somehow broke in my Python 3.6.1 pyenv environment but not in my Python 2.7.13 environment.)

Add option to switch between development and production compiler flags

This will probably be easiest to add via an environment variable (see c64a97a; a CLI flag is problematic because pip / setup.py uses the flags for so many other things).

See dev flags and prod flags on fortran90.org.

        gfortran dev | prod                 | numpy.distutils (dflt)
---------------------+----------------------+------------------------
-Wall                | -Wall                | -Wall
-Wextra              | -Wextra              |
-Wimplicit-interface | -Wimplicit-interface |
-fPIC                | -fPIC                | -fPIC
                     | -Werror              |
-fmax-errors=1       | -fmax-errors=1       |
-g                   |                      | -g
-fcheck=all          |                      |
-fbacktrace          |                      |
                     | -O3                  | -O3
                     | -march=native        |
                     | -ffast-math          |
                     | -funroll-loops       | -funroll-loops
                     |                      | -fno-second-underscore

           ifort dev | prod                 | numpy.distutils (dflt)
---------------------+----------------------+------------------------
-warn all            | -warn all            |
-check all           |                      |
                     | -fast                |

According to the CircleCI commands journal, we are coming close to production by using the defaults:

# YES
-Wall  # warn all
-fPIC  # Position-independent code
-O3  # Optimization level 3
-funroll-loops  # Unroll loops

# NO
-Wextra  # extra warning flags that are not enabled by -Wall
-Wimplicit-interface  # Warn if a procedure is called without an explicit interface
-Werror  # Make all warnings into errors.
-fmax-errors=1  # Stop compilation at first error
-march=native  # Selects the CPU to tune for at compilation time by
               # determining the processor type of the compiling machine. 
               # (-march=generic makes more sense for Wheels, but maybe OS X 
               # wheels enough can be assumed about the hardware?)
-ffast-math  # "the least conforming but fastest math mode"

# ALSO PRESENT (i.e. `numpy.distutils` does, but Certik doesn't)
-ffixed-form  # Just for F77 "strict / fixed" source code layout
-fno-second-underscore  # Do **less** name mangling for symbols

# PRESENT BUT SHOULDN'T BE / DOESN'T NEED TO BE / WON'T BE USEFUL
-g  # Produce debug symbols

Also to describe the debug flags:

-fcheck=all  # Enable run-time checks of **all** supported checks (e.g. bounds checking)
-fbacktrace  # Make sure to generate backtrace for serious failures

There is also some discussion about -march=native.

It is probably worth adding -pedantic and -fimplicit-none for debug mode.

Consider manually specifying -std=f2008. (I know I am using features not present in the Fortran 90 standard, but I'm not sure whether those features showed up in the 1995, 2003 or 2008 standard.)
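
A bare-bones sketch of the env. var. switch inside setup.py (the variable name BEZIER_DEBUG and the helper are hypothetical; the flag lists come from the tables above):

import os

_DEBUG_FLAGS = [
    '-Wall', '-Wextra', '-Wimplicit-interface', '-fPIC',
    '-fmax-errors=1', '-g', '-fcheck=all', '-fbacktrace',
]
_PROD_FLAGS = [
    '-Wall', '-Wextra', '-Wimplicit-interface', '-fPIC',
    '-Werror', '-fmax-errors=1', '-O3', '-march=native',
    '-ffast-math', '-funroll-loops',
]

def gfortran_flags():
    # Opt in to the development flags only when BEZIER_DEBUG is set.
    if 'BEZIER_DEBUG' in os.environ:
        return _DEBUG_FLAGS
    return _PROD_FLAGS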

Geometric Intersection can introduce "spurious" curve-curve intersections

Consider quadratic curves given by the following points (and with a known intersection computed in "300-bit" precision with mpmath):

>>> import numpy as np
>>> 
>>> F = float.fromhex
>>>
>>> nodes1 = np.asfortranarray([
...     [F('0x1.4345fd5589f51p-1'), F('-0x1.412e9c2f15ae4p-1')],
...     [F('0x1.63df4780c5c32p-1'), F('-0x1.42bb000d3d300p-1')],
...     [F('0x1.848246bdfdd34p-1'), F('-0x1.4291e7f549c50p-1')],
... ])
>>> nodes2 = np.asfortranarray([
...     [F('0x1.75d757fd8e1f0p-1'), F('-0x1.05bd352fd9edep-1')],
...     [F('0x1.67bbe834a07ecp-1'), F('-0x1.29e8bb3154d06p-1')],
...     [F('0x1.5a50bad050cccp-1'), F('-0x1.4da2aadf14a2ap-1')],
... ])
>>> 
>>> 
>>> known_s = F('0x1.ad944e4aa3a19p-2')
>>> known_t = F('0x1.ae000252ee9cdp-1')

Visual inspection shows only one intersection (and since the curves are quadratics, there can't be an "undetectable" / "microscopic" change of direction):

>>> import matplotlib.pyplot as plt
>>> import seaborn
>>>
>>> import bezier
>>>
>>> seaborn.set()
>>>
>>> curve1 = bezier.Curve.from_nodes(nodes1, _copy=False)
>>> curve2 = bezier.Curve.from_nodes(nodes2, _copy=False)
>>>
>>> ax = curve1.plot(256)
>>> _ = curve2.plot(256, ax=ax)
>>> _ = ax.axis('scaled')
>>> ax.figure.show()

[figure_1: the two curves, with a single visible intersection]


When intersecting the curves geometrically, we get a "phantom":

>>> from bezier._intersection_helpers import all_intersections
>>> from bezier._intersection_helpers import IntersectionStrategy
>>>
>>> candidates = [(curve1, curve2)]
>>>
>>> intersections_geom = all_intersections(
...     candidates, strategy=IntersectionStrategy.geometric)
>>>
>>> len(intersections_geom)
2
>>> (intersections_geom[0].s - known_s) / np.spacing(known_s)
15.0
>>> (intersections_geom[0].t - known_t) / np.spacing(known_t)
3.0
>>> (intersections_geom[1].s - known_s) / np.spacing(known_s)
7.0
>>> (intersections_geom[1].t - known_t) / np.spacing(known_t)
4.0

Both solutions are only a few ULPs from the "best" approximation in IEEE-754 and both are on the same side of that "best approximation".


When intersecting the curves algebraically, there is no issue:

>>> intersections_alg = all_intersections(
...     candidates, strategy=IntersectionStrategy.algebraic)
>>>
>>> len(intersections_alg)
1
>>> (intersections_alg[0].s - known_s) / np.spacing(known_s)
-3.0
>>> (intersections_alg[0].t - known_t) / np.spacing(known_t)
-1.0

$ git log -1 --pretty=%H
fb863038bc6b3effaf8a401e51e69b18bbf4b0ae

Curve-curve intersection fails on "almost" degree-elevated lines

>>> nodes1 = [
...     ['-0x1.0000000000000p+0', '0x1.b6db6db6db6d8p-2'],
...     ['-0x1.0000000000000p+0', '0x1.2492492492490p-2'],
...     ['-0x1.0000000000000p+0', '0x1.2492492492490p-3'],
... ]
>>> nodes2 = [
...     ['-0x1.1000000000000p+0', '0x1.d249249249246p-2'],
...     ['-0x1.1000000000000p+0', '0x1.36db6db6db6d9p-2'],
...     ['-0x1.1000000000000p+0', '0x1.36db6db6db6d9p-3'],
... ]
>>> F = float.fromhex
>>> nodes1 = [[F(val) for val in row] for row in nodes1]
>>> nodes2 = [[F(val) for val in row] for row in nodes2]
>>> import numpy as np
>>> nodes1 = np.asfortranarray(nodes1)
>>> nodes2 = np.asfortranarray(nodes2)
>>> import bezier
>>> C1 = bezier.Curve(nodes1, degree=2, _copy=False)
>>> C2 = bezier.Curve(nodes2, degree=2, _copy=False)
>>> import seaborn
>>> ax = C1.plot(256)
>>> _ = C2.plot(256, ax=ax)
>>> _ = ax.axis('scaled')
>>> _ = ax.set_xlim(-1.125, -0.875)
>>> ax.figure.show()
>>> C1.intersect(C2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "bezier/curve.py", line 583, in intersect
    intersections = _intersection_helpers.all_intersections(candidates)
  File "bezier/_intersection_helpers.py", line 1433, in all_intersections
    accepted = intersect_one_round(candidates, intersections)
  File "bezier/_intersection_helpers.py", line 1353, in intersect_one_round
    from_linearized(first, second, intersections)
  File "bezier/_intersection_helpers.py", line 1119, in from_linearized
    second.end_node, orig_second._nodes)
NotImplementedError: Line segments parallel.

[figure_1: the two (numerically almost linear) quadratic curves]

The issue is that the curves are actually lines, though not exactly so in floating point:

>>> from bezier._intersection_helpers import linearization_error
>>> linearization_error(nodes1)
0.0
>>> linearization_error(nodes2)
6.938893903907228e-18

However, if we "project" onto the space of degree-elevated lines (inside the space of quadratics), we see:

>>> projection = np.asfortranarray([
...     [ 2.5, 1.0, -0.5],
...     [ 1.0, 1.0,  1.0],
...     [-0.5, 1.0,  2.5],
... ])
>>> projected1 = projection.dot(nodes1) / 3
>>> projected2 = projection.dot(nodes2) / 3
>>> projected1 - nodes1
array([[ 0.,  0.],
       [ 0.,  0.],
       [ 0.,  0.]])
>>> projected2 - nodes2
array([[  0.00000000e+00,  -5.55111512e-17],
       [  0.00000000e+00,   0.00000000e+00],
       [  0.00000000e+00,   0.00000000e+00]])
>>> np.linalg.norm(projected2 - nodes2, ord='fro') / np.linalg.norm(nodes2, ord='fro')
2.8822815212580453e-17

At

$ git log -1 --pretty=%H
09550abcf0efeb239b17f9871ccac93fcd1bcd4a

Make 0.5.0 release

I am making a formal issue here so that I can document how the process went.

Regression: using **more precise** Chebyshev nodes causes failure in curve-curve intersection 10

At

$ git log -1 --pretty=%H
73f909805e971221cb7976cf69603b82f31a4a32

we have the failure

>>> import numpy as np
>>> 
>>> from bezier import _curve_helpers
>>> from bezier._implicitization import _check_non_simple
>>> from bezier._implicitization import locate_point
>>> from bezier._implicitization import normalize_polynomial
>>> from bezier._implicitization import roots_in_unit_interval
>>> from bezier._implicitization import to_power_basis
>>> 
>>> 
>>> nodes1 = np.asfortranarray([  # Curve 1
...     [0.0, 0.0],
...     [0.5, 1.0],
...     [1.0, 0.0],
... ])
>>> nodes2 = np.asfortranarray([  # Curve 13
...     [0.0 ,  0.0],
...     [0.25,  2.0],
...     [0.5 , -2.0],
...     [0.75,  2.0],
...     [1.0 ,  0.0],
... ])
>>> assert nodes1 is _curve_helpers.full_reduce(nodes1)
>>> assert nodes2 is _curve_helpers.full_reduce(nodes2)
>>> assert nodes1.ndim == nodes2.ndim == 2
>>> assert nodes1.shape[0] < nodes2.shape[0]
>>> assert nodes1.shape[1] == nodes2.shape[1] == 2
>>> 
>>> 
>>> coeffs = normalize_polynomial(to_power_basis(nodes1, nodes2))
>>> assert not np.all(coeffs == 0.0)
>>> _check_non_simple(coeffs)
>>> t_vals = roots_in_unit_interval(coeffs)
>>> 
>>> s_vals = []
>>> for t_val in t_vals:
...     (x_val, y_val), = _curve_helpers.evaluate_multi(
...         nodes2, np.asfortranarray([t_val]))
...     s_vals.append(locate_point(nodes1, x_val, y_val))
... 
>>> t_vals
array([  8.17017406e-16,   3.11017763e-01,   6.88982237e-01,
         1.00000000e+00])
>>> s_vals
[8.1701740628022657e-16, 0.3110177634952106, None, None]

whereas previously (at 5fa4ab2) we had

>>> t_vals
array([ -5.82714096e-15,   3.11017763e-01,   6.88982237e-01,
         1.00000000e+00])
>>> s_vals
[-5.8271409569311434e-15, 0.31101776349520172, 0.68898223650486901, 0.99999999999994127]

Curve intersection fails when non-default start / end are used

For example:

>>> import numpy as np
>>> import bezier
>>> from bezier._intersection_helpers import all_intersections
>>>
>>> nodes15 = np.asfortranarray([
...     [0.25 , 0.625],
...     [0.625, 0.25 ],
...     [1.0  , 1.0  ],
... ])
>>> nodes25 = np.asfortranarray([
...     [0.0 , 0.5],
...     [0.25, 1.0],
...     [0.75, 1.5],
...     [1.0 , 0.5],
... ])
>>>
>>> curve15a = bezier.Curve(nodes15, degree=2)
>>> curve25a = bezier.Curve(nodes25, degree=3)
>>> intersection, = all_intersections([(curve15a, curve25a)])
>>> intersection.s
0.8587897065534015
>>> intersection.t
0.8734528541508781
>>> intersection.get_point()
array([[ 0.89409228,  0.81061745]])

but if we just use non-default start and end this fails:

>>> curve15b = bezier.Curve(nodes15, degree=2, start=0.125, end=0.75)
>>> curve25b = bezier.Curve(nodes25, degree=3, start=0.25 , end=2.0)
>>> all_intersections([(curve15b, curve25b)])
Traceback (most recent call last):
...
ValueError: outside of unit interval

even though the answer should be exactly the same:

>>> curve15b.evaluate(intersection.s)
array([[ 0.89409228,  0.81061745]])
>>> curve25b.evaluate(intersection.t)
array([[ 0.89409228,  0.81061745]])

At 8271bf8

Avoid copying arrays when passing to Fortran (make sure `f_contiguous` is true)

From the f2py docs:

When a Numpy array, that is Fortran contiguous and has a dtype corresponding to presumed Fortran type, is used as an input array argument, then its C pointer is directly passed to Fortran.

Otherwise F2PY makes a contiguous copy (with a proper dtype) of the input array and passes C pointer of the copy to Fortran subroutine. As a result, any possible changes to the (copy of) input array have no effect to the original argument, as demonstrated below:
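
A quick sketch of checking (and ensuring) Fortran order before crossing the f2py boundary:

import numpy as np

nodes = np.array([[0.0, 0.0], [0.5, 1.0], [1.0, 0.0]])
print(nodes.flags.f_contiguous)  # False: C-ordered, so f2py would copy

nodes_f = np.asfortranarray(nodes)
print(nodes_f.flags.f_contiguous)  # True: the pointer is passed directly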

Add PyPy support to CircleCI

Remnants of failed attempt:
https://github.com/dhermes/bezier/blob/a6b30013cccf8998e5b62e90a74db9f039e1b7a9/tox.ini
https://github.com/dhermes/bezier/blob/a6b30013cccf8998e5b62e90a74db9f039e1b7a9/circle.yml

Essentially, the NumPy / SciPy / Matplotlib story wasn't really developed until later versions of PyPy (4.1 and >= 5.0):


It's worth noting that on my local install:

$ pypy --version
Python 2.7.10 (5.3.1+dfsg-1~ppa1~ubuntu14.04, Jun 19 2016, 15:09:54)
[PyPy 5.3.1 with GCC 4.8.4]

running the tests with PyPy is comically slow:

$ tox -e functional
...
======= 57 passed in 1.12 seconds =======
...
$ tox -e functional-pypy
...
======= 57 passed in 6.19 seconds =======
...
$ tox -e py27
...
======= 204 passed in 3.72 seconds =======
...
$ tox -e pypy
...
======= 203 passed, 1 skipped in 28.25 seconds =======
...

It's common to have a requirements.txt file that uses environment markers as follows:

numpy; platform_python_implementation != "PyPy"
git+https://bitbucket.org/pypy/numpy.git; platform_python_implementation == "PyPy"

Surface-surface intersection fails on "almost" parallel lines

>>> import numpy as np
>>> nodes1 = np.asfortranarray([
...     [1.625 , 1.1875 ],
...     [1.5   , 1.03125],
...     [1.375 , 0.875  ],
...     [1.6875, 1.0    ],
...     [1.5625, 0.84375],
...     [1.75  , 0.8125 ],
... ])
>>> nodes2 = np.asfortranarray([
...     [1.25     , 0.75     ],
...     [1.34375  , 0.78125  ],
...     [1.4375   , 0.8125   ],
...     [1.375    , 0.875    ],
...     [1.4609375, 0.9140625],
...     [1.484375 , 1.015625 ],
... ])
>>> import bezier
>>> S1 = bezier.Surface(nodes1, degree=2, _copy=False)
>>> S2 = bezier.Surface(nodes2, degree=2, _copy=False)
>>> import seaborn
>>> ax = S1.plot(256)
>>> _ = S2.plot(256, ax=ax)
>>> _ = ax.axis('scaled')
>>> ax.figure.show()
>>> S1.intersect(S2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "bezier/surface.py", line 1144, in intersect
    edge_pairs)
  File "bezier/_intersection_helpers.py", line 1437, in all_intersections
    raise NotImplementedError(msg)
NotImplementedError: The number of candidate intersections is too high.
17 accepted pairs gives 68 candidate pairs.

[figure_1: the two quadratic surfaces]

This actually comes from a single edge pair. (I originally assumed the issue was that the bound of 16 for two curves was overly restrictive for multiple curves; relaxing it to 20 allows this to pass, BTW.)

>>> for i, edge1 in enumerate(S1.edges):
...     for j, edge2 in enumerate(S2.edges):
...         try:
...             I = edge1.intersect(edge2)
...             print('{}, {} success'.format(i, j))
...         except NotImplementedError:
...             print('{}, {} failure'.format(i, j))
... 
0, 0 success
0, 1 success
0, 2 failure
1, 0 success
1, 1 success
1, 2 success
2, 0 success
2, 1 success
2, 2 success

So

>>> import numpy as np
>>> nodes1 = np.asfortranarray([
...     [1.625 , 1.1875 ],
...     [1.5   , 1.03125],
...     [1.375 , 0.875  ],
... ])
>>> nodes2 = np.asfortranarray([
...     [1.484375, 1.015625],
...     [1.375   , 0.875   ],
...     [1.25    , 0.75    ],
... ])
>>> import bezier
>>> C1 = bezier.Curve(nodes1, degree=2, _copy=False)
>>> C2 = bezier.Curve(nodes2, degree=2, _copy=False)
>>> import seaborn
>>> ax = C1.plot(256)
>>> _ = C2.plot(256, ax=ax)
>>> _ = ax.axis('scaled')
>>> ax.figure.show()
>>> C1.intersect(C2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "bezier/curve.py", line 583, in intersect
    intersections = _intersection_helpers.all_intersections(candidates)
  File "bezier/_intersection_helpers.py", line 1437, in all_intersections
    raise NotImplementedError(msg)
NotImplementedError: The number of candidate intersections is too high.
17 accepted pairs gives 68 candidate pairs.

and the crazy thing is, they never intersect!

[figure_1: the two edge curves, which never intersect]

Via implicitization, this is straightforward to dismiss:

Computed f(t) = f1(x2(t), y2(t)) =
    (-0.0009765625) * t^0 + (0.001953125) * t^1 + (-0.0087890625) * t^2
======
Resultant(f, f') = -5.78476 (computed with L2-normalized ||f|| = 1 on [0, 1])
Roots:

This f(t) is just 9 t^2 - 2 t + 1 divided by -1024. The unscaled version has discriminant equal to (-2)^2 - 4(9)(1) = -32, so it has no real roots.
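
A quick numerical confirmation that the roots form a complex-conjugate pair:

import numpy as np

print(np.roots([9.0, -2.0, 1.0]))  # approximately 0.1111 +/- 0.3143j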

>>> from bezier._intersection_helpers import IntersectionStrategy
>>> I, = S1.intersect(S2, strategy=IntersectionStrategy.algebraic)
>>> I
<CurvedPolygon (num_sides=3)>
>>>
>>> ax = S1.plot(256)
>>> _ = S2.plot(256, ax=ax)
>>> _ = I.plot(256, ax=ax)
>>> _ = ax.axis('scaled')
>>> ax.figure.show()

[figure_1: the surfaces with the computed intersection]


$ git log -1 --pretty=%H
09550abcf0efeb239b17f9871ccac93fcd1bcd4a
$ # UPDATED: Still b0rken
$ git log -1 --pretty=%H
7a86d904b2e9b48decbd5d6b3fd2340d492b3947

locate_point() uses start/end incorrectly

This is identical to #42 but for curves instead of surfaces.

The failing test added in 9308b0e showcases the problem. Adding an equivalent breakpoint at

s_approx = np.mean(params)

we see

(Pdb) s_approx
0.625
(Pdb) newton_refine(curve._nodes, curve._degree, point, s_approx)
0.51249999999999996

Note that 0.625 corresponds to the midpoint of the shifted interval: 0.5 * (0.25 + 1.0) == 0.625
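
For reference, the affine re-parameterization that the fix needs to apply (hypothetical helper names):

def specialized_to_unit(s_spec, start, end):
    # Map a parameter from the tracked [start, end] interval back onto
    # the curve's own unit interval.
    return (s_spec - start) / (end - start)

def unit_to_specialized(s_unit, start, end):
    return start + s_unit * (end - start)

print(unit_to_specialized(0.5, 0.25, 1.0))  # 0.625, the observed midpoint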
