
apd's People

Contributors

dhui, josharian, kardianos, maddyblue, miretskiy, nvanbenschoten, otan, raduberinde


apd's Issues

please drop github.com/pkg/errors

Now that the standard library's errors package supports wrapping errors in a... standard way :), there shouldn't be a need to use pkg/errors anymore.

This has the advantage that the library will have zero dependencies, which is nice!

I can try to send a patch, though I don't know what formatting style the errors are meant to follow in this project, so it might be easiest for one of the maintainers or developers to do it directly. Calls to New and Errorf should be the same, and calls to Wrap can be replaced with Errorf("some message: %w", err).

Inconsistent negative flag behavior

func TestNegFlag(t *testing.T) {
	expected := apd.New(-100, 0)
	actual := apd.NewWithBigInt(big.NewInt(-100), 0)
	if expected.Negative != actual.Negative {
		t.Errorf("Expected %v but got %v", expected.Negative, actual.Negative)
	}
}
=== RUN   TestNegFlag
--- FAIL: TestNegFlag (0.00s)
    builder_test.go:19: Expected true but got false

Context.Rem fails for Rem(100E100, 10)

I attempted to run the following test:

func TestRem(t *testing.T) {
	a := apd.New(100, 100)
	b := apd.New(10, 1)
	var rem apd.Decimal
	_, err := apd.BaseContext.Rem(&rem, a, b)
	if err != nil {
		t.Fatal(err)
	}

	if !rem.IsZero() {
		t.Fatal("expected zero")
	}
}

I would expect this test to pass, but it fails:

=== RUN   TestRem
    rem_test.go:72: division impossible
--- FAIL: TestRem (0.00s)

FAIL

apd.BaseContext.Rem returns division impossible, which (I believe) is an incorrect result in this case.

How to round up to fixed number of decimal digits?

Thanks for this great library. I am trying to use it in financial calculations. I do all my intermediate calculations with high precision, but I want to store the final result with 2 decimal digits (RoundHalfUp if needed). How do I do that?

d1 := apd.New(5, 0)
d2 := apd.New(3, 0)
r := apd.New(0, 0)
c := apd.BaseContext.WithPrecision(30)
res, err := c.Quo(r, d1, d2)

Now, I want to get 1.67 as the final answer from r. How do I do that?

clarify .Text 'g' docs

The 'g' docs say it uses 'f' for small exponents, and the 'f' docs say it can sometimes display zeros that are not present, but 'g' is smart and doesn't do that. The docs for 'g' should be clarified.

Is this maintained?

I tried posting an issue a while ago (#98) and haven't heard anything. It seems like apd is still used in cockroachdb in production, which is a good sign but this repo itself doesn't look like it's had a ton of activity recently. So I'm wondering if this has a maintainer assigned, if we can expect anyone to look at issues or potential PRs, etc. We are considering using this library in a critical production code base and if there is no maintainer, we will likely fork or choose a different approach. Thanks 🙏 .

Consider using a context

This would allow moving precision, rounding, and other settings out of the Decimal type. It would also allow us to support cancellation for large computations that involve loops like Ln.

ToStandard is missing

Hi.

I am using this package https://github.com/leekchan/accounting, which in turn uses your package. It seems you've removed ToStandard.

github.com/leekchan/accounting
../../leekchan/accounting/formatnumber.go:79: d.ToStandard undefined (type *apd.Decimal has no field or method ToStandard)
../../leekchan/accounting/formatnumber.go:139: d.ToStandard undefined (type *apd.Decimal has no field or method ToStandard)

Can you tell me which is its equivalent?

Thanks

leekchan/accounting#6

Inconsistent (or buggy) Quantize behavior

Looks like the Quantize(result, amount, 0) function works differently when:

  • amount is of the form N.0* (where N is a nonzero integer)
  • amount is of the form 0.0*

Here is a simple test to reproduce this (with v3.1.0):

// quantize is a simple wrapper around `apd` library.
func quantize(amount *apd.Decimal, decimalPlaces uint32, roundingRule apd.Rounder) (*apd.Decimal, error) {
	ctx := apd.BaseContext.WithPrecision(infinitePrecision)
	ctx.Rounding = roundingRule

	var amt apd.Decimal
	exp := -int32(decimalPlaces)
	_, err := ctx.Quantize(&amt, amount, exp)

	return &amt, err
}

func TestQuantizeRounding(t *testing.T) {
	// 1.1 -> 2 (works as expected)
	got, err := quantize(apd.New(11, -1), 0, apd.RoundUp)
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	want := apd.New(2, 0)
	if want.CmpTotal(got) != 0 {
		t.Fatalf("want: %+v, got: %+v", want, got)
	}

	// 1.01 -> 2 (works as expected)
	got, err = quantize(apd.New(101, -2), 0, apd.RoundUp)
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	want = apd.New(2, 0)
	if want.CmpTotal(got) != 0 {
		t.Fatalf("want: %+v, got: %+v", want, got)
	}

	// 0.1 -> 1 (works as expected)
	got, err = quantize(apd.New(1, -1), 0, apd.RoundUp)
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	want = apd.New(1, 0)
	if want.CmpTotal(got) != 0 {
		t.Fatalf("want: %+v, got: %+v", want, got)
	}

	// 0.01 -> 1 (doesn't work as expected! instead it does 0.01 -> 0 for some reason)
	got, err = quantize(apd.New(1, -2), 0, apd.RoundUp)
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	want = apd.New(1, 0)
	if want.CmpTotal(got) != 0 {
		t.Fatalf("want: %+v, got: %+v", want, got)
	}
}

precision, scale

Firstly, thank you for making this library available.

We use CRDB as our data store and this library when we need to do decimal operations.

Following taken from CRDB docs:

DECIMAL(precision, scale), where precision is the maximum count of digits both to the left and right of the decimal point 
and scale is the exact count of digits to the right of the decimal point.
...
When inserting a decimal value:

If digits to the right of the decimal point exceed the column's scale, CockroachDB rounds to the scale.
If digits to the right of the decimal point are fewer than the column's scale, CockroachDB pads to the scale with 0s.

What is the best way to achieve CRDB's precision and scale semantics with apd v2? An example would be handy.

What is the best way to round digits to the right of the decimal point with apd v2? An example would be handy.

Thank you!

Just wanted to say thank you for the v3 release. The performance improvements are noticeable.

Benchmarks from my package (bojanz/currency) where the average coefficient is small:

Results for apd v2:

BenchmarkAdd-12       	 2718600	       435.0 ns/op	     128 B/op	       3 allocs/op
BenchmarkSub-12       	 3016879	       405.6 ns/op	      88 B/op	       3 allocs/op
BenchmarkMul-12       	 1447600	       820.8 ns/op	     168 B/op	       5 allocs/op
BenchmarkMulDec-12       1000000	      1055 ns/op	     184 B/op	       7 allocs/op
BenchmarkDiv-12       	  465312	      2928 ns/op	     336 B/op	      11 allocs/op
BenchmarkDivDec-12    	  355858	      3116 ns/op	     352 B/op	      13 allocs/op
BenchmarkRound-12     	  870912	      1228 ns/op	     296 B/op	      11 allocs/op
BenchmarkCmp-12       	85437470	        15.05 ns/op	       0 B/op	       0 allocs/op

Results for apd v3:

BenchmarkAdd-12       	 4333795	       286.2 ns/op	      64 B/op	       2 allocs/op
BenchmarkSub-12       	 4126153	       298.8 ns/op	      64 B/op	       2 allocs/op
BenchmarkMul-12       	 1990543	       625.5 ns/op	      96 B/op	       3 allocs/op
BenchmarkMulDec-12      1366653	       863.4 ns/op	     112 B/op	       5 allocs/op
BenchmarkDiv-12       	  627897	      1614 ns/op	      96 B/op	       3 allocs/op
BenchmarkDivDec-12    	  889832	      1317 ns/op	     112 B/op	       5 allocs/op
BenchmarkRound-12     	 2020036	       574.0 ns/op	      64 B/op	       2 allocs/op
BenchmarkCmp-12       	84142538	        14.26 ns/op	       0 B/op	       0 allocs/op

CmpTotal Bug

CmpTotal() does not properly compare equivalent Decimal values when the coefficient and exponent representations differ

Example:

package main

import (
	"log"

	"github.com/cockroachdb/apd"
)

func main() {
    a := apd.New(10, 0)
    b := apd.New(1, 1)
    log.Println(a)             // prints 10
    log.Println(b)             // prints 1E+1
    log.Println(a.CmpTotal(b)) // prints -1
    log.Println(a.Cmp(b))      // prints 0
    log.Println(b.CmpTotal(a)) // prints 1
    log.Println(b.Cmp(a))      // prints 0
}

rfc: release tags

I am planning on making tagged releases of apd for better version control. The plan is to:

  1. Make a tag of v1.0.0 as the first release, since I consider apd to be in very good shape and am confident that tagging it as a 1.0 is safe.
  2. NOT make a new branch, just stick with tags and develop on master.
  3. Make new releases with more-or-less each commit, since I don't expect there to be many commits. The minor version will be incremented for any additional feature, API addition, or performance gain; the patch version for any bug fix, doc change, or correctness issue. I don't expect to increment the major version much; I suppose on major API changes, like if we decide ErrDecimal is silly and needs to change.

Comments? @RaduBerinde @nvanbenschoten

Add small precision fast-path to Quo

I've been doing some benchmarks to compare APD's performance vs other approaches we're considering (cosmos/cosmos-sdk#7772).

It seems that Quo operations perform pretty poorly compared to Add, Mul, etc.:

Benchmark/1_Ops:_apd.Decimal128.Add
Benchmark/1_Ops:_apd.Decimal128.Add-16         	 5285344	       218 ns/op
Benchmark/1_Ops:_apd.Decimal128.Sub
Benchmark/1_Ops:_apd.Decimal128.Sub-16         	 5665453	       208 ns/op
Benchmark/1_Ops:_apd.Decimal128.Mul
Benchmark/1_Ops:_apd.Decimal128.Mul-16         	 9138602	       138 ns/op
Benchmark/1_Ops:_apd.Decimal128.Quo
Benchmark/1_Ops:_apd.Decimal128.Quo-16         	  162537	      7790 ns/op

This benchmark basically just performs a single operation on two random floating-point numbers (different ones each round). I'm using a Context configured with decimal128 parameters (i.e. Precision = 34, etc.).

Is Quo inherently expected to be 40x slower than other operations or could there be a performance bottleneck?

panic with SetString(".-400000000000000000000000000000000000000")

This test panics:

func TestSetStringMalformed(t *testing.T) {
	c := &BaseContext
	d := new(Decimal)
	tests := []string{
		".-4",
		".-400000000000000000000000000000000000000",
	}
	for _, test := range tests {
		_, _, err := c.SetString(d, test)
		if err == nil {
			t.Errorf("expected error for SetString(%q)", test)
		}
	}
}

The panic occurs on the second test case. Also, I think ".-4" (the first test case) should yield an error.


This was found (indirectly) by go-fuzz. Would there be interest in adding a fuzz function or three directly? (I don't have much time to dedicate to this, but I can at least give it a nudge, and/or continue my indirect fuzzing.)
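A minimal native fuzz target (Go 1.18+) along these lines might be a starting point; this is a sketch that would live inside the apd package itself, and the round-trip property it checks is my suggestion, not an existing invariant of the test suite:

```go
package apd

import "testing"

// FuzzSetString checks that SetString never panics, and that any string it
// accepts round-trips through String back to an equal value.
func FuzzSetString(f *testing.F) {
	f.Add(".-400000000000000000000000000000000000000")
	f.Add("1.5e300")
	f.Fuzz(func(t *testing.T, s string) {
		var d Decimal
		if _, _, err := BaseContext.SetString(&d, s); err != nil {
			return // malformed input must error, not panic
		}
		var d2 Decimal
		if _, _, err := BaseContext.SetString(&d2, d.String()); err != nil {
			t.Fatalf("round-trip of %q -> %q failed: %v", s, d.String(), err)
		}
		if d.Cmp(&d2) != 0 {
			t.Errorf("round-trip changed value: %q -> %q", s, d2.String())
		}
	})
}
```

Run with something like `go test -fuzz=FuzzSetString`.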

Function remainder returns division impossible

package main

import (
	"log"

	apd "github.com/cockroachdb/apd/v2"
)

func main() {
	dec("4", "2")
}

func dec(x, y string) {
	rem := new(apd.Decimal)

	valx, _, err := apd.NewFromString(x)
	if err != nil {
		panic("x: " + err.Error())
	}
	valy, _, err := apd.NewFromString(y)
	if err != nil {
		panic("y: " + err.Error())
	}

	_, err = apd.BaseContext.Rem(rem, valx, valy)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("result: %s", rem)
}

How to use Quantize with limited precision context

I'm building a financial application that deals with multiple currencies and multiple Contexts, with settings appropriate to each currency. The DB storage supports the widest possible range of values to permit storing any currency; it uses DECIMAL(78,18), i.e. MaxExponent=60 and MinExponent=-18 in apd terms.

I'm trying to quantize a value coming from the DB at the highest possible precision down to the precision appropriate for a given currency. The value is not large, but it has many trailing decimal zeroes that I want to remove to bring it to the correct precision.

And for whatever reason, Quantize() does not produce the result I expect; it raises an Overflow condition instead. Below is sample code that reproduces it:

package main

import (
	"fmt"

	"github.com/cockroachdb/apd/v2"
)

func main() {
	dStr := "6410.000000000000000000" // small value with many trailing zeroes, as seen in the DB
	wideCon := &apd.Context{          // widest supported context
		MaxExponent: 78,
		MinExponent: -18,
		Traps:       apd.DefaultTraps,
	}
	d, _, _ := wideCon.SetString(&apd.Decimal{}, dStr)

	limitCon := &apd.Context{ // limited context suitable for the given currency
		Precision:   17,
		MaxExponent: 9,
		MinExponent: -8,
		Traps:       apd.DefaultTraps,
	}
	scratch := &apd.Decimal{}
	con, err := limitCon.Quantize(scratch, d, limitCon.MinExponent) // attempt to quantize to the correct precision; expected to work fine
	fmt.Println(con, err, scratch)                                  // fails with an `overflow` condition
}

In this piece of code the wideCon is the widest supported context in the system, i.e. the DB default context, while limitCon is the target currency context. If I understand correctly the context settings - the limitCon effectively forces the values in this context to be within 999999999.99999999 range, which is exactly what I need.

The sample value is well below the MaxExponent limit, and I expect limitCon.Quantize() to produce the same value with the decimal places adjusted to fit, but instead it returns a NaN value and an Overflow condition.

The error goes away if I increase MaxExponent (it works at MaxExponent=12 for this value), but working around it this way means forfeiting the ceiling check, which I'd rather not do.

What is a better way to achieve what I need: rounding an arbitrary high-precision value down to constrained, per-currency precision settings while keeping the correct ceiling constraint?

format weirdness with a non-pointer Decimal

Sorry, first time I've done this!

Basically, if you've got the code

var a1 apd.Decimal
a1.Scan(234.234)
log.Printf("%f", a1)

var a2 *apd.Decimal
a2, _, _ = apd.NewFromString("345.345")
log.Printf("%s", a2)

it prints out:

2019/04/14 18:07:55 {%!f(apd.Form=0) %!f(bool=false) %!f(int32=-3) {%!f(bool=false) [%!f(big.Word=234234)]}}
2019/04/14 18:07:55 345.345

This behaviour was weird enough for me to write a wrapper around it.

I believe it can be fixed by defining Format on a value receiver instead of a pointer receiver.

Benchmarks

I have benchmarked your library against other two decimal libraries with the following code:

package main

import (
	"strconv"
	"testing"

	d3 "github.com/cockroachdb/apd"
	d1 "github.com/ericlagergren/decimal"
	d2 "github.com/shopspring/decimal"
)

func BenchmarkFloat(b *testing.B) {
	price := 0.2985
	var unit float64 = 60
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		result := price / unit
		strconv.FormatFloat(result*unit, 'f', -1, 64)
	}
}

func BenchmarkEricLagergren(b *testing.B) {
	price := d1.New(2985, 4)
	unit := d1.New(60, 0)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		price.Quo(price, unit)
		price.Mul(price, unit).String()
	}
}

func BenchmarkShopSpring(b *testing.B) {
	price := d2.NewFromFloat(0.2985)
	unit := d2.NewFromFloat(60)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		result := price.Div(unit)
		result.Mul(unit).String()
	}
}

func BenchmarkCockroachdb(b *testing.B) {
	c := d3.BaseContext.WithPrecision(9)
	price := d3.New(2985, -4)
	unit := d3.New(60, 0)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		c.Quo(price, price, unit)
		c.Mul(price, price, unit)
		price.String()
	}
}

and got the following results:

go test -bench .
BenchmarkFloat-8           	10000000	       137 ns/op
BenchmarkEricLagergren-8   	10000000	       233 ns/op
BenchmarkShopSpring-8      	 2000000	       870 ns/op
BenchmarkCockroachdb-8     	 1000000	      1682 ns/op
PASS
ok  	_/home/rif/Downloads/issue	8.258s

Did I use apd correctly? Any tips to optimize this?

Cbrt not being tested

Cbrt is not being tested, despite there being a test file for cuberoot.

I can add a return 0, nil at the beginning and the tests still pass:

 func (c *Context) Cbrt(d, x *Decimal) (Condition, error) {
+	return 0, nil

$ go test
--- PASS: TestParseDecTest/cuberoot-apd.decTest (0.00s)
PASS
ok  	github.com/cockroachdb/apd	4.267s

Separation of Decimal and Context

What's the reasoning for the separation of Decimal and Context? Whenever I use a decimal type, I tend to need to use an arithmetic operation on it. Having to define a context, my relevant decimal objects, and determine a precision for the operation each time I need to calculate a value is a bit unwieldy. On the note of precision, I usually want the most precise value during a series of operations and then I will round at the end of the series. Having to determine a precision for each operation is not ideal in that case, as it may result in a loss of precision at one of the steps in the series.

Precision clarification

I've been using the lib with a precision of 34, and I've noticed that when using Text(), some of my decimals terminate with ...9999999 or ...0000001. I suspect it's got something to do with this.

Is there a way I can avoid such errors? My PoC app requires a certain sum total to be exactly zero, even at a precision of 34, which I can't achieve with these errors. I have to use 'f' and can't use scientific notation. I'd like to know why this error occurs and why it can't be fixed; I've noticed the shopspring lib has a similar problem.

Version v1.1.0 included by default when using Go modules

The 2.0.0 release is only accessible when using Go modules if you manually modify go.mod to look for v2.0.0 or master (latest selects v1.1.0). When v2 is requested by hand, the go tool updates go.mod to include the line:

github.com/cockroachdb/apd v0.0.0-20181017181144-bced77f817b4

This was surprising, as I was expecting to automatically adopt the latest version of the code when importing this library into a project for the first time. Perhaps it would be worth looking at the advice in the golang wiki for a solution that better supports Go 1.11 modules.

Decompose is not using buf

The comment specifies that buf will be used for the coefficient if it provides enough capacity, but right now it is ignored, which is annoying when trying to encode with zero allocations.

See #132 for a possible fix.
