numpy-financial's Introduction


NumPy is the fundamental package for scientific computing with Python.

It provides:

  • a powerful N-dimensional array object
  • sophisticated (broadcasting) functions
  • tools for integrating C/C++ and Fortran code
  • useful linear algebra, Fourier transform, and random number capabilities

Testing:

Running the NumPy test suite requires pytest and hypothesis. Tests can then be run after installation with:

python -c "import numpy, sys; sys.exit(numpy.test() is False)"

Code of Conduct

NumPy is a community-driven open source project developed by a diverse group of contributors. The NumPy leadership has made a strong commitment to creating an open, inclusive, and positive community. Please read the NumPy Code of Conduct for guidance on how to interact with others in a way that makes our community thrive.

Call for Contributions

The NumPy project welcomes your expertise and enthusiasm!

Small improvements or fixes are always appreciated. If you are considering larger contributions to the source code, please contact us through the mailing list first.

Writing code isn’t the only way to contribute to NumPy. You can also:

  • review pull requests
  • help us stay on top of new and old issues
  • develop tutorials, presentations, and other educational materials
  • maintain and improve our website
  • develop graphic design for our brand assets and promotional materials
  • translate website content
  • help with outreach and onboard new contributors
  • write grant proposals and help with other fundraising efforts

For more information about the ways you can contribute to NumPy, visit our website. If you’re unsure where to start or how your skills fit in, reach out! You can ask on the mailing list or here, on GitHub, by opening a new issue or leaving a comment on a relevant issue that is already open.

Our preferred channels of communication are all public, but if you’d like to speak to us in private first, contact our community coordinators at [email protected] or on Slack (write [email protected] for an invitation).

We also have a biweekly community call, details of which are announced on the mailing list. You are very welcome to join.

If you are new to contributing to open source, this guide helps explain why, what, and how to successfully get involved.


numpy-financial's Issues

Rework functions to use ufuncs?

The implementation of e.g. nper:

https://github.com/numpy/numpy-financial/blob/master/numpy_financial/_financial.py#L229

requires the usual “split the array up using boolean indexes and then do different things to each hunk” stuff that you have to do when writing a function with conditional logic that operates on an entire array. This tends to lead to convoluted, hard-to-maintain code. (And nper still has more edge cases that need to be handled, so there will need to be more boolean arrays!)

One solution to this problem is to write ufuncs so that you can work with scalars and just use conditional logic. The numpy_financial functions can’t be full ufuncs because they have default arguments, but most of them can be very thin wrappers around a ufunc.

The disadvantage of introducing ufuncs is that you introduce compiled code, but perhaps it is worth it? Code for generating the loops from scalar kernels can be grabbed from SciPy (though it can be stripped down for numpy_financial since fewer languages need to be supported).
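
As a rough illustration of the shape this could take, here is a pure-Python sketch that uses np.frompyfunc as a stand-in for a real compiled ufunc, with a numeric `when` and most edge cases elided:

import numpy as np

def _nper_kernel(rate, pmt, pv, fv, when):
    # scalar kernel: plain conditional logic, no boolean-mask bookkeeping
    if rate == 0.0:
        return -(fv + pv) / pmt
    z = pmt * (1.0 + rate * when) / rate
    return np.log((-fv + z) / (pv + z)) / np.log(1.0 + rate)

_nper_ufunc = np.frompyfunc(_nper_kernel, 5, 1)

def nper(rate, pmt, pv, fv=0, when=0):
    # thin wrapper: fills in the default arguments, lets the (stand-in) ufunc
    # broadcast the inputs, then converts the object-dtype result back to float
    return np.asarray(_nper_ufunc(rate, pmt, pv, fv, when), dtype=float)

print(nper(0.07 / 12, -150, 8000))  # roughly 64.07, matching the documented nper example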

DOC: Test examples in docstrings fail

Issue with current documentation:

The test examples provided in the docstrings of numpy_financial/_financial.py mostly fail.

Idea or request for content:

No response

TST: Parameterise tests where needed

Proposed new feature or change:

Currently NumPy-Financial has many repetitive tests. For example, in tests.test_financial.TestMirr.test_mirr we have the following code, which could be parameterised across inputs and data types (a possible parameterisation is sketched after the snippet):

    def test_mirr(self):
        val = [-4500, -800, 800, 800, 600, 600, 800, 800, 700, 3000]
        assert_almost_equal(npf.mirr(val, 0.08, 0.055), 0.0666, 4)

        val = [-120000, 39000, 30000, 21000, 37000, 46000]
        assert_almost_equal(npf.mirr(val, 0.10, 0.12), 0.126094, 6)

        val = [100, 200, -50, 300, -200]
        assert_almost_equal(npf.mirr(val, 0.05, 0.06), 0.3428, 4)

        val = [39000, 30000, 21000, 37000, 46000]
        assert_(numpy.isnan(npf.mirr(val, 0.10, 0.12)))

    def test_mirr_decimal(self):
        val = [Decimal('-4500'), Decimal('-800'), Decimal('800'),
               Decimal('800'), Decimal('600'), Decimal('600'), Decimal('800'),
               Decimal('800'), Decimal('700'), Decimal('3000')]
        assert_equal(npf.mirr(val, Decimal('0.08'), Decimal('0.055')),
                     Decimal('0.066597175031553548874239618'))

        val = [Decimal('-120000'), Decimal('39000'), Decimal('30000'),
               Decimal('21000'), Decimal('37000'), Decimal('46000')]
        assert_equal(npf.mirr(val, Decimal('0.10'), Decimal('0.12')),
                     Decimal('0.126094130365905145828421880'))

        val = [Decimal('100'), Decimal('200'), Decimal('-50'),
               Decimal('300'), Decimal('-200')]
        assert_equal(npf.mirr(val, Decimal('0.05'), Decimal('0.06')),
                     Decimal('0.342823387842176663647819868'))

        val = [Decimal('39000'), Decimal('30000'), Decimal('21000'),
               Decimal('37000'), Decimal('46000')]
        assert_(numpy.isnan(npf.mirr(val, Decimal('0.10'), Decimal('0.12'))))

numpy.nper() with pmt=0 gives unexpected "divide by zero" warning

Reproducing code example:

import numpy as np

np.nper(rate=0.1, pmt=0, pv=-500, fv=1500)

Error message:

numpy\lib\financial.py:308: RuntimeWarning: divide by zero encountered in long_scalars
A = -(fv + pv)/(pmt+0)

Actual Result:

np.nper() returns the correct result with "divide by zero" warning

Expected Result:

np.nper() returns the same correct result but without warning

Numpy/Python version information:

1.16.4 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)]
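
For reference, when pmt is zero the underlying annuity equation reduces to fv + pv*(1 + rate)**nper = 0, so the result can be computed without the division that triggers the warning (a worked check, not the library's code):

import numpy as np

rate, pv, fv = 0.1, -500, 1500

# fv + pv * (1 + rate) ** nper = 0  =>  nper = log(-fv / pv) / log(1 + rate)
nper = np.log(-fv / pv) / np.log(1 + rate)
print(nper)  # ~11.53, the same value np.nper reports alongside the warning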

DOC: MIRR documentation is lacking

Issue with current documentation:

The documentation for MIRR is very limited (screenshot omitted).

Idea or request for content:

The documentation for MIRR should explain what it is, how to use it and when to use it. In addition, it should supply some examples of how it should be used.
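
For example, the documentation could include something along these lines (the cross-check below follows the standard textbook definition of MIRR, not necessarily the exact internal code path):

import numpy as np
import numpy_financial as npf

# an initial outlay followed by positive cash flows
values = np.array([-120000, 39000, 30000, 21000, 37000, 46000])
finance_rate = 0.10   # rate paid on the invested (negative) cash flows
reinvest_rate = 0.12  # rate earned on the generated (positive) cash flows

print(npf.mirr(values, finance_rate, reinvest_rate))  # ~0.1261

# Textbook cross-check:
# MIRR = (FV(positive flows, reinvest_rate) / -PV(negative flows, finance_rate)) ** (1/(n-1)) - 1
n = len(values)
t = np.arange(n)
fv_pos = np.sum(np.where(values > 0, values, 0) * (1 + reinvest_rate) ** (n - 1 - t))
pv_neg = np.sum(np.where(values < 0, values, 0) / (1 + finance_rate) ** t)
print((fv_pos / -pv_neg) ** (1 / (n - 1)) - 1)  # agrees to within floating point error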

We need more thorough test coverage

Our tests for many of the functions are very sparse; they should be expanded to include more tests for each of the functions in numpy-financial. In particular, I think we need more thorough testing of edge cases.

The test can be modified by editing test_financial.py.
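
As one concrete example of the kind of edge case worth covering (a hypothetical test, not one taken from the current suite):

import numpy as np
import numpy_financial as npf


def test_fv_zero_rate():
    # with rate == 0 the future value reduces to -(pv + pmt * nper)
    assert np.isclose(npf.fv(0.0, 10, -100, -1000), 2000.0)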

pmt() function too slow - here are some ways to make it faster

The pmt() function of numpy_financial is too slow and can become the main bottleneck in all those cases where it must be run thousands, if not millions of times - e.g. multiple scenarios on many loans or mortgages based on a floating rate.

I propose below a few alternative ways to write new, more optimised functions, which can be 7 to 60 times faster, depending on the circumstances:

  • if you need to run this function millions of times and your code allows it, you'll be better off using one function for when the inputs, and the output, are scalar, and a separate one for when they are arrays
  • unless you have reasons not to, using numba will speed things up even more
  • if you cannot use numba and never run this function on scalars, you won't see much benefit

My findings are that, on my machine at least:

  • For scalars: npf is slower than all the other implementations: about 30 times slower than my scalar, non-numba function, about 60 times slower than my scalar numba one but also significantly slower than my array function
  • For arrays: my numba function is about 7 times faster than npf's; without numba, they are about the same

I copied below the exact code I used to test all of this:

import numpy as np
import pandas as pd
import numpy_financial as npf
import timeit
import numba

def my_pmt(rate, nper, pv, fv=0, when=0):
    c = (1 + rate) ** nper
    # multiplying by 1 converts a size-1 array to a scalar
    return 1 * np.where(nper == 0, np.nan,
                        np.where(rate == 0, -(fv + pv) / nper,
                                 (-pv * c - fv) * rate / ((c - 1) * (1 + rate * when))))

@numba.jit
def pmt_numba_array(rate, nper, pv, fv=0, when=0):
    c = (1 + rate) ** nper
    return np.where(nper == 0, np.nan,
                    np.where(rate == 0, -(fv + pv) / nper,
                             (-pv * c - fv) * rate / ((c - 1) * (1 + rate * when))))


def my_pmt_optimised(rate, nper, pv, fv=0, when=0):
    if np.isscalar(rate) and np.isscalar(nper) and np.isscalar(pv) and np.isscalar(fv):
        return pmt_numba_scalar(rate, nper, pv, fv, when)
    else:
        return pmt_numba_array(rate, nper, pv, fv, when)


@numba.jit
def pmt_numba_scalar(rate, nper, pv, fv=0, when=0):
    # when: 0 = end of period, 1 = beginning of period
    if nper == 0:
        return np.nan
    elif rate == 0:
        return -(fv + pv) / nper
    else:
        c = (1 + rate) ** nper
        return (-pv * c - fv) * rate / ((c - 1) * (1 + rate * when))


def pmt_no_numba_scalar(rate, nper, pv, fv=0, when=0):
    if nper == 0:
        return np.nan
    elif rate == 0:
        return -(fv + pv) / nper
    else:
        c = (1 + rate) ** nper
        return (-pv * c - fv) * rate / ((c - 1) * (1 + rate * when))


def pmt_npf(rate, nper, pv, fv=0, when=0):
    # (rate, nper, pv, fv, when) = map(np.array, [rate, nper, pv, fv, when])
    temp = (1 + rate) ** nper
    mask = (rate == 0)
    masked_rate = np.where(mask, 1, rate)
    fact = np.where(mask != 0, nper,
                    (1 + masked_rate * when) * (temp - 1) / masked_rate)
    return -(fv + pv * temp) / fact

r = 4
n = int(1e4)

rate = 5e-2
nper = 120
pv = 1e6
fv = -100e3



t_my_numba = timeit.Timer("pmt_numba_scalar(rate, nper, pv, fv ) " ,  globals = globals() ).repeat(repeat = r, number = n)
t_my_no_numba = timeit.Timer("pmt_no_numba_scalar(rate, nper, pv, fv ) " ,  globals = globals() ).repeat(repeat = r, number = n)
t_npf = timeit.Timer("npf.pmt(rate, nper, pv, fv )" ,  globals = globals() ).repeat(repeat = r, number = n)
t_my_pmt = timeit.Timer("my_pmt(rate, nper, pv, fv )" ,  globals = globals() ).repeat(repeat = r, number = n)
t_my_pmt_optimised = timeit.Timer("my_pmt_optimised(rate, nper, pv, fv )" ,  globals = globals() ).repeat(repeat = r, number = n)


resdf_scalar = pd.DataFrame(index = ['min time'])
resdf_scalar['my scalar func, numba'] = [min(t_my_numba)]
resdf_scalar['my scalar func, no numba'] = [min(t_my_no_numba)]
resdf_scalar['npf'] = [min(t_npf)]
resdf_scalar['my array function, no numba'] = [min(t_my_pmt)]
resdf_scalar['my scalar/array function, numba'] = [min(t_my_pmt_optimised)]

# the docs explain why we should take the min and not the avg
resdf_scalar = resdf_scalar.transpose()
resdf_scalar['diff vs fastest'] = (resdf_scalar / resdf_scalar.min() )

rate =np.arange(2,12)*1e-2
nper = np.arange(200,210)
pv = np.arange(1e6,1e6+10)
fv = -100e3

t_npf_array = timeit.Timer("npf.pmt(rate, nper, pv, fv )" ,  globals = globals() ).repeat(repeat = r, number = n)
t_my_pmt_array = timeit.Timer("my_pmt(rate, nper, pv, fv )" ,  globals = globals() ).repeat(repeat = r, number = n)
t_my_pmt_optimised_array = timeit.Timer("my_pmt_optimised(rate, nper, pv, fv )" ,  globals = globals() ).repeat(repeat = r, number = n)


resdf_array = pd.DataFrame(index = ['min time'])
resdf_array['npf'] = [min(t_npf_array)]
resdf_array['my array function, no numba'] = [min(t_my_pmt_array)]
resdf_array['my scalar/array function, numba'] = [min(t_my_pmt_optimised_array)]

# the docs explain why we should take the min and not the avg
resdf_array = resdf_array.transpose()
resdf_array['diff vs fastest'] = (resdf_array / resdf_array.min() )

irr() providing different results on ARM and x86 CPUs

Hi all,
I've got two machines set up, my local development machine (i7-10510U), and an AWS EC2 instance (c6g.16xlarge, Arm-based AWS Graviton2). On both machines, the same version of Python (3.8.5) and numpy_financial (1.0.0) are installed. Both machines are running Ubuntu 20.04 LTS, with my local machine running kernel 5.8.0-43-generic and the AWS machine running 5.4.0-1037-aws.

I run the following code on both machines:

import numpy_financial
test_cashflow = [-20, 2, 3, 4, 5, 4, 3, 2]
numpy_financial.irr(test_cashflow)

On my local development machine (x86), the result given is 0.03603774756546363, whereas on the AWS EC2 Arm-based instance, the result given is 0.08389095415117676. I've also run the same code on an AWS EC2 t2.medium instance (x86 based) and was given the same result as my local development machine (0.036...)

If I provide the same input into the MS Excel IRR function, the result given is 3.6%, matching the x86 outputs. This leads me to believe that there's an issue occurring with the ARM package.
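
One quick way to see which of the two values is a genuine root is to plug each candidate back into the NPV (a sanity check, not a fix):

import numpy_financial as npf

test_cashflow = [-20, 2, 3, 4, 5, 4, 3, 2]

for candidate in (0.03603774756546363, 0.08389095415117676):
    print(candidate, npf.npv(candidate, test_cashflow))
# the x86/Excel value drives the NPV to ~0; the ARM value leaves a large residual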

BUG: rate returning incorrect value

Describe the issue:

I found a situation where npf.rate is not returning the same thing as my BA II Plus financial calculator or npf.irr. Even setting guess to the correct value does not get the expected answer.

Reproduce the code example:

import numpy_financial as npf
# This returns -1.8964420585461792
npf.rate(8, -440_000, 263_175, 25_500) 
# This returns 0.5838779110248231 - Matches financial calculator
npf.irr([-440_000, 263_175, 263_175, 263_175, 263_175, 263_175, 263_175, 263_175, 263_175 + 25_500])
# Still returns -1.8964420585461792
npf.rate(8, -440_000, 263_175, 25_500, 0, 0.5838779110248231)

Error message:

No response

Runtime information:

print(numpy.version)
1.26.4
print(sys.version)
3.12.3 (main, Apr 9 2024, 08:09:14) [Clang 15.0.0 (clang-1500.3.9.4)]
print(numpy.show_runtime())
WARNING: threadpoolctl not found in system! Install it by pip install threadpoolctl. Once installed, try np.show_runtime again for more detailed build information
[{'numpy_version': '1.26.4',
'python': '3.12.3 (main, Apr 9 2024, 08:09:14) [Clang 15.0.0 '
'(clang-1500.3.9.4)]',
'uname': uname_result(system='Darwin', node='HCOB-003406.local', release='23.4.0', version='Darwin Kernel Version 23.4.0: Fri Mar 15 00:12:49 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6020', machine='arm64')},
{'simd_extensions': {'baseline': ['NEON', 'NEON_FP16', 'NEON_VFPV4', 'ASIMD'],
'found': ['ASIMDHP'],
'not_found': ['ASIMDFHM']}}]
None

Context for the issue:

No response

Initial library creation tasks

This is a rough "to do" list. The tasks probably need to be refined.

  • Get the financial functions from numpy into this repo., and create a basic package structure (setup.py, etc) that allows the package to be installed locally.
    - Done in #1
  • Configure testing for running tests with pytest, and set up CI.
    - I set up a github workflow to run the tests with numpy >= 1.17 with Python versions 3.5, 3.6, 3.7 on ubuntu. That's enough to consider this task done. We can add more platforms later.
  • Change setup.py and anything else necessary so the version is visible wherever it is needed: in setup.py, as numpy_financial.__version__, etc.
    - Done in bf4808c
  • Set up documentation (Sphinx, etc.)
    - Documentation builds on each push to the master branch, and is available to view at https://numpy.org/numpy-financial/
  • Upload package to PyPI; get feedback.
    - Version 0.1.0 was uploaded to PyPI on October 10, 2019.
  • Release 1.0

After releasing 1.0, we can:

  • Add deprecation warnings to the financial functions in NumPy.
  • Transfer the open GitHub issues about the financial functions from the NumPy repo to this one.
  • Freeze the NumPy financial library. Direct all future issues and pull requests to this repo.

BUG: Benchmarking does not work with spin

Describe the issue:

Running the benchmarks does not work since the project moved to spin/conda. The code needs to be updated to use the scientific Python ecosystem's tooling.

Reproduce the code example:

spin bench

Error message:

Usage: spin [OPTIONS] COMMAND [ARGS]...
Try 'spin --help' for help.

Error: No such command 'bench'.

Runtime information:

Working on the current main branch

Context for the issue:

We've recently moved to using conda/spin to build our projects. However, the benchmarks haven't been ported yet.

IRR chooses solution closest to zero not always correct choice

For the following set of cash flows, I would expect the IRR to return 12%.

cf = np.array([-217500.0, -217500.0, 108466.80462450592, 101129.96439328062, 93793.12416205535, 86456.28393083003, 79119.44369960476, 71782.60346837944, 64445.76323715414, 57108.92300592884, 49772.08277470355, 42435.24254347826, 35098.40231225296, 27761.56208102766, 20424.721849802358, 13087.88161857707, 5751.041387351768, -1585.7988438735192, -8922.639075098821, -16259.479306324123, -23596.31953754941, -30933.159768774713, -38270.0, -45606.8402312253, -52943.680462450604, -60280.520693675906, -67617.36092490121])

However, from the code, we have the following choice being made:
# NPV(rate) = 0 can have more than one solution so we return
# only the solution closest to zero.
rate = 1/res - 1

print(rate)
[-0.01809679  0.12      ]

As you can see, 12% is one of the solutions but it is discarded.
This leads me to question whether choosing the solution that is closest to zero is indeed the right choice here, and I would appreciate the group's feedback.
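
For reference, all the candidate rates can be recovered directly from the NPV polynomial, mirroring the rate = 1/res - 1 line quoted above (a sketch to aid the discussion, not a proposed fix):

import numpy as np

# cf is the cash-flow array defined above
res = np.roots(cf[::-1])                          # roots in x = 1/(1 + rate)
res = res[np.isreal(res) & (res.real > 0)].real   # keep real roots with rate > -1
candidate_rates = 1 / res - 1
print(candidate_rates)                            # includes both -0.018... and 0.12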

Error in numpy-financial's IRR computation - eigenvalues did not converge

Within my code, I compute the IRR of a set of 1000 cash flows with numpy-financial ("npf"). Each time, the code stops and delivers the following message:

File "C:\Users\navag\anaconda3\envs\tf2\lib\site-packages\numpy_financial\_financial.py", line 700, in irr
  res = np.roots(values[::-1])

File "<__array_function__ internals>", line 6, in roots

File "C:\Users\navag\AppData\Roaming\Python\Python37\site-packages\numpy\lib\polynomial.py", line 245, in roots
  roots = eigvals(A)

File "<__array_function__ internals>", line 6, in eigvals

File "C:\Users\navag\AppData\Roaming\Python\Python37\site-packages\numpy\linalg\linalg.py", line 1054, in eigvals
  w = _umath_linalg.eigvals(a, signature=signature, extobj=extobj)

File "C:\Users\navag\AppData\Roaming\Python\Python37\site-packages\numpy\linalg\linalg.py", line 103, in _raise_linalgerror_eigenvalues_nonconvergence
  raise LinAlgError("Eigenvalues did not converge")

LinAlgError: Eigenvalues did not converge

When I manually try the line of code portf_irr_BH = (((npf.irr(invest[:,6])+1)**252) - 1), I first get the same error message, but if I immediately try the line once again, I then get the right result. This means that npf is capable of finding the right result! Does anyone know how to prevent this bug? Thanks in advance.

TypeError in numpy.rate when using Decimals

numpy.rate raises a TypeError when working with Decimals and the iterative solver doesn't converge before the maximum number of iterations.

I poked it a little bit, and it seems like the function assumes rn to be an array or float, but it is a Decimal:

https://github.com/numpy/numpy/blob/cacdf265755ffae9d5c07d35fafa3d1366e342b8/numpy/lib/financial.py#L640-L648

Reproducing code example:

import numpy as np
from decimal import Decimal

np.rate(Decimal(12), Decimal('400'), Decimal('10000'), Decimal(0))

floats work just fine:

>>> np.rate(12.0, 400.0, 10000.0, 0.0)
nan

Error message:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/mauricio/.virtualenvs/unhaggle/local/lib/python2.7/site-packages/numpy/lib/financial.py", line 648, in rate
    return np.nan + rn
TypeError: unsupported operand type(s) for +: 'float' and 'Decimal'

Numpy/Python version information:

('1.16.3', '2.7.15+ (default, Nov 27 2018, 23:36:35) \n[GCC 7.3.0]')
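
One possible direction for a fix (a sketch only, with a hypothetical helper name): have the non-convergence branch return a NaN that matches the type of rn instead of adding a float NaN to a Decimal.

from decimal import Decimal
import numpy as np


def _nan_like(rn):
    # Decimal inputs get a Decimal NaN; floats and arrays keep the existing behaviour
    if isinstance(rn, Decimal):
        return Decimal("NaN")
    return np.nan + rn * 0  # plain nan for floats, an array of nan for ndarrays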

Extremely slow to calculate IRR

I just came across this package as I realized that np.irr() has been migrated over into it. As I tried to get familiar with it, I was stunned by the amount of time it takes to calculate IRR. If this package is being maintained, I would really like to know why it takes this long and whether there are any recommended alternatives. Here is a quick example I compiled:

import pandas as pd
import numpy as np
import numpy_financial as npf
from time import time


# Generate some example data
t = pd.date_range('2022-01-01', '2037-01-01', freq='D')

cash_flows = np.random.randint(10000, size=len(t)-1)
cash_flows = np.insert(cash_flows, 0, -10000)

# Calculate IRR
start_timer = time()
npf.irr(cash_flows)
stop_timer = time()
print(f"Time taken to calculate IRR over 30 years of daily data: {round((stop_timer - start_timer) / 60, 2)} minutes")

It took over a minute to calculate IRR for 30 years of daily data. I am wondering if the lack of an initial guess is slowing this down and if that needs to be added?

BUG: There may exist multiple roots in the solution of the IRR function

Describe the issue:

The value I got from npf.irr differs from the IRR that Excel computes for the same cash flows (screenshots omitted).

Reproduce the code example:

import numpy_financial as npf
print(npf.irr([-10000]+[327.24625]*16))

Error message:

NA

Runtime information:

NA

Context for the issue:

There may exist multiple roots in the solution of the IRR function, so irr() may give an incorrect result.

MAINT: Avoid ``ZeroDivisionError``s in ``nper_inner_loop``

Proposed new feature or change:

I've just merged #118; this PR reproduced the behaviour of nper in Cython as closely as possible. However, there are several ZeroDivisionErrors that are currently handled poorly (they return NaNs).

z = pmt_ * (1.0 + rate_ * when_) / rate_
return log((-fv_ + z) / (pv_ + z)) / log(1.0 + rate_)

In each case, we should carefully evaluate each code path and return a financially sensible result.
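
A sketch of what the degenerate branches could return, derived from the standard annuity equation (the names and conventions here are suggestions, not the merged code):

import numpy as np


def nper_degenerate_cases(rate_, pmt_, pv_, fv_, when_):
    """Handle the branches where the formula above divides by zero; return None
    when the general log formula applies."""
    if rate_ == 0.0:
        # z = pmt * (1 + rate*when) / rate is undefined; the balance changes linearly
        return np.nan if pmt_ == 0.0 else -(fv_ + pv_) / pmt_
    z = pmt_ * (1.0 + rate_ * when_) / rate_
    if pv_ + z == 0.0:
        # the balance never moves towards fv, so there is no finite number of periods
        return np.inf
    return None  # no degenerate case: fall through to the log formula above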

Financial function nper() incorrect for zero-rate annuities.

When rate=0.0, the sign on present value is wrong, causing most degenerate cases to be incorrect.

As an example, making periodic payments of $10 for a $100 loan at progressively smaller interest rates shows the number of periods approaching 10:

numpy.nper(0.01, -10, 100, 0)
10.588644459423231

numpy.nper(0.001, -10, 100, 0)
10.055360184319873

numpy.nper(0.0001, -10, 100, 0)
10.005503577667028

numpy.nper(0.00001, -10, 100, 0)
10.000550035678859

numpy.nper(0.000001, -10, 100, 0)
10.000055001142128

If the interest rate is 0%, it should be exactly 10 payments, not -10 (with rate=0 the annuity equation reduces to fv + pv + pmt*nper = 0, i.e. nper = -(fv + pv)/pmt = -(0 + 100)/(-10) = 10):

numpy.nper(0, -10, 100, 0)
-10.0

Slow NPV calculations for monte carlo simulation purposes

When using a large number of Monte Carlo simulations (~1e6 or more) for uncertainty calculations on NPV, the present implementation of numpy-financial's NPV is very slow. We suggest a new implementation of the NPV function, which allows calculation of several projects simultaneously and is approximately 200 times faster than the current version for 1e6 simulations with a cash flow of length 10.

In cases where the number of entries in the cash flow is significantly higher than the number of projects to be calculated, the old implementation will be faster. To solve this, we suggest an adaptive approach which chooses the method based on the input dimensions of the 'values' array (see the sketch after the example code).

Example code:

import numpy_financial as npf
import numpy as np
import time

def faster_npv(rate: float, values: list):
    """ A faster way to calculate NPV for several projects based 
    on numpy arrays. """
    discounted_values = []
    for i in range(np.shape(values)[0]):
        discounted_values.append(values[i] / (1 + rate) ** i)
    return np.sum(discounted_values, axis=0)

rate=0.05
no_simulations = int(1e6)

capex = np.random.normal(loc=10e5, scale = 10e3, size=no_simulations)
lifetime = 10 #years

opex = []
for yr in range(lifetime):
    opex.append(np.random.normal(loc=10e4, scale= 10e2,size=no_simulations))

start_time=time.time()
all_npv=np.nan*np.ones_like(capex)
for i in range(no_simulations):
    values = [capex[i]]+[op[i] for op in opex]
    all_npv[i]=npf.npv(rate, values)
print(f"Standard NPF calculations took {time.time()-start_time} s.")

start_time=time.time()
values=[capex]+opex
faster_npv(rate,values )
print(f"Faster NPV calculations took {time.time()-start_time} s.")

AttributeError: partially initialized module 'numpy_financial' has no attribute 'ipmt' (most likely due to a circular import)

I am running this example: https://www.toptal.com/finance/cash-flow-consultants/python-cash-flow-model

The above error is generated at line 19: interest_payment = npf.ipmt(rate=coupon / 12, per=periods, nper=term, pv=-original_balance)

I have the latest version (1.0.0) installed and use Spyder v5.2.2

  • Spyder version: 5.2.2 None
  • Python version: 3.9.12 64-bit
  • Qt version: 5.9.7
  • PyQt5 version: 5.9.2
  • Operating System: Linux 5.15.0-43-generic

The same error repeats on the next line when uncommenting line 19

I followed these instructions: https://anaconda.org/conda-forge/numpy-financial

Restarting the kernel does not fix it; I run into the same error again.

Thanks, Marc

DOC: NumPy-Financial should have a documented style guide

Issue with current documentation:

NumPy-Financial currently does not have a documented style guide

Idea or request for content:

Write a style guide for others to follow. The guide should be based on:

  • PEP8 for general code, except the line length should be 88 characters
  • numpydoc for documentation

Some further documents to consider:

In case of ambiguity, we follow the NumPy style guide

Add CUMPRINC and CUMIPMT functions

The Open Document Format for Office Applications (OpenDocument) v1.2, on which numpy-financial is based, specifies two functions which are currently not implemented: CUMPRINC and CUMIPMT. These functions calculate the cumulative principal and interest paid down on an installment loan after a number of payment periods.

To implement these, one could simply call npf.ppmt/npf.ipmt with a 1-D array for the per argument and then sum over rows.

I'd be happy to contribute code and submit a pull request.
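
A minimal sketch of that approach (the function names and signatures are illustrative, loosely following the OpenDocument argument order, not a finished API):

import numpy as np
import numpy_financial as npf


def cumipmt(rate, nper, pv, start_period, end_period, when='end'):
    # cumulative interest paid from start_period to end_period (inclusive)
    periods = np.arange(start_period, end_period + 1)
    return np.sum(npf.ipmt(rate, periods, nper, pv, when=when))


def cumprinc(rate, nper, pv, start_period, end_period, when='end'):
    # cumulative principal paid down over the same window
    periods = np.arange(start_period, end_period + 1)
    return np.sum(npf.ppmt(rate, periods, nper, pv, when=when))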

Bizarre linalg errors with IRR

The following code throws an error:

>>> lst = [-3000.0, 2.3926932267015667e-07, 4.1672087103345505e-16, 5.3965110036378706e-25, 5.1962551071806174e-34, 3.7202955645436402e-43, 1.9804961711632469e-52, 7.8393517651814181e-62, 2.3072565113911438e-71, 5.0491839233308912e-81, 8.2159177668499263e-91, 9.9403244366963527e-101, 8.942410813633967e-111, 5.9816122646481191e-121, 2.9750309031844241e-131, 1.1002067043497954e-141, 3.0252876563518021e-152, 6.1854121948207909e-163, 9.4032980015353301e-174, 1.0629218520017728e-184, 8.9337141847171845e-196, 5.5830607698467935e-207, 2.5943122036622652e-218, 8.9635842466507006e-230, 2.3027710094332358e-241, 4.3987510596745562e-253, 6.2476630372575209e-265, 6.598046841695288e-277, 5.1811095266842017e-289, 3.0250999925830644e-301, 1.3133070599585015e-313]
>>> np.irr(lst)
...
LinAlgError: Array must not contain infs or NaNs

Bizarrely, if you change the e-313 to e-300 or e-330, the problem goes away on its own:

>>> np.irr(lst[:30] + [1.31330705996e-300])
-0.9999999990596069

>>> np.irr(lst[:30] + [1.31330705996e-330])
-0.9999999990596069

I don't have enough numerics-fu to know what's going on here, unfortunately. For now I'm happy to round the array to 10 decimal places, which works around it, but thought I'd file a bug since the behavior is pretty weird.

TST: Replace `assert_almost_equal` checks with `assert_allclose` in test suite

assert_almost_equal is no longer the recommended way of checking that two numbers are close to one another, due to inconsistencies in how floating point numbers are handled. It is recommended to use one of assert_allclose, assert_array_almost_equal_nulp or assert_array_max_ulp instead for more consistent floating point comparisons.

This issue is to replace assert_almost_equal with the appropriate check
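
For example, one of the existing mirr checks could become the following (decimal=4 in assert_almost_equal corresponds to an absolute tolerance of roughly 1.5e-4):

import numpy_financial as npf
from numpy.testing import assert_allclose

val = [-4500, -800, 800, 800, 600, 600, 800, 800, 700, 3000]
# before: assert_almost_equal(npf.mirr(val, 0.08, 0.055), 0.0666, 4)
assert_allclose(npf.mirr(val, 0.08, 0.055), 0.0666, atol=1.5e-4)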

DOC: Document how to run the benchmarking suite

Issue with current documentation:

The documentation currently does not exist.

Idea or request for content:

There should be some documentation on how to run the benchmarking suite. This should include: how to install the required dependencies (poetry install --with bench), how to actually run the suite, how to use asv to publish a website of results.

BUG: ipmt not working with numpy==1.26.4 when using a pandas df as input.

Describe the issue:

I'm not sure if this is a bug or a feature request, but I used to be able to use pandas DataFrame columns as input arrays to npf.ipmt(). With numpy 1.26.4 and pandas 2.2.1 that yields a ValueError. I think it might be due to a change in np.broadcast_arrays().

Reproduce the code example:

import pandas as pd
import numpy_financial as npf
import numpy as np

df = pd.DataFrame({'rate': [0.05, 0.07], 'periods':[np.array([1,2,3,4]), np.array([1,2,3,4])], 'pv': [10000, 12000], 'nper': [10,10]})

#ValueError
npf.ipmt(df['rate'], df['periods'], df['nper'], df['pv'])

#Works
npf.ipmt(np.array(df['rate'].tolist()).reshape(-1,1), np.array(df['periods'].tolist()), np.array(df['nper'].tolist()).reshape(-1,1), np.array(df['pv'].tolist()).reshape(-1,1))

Error message:

Traceback (most recent call last)
Cell In[4], line 2
      1 df = pd.DataFrame({'rate': [0.05, 0.07], 'periods':[np.array([1,2,3,4]), np.array([1,2,3,4])], 'pv': [10000, 12000], 'nper': [10,10]})
----> 2 npf.ipmt(df['rate'], df['periods'], df['nper'], df['pv'])

File ~\.conda\envs\test\Lib\site-packages\numpy_financial\_financial.py:394, in ipmt(rate, per, nper, pv, fv, when)
    392 try:
    393     ipmt = np.where(when == 1, ipmt/(1 + rate), ipmt)
--> 394     ipmt = np.where(np.logical_and(when == 1, per == 1), 0, ipmt)
    395 except IndexError:
    396     pass

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

Runtime information:

1.26.4
3.11.8 | packaged by Anaconda, Inc. | (main, Feb 26 2024, 21:34:05) [MSC v.1916 64 bit (AMD64)]

[{'numpy_version': '1.26.4',
'python': '3.11.8 | packaged by Anaconda, Inc. | (main, Feb 26 2024, '
'21:34:05) [MSC v.1916 64 bit (AMD64)]',
'uname': uname_result(system='Windows', node='xxx', release='10', version='10.0.19045', machine='AMD64')},
{'simd_extensions': {'baseline': ['SSE', 'SSE2', 'SSE3'],
'found': ['SSSE3',
'SSE41',
'POPCNT',
'SSE42',
'AVX',
'F16C',
'FMA3',
'AVX2'],
'not_found': ['AVX512F',
'AVX512CD',
'AVX512_SKX',
'AVX512_CLX',
'AVX512_CNL',
'AVX512_ICL']}}]

Context for the issue:

No response

PPMT not behaving as expected

Hello,

My apologies if this isn't the right place to post, I am a simple user rather than a developer. I just couldn't find a more appropriate location for this issue.

I have been using ppmt to calculate the amortisation schedule on a portfolio of loans; however, the output from the function is incorrect. An extremely simple example with constants rather than variables:

npf.ppmt(0.1479, 297, 300, -270.51)

or

npf.ppmt(rate=0.1479, per=297, nper=300, pv=-270.51)

Returns:

2463.202029

This is obviously incorrect - how can a loan of 270 dollars pay a principal of 2463 dollars in just one period?

For reference, the correct output (obtained through Excel and a Texas Instruments financial calculator) is 24.03.

Thank you.

BENCH: Add air speed velocity as a benchmarking suite

Proposed new feature or change:

NumPy-Financial is on the cusp of being re-written in numba as gufuncs. This may have a performance impact that I would like to measure. This ticket is to track the progress of adding a benchmarking suite using asv.

npf.rate() returns list of all nans where only one input PV/FV pair is problematic

Hello,

Today I noticed that when a list of present values and a list of future values are used as input to npf.rate(), every element in the returned list is nan if even one present/future value pair has the same sign. Is this behavior intentional? I would have expected only that particular problematic pair to return nan, as the other pairs' results remain valid and calculable.

As you can see from the reproducible example below, all you need to do to resolve this case is drop the last element from each list, and the preceding elements (all of which have opposing signs) calculate as expected. Leave the last elements in place and you get back a list of all nan.

Thanks!
-Tyler

import numpy_financial as npf

pv = [-593.06, -4725.38, -662.05, -428.78, -13.65]
fv = [214.07, 4509.97, 224.11, 686.29, -329.67]

print(npf.rate(2, 0, pv, fv))

[nan nan nan nan nan]

print(npf.rate(2, 0, pv[0:-1], fv[0:-1]))

[-0.39920185 -0.02305873 -0.41818459 0.26513414]

DOC: Convert developer docs from Markdown to restructured text

Issue with current documentation:

The two files for developer docs:

  • building_with_poetry.md
  • getting_the_code.md

are currently Markdown files. Although Sphinx supports Markdown, it does not support many of the same features as reStructuredText.

Idea or request for content:

To enable better support for the documentation, these files should be converted to reStructuredText.

API: Move business days functions to `numpy-financial`

Hi!
I'm writing with a question regarding a proposed API change. In the ongoing NEP 52 work, one of the goals is to remove needless or confusing entries from the top-level NumPy namespace (for the NumPy 2.0 release).

There are four business-day functions in the NumPy API: is_busday, busday_count, busday_offset and busdaycalendar. I wonder if it makes sense to move these functions to numpy-financial and remove them from numpy.

What do you think about this idea?

Incorrect IRR with cashflow phasing

Hi Kai,

I found another scenario with strange behaviour in the IRR calc (current master branch). In the second example below I have pushed the investment back to the second period, and the IRR comes out incorrect. I'm not familiar with the maths behind IRR, and I understand that in most scenarios there is an assumption of an initial investment (a negative cash flow) in the first period, which is not the case in the second example.

clfws = [-161445.03, 2113.73, 7626.73, 8619.84, 8612.92]

irr_out = irr(clfws)
print(f"Working IRR : {irr_out}")


clfws = [2113.73, -161445.03, 7626.73, 8619.84, 8612.92]

irr_out = irr(clfws)
print(f"Not Working IRR : {irr_out}")

Output:

Working IRR : -0.43658134635468815
Not Working IRR : 75.33123197333728

import numpy_financial as npf

Hello, I am trying to run import numpy_financial as npf in a Jupyter notebook, but I get this error:
NameError: name 'numpy_financial' is not defined.
I am sorry, I am new to Python.

Financial functions ipmt and ppmt result in incorrect values for per=1

The numpy.ipmt and numpy.ppmt functions seem to misbehave when per=1. The ipmt+ppmt still equals the result of numpy.pmt, but as you can see below, ipmt results in 0 for per=1, while ppmt equals the full amount of principal+interest.

I noticed that https://github.com/numpy/numpy/blob/master/numpy/lib/financial.py contains an exception for per=1, but I do not think that should be there:
ipmt = np.where(np.logical_and(when == 1, per == 1), 0.0, ipmt)

# Principal calculation (notice second line)
numpy.ppmt(rate=0.001988079518355057, per=0, nper=360, pv=300000, fv=0, when="begin")=-568.922801885
numpy.ppmt(rate=0.001988079518355057, per=1, nper=360, pv=300000, fv=0, when="begin")=-1165.29433577
numpy.ppmt(rate=0.001988079518355057, per=2, nper=360, pv=300000, fv=0, when="begin")=-571.18717807
numpy.ppmt(rate=0.001988079518355057, per=3, nper=360, pv=300000, fv=0, when="begin")=-572.3227436
# ...etc

# Interest calculation (notice second line)
numpy.ipmt(rate=0.001988079518355057, per=0, nper=360, pv=300000, fv=0, when="begin")=-596.37153388933
numpy.ipmt(rate=0.001988079518355057, per=1, nper=360, pv=300000, fv=0, when="begin")=0.0
numpy.ipmt(rate=0.001988079518355057, per=2, nper=360, pv=300000, fv=0, when="begin")=-594.1071577047084
numpy.ipmt(rate=0.001988079518355057, per=3, nper=360, pv=300000, fv=0, when="begin")=-592.9715921748406
# ...etc

Wrong IRR calculated (discrepency from Excel)

Reproducing code example:

import numpy as np

a = np.array([-50, -100, 600, 300, -100])
print(np.irr(a))

This will yield a negative result (-76%), while the actual IRR should be 185%.

np.pmt documentation is misleading on calculating monthly rate

Documentation

To calculate the monthly rate, you should compute (1 + annual_rate) ** (1/12) - 1 rather than simply dividing the annual rate by 12.

    Examples
    --------
    >>> import numpy_financial as npf

    What is the monthly payment needed to pay off a $200,000 loan in 15
    years at an annual interest rate of 7.5%?

    >>> npf.pmt(0.075/12, 12*15, 200000)
    -1854.0247200054619

    In order to pay-off (i.e., have a future-value of 0) the $200,000 obtained
    today, a monthly payment of $1,854.02 would be required.  Note that this
    example illustrates usage of `fv` having a default value of 0.

Should be rephrased to

    Examples
    --------
    >>> import numpy_financial as npf

    What is the monthly payment needed to pay off a $200,000 loan in 15
    years at an annual interest rate of 7.5%?

    >>> npf.pmt(1.075**(1/12) - 1, 12*15, 200000)
    -1826.1657857130267

    In order to pay-off (i.e., have a future-value of 0) the $200,000 obtained
    today, a monthly payment of $1,826.17 would be required.  Note that this
    example illustrates usage of `fv` having a default value of 0.

https://github.com/numpy/numpy-financial/blob/master/numpy_financial/_financial.py#L220-L232
