
scikit-posthocs's Introduction

Hello! πŸ‘‹

Here is a list of my repos:

Repository Description
Science
bitermplus Biterm topic models (for short texts)
fixation-correction Fixation Correction plugin for Pupil Labs software
mchmm Markov chains and hidden Markov models
scikit-na Missing data analysis
scikit-posthocs Multiple pairwise comparison tests
tmplot Visualization of topic modeling results
Web
youtrack-vscode-extension YouTrack Extension for Visual Studio Code
droneci-vscode-extension Drone CI Extension for Visual Studio Code
tab Start page generator
randpaper Random photo from Pexels
CLI tools
ranger-archives Ranger plugin for creating and extracting archives
ranger-cmus Ranger plugin for integration with the cmus audio player
mufi Simple music finder for command-line
lastfm-cli-scrobbler Last.fm CLI scrobbler
Arduino
spectrumLED Spectrum analyzer on a LED matrix
heart-sensor Heart rate sensor
lastfm-cli-scrobbler Mood lamp based on arduino and WS2812 RGB leds
Linux WMs
i3blocks-blocklets Useful blocklets for i3blocks
awesome-wm-widgets Widgets for Awesome WM

scikit-posthocs's People

Contributors

bmcfee, chengmingbo, denissonleal, kempa-liehr, maximtrp, nategeorge, pedroilidio, raamana, synapticarbors, theavey, yongcaihuang, ysbach

scikit-posthocs's Issues

Solving ValueError: 'All numbers are identical in mannwhitneyu'

Hi,

I often use your posthoc_mannwhitney function, and I get a ValueError ('All numbers are identical in mannwhitneyu') when two groups are composed of identical numbers. But I think we should still adjust the p-values, including the p-value (= 1.0) from those comparisons, so I modified the code in _posthocs.py like this.

def _posthoc_mannwhitney(
        a: Union[list, np.ndarray, DataFrame],
        val_col: str = None,
        group_col: str = None,
        use_continuity: bool = True,
        alternative: str = 'two-sided',
        p_adjust: str = None,
        sort: bool = True) -> DataFrame:
    '''Pairwise comparisons with Mann-Whitney rank test.

    Parameters
    ----------
    a : array_like or pandas DataFrame object
        An array, any object exposing the array interface or a pandas
        DataFrame. Array must be two-dimensional.

    val_col : str, optional
        Name of a DataFrame column that contains dependent variable values (test
        or response variable). Values should have a non-nominal scale. Must be
        specified if `a` is a pandas DataFrame object.

    group_col : str, optional
        Name of a DataFrame column that contains independent variable values
        (grouping or predictor variable). Values should have a nominal scale
        (categorical). Must be specified if `a` is a pandas DataFrame object.

    use_continuity : bool, optional
        Whether a continuity correction (1/2.) should be taken into account.
        Default is True.

    alternative : ['two-sided', 'less', or 'greater'], optional
        Whether to get the p-value for the one-sided hypothesis
        ('less' or 'greater') or for the two-sided hypothesis ('two-sided').
        Defaults to 'two-sided'.

    p_adjust : str, optional
        Method for adjusting p values.
        See statsmodels.sandbox.stats.multicomp for details.
        Available methods are:
        'bonferroni' : one-step correction
        'sidak' : one-step correction
        'holm-sidak' : step-down method using Sidak adjustments
        'holm' : step-down method using Bonferroni adjustments
        'simes-hochberg' : step-up method  (independent)
        'hommel' : closed method based on Simes tests (non-negative)
        'fdr_bh' : Benjamini/Hochberg  (non-negative)
        'fdr_by' : Benjamini/Yekutieli (negative)
        'fdr_tsbh' : two stage fdr correction (non-negative)
        'fdr_tsbky' : two stage fdr correction (non-negative)

    sort : bool, optional
        Specifies whether to sort DataFrame by group_col or not. Recommended
        unless you sort your data manually.

    Returns
    -------
    result : pandas.DataFrame
        P values.

    Notes
    -----
    Refer to `scipy.stats.mannwhitneyu` reference page for further details.

    Examples
    --------
    >>> x = [[1,2,3,4,5], [35,31,75,40,21], [10,6,9,6,1]]
    >>> sp.posthoc_mannwhitney(x, p_adjust = 'holm')
    '''
    x, _val_col, _group_col = __convert_to_df(a, val_col, group_col)
    x = x.sort_values(by=[_group_col, _val_col], ascending=True) if sort else x

    groups = x[_group_col].unique()
    x_len = groups.size
    vs = np.zeros((x_len, x_len))
    xg = x.groupby(_group_col)[_val_col]
    tri_upper = np.triu_indices(vs.shape[0], 1)
    tri_lower = np.tril_indices(vs.shape[0], -1)
    vs[:, :] = 0

    combs = it.combinations(range(x_len), 2)

    for i, j in combs:  # I modified this section
        try:
            vs[i, j] = ss.mannwhitneyu(
                xg.get_group(groups[i]),
                xg.get_group(groups[j]),
                use_continuity=use_continuity,
                alternative=alternative)[1]
        except ValueError as e:
            if str(e) == "All numbers are identical in mannwhitneyu":
                vs[i, j] = 1.0
            else:
                raise e

    if p_adjust:
        vs[tri_upper] = multipletests(vs[tri_upper], method=p_adjust)[1]

    vs[tri_lower] = np.transpose(vs)[tri_lower]
    np.fill_diagonal(vs, 1)
    return DataFrame(vs, index=groups, columns=groups)

Is this the right solution?

I'm not sure, but this error may not occur with other versions of scipy.stats.

Use posthoc tests to plot critical difference diagram

It seems there are many post-hoc tests implemented, and it would be really good if there were an easy interface to plot critical difference diagrams with any chosen pairwise test.

In the Orange framework, only a few types of tests are implemented, and the interface isn't very intuitive.
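For reference, here is a rough sketch (not an existing package feature) of the two inputs such an interface would combine: average ranks per group and a pairwise p-value matrix from any chosen post-hoc test. The wide table of accuracies below is made up for illustration.

import pandas as pd
import scikit_posthocs as sp

# Hypothetical wide table: rows are datasets, columns are classifiers, values are accuracies
scores = pd.DataFrame({
    'clf_a': [0.81, 0.78, 0.90, 0.85],
    'clf_b': [0.79, 0.75, 0.88, 0.83],
    'clf_c': [0.70, 0.72, 0.80, 0.79],
})

# Average rank of each classifier across datasets (higher accuracy -> better rank)
avg_ranks = scores.rank(axis=1, ascending=False).mean()

# Pairwise p-values from the Nemenyi post-hoc test on the same wide table
p_values = sp.posthoc_nemenyi_friedman(scores)
print(avg_ranks)
print(p_values)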

Significance Plots Colour Bar Breaks Using Matplotlib >= 3.5.0

It appears that the way Matplotlib handles colour bars from version 3.5.0 onwards breaks the significance plot's colour bar:

To Reproduce

import matplotlib.pyplot as plt
import scikit_posthocs as sp
import statsmodels.api as sa

x = sa.datasets.get_rdataset('iris').data
x.columns = x.columns.str.replace('.', '')
pc = sp.posthoc_ttest(x, val_col='SepalWidth', group_col='Species', p_adjust='holm')
heatmap_args = {'linewidths': 0.25, 'linecolor': '0.5', 'clip_on': False, 'square': True,
                'cbar_ax_bbox': [0.82, 0.35, 0.04, 0.3]}
sp.sign_plot(pc, **heatmap_args)
plt.show()

[screenshot: broken colour bar]

Expected behaviour

[screenshot: expected colour bar]

Error when running Friedman Conover test.

Hi

When I try to run a Conover posthoc on my pandas dataframe I get the following error:
/opt/conda/lib/python3.6/site-packages/scikit_posthocs/_posthocs.py:705: RuntimeWarning: invalid value encountered in sqrt
tval = dif / np.sqrt(A / B)

I also suspect the Nemenyi test is affected, as I am getting values of exactly 0.001 for every condition.

Best,

function outliers_gesd has a bug when outliers > 1

Describe the bug
outliers_gesd has a bug: as the outlier count increases, the abs_d numpy array shrinks.

In file '_outliers.py':

        # Masked values
        lms = ms[-1] if len(ms) > 0 else []
        ms.append(lms + [np.argmax(abs_d)])

Here abs_d is no longer the same size as data, so np.argmax(abs_d) is not the true outlier index in the data array.
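For illustration, here is a minimal sketch (hypothetical, not the package's actual code) of the index bookkeeping the report calls for: keep an array of the original indices still under test and map each argmax on the shrunken array back through it.

import numpy as np

def gesd_candidate_indices(data, outliers=5):
    # Sketch only: returns the original-array indices of the most extreme values,
    # removing one candidate per iteration as the GESD procedure does.
    values = np.asarray(data, dtype=float)
    remaining = np.arange(values.size)        # original indices still in play
    picked = []
    for _ in range(outliers):
        abs_d = np.abs(values - values.mean())
        local = np.argmax(abs_d)              # index within the shrunken array
        picked.append(int(remaining[local]))  # map back to the original index
        values = np.delete(values, local)
        remaining = np.delete(remaining, local)
    return picked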

posthoc_tukey yields p-values of 0.900 and 0.001, which also differ from scipy.stats.tukey_hsd

First, thank you very much for this very useful package.

Describe the bug
scikit_posthocs.posthoc_tukey gives unexpected results of '0.001' and '0.900' with one dataset (see below).

scipy.stats.tukey_hsd gives similar but not identical numbers.

Note that the groups don't have identical n.

Dataset
see code below

To Reproduce

# bug report
import scikit_posthocs as sp
import scipy
import pandas as pd

data_a = [0.08331362, 0.22462052, 0.44619224, 0.34004518, 0.03146107,
           0.15828442, 0.27876282, 0.14699693, 0.3870986 , 0.33669976,
           0.38822324, 0.28127964, 0.04101782, 0.31787209, 0.20165472,
           0.40043812, 0.50580976, 0.20009951]

data_b = [ 0.14693014,  0.0055596 ,  0.19977264, -0.30859794,  0.017286  ,
            0.05342739, -0.09502465,  0.01998256,  0.06162499,  0.18634389,
            0.34667326,  0.06702727,  0.14268381,  0.13141426,  0.06344518,
            0.04185783,  0.18701589, -0.06134188,  0.02844774]

data_c = [ 0.14727163,  0.10290732, -0.09934048,  0.06231107, -0.06754609,
            0.04739071,  0.19232889,  0.03198218,  0.11590822,  0.08816257,
            0.05692482,  0.04922897, -0.06524353,  0.08966288,  0.12975986,
           -0.08346692,  0.02827149,  0.15724036,  0.05327535]

all_data = [data_a, data_b, data_c]
labels = ['a', 'b', 'c']

df = pd.DataFrame()
for i in range(3):
    cur_dict = {'Group': [labels[i]] * len(all_data[i]),
                'Data': all_data[i]}
    
    cur_df = pd.DataFrame(cur_dict)
    
    df = pd.concat([cur_df, df], ignore_index=True)
    df.reset_index()
    
print(sp.posthoc_tukey(df,
                       val_col='Data',
                       group_col='Group'))

This yields:

       c      b      a
c  1.000  0.900  0.001
b  0.900  1.000  0.001
a  0.001  0.001  1.000

These seem like very unlikely values, right?

In contrast, the scipy library yields

print(scipy.stats.tukey_hsd(data_a, data_b, data_c))
Tukey's HSD Pairwise Group Comparisons (95.0% Confidence Interval)
Comparison  Statistic  p-value  Lower CI  Upper CI
 (0 - 1)      0.200     0.000     0.103     0.297
 (0 - 2)      0.210     0.000     0.114     0.307
 (1 - 0)     -0.200     0.000    -0.297    -0.103
 (1 - 2)      0.010     0.963    -0.085     0.106
 (2 - 0)     -0.210     0.000    -0.307    -0.114
 (2 - 1)     -0.010     0.963    -0.106     0.085

Expected behavior
I am not sure what the correct result is, but it seems unlikely that the resulting p-value is '0.001' for two comparisons.

Also, it's unclear why the Tukey test in scikit-posthocs gives a different result from the scipy version.

System and package information (please complete the following information):

  • OS: Windows 10 Pro
  • Package version:
    • scikit-posthocs 0.7.0 pyhd8ed1ab_0 conda-forge
    • scipy 1.10.1 py310h309d312_1

Additional context
Other datasets give different results with more plausible p-values such as


           a           b           c
a   1.000000    0.409612    0.001678
b   0.409612    1.000000    0.053077
c   0.001678    0.053077    1.000000

More information in API documentation

The API documentation describes the available functions and names their arguments, but does not describe what those arguments mean or what the output is.

As a user of this package, I would be concerned that I had misunderstood what, for example, g is in posthoc_tukey_hsd(x, g[, alpha]). I might also struggle to figure out the format of the return value, and thus perhaps misinterpret the results. A bit of additional context would give me confidence that I was using the test I thought I was using, and doing so correctly.

I suspect that the arguments are roughly the same for all of the methods of a particular API, so this could be fixed with some preliminary text describing common variables (a, y_col, etc.) and return values. A few methods will need additions to the per-method description for unique variables like g or alpha. These cases might also benefit from links or citations of specific descriptions of the test (or plot or outlier detection method) that use the same variable names, in case the user's text defines them differently.

(For openjournals/joss-reviews#1169)

Add contribution guidelines, bug reporting, support

openjournals/joss-reviews#1169 suggests having a clear description of how to:

  • Contribute to the software
  • Report issues or problems
  • Seek support

Github provides easy setup for all of this. Adding some links and text to the README would finish the job.

Note that this does not require you to commit to providing "support" - just make it clear what level of support a user might expect, and where best to go to get it (GitHub issues, mailing list, email the author, etc.)

No results for Likert-type scale items

I'm really excited to use the new posthocs package, and have been trying to run the Dunn test (posthoc_dunn) on results from a survey over the last few days (I have about 1100 respondents). I have no problem when I run it on results that represent the difference between two feeling thermometers (a variable that ranges from -100 to 100). But every time I try to run it on a Likert-type scale item that takes values of 1 through 3 or 1 through 5, it returns a table full of null results (NaN in all cells except the diagonal). This comes along with a series of warnings as follows:

  1. _posthocs.py:191: RuntimeWarning: invalid value encountered in sqrt:
    z_value = diff / np.sqrt((A - x_ties) * B)
  2. multitest.py:176: RuntimeWarning: invalid value encountered in greater
    notreject = pvals > alphaf / np.arange(ntests, 0, -1)
  3. multitest.py:251: RuntimeWarning: invalid value encountered in greater
    pvals_corrected[pvals_corrected>1] = 1

I am not a programming expert, but my impression is that what is happening here is that the compare_dunn function (lines 187-193 in posthoc.py) is not returning valid p-values, and I am guessing that this is because (A - x_ties) is negative for some reason and so the np.sqrt function isn't computing a value for the z_value.

I played around with some groups of small arrays involving combinations of values ranging from 1 to 3 and 1 to 5, on the same scale as my data. Sometimes these had no problem returning valid results and other times they yielded the same NaNs that I get with my full dataset. I'm wondering if the issue has something to do with the total number or overall proportion of ties in the data. Obviously with Likert-type scale items there are a lot of ties. I'd love your thoughts on whether it's something that can be fixed to make analysis on this type of data possible. Thanks!!

Don't understand the group_col

I don't know how to use group_col. The error is KeyError: "['j' 'k' 'm'] not in index".

x = pd.DataFrame({"k":[1, 2 , 4, 5, 6], "j":[1, 3, 5, 7, 66],"m":[11, 222, 444, 5655, 777]}).T
sp.posthoc_conover(x, val_col=[0, 1, 2, 3, 4], group_col=["j","k","m"],p_adjust = 'holm') 

Can you tell me the right way? Thank you.
Please forgive my poor English.
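For reference, a minimal sketch of the long-format layout these functions expect (the same numbers as above, reshaped so that val_col and group_col are single column names):

import pandas as pd
import scikit_posthocs as sp

# One column of values, one column of group labels
df = pd.DataFrame({
    'value': [1, 2, 4, 5, 6, 1, 3, 5, 7, 66, 11, 222, 444, 5655, 777],
    'group': ['k'] * 5 + ['j'] * 5 + ['m'] * 5,
})
sp.posthoc_conover(df, val_col='value', group_col='group', p_adjust='holm')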

Wilcoxon paired test is not really paired

scikit_posthocs.posthoc_wilcoxon implements the Wilcoxon paired test; that is, measurements should be paired: before and after, different treatments on the same patient, etc.
The current implementation only uses two columns, val_col and group_col, so the result depends on the order of rows in the dataframe.
There must be another variable to match measurements, or the wide format should be used, with each row giving the measurements for the same item.
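A minimal sketch of one way the pairing could be made explicit (hypothetical data with a subject identifier; shown with scipy.stats.wilcoxon rather than the package function):

import pandas as pd
from scipy import stats

df = pd.DataFrame({
    'subject':   [1, 2, 3, 4, 1, 2, 3, 4],
    'treatment': ['before'] * 4 + ['after'] * 4,
    'value':     [5.1, 6.0, 5.5, 7.2, 4.8, 5.9, 5.0, 6.9],
})

# Pivot to wide format so each row is one subject and the pairing is explicit
wide = df.pivot(index='subject', columns='treatment', values='value')
stat, p = stats.wilcoxon(wide['before'], wide['after'])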

JOSS Review: Clarify statistical functionality of this package

This relates to requirements stated in openjournals/joss-reviews#1169. Both the statement of need and the functionality provided by this software need further clarification.

From reading the README and paper.md alone, I don't immediately understand what scikit-posthocs does compared to statsmodels' multiple testing functionality. It would be helpful to have one or two paragraphs explaining exactly which sorts of multiple testing scenarios an end user should use scikit-posthocs for, rather than, say, just invoking the multiple testing functionality of statsmodels.

The PMCMR package in R has a lot more detail in its documentation. However, it seems they implement a number of use-cases that this package does not yet support.

Negative p-value in the matrix

Hello, maybe this is not a bug; I am just wondering. I performed the Conover test after a Kruskal-Wallis test and realised that, in the matrix, the diagonal p-value is negative. Should it not be 1, because you are comparing a distribution to itself?

Categorical dtype is not respected by all post hoc functions

Hello, author!
Thank you for a great package. Great work!

I would like to suggest implementing a "sort" feature that would sort the columns and rows of the output matrix by the order given in a list (passed via sort).

For example,
input: scikit_posthocs.posthoc_dunn(df, val_col='value', group_col='group', sort=['group_7', 'group_11', 'group_3'])
result:

          group_7  group_11  group_3
group_7        -1       0.5      0.5
group_11      0.5        -1      0.5
group_3       0.5       0.5       -1

  • columns and rows in the right order

  • sorry, I don't know how to change the label to enhancement.
    Thanks!

Add hypo argument for outliers_gesd

I'm happy to contribute this if you want, but it would be good to have a hypo argument for outliers_gesd() like with outliers_tietjen(). The reason being: when you get a boolean mask, you can use it to deal with the outliers as you like - for example, clipping them to max/min values. This is important for cleaning outliers in multivariate datasets, which are pretty much all of them.
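A minimal sketch of the idea (the hypo=True flag on outliers_gesd is the requested addition, assumed here by analogy with outliers_tietjen):

import numpy as np
import scikit_posthocs as sp

x = np.array([-0.25, 0.68, 0.94, 1.15, 2.26, 2.35, 2.37, 2.40, 2.47, 245.57])

# Assumed flag, per the request: return a boolean mask marking the outliers
mask = sp.outliers_gesd(x, hypo=True)
# Clip outliers to the min/max of the non-outlying values instead of dropping them
clipped = np.clip(x, x[~mask].min(), x[~mask].max())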

Dunnetts test

Thanks for this package!
One useful addition (unless I am missing it) would be Dunnett's test for comparison against a control.

"bonferroni" posthoc ,The results of p are all zero, however the right results as following:

Hey,
Thanks for your work. I'm trying to run the "bonferroni" posthoc in a Linux environment, and the code is:
sp.posthoc_ttest(data, val_col='VALUE', group_col='group', sort="True", equal_var="True", pool_sd="False", p_adjust="bonferroni")

The p-values are all zero in the Linux environment (and I got totally different results when I ran the same code on Windows 10). The correct results are shown below:
[screenshot: expected p-value matrix]

Thanks for your great work.

System and package information (please complete the following information):

  • OS: (Linux version 3.10.0-327.36.3.el7.x86_64)
  • Package version: (0.5.1)

Dataset

group VALUE
1 8.82
1 8.92
1 8.27
1 8.83
2 11.8
2 9.58
2 11.46
2 13.25
3 10.37
3 10.59
3 10.24
3 8.33
4 12.08
4 11.89
4 11.6
4 11.51

fail to import scikit_posthocs

Hi,

I have been using scikit_posthocs (version 0.6.6) in Python 3.7.1 for a while. Today, when I tried to import scikit_posthocs, I got this error:
ModuleNotFoundError: No module named 'statsmodels.stats.libqsturng'

I checked with pip show statsmodels, and version 0.12.0 was shown. I reinstalled with 'pip install statsmodels' but got the same error message. I reinstalled with 'pip install scikit-posthocs', but the problem remained.
Running out of ideas. What could be the reason?

Thanks!

Results grouping after post-hoc test

Hi, I was wondering if there is any chance of including a feature where post hoc results are grouped according to their relationship.
I know that in R there are the packages multcompLetters and multcompview, which offer such a feature.
I could find some people looking for a feature like this, but no feasible solution was found.

Example:
https://stackoverflow.com/questions/48841650/python-algorithm-on-letter-based-representation-of-all-pairwise-comparisons

There are solution attempts in these topics, but I could not reproduce them:
https://stackoverflow.com/questions/43987651/tukey-test-grouping-and-plotting-in-scipy
https://stackoverflow.com/questions/49963138/label-groups-from-tuekys-test-results-according-to-significant

It looks like there is a paper describing the algorithm for implementing this:
Hans-Peter Piepho (2004) An Algorithm for a Letter-Based Representation of All-Pairwise Comparisons, Journal of Computational and Graphical Statistics, 13:2, 456-466, DOI: 10.1198/1061860043515

By the way, thanks for this project, it is awesome!

Bonferroni p_adjust method not taking into account enough comparisons

Hi, this is a great package, for which I am very grateful. I was using the example dataset 'iris'. I divided the output of the posthoc test with p_adjust='bonferroni' by the output with p_adjust=None to work out how many comparisons it was taking into account. The dataset has 3 categories (x), so the Bonferroni correction should make (x^2)-x comparisons (6), but it seems to be making only 3 comparisons, meaning that the test is not conservative enough.

import scipy.stats as ss
import statsmodels.api as sa
import scikit_posthocs as sp
df = sa.datasets.get_rdataset('iris').data
sp.posthoc_mannwhitney(df, val_col='Sepal.Width', group_col='Species', use_continuity=True, alternative="two-sided", p_adjust= None, sort=True)

I have made a workaround because I wanted to be able to set the number of comparisons manually anyway. I just thought it should be brought to someone's attention.

def adjustedP(testDf, numberComparisons = 'all', stars = True): #accepts the dataframe output of the post-hoc tests with p_adjust = None
    try:
        int(numberComparisons)
    except ValueError:
        if numberComparisons == 'all':
            numberComparisons = (len(testDf.columns)*len(testDf.columns))-len(testDf.columns)
        else:
            raise ValueError("'numberComparisons' should be an integer or 'all'")
            #print(numberComparisons)  
    print("Number of comparisons being made: " + str(numberComparisons))
    Bonferroni = testDf * numberComparisons
    for index, row in Bonferroni.iterrows():
        #print(row)
        #print(index)
        for  colIndex in Bonferroni:
            #print(colIndex)
            val = Bonferroni.loc[index,colIndex]
            #print(val)
            if val < -1. and stars == False: 
                Bonferroni.loc[index,colIndex] = -1
            if val <= -1. and stars == True: 
                Bonferroni.loc[index,colIndex] = "---"
            if val > 1. and stars == False: 
                Bonferroni.loc[index,colIndex] = 1
            if val > 0.05 and stars == True: 
                Bonferroni.loc[index,colIndex] = "---"
            if 0.01 < val <= 0.05 and stars == True: 
                Bonferroni.loc[index,colIndex] = "*"
            if 0.001 < val <= 0.01 and stars == True: 
                Bonferroni.loc[index,colIndex] = "**"
            if  0. <= val < 0.01 and stars == True: 
                Bonferroni.loc[index,colIndex] = "***"
            if  val < 0. and stars == True: 
                Bonferroni.loc[index,colIndex] = "---"
    if stars ==True: 
        print("0.01 < val <= 0.05 -- *")
        print("0.001 < val <= 0.01-- **")
        print("val < 0.0------------ ***")
    return Bonferroni
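For reference, a hypothetical call to this workaround, reusing the unadjusted Mann-Whitney output from the snippet above:

raw = sp.posthoc_mannwhitney(df, val_col='Sepal.Width', group_col='Species',
                             use_continuity=True, alternative='two-sided',
                             p_adjust=None, sort=True)
adjustedP(raw, numberComparisons=6, stars=True)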

I'm new to GitHub and coding in general, so I'm sorry if these instructions are difficult to follow. I can give more information on request.

Post-hocs test for dataframes with different group / block / y column names break

Hi,

I cannot use the post-hoc tests for dataframes with melted=True and group_col != 'groups', block_col != 'blocks', or y_col != 'y'. Basically, anything that deviates from the example

sp.posthoc_nemenyi_friedman(data, y_col='y', block_col='blocks', group_col='groups', melted=True)

breaks the code. The error is likely due to __convert_to_block_df (https://github.com/maximtrp/scikit-posthocs/blob/master/scikit_posthocs/_posthocs.py), which returns the old y_col, group_col, block_col values but assigns the column names "groups" / "blocks" / "y":

def __convert_to_block_df(a, y_col=None, group_col=None, block_col=None, melted=False):
    # ...
    elif isinstance(a, DataFrame) and melted:
        x = DataFrame.from_dict({'groups': a[group_col],
                                 'blocks': a[block_col],
                                 'y': a[y_col]})
    # ...
    return x, y_col, group_col, block_col

On a somewhat related note: I wanted to implement / use these tests to plot CD diagrams as suggested in "J. Demsar (2006), Statistical comparisons of classifiers over multiple data sets, Journal of Machine Learning Research, 7, 1-30.", which you also cite in the documentation. However, I have a difficult time understanding what "blocks", "groups", and "y" mean in this context. More specifically, are blocks (or groups?) the different classifiers or the datasets, and is y the ranks or the accuracies? You don't happen to have some example code and/or an explanation of how to plot CD diagrams?
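For what it's worth, a minimal sketch of the melted layout such a call expects, under the common reading for CD diagrams (datasets as blocks, classifiers as groups, accuracies as y; ranks are computed internally). This is an illustration with made-up numbers, not the package's documented example:

import pandas as pd
import scikit_posthocs as sp

data = pd.DataFrame({
    'blocks': ['ds1', 'ds1', 'ds1', 'ds2', 'ds2', 'ds2'],               # datasets
    'groups': ['clf_a', 'clf_b', 'clf_c', 'clf_a', 'clf_b', 'clf_c'],   # classifiers
    'y':      [0.81, 0.79, 0.70, 0.78, 0.75, 0.72],                     # accuracies
})
sp.posthoc_nemenyi_friedman(data, y_col='y', block_col='blocks',
                            group_col='groups', melted=True)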

Thanks!

Rotation of the vertical ticks in the significance matrix.

Dear Maksim, thank you for creating this useful script. I would like to rotate the ticks on the vertical axis by 90 degrees. I tried g.tick_params(labelrotation=45) after calling g = sp.sign_plot(), but got "'tuple' object has no attribute 'tick_params'".
Can you help me understand how to do this? Thanks in advance.
[screenshot: significance matrix]
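A sketch of a possible workaround, assuming the returned tuple is (axes, colorbar) as the error suggests: unpack it and call tick_params on the axes.

import matplotlib.pyplot as plt
import scikit_posthocs as sp
import statsmodels.api as sa

x = sa.datasets.get_rdataset('iris').data
pc = sp.posthoc_ttest(x, val_col='Sepal.Width', group_col='Species', p_adjust='holm')
ax, cbar = sp.sign_plot(pc)                 # unpack instead of keeping the tuple
ax.tick_params(axis='y', labelrotation=90)  # rotate the vertical tick labels
plt.show()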

Question: Post-hoc dunn return non-significant

Hi! Thanks a lot for creating this analysis tool.

I would like to check whether it is normal that a post-hoc analysis using the Dunn test, after Kruskal-Wallis, returns no significant result at all among the pairwise comparisons.

Another question: does the Dunn test require multiple comparison correction? Either way (with or without correction), I don't get anything significant, even though the Kruskal-Wallis test rejects the null hypothesis.

Include statement of audience

The JOSS review requirements include:

Do the authors clearly state what problems the software is designed to solve and who the target audience is?

The first part is covered in the documentation (and covered better in the paper - it might make sense to use that longer paragraph in the documentation as well). However, neither the documentation nor the paper describes the target audience.

My understanding of that requirement is that I, as a reader, should be able to quickly determine "is this for me". For example, is this useful for post-hoc tests on analyses done in other statistical systems such as SAS? Does it support post-hoc tests for analyses other than ANOVA?

sign_plot significance order

Hi there!

Thanks for the nice library!

A small suggestion on the sign_plot method: I believe it's better to put the legend in the order ['NS', 'p<0.05', 'p<0.01', 'p<0.001'] rather than the current version with 'NS' at the end, because 'NS' is the situation where p > 0.05, and it's more logical to sort the colormap in either descending or ascending order.

Best

Cannot import under python 2.7 by using Spyder

Describe the bug
I cannot import scikit_posthocs under Python 2.7 using Spyder. When I run the following code:
import scikit_posthocs as sp

It gives the following error:

import scikit_posthocs as sp
File "//anaconda2/lib/python2.7/site-packages/scikit_posthocs/_global.py", line 6
def global_simes_test(x: Union[List, ndarray]) -> float:
^
SyntaxError: invalid syntax

Could you give a solution? Thanks.

Comparing to a control?

Hi there,

Thanks for the package! I am using friedman tests paired with the Nemenyi test, and this works very nicely for looking at all pairwise comparisons.

I was wondering if there was a way to compare vs a control method using say the Holm method?
I can see there is a Holm option with both conover and siegel, but I do not believe these compare against a control (correct me if I'm wrong).

Thank you

Results differ slightly from PMCMR

First off, congrats on the great idea of porting this to Python.
I just ran it on R's InsectSprays data (with no p_adjust method) and I'm getting slightly different p-values than those from PMCMR for a couple of the group comparisons (B & C and F & C). Do you know why that might be?
Thanks!

set pandas version requirements.

When I call scikit_posthocs.posthoc_nemenyi_friedman(), the following error occurs, so the pandas version requirement should be at least 0.20.0:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-24-cc591614cee8> in <module>()
----> 1 sp.posthoc_nemenyi_friedman(x)

~/anaconda/lib/python3.6/site-packages/scikit_posthocs/_posthocs.py in posthoc_nemenyi_friedman(a, y_col, block_col, group_col, melted, sort)
    524             x.columns.name = group_col
    525             x.index.name = block_col
--> 526             x = x.reset_index().melt(id_vars=block_col, var_name=group_col, value_name=y_col)
    527 
    528         else:

~/anaconda/lib/python3.6/site-packages/pandas/core/generic.py in __getattr__(self, name)
   2742             if name in self._info_axis:
   2743                 return self[name]
-> 2744             return object.__getattribute__(self, name)
   2745 
   2746     def __setattr__(self, name, value):

AttributeError: 'DataFrame' object has no attribute 'melt'
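A minimal sketch of how the requirement could be pinned, assuming a setuptools-based setup.py (the project's actual packaging configuration may differ):

from setuptools import setup

setup(
    name='scikit-posthocs',
    install_requires=[
        'numpy',
        'scipy',
        'statsmodels',
        'pandas>=0.20.0',  # DataFrame.melt was added in pandas 0.20.0
        'matplotlib',
        'seaborn',
    ],
)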

Structure for the tests

Hi Maksim!

Nice package, thanks for sharing! I am trying to use it and hoping to contribute.

Before I use it, I'd like to understand how it is tested. For example, if you look at

r_results = np.array([[-1, 4.390066e-02, 9.570998e-09],

or any other test, you seem to have pre-selected the outputs/results to compare. Where do they come from? From the PMCMR R package? I tried to look into the PMCMR package to see their tests - I can't see them in the distributed package; do you know if it is tested?

What are the other principles behind the tests you wrote?

Tests for plotting, outliers

I see and appreciate a good test suite for the post-hoc tests that make up the "core" of this library, but I don't see any tests for the plotting and outlier APIs. As a contributor, any change I made to either of those files (perhaps to improve performance, for example) would not be subject to any automated testing and could perhaps introduce errors that would invalidate analyses done with the tools.

I see that the post-hoc tests basically take some input data and test that the output closely matches the expected value, and do so once for each post-hoc test. That may fail to catch "edge cases" but still successfully executes each function and evaluates the common case. Doing the same for plotting and outliers should be sufficient.
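For illustration, a couple of hypothetical smoke tests in that spirit: they only check that the functions run on a small fixed input, without asserting exact rendering or values.

import numpy as np
import pandas as pd
import scikit_posthocs as sp

def test_sign_plot_runs():
    # A tiny symmetric p-value matrix with ones on the diagonal
    pc = pd.DataFrame([[1.0, 0.04, 0.001],
                       [0.04, 1.0, 0.20],
                       [0.001, 0.20, 1.0]],
                      index=list('abc'), columns=list('abc'))
    assert sp.sign_plot(pc) is not None

def test_outliers_gesd_runs():
    x = np.array([-0.25, 0.68, 0.94, 1.15, 2.26, 2.35, 245.57])
    assert sp.outliers_gesd(x, outliers=1) is not None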

(For openjournals/joss-reviews#1169)

JOSS Review: Clarify Statement of need

This is related to #18 as well. The package introduction and documentation assume that the audience knows a great deal about the pairwise multiple comparisons that arise after conducting an ANOVA. However, these scenarios are quite different from other sorts of multiple testing comparisons that arise.

It would be great to provide a canonical example/scenario that a user can relate to.
Relevant to openjournals/joss-reviews#1169

Is it possible to also return the test statistic instead of only p-values?

Hi,

Thank you so much for this useful python package!

As I was running post-hoc tests, I noticed that the functions in scikit_posthocs only return p-values and do not return the test statistic. Would it be possible to add that, or is there a specific reason you left it out of the results table?

For example, when running Dunn's test using posthoc_dunn, I would like to also see the respective pairwise z test statistic in each cell of the results table (similar to the dunn.test R package).

Thank you!

v0.6.8 in __init__.py

Dear Maksim,
I wanted to notify you that I tried installing scikit-posthocs with "conda install -c conda-forge scikit-posthocs". However, the installed version is still 0.6.7. According to __init__.py, it should be 0.6.8.
Best regards
Atilio

Second code block in README doesn't work

I tried running the first two code blocks in the README, and they fail with NameError: name 'Sepal' is not defined at the following line:

lm = sfa.ols('Sepal.Width ~ C(Species)', data=df).fit()

This works:

lm = sfa.ols('df["Sepal.Width"] ~ C(Species)', data=df).fit()

I think the issue is that the column names clash with Pandas attribute notation (a.k.a 'dot notation'). However, adding in the bracket notation probably doesn't look like the R-style formula it is based on. Maybe renaming the columns would be a good compromise that keeps the R-like style formula more intact?
Below works:

df = df.rename(columns={'Sepal.Width':'SepalWidth'})
lm = sfa.ols('SepalWidth ~ C(Species)', data=df).fit()
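Another option that keeps the formula intact, assuming patsy's Q() quoting for column names containing dots (statsmodels formulas support it):

lm = sfa.ols('Q("Sepal.Width") ~ C(Species)', data=df).fit()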

custom comparisons on scikit_posthocs.posthoc_tukey

I want to perform the Tukey test of comparisons for a subset of pairs.

The problem I face is that scikit_posthocs.posthoc_tukey works as if I were interested in all possible comparisons; therefore, the applied correction is based on all tests. However, I am only hypothesizing about a subset of pairs.

Do you plan to add this functionality?
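In the meantime, one rough sketch of a workaround (not a package feature): compute unadjusted pairwise tests and correct only the hypothesized subset. Note this uses pairwise t-tests with Holm rather than the Tukey studentized-range correction.

import statsmodels.api as sa
import scikit_posthocs as sp
from statsmodels.stats.multitest import multipletests

df = sa.datasets.get_rdataset('iris').data
raw = sp.posthoc_ttest(df, val_col='Sepal.Width', group_col='Species', p_adjust=None)

pairs = [('setosa', 'versicolor'), ('setosa', 'virginica')]   # hypothesized subset
pvals = [raw.loc[a, b] for a, b in pairs]
adjusted = multipletests(pvals, method='holm')[1]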

Thank you very much,

Gustavo

Invalid Syntax

Describe the bug
A clear and concise description of what the bug is.

Dataset
Please provide a link to the dataset you get the bug with.

To Reproduce
Steps to reproduce the behavior:

  1. Go to '...'
  2. Click on '....'
  3. Scroll down to '....'
  4. See error

Expected behavior
A clear and concise description of what you expected to happen.

System and package information (please complete the following information):

  • OS: (e.g. Linux 4.20.0-arch1-1-ARCH x86_64 GNU/Linux)
  • Package version: (e.g. 0.4.0)

Additional context
Add any other context about the problem here.

[Feature request] outliers_gesd to return data even when report=True

I have found this package extremely useful for quantitative data analysis of my astronomical data. outliers_gesd in particular is a simple, powerful function for me at this specific moment.

While using it, I wonder why it "returns" a str when report=True. My expectation was that it would print the report while returning the same output as with report=False.

I wonder if this was an intention. :)

sign_plot error with underflowing p-values

I noticed a quirk in sign_plot when inputting a p-value array whose entries had underflowed to 0.0F, presumably due to floating point precision. The logic here maps 0 entries to NS, which is exactly what one would not want to happen.

I was able to hack around this by adding 1e-40 to the entries before rendering the plot, but it could be avoided altogether by moving this line to the end, and changing this line to use df>=0.
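A sketch of the workaround described above: replace underflowed zeros with a tiny value before plotting (the 1e-40 threshold is arbitrary, and the p-value matrix here is made up for illustration).

import pandas as pd
import scikit_posthocs as sp

pc = pd.DataFrame([[1.0, 0.0, 0.03],
                   [0.0, 1.0, 0.20],
                   [0.03, 0.20, 1.0]],
                  index=list('abc'), columns=list('abc'))
pc_safe = pc.mask(pc == 0, 1e-40)   # nudge exact zeros away from the NS mapping
sp.sign_plot(pc_safe)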

pd.Grouper like group_col and level in the deep [enhancement]

Hello!

It would be great to check the difference between weeks.
For example,
posthoc_dunn(df, value='value', group_col=pd.Grouper(key='date', freq='W'))

where 'date' is DATE (by days or even hours) and freq='W' merges it by weeks.

It will be useful when we want to find significant differences between weeks/days/months, etc.

P.S. But it's not very important, because we can create a new feature in the dataframe with this grouping (for example, df['date'].dt.week); see the sketch at the end of this issue.

Level in the deep: it means we could send a list with the order of columns and choose how many neighbours will be checked, or even a dictionary where the key is a group and the value is a list of the groups it should be compared with. For example:

{group_1 : [group_2, group_3],
group_2 : [group_3, group_4]}

This would help reduce the penalty from p_adjust.

thanks
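A minimal sketch of the workaround mentioned in the P.S., assuming a datetime 'date' column and a 'value' column: derive a week label first and use it as group_col.

import pandas as pd
import scikit_posthocs as sp

df = pd.DataFrame({
    'date': pd.date_range('2021-01-04', periods=28, freq='D'),
    'value': range(28),
})
df['week'] = df['date'].dt.isocalendar().week   # week label to group by
sp.posthoc_dunn(df, val_col='value', group_col='week')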

Can't Install

I can install scikit-posthocs on my Python 3.7 machine, but when I try to install it on the machine running Python 3.8, I keep getting error messages. Using pip install scikit-posthocs:

WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1123)'))': /simple/scikit-posthocs/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1123)'))': /simple/scikit-posthocs/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1123)'))': /simple/scikit-posthocs/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1123)'))': /simple/scikit-posthocs/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1123)'))': /simple/scikit-posthocs/
Could not fetch URL https://pypi.org/simple/scikit-posthocs/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/scikit-posthocs/ (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1123)'))) - skipping
ERROR: Could not find a version that satisfies the requirement scikit-posthocs
ERROR: No matching distribution found for scikit-posthocs
Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1123)'))) - skipping

posthoc_dscf Calculation

In the definition of posthoc_dscf, ni and nj have been swapped.
CURRENT

def posthoc_dscf(a, val_col=None, group_col=None, sort=False):
    ...
    def compare(i, j):
        ...
        u = np.array([nj * ni + (nj * (nj + 1) / 2),
                      nj * ni + (ni * (ni + 1) / 2)]) - r
        ...

TRUTH

        u = np.array([nj * ni + (ni * (ni + 1) / 2),
                      nj * ni + (nj * (nj + 1) / 2)]) - r
