
PySODEvalToolkit's Introduction

A Python-based image grayscale/binary segmentation evaluation toolbox.

Chinese Documentation

TODO

  • More flexible configuration script.
    • Use a YAML file that meets matplotlib's requirements to control the plotting style.
    • Replace JSON with a more flexible configuration format, such as YAML or TOML.
  • Add test scripts.
  • Add more detailed comments.
  • Optimize the code for exporting evaluation results.
    • Implement code to export results to XLSX files.
    • Optimize the code for exporting to XLSX files.
    • Consider whether a text format like CSV would be better; it can be opened as plain text and also organized in Excel.
  • Replace os.path with pathlib.Path.
  • Improve the code for grouping data, supporting tasks like CoSOD, Video Binary Segmentation, etc.
  • Support a concurrency strategy to speed up computation. Multi-threading support is retained; the previous multi-processing code has been removed.
    • Currently, the use of multi-threading causes some extra log information to be written, which still needs optimization.
  • Separate USVOS code into another repository PyDavis16EvalToolbox.
  • Use the faster and more accurate metric code PySODMetrics as the evaluation backend.

Tip

  • Some methods provide result names that do not match the original dataset's ground truth names.
    • [Note] (2021-11-18) Both prefix and suffix names are now supported, so users generally do not need to rename files themselves.
    • [Optional] The provided script tools/rename.py can be used to rename files in bulk (a minimal sketch of the same idea follows this list). Please use it carefully to avoid overwriting data.
    • [Optional] Other tools can also be used, such as rename on Linux and Microsoft PowerToys on Windows.
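
For reference, a bulk rename can also be done with a few lines of Python. The following is only a minimal sketch of the idea behind tools/rename.py (not its actual implementation); the directory and prefix values are hypothetical, and you should back up your data first, since renaming can overwrite files:

from pathlib import Path

# Hypothetical example values; adjust them to your own data.
pred_dir = Path("Path_Of_Method1/LFSD")
old_prefix, new_prefix = "method1_", "some_method_prefix"

for p in sorted(pred_dir.glob(old_prefix + "*.png")):
    target = p.with_name(new_prefix + p.name[len(old_prefix):])
    if target.exists():  # refuse to silently overwrite existing files
        print(f"skip {p.name}: {target.name} already exists")
        continue
    p.rename(target)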

Features

  • Benefiting from PySODMetrics, it supports a richer set of metrics; for more details, see utils/recorders/metric_recorder.py (a minimal usage sketch of the metric API follows this list).
    • Supports evaluating grayscale images, such as predictions from salient object detection (SOD) and camouflaged object detection (COD) tasks.
      • MAE
      • Emeasure
      • Smeasure
      • Weighted Fmeasure
      • Maximum/Average/Adaptive Fmeasure
      • Maximum/Average/Adaptive Precision
      • Maximum/Average/Adaptive Recall
      • Maximum/Average/Adaptive IoU
      • Maximum/Average/Adaptive Dice
      • Maximum/Average/Adaptive Specificity
      • Maximum/Average/Adaptive BER
      • Fmeasure-Threshold Curve (run eval.py with the metric fmeasure)
      • Emeasure-Threshold Curve (run eval.py with the metric em)
      • Precision-Recall Curve (run eval.py with the metrics precision and recall; unlike previous versions, the calculation of precision and recall has been separated from fmeasure)
    • Supports evaluating binary images, such as common binary segmentation tasks.
      • Binary Fmeasure
      • Binary Precision
      • Binary Recall
      • Binary IoU
      • Binary Dice
      • Binary Specificity
      • Binary BER
  • Richer functionality.
    • Supports evaluating models according to the configuration.
    • Supports drawing PR curves, F-measure curves and E-measure curves based on configuration and evaluation results.
    • Supports exporting results to TXT files.
    • Supports exporting results to XLSX files (re-supported on January 4, 2021).
    • Supports exporting LaTeX table code from generated .npy files, and marks the top three methods with different colors.
    • … :>.
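
As a quick taste of the underlying metric API, the following minimal sketch evaluates a single prediction/ground-truth pair with PySODMetrics directly. The class and method names below reflect my reading of the py_sod_metrics package, and the file paths are hypothetical; see utils/recorders/metric_recorder.py for how this toolbox actually wires the metrics up:

import cv2
import py_sod_metrics

# Read one prediction and its ground truth as grayscale images.
pred = cv2.imread("pred/00001.png", cv2.IMREAD_GRAYSCALE)
gt = cv2.imread("gt/00001.png", cv2.IMREAD_GRAYSCALE)

mae, sm = py_sod_metrics.MAE(), py_sod_metrics.Smeasure()
mae.step(pred=pred, gt=gt)  # call step() once per image pair
sm.step(pred=pred, gt=gt)
print(mae.get_results()["mae"], sm.get_results()["sm"])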

How to Use

Installing Dependencies

Install the required libraries: pip install -r requirements.txt

The metric evaluation is based on another project of mine: PySODMetrics. Bug reports are welcome!

Configuring Paths for Datasets and Method Predictions

This project relies on JSON files to store data. Examples of dataset and method configurations are provided in ./examples: config_dataset_json_example.json and config_method_json_example.json. You can modify them directly for the subsequent steps.

Note

  • Since this project relies on OpenCV to read images, ensure that the path strings contain no non-ASCII characters.
  • Make sure that the dataset names in the dataset configuration file match those in the method configuration file. After preparing the JSON files, it is recommended to use the provided tools/check_path.py to verify that the path information is correct (a rough sketch of such a check follows).
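
The following is only a rough sketch of the kind of consistency check tools/check_path.py performs (not its actual implementation): it loads both JSON files and verifies that every dataset referenced by a method is known and that the configured directories exist.

import json
import os

with open("examples/config_dataset_json_example.json") as f:
    datasets = json.load(f)
with open("examples/config_method_json_example.json") as f:
    methods = json.load(f)

# Every dataset referenced by a method should exist in the dataset config,
# and every configured path should point at a real directory.
for method_name, dataset_infos in methods.items():
    for dataset_name, info in dataset_infos.items():
        if dataset_name not in datasets:
            print(f"{method_name}: unknown dataset {dataset_name}")
        elif not os.path.isdir(info["path"]):
            print(f"{method_name}/{dataset_name}: missing {info['path']}")
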
More Details on Configuration

Example 1: Dataset Configuration

Note, "image" is not necessary here. The actual evaluation only reads "mask".

{
    "LFSD": {
        "image": {
            "path": "Path_Of_RGBDSOD_Datasets/LFSD/Image",
            "prefix": "some_gt_prefix",
            "suffix": ".jpg"
        },
        "mask": {
            "path": "Path_Of_RGBDSOD_Datasets/LFSD/Mask",
            "prefix": "some_gt_prefix",
            "suffix": ".png"
        }
    }
}

Example 2: Method Configuration

{
    "Method1": {
        "PASCAL-S": {
            "path": "Path_Of_Method1/PASCAL-S",
            "prefix": "some_method_prefix",
            "suffix": ".png"
        },
        "ECSSD": {
            "path": "Path_Of_Method1/ECSSD",
            "prefix": "some_method_prefix",
            "suffix": ".png"
        },
        "HKU-IS": {
            "path": "Path_Of_Method1/HKU-IS",
            "prefix": "some_method_prefix",
            "suffix": ".png"
        },
        "DUT-OMRON": {
            "path": "Path_Of_Method1/DUT-OMRON",
            "prefix": "some_method_prefix",
            "suffix": ".png"
        },
        "DUTS-TE": {
            "path": "Path_Of_Method1/DUTS-TE",
            "suffix": ".png"
        }
    }
}

Here, path is the directory where the image data is stored. prefix and suffix are the parts of the predicted-image and ground-truth file names that lie outside their shared portion.

During the evaluation, method predictions are matched with dataset ground truths by the shared part of their file names; the naming pattern is assumed to be [prefix]+[shared-string]+[suffix]. For example, given predicted images method1_00001.jpg, method1_00002.jpg, method1_00003.jpg and ground-truth images gt_00001.png, gt_00002.png, gt_00003.png, we can configure it as follows (a minimal matching sketch appears after Example 4):

Example 3: Dataset Configuration

{
    "dataset1": {
        "mask": {
            "path": "path/Mask",
            "prefix": "gt_",
            "suffix": ".png"
        }
    }
}

Example 4: Method Configuration

{
    "method1": {
        "dataset1": {
            "path": "path/dataset1",
            "prefix": "method1_",
            "suffix": ".jpg"
        }
    }
}
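
To make the matching rule concrete, here is a minimal sketch (not the toolbox's actual code) of how the shared string can be recovered from the [prefix]+[shared-string]+[suffix] pattern and used to pair predictions with ground truths:

import os

def shared_name(filename, prefix="", suffix=""):
    # Strip the configured prefix and suffix to recover the shared part.
    name = filename[len(prefix):] if filename.startswith(prefix) else filename
    return name[: len(name) - len(suffix)] if name.endswith(suffix) else name

preds = {shared_name(f, "method1_", ".jpg"): f for f in os.listdir("path/dataset1")}
gts = {shared_name(f, "gt_", ".png"): f for f in os.listdir("path/Mask")}
pairs = [(preds[k], gts[k]) for k in sorted(preds.keys() & gts.keys())]
# e.g. [("method1_00001.jpg", "gt_00001.png"), ...]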

Running the Evaluation

  • Once all the previous steps are completed correctly, you can begin the evaluation. For usage of the evaluation script, see the output of the command python eval.py --help.
  • Add configuration options as needed and execute the command. If no exception occurs, result files with the specified names will be generated.
    • If no output files are specified, the results are printed directly; see the help information of eval.py for details.
    • If --curves-npy is specified, the metric information needed for plotting will be saved to the corresponding .npy file (see the loading sketch after this list).
  • [Optional] You can use tools/converter.py to export LaTeX table code directly from the generated .npy files.
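
If you want to inspect an exported .npy file yourself, here is a minimal sketch, assuming (as the recorder code suggests) that the file stores a pickled Python dict rather than a plain array:

import numpy as np

# allow_pickle=True is required because the file stores a dict, not an array.
curves = np.load("output/rgb_sod/curves.npy", allow_pickle=True).item()
print(list(curves.keys()))  # e.g. the evaluated dataset names at the top level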

Plotting Curves for Grayscale Image Evaluation

You can use plot.py to read the .npy file and organize and draw the PR, F-measure, and E-measure curves for the specified methods and datasets. The usage of this script can be seen in the output of python plot.py --help; add configuration items as needed and execute the command.

The most basic advice is to set the figure.figsize item in the style configuration file to a value that matches the number of subplots, as in the hypothetical sketch below.
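
For illustration only, a hypothetical style file might look like the following, assuming (per the TODO above) that the keys map directly to matplotlib rcParams; consult examples/single_row_style.yml for the real format:

figure.figsize: [8, 3]  # width and height in inches; widen this as the subplot count per row grows
savefig.format: pdf     # the extension that plot.py appends to --save-name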

A Basic Execution Process

Here I'll use the RGB SOD configuration in my local configs folder as an example (make the necessary modifications according to your actual situation).

# Check Configuration Files
python tools/check_path.py --method-jsons configs/methods/rgb-sod/rgb_sod_methods.json --dataset-jsons configs/datasets/rgb_sod.json

# After ensuring there's nothing unreasonable in the output information, you can begin the evaluation with the following commands:
# --dataset-json: Set `configs/datasets/rgb_sod.json` as dataset configuration file
# --method-json: Set `configs/methods/rgb-sod/rgb_sod_methods.json` as method configuration file
# --metric-npy: Set `output/rgb_sod/metrics.npy` to store the metrics information in npy format
# --curves-npy: Set `output/rgb_sod/curves.npy` to store the curves information in npy format
# --record-txt: Set `output/rgb_sod/results.txt` to store the results information in text format
# --record-xlsx: Set `output/rgb_sod/results.xlsx` to store the results information in Excel format
# --metric-names: Specify `fmeasure em precision recall` as the metrics to be calculated
# --include-methods: Specify the methods from `configs/methods/rgb-sod/rgb_sod_methods.json` to be evaluated
# --include-datasets: Specify the datasets from `configs/datasets/rgb_sod.json` to be evaluated
python eval.py --dataset-json configs/datasets/rgb_sod.json --method-json configs/methods/rgb-sod/rgb_sod_methods.json --metric-npy output/rgb_sod/metrics.npy --curves-npy output/rgb_sod/curves.npy --record-txt output/rgb_sod/results.txt --record-xlsx output/rgb_sod/results.xlsx --metric-names sm wfm mae fmeasure em precision recall --include-methods MINet_R50_2020 GateNet_2020 --include-datasets PASCAL-S ECSSD

# Once you've obtained the curve data file, which in this case is the 'output/rgb_sod/curves.npy' file, you can start drawing the plot.

# For a simple example, after executing the command below, the result will be saved as 'output/rgb_sod/simple_curve_pr.pdf':
# --style-cfg: Specify the style configuration file `examples/single_row_style.yml`. Since there are only a few subplots, a single-row configuration can be used directly.
# --num-rows: The number of subplots in the figure.
# --curves-npys: Use the curve data file `output/rgb_sod/curves.npy` to draw the plot.
# --mode: Use `pr` to draw the `pr` curve, `em` to draw the `E-measure` curve, and `fm` to draw the `F-measure` curve.
# --save-name: Just provide the image save path without the file extension; the code will append the file extension as specified by the `savefig.format` in the `--style-cfg` you designated earlier.
# --alias-yaml: A yaml file that specifies the method and dataset aliases to be used in the plot.
python plot.py --style-cfg examples/single_row_style.yml --num-rows 1 --curves-npys output/rgb_sod/curves.npy --mode pr --save-name output/rgb_sod/simple_curve_pr --alias-yaml configs/rgb_aliases.yaml

# A more complex example: after executing the command below, the result will be saved as 'output/rgb_sod/complex_curve_pr.pdf'.

# --style-cfg: Specify the style configuration file `examples/single_row_style.yml`. Since there are only a few subplots, a single-row configuration can be used directly.
# --num-rows: The number of subplots in the figure.
# --curves-npys: Use the curve data file `output/rgb_sod/curves.npy` to draw the plot.
# --our-methods: The specified method, `MINet_R50_2020`, is highlighted with a bold red solid line in the plot.
# --num-col-legend: The number of columns in the legend.
# --mode: Use `pr` to draw the `pr` curve, `em` to draw the `E-measure` curve, and `fm` to draw the `F-measure` curve.
# --separated-legend: Draw a shared single legend.
# --sharey: Share the y-axis, which will only display the scale value on the first graph in each row.
# --save-name: Just provide the image save path without the file extension; the code will append the file extension as specified by the `savefig.format` in the `--style-cfg` you designated earlier.
python plot.py --style-cfg examples/single_row_style.yml --num-rows 1 --curves-npys output/rgb_sod/curves.npy --our-methods MINet_R50_2020 --num-col-legend 1 --mode pr --separated-legend --sharey --save-name output/rgb_sod/complex_curve_pr

Corresponding Results

Precision-Recall Curve:

PRCurves

F-measure Curve:

fm-curves

E-measure Curve:

em-curves

Programming Reference

Relevant Literature

@inproceedings{Fmeasure,
    title={Frequency-tuned salient region detection},
    author={Achanta, Radhakrishna and Hemami, Sheila and Estrada, Francisco and S{\"u}sstrunk, Sabine},
    booktitle={CVPR},
    pages={1597--1604},
    year={2009}
}

@inproceedings{MAE,
    title={Saliency filters: Contrast based filtering for salient region detection},
    author={Perazzi, Federico and Kr{\"a}henb{\"u}hl, Philipp and Pritch, Yael and Hornung, Alexander},
    booktitle={CVPR},
    pages={733--740},
    year={2012}
}

@inproceedings{Smeasure,
    title={Structure-measure: A new way to evaluate foreground maps},
    author={Fan, Deng-Ping and Cheng, Ming-Ming and Liu, Yun and Li, Tao and Borji, Ali},
    booktitle={ICCV},
    pages={4548--4557},
    year={2017}
}

@inproceedings{Emeasure,
    title={Enhanced-alignment measure for binary foreground map evaluation},
    author={Fan, Deng-Ping and Gong, Cheng and Cao, Yang and Ren, Bo and Cheng, Ming-Ming and Borji, Ali},
    booktitle={IJCAI},
    pages={698--704},
    year={2018}
}

@inproceedings{wFmeasure,
    title={How to evaluate foreground maps?},
    author={Margolin, Ran and Zelnik-Manor, Lihi and Tal, Ayellet},
    booktitle={CVPR},
    pages={248--255},
    year={2014}
}


PySODEvalToolkit's Issues

error

Would you mind helping? When I run the code, I get this error: cannot import name '_TYPE' from 'py_sod_metrics.sod_metrics'.
I could not find this py_sod_metrics module.

In metrics/extra_metrics.py, line 3: from py_sod_metrics.sod_metrics import _TYPE, _prepare_data

Why do I get the error: cannot import name '_TYPE' from 'py_sod_metrics.sod_metrics'?

Thanks

Problems encountered with plot_results.py

No matter how I adjust things (the canvas size, tight_layout(), or subplots_adjust),
I cannot get a good-looking figure.
The canvas does not seem to change with the number and size of the subplots,
and the legend wanders around the page and is hard to control.

out.png
pr.png

pyplot is wrapped so exquisitely that I have no idea where to start @_@#
Any advice would be greatly appreciated.

Some questions about the F-measure

Hello! When I used this tool to evaluate the prediction maps provided by another work, I found that the F-measure value given by the tool is inconsistent with the one reported in that paper (theirs is a bit higher), which confused me. The paper is "Pyramid Grafting Network for One-Stage High Resolution Saliency Detection", and it describes its F-measure as max F-m.
My question is: how does your implementation of the F-measure obtain max F-m? Is this max F-m computed per image or over the whole dataset? Do you have any idea why the F-m value used in that work is slightly higher? I am a bit confused and would sincerely appreciate your guidance. Thank you very much!

A question about obtaining data

Hello, what should I do if I only want to obtain some metrics for the prediction results generated by my currently trained model?

Question about the threshold of the F-measure

Thank you for sharing this cool toolkit.
I have a question about the threshold in the F-measure plotting code.

After the np.flip operation, fg_w_thrs becomes [>= 255, >= 254, ..., >= 0].
https://github.com/lartpang/Py-SOD-VOS-EvalToolkit/blob/f9c1fd5ffeef1a58067e31b9e6d28e9eb0754c46/metrics/sod/metrics.py#L66

When saving the curve data into an .npy file for plotting, these lines flip the fm data again (so now fm should be [0, 1, ..., 255]).
https://github.com/lartpang/Py-SOD-VOS-EvalToolkit/blob/f9c1fd5ffeef1a58067e31b9e6d28e9eb0754c46/eval_sod_all_methods.py#L132-L136

But when plotting the image, the x axis (threshold) here runs from 1 to 0. I guess it should run from 0 to 1.
https://github.com/lartpang/Py-SOD-VOS-EvalToolkit/blob/f9c1fd5ffeef1a58067e31b9e6d28e9eb0754c46/eval_sod_all_methods.py#L198

Errors when drawing the fmeasure and PR curves

The command I used was: python eval.py --dataset-json E:\network\PySODEvalToolkit-master\examples\config_dataset_json_example.json --method-json E:\network\PySODEvalToolkit-master\examples\config_method_json_example.json fmeasure
The error is as follows:

Traceback (most recent call last):
  File "eval.py", line 170, in <module>
    main()
  File "eval.py", line 149, in main
    exclude_methods=args.exclude_methods,
  File "E:\network\PySODEvalToolkit-master\utils\generate_info.py", line 109, in get_methods_info
    raise FileNotFoundError(f"{f} is not be found!!!")
FileNotFoundError: fmeasure is not be found!!!

Why are multiple legends generated?

Using the following commands:
python tools/check_path.py --method-jsons MyJson/cos_method.json --dataset-jsons MyJson/cos_dataset.json

python eval.py --method-json MyJson/cos_method.json --dataset-json MyJson/cos_dataset.json --metric-npy output/metrics.npy --curves-npy output/curves.npy

python plot.py --style-cfg MyJson/single_row_style2.yml --num-rows 2 --curves-npys output/curves.npy --mode pr --save-name output/result
I get the message "No artists with labels found to put in legend. Note that artists whose label start with an underscore are ignored when legend() is called with no argument." and multiple legends are generated.
result

Configuration information

Hello, I don't quite understand how the models for config_method_py_exampl.py are supposed to be placed. Could you provide a screenshot of the layout? Is <your_methods_path> meant to hold the trained model weights, or the model structure?

Evaluate methods separately, then merge them for plotting?

Hello, is there a way to evaluate methods separately and then merge the results for plotting?

In the RGBD-SOD test dataset NJU2K, some top-conference papers use 498 images, others use 495, and 500 is the most common. I would like to first evaluate the methods that use 500 images, then adjust the dataset (removing the missing 5 or 2 images) and evaluate the methods that use 495 or 498.

Can the results of the two runs be merged into one plot?
(e.g. plotting 10 methods + 1 method together)

Thanks again.

import py_sod_metrics ??

Hello.
I could not find the file py_sod_metrics. Has it been uploaded?

Traceback (most recent call last):
  File "/content/drive/MyDrive/camouflaged/project1/s1/Weakly-Supervised-Camouflaged-Object-Detection-with-Scribble-Annotations/PySODEvalToolkit/./eval.py", line 7, in <module>
    from metrics import cal_sod_matrics
  File "/content/drive/MyDrive/camouflaged/project1/s1/Weakly-Supervised-Camouflaged-Object-Detection-with-Scribble-Annotations/PySODEvalToolkit/metrics/cal_sod_matrics.py", line 14, in <module>
    from utils.recorders import (
  File "/content/drive/MyDrive/camouflaged/project1/s1/Weakly-Supervised-Camouflaged-Object-Detection-with-Scribble-Annotations/PySODEvalToolkit/utils/recorders/__init__.py", line 4, in <module>
    from .metric_recorder import (
  File "/content/drive/MyDrive/camouflaged/project1/s1/Weakly-Supervised-Camouflaged-Object-Detection-with-Scribble-Annotations/PySODEvalToolkit/utils/recorders/metric_recorder.py", line 50, in <module>
    "handler": py_sod_metrics.FmeasureHandler,
AttributeError: module 'py_sod_metrics' has no attribute 'FmeasureHandler'

python plot.py reports an error

After running python eval.py, curves.npy was generated, but running python plot.py raises the following error:

Traceback (most recent call last):
  File "/data1/lkw/PySODEvalToolkit/plot.py", line 132, in <module>
    main(args)
  File "/data1/lkw/PySODEvalToolkit/plot.py", line 98, in main
    draw_curves.draw_curves(
  File "/data1/lkw/PySODEvalToolkit/metrics/draw_curves.py", line 145, in draw_curves
    y_data = method_results["precision"]
KeyError: 'precision'

What might this error be?

Import errors in the code

I'd like to ask whether this can run on Windows.
I ran into many import errors.

Error when running eval_all.py

Hello, I ran into the following problem:

C:\Users\wickyan\.conda\envs\torch\python.exe F:/PySODEvalToolkit/examples/eval_all.py
`../output` already exists
Traceback (most recent call last):
  File "F:/PySODEvalToolkit/examples/eval_all.py", line 70, in <module>
    use_mp=False,  # using multi-threading
  File "F:\PySODEvalToolkit\metrics\cal_sod_matrics.py", line 113, in cal_sod_matrics
    max_method_name_width=max([len(x) for x in drawing_info.keys()]),  # show the full names
ValueError: max() arg is an empty sequence

进程已结束,退出代码为 1

Setting max_method_name_width to a fixed value of 8 instead makes the output written to the output folder "empty":

 ========>> Date: 2021-11-18 01:01:04.917841 <<======== 

 ========>> Dataset: LFSD <<======== 

 ========>> Dataset: NLPR <<======== 

 ========>> Date: 2021-11-18 01:43:13.467784 <<======== 

 ========>> Dataset: LFSD <<======== 

 ========>> Dataset: NLPR <<======== 

Checking the paths with check_path.py reports: basically normal.
Could you tell me what is going wrong? Thanks.

python eval.py raises AssertionError: e

python check_path.py

/home/lkw/PySODEvalToolkit/examples/config_method_json_example.json & /home/lkw/PySODEvalToolkit/examples/config_dataset_json_example.json: basically normal

python eval.py (arguments omitted)

Traceback (most recent call last):
  File "/data1/lkw/PySODEvalToolkit/eval.py", line 168, in <module>
    main()
  File "/data1/lkw/PySODEvalToolkit/eval.py", line 137, in main
    datasets_info = get_datasets_info(
  File "/data1/lkw/PySODEvalToolkit/utils/generate_info.py", line 156, in get_datasets_info
    targeted_datasets = get_valid_elements(
  File "/data1/lkw/PySODEvalToolkit/utils/generate_info.py", line 61, in get_valid_elements
    assert element in source, element
AssertionError: e

What is going on here?

A question about the curve axes

Thank you very much for your outstanding work! However, I have some questions about the x-axis of the F-measure and E-measure curves. What does the threshold mean, and why do the F-measure and E-measure change as the threshold changes?
