Comments (5)
I think that would be a great solution. Even better than what I was thinking.
Having it applied to the whole process is perfect.
I could definitely help with that. Count me in!
I'll try to implement the functionality too, if you don't mind.
Hi @ogabrielluiz!
First of all, thanks for showing interest in this humble project as well as for your suggestion. I think this is a wonderful idea that would make pyss3 better, allowing users to define their own custom evaluation metrics (not only in code but also in the 3D evaluation plot). Plus, I think it shouldn't be too hard to actually implement, and hence shouldn't take too much time to get working.
Lately, I've been very busy trying to finish a job that I must get done by the end of this month. Nevertheless, I'll try to give it a shot this weekend. What do you think? ☕ 💻 😃
That would be awesome.
Looking forward to seeing your approach.
In case it takes longer than you expected, I can try my hand on it too.
@ogabrielluiz! I'm really sorry that I couldn't "give it a shot" this last weekend as I was expecting. Unfortunately, I'm still far behind with the work I have to finish by the end of this month, which is extremely important.
Meanwhile, how do you think it should be implemented from the point of view of the user? I was thinking of adding a function called, for example, something like "add_metric" to the Evaluation class, which would take the new metric's name and the function that actually implements it, like so:
Evaluation.add_metric("f2-score", my_f2_score_function)
Any Evaluation function that shows (or uses) any of the standard metrics would then also show (or accept) all the new user-defined metrics. For instance, after the above line of code:
Evaluation.test(clf, x_test, y_test) # should now also include our "f2-score" among the printed results.
And
best_s, best_l, best_p, _ = Evaluation.grid_search(
clf, x_test, y_test,
s=s_vals, l=l_vals, p=p_vals,
metric=""f2-score"" # <- now, our new metric should also be accepted
)
And the new metric would also be included in the 3D Evaluation Plot.
What do you think about it? The idea behind this is to treat new user-defined metrics exactly like any other built-in metric...
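Just to illustrate, my_f2_score_function above could be a thin wrapper around scikit-learn. Note this is only a sketch: the exact signature pyss3 will expect from custom metric functions is still to be decided; here I'm assuming it simply receives the true and the predicted labels, as scikit-learn metrics do:

from sklearn.metrics import fbeta_score

def my_f2_score_function(y_true, y_pred):
    # Macro-averaged F2 score: like F1, but weighting recall twice as much as precision
    return fbeta_score(y_true, y_pred, beta=2, average="macro")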
I don't think I'll be able to implement any of this until 2 or 3 weeks from now.
Would you like me to send you a "collaborator" request so you can get full access to this repo in case you want to help me out? Any type of help would be really appreciated. For instance, adding a new Jupyter Notebook to the "examples" folder as a tutorial for this new functionality (pretending the function is already implemented) would not only help me test it but would also help users learn how to use custom metrics for evaluations once it is actually implemented (you'll also be added as a contributor in the README file).
Excellent! And I don't mind at all; any kind of help would be really appreciated. That's very kind of you. Just make sure your pull requests follow the "seven rules of a great Git commit message" from "How to Write a Git Commit Message" and you're good to go, buddy! 😀
Let me know if you need any help, we could even use Discord (or Slack) in case it is needed 👍