Since this is our first dog-breed-classification website, this issue is open for all users who visit our website and test our model by uploading external images. If you find any error, feel free to leave a comment here describing what you've found. I will fix it as soon as I hear from you!
For urgent matters, please kindly tag @pavaris-pm into the loop so that I can reply much faster!
Feature Requests [WIP]
add a custom component (e.g. a probability-distribution display) to enhance our website, by @pavaris-pm
train a classifier model using a Convolutional AutoEncoder (CAE) to classify dog breeds, by @pavaris-pm
#6 by @pavaris-pm for code implementation, co-authored with @supermind-c for bug fixing and model training
format and set up the code of conduct (code refactoring for wrong imports and NoneType input/output specifications) by @pavaris-pm
add unit tests for test-driven development
optimize the inference speed of the web application and solve the problem found in #7, by @supermind-c, with a newly deployed webapp on a new domain
etc. (tag me and describe any error you've found)
For Contributors
Note that PRs are also welcome; let's keep contributing to this project!
You can contribute to this project as follows:
Fork this repo
Clone the repository
Add a feature or fix a bug in the source code
Make a PR, then add @pavaris-pm to review your code (don't forget to link it to this issue)
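The steps above can be sketched as git commands. Since the real repository URL isn't stated here, this sketch simulates the fork with a local `upstream` repository, and the branch and file names are only illustrative:

```shell
set -e

# 1. "Fork this repo" -- on GitHub this is the Fork button; simulated here
#    with a local upstream repository holding one commit
git init -q upstream
git -C upstream -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "init"

# 2. Clone the repository (your fork)
git clone -q upstream my-fork

# 3. Add a feature or fix a bug on a topic branch
git -C my-fork checkout -q -b fix-model-loading
echo "fix" > my-fork/patch.txt
git -C my-fork add patch.txt
git -C my-fork -c user.name=dev -c user.email=dev@example.com \
    commit -q -m "Fix model weight loading"

# 4. Push the branch, then open a PR on GitHub and request review
#    from @pavaris-pm (commented out: needs the real remote)
# git -C my-fork push origin fix-model-loading
```

Working on a topic branch rather than `main` keeps the eventual PR diff small and easy to review.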
Even though it looks great at 92% validation accuracy with the ConvNext architecture, @supermind-c made a visualization of my torch training loop, and it seems my code for displaying the training statistics (e.g. training accuracy, training loss) in the Colab notebook is wrong. Since it is a logical error rather than a syntax error, I will fix it and post the link in this separate PR so that you won't confuse it with #1, which is the main issue.
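Without seeing the notebook, one common logical error of this kind is averaging per-batch losses without weighting by batch size, or reporting accuracy from the last batch only. A framework-agnostic sketch of sample-weighted epoch statistics (the function name and inputs are illustrative, not the project's actual code):

```python
def epoch_stats(batch_results):
    """Aggregate per-batch (mean_loss, num_correct, batch_size) tuples
    into true epoch-level averages.

    Weighting each batch's mean loss by its size matters whenever the
    last batch is smaller than the rest, which is the usual case.
    """
    total_loss = total_correct = total_seen = 0
    for mean_loss, num_correct, batch_size in batch_results:
        total_loss += mean_loss * batch_size  # un-average, then re-average
        total_correct += num_correct
        total_seen += batch_size
    return total_loss / total_seen, total_correct / total_seen

# Two batches of different sizes: a naive mean of (0.5, 1.0) would
# report 0.75, but the sample-weighted loss is lower.
avg_loss, acc = epoch_stats([(0.5, 90, 100), (1.0, 5, 10)])
# avg_loss ~ 0.545, acc ~ 0.864
```

In a torch loop, `mean_loss` would come from `loss.item()` and `batch_size` from the tensor's first dimension; the aggregation logic itself is pure Python.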
As stated in #7, the app re-initializes the model every time a new image is uploaded, which is time-consuming since ConvNext itself is very large. Optimizing the speed by loading everything up front, before making an inference, would help a lot.
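A minimal sketch of that load-once pattern, assuming nothing about the web framework: `functools.lru_cache` keeps the loaded model in process memory, so the expensive initialization runs only on the first request instead of on every upload. The loader body and the weight path are placeholders, not the project's actual code:

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def get_model():
    # The expensive part runs only on the first call. In the real app this
    # would be something like loading the ConvNext weights, e.g.
    #   model = torch.load("weights/convnext_dogbreed.pth")  # hypothetical path
    print("loading model (runs once)")
    return object()  # stand-in for the loaded model

# Every upload handler calls get_model(); the same cached object
# comes back each time, so the model is never rebuilt per request.
model_a = get_model()
model_b = get_model()
assert model_a is model_b
```

If the app happens to use Streamlit, `st.cache_resource` on the loader achieves the same thing; either way, model construction happens once per process, not once per upload.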
As mentioned in #4, it seems I've faced a problem loading the model weights: the path is found, but the weight file appears to be corrupted or similar; the weights can be downloaded in Colab but not in Codespaces. Given that, one of the easiest solutions is to build a shell script to be executed so that everything can be done in a single environment. Any suggestion is welcome.
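A sketch of such a shell script. The weight file name is hypothetical and the download URL is left as an environment variable, since the real weight location isn't stated here; the script creates the weights directory, skips the download if the file already exists, and otherwise fetches it with curl:

```shell
#!/usr/bin/env bash
# download_weights.sh -- fetch model weights into a single environment.
# File name is hypothetical; set WEIGHTS_URL to the real weight location.
set -u

WEIGHTS_DIR="weights"
WEIGHTS_FILE="$WEIGHTS_DIR/convnext_dogbreed.pth"  # hypothetical filename

mkdir -p "$WEIGHTS_DIR"

if [ -f "$WEIGHTS_FILE" ]; then
    echo "weights already present: $WEIGHTS_FILE"
elif [ -n "${WEIGHTS_URL:-}" ]; then
    # -L follows redirects; --fail exits non-zero on HTTP errors, so a
    # corrupted/partial server response is not silently saved as weights
    curl -L --fail -o "$WEIGHTS_FILE" "$WEIGHTS_URL"
else
    echo "set WEIGHTS_URL to the weight file location before running"
fi
```

Running the same script in both Colab and Codespaces would at least make the two environments fail (or succeed) identically, which should narrow down whether the corruption comes from the download step itself.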