googleinterns / acuiti
Icon Matching with Shape Context Descriptors.
License: Apache License 2.0
Canny edge detection, contour finding, DBSCAN clustering, and a shape context descriptor implementation. This is the initial pipeline, which will be iteratively improved to achieve better accuracy and other results.
Visualizations for the icon and image, cluster sizes, histogram plots, failed cases, etc.
Adjust code as necessary when the new format is acquired.
Go through the codebase and upgrade any places still using the old set of datasets.
Check for any places where I might not be passing by reference when I should, etc.
This will be a distance returned by the shape context descriptor. It might be difficult to figure out a threshold, because the distances will be more fine-grained with more points, and coarser with fewer points.
It might be useful to upsample points whenever possible so that there isn't such a discrepancy between the number of points in the icon and the image patch (the shape context algorithm will otherwise just add random points).
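One simple way to do that upsampling is to resample the smaller pointset with replacement until the counts match. This is a hypothetical helper, not the repo's code:

```python
# Sketch: pad the smaller pointset by resampling its own points, so both
# pointsets have the same size before the shape context comparison.
import numpy as np

def upsample_points(points, target_size, rng=None):
    """Pad a pointset to target_size by resampling existing points."""
    rng = np.random.default_rng(rng)
    if len(points) >= target_size:
        return points
    extra_idx = rng.integers(0, len(points), size=target_size - len(points))
    return np.vstack([points, points[extra_idx]])
```

Duplicated points don't change the shape's outline, whereas purely random padding points would distort the histograms.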
We need to figure out what the optimal threshold is for determining the distance cutoff for which an image patch is considered a match to the template icon, whether this threshold is an absolute number or something more like a ratio.
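The two candidate threshold styles can be written down side by side. Both the absolute cutoff and the ratio below are made-up values for illustration only:

```python
# Sketch of an absolute-distance cutoff vs. a ratio-style cutoff relative
# to the typical distance across all candidate patches. Values are guesses.
def is_match(distance, all_patch_distances, absolute_cutoff=0.3, ratio=0.5):
    """Accept a patch if it is below an absolute distance, or far below
    the median distance over all candidate patches."""
    if distance < absolute_cutoff:
        return True
    ordered = sorted(all_patch_distances)
    typical = ordered[len(ordered) // 2]  # median distance
    return distance < ratio * typical
```

The ratio form has the advantage of adapting per-image, which matters if distance magnitudes shift with the number of points.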
Instead of just reading from a tfrecord, include functionality to also read in from other images. This will be helpful to test things like zoomed in images.
From the two preliminary experiments so far, we know that the number of points kept matters, both for speed and for accuracy. We're going to try to find an optimal tradeoff now. Which points we keep also somewhat matters when there are few points. So, we'll start by running contouring twice: once to identify the keypoints, and again to get all the contour points to help the DBSCAN clustering. After clustering, though, we'll use the keypoint mask to keep only the keypoints.
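The final filtering step of that plan can be sketched as a set-membership mask. The helper below is illustrative, not the repo's implementation:

```python
# Sketch: after clustering on the full pointset, keep only those cluster
# points that were flagged as keypoints in the first contouring pass.
import numpy as np

def filter_cluster_to_keypoints(cluster_points, keypoints):
    """Keep only the cluster points that are also keypoints."""
    keypoint_set = {tuple(p) for p in keypoints}
    mask = np.array([tuple(p) in keypoint_set for p in cluster_points])
    return cluster_points[mask]
```

This way the dense pointset improves DBSCAN's density estimates, while the descriptor still only sees the cheaper keypoint subset.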
Accuracy, latency, and memory usage information for a given find_icon algorithm.
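A record like that could be a small dataclass; the field names here are assumptions, not the repo's actual class:

```python
# Sketch of a per-algorithm benchmark record; fields are illustrative.
import dataclasses

@dataclasses.dataclass
class BenchmarkResult:
    """Accuracy, latency, and memory usage for one find_icon run."""
    accuracy: float          # e.g. fraction of correctly located icons
    avg_latency_secs: float  # mean seconds per image
    peak_memory_mib: float   # peak memory during the run
```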
This is good for now, I just wanted to note that it would be good to eventually have these scaling factors as part of the main icon finding pipeline (rather than just the benchmarking), so that we can adjust the size of the inputs more easily in case they do need to be scaled to the correct size range.
Address gold bounding boxes including a white padding around the actual icon.
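One way to address that is to tighten each gold box to the non-white content it contains. This sketch assumes a grayscale image where the padding is near-white; the threshold is a guess:

```python
# Sketch: shrink a gold box (x_min, y_min, x_max, y_max) so it hugs the
# non-white pixels, dropping the white padding around the actual icon.
import numpy as np

def tighten_box(gray_image, box, white_thresh=250):
    x_min, y_min, x_max, y_max = box
    patch = gray_image[y_min:y_max, x_min:x_max]
    rows = np.where((patch < white_thresh).any(axis=1))[0]
    cols = np.where((patch < white_thresh).any(axis=0))[0]
    if len(rows) == 0 or len(cols) == 0:
        return box  # nothing but padding; leave unchanged
    return (x_min + cols[0], y_min + rows[0],
            x_min + cols[-1] + 1, y_min + rows[-1] + 1)
```

Tightened boxes would make IoU-based accuracy numbers less sensitive to how much padding each annotation happened to include.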
One iteration of processing an image takes 1-2 seconds, so it could be interesting to think about parallelization.
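The most obvious axis is across images. Here `find_icon` is a stand-in for the real 1-2 second per-image function; for CPU-bound work a `ProcessPoolExecutor` (or OpenCV calls that release the GIL) would matter more than plain threads:

```python
# Sketch: run the per-image pipeline over a batch of images in parallel.
from concurrent.futures import ThreadPoolExecutor

def find_icon(image):
    """Placeholder for the real per-image icon-finding pipeline."""
    return {"image": image, "boxes": []}

def find_icon_batch(images, max_workers=4):
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(find_icon, images))
```

`pool.map` preserves input order, so results line up with the gold annotations for benchmarking.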
Try out different values of eps and min_samples in the DBSCAN algorithm to make sure that the clustering going into shape context is optimal.
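A small grid search is enough for that tuning. The scoring used below (fraction of non-noise points) is just one possible proxy for "good clustering", not the project's actual metric:

```python
# Sketch: grid search over DBSCAN's eps and min_samples, scoring each
# combination by how many points end up in some cluster (label != -1).
import numpy as np
from sklearn.cluster import DBSCAN

def grid_search_dbscan(points, eps_values, min_samples_values):
    best = None
    for eps in eps_values:
        for min_samples in min_samples_values:
            labels = DBSCAN(eps=eps, min_samples=min_samples).fit(points).labels_
            score = np.mean(labels != -1)  # fraction of points clustered
            if best is None or score > best[0]:
                best = (score, eps, min_samples)
    return best
```

A better score would compare the resulting clusters against the gold boxes, but that requires the benchmark harness.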
Consider using precision and recall instead, to support having multiple bounding boxes (icon instances) in an image.
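With multiple boxes per image, precision/recall falls out of a greedy IoU matching between predicted and gold boxes. The 0.5 IoU threshold below is a common convention, assumed here rather than taken from the repo:

```python
# Sketch: precision and recall over multiple bounding boxes per image,
# using greedy IoU matching of predictions to gold boxes.
def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def precision_recall(predicted, gold, iou_thresh=0.5):
    matched, tp = set(), 0
    for p in predicted:
        for i, g in enumerate(gold):
            if i not in matched and iou(p, g) >= iou_thresh:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall
```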
Include documentation on how to run benchmark_pipeline from the command line, how to use the optimizer that can fine-tune more parts via code, and how to change the defaults.
The size of the pointset being input to shape context descriptor is a reflection of scale currently. We might have to change that from an absolute value to a relative one, or consider resizing the images that clients pass to us. (Including for the template icon.)
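One way to make the descriptor scale-invariant without resizing client images is to normalize the pointset itself. Dividing by the mean distance to the centroid is one common normalization, assumed here for illustration:

```python
# Sketch: center a pointset and rescale it to unit mean radius, so the
# descriptor sees relative geometry rather than absolute pixel scale.
import numpy as np

def normalize_scale(points):
    points = np.asarray(points, dtype=float)
    centered = points - points.mean(axis=0)
    mean_radius = np.linalg.norm(centered, axis=1).mean()
    return centered / mean_radius if mean_radius else centered
```

With this, the template icon and a zoomed image patch produce the same normalized pointset regardless of their absolute sizes.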
Look into using logging instead of print statements (https://docs.python.org/3/howto/logging.html) and python's timedelta in the Latency class.
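Both suggestions fit together in a few lines. The shape of the `Latency` class below is an assumption about how the repo's class might look, not its existing API:

```python
# Sketch: stdlib logging instead of print, plus a timedelta-based timer.
import datetime
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("benchmark")

class Latency:
    """Times one iteration and reports the elapsed time as a timedelta."""

    def __init__(self):
        self._start = None
        self.elapsed = datetime.timedelta(0)

    def start(self):
        self._start = datetime.datetime.now()

    def stop(self):
        self.elapsed = datetime.datetime.now() - self._start
        logger.info("iteration took %s", self.elapsed)
        return self.elapsed
```

Using `timedelta` keeps arithmetic on durations (sums, averages) exact, and logging levels make it easy to silence the timing output in production.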
See if another clustering algorithm can do a better job of clustering (and hence higher recall) than DBSCAN. The recall we need to beat is 95% on the medium & large datasets!