This depends on the Solr integration, from an API perspective:
For the given field to be faceted, determine the low and high regions, and the region servers hosting them.
Find out the list of region servers (for the encompassing regions); this is similar to the multiput API introduced in trunk + 0.20.4, which could be refactored for use here.
Fire off independent RPC calls to them for the facet counts.
Consolidate the counts (a summation, keyed on term).
Return the result to the web service.
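The fan-out/consolidate steps above can be sketched with plain JDK primitives. The region-server RPC is only simulated here (a real implementation would issue an HBase call per server), and the class and method names are hypothetical, not part of HBasene:

```java
import java.util.*;
import java.util.concurrent.*;

/** Sketch of the facet fan-out flow: fire independent calls per region
 *  server, then consolidate by summing the counts for each term key. */
public class FacetFanOutSketch {

    /** Stand-in for one region server's facet-count RPC response. */
    static Map<String, Long> countsFromServer(Map<String, Long> partial) {
        return partial; // a real implementation would make an HBase RPC here
    }

    /** Merge per-server maps by summing values that share a key. */
    static Map<String, Long> consolidate(List<Map<String, Long>> partials) {
        Map<String, Long> total = new HashMap<>();
        for (Map<String, Long> p : partials) {
            for (Map.Entry<String, Long> e : p.entrySet()) {
                total.merge(e.getKey(), e.getValue(), Long::sum);
            }
        }
        return total;
    }

    public static void main(String[] args) throws Exception {
        // Simulated responses from two region servers hosting the field's regions.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<Future<Map<String, Long>>> futures = new ArrayList<>();
        futures.add(pool.submit(() -> countsFromServer(Map.of("java", 3L, "hbase", 2L))));
        futures.add(pool.submit(() -> countsFromServer(Map.of("java", 1L, "solr", 5L))));
        List<Map<String, Long>> responses = new ArrayList<>();
        for (Future<Map<String, Long>> f : futures) {
            responses.add(f.get());
        }
        pool.shutdown();

        Map<String, Long> total = consolidate(responses);
        System.out.println(total.get("java")); // 3 + 1 = 4
    }
}
```

The calls are independent, so the consolidation is a simple commutative sum and the order in which servers respond does not matter.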
Solr configuration: a NamedList to take in the parameters needed by HBaseConfiguration, with those properties manually added to a freshly created Configuration instance.
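A minimal sketch of that hand-off, with a plain Map standing in for Solr's NamedList and a Properties object standing in for the freshly created Configuration (both stand-ins are assumptions so the sketch runs without Solr or HBase on the classpath):

```java
import java.util.*;

/** Sketch: copy the hbase.* parameters declared in the Solr component's
 *  NamedList (modeled as a Map) into a fresh Configuration (modeled as
 *  Properties). */
public class SolrConfigBridgeSketch {

    static Properties toHBaseConfig(Map<String, String> namedListArgs) {
        Properties conf = new Properties(); // stands in for a new Configuration
        for (Map.Entry<String, String> e : namedListArgs.entrySet()) {
            // The real integration would call conf.set(key, value) instead.
            conf.setProperty(e.getKey(), e.getValue());
        }
        return conf;
    }

    public static void main(String[] args) {
        Map<String, String> fromSolrConfig = Map.of(
            "hbase.zookeeper.quorum", "localhost",
            "hbase.zookeeper.property.clientPort", "2181");
        Properties conf = toHBaseConfig(fromSolrConfig);
        System.out.println(conf.getProperty("hbase.zookeeper.quorum")); // localhost
    }
}
```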
For the initial phase, providing a search component that searches an existing index would be useful for validating and offering a service against static, but very large-scale, data.
We can revisit integrating Solr's add/delete REST calls with this later.
Hi, I just did a git pull and I noticed that all the lucene-core dependencies are gone? You've kept them at test scope, but the runtime is not working.
I can't compile anything so far. I'm presuming that putting the lucene-core dependency block back in should fix that?
For a stored field, given the row-key order of field/term, this should be straightforward.
A configurable scan limit can be supplied; it refers to the number of unique terms across the given field to be sorted.
A reasonable default can be chosen, with expert-level hints about the domain size of the field (the number of unique terms), since that size affects both the memory used on the region servers and the performance of the scan query behind the sort.
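Because the row keys are laid out in field/term order, a scan already yields terms sorted, so the limit simply caps how many unique terms are collected and no extra sort is needed. A stdlib-only sketch of that bounding logic, with the scan itself simulated by an iterator over row keys (the key layout `field/term` is assumed for illustration):

```java
import java.util.*;

/** Sketch: collect at most `scanLimit` unique terms for a field from a
 *  row-key stream that is already sorted in field/term order. */
public class SortedTermScanSketch {

    static List<String> uniqueTermsUpTo(Iterator<String> sortedRowKeys,
                                        String field, int scanLimit) {
        List<String> terms = new ArrayList<>();
        String prefix = field + "/";
        String last = null;
        while (sortedRowKeys.hasNext() && terms.size() < scanLimit) {
            String key = sortedRowKeys.next();
            if (!key.startsWith(prefix)) {
                continue; // row key belongs to a different field
            }
            String term = key.substring(prefix.length());
            if (!term.equals(last)) { // keys are sorted, so dedupe is O(1)
                terms.add(term);
                last = term;
            }
        }
        return terms; // already sorted, since the scan returned sorted keys
    }

    public static void main(String[] args) {
        List<String> keys = List.of(
            "title/apple", "title/apple", "title/banana",
            "title/cherry", "title/damson");
        System.out.println(uniqueTermsUpTo(keys.iterator(), "title", 3));
        // -> [apple, banana, cherry]
    }
}
```

The scan limit bounds both the memory held per request and the length of the scan, which is why the default matters for fields with a large term domain.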
Hi all,
I tried to run the HelloWorldClass example from the wiki page https://github.com/akkumar/hbasene/wiki/hello-world,
but it throws this exception:
"Exception in thread "main" java.lang.NoSuchFieldError: ANALYZED
at org.hbasene.index.HelloWorldClass.main(HelloWorldClass.java:36)
"