word's People

Contributors

niexiaolong, ultimatepea, ysc

word's Issues

Configuration errors after downloading

1. Configuration problem:
After downloading, many jar packages are missing. I searched Baidu, downloaded the jars, and imported them, but I keep getting some kind of build path error. Some posts say it is caused by having multiple JDKs, so I switched to the workspace default JDK (1.8.5, I think), but the same strange error persists.

2. Instability (?)
Running the examples often hangs without ever producing a result. I'm not sure whether it's my machine; Baidu doesn't seem to have much on this, so I have no choice but to ask for help here.

When Solr uses word for segmentation, the first token is lost or not displayed

I configured Solr according to your documentation, but during testing the first token never appears. For example, with the input "大家好啊", the segmentation result is "好啊". When I tested in Solr's analysis tool, the result contained two tokens, but the first position was blank.
Here is my Solr configuration:

<fieldtype name="textWord" class="solr.TextField">
<analyzer>  
<tokenizer class="org.apdplat.word.solr.ChineseWordTokenizerFactory"/>
</analyzer>  
</fieldtype>

For comparison, I configured another segmentation component in the same schema:

<fieldtype name="textComplex" class="solr.TextField">
<analyzer>  
<tokenizer class="com.chenlb.mmseg4j.solr.MMSegTokenizerFactory" mode="complex" dicPath="dic"/>
</analyzer>  
</fieldtype>

Its output is segmented correctly.

P.S. I just ran another test, this time with the text "我的企业百科开发票". The first two tokens were both blank (they should be "我的" and "企业"); only "百科" and "开发票" were output.

P.P.S. I studied the source code this morning and confirmed the problem is caused by stop words. I suggest building the stop-word list with care: words such as "企业", "美国", and "领导" are meaningful in certain contexts, and when they act as modifiers, removing them noticeably hurts precision.
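
A quick way to confirm that a token is being swallowed by the stop-word list is to compare the two segmentation entry points side by side. A minimal sketch; WordSegmenter.seg and WordSegmenter.segWithStopWords follow the usage documented in the project README, so verify them against the version you use:

    import java.util.List;
    import org.apdplat.word.WordSegmenter;
    import org.apdplat.word.segmentation.Word;

    public class StopWordCheck {
        public static void main(String[] args) {
            String text = "我的企业百科开发票";
            // seg() applies stop-word removal: anything in the stop-word
            // list (e.g. "企业") disappears from the result
            List<Word> filtered = WordSegmenter.seg(text);
            // segWithStopWords() keeps every token
            List<Word> all = WordSegmenter.segWithStopWords(text);
            System.out.println("with stop-word removal:    " + filtered);
            System.out.println("without stop-word removal: " + all);
        }
    }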

With the elasticsearch plugin installed, no index can be created

This error is returned after PUT testIndex; in other words, no index can be created at all:

{
   "error": {
      "root_cause": [
         {
            "type": "index_creation_exception",
            "reason": "failed to create index"
         }
      ],
      "type": "creation_exception",
      "reason": "Guice creation errors: 1) A binding to org.apdplat.word.elasticsearch.ChineseWordIndicesAnalysis was already configured at [unknown source]. at _unknown_ 1 error"
   },
   "status": 500
}

The relative conf path must be adjusted for Solr 5.5.2

Because the directory layout of Solr 5.5 differs from 4.x, the relative path to the conf file must be adjusted in the configuration. I suggest adding this to the help documentation.

<fieldType name="text_word_complex" class="solr.TextField" positionIncrementGap="100" >
        <analyzer>
           <tokenizer class="org.apdplat.word.solr.ChineseWordTokenizerFactory" 
        conf="solr/conf/word.local.conf"/>
         </analyzer>
     </fieldType>

After more than half an hour of testing I finally found the correct relative path.
The default base directory is solr-5.5.2\server; I created a conf folder under solr-5.5.2\server\solr, put the conf file there, and everything worked.

NullPointerException while segmenting text with 1.3

at org.apdplat.word.segmentation.impl.AbstractSegmentation.seg(AbstractSegmentation.java:151)
at org.apdplat.word.lucene.ChineseWordTokenizer.getWord(ChineseWordTokenizer.java:95)

The method segSentence(final String sentence) can return null, but callers do not handle that case.
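
Until the library guarantees a non-null result, callers can wrap the call defensively. A minimal stopgap sketch (not the project's own fix):

    import java.util.Collections;
    import java.util.List;
    import org.apdplat.word.WordSegmenter;
    import org.apdplat.word.segmentation.Word;

    public class SafeSeg {
        // Wraps segmentation so downstream code never sees null,
        // and survives the internal NPE described above.
        public static List<Word> segSafe(String sentence) {
            try {
                List<Word> words = WordSegmenter.seg(sentence);
                return words == null ? Collections.<Word>emptyList() : words;
            } catch (NullPointerException e) {
                // AbstractSegmentation.segSentence can return null for some
                // inputs (see the stack trace above); treat that as "no tokens"
                return Collections.emptyList();
            }
        }
    }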

Which similarity algorithm should I use to compare two news headlines? Simple shared words gives 0.586, edit distance gives 0.862

Hello, I need to compute the similarity of two news headlines, for example (headline 1: 万达股债双杀市值蒸发666亿全世界都等王思聪发微博, headline 2: 万达股债双杀市值半日蒸发666亿元全世界都在等王思聪发微博). Simple shared words gives 0.586 and edit distance gives 0.862. I haven't tried the other algorithms and don't know which one fits. It's giving me a headache! Could you help analyze which one to use?
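
For reference, the library's similarity implementations can be compared side by side on the two headlines. A minimal sketch based on the TextSimilarity usage shown in the project README; the exact class list is an assumption, so check the org.apdplat.word.analysis package of your version:

    import org.apdplat.word.analysis.CosineTextSimilarity;
    import org.apdplat.word.analysis.EditDistanceTextSimilarity;
    import org.apdplat.word.analysis.SimpleTextSimilarity;
    import org.apdplat.word.analysis.TextSimilarity;

    public class TitleSimilarity {
        public static void main(String[] args) {
            String t1 = "万达股债双杀市值蒸发666亿全世界都等王思聪发微博";
            String t2 = "万达股债双杀市值半日蒸发666亿元全世界都在等王思聪发微博";
            TextSimilarity[] algorithms = {
                new SimpleTextSimilarity(),      // shared-word ratio
                new CosineTextSimilarity(),      // term-frequency cosine
                new EditDistanceTextSimilarity() // character edit distance
            };
            for (TextSimilarity algorithm : algorithms) {
                System.out.println(algorithm.getClass().getSimpleName()
                        + ": " + algorithm.similarScore(t1, t2));
            }
        }
    }

For near-duplicate headlines like these, which differ by only a few inserted characters, a character-level measure such as edit distance tends to track human judgment more closely than a shared-word count.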

Solr 6.3.0 with the word 1.3 tokenizer: highlighting is misaligned

I use word-1.3.jar as the tokenizer on Solr 6.3.0, with all the corresponding configuration changed as described on GitHub. Segmentation works fine, but highlighting is offset.

Whenever the content contains \r or \n characters, the highlight shifts; each \r I add moves the highlighted span over by one position. Looking at the source, \r, \n, and \t are all included among the punctuation characters, yet strangely \t causes no problem, while \r or \n misaligns the highlight.

`"highlighting": {

"PK_3174064": {
	"pj_title": [
		"今天\t<em>宝宝</em>发烧,我也发烧"
	]
},
"PK_5028758": {
	"pj_title": [
		"今天<em>\r宝</em>宝发烧,我也发烧"
	]
},
"PK_41364016": {
	"pj_title": [
		"今天<em>\n宝</em>宝发烧,我也发烧"
	]
}

}`

Highlighting errors with word segmentation in elasticsearch

Hello. In elasticsearch, word produces correct segmentation, but when searching, the keywords marked by highlight are completely unrelated. For example, I search for "新媒体", yet the highlighted spans are punctuation and irrelevant fragments such as "点子和". I can't tell whether I've misconfigured something or the plugin has a bug; please help me analyze it, thanks.

Segmentation errors under elasticsearch with V1.2

1. Installed per the elasticsearch plugin installation instructions:
a. elasticsearch-1.4.4
b. ./plugin -u http://apdplat.org/word/archive/v1.2.zip -i word
2. Segmentation test: http://localhost:9200/paper/_analyze?text=**人民&analyzer=word
returns: {"tokens": [ ]}
It does not segment correctly.
3. http://localhost:9200/paper/_analyze?text=中华人民共和国&analyzer=word
returns (note that the first character is missing and offsets start at 1):
tokens": [
{"token": "华人民",
"start_offset": 1,
"end_offset": 4,
"type": "word",
"position": 2},
{"token": "共和国",
"start_offset": 4,
"end_offset": 7,
"type": "word",
"position": 3}]

A question about SimHashPlusHammingDistanceTextSimilarity

Hello. I tried the SimHashPlusHammingDistanceTextSimilarity algorithm and found that two texts with no similarity whatsoever always score above 0.8.

For example: "医院" versus "香港新闻网3月10日电 据香港《商报》报道,昨公布最新人民币环球指数,1月指数较12月上升3.1%至2228点,主要原因是今年初人民币贬值后,离岸人民币汇市成交量大增。表示,指数回升可能只是短暂的现象,而实际趋势仍然疲弱。企业可能因为担心**政府进一步限制资金外流,而将更多人民币资金保留在境外帐户,终止2015年年中以来离岸人民币存款量下降的趋势。"

A question about segmentation and cosine similarity results

Segmentation:

** -> **
大** -> 大 傻 逼

Is there a problem with the cosine similarity algorithm? The results I compute another way do not match yours. My code:

package DOC.Similarity;

import java.io.UnsupportedEncodingException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

/**
 * 2017/7/20
 * Created by dylan.
 * Home: http://www.devdylan.cn
 */
public class CosineSimilarAlgorithm {
    public static double getSimilarity(String doc1, String doc2) {
        if (doc1 != null && doc1.trim().length() > 0 && doc2 != null
                && doc2.trim().length() > 0) {

            if (Math.abs(doc2.length() - doc1.length()) > 10) {
                return 0;
            }
            Map<Integer, int[]> AlgorithmMap = new HashMap<Integer, int[]>();

            // Collect the Chinese characters from both strings and their occurrence counts into AlgorithmMap
            for (int i = 0; i < doc1.length(); i++) {
                char d1 = doc1.charAt(i);
                if(isHanZi(d1)){
                    int charIndex = getGB2312Id(d1);
                    if(charIndex != -1){
                        int[] fq = AlgorithmMap.get(charIndex);
                        if(fq != null && fq.length == 2){
                            fq[0]++;
                        }else {
                            fq = new int[2];
                            fq[0] = 1;
                            fq[1] = 0;
                            AlgorithmMap.put(charIndex, fq);
                        }
                    }
                }
            }

            for (int i = 0; i < doc2.length(); i++) {
                char d2 = doc2.charAt(i);
                if(isHanZi(d2)){
                    int charIndex = getGB2312Id(d2);
                    if(charIndex != -1){
                        int[] fq = AlgorithmMap.get(charIndex);
                        if(fq != null && fq.length == 2){
                            fq[1]++;
                        }else {
                            fq = new int[2];
                            fq[0] = 0;
                            fq[1] = 1;
                            AlgorithmMap.put(charIndex, fq);
                        }
                    }
                }
            }

            // Accumulate the dot product and squared norms over the combined vocabulary
            Iterator<Integer> iterator = AlgorithmMap.keySet().iterator();
            double sqDoc1 = 0;
            double sqDoc2 = 0;
            double dotProduct = 0;
            while (iterator.hasNext()) {
                int[] c = AlgorithmMap.get(iterator.next());
                dotProduct += c[0] * c[1];
                sqDoc1 += c[0] * c[0];
                sqDoc2 += c[1] * c[1];
            }

            return dotProduct / Math.sqrt(sqDoc1 * sqDoc2);
        } else {
            return 0;
        }
    }

    private static boolean isHanZi(char ch) {
        // Check whether the character is a CJK ideograph
        return (ch >= 0x4E00 && ch <= 0x9FA5);
    }

    /**
     * Returns the GB2312 index of the given Unicode character.
     *
     * @param ch a GB2312 Chinese character or one of the 128 ASCII characters
     * @return the character's position in the GB2312 table, or -1 if the character is not recognized
     */
    private static short getGB2312Id(char ch) {
        try {
            byte[] buffer = Character.toString(ch).getBytes("GB2312");
            if (buffer.length != 2) {
                // A GB2312 character encodes to exactly two bytes; anything else
                // means ch is outside GB2312, so report it as unrecognized
                return -1;
            }
            int b0 = (int) (buffer[0] & 0x0FF) - 161; // code rows start at 0xA1 = 161
            int b1 = (int) (buffer[1] & 0x0FF) - 161; // each row holds 94 code points (0xA1-0xFE)
            return (short) (b0 * 94 + b1);
        } catch (UnsupportedEncodingException e) {
            e.printStackTrace();
        }
        return -1;
    }
}

The cosine similarity results from your implementation don't seem quite right.

As for configuration, I just dropped in the latest Sogou lexicon, about 111 MB.
Or is it that my setup isn't trained correctly? Could you offer some advice?

Also, the class that judges whether a sentence is human language doesn't seem to exist in the Maven repo artifact.

Why does getting segmentation results fail on the server but not locally?

2017-10-09 02:36:03.053 ERROR 3712 --- [pool-1-thread-2] o.a.w.s.impl.AbstractSegmentation : 获取分词结果失败

java.util.concurrent.ExecutionException: java.lang.NoClassDefFoundError: Could not initialize class org.apdplat.word.corpus.Bigram
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[na:1.7.0_80]
at java.util.concurrent.FutureTask.get(FutureTask.java:188) ~[na:1.7.0_80]
at org.apdplat.word.segmentation.impl.AbstractSegmentation.seg(AbstractSegmentation.java:114) ~[word-1.2.jar!/:na]
at org.apdplat.word.WordSegmenter.seg(WordSegmenter.java:78) [word-1.2.jar!/:na]
at com.gwenson.robot.utils.SimHash.getSimHashCode(SimHash.java:28) [classes!/:0.0.1-SNAPSHOT]
at com.gwenson.robot.page.service.imp.DispatchTaskServiceImp.operationDoc(DispatchTaskServiceImp.java:158) [classes!/:0.0.1-SNAPSHOT]
at com.gwenson.robot.page.service.imp.DispatchTaskServiceImp$1.run(DispatchTaskServiceImp.java:74) [classes!/:0.0.1-SNAPSHOT]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_80]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_80]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apdplat.word.corpus.Bigram
at org.apdplat.word.segmentation.impl.AbstractSegmentation.ngram(AbstractSegmentation.java:72) ~[word-1.2.jar!/:na]
at org.apdplat.word.segmentation.impl.BidirectionalMinimumMatching.segImpl(BidirectionalMinimumMatching.java:53) ~[word-1.2.jar!/:na]
at org.apdplat.word.segmentation.impl.AbstractSegmentation.segSentence(AbstractSegmentation.java:148) ~[word-1.2.jar!/:na]
at org.apdplat.word.segmentation.impl.AbstractSegmentation.access$000(AbstractSegmentation.java:48) ~[word-1.2.jar!/:na]
at org.apdplat.word.segmentation.impl.AbstractSegmentation$1.call(AbstractSegmentation.java:129) ~[word-1.2.jar!/:na]
at org.apdplat.word.segmentation.impl.AbstractSegmentation$1.call(AbstractSegmentation.java:126) ~[word-1.2.jar!/:na]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_80]
... 3 common frames omitted

A question about a dictionary requirement

dictionary=main:en,locale=en,description=Sample wordlist,date=1351495318,version=1
word=sample,f=200
bigram=wordlist,f=243
word=wordlist,f=180
word=shortcut,f=176
shortcut=target,f=10
word=witelisted,f=10,not_a_word=true
shortcut=whitelisted,f=whitelist
word=profanity,f=0

A wordlist of this kind: https://android.googlesource.com/platform/packages/inputmethods/LatinIME/+/android-cts-7.0_r11/dictionaries/

If I want to build my own dictionary in this format, can I use it with word? How should it be adapted?
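
The word project's dic.txt format is a plain list with one term per line, so a combined-format wordlist like the one above can be converted by keeping just the word= entries. A minimal sketch; dropping the frequencies is an assumption here, since the plain dictionary does not store them, and the file names are placeholders:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.stream.Collectors;

    public class CombinedListToDic {
        public static void main(String[] args) throws IOException {
            List<String> words = Files.readAllLines(
                        Paths.get("en_wordlist.combined"), StandardCharsets.UTF_8)
                    .stream()
                    .map(String::trim)
                    // keep only real word entries, skip bigram=/shortcut= lines
                    .filter(line -> line.startsWith("word="))
                    // skip entries flagged as non-words
                    .filter(line -> !line.contains("not_a_word=true"))
                    // "word=sample,f=200" -> "sample"
                    .map(line -> line.substring("word=".length()).split(",", 2)[0])
                    .collect(Collectors.toList());
            Files.write(Paths.get("custom_dic.txt"), words, StandardCharsets.UTF_8);
        }
    }

The resulting file can then be added to dic.path as described in the project documentation.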

Solr 5.3.1: highlight markers placed incorrectly

Example:
Searching for the keyword "手机", the highlighted positions are wrong:

"手机怎么使用WiFi上网? 您可以在**电信WiFi网络区域内,根据手机说明书将天翼手机的WiFi 或WLAN功能打开

Index-time segmentation question

hi,
Which of word's segmentation modes is suitable for index-time analysis? I found that if the dictionary contains single English letters, minimum matching splits "github" into "g", "i", "t", "h", "u", "b".
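
For index-time analysis, a more exhaustive algorithm than minimum matching is the usual choice. A sketch of selecting one explicitly; SegmentationFactory and SegmentationAlgorithm follow the usage shown in the project README, so treat the exact enum value as an assumption:

    import java.util.List;
    import org.apdplat.word.segmentation.Segmentation;
    import org.apdplat.word.segmentation.SegmentationAlgorithm;
    import org.apdplat.word.segmentation.SegmentationFactory;
    import org.apdplat.word.segmentation.Word;

    public class IndexTimeSeg {
        public static void main(String[] args) {
            // FullSegmentation emits every dictionary match, which favors
            // recall at index time; MinimumMatching prefers the shortest
            // match, which is why single-letter dictionary entries make it
            // split "github" into "g", "i", "t", "h", "u", "b".
            Segmentation segmentation = SegmentationFactory
                    .getSegmentation(SegmentationAlgorithm.FullSegmentation);
            List<Word> words = segmentation.seg("github");
            System.out.println(words);
        }
    }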

java.security.AccessControlException: access denied ("org.elasticsearch.ThreadPermission" "modifyArbitraryThreadGroup")

When I use word as a plugin in elasticsearch 2.3.4, I get the following error (not always):

RemoteTransportException[[Ruckus][127.0.0.1:9300][indices:admin/analyze[s]]]; nested: Error[java.security.AccessControlException: access denied ("org.elasticsearch.ThreadPermission" "modifyArbitraryThreadGroup")]; nested: AccessControlException[access denied ("org.elasticsearch.ThreadPermission" "modifyArbitraryThreadGroup")];
Caused by: java.lang.Error: java.security.AccessControlException: access denied ("org.elasticsearch.ThreadPermission" "modifyArbitraryThreadGroup")
at java.util.concurrent.ForkJoinWorkerThread$InnocuousForkJoinWorkerThread.createThreadGroup(ForkJoinWorkerThread.java:269)
at java.util.concurrent.ForkJoinWorkerThread$InnocuousForkJoinWorkerThread.(ForkJoinWorkerThread.java:216)
at java.util.concurrent.ForkJoinPool$InnocuousForkJoinWorkerThreadFactory$1.run(ForkJoinPool.java:3471)
at java.util.concurrent.ForkJoinPool$InnocuousForkJoinWorkerThreadFactory$1.run(ForkJoinPool.java:3469)
at java.security.AccessController.doPrivileged(Native Method)

I added the security policy file as follows:

grant {
    permission java.lang.RuntimePermission "*";
    permission java.lang.reflect.ReflectPermission "*";
    permission org.elasticsearch.ThreadPermission "modifyArbitraryThreadGroup";
};

But the error persists.

How should this be handled? Thanks!

lucene 6.3.0 Lucene40CompoundReader overrides final method renameFile

Exception in thread "main" java.lang.VerifyError: class org.apache.lucene.codecs.lucene40.Lucene40CompoundReader overrides final method renameFile.(Ljava/lang/String;Ljava/lang/String;)V

	Analyzer analyzer=new ChineseWordAnalyzer();
	IndexWriterConfig iwc=new IndexWriterConfig(analyzer);
	IndexWriter writer=null;

The error is thrown as soon as execution reaches the second statement, new IndexWriterConfig(analyzer). (A VerifyError like this usually indicates two incompatible Lucene versions on the classpath.)

Suggestions on API design

First of all, this project is excellent; 32 👍 to you!

  • While using the library in development, I found some API designs that could be improved.
    Take the part-of-speech tagging feature:
    in org.apdplat.word.tagging.PartOfSpeechTagging#process, the method public static void process(List<Word> words) could take the Collection interface instead of the concrete List:
    public static void process(Collection<Word> words) {
        words.parallelStream().forEach(word -> {
            ……
        });
    }

Programming against the interface gives callers more flexibility.

  • Everything else is great. DictionaryFactory.reload() did trip me up once, but apart from that everything works smoothly. Thank you for your selfless contribution!

elasticsearch installation failed...

sudo ./plugin -t 1h -u http://apdplat.org/word/archive/v1.1.zip -i word
-> Installing word...
Trying http://apdplat.org/word/archive/v1.1.zip...
Downloading ......................................................................................................DONE
Installed word into /home/q/elasticsearch/elasticsearch-1.4.2/plugins/word

The install reports success, but when ES starts, no plugins are loaded:
[plugins ] [Silverclaw] loaded [], sites []

When I run a segmentation test, an exception is thrown:
xxxxxxxx:9200/_analyze?analyzer=word&text=女士冲锋衣

org.elasticsearch.ElasticsearchIllegalArgumentException: failed to find analyzer [word]
at org.elasticsearch.action.admin.indices.analyze.TransportAnalyzeAction.shardOperation(TransportAnalyzeAction.java:151)
at org.elasticsearch.action.admin.indices.analyze.TransportAnalyzeAction.shardOperation(TransportAnalyzeAction.java:60)
at org.elasticsearch.action.support.single.custom.TransportSingleCustomOperationAction$AsyncSingleAction$1.run(TransportSingleCustomOperationAction.java:161)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)

How to generate the corpus

hi,
If I have domain data, how do I generate the corpus files and then produce bigram.txt for disambiguation?
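
One way to bootstrap such a file is to segment the domain corpus and count adjacent word pairs. A rough sketch; the "word1:word2 count" output format is only a guess here, so check the bigram.txt bundled with the library for the exact format it expects:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import org.apdplat.word.WordSegmenter;
    import org.apdplat.word.segmentation.Word;

    public class BigramBuilder {
        public static void main(String[] args) throws IOException {
            Map<String, Integer> counts = new HashMap<>();
            // one document (or sentence) per line in corpus.txt
            for (String line : Files.readAllLines(
                    Paths.get("corpus.txt"), StandardCharsets.UTF_8)) {
                List<Word> words = WordSegmenter.seg(line);
                for (int i = 0; i + 1 < words.size(); i++) {
                    String pair = words.get(i).getText() + ":" + words.get(i + 1).getText();
                    counts.merge(pair, 1, Integer::sum);
                }
            }
            List<String> lines = new ArrayList<>();
            counts.forEach((pair, count) -> lines.add(pair + " " + count));
            Files.write(Paths.get("bigram.txt"), lines, StandardCharsets.UTF_8);
        }
    }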

A kind of whitespace that isn't checked!

If the text contains a whitespace character like ' ' (code 8021), a NullPointerException is thrown.
    private List<Word> segSentence(final String sentence) {
        if (sentence.length() == 1) {
            if (KEEP_WHITESPACE) {
                List<Word> result = new ArrayList<>(1);
                result.add(new Word(sentence));
                return result;
            } else {
                if (!Character.isWhitespace(sentence.charAt(0))) {
                    List<Word> result = new ArrayList<>(1);
                    result.add(new Word(sentence));
                    return result;
                }
            }
        }
        if (sentence.length() > 1) {
            List<Word> list = segImpl(sentence);
            if (list != null) {
                if (PERSON_NAME_RECOGNIZE) {
                    list = PersonName.recognize(list);
                }
                return list;
            } else {
                LOGGER.error("文本 " + sentence + " 没有获得分词结果");
            }
        }
        return null;
    }

It would be better if this method returned an empty list at the end instead of null.
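
A sketch of that suggestion (the reporter's proposal, not the project's actual patch; assumes java.util.Collections is imported):

    // at the end of segSentence(), instead of "return null;":
    return Collections.emptyList();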

Solr 5.4.0 cannot load the word tokenizer plugin

Judging from the error log, it seems one or more interfaces required by this version are not implemented.

Note: this error occurs with word-1.3.1 but not with word-1.3.

The relevant error log follows:

2016-01-04 01:33:48.281 ERROR (qtp859417998-20) [   x:inverst] o.a.s.s.HttpSolrCall null:java.lang.RuntimeException: java.lang.AbstractMethodError: org.apache.lucene.analysis.util.TokenizerFactory.create(Lorg/apache/lucene/util/AttributeFactory;)Lorg/apache/lucene/analysis/Tokenizer;
    at org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:611)
    at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:472)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
    at org.eclipse.jetty.server.Server.handle(Server.java:499)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
    at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.AbstractMethodError: org.apache.lucene.analysis.util.TokenizerFactory.create(Lorg/apache/lucene/util/AttributeFactory;)Lorg/apache/lucene/analysis/Tokenizer;
    at org.apache.lucene.analysis.util.TokenizerFactory.create(TokenizerFactory.java:75)
    at org.apache.solr.analysis.TokenizerChain.createComponents(TokenizerChain.java:89)
    at org.apache.lucene.analysis.AnalyzerWrapper.createComponents(AnalyzerWrapper.java:101)
    at org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:176)
    at org.apache.lucene.util.QueryBuilder.createFieldQuery(QueryBuilder.java:205)
    at org.apache.solr.parser.SolrQueryParserBase.newFieldQuery(SolrQueryParserBase.java:373)
    at org.apache.solr.parser.SolrQueryParserBase.getFieldQuery(SolrQueryParserBase.java:753)
    at org.apache.solr.parser.SolrQueryParserBase.handleBareTokenQuery(SolrQueryParserBase.java:548)
    at org.apache.solr.parser.QueryParser.Term(QueryParser.java:315)
    at org.apache.solr.parser.QueryParser.Clause(QueryParser.java:186)
    at org.apache.solr.parser.QueryParser.Query(QueryParser.java:107)
    at org.apache.solr.parser.QueryParser.TopLevelQuery(QueryParser.java:96)
    at org.apache.solr.parser.SolrQueryParserBase.parse(SolrQueryParserBase.java:154)
    at org.apache.solr.search.LuceneQParser.parse(LuceneQParser.java:50)
    at org.apache.solr.search.QParser.getQuery(QParser.java:141)
    at org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:160)
    at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:247)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
    at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)
    at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)
    ... 22 more

2016-01-04 01:33:48.283 WARN  (qtp859417998-20) [   x:inverst] o.e.j.s.ServletHandler Error for /solr/inverst/select
java.lang.AbstractMethodError: org.apache.lucene.analysis.util.TokenizerFactory.create(Lorg/apache/lucene/util/AttributeFactory;)Lorg/apache/lucene/analysis/Tokenizer;
    (stack trace identical to the one above)

elasticsearch plugin error

  1. elasticsearch 1.4.4 with plugin V1.2: queries against the default _all field never find any data, while queries that name a specific field included in _all work fine.
    a. http://localhost:9200/paper/_search?q=矿石 : no results.
    b. http://localhost:9200/paper/_search?q=主题:矿石 : returns results.
    c. Changing indexAnalyzer in the mapping below to another tokenizer such as mmseg or ansj gives correct results.
  2. The _all field definition:
    "_all": {
    "enabled": true,
    "indexAnalyzer": "word",
    "searchAnalyzer": "word",
    "term_vector": "no",
    "store": "false",
    "include": [
    "作者",
    "作者单位",
    "标题",
    "关键词",
    "主题",
    "摘要"
    ],
    "exclude": [
    "全文"
    ]
    },

    "properties": {
    "主题": {
    "type": "string",
    "analyzer": "word",
    "_boost": 2,
    "include_in_all": "true"
    },
    }
    }

Multilingual similarity computation

Good afternoon, Yang! My name is Han Yazhou.
I have learned a great deal from your blog posts, and I am sincerely grateful!
I've run into two problems, and since I found no other way to contact you, I'm asking here.
1. Language detection: given a piece of text, can it accurately identify which language it is?
2. Accurately computing similarity across multiple languages (Chinese, English, Japanese, Korean, German, Spanish, etc.): you have written about many similarity algorithms, but how do I get an accurate score across languages? With so many languages, is word segmentation always required?
Please give me some advice. My QQ is 1534433176.

lucene 6.3.0 ChineseWordTokenizer error

Because Lucene 6.3.0 reuses the same Tokenizer multiple times, the Tokenizer should perform its initialization in the reset() method.
java/org/apdplat/word/lucene/ChineseWordTokenizer.java, line 87:

    private Word getWord() throws IOException {
        Word word = words.poll();
        if(word == null){
            if(reader==null){
                reader = new BufferedReader(input);
            }
            String line;
            while( (line = reader.readLine()) != null ){
                words.addAll(segmentation.seg(line));
            }
            startOffset = 0;
            word = words.poll();
        }
        return word;
    }

The code here checks reader == null and creates a new BufferedReader only in that case. But Lucene reuses the tokenizer, so input gets updated while reader does not, and no segmentation takes place.
The fix is to remove those three lines and do the corresponding work in reset():

    public void reset() throws IOException {
        super.reset();
        reader = new BufferedReader(input);
    }

How do I maintain a user-defined dictionary from code?

I read through the code and couldn't find a solution.
The scenario: our dictionary is maintained in our own database rather than in files, and users update it dynamically through an interface. After reading the code for a long time, I still couldn't find a way to add a single dictionary entry from code.
The documentation describes three ways to specify the dictionary:
Option 1, programmatically (high priority):
WordConfTools.set("dic.path", "classpath:dic.txt,d:/custom_dic");
DictionaryFactory.reload(); // reload the dictionary after changing its path
Option 2, JVM startup parameter (medium priority):
java -Ddic.path=classpath:dic.txt,d:/custom_dic
Option 3, configuration file (low priority):
use the file word.local.conf on the classpath to specify:
dic.path=classpath:dic.txt,d:/custom_dic
Can I add just a single word from code? Everything else feels perfect and I was ready to adopt the library, but I'm stuck here.
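
The documented configuration API only reloads whole files, so one workable pattern is to keep a small writable dictionary file, append each new word to it, and trigger a reload. A sketch under that assumption; the path d:/custom_dic/user.txt is a placeholder, and the package names for WordConfTools and DictionaryFactory are taken from the project layout, so verify them:

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    import org.apdplat.word.dictionary.DictionaryFactory;
    import org.apdplat.word.util.WordConfTools;

    public class DynamicDictionary {
        // Appends one word to a writable dictionary file and reloads.
        // Note: reload() re-reads every configured path, so this is a
        // full reload, not a true single-word insert.
        public static void addWord(String word) throws Exception {
            Path userDic = Paths.get("d:/custom_dic/user.txt");
            Files.write(userDic,
                    (word + System.lineSeparator()).getBytes(StandardCharsets.UTF_8),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            WordConfTools.set("dic.path", "classpath:dic.txt,d:/custom_dic");
            DictionaryFactory.reload(); // as documented: reload after changing dic.path
        }
    }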

Won't run on JDK 1.7

java.lang.UnsupportedClassVersionError: org/apdplat/word/WordSegmenter : Unsupported major.minor version 52.0

Why is JDK 1.8 required? (Class-file version 52.0 is the Java 8 format, so classes compiled for 1.8 cannot be loaded by a 1.7 JVM.)

I want to implement similarity lookup for short texts; please advise, Teacher Yang.

I'm building a quote-publishing system that collects famous sayings. When someone submits one, I query the database for existing entries and compute similarity; if it exceeds some threshold, I tell the user it has already been published. The texts are around 200 characters and need to be indexed, because the comparison may run against a very large set. I've looked at many approaches online and none seems ideal. What scheme would you recommend? Thanks! O(∩_∩)O

A full hot reload of the dictionary pauses the application

Hi. My old generation uses the CMS collector and the young generation uses ParNew. Under heavy concurrent reads, a full hot reload of the dictionary pauses the application for several seconds. I've searched a lot online; it doesn't look GC-related but seems connected to JVM safepoints. I haven't found the root cause yet. Has the author run into this problem?
