
Synonyms

Chinese Synonyms for Natural Language Processing and Understanding.

Better Chinese synonyms: a toolkit for chatbots and intelligent question answering.

Synonyms can be used in many natural language understanding tasks: text alignment, recommendation, similarity computation, semantic shift, keyword extraction, concept extraction, automatic summarization, search engines, and more.

To provide a stable, reliable, and continuously optimized service, Synonyms now uses the Chunsong Public License, v1.0 and charges for downloads of the machine learning model; see the license store for details. Previous contributors (those with significant code contributions) may contact us to discuss the fee. -- Chatopera Inc. @ Oct. 2023

Table of Contents:

Welcome

Follow the steps below to install and activate the package.

1/3 Install Sourcecodes Package

pip install -U synonyms

The current stable release is v3.x.

2/3 Config license id

Synonyms' machine learning model package requires a license from the Chatopera License Store. First purchase a license, then get the license id from the license detail page (click 【复制证书标识】, "copy license id").


Secondly, set the environment variable in your terminal or shell scripts as below.

  • For Shell Users

e.g. Shell, CMD Scripts on Linux, Windows, macOS.

# Linux / macOS
export SYNONYMS_DL_LICENSE=YOUR_LICENSE
## e.g. if your license id is `FOOBAR`, run `export SYNONYMS_DL_LICENSE=FOOBAR`

# Windows
## 1/2 Command Prompt
set SYNONYMS_DL_LICENSE=YOUR_LICENSE
## 2/2 PowerShell
$env:SYNONYMS_DL_LICENSE='YOUR_LICENSE'

  • For Python Code Users

Jupyter Notebook, etc.

import os
os.environ["SYNONYMS_DL_LICENSE"] = "YOUR_LICENSE"
_licenseid = os.environ.get("SYNONYMS_DL_LICENSE", None)
print("SYNONYMS_DL_LICENSE=", _licenseid)

Tip: the first use after installation downloads the word-vector file; download speed depends on your network connection.

3/3 Download Model Package

Finally, download the model package with a command or script:

python -c "import synonyms; synonyms.display('能量')" # download word vectors file

Usage

The word segmentation dictionary and the word2vec word-vector file can be configured via environment variables.

Environment variable / Description
SYNONYMS_WORD2VEC_BIN_MODEL_ZH_CN: word-vector file trained with word2vec, in binary format.
SYNONYMS_WORDSEG_DICT: main dictionary for Chinese word segmentation; see the reference for format and usage.
SYNONYMS_DEBUG: ["TRUE"|"FALSE"], whether to print debug logs; set to "TRUE" to enable, default "FALSE".
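
For example, the variables can be set from Python before the package is imported. The file paths below are hypothetical placeholders; only the variable names come from the table above, and the package is assumed to read them at import time:

```python
import os

# Hypothetical paths -- substitute your own files. Set the variables BEFORE
# `import synonyms`, since the package is assumed to read them when imported.
os.environ["SYNONYMS_WORD2VEC_BIN_MODEL_ZH_CN"] = "/data/words.vector.gz"
os.environ["SYNONYMS_WORDSEG_DICT"] = "/data/vocab.txt"
os.environ["SYNONYMS_DEBUG"] = "TRUE"  # enable debug logging
```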

synonyms#nearby(word [, size = 10])

import synonyms
print("人脸: ", synonyms.nearby("人脸"))
print("识别: ", synonyms.nearby("识别"))
print("NOT_EXIST: ", synonyms.nearby("NOT_EXIST"))

synonyms.nearby(WORD [, SIZE]) returns a tuple with two items: ([nearby_words], [nearby_words_score]). nearby_words is a list of WORD's synonyms, ordered from nearest to farthest; nearby_words_score is a list of distance scores for the words at the corresponding positions in nearby_words. Scores lie in the (0-1) range; the closer to 1, the more similar. SIZE is the number of words to return, 10 by default. For example:

synonyms.nearby("人脸", 10) = (
    ["图片", "图像", "通过观察", "数字图像", "几何图形", "脸部", "图象", "放大镜", "面孔", "Mii"],
    [0.597284, 0.580373, 0.568486, 0.535674, 0.531835, 0.530095, 0.525344, 0.524009, 0.523101, 0.516046])

For OOV words, ([], []) is returned. The current vocabulary size is 435,729.
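
Since OOV lookups yield ([], []), calling code can unpack the pair defensively. A minimal sketch with a stand-in for synonyms.nearby (the stub and its toy vocabulary are illustrations, not the library's data):

```python
def nearby_stub(word, size=10):
    """Stand-in for synonyms.nearby: known words map to a (words, scores)
    pair; OOV words yield ([], []) just like the real API."""
    toy = {"人脸": (["图片", "图像"], [0.597284, 0.580373])}
    return toy.get(word, ([], []))

def top_synonym(word):
    # Unpack the (words, scores) pair; guard against the OOV empty lists.
    words, scores = nearby_stub(word)
    return (words[0], scores[0]) if words else None

print(top_synonym("人脸"))       # ('图片', 0.597284)
print(top_synonym("NOT_EXIST"))  # None
```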

synonyms#compare(sen1, sen2 [, seg=True])

Compare the similarity of two sentences.

    sen1 = "发生历史性变革"
    sen2 = "发生历史性变革"
    r = synonyms.compare(sen1, sen2, seg=True)

The seg parameter controls whether synonyms.compare segments sen1 and sen2; it defaults to True. The return value lies in [0-1]; the closer to 1, the more similar the two sentences.
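
compare is built on word-vector distance. As a rough illustration of the underlying idea (not the library's exact formula), cosine similarity between two vectors approaches 1 as the vectors point in the same direction; the zero-vector guard is an assumption added here for safety:

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors; 0.0 if either is all zeros."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    if na == 0.0 or nb == 0.0:
        return 0.0  # avoid dividing by zero for all-zero vectors
    return dot / (na * nb)

print(cosine([1.0, 2.0], [1.0, 2.0]))  # ~1.0 for identical directions
print(cosine([1.0, 0.0], [0.0, 1.0]))  # 0.0 for orthogonal vectors
```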

旗帜引领方向 vs 道路决定命运: 0.429
旗帜引领方向 vs 旗帜指引道路: 0.93
发生历史性变革 vs 发生历史性变革: 1.0

synonyms#display(word [, size = 10])

Prints synonyms in a friendly format, which is convenient for debugging; display(WORD [, SIZE]) calls synonyms#nearby.

>>> synonyms.display("飞机")
'飞机'近义词:
  1. 飞机:1.0
  2. 直升机:0.8423391
  3. 客机:0.8393003
  4. 滑翔机:0.7872388
  5. 军用飞机:0.7832081
  6. 水上飞机:0.77857226
  7. 运输机:0.7724742
  8. 航机:0.7664748
  9. 航空器:0.76592904
  10. 民航机:0.74209654

SIZE is the number of words to print, 10 by default.

synonyms#describe()

Print the package's description:

>>> synonyms.describe()
Vocab size in vector model: 435729
model_path: /Users/hain/chatopera/Synonyms/synonyms/data/words.vector.gz
version: 3.18.0
{'vocab_size': 435729, 'version': '3.18.0', 'model_path': '/chatopera/Synonyms/synonyms/data/words.vector.gz'}

synonyms#v(word)

Get a word's vector as a numpy array; raises KeyError when the word is out of vocabulary.

>>> synonyms.v("飞机")
array([-2.412167  ,  2.2628384 , -7.0214124 ,  3.9381874 ,  0.8219283 ,
       -3.2809453 ,  3.8747153 , -5.217062  , -2.2786229 , -1.2572327 ],
      dtype=float32)
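
The KeyError contract can be handled with an ordinary try/except. A toy sketch of the same contract (the dict-backed lookup stands in for the real model; the vector values are copied from the example above, truncated):

```python
# Toy stand-in vocabulary; the real vectors come from the word2vec model.
_toy_vectors = {"飞机": [-2.412167, 2.2628384, -7.0214124]}

def v(word):
    """Mimics synonyms.v's contract: return the word's vector,
    raise KeyError when the word is out of vocabulary."""
    return _toy_vectors[word]

try:
    vec = v("不存在的词")  # an OOV word
except KeyError:
    vec = None
print(vec)  # None
```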

synonyms#sv(sentence, ignore=False)

Get the vector of a segmented sentence; the vector is composed in bag-of-words (BoW) fashion.

    sentence: a segmented sentence, with tokens joined by spaces
    ignore: whether to ignore OOV tokens; when False, a random vector is generated for them
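
A sketch of the documented behavior (not the library's implementation): sum the vectors of the space-separated tokens, skipping OOV tokens when ignore=True and otherwise substituting a reproducible random vector. The toy vocabulary and 3-dimensional vectors are assumptions for illustration:

```python
import hashlib
import random

def sv(sentence, vectors, dim=3, ignore=False):
    """Bag-of-words sentence vector: the sum of the tokens' word vectors.
    OOV tokens are skipped when ignore=True; otherwise each gets a random
    vector seeded deterministically by the token itself."""
    total = [0.0] * dim
    for token in sentence.split():
        if token in vectors:
            wv = vectors[token]
        elif ignore:
            continue  # drop OOV tokens entirely
        else:
            # stable per-token seed, so the same token always maps
            # to the same random vector
            seed = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16)
            rng = random.Random(seed)
            wv = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        total = [t + w for t, w in zip(total, wv)]
    return total

toy = {"中文": [1.0, 0.0, 0.0], "分词": [0.0, 1.0, 0.0]}
print(sv("中文 分词", toy))                 # [1.0, 1.0, 0.0]
print(sv("中文 未登录词", toy, ignore=True))  # [1.0, 0.0, 0.0]
```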

synonyms#seg(sentence)

Chinese word segmentation.

synonyms.seg("中文近义词工具包")

The segmentation result is a tuple of two lists: the tokens and their part-of-speech tags.

(['中文', '近义词', '工具包'], ['nz', 'n', 'n'])

Segmentation does not remove stop words or punctuation.

synonyms#keywords(sentence [, topK=5, withWeight=False])

Extract keywords, ordered by importance by default.

keywords = synonyms.keywords("9月15日以来,台积电、高通、三星等华为的重要合作伙伴,只要没有美国的相关许可证,都无法供应芯片给华为,而中芯国际等国产芯片企业,也因采用美国技术,而无法供货给华为。目前华为部分型号的手机产品出现货少的现象,若该形势持续下去,华为手机业务将遭受重创。")

Contribution

To get more logs for debugging, set the environment variable:

SYNONYMS_DEBUG=TRUE

PCA

Principal component analysis, using "人脸" (face) as an example:

Quick Get Start

$ pip install -r Requirements.txt
$ python demo.py

Change logs

Release notes.

Voice of Users

What users say:

Data

Data is built based on wikidata-corpus.

Evaluation

同义词词林

《同义词词林》 was compiled by Mei Jiaju et al. in 1983. The widely used version today is the extended edition (《同义词词林扩展版》) maintained by the Research Center for Social Computing and Information Retrieval at Harbin Institute of Technology. It finely divides Chinese vocabulary into major and minor categories and organizes the relations between words. The extended edition contains more than 70,000 entries, over 30,000 of which are shared as open data.

知网, HowNet

HowNet, also known as 知网, is not merely a semantic dictionary but a knowledge system; relations between words are one of its basic use cases. HowNet contains more than 80,000 words.

The standard international evaluation for word-similarity algorithms uses the human judgments on the English word-pair set published by Miller & Charles. The set consists of 30 English word pairs: ten highly related, ten moderately related, and ten weakly related. 38 subjects judged the semantic relatedness of these 30 pairs, and the average of their judgments serves as the human gold standard. Different synonym tools then score the same pairs, and their scores are compared against the human standard, for example with the Pearson correlation coefficient. In Chinese, using a translated version of this word list for synonym comparison is also common practice.
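
The Pearson comparison described above can be computed directly. A self-contained sketch; the human and tool scores below are made-up placeholders, not real benchmark data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

human = [3.92, 3.84, 0.55]  # hypothetical human judgments for three word pairs
tool = [0.93, 0.90, 0.20]   # hypothetical tool scores for the same pairs
print(round(pearson(human, tool), 3))
```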

对比

Synonyms has a vocabulary of 435,729 words. Below, several words that exist in 同义词词林, HowNet, and Synonyms are selected, and their similarity scores are compared:

Note: the 同义词词林 and HowNet data and scores come from the cited source. Synonyms is continuously optimized, so newer scores may differ from the figure above.

More comparison results.

Used by

List of associated users on GitHub.

Benchmark

Test with py3, MacBook Pro.

python benchmark.py

++++++++++ OS Name and version ++++++++++

Platform: Darwin

Kernel: 16.7.0

Architecture: ('64bit', '')

++++++++++ CPU Cores ++++++++++

Cores: 4

CPU Load: 60

++++++++++ System Memory ++++++++++

meminfo 8GB

synonyms#nearby: 100000 loops, best of 3 epochs: 0.209 usec per loop

Live Sharing

52nlp.cn

机器之心

Live-sharing transcript: the Synonyms Chinese synonym toolkit @ 2018-02-07

Statement

Synonyms is released under the MIT license. The data and programs may be used for research and commercial products, provided the citation and address are acknowledged, for example in any published media, journal, magazine, or blog content.

@online{Synonyms:hain2017,
  author = {Hai Liang Wang and Hu Ying Xi},
  title = {中文近义词工具包Synonyms},
  year = 2017,
  url = {https://github.com/chatopera/Synonyms},
  urldate = {2017-09-27}
}

References

wikidata-corpus

word2vec 原理推导与代码分析

Frequently Asked Questions (FAQ)

  1. Can words be added to the vocabulary?

Not supported. For more details see #5.

  2. Which tool was used to train the word vectors?

word2vec released by Google; the library is written in C, memory-efficient, and fast to train. gensim can load the model files that word2vec outputs.

  3. What is the similarity computation method?

See #64.

  4. #118 The word-vector file never finishes downloading?

Authors

Hai Liang Wang

Hu Ying Xi

Recommended introductory books and references for NLP

This book was co-authored by the Synonyms authors.

Quick purchase link

《智能问答与深度学习》 (Intelligent Question Answering and Deep Learning) serves students and software engineers getting started with machine learning and natural language processing. It introduces many principles and algorithms, and also provides many sample programs for hands-on practice. The samples, collected in a companion code repository, are mainly meant to help readers understand the principles and algorithms; you are welcome to download and run them. The repository address is:

https://github.com/l11x0m7/book-of-qna-code

Give credits to

Word2vec by Google

Wikimedia: source of the training corpus

gensim: word2vec.py

SentenceSim: similarity evaluation corpus

jieba: Chinese word segmentation

License

Chunsong Public License, version 1.0

Project Sponsor

Chatopera Cloud Service

https://bot.chatopera.com/

Chatopera Cloud Service is a one-stop cloud service for building chatbots, billed by API calls. It is the software-as-a-service instance of the Chatopera bot platform; built on cloud computing, it is chatbot-as-a-service.

The Chatopera bot platform includes components such as a knowledge base, multi-turn dialogue, intent recognition, and speech recognition, standardizing chatbot development. It supports scenarios such as enterprise OA smart Q&A, HR smart Q&A, intelligent customer service, and online marketing. Enterprise IT and business departments can use Chatopera Cloud Service to bring chatbots online quickly!

synonyms's People

Contributors

alexsun1995, bobbercheng, bojiang, charliechen1, cycorey, hailiang-wang, huyingxi, inhzus, inuyasha2012


synonyms's Issues

Similarity for two semantically close sentences is lower than with the gensim tf-idf model

description

current

import synonyms
sen1 = "控制人涉诉和被司法采取强制措施而长期滞留香港,该事件对公司经营影响难以确认"
sen2 = "企业控制人因被指控和被司法采取强制措施而长期滞留香港,该事件对公司经营影响难以确认"
r = synonyms.compare(sen1, sen2, seg=True)

print (r)

not exist in w2v model: 控制人
not exist in w2v model: 涉诉
not exist in w2v model: ,
not exist in w2v model: 控制人
not exist in w2v model: ,
0.197

expected

sen1 = "控制人涉诉和被司法采取强制措施而长期滞留香港,该事件对公司经营影响难以确认"
sen2 = "企业控制人因被指控和被司法采取强制措施而长期滞留香港,该事件对公司经营影响难以确认"
gensim sentence similarity:
0.92684400081634521

solution

environment

Windows

  • version:2.1
    The commit hash (git rev-parse HEAD)

Would you consider adding support for jieba other than word2vec?

description

While trying out your impressive project, I found everything exciting except one thing: I haven't found any API to add my own words to the dictionary.

As jieba supports Chinese semantic analysis more efficiently and allows users to add their own words to the dictionary, would you consider adding support for that?

Thank you very much.

  • version:
    The commit hash (git rev-parse HEAD)

"import synonyms" error

description

import synonyms

Synonyms on loading vocab ...
Synonyms on loading stopwords ...
Traceback (most recent call last):
File "C:\Install\Anaconda35\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<stdin>", line 1, in <module>
import synonyms
File "C:\Install\PyCharm 2017.2.4\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 21, in do_import
module = self.system_import(name, *args, **kwargs)
File "C:\Install\Anaconda35\lib\site-packages\synonyms\__init__.py", line 113, in <module>
_load_stopwords(fin_stopwords_path)
File "C:\Install\Anaconda35\lib\site-packages\synonyms\__init__.py", line 108, in _load_stopwords
stopwords = words.readlines()
UnicodeDecodeError: 'gbk' codec can't decode byte 0x8a in position 2: illegal multibyte sequence

environment

windows10,64bit

  • version:
    C:\Users\shina>pip install -i https://pypi.doubanio.com/simple/ synonyms
    Collecting synonyms
    Downloading https://pypi.doubanio.com/packages/2a/70/47abca5e6a5b1cc695c3df662b97b4c50aff343f75bdebb316ceb5d18205/synonyms-1.9.tar.gz (61.5MB)
    100% |████████████████████████████████| 61.5MB 3.2MB/s
    Requirement already satisfied: jieba>=0.39 in c:\install\anaconda35\lib\site-packages (from synonyms)
    Requirement already satisfied: numpy>=1.13.1 in c:\install\anaconda35\lib\site-packages (from synonyms)
    Building wheels for collected packages: synonyms
    Running setup.py bdist_wheel for synonyms ... done
    Stored in directory: C:\Users\shina\AppData\Local\pip\Cache\wheels\d3\8d\ee\b32007051068368229e14a41e936a7143bdf6e8711ee4be5b8
    Successfully built synonyms
    Installing collected packages: synonyms
    Successfully installed synonyms-1.9

Please help, thank you.

Similarity handling for words missing from the w2v model

description

If a segmented token does not exist in the w2v model's vocabulary, the returned vector is all zeros.

current

try:
    c.append(_vectors.word_vec(y_))
except KeyError as error:
    print("not exist in w2v model: %s" % y_)
    c.append(np.zeros((100,), dtype=float))

expected

If neither of the two words being compared exists in the model vocabulary, they get the same all-zero vector, so the distance between them is tiny, even though they may in fact be barely related.

solution

Consider returning a random vector (seeded with the token's hash code).
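
The proposed fix can be sketched as follows. A digest-based seed is used here instead of Python's built-in hash(), which is salted per process; this is an illustration of the issue's suggestion, not the project's code:

```python
import hashlib
import random

def oov_vector(token, dim=100):
    """Reproducible random vector for an OOV token, seeded by a stable hash,
    so different tokens get different (but stable) vectors instead of
    identical all-zero ones."""
    seed = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16)
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(dim)]

# Same token -> same vector; different tokens -> (almost surely) different ones.
assert oov_vector("控制人") == oov_vector("控制人")
assert oov_vector("控制人") != oov_vector("涉诉")
```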

environment

  • version:
    The commit hash (git rev-parse HEAD)
    de23685

compare raises an error in version 3.3.6

description

Version: 3.3.6
Python version: 3.6.4
Running synonyms.compare('你们好呀', '大家好') raises the following error:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Debugging with a breakpoint shows the error occurs at:
g = cosine(_flat_sum_array(_get_wv(s1)), _flat_sum_array(_get_wv(s2)))
which returns g as:
[nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
nan nan nan nan nan nan nan nan nan nan]
causing the subsequent score check on r to fail.


environment

python:3.6.4

  • version:3.3.6
    The commit hash (git rev-parse HEAD)

Use an in-memory graph structure for storing and querying data

description

Currently python dict and list are used for storage; building relations among multiple words is inefficient.

solution

Store each word's vector, adjacent words, and their distances in a graph.

Possible solutions:
Option 1: graphlite
https://pypi.python.org/pypi/graphlite

Option 2: projx
http://davebshow.github.io/projx/getting-started/

Option 3: networkx
https://pypi.python.org/pypi/networkx/


Supports load and dump of pickle files. Which format? https://networkx.github.io/documentation/networkx-1.9.1/reference/readwrite.html

Is installation convenient? Keep installing from pip, with few dependencies, cross-platform.

Does it support advanced queries, e.g. Cypher?

How is the performance?

REST APIs with docker container

description

To enable quick integration and use, publish a container image to Docker Hub.

solution

  • Dockerfile
  • python flask API
  • docker push to docker hub

environment

  • version:
    The commit hash (git rev-parse HEAD)

Why do the synonyms of 飞碟 (flying saucer) span so many different types of words?

description

飞碟: ('魔方', 0.48614)
飞碟: ('**广播公司', 0.471286)
飞碟: ('**电视公司', 0.466064)
飞碟: ('台视', 0.462167)
飞碟: ('中视', 0.446324)
飞碟: ('气球', 0.441919)
飞碟: ('手动式', 0.441638)
飞碟: ('TVBS', 0.440763)
飞碟: ('中广', 0.433497)

environment

  • version:
    The commit hash (git rev-parse HEAD)

Problems with the compare function

When calling compare, swapping the values of sen1 and sen2 yields very different results, for example:

synonyms.compare("教学", "老师")
0.879
synonyms.compare("老师", "教学")
0.194

Unrelated out-of-vocabulary phrases/words get similarity 1

description

This problem appears when calling synonyms.compare(): when the two input strings are completely unrelated, the expected return value is 0, but that is not the case. In my experiments, phrases without clear meaning are sometimes compared, and the results are poor. Observation shows that if neither word is in the dictionary, its vector is assigned all zeros, the distance is 0, and the similarity is 1.

current

“一下张磊” “平金磊” 1.0

expected

“一下张磊” “平金磊” 0.0

solution

I tried, in _similarity_distance, setting g=0 when computing the reciprocal norm if both strings' vectors are all zeros. Two problems remain. First, the edit distance u is then 0.5 or 0.25, depending on the difference in the number of spaces after segmentation, where 0 would seem more reasonable. Second, if some of the segmented words are in the dictionary and others are not, the change above no longer applies and the return value is still 1. A possible fix: when a word is not in the dictionary, generate a random number seeded by the word as its vector representation. I am not sure whether that is feasible.

environment

Python2.7

Hello, I cannot find the entry points for concept extraction and automatic summarization

description

The description says "synonyms can be used in many NLU tasks: text alignment, recommendation, similarity computation, semantic shift, keyword extraction, concept extraction, automatic summarization, search engines, etc.", which mentions concept extraction and automatic summarization.

current

I cannot find the entry points for the related classes and methods.


environment

  • version:
    The commit hash (git rev-parse HEAD)

Why similarity for unrelated words is too high; request to share the word2vec training parameters and tips

description

1. While using Synonyms, I found it overestimates the similarity of unrelated words; so far the minimum word similarity I have seen is around 0.5 (e.g. synonyms.compare("骨折", "巴赫", seg=False)=0.544). Is this lower bound by design?
2. Users apparently cannot add their own corpus to your trained word2vec model for further training. Could you share the parameter settings you used to train word2vec on Chinese Wikipedia? My own training performs poorly; even very similar words score no higher than 0.28.
3. Are there rules of thumb relating parameter settings to corpus size that improve model quality?

current

synonyms.compare("骨折", "巴赫", seg=False)=0.544

expected

Follow-up suggestions: allow users to retrain the model on custom corpora, which would broaden its applications. Also consider trying the newer GloVe model, which reportedly measures word similarity better.

solution

environment

Windows 10

  • version:Python 3.6
    The commit hash (git rev-parse HEAD)

language detect

When processing input, detect whether the language is Chinese; if not, return None or raise an exception.

solution

pip install langid
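
The suggestion could look like the following sketch. A simple CJK-codepoint heuristic replaces langid so the example stays dependency-free; both the heuristic and the guarded seg function are illustrations of the issue's proposal, not existing project behavior:

```python
def looks_chinese(text):
    """Crude stand-in for language detection: does the text
    contain any CJK Unified Ideographs?"""
    return any("\u4e00" <= ch <= "\u9fff" for ch in text)

def seg_guarded(text):
    """Hypothetical guard around segmentation: return None for
    non-Chinese input, per the issue's suggestion."""
    if not looks_chinese(text):
        return None
    return text.split()  # placeholder for the real segmenter

print(looks_chinese("中文近义词"))  # True
print(looks_chinese("hello"))      # False
```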

Live sharing: the Synonyms Chinese synonym toolkit

description

Good-quality Chinese synonym dictionaries are scarce, so we considered using word2vec to train a high-quality synonym library that maps "non-standard expressions" to "standard expressions"; that is how Synonyms originated.
In classic information-retrieval systems, similarity is computed by matching: the query, after word segmentation, is strictly matched against the document collection, which ignores the "relations" between words.
word2vec, trained on large amounts of data with contextual information, maps words into a low-dimensional space and produces these "relations", which are distance-based. With them, retrieval can further exploit the distances between words. At the algorithm level, retrieval is thus based on "distance" rather than "matching", on "semantics" rather than "form".

Project address:
https://github.com/huyingxi/Synonyms

Main topics:

  • Application scenarios
  • Existing synonym packages
  • Introduction to the N-gram model
  • How word2vec works
  • Open datasets used
  • The training process
  • The sentence-similarity formula
  • Areas for improvement

Channel:

Gitchat - online sharing

Time:

February 7, 2018

Registration:

Scan the QR code on WeChat

out of vocabulary

description

Segmentation tags "多少钱" with POS n, but looking up its synonyms with synonyms.display("多少钱") returns out of vocabulary, even though vocab.txt contains the record 多少钱 3 nr. What is wrong?


environment

  • version:
    The commit hash (git rev-parse HEAD)

Problems with the synonyms.compare function

description

When using synonyms.compare(s1, s2, seg=False), I often get messages like:
W0320 10:34:21.076664 9464 synonyms.py:154] not exist in w2v model: 付东升
That is normal, but many punctuation marks also trigger the message, which seems unreasonable to me.

expected

Please improve the handling of punctuation, i.e. stopwords.

solution

environment

  • version:
    The commit hash (git rev-parse HEAD)

Error loading vocab.txt

>>> import synonyms

Synonyms load wordseg dict [D:\python34\lib\site-packages\synonyms\data\vocab.txt] ...
Traceback (most recent call last):
File "D:\python34\lib\site-packages\jieba\posseg\__init__.py", line 105, in load_word_tag
word, _, tag = line.split(" ")
ValueError: too many values to unpack (expected 3)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\python34\lib\site-packages\synonyms\__init__.py", line 85, in <module>
_tokenizer.initialize(tokenizer_dict)
File "D:\python34\lib\site-packages\jieba\posseg\__init__.py", line 95, in initialize
self.load_word_tag(self.tokenizer.get_dict_file())
File "D:\python34\lib\site-packages\jieba\posseg\__init__.py", line 109, in load_word_tag
'invalid POS dictionary entry in %s at Line %s: %s' % (f_name, lineno, line))
ValueError: invalid POS dictionary entry in D:\python34\lib\site-packages\synonyms\data\vocab.txt at Line 333405: 福荫 1 v 2 n

It seems jieba does not support multiple POS tags per entry; after manually editing the file to keep only one POS tag per word, the bug went away.
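
The manual workaround described above (keep one POS tag per entry) can be scripted. A sketch assuming the `word freq tag` line format jieba expects and offending lines like `福荫 1 v 2 n`:

```python
def keep_first_pos(line):
    """Keep only `word freq tag` from a vocab.txt line that carries extra
    freq/tag pairs, e.g. '福荫 1 v 2 n' -> '福荫 1 v'."""
    parts = line.strip().split(" ")
    return " ".join(parts[:3])

print(keep_first_pos("福荫 1 v 2 n"))  # 福荫 1 v
print(keep_first_pos("中文 10 nz"))    # unchanged: 中文 10 nz
```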

Sentence similarity accuracy

description

The README currently shows outdated numbers and needs updating:

* Sentence similarity accuracy

Tested on [SentenceSim](https://github.com/fssqawj/SentenceSim/blob/master/dev.txt).


Number of test items: 7516.
With a threshold of 0.5:
  similarity > 0.5: similar;
  similarity < 0.5: not similar.


Evaluation result:


Correct: 6626, wrong: 890, accuracy: 88.15%
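
The evaluation above can be reproduced in a few lines. The scores and labels below are made-up placeholders; the real corpus is SentenceSim:

```python
def threshold_accuracy(scores, labels, threshold=0.5):
    """Fraction of items where (score > threshold) matches the gold label."""
    hits = sum((s > threshold) == bool(y) for s, y in zip(scores, labels))
    return hits / len(scores)

scores = [0.93, 0.20, 0.70, 0.40]  # hypothetical compare() outputs
labels = [1, 0, 0, 0]              # hypothetical gold labels
print(threshold_accuracy(scores, labels))  # 0.75
```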

Why does the synonym list of 减少 (decrease) include 增加 (increase)?

description

synonyms.display(u'减少')
'减少'近义词:

  1. 减少:1.0
  2. 增加:0.89950454
  3. 降低:0.89796096
  4. 减低:0.83169204
  5. 下降:0.806061
  6. 减小:0.79056865
  7. 提高:0.7783943
  8. 增大:0.76636106
  9. 缩减:0.7424295
  10. 减缓:0.7414519

expected

Antonyms should be removed; otherwise the meaning is reversed.

solution

environment

  • version:
    The commit hash (git rev-parse HEAD)

Some sentence-similarity results differ wildly from expectations

description

Some sentence-similarity comparison results differ wildly from what I expected.

current

>>> synonyms.compare("如何申请司法援助","工厂让个人垫付全部养老保险和医疗保险,待本人退休后,根据工厂的经济状况,按先后顺序返还。请问这样合法么?强制职工签署相关合同合法么?",seg=True)
0.951
>>> synonyms.compare("如何申请司法援助","如何申请司法援助?",seg=True)
0.896
>>> synonyms.compare("如何申请司法援助?","如何申请司法援助?",seg=True)
1.0

expected

>>> synonyms.compare("如何申请司法援助","工厂让个人垫付全部养老保险和医疗保险,待本人退休后,根据工厂的经济状况,按先后顺序返还。请问这样合法么?强制职工签署相关合同合法么?",seg=True)
0.1
>>> synonyms.compare("如何申请司法援助","如何申请司法援助?",seg=True)
0.99
>>> synonyms.compare("如何申请司法援助?","如何申请司法援助?",seg=True)
1.0

solution

environment

  • version:
    The commit hash (git rev-parse HEAD)

"import synonyms" Error--py3

description

import synonyms

Synonyms on loading vocab ...
Synonyms on loading stopwords ...
Traceback (most recent call last):
File "C:\Install\Anaconda35\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<stdin>", line 1, in <module>
import synonyms
File "C:\Install\PyCharm 2017.2.4\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 21, in do_import
module = self.system_import(name, *args, **kwargs)
File "C:\Install\Anaconda35\lib\site-packages\synonyms\__init__.py", line 113, in <module>
_load_stopwords(fin_stopwords_path)
File "C:\Install\Anaconda35\lib\site-packages\synonyms\__init__.py", line 108, in _load_stopwords
stopwords = words.readlines()
UnicodeDecodeError: 'gbk' codec can't decode byte 0x8a in position 2: illegal multibyte sequence

environment

python3.6;windows10,64bit

import sys
sys.stdout.encoding
Out[6]: 'UTF-8'

  • version:
    V1.9

add info: py3.6 & sys.stdout.encoding == 'UTF-8'.

AssertionError: seg len should be 2

File "/home//anaconda3/lib/python3.6/site-packages/synonyms/__init__.py", line 184, in compare
w2, t2 = _segment_words(s2)
File "/home//anaconda3/lib/python3.6/site-packages/synonyms/__init__.py", line 150, in _segment_words
assert len() == 2, "seg len should be 2"
AssertionError: seg len should be 2
I don't know why it has this problem?

two sentences are partly equal

description

current

print(synonyms.compare('目前你用什么方法来保护自己', '目前你用什么方法'))
1.0

expected

Two sentences are partly equal but not fully equal; it should not return 1 here.

solution

environment

  • version:
    The commit hash (git rev-parse HEAD)

ImportError: cannot import name 'KeyedVectors'

I used 'pip install -U synonyms' and it installed successfully, but when I import the library it doesn't work. How can I solve this?
Thanks in advance.
#####################################
import synonyms
Traceback (most recent call last):

File "<stdin>", line 1, in <module>
import synonyms

File "C:\Anaconda3\lib\site-packages\synonyms\__init__.py", line 47, in <module>
from word2vec import KeyedVectors

ImportError: cannot import name 'KeyedVectors'
#######################################

environment

windows8, python3.5.2, anaconda

Sentence-similarity results do not match the numbers in the README?

description

I tested with the sentences from the demo, and the results do not match those given in the README.

current

旗帜引领方向 vs 道路决定命运: 0.218
旗帜引领方向 vs 旗帜指引道路: 0.353
发生历史性变革 vs 发生历史性变革: 1.0

expected

旗帜引领方向 vs 道路决定命运: 0.429
旗帜引领方向 vs 旗帜指引道路: 0.93
发生历史性变革 vs 发生历史性变革: 1.0

solution

environment

python 2.7

Version 3.3.9: bug #51 still exists

description

Looking at the source: after the try block, g still fails to find a vector, but no error is raised, so g is a vector of nan values rather than 0, which makes the computation fail.


environment

  • version:3.3.9
    The commit hash (git rev-parse HEAD)

enhance Synonyms#compare

description

Use a more advanced method to compute the similarity of two sentences.

related with #4

solution

Leverage more advanced distance measurements.

What is the similarity computation formula?

Currently I mostly use textrank + word2vec.
What algorithm does this tool use? I would like to run a comparison and, if possible, submit my algorithm as a PR.
