

rasa_nlu_chi's People

Contributors

akelad, akshayagarwal, amn41, choufractal, crownpku, dependabot[bot], deubaka, doclambda, gelorin, ghostvv, howl-anderson, jinhong-, joeyfaulkner, jreeter, keineahnung2345, leachim, milutz, parthsharma1996, paschmann, phildionne, phlf, plauto, ricwo, ritwikgopi, skreutzberger, tmbo, twerkmeister, vinvinod, wrathagom, yulkes


rasa_nlu_chi's Issues

Project name cannot be found

After starting the service, the project name is never found and I don't know why. The exact failure:

curl -XPOST 127.0.0.1:5000/parse -d '{"q":"我发烧了该吃什么药?","project": "rasa_nlu_test", "model": "model_20180104-154114"}' | python -mjson.tool

{
    "error": "No project found with name 'rasa_nlu_test'."
}

Rasa NLU version (e.g. 0.10.1):

Used backend / pipeline (mitie, spacy_sklearn, ...):
used sample_configs/config_jieba_mitie_sklearn.json

Operating system (windows, osx, ...):
osx

Issue:
An error is logged when starting the Rasa NLU server:
WARNING:rasa_nlu.project:Failed to list models of project default. u'No persistent storage specified. Supported values are aws, gcs'

Testing the service over HTTP:
~ % curl -XPOST localhost:5000/parse -d '{"q":"我发烧了该吃什么药?", "project": "rasa_nlu_test", "model": "model_20171019-092523"}' | python -mjson.tool
{
    "error": "No project found with name 'rasa_nlu_test'."
}

Content of configuration file (if used & relevant):

metadata.json under the model directory:
{
    "entity_synonyms": "entity_synonyms.json",
    "pipeline": [
        "nlp_mitie",
        "tokenizer_jieba",
        "ner_mitie",
        "ner_synonyms",
        "intent_entity_featurizer_regex",
        "intent_featurizer_mitie",
        "intent_classifier_sklearn"
    ],
    "mitie_feature_extractor_fingerprint": 1810187658478185215,
    "regex_featurizer": null,
    "language": "zh",
    "mitie_file": "./data/total_word_feature_extractor_zh.dat",
    "intent_classifier_sklearn": "intent_classifier.pkl",
    "trained_at": "20171019-092523",
    "training_data": "training_data.json",
    "entity_extractor_mitie": "entity_extractor.dat",
    "rasa_nlu_version": "0.10.1"
}

Wrong entity end.

WARNING:rasa_nlu.extractors.mitie_entity_extractor:Example skipped: Invalid entity {u'start': 0, u'end': 6, u'value': u'\u6c5f\u94c3E200', u'entity': u'\u8f66\u7cfb'} in example '江铃E200VS东风风神AX7新能源': entities must span whole tokens. Wrong entity end.
The error says the entity span is annotated wrongly, yet the jieba tokenization matches the annotated span exactly:

for i in jieba.tokenize('江铃E200VS东风风神AX7新能源'):
    print(i)
('江铃E200', 0, 6)
('VS', 6, 8)
('东风风神AX7新能源', 8, 18)

What could be the cause?
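
One way to narrow this down is to verify the annotation against the exact token boundaries the pipeline sees at training time (the fork's tokenizer may load a user dictionary on top of stock jieba, which can shift boundaries). A minimal check, written against plain jieba as an approximation of MITIE's whole-token rule:

import jieba

def span_is_token_aligned(text, start, end):
    # MITIE only accepts an entity whose start/end fall on token boundaries.
    starts, ends = set(), set()
    for word, tok_start, tok_end in jieba.tokenize(text):
        starts.add(tok_start)
        ends.add(tok_end)
    return start in starts and end in ends

print(span_is_token_aligned('江铃E200VS东风风神AX7新能源', 0, 6))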

About building one's own corpus

Hello, two questions please.
1. Where did you download the original Baidu Baike corpus, and in what format is it?
2. Suppose I want to build a corpus for a vertical domain myself and train on it, without depending on Baidu Baike. How should I proceed? Can arbitrary text files go into /path/to/your/folder_of_cutted_text_files? Are there any requirements on the file format: plain text files? Is JSON acceptable, or some other format?
To be honest, I couldn't quite follow the documentation on GitHub and in the blog, so I don't know how to operate it.

Thanks.
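
For reference, the feature-extractor training step (MITIE's wordrep tool) consumes a folder of plain-text files whose contents have already been segmented into space-separated words; JSON is not involved at this stage. A minimal preparation sketch, assuming raw .txt files under a hypothetical raw_corpus/ directory:

import glob
import os

import jieba

os.makedirs("folder_of_cutted_text_files", exist_ok=True)
for path in glob.glob("raw_corpus/*.txt"):
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # wordrep wants whitespace-separated tokens in plain-text files, one file per document
    out_path = os.path.join("folder_of_cutted_text_files", os.path.basename(path))
    with open(out_path, "w", encoding="utf-8") as out:
        out.write(" ".join(jieba.cut(text)))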


Support for the yaha tokenizer

Hello, this is just some information for people who want to do NER.

I have found that the jieba tokenizer is not very good at tokenizing Chinese surnames. For example, "我姓林" is tokenized into "我" and "姓林". So I want to use the yaha tokenizer instead.
So far I have written my own yaha_tokenizer.py and made the corresponding changes in registry.py. If you also want to do NER, you can visit my repository:
https://github.com/keineahnung2345/Rasa_NLU_Chi
and find the two files.
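
For readers who want to roll their own, here is a rough sketch of what such a component can look like against the rasa_nlu API of this era. The Tokenizer/Token imports follow the bundled jieba tokenizer, while yaha's Cuttor.cut yielding plain surface words is an assumption; consult the repository above for the real files:

from yaha import Cuttor

from rasa_nlu.components import Component
from rasa_nlu.tokenizers import Token, Tokenizer

class YahaTokenizer(Tokenizer, Component):
    name = "tokenizer_yaha"  # the name referenced from the pipeline config

    provides = ["tokens"]

    def __init__(self):
        super(YahaTokenizer, self).__init__()
        self.cuttor = Cuttor()

    def train(self, training_data, config, **kwargs):
        for example in training_data.training_examples:
            example.set("tokens", self.tokenize(example.text))

    def process(self, message, **kwargs):
        message.set("tokens", self.tokenize(message.text))

    def tokenize(self, text):
        # Recover character offsets by scanning, since cut() yields only the words.
        tokens, offset = [], 0
        for word in self.cuttor.cut(text):
            start = text.index(word, offset)
            tokens.append(Token(word, start))
            offset = start + len(word)
        return tokens

The class then has to be added to component_classes in rasa_nlu/registry.py so that "tokenizer_yaha" resolves from a pipeline definition.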

How do I give the system a default fallback intent?


After I added multiple intents and their training data, I tested utterances that belong to none of the intents, but the system always recognizes one of the existing intents anyway. Should I add an intent dedicated to fallback and train it on all kinds of out-of-scope data? If so, how large would that training set have to be, or is there another way? Thanks.
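
rasa_nlu itself only returns a ranked list of intents, so a common lightweight alternative to a trained catch-all intent is a confidence threshold applied by the caller. A minimal sketch (the 0.3 threshold is an arbitrary starting point; tune it on held-out, out-of-scope utterances):

FALLBACK_THRESHOLD = 0.3

def resolve_intent(parse_result, threshold=FALLBACK_THRESHOLD):
    # parse_result: the dict returned by /parse or Interpreter.parse
    intent = parse_result.get("intent") or {}
    if intent.get("confidence", 0.0) < threshold:
        return "fallback"
    return intent["name"]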

Win7, Python 3.5: installing MITIE keeps failing. Does anyone have a step-by-step guide?

Rasa NLU version (e.g. 0.11.3):

Used backend / pipeline ("nlp_mitie", "tokenizer_jieba", "ner_mitie", "ner_synonyms", "intent_entity_featurizer_regex", "intent_featurizer_mitie", "intent_classifier_sklearn", ...):

Operating system (windows, osx, ...):
windows

Issue:
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "C:\Python35\lib\mitie\__init__.py", line 1, in <module>
from .mitie import *
File "C:\Python35\lib\mitie\mitie.py", line 36, in <module>
f = ctypes.CDLL(most_recent)
File "C:\Python35\lib\ctypes\__init__.py", line 351, in __init__
self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] 找不到指定的模块。 (The specified module could not be found.)

About spaCy + sklearn vs. MITIE + sklearn

Hello~
I've been reading the official rasa_nlu documentation recently. It lists several pipeline options, and besides the two you describe there is this recommendation:
Best for most: spaCy + sklearn
My question: can that combination plus jieba be used for Chinese? I don't know spaCy, so I'd like to ask why you originally chose jieba + MITIE + sklearn rather than that combination. Thanks!

Doesn't work on Windows + PyCharm + Python 3

rasa NLU version (e.g. 0.9.2):
0.10.0a5
Used backend / pipeline (mitie, spacy_sklearn, ...):
MITIE+Jieba+sklearn
Operating system (windows, osx, ...):
windows
Issue:
E:\PychatmProjects\rasa_nlu_chi>pip install sklearn
Collecting sklearn
Downloading http://mirrors.aliyun.com/pypi/packages/1e/7a/dbb3be0ce9bd5c8b7e3d87328e79063f8b263b2b1bfa4774cb1147bfcd3f/sklearn-0.0.tar.gz
Requirement already satisfied: scikit-learn in d:\program files\python36\lib\site-packages (from sklearn)
Installing collected packages: sklearn
Running setup.py install for sklearn ... done
Successfully installed sklearn-0.0

E:\PychatmProjects\rasa_nlu_chi>python -m rasa_nlu.train -c sample_configs/config_jieba_mitie_sklearn.json
INFO:rasa_nlu.components:Couldn't read dev-requirements.txt. Error: a bytes-like object is required, not 'str'
Traceback (most recent call last):
File "D:\Program Files\Python36\lib\runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "D:\Program Files\Python36\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "E:\PychatmProjects\rasa_nlu_chi\rasa_nlu\train.py", line 116, in
do_train(config)
File "E:\PychatmProjects\rasa_nlu_chi\rasa_nlu\train.py", line 104, in do_train
trainer = Trainer(config, component_builder)
File "E:\PychatmProjects\rasa_nlu_chi\rasa_nlu\model.py", line 120, in init
components.validate_requirements(config.pipeline)
File "E:\PychatmProjects\rasa_nlu_chi\rasa_nlu\components.py", line 96, in validate_requirements
"Please install {}".format(", ".join(failed_imports)))
Exception: Not all required packages are installed. To use this pipeline, you need to install the missing dependencies. Please install sklearn, mitie

Failed to find component class for 'tokenizer_jieba'

Rasa NLU version (e.g. 0.7.3):
latest
Used backend / pipeline (mitie, spacy_sklearn, ...):
config_jieba_mitie_sklearn
Operating system (windows, osx, ...):
win10 64
Issue:
An exception is thrown when training with the example data and the bundled configuration:

File "C:\Users\10309\Anaconda3\lib\site-packages\rasa_nlu\registry.py", line 136, in get_component_class

Exception: Failed to find component class for 'tokenizer_jieba'. Unknown component name. Check your configured pipeline and make sure the mentioned component is not misspelled. If you are creating your own component, make sure it is either listed as part of the component_classes in rasa_nlu.registry.py or is a proper name of a class in a module.

I have confirmed that jieba is installed.
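
One likely cause, offered here as an assumption rather than a confirmed diagnosis: 'tokenizer_jieba' only exists in the Rasa_NLU_Chi fork, so if a stock rasa_nlu from PyPI shadows the fork on sys.path, the registry lookup fails even though jieba itself is installed. A quick check:

from rasa_nlu import registry

# Should point into the Rasa_NLU_Chi checkout, not a stock rasa_nlu in site-packages.
print(registry.__file__)

# 'tokenizer_jieba' should appear here; if not, the fork is not the package being imported.
print([c.name for c in registry.component_classes])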

MITIE out of memory: how much RAM does it actually need?

Issue: std::bad_alloc (out of memory)

Output before the failure:
number of raw ASCII files found: 513
num words: 200000
saving word counts to top_word_counts.dat
number of raw ASCII files found: 513
Sample 50000000 random context vectors
Now do CCA (left size: 50000000, right size: 50000000).
std::bad_alloc

The corpus is 8 GB of Chinese text, already segmented with jieba. On a 32 GB machine it ran out of memory, and it still overflows on 64 GB. How should I handle this? Go up to 128 GB? How can I tell in advance whether the memory is enough, and roughly how much is needed?

No example data under rasa named demo-rasa_zh_movie.json

Rasa NLU version (e.g. 0.7.3): latest from git

Used backend / pipeline (mitie, spacy_sklearn, ...): mitie
Operating system (windows, osx, ...): Mac

Issue:
The configuration below points at ./data/examples/rasa/demo-rasa_zh_movie.json, but no such file ships with the repository.

Content of configuration file (if used & relevant): config_jieba_mitie_sklearn.json

{
  "name": "rasa_nlu_test",
  "pipeline": ["nlp_mitie",
        "tokenizer_jieba",
        "ner_mitie",
        "ner_synonyms",
        "intent_entity_featurizer_regex",
        "intent_featurizer_mitie",
        "intent_classifier_sklearn"],
  "language": "zh",
  "mitie_file": "./data/total_word_feature_extractor_zh.dat",
  "path" : "./models",
  "data" : "./data/examples/rasa/demo-rasa_zh_movie.json"
}

server.py: error: unrecognized arguments: --server_model_dirs

rasa NLU version (e.g. 0.9.2):
0.9.2
Used backend / pipeline (mitie, spacy_sklearn, ...):
MITIE+Jieba+sklearn
Operating system (windows, osx, ...):
osx
Issue:
server.py: error: unrecognized arguments: --server_model_dirs=/...
With rasa_nlu installed via pip, --server_model_dirs works,
but when installed via git clone followed by python setup.py install,
--server_model_dirs is not accepted.

Can one config train Chinese and English at the same time ("en", "zh")?

The config's language setting only accepts one primary language. At the moment I train separate configs and have to restart a different server each time before I can parse the corresponding language. Is there a way to train Chinese and English in the same config, so that one server can later parse Chinese, English, or even mixed Chinese-English input?
Thanks!
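
I don't believe a single config supports two languages, but one common workaround (sketched here as a suggestion, not a repository feature) is to keep the two separately trained models loaded side by side and route each utterance with a cheap language check. zh_interpreter and en_interpreter below are assumed to be Interpreter.load results for the two models:

def looks_chinese(text):
    # Crude heuristic: treat the utterance as Chinese if it contains any CJK ideograph.
    return any('\u4e00' <= ch <= '\u9fff' for ch in text)

def parse(text):
    interpreter = zh_interpreter if looks_chinese(text) else en_interpreter
    return interpreter.parse(text)

Mixed-language sentences would still go to whichever model the heuristic picks, so this only approximates true bilingual support.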

Running python -m rasa_nlu.train -c config_jieba_mitie_sklearn.json -d data/examples/rasa/demo-rasa_zh.json --path models fails

Rasa NLU version:

Operating system (windows, osx, ...): Linux version 3.10.0_3-0-0-15

Content of model configuration file:

Issue: When training the sample data with python -m rasa_nlu.train -c config_jieba_mitie_sklearn.json -d data/examples/rasa/demo-rasa_zh.json --path models, the following error is raised:
Traceback (most recent call last):
File "/home/work/.pyenv/versions/2.7.14/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/home/work/.pyenv/versions/2.7.14/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/work/tzw/rasa_nlu/Rasa_NLU_Chi-master/rasa_nlu/train.py", line 174, in <module>
num_threads=cmdline_args.num_threads)
File "/home/work/tzw/rasa_nlu/Rasa_NLU_Chi-master/rasa_nlu/train.py", line 143, in do_train
trainer = Trainer(cfg, component_builder)
File "rasa_nlu/model.py", line 146, in __init__
components.validate_requirements(cfg.component_names)
File "rasa_nlu/config.py", line 146, in component_names
return [c.get("name") for c in self.pipeline]
AttributeError: 'unicode' object has no attribute 'get'

What is the cause?
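
A plausible reading of the traceback (an inference, not something confirmed in this thread): component_names calls c.get("name") on every pipeline entry, so this version expects the newer YAML-style config whose pipeline is a list of dicts, while the old JSON config supplies a list of plain strings. The repository's sample_configs/config_jieba_mitie_sklearn.yml (used successfully in a later issue on this page) has roughly the expected shape:

language: "zh"

pipeline:
- name: "nlp_mitie"
  model: "data/total_word_feature_extractor_zh.dat"
- name: "tokenizer_jieba"
- name: "ner_mitie"
- name: "ner_synonyms"
- name: "intent_entity_featurizer_regex"
- name: "intent_featurizer_mitie"
- name: "intent_classifier_sklearn"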

Why is training so slow for me? It always gets stuck at Part II and stops moving. What can I do?

Part I: train segmenter
words in dictionary: 200000
num features: 271
now do training
C: 20
epsilon: 0.01
num threads: 1
cache size: 5
max iterations: 2000
loss per missed segment: 3
C: 20 loss: 3 0.795918
C: 35 loss: 3 0.784257
C: 20 loss: 4.5 0.804665
C: 5 loss: 3 0.790087
C: 20 loss: 1.5 0.74344
C: 17.5 loss: 4.05 0.803207
C: 20 loss: 4.8 0.80758
C: 16.7825 loss: 4.99261 0.816327
C: 10.769 loss: 5.44081 0.816327
C: 18.1567 loss: 5.22501 0.819242
C: 20.1353 loss: 5.54356 0.822157
C: 25.3591 loss: 6.08173 0.816327
C: 20.3988 loss: 5.69292 0.822157
C: 17.6169 loss: 5.82141 0.819242
C: 21.7838 loss: 5.45424 0.814869
C: 19.1881 loss: 5.57563 0.8207
best C: 20.1353
best loss: 5.54356
num feats in chunker model: 4095
train: precision, recall, f1-score: 0.897574 0.970845 0.932773
Part I: elapsed time: 269 seconds.

Part II: train segment classifier
now do training
num training samples: 762

It gets to this point and goes no further. So frustrating.
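
Two hints, offered as assumptions rather than a confirmed diagnosis: the log shows num threads: 1, and the train.py usage text quoted further down this page exposes a -t NUM_THREADS flag, so Part II may simply be grinding single-threaded on the segment classifier. Something like the following may help (the -d flag is required in newer versions; adjust paths to your setup):

python -m rasa_nlu.train -c sample_configs/config_jieba_mitie_sklearn.json -d data/examples/rasa/demo-rasa_zh.json -t 4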

How do I extend these intent JSON files?

Issue:

There are very few intent JSON examples at the moment. How do I extend them?

How do you generate an answer after entity and intent recognition?

After entity and intent recognition we have the question's intent and entity information (example below). How do we then generate an answer? Is there a development tool for that part?
{ "entities": [ { "extractor": "ner_mitie", "confidence": null, "end": 2, "value": "明天", "entity": "weatherDate", "start": 0 } ], "intent": { "confidence": 0.99102248555852168, "name": "getWeather" }, "text": "明天会下雨吗?", "intent_ranking": [ { "confidence": 0.99102248555852168, "name": "getWeather" }, { "confidence": 0.0048164909262445208, "name": "baike" }, { "confidence": 0.0024581628351175943, "name": "assistant" }, { "confidence": 0.0017028606801161609, "name": "playMusic" } ] }

object has no attribute '_formatter_parser'

Hello! @crownpku
I followed the steps in your article one by one and downloaded the pre-trained total_word_feature_extractor_zh.dat from the Baidu Pan link, but running a test gives the error below. Could you help take a look? Thanks!

curl -XPOST localhost:5000/parse -d '{"q":"我发烧了该吃什么药?", "project": "default", "model": "model_20171030-153135"}' | python -mjson.tool
{
    "error": "'NoneType' object has no attribute '_formatter_parser'"
}

What do I do about garbled output?

Rasa NLU version (e.g. 0.7.3):

Used backend / pipeline (mitie, spacy_sklearn, ...):
mitie, spacy_sklearn
Operating system (windows, osx, ...):
linux
Issue:
Chinese is not displayed properly:
{
    "entities": [
        {
            "end": 3,
            "entity": "disease",
            "extractor": "ner_mitie",
            "start": 1,
            "value": "\u53d1\u70e7"
        }
    ],
    "intent": {
        "confidence": 0.5544729787361836,
        "name": "medical"
    },
    "intent_ranking": [
        { "confidence": 0.5544729787361836, "name": "medical" },
        { "confidence": 0.14118447377605906, "name": "affirm" },
        { "confidence": 0.1271251858682881, "name": "restaurant_search" },
        { "confidence": 0.10487447193717363, "name": "goodbye" },
        { "confidence": 0.07234288968229556, "name": "greet" }
    ],
    "text": "\u6211\u53d1\u70e7\u4e86\u8be5\u5403\u4ec0\u4e48\u836f\uff1f"
}
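
This is not data corruption: JSON encoders escape non-ASCII characters as \uXXXX by default, and python -mjson.tool preserves those escapes. The underlying text is intact; re-rendering without ASCII escaping shows the Chinese, for example:

import json

raw_response = '{"text": "\\u6211\\u53d1\\u70e7\\u4e86"}'  # stand-in for the body returned by /parse
result = json.loads(raw_response)
print(json.dumps(result, ensure_ascii=False, indent=4))  # prints "text": "我发烧了"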


After training, NLU intent recognition is wrong

Rasa NLU version (e.g. 0.7.3):
'0.10.5'

Used backend / pipeline (mitie, spacy_sklearn, ...):
{
    "name": "rasa_nlu_test",
    "pipeline": ["nlp_mitie",
        "tokenizer_jieba",
        "ner_mitie",
        "ner_synonyms",
        "intent_entity_featurizer_regex",
        "intent_featurizer_mitie",
        "intent_classifier_sklearn"],
    "language": "zh",
    "mitie_file": "../data/total_word_feature_extractor_zh.dat",
    "path" : "../models",
    "data" : "../data/nlu.md"
}

Operating system (windows, osx, ...):
WINDOWS 10 x64 + Python 3.6
Issue:

Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\Benny\AppData\Local\Temp\jieba.cache
Loading model cost 1.053 seconds.
Prefix dict has been built succesfully.
Fitting 2 folds for each of 6 candidates, totalling 12 fits
C:\Users\Benny\AppData\Local\Programs\Python\Python36\lib\site-packages\sklearn\metrics\classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
(the warning above is repeated 12 times, once per fit)
[Parallel(n_jobs=1)]: Done 12 out of 12 | elapsed: 0.0s finished

Content of configuration file (if used & relevant):

intent:greet

  • hey
  • hello
  • hi
  • 在吗?
  • 早上好
  • 晚上好

intent:goodbye

  • bye
  • 再见
  • 一天好心情
  • 再会
  • 拜拜

intent:mood_affirm

  • yes
  • 是的
  • 当然
  • 听起来不错
  • 正确

intent:mood_deny

  • no
  • 不好
  • 我不觉得
  • 不喜欢那样
  • 没办法

intent:mood_great

  • 棒极了
  • 我感觉很好
  • 非常好
  • 太好了

intent:mood_unhappy

  • 很糟糕
  • 很伤心
  • 心情不好
  • 我很失望

Intent recognition behavior:

interpreter = Interpreter.load(model_dir, config)
intent_entities = interpreter.parse('没办法')

Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\Benny\AppData\Local\Temp\jieba.cache
Loading model cost 0.873 seconds.
Prefix dict has been built succesfully.
{'intent': {'name': 'mood_affirm', 'confidence': 0.22176080230671813}, 'entities': [], 'intent_ranking': [{'name': 'mood_affirm', 'confidence': 0.22176080230671813}, {'name': 'greet', 'confidence': 0.21021923238681889}, {'name': 'goodbye', 'confidence': 0.1875789668700675}, {'name': 'mood_unhappy', 'confidence': 0.13687120608411668}, {'name': 'mood_deny', 'confidence': 0.13181582565934799}, {'name': 'mood_great', 'confidence': 0.11175396669293076}], 'text': '没办法'}

The intent that should be recognized is mood_deny,

but the actual recognized intent is mood_affirm.

Where might the problem be?
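
Note how flat the ranking above is (0.222 vs 0.210 vs 0.188 ... across six intents, where 0.167 would be uniform). That flatness typically indicates the classifier has far too few examples per intent (4-6 here) to separate them, and '没办法' shares no tokens with the other mood_deny examples. Before anything else it helps to measure how well the model fits even its own training set, reusing the interpreter loaded above:

training_examples = {
    "mood_affirm": ["yes", "是的", "当然", "听起来不错", "正确"],
    "mood_deny": ["no", "不好", "我不觉得", "不喜欢那样", "没办法"],
    # ... the remaining intents from nlu.md
}

misses = 0
for expected, texts in training_examples.items():
    for text in texts:
        predicted = interpreter.parse(text)["intent"]["name"]
        if predicted != expected:
            misses += 1
            print("MISS: {!r} -> {} (expected {})".format(text, predicted, expected))
print("{} misses".format(misses))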

Windows: access denied

Hi, I ran into an "access denied" error when running python setup.py install.

The full log is below; I'd be hugely grateful if you could take a look.
(By the way, your work is really impressive.)
creating build\bdist.win-amd64\egg\EGG-INFO
copying rasa_nlu.egg-info\PKG-INFO -> build\bdist.win-amd64\egg\EGG-INFO
copying rasa_nlu.egg-info\SOURCES.txt -> build\bdist.win-amd64\egg\EGG-INFO
copying rasa_nlu.egg-info\dependency_links.txt -> build\bdist.win-amd64\egg\EGG-INFO
copying rasa_nlu.egg-info\requires.txt -> build\bdist.win-amd64\egg\EGG-INFO
copying rasa_nlu.egg-info\top_level.txt -> build\bdist.win-amd64\egg\EGG-INFO
zip_safe flag not set; analyzing archive contents...
creating 'dist\rasa_nlu-0.12.2-py3.5.egg' and adding 'build\bdist.win-amd64\egg' to it
removing 'build\bdist.win-amd64\egg' (and everything under it)
Processing rasa_nlu-0.12.2-py3.5.egg
Removing c:\users\95890\appdata\local\programs\python\python35\lib\site-packages\rasa_nlu-0.12.2-py3.5.egg
Copying rasa_nlu-0.12.2-py3.5.egg to c:\users\95890\appdata\local\programs\python\python35\lib\site-packages
rasa-nlu 0.12.2 is already the active version in easy-install.pth

Installed c:\users\95890\appdata\local\programs\python\python35\lib\site-packages\rasa_nlu-0.12.2-py3.5.egg
Processing dependencies for rasa-nlu==0.12.2
Searching for pathlib
Reading https://pypi.python.org/simple/pathlib/
Downloading https://files.pythonhosted.org/packages/ac/aa/9b065a76b9af472437a0059f77e8f962fe350438b927cb80184c32f075eb/pathlib-1.0.1.tar.gz#sha256=6940718dfc3eff4258203ad5021090933e5c04707d5ca8cc9e73c94a7894ea9f
Best match: pathlib 1.0.1
Processing pathlib-1.0.1.tar.gz
Writing C:\Users\95890\AppData\Local\Temp\easy_install-6_rvpc5v\pathlib-1.0.1\setup.cfg
Running pathlib-1.0.1\setup.py -q bdist_egg --dist-dir C:\Users\95890\AppData\Local\Temp\easy_install-6_rvpc5v\pathlib-1.0.1\egg-dist-tmp-r18qri_1
zip_safe flag not set; analyzing archive contents...
Copying pathlib-1.0.1-py3.5.egg to c:\users\95890\appdata\local\programs\python\python35\lib\site-packages
Adding pathlib 1.0.1 to easy-install.pth file

Installed c:\users\95890\appdata\local\programs\python\python35\lib\site-packages\pathlib-1.0.1-py3.5.egg
Searching for humanfriendly>=4.7
Reading https://pypi.python.org/simple/humanfriendly/
C:\Users\95890\AppData\Local\Programs\Python\Python35\lib\site-packages\setuptools\pep425tags.py:89: RuntimeWarning: Config variable 'Py_DEBUG' is unset, Python ABI tag may be incorrect
warn=(impl == 'cp')):
C:\Users\95890\AppData\Local\Programs\Python\Python35\lib\site-packages\setuptools\pep425tags.py:93: RuntimeWarning: Config variable 'WITH_PYMALLOC' is unset, Python ABI tag may be incorrect
warn=(impl == 'cp')):
Downloading https://files.pythonhosted.org/packages/4a/4f/16881101fb87370fd62bdc1b7b895c505c6525a9b07e10571bf41899937b/humanfriendly-4.12.1-py2.py3-none-any.whl#sha256=72a2efa8b477abb4fbdb3e5e224942c13e201c1df8c70fc244ef13b982ceb010
Best match: humanfriendly 4.12.1
Processing humanfriendly-4.12.1-py2.py3-none-any.whl
Installing humanfriendly-4.12.1-py2.py3-none-any.whl to c:\users\95890\appdata\local\programs\python\python35\lib\site-packages
writing requirements to c:\users\95890\appdata\local\programs\python\python35\lib\site-packages\humanfriendly-4.12.1-py3.5.egg\EGG-INFO\requires.txt
Adding humanfriendly 4.12.1 to easy-install.pth file
Installing humanfriendly-script.py script to C:\Users\95890\AppData\Local\Programs\Python\Python35\Scripts
Installing humanfriendly.exe script to C:\Users\95890\AppData\Local\Programs\Python\Python35\Scripts

Installed c:\users\95890\appdata\local\programs\python\python35\lib\site-packages\humanfriendly-4.12.1-py3.5.egg
Searching for s3transfer<0.2.0,>=0.1.10
Reading https://pypi.python.org/simple/s3transfer/
Downloading https://files.pythonhosted.org/packages/d7/14/2a0004d487464d120c9fb85313a75cd3d71a7506955be458eebfe19a6b1d/s3transfer-0.1.13-py2.py3-none-any.whl#sha256=c7a9ec356982d5e9ab2d4b46391a7d6a950e2b04c472419f5fdec70cc0ada72f
Best match: s3transfer 0.1.13
Processing s3transfer-0.1.13-py2.py3-none-any.whl
Installing s3transfer-0.1.13-py2.py3-none-any.whl to c:\users\95890\appdata\local\programs\python\python35\lib\site-packages
error: [WinError 5] 拒绝访问。(Access is denied.): 'c:\users\95890\appdata\local\programs\python\python35\lib\site-packages\s3transfer-0.1.13-py3.5.egg\s3transfer-0.1.13.dist-info' -> 'c:\users\95890\appdata\local\programs\python\python35\lib\site-packages\s3transfer-0.1.13-py3.5.egg\EGG-INFO'

entities must span whole tokens. Wrong entity end.

@crownpku Hello!
I'd like to know whether there is a good way to deal with the "wrong entity end" problem caused by jieba segmentation errors. For example, I have the following training example:
{
    "text": "给刘三发个短信说车票买到了",
    "intent": "send_message",
    "entities": [
        {
            "start": 1,
            "end": 3,
            "value": "刘三",
            "entity": "contact"
        }
    ]
},

The annotation itself is correct, but jieba segments the sentence wrongly:

print(" ".join(jieba.cut("给刘三发个短信说车票买到了")))
给 刘 三发 个 短信 说 车票 买到 了

so "刘三" ends up with a wrong entity end. Have you run into this kind of problem in practice, and can you offer any advice? Adding every person name to the dictionary is not really feasible. Thanks!
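
For individual cases the standard jieba mechanisms do work, even though (as the question notes) they cannot cover every possible name. A quick sketch of the two options, using the example above:

import jieba

print(" ".join(jieba.cut("给刘三发个短信说车票买到了")))
# 给 刘 三发 个 短信 说 车票 买到 了

# Option 1: register the word programmatically (a high freq makes it win over 三发).
jieba.add_word("刘三", freq=10000)

# Option 2: ship it in a user dictionary file, one "word freq" pair per line,
# loaded via jieba.load_userdict(...); the fork picks such files up from ./jieba_userdict/
# (see the dictionary-related issues elsewhere on this page).

print(" ".join(jieba.cut("给刘三发个短信说车票买到了")))
# 刘三 should now survive as a single token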

服务运行时Jieba的字典未加载

如题,运行服务后,开始工作,显示如下提示:
2018-05-26 12:13:30+0800 [-] No Jieba Default Dictionary found
2018-05-26 12:13:30+0800 [-] No Jieba User Dictionary found
看了一下,文件夹下的用户字典文件是有的,为啥都没加载成功呢?
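
Judging by the loading code quoted in the dictionary-setup issue further down this page, the fork resolves the dictionary folder with a relative glob such as ./jieba_userdict/*, so one likely cause (an assumption, not confirmed here) is that the server was started from a different working directory. A quick check, run the same way the service is launched:

import glob
import os

print(os.getcwd())                      # directory the service actually runs from
print(glob.glob("./jieba_userdict/*"))  # must list your dictionary files, else they are skipped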

How were these intent .json files generated, and how do I extend them?

Rasa NLU version (e.g. 0.7.3):
Used backend / pipeline (mitie, spacy_sklearn, ...):
mitie
Operating system (windows, osx, ...):
ubuntu
Issue:
So many intent files: how were they generated, and how do I obtain more of them through training?

[screenshot: _20180419174906]

"error": "No project found with name 'default'."

rasa NLU version (e.g. 0.9.2):
0.9.2
Used backend / pipeline (mitie, spacy_sklearn, ...):
MITIE+Jieba+sklearn
Operating system (windows, osx, ...):
Ubuntu14.04
Issue:"error": "No project found with name 'default'."
username@linux:$ curl -XPOST localhost:5000/parse -d '{"q":"下次见"}' | python -mjson.tool
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 67 0 50 100 17 8161 2775 --:--:-- --:--:-- --:--:-- 8333
{
"error": "No project found with name 'default'."
}
username@linux:
$ curl -XPOST localhost:5000/parse -d '{"q":"我胃痛,该吃什么药"}' | python -mjson.tool
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 85 0 50 100 35 8140 5698 --:--:-- --:--:-- --:--:-- 8333
{
"error": "No project found with name 'default'."
}
我进行到了最后一步测试过程中无预测结果,总是报错,报错如上文所示,请问是什么原因。如果您能抽空帮我看看将会非常感谢!

缺少yaha模块

在初次运行时,缺少yaha模块。具体异常信息如下:
Traceback (most recent call last):
File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/zhoumeng/rasa_nlu/Rasa_NLU_Chi-master/rasa_nlu/train.py", line 125, in <module>
do_train(config)
File "/home/zhoumeng/rasa_nlu/Rasa_NLU_Chi-master/rasa_nlu/train.py", line 111, in do_train
trainer = Trainer(config, component_builder)
File "rasa_nlu/model.py", line 126, in __init__
components.validate_requirements(config.pipeline)
File "rasa_nlu/components.py", line 77, in validate_requirements
from rasa_nlu import registry
File "rasa_nlu/registry.py", line 39, in <module>
from rasa_nlu.tokenizers.yaha_tokenizer import YahaTokenizer
File "rasa_nlu/tokenizers/yaha_tokenizer.py", line 25, in <module>
from yaha import Cuttor
ImportError: No module named yaha

Python 3.6: OSError: [WinError 126] The specified module could not be found (找不到指定的模块)

Rasa NLU version (e.g. 0.7.3): 0.10.6

Used backend / pipeline (mitie, spacy_sklearn, ...): mitie-jieba-sklearn

Operating system (windows, osx, ...): windows7, x64, python3.6

Issue:
Hello! I was following the tutorial to train the model with Python 3.6. At the step
python -m rasa_nlu.train -c sample_configs/config_jieba_mitie_sklearn.json
the following error appeared, and I don't know how to resolve it:

D:\聊天机器人项目\机器人框架\RASA主体框架\Rasa_NLU_Chi-master>python -m rasa_nlu.train -c sample_configs/config_jieba_mitie_sklearn.json
Building prefix dict from the default dictionary ...
DEBUG:jieba:Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\YJY\AppData\Local\Temp\jieba.cache
DEBUG:jieba:Loading model from cache C:\Users\YJY\AppData\Local\Temp\jieba.cache
Loading model cost 0.477 seconds.
DEBUG:jieba:Loading model cost 0.477 seconds.
Prefix dict has been built succesfully.
DEBUG:jieba:Prefix dict has been built succesfully.
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\ProgramData\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "D:\聊天机器人项目\机器人框架\RASA主体框架\Rasa_NLU_Chi-master\rasa_nlu\train.py", line 125, in <module>
do_train(config)
File "D:\聊天机器人项目\机器人框架\RASA主体框架\Rasa_NLU_Chi-master\rasa_nlu\train.py", line 111, in do_train
trainer = Trainer(config, component_builder)
File "D:\聊天机器人项目\机器人框架\RASA主体框架\Rasa_NLU_Chi-master\rasa_nlu\model.py", line 126, in __init__
components.validate_requirements(config.pipeline)
File "D:\聊天机器人项目\机器人框架\RASA主体框架\Rasa_NLU_Chi-master\rasa_nlu\components.py", line 83, in validate_requirements
failed_imports.update(find_unavailable_packages(component_class.required_packages()))
File "D:\聊天机器人项目\机器人框架\RASA主体框架\Rasa_NLU_Chi-master\rasa_nlu\components.py", line 68, in find_unavailable_packages
importlib.import_module(package)
File "C:\ProgramData\Anaconda3\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 978, in _gcd_import
File "<frozen importlib._bootstrap>", line 961, in _find_and_load
File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "D:\聊天机器人项目\机器人框架\RASA主体框架\Rasa_NLU_Chi-master\mitie.py", line 36, in <module>
f = ctypes.CDLL(most_recent)
File "C:\ProgramData\Anaconda3\lib\ctypes\__init__.py", line 348, in __init__
self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] 找不到指定的模块。 (The specified module could not be found.)

About the total_word_feature_extractor_zh.dat file

Hello, I downloaded this file from the Baidu Pan link, but opening it shows garbled content. I set the encoding to utf-8 and even re-saved the file as utf-8.
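
That is expected: total_word_feature_extractor_zh.dat is a binary MITIE model, not a text file, so no encoding will make it readable in an editor. It is meant to be referenced via mitie_file in the config or loaded through MITIE directly. A quick sanity check, assuming the mitie Python package is installed:

from mitie import total_word_feature_extractor

# Loads the binary model directly; a corrupted or incomplete download would fail here.
twfe = total_word_feature_extractor("data/total_word_feature_extractor_zh.dat")
print(twfe.num_dimensions)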

A question about setting the jieba dictionary

import glob
import jieba

jieba_userdicts = glob.glob("./jieba_userdict/*")
if len(jieba_userdicts) > 0:
    for jieba_userdict in jieba_userdicts:
        print("Loading Jieba User Dictionary at " + str(jieba_userdict))
        jieba.load_userdict(jieba_userdict)
else:
    print("No Jieba User Dictionary found.")

Because I need a traditional-Chinese dictionary as the default, could a line like jieba.set_dictionary('dict.txt.big') be added before the jieba.load_userdict calls?

Thanks!!
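
For reference, jieba does support swapping the main dictionary at runtime via jieba.set_dictionary, and it is independent of user dictionaries, so placing it before the loop above should work. A sketch (dict.txt.big is jieba's official big/traditional dictionary file):

import glob
import jieba

# Replace the default main dictionary before any user dictionaries are loaded.
jieba.set_dictionary("dict.txt.big")

for jieba_userdict in glob.glob("./jieba_userdict/*"):
    jieba.load_userdict(jieba_userdict)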

rasa_nlu.train needs a URL?

root@weizhen-Lenovo-IdeaPad-Y470:/home/weizhen/Rasa_NLU_Chi# python -m rasa_nlu.train -c sample_configs/config_jieba_mitie_sklearn.json
usage: train.py [-h] [-o PATH] (-d DATA | -u URL) -c CONFIG [-t NUM_THREADS]
[--project PROJECT] [--fixed_model_name FIXED_MODEL_NAME]
[--storage STORAGE] [--debug] [-v]
train.py: error: one of the arguments -d/--data -u/--url is required
root@weizhen-Lenovo-IdeaPad-Y470:/home/weizhen/Rasa_NLU_Chi#

Can you have a look?
Thank you very much.
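
The usage text itself points at the fix: -d/--data (or -u/--url) is a required argument in this version, so the training data file must be passed explicitly rather than through the JSON config. A likely invocation, using the paths this repository ships:

python -m rasa_nlu.train -c sample_configs/config_jieba_mitie_sklearn.json -d data/examples/rasa/demo-rasa_zh.json --path models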

How do I use Rasa UI together with Rasa_NLU_Chi?

Hi,
I have set up Rasa_NLU_Chi as well as Rasa UI, but after importing agent data through Rasa UI and triggering training from the frontend, it just keeps running and then errors out, and I don't know why. How did you configure the setup behind the Chinese-model screenshots you show? Does anything in the Rasa UI configuration need to change?

How to generate example file?

rasa NLU version : 0.7.3

Used backend / pipeline (mitie, spacy_sklearn, ...): ["nlp_mitie", "tokenizer_jieba", "ner_mitie", "ner_synonyms", "intent_featurizer_mitie", "intent_classifier_sklearn"]

Operating system: Centos

Issue:
Do I have to generate the example JSON file (e.g. "data/examples/rasa/demo-rasa_zh.json") by hand?

Content of configuration file (if used & relevant):

After curling results from the server, the Chinese parts come back as escape codes

Rasa NLU version (e.g. 0.7.3): 0.10.5

Used backend / pipeline (mitie, spacy_sklearn, ...): nlp_mitie, tokenizer_jieba, ner_mitie, ner_synonyms, intent_featurizer_mitie, intent_classifier_sklearn

Operating system (windows, osx, ...): macOS

Issue: I followed your steps to train and curl, but in the trained data and in the output the Chinese parts all turn into escape codes, as in the screenshot below. Is there a way to fix this? Many thanks!
[screenshot: 2017-12-27 3 10 09]

Content of configuration file (if used & relevant):

Improve intent classification

Currently for Chinese we have two pipelines:
Use MITIE+Jieba:
["nlp_mitie", "tokenizer_jieba", "ner_jieba_mitie", "ner_synonyms", "intent_classifier_jieba_mitie"]
Use MITIE+Jieba+sklearn:
["nlp_mitie", "tokenizer_jieba", "ner_jieba_mitie", "ner_synonyms", "intent_featurizer_jieba_mitie", "intent_classifier_sklearn"]

Both of them give good entity recognition results, but intent classification results are very bad.
For example:

$ curl -XPOST localhost:5000/parse -d '{"q":"我想吃顿面条"}' | python -mjson.tool
{
    "entities": [
        {
            "end": 6,
            "entity": "food",
            "extractor": "ner_jieba_mitie",
            "start": 4,
            "value": "\u9762\u6761"
        }
    ],
    "intent": {
        "confidence": 0.027987433652389405,
        "name": "affirm"
    },
    "text": "\u6211\u60f3\u5403\u987f\u9762\u6761"

"面条" can be successfully extracted, but the intent is WRONGLY assigned to "affirm" instead of "restaurant search".

The problem is probably that Chinese sentences need dedicated features for the intent classification task. Will look into it.

The model won't train

overall accuracy: 1
Part II: elapsed time: 3 seconds.
df.number_of_classes(): 2
Killed

rasa_nlu.train won't train

64 GB of RAM, all required packages installed, default configuration file:

[root@rasa-nlu Rasa_NLU_Chi]# python -m rasa_nlu.train -c sample_configs/config_jieba_mitie_sklearn.yml --data data/examples/rasa/demo-rasa_zh.json --path models
No Jieba Default Dictionary found
No Jieba User Dictionary found
Building prefix dict from the default dictionary ...
Loading model from cache /tmp/jieba.cache
Loading model cost 1.030 seconds.
Prefix dict has been built succesfully.
Training to recognize 2 labels: 'food', 'disease'
Part I: train segmenter
words in dictionary: 200000
num features: 271
now do training
C:           20
epsilon:     0.01
num threads: 1
cache size:  5
max iterations: 2000
loss per missed segment:  3
C: 20   loss: 3 	0.444444
C: 35   loss: 3 	0.444444
C: 20   loss: 4.5 	0.555556
C: 5   loss: 3 	0.444444
C: 20   loss: 1.5 	0.444444
C: 20   loss: 6 	0.555556
C: 20   loss: 5.25 	0.555556
C: 21.5   loss: 4.65 	0.555556
C: 16.9684   loss: 4.72073 	0.555556
C: 18.2577   loss: 4.43072 	0.555556
C: 18.2131   loss: 4.55681 	0.555556
C: 20   loss: 4.4 	0.555556
C: 20.9694   loss: 4.47547 	0.555556
best C: 20
best loss: 4.5
num feats in chunker model: 4095
train: precision, recall, f1-score: 1 1 1 
Part I: elapsed time: 2 seconds.

Part II: train segment classifier
now do training
num training samples: 9
C: 200   f-score: 1
C: 400   f-score: 1
C: 300   f-score: 1
C: 100   f-score: 1
C: 0.01   f-score: 1
C: 50.005   f-score: 1
C: 25.0075   f-score: 1
C: 12.5088   f-score: 1
C: 6.25938   f-score: 1
C: 3.13469   f-score: 1
C: 1.57234   f-score: 1
C: 0.791172   f-score: 1
C: 0.400586   f-score: 1
best C: 0.791172
test on train: 
3 0 
0 6 

overall accuracy: 1
Part II: elapsed time: 3 seconds.
df.number_of_classes(): 2
Fitting 2 folds for each of 6 candidates, totalling 12 fits
/opt/python3.6.5/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
  'precision', 'predicted', average, warn_for)
(the warning above is repeated 6 times)
[Parallel(n_jobs=1)]: Done  12 out of  12 | elapsed:    0.1s finished

Following the tutorial I can't get it to work at all; see details

Rasa NLU version: cloned directly with git clone https://github.com/crownpku/rasa_nlu_chi.git

Operating system (windows, osx, ...): ubuntu

Issue: Following the tutorial, it simply doesn't work. First it said the JSON file couldn't be found, so I changed the path; then it said data is required, so I pointed it at the demo JSON; and then it errors out:

fengyu@Y570:~/rasa_nlu_chi$ python3 -m rasa_nlu.train -c ./sample_configs/config_jieba_mitie_sklearn.json -d ./data/examples/rasa/demo-rasa_zh.json
Traceback (most recent call last):
File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/fengyu/rasa_nlu_chi/rasa_nlu/train.py", line 174, in <module>
num_threads=cmdline_args.num_threads)
File "/home/fengyu/rasa_nlu_chi/rasa_nlu/train.py", line 143, in do_train
trainer = Trainer(cfg, component_builder)
File "/home/fengyu/rasa_nlu_chi/rasa_nlu/model.py", line 146, in __init__
components.validate_requirements(cfg.component_names)
File "/home/fengyu/rasa_nlu_chi/rasa_nlu/config.py", line 146, in component_names
return [c.get("name") for c in self.pipeline]
File "/home/fengyu/rasa_nlu_chi/rasa_nlu/config.py", line 146, in <listcomp>
return [c.get("name") for c in self.pipeline]
AttributeError: 'str' object has no attribute 'get'
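
This looks like the same failure mode as the AttributeError: 'unicode' object has no attribute 'get' issue earlier on this page: component_names expects each pipeline entry to be a dict with a "name" key, so the old JSON config (pipeline as a list of plain strings) no longer works with this version of train.py. The YAML-style sample config sketched in that issue has the expected shape.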

Why is there no intent when the input is Chinese? It always returns "confidence": 1.0, "name": "None"


Why does
curl -XPOST localhost:5000/parse -d '{"q":"hi", "project": "model_20171219-224636", "model": "model_20171219-224636"}' | python -mjson.tool
recognize the intent:
{
    "entities": [],
    "intent": {
        "confidence": 1.0,
        "name": "greet"
    },
    "text": "hi"
}
while
curl -XPOST localhost:5000/parse -d '{"q":"感冒了应该怎么办", "project": "model_20171219-224636", "model": "model_20171219-224636"}' | python -mjson.tool
always returns
{
    "entities": [],
    "intent": {
        "confidence": 1.0,
        "name": "None"
    },
    "text": "\u611f\u5192\u4e86\u5e94\u8be5\u600e\u4e48\u529e"
}
