paddlepaddle / PaConvert
Code Convert to PaddlePaddle Toolkit
License: Apache License 2.0
The result is as follows:
def _relative_position_to_absolute_position(self, x):
    """
    x: [b, h, l, 2*l-1]
    ret: [b, h, l, l]
    """
    batch, heads, length, _ = x.shape
    >>> x = torch.nn.functional.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
    """Class Method: *.view, can not convert, please check whether it is torch.Tensor.*/Optimizer.*/nn.Module.*/torch.distributions.Distribution.*/torch.autograd.function.FunctionCtx.*/torch.profiler.profile.*/torch.autograd.profiler.profile.*, and convert manually"""
    >>> x_flat = x.view([batch, heads, length * 2 * length])
    >>> x_flat = torch.nn.functional.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]))
    """Class Method: *.view, can not convert, please check whether it is torch.Tensor.*/Optimizer.*/nn.Module.*/torch.distributions.Distribution.*/torch.autograd.function.FunctionCtx.*/torch.profiler.profile.*/torch.autograd.profiler.profile.*, and convert manually"""
    >>> x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[:, :, :length, length - 1:]
    return x_final

def _absolute_position_to_relative_position(self, x):
    """
    x: [b, h, l, l]
    ret: [b, h, l, 2*l-1]
    """
    batch, heads, length, _ = x.shape
    >>> x = torch.nn.functional.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]))
    """Class Method: *.view, can not convert, please check whether it is torch.Tensor.*/Optimizer.*/nn.Module.*/torch.distributions.Distribution.*/torch.autograd.function.FunctionCtx.*/torch.profiler.profile.*/torch.autograd.profiler.profile.*, and convert manually"""
    >>> x_flat = x.view([batch, heads, length ** 2 + length * (length - 1)])
    >>> x_flat = torch.nn.functional.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
    """Class Method: *.view, can not convert, please check whether it is torch.Tensor.*/Optimizer.*/nn.Module.*/torch.distributions.Distribution.*/torch.autograd.function.FunctionCtx.*/torch.profiler.profile.*/torch.autograd.profiler.profile.*, and convert manually"""
    >>> x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
    return x_final
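The warnings ask for the `.view` calls to be converted by hand. A minimal hand-edit (a sketch, assuming the converted x is a paddle.Tensor) replaces torch.Tensor.view with paddle.Tensor.reshape, which accepts the same shape list:

# Manual fix for the flagged lines (a sketch): Tensor.view -> Tensor.reshape.
x_flat = x.reshape([batch, heads, length * 2 * length])
x_final = x_flat.reshape([batch, heads, length + 1, 2 * length - 1])[:, :, :length, length - 1:]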
Hi everyone! To improve the efficiency of model migration by automatically transcribing PyTorch code into Paddle code, we have built a code auto-conversion tool: PaddlePaddle Code Convert Toolkit. It already supports automatic conversion of 1000+ PyTorch APIs, and this round we are opening up conversion-rule development for 408 APIs. PRs to help support the automatic conversion are very welcome 🎉🎉🎉.
Through this activity you can gain a more detailed understanding of the usage and design differences between the PyTorch and Paddle frameworks, and deepen your familiarity with deep-learning frameworks.
The tasks are recorded in the online task list. So that everyone can freely pick APIs they are familiar with, the APIs are not grouped this time; you may choose one or more APIs to implement. To claim tasks, reply under this issue with the task IDs you are claiming (at least 1; claiming several at once is recommended, the more the better). Welcome to claim tasks and submit PRs!
Fork the PaddlePaddle/docs and PaddlePaddle/PaConvert GitHub repos.
Write the API mapping document: submit the PR to PaddlePaddle/docs. Add one new md file per API, named after the PyTorch API, under the corresponding subdirectory of docs/guides/model_convert/convert_from_pytorch/api_difference. If a mapping document for the API already exists, do not add a new md file; instead check and correct the existing document, and revise it if it conflicts with the AST rule developed later.
For the mapping format, see the "API Mapping Relations - Format Specification". PR title format: 映射文档 No. xxx/yyy/zzz; link this issue in the PR description. Please follow the format specification strictly, to avoid unnecessary review cost from formatting problems; PRs that do not meet the specification will not be merged.
The mapping document captures how a human would perform the conversion; once it is finished, you can develop the AST auto-conversion rule. Follow steps 3-5 of the "AST Conversion Rule Development Guide". PR title format: 转换规则 No. xxx/yyy/zzz; link this issue and the mapping-document PR above in the PR description. Please follow the development guide strictly, to avoid unnecessary review cost; PRs that do not meet the requirements will not be merged.
Each task No.xxx consists of one mapping-document PR plus one conversion-rule PR; the task is complete only when both are merged. Please submit several tasks in one go where possible, to improve review efficiency.
Review: comment in the PR and @zhwesky2010 for review; please address review comments promptly. The code will be merged once review passes.
Timeline: PRs must be merged by 2023/10/30.
Claiming: reply under this issue with the task IDs you are claiming (claiming several at once is recommended).
Before submitting a PR, install pre-commit as described on the official site and check the code format; otherwise CI may fail.
Please make sure a PR passes CI before requesting review, to avoid unnecessary review cost.
PR title formats: 映射文档 No. xxx/yyy/zzz and 转换规则 No. xxx/yyy/zzz. Both PR descriptions must link this issue, and the PR Docs field in the conversion-rule PR's description must link the mapping-document PR.
If the API cannot be converted because of a missing feature, a functional bug, or a behavioral diff in Paddle, reply directly under this issue. We will confirm and record such API problems periodically. For these cases the Matcher may be skipped for now, but a version-skipped test case must still be developed (see the development docs or existing unit tests for how to skip a test). Describe the problem in the following format:
torch.diff
Problem: paddle only supports n=1, while torch supports arbitrary n.
torch.nonzero
Problem: behavior differs when as_tuple is specified; paddle produces an extra, unreasonable dimension.
torch.dstack
Problem: missing feature.
The task list already carries a rough preliminary classification of the mapping relations; it is for reference only, and the developer's own analysis takes precedence.
PRs that resolve items from the historical good first issue list are also welcome. Feel free to contact 花花 (Huahua) to join the community and enjoy open source with us!
Applicant GitHub ID | PaConvert repo merged PRs | PaConvert repo reviewed PRs | PaConvert repo reported issues |
---|---|---|---|
@co63oc | 50 | 0 | 0 |
Previous contributions in this repo:
Implemented several operator conversion rules
PR No. | PR Title | PR Summary | Reviewer |
---|---|---|---|
#197 | 转换规则 No. 196/197/232/233/319 | - | zhwesky2010 |
#198 | 转换规则 No. 323/333 | - | zhwesky2010 |
#199 | 转换规则 No. 349/350 | - | zhwesky2010 |
import torch.nn as nn
from functools import partial

class test(nn.Module):
    def __init__(self, in_channels, out_channels, norm_func=nn.LayerNorm):
        super(test, self).__init__()
        self.norm = norm_func(in_channels)
        self.linear = nn.Linear(in_channels, out_channels)

    def forward(self, x):
        x = self.norm(x)
        x = self.linear(x)
        return x

if __name__ == "__main__":
    model = test(10, 10, partial(nn.LayerNorm, eps=0.2))
import paddle
from functools import partial

class test(paddle.nn.Layer):
    def __init__(self, in_channels, out_channels, norm_func=paddle.nn.LayerNorm):
        super(test, self).__init__()
        self.norm = norm_func(in_channels)
        self.linear = paddle.nn.Linear(in_features=in_channels, out_features=out_channels)

    def forward(self, x):
        x = self.norm(x)
        x = self.linear(x)
        return x

if __name__ == '__main__':
    model = test(10, 10, partial(paddle.nn.LayerNorm, eps=0.2))
Traceback (most recent call last):
File "/home/greatx/repos/PaConvert/paddle_project/test.py", line 21, in <module>
model = test(10, 10, partial(paddle.nn.LayerNorm, eps=0.2))
File "/home/greatx/repos/PaConvert/paddle_project/test.py", line 10, in __init__
self.norm = norm_func(in_channels)
TypeError: LayerNorm.__init__() got an unexpected keyword argument 'eps'
`eps` should be converted to `epsilon`.
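A hand-fixed version of the failing call (a sketch): Paddle's LayerNorm names this argument epsilon rather than torch's eps, so the partial must be rewritten when the converter misses it.

from functools import partial
import paddle

# Manual workaround (a sketch): rename torch's `eps` to paddle's `epsilon`.
model = test(10, 10, partial(paddle.nn.LayerNorm, epsilon=0.2))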
When I run the test case with the command pytest tests/test_cummin.py, it fails, and the code in test_project/paddle_temp.py is not changed to Paddle code. How can I resolve this problem?
The error message is as follows:
================================================= test session starts =================================================
platform win32 -- Python 3.9.18, pytest-7.4.2, pluggy-1.3.0 -- D:\anaconda2\envs\hackthon\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\lfy\Desktop\PaConvert\tests
configfile: pytest.ini
plugins: anyio-4.0.0, cov-4.1.0
collected 1 item
tests\test_cummin.py::test_case_1 FAILED
====================================================== FAILURES =======================================================
_____________________________________________________ test_case_1 _____________________________________________________
    def test_case_1():
        pytorch_code = textwrap.dedent(
            """
            import torch
            x = torch.tensor([[1.0, 1.0, 1.0],
                              [2.0, 2.0, 2.0],
                              [3.0, 3.0, 3.0]])
            result = torch.cummin(x, 0)
            """
        )
>       obj.run(pytorch_code, ["result"])

tests\test_cummin.py:32:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests\apibase.py:89: in run
    self.compare(
tests\apibase.py:165: in compare
    self.compare(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <apibase.APIBase object at 0x0000024B5A25EF70>, name = 'torch.cummin'
pytorch_result = tensor([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]])
paddle_result = tensor([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]]), check_value = True
check_dtype = True, check_stop_gradient = True, rtol = 1e-06, atol = 0.0
    def compare(
        self,
        name,
        pytorch_result,
        paddle_result,
        check_value=True,
        check_dtype=True,
        check_stop_gradient=True,
        rtol=1.0e-6,
        atol=0.0,
    ):
        """
        compare tensors' data, shape, requires_grad, dtype
        args:
            name: pytorch api name
            pytorch_result: pytorch Tensor
            paddle_result: paddle Tensor
            check_value: If false, the value will not be checked
            check_dtype: If false, the dtype will not be checked
            check_stop_gradient: If false, the stop gradient will not be checked
        """
        if isinstance(pytorch_result, dict):
            assert isinstance(paddle_result, dict), "paddle result should be dict"
            assert len(pytorch_result) == len(
                paddle_result
            ), "paddle result have different length with pytorch"
            pytorch_result_k = [k for k in pytorch_result.keys()]
            pytorch_result_v = [v for v in pytorch_result.values()]
            paddle_result_k = [k for k in paddle_result.keys()]
            paddle_result_v = [v for v in paddle_result.values()]
            self.compare(
                self.pytorch_api,
                pytorch_result_k,
                paddle_result_k,
                check_value,
                check_dtype,
                check_stop_gradient,
                rtol,
                atol,
            )
            self.compare(
                self.pytorch_api,
                pytorch_result_v,
                paddle_result_v,
                check_value,
                check_dtype,
                check_stop_gradient,
                rtol,
                atol,
            )
            return
        if isinstance(pytorch_result, (tuple, list)):
            assert isinstance(
                paddle_result, (tuple, list)
            ), "paddle result should be list/tuple"
            assert len(pytorch_result) == len(
                paddle_result
            ), "paddle result have different length with pytorch"
            for i in range(len(pytorch_result)):
                self.compare(
                    self.pytorch_api,
                    pytorch_result[i],
                    paddle_result[i],
                    check_value,
                    check_dtype,
                    check_stop_gradient,
                    rtol,
                    atol,
                )
            return
        if isinstance(pytorch_result, (bool, np.number, int, str, type(None))):
            assert type(paddle_result) == type(
                pytorch_result
            ), "paddle result's type [{}] should be the same with pytorch's type [{}]".format(
                type(paddle_result), type(pytorch_result)
            )
            if check_value:
                assert (
                    pytorch_result == paddle_result
                ), "API ({}): pytorch result is {}, but paddle result is {}".format(
                    name, pytorch_result, paddle_result
                )
            return
        if pytorch_result.requires_grad:
            pytorch_numpy, paddle_numpy = (
                pytorch_result.detach().numpy(),
                paddle_result.numpy(False),
            )
        elif pytorch_result.is_conj():
            pytorch_numpy, paddle_numpy = (
                pytorch_result.resolve_conj().numpy(),
                paddle_result.numpy(False),
            )
        else:
            (
                pytorch_numpy,
                paddle_numpy,
>           ) = pytorch_result.cpu().numpy(), paddle_result.numpy(False)
E           TypeError: numpy() takes 0 positional arguments but 1 was given
tests\apibase.py:205: TypeError
-------------------------------------------------- Captured log call --------------------------------------------------
INFO Converter_0:utils.py:91 ===========================================
INFO Converter_0:utils.py:91 PyTorch to Paddle Convert Start ------>:
INFO Converter_0:utils.py:91 ===========================================
INFO Converter_0:utils.py:91 Start convert file: C:\Users\lfy\Desktop\PaConvert\test_project\pytorch_temp.py --> C:\Users\lfy\Desktop\PaConvert\test_project\paddle_temp.py
INFO Converter_0:utils.py:91 Finish convert C:\Users\lfy\Desktop\PaConvert\test_project\pytorch_temp.py --> C:\Users\lfy\Desktop\PaConvert\test_project\paddle_temp.py
INFO Converter_0:utils.py:91
===========================================
INFO Converter_0:utils.py:91 Convert Summary:
INFO Converter_0:utils.py:91 ===========================================
INFO Converter_0:utils.py:91 There are 0 Pytorch APIs in this Project:
INFO Converter_0:utils.py:91 0 Pytorch APIs have been converted to Paddle successfully!
INFO Converter_0:utils.py:91 0 Pytorch APIs are not supported to convert to Paddle currently!
INFO Converter_0:utils.py:91 Convert Rate is: 0.000%
INFO Converter_0:utils.py:91
Thank you to use Paddle Code Convert Tool. You can make any suggestions to us.
=============================================== short test summary info ===============================================
FAILED tests\test_cummin.py::test_case_1 - TypeError: numpy() takes 0 positional arguments but 1 was given
================================================== 1 failed in 3.25s ==================================================
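A likely cause of the TypeError (an assumption based on the traceback, not a confirmed diagnosis): the installed Paddle build's Tensor.numpy() no longer accepts the positional zero_copy argument that tests/apibase.py passes. A local patch of the failing line lets the comparison proceed:

# Local patch in tests/apibase.py (a sketch): drop the positional argument,
# since this Paddle build's Tensor.numpy() takes no arguments.
pytorch_numpy, paddle_numpy = pytorch_result.cpu().numpy(), paddle_result.numpy()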
import torch.backends.cudnn as cudnn
cudnn.benchmark = True
import paddle
False = True
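The converter replaces the cudnn.benchmark attribute with its default value, producing the invalid statement False = True. One possible manual substitute (an assumption, not an official PaConvert mapping) is Paddle's cuDNN search flag:

import paddle

# Hypothetical replacement for `cudnn.benchmark = True` (a sketch):
# enable cuDNN algorithm autotuning via Paddle's global flag.
paddle.set_flags({'FLAGS_cudnn_exhaustive_search': True})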
git clone https://github.com/facebookresearch/detectron2.git
cd PaConvert
python paconvert/main.py --in_dir ../detectron2/ --out_dir ../detectron2.paddle
# Copyright (c) Facebook, Inc. and its affiliates.
"""
This file contains primitives for multi-gpu communication.
This is useful when doing distributed training.
"""
import functools
import numpy as np
import torch
import torch.distributed as dist
_LOCAL_PROCESS_GROUP = None
_MISSING_LOCAL_PG_ERROR = (
"Local process group is not yet created! Please use detectron2's `launch()` "
"to start processes and initialize pytorch process group. If you need to start "
"processes in other ways, please call comm.create_local_process_group("
"num_workers_per_machine) after calling torch.distributed.init_process_group()."
)
import paddle
"""
This file contains primitives for multi-gpu communication.
This is useful when doing distributed training.
"""
import functools
import numpy as np
_LOCAL_PROCESS_GROUP = None
_MISSING_LOCAL_PG_ERROR = (
>>>>>> "Local process group is not yet created! Please use detectron2's `launch()` to start processes and initialize pytorch process group. If you need to start processes in other ways, please call comm.create_local_process_group(num_workers_per_machine) after calling torch.distributed.init_process_group()."
)
torch.distributed.is_available()
torch.distributed.is_initialized()
torch.distributed.get_world_size()
import torch.distributed as dist

def is_dist_avail_and_initialized():
    if not dist.is_available():
        return False
    if not dist.is_initialized():
        return False
    return True
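A hand-written Paddle counterpart (a sketch, assuming Paddle >= 2.4, where paddle.distributed.is_initialized() exists; torch.distributed.is_available() has no direct Paddle equivalent):

import paddle.distributed as dist

def is_dist_avail_and_initialized():
    # is_initialized() reports whether the default process group is set up;
    # the availability check is dropped for lack of a counterpart.
    return dist.is_initialized()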
import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import List, Tuple, Optional
import math

# Calculate asymmetric TensorFlow-like 'SAME' padding for a convolution
def get_same_padding(x: int, kernel_size: int, stride: int, dilation: int):
    if isinstance(x, torch.Tensor):
        return torch.clamp(((x / stride).ceil() - 1) * stride + (kernel_size - 1) * dilation + 1 - x, min=0)
    else:
        return max((math.ceil(x / stride) - 1) * stride + (kernel_size - 1) * dilation + 1 - x, 0)

# Dynamically pad input x with 'SAME' padding for conv with specified args
def pad_same(
    x,
    kernel_size: List[int],
    stride: List[int],
    dilation: List[int] = (1, 1),
    value: float = 0,
):
    ih, iw = x.size()[-2:]
    pad_h = get_same_padding(ih, kernel_size[0], stride[0], dilation[0])
    pad_w = get_same_padding(iw, kernel_size[1], stride[1], dilation[1])
    x = F.pad(x, (pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2), value=value)
    return x

def avg_pool2d_same(x, kernel_size: List[int], stride: List[int], padding: List[int] = (0, 0),
                    ceil_mode: bool = False, count_include_pad: bool = True):
    # FIXME how to deal with count_include_pad vs not for external padding?
    x = pad_same(x, kernel_size, stride)
    return F.avg_pool2d(x, kernel_size, stride, (0, 0), ceil_mode, count_include_pad)
import sys
sys.path.append('/home/greatx/repos/PaConvert/paddle_project/utils')
import paddle_aux
import paddle
from typing import List, Tuple, Optional
import math

def get_same_padding(x: int, kernel_size: int, stride: int, dilation: int):
    if isinstance(x, paddle.Tensor):
        return paddle.clip(x=((x / stride).ceil() - 1) * stride + (
            kernel_size - 1) * dilation + 1 - x, min=0)
    else:
        return max((math.ceil(x / stride) - 1) * stride + (kernel_size - 1) *
            dilation + 1 - x, 0)

def pad_same(x, kernel_size: List[int], stride: List[int], dilation: List[
        int]=(1, 1), value: float=0):
    ih, iw = x.shape[-2:]
    pad_h = get_same_padding(ih, kernel_size[0], stride[0], dilation[0])
    pad_w = get_same_padding(iw, kernel_size[1], stride[1], dilation[1])
    x = paddle_aux._FUNCTIONAL_PAD(pad=(pad_w // 2, pad_w - pad_w // 2,
        pad_h // 2, pad_h - pad_h // 2), value=value, x=x)
    return x

def avg_pool2d_same(x, kernel_size: List[int], stride: List[int], padding:
        List[int]=(0, 0), ceil_mode: bool=False, count_include_pad: bool=True):
    x = pad_same(x, kernel_size, stride)
    return paddle.nn.functional.avg_pool2d(kernel_size=kernel_size, stride=
        stride, padding=(0, 0), ceil_mode=ceil_mode, x=x, exclusive=
        notcount_include_pad)
The conversion count_include_pad -> not count_include_pad is emitted without the space, yielding the undefined name notcount_include_pad.
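The intended line (a sketch of the hand-fixed output): Paddle's avg_pool2d exposes exclusive, the negation of torch's count_include_pad.

return paddle.nn.functional.avg_pool2d(x=x, kernel_size=kernel_size,
    stride=stride, padding=(0, 0), ceil_mode=ceil_mode,
    exclusive=not count_include_pad)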
Code:
import numpy as np

def extract_vertices(lines):
    '''extract vertices info from txt lines
    Input:
        lines   : list of string info
    Output:
        vertices: vertices of text regions <numpy.ndarray, (n,8)>
        labels  : 1->valid, 0->ignore, <numpy.ndarray, (n,)>
    '''
    labels = []
    vertices = []
    for line in lines:
        vertices.append(list(map(int, line.rstrip('\n').lstrip('\ufeff').split(',')[:8])))
        label = 0 if '###' in line else 1
        labels.append(label)
    return np.array(vertices), np.array(labels)
Output:
python paconvert/main.py --in_dir ~/repos/bug_test/ --out_dir ~/repos/bug_test_
===========================================
PyTorch to Paddle Convert Start ------>:
===========================================
Start convert file: /home/greatx/repos/bug_test/test.py --> /home/greatx/repos/bug_test_/test.py
Traceback (most recent call last):
File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/sre_parse.py", line 1051, in parse_template
this = chr(ESCAPES[this][1])
KeyError: '\\u'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/greatx/repos/PaConvert/paconvert/main.py", line 145, in <module>
main()
File "/home/greatx/repos/PaConvert/paconvert/main.py", line 131, in main
converter.run(args.in_dir, args.out_dir, args.exclude_dirs)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/converter.py", line 88, in run
self.transfer_dir(in_dir, out_dir, exclude_dir_list)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/converter.py", line 186, in transfer_dir
self.transfer_dir(old_path, new_path, exclude_dir_list)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/converter.py", line 164, in transfer_dir
self.transfer_file(old_path, new_path)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/converter.py", line 202, in transfer_file
self.transfer_node(root, old_path)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/converter.py", line 242, in transfer_node
trans.transform()
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 81, in transform
self.visit(self.root)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 86, in visit
node = super(BaseTransformer, self).visit(node)
File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 418, in visit
return visitor(node)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 295, in visit_Module
super(BaseTransformer, self).generic_visit(node)
File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 494, in generic_visit
value = self.visit(value)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 86, in visit
node = super(BaseTransformer, self).visit(node)
File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 418, in visit
return visitor(node)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 247, in visit_FunctionDef
super(BaseTransformer, self).generic_visit(node)
File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 494, in generic_visit
value = self.visit(value)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 86, in visit
node = super(BaseTransformer, self).visit(node)
File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 418, in visit
return visitor(node)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 277, in visit_For
super(BaseTransformer, self).generic_visit(node)
File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 494, in generic_visit
value = self.visit(value)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 86, in visit
node = super(BaseTransformer, self).visit(node)
File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 418, in visit
return visitor(node)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/transformer/basic_transformer.py", line 666, in visit_Expr
new_node = self.visit(old_value)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 86, in visit
node = super(BaseTransformer, self).visit(node)
File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 418, in visit
return visitor(node)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/transformer/basic_transformer.py", line 363, in visit_Call
super(BasicTransformer, self).generic_visit(node)
File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 494, in generic_visit
value = self.visit(value)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 86, in visit
node = super(BaseTransformer, self).visit(node)
File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 418, in visit
return visitor(node)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/transformer/basic_transformer.py", line 363, in visit_Call
super(BasicTransformer, self).generic_visit(node)
File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 494, in generic_visit
value = self.visit(value)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 86, in visit
node = super(BaseTransformer, self).visit(node)
File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 418, in visit
return visitor(node)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/transformer/basic_transformer.py", line 363, in visit_Call
super(BasicTransformer, self).generic_visit(node)
File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 494, in generic_visit
value = self.visit(value)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 86, in visit
node = super(BaseTransformer, self).visit(node)
File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 418, in visit
return visitor(node)
File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 503, in generic_visit
new_node = self.visit(old_value)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 86, in visit
node = super(BaseTransformer, self).visit(node)
File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 418, in visit
return visitor(node)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/transformer/basic_transformer.py", line 539, in visit_Call
return self.trans_class_method(node, torch_class_api)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/transformer/basic_transformer.py", line 556, in trans_class_method
node_list = matcher.get_paddle_class_nodes(
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 508, in get_paddle_class_nodes
self.parse_func(func)
File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 390, in parse_func
new_paddle_api = re.sub(
File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/re.py", line 209, in sub
return _compile(pattern, flags).sub(repl, string, count)
File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/re.py", line 326, in _subx
template = _compile_repl(template, pattern)
File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/re.py", line 317, in _compile_repl
return sre_parse.parse_template(repl, pattern)
File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/sre_parse.py", line 1054, in parse_template
raise s.error('bad escape %s' % this, len(this))
re.error: bad escape \u at position 26
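A minimal reproduction outside PaConvert (a sketch): when the replacement string passed to re.sub contains a literal \u sequence, such as the escaped form of '\ufeff' above, re's template parser rejects it as a bad escape. Supplying a function as the replacement bypasses template parsing:

import re

repl = "lstrip('\\ufeff')"  # contains a literal \u escape sequence
# re.sub('x', repl, 'x')    # raises re.error: bad escape \u
print(re.sub('x', lambda m: repl, 'x'))  # a callable replacement is used verbatim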
torch_scatter is an extension library for torch that is currently used in scientific computing, neural radiance fields, and other areas. For example, task No.173 Point-NeRF of the 4th Hackathon involves the scatter_min operator from torch_scatter, which cannot be reproduced with paddle. A simple scatter_add can be composed from existing ops, but that also greatly increases the cost of reproduction. Do you plan to support conversion of torch_scatter to paddle?
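In the meantime, a partial workaround (an assumption, not an official mapping): when the indices are sorted segment ids, paddle.geometric.segment_min covers the common scatter_min pattern.

import paddle

# Rough stand-in for torch_scatter.scatter_min (a sketch; assumes sorted
# segment ids, which is narrower than scatter_min's general contract).
data = paddle.to_tensor([[3.0], [1.0], [2.0], [0.5]])
ids = paddle.to_tensor([0, 0, 1, 1], dtype='int64')
print(paddle.geometric.segment_min(data, ids))  # [[1.0], [0.5]]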