

justinchuby commented on September 26, 2024:

Failing with torch.export.export:

In [18]: torch.export.export(model,(text, batch_lengths))
---------------------------------------------------------------------------
GuardOnDataDependentSymNode               Traceback (most recent call last)
File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/utils.py:1764, in run_node(tracer, node, args, kwargs, nnmodule)
   1763 if op == "call_function":
-> 1764     return node.target(*args, **kwargs)
   1765 elif op == "call_method":

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_decomp/decompositions.py:1410, in tensor_split_tensor_indices_or_sections_py_impl(self, tensor_indices_or_sections, dim)
   1409 indices = [i.item() for i in tensor_indices_or_sections]
-> 1410 return self.tensor_split(indices, dim)

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/utils/_stats.py:20, in count.<locals>.wrapper(*args, **kwargs)
     19 simple_call_counter[fn.__qualname__] = simple_call_counter[fn.__qualname__] + 1
---> 20 return fn(*args, **kwargs)

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py:896, in FakeTensorMode.__torch_dispatch__(self, func, types, args, kwargs)
    895 try:
--> 896     return self.dispatch(func, types, args, kwargs)
    897 except TypeError:

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py:1241, in FakeTensorMode.dispatch(self, func, types, args, kwargs)
   1240 if self.cache_enabled:
-> 1241     return self._cached_dispatch_impl(func, types, args, kwargs)
   1242 else:

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py:974, in FakeTensorMode._cached_dispatch_impl(self, func, types, args, kwargs)
    973 if output is unassigned:
--> 974     output = self._dispatch_impl(func, types, args, kwargs)
    976 return output

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py:1393, in FakeTensorMode._dispatch_impl(self, func, types, args, kwargs)
   1392     with self:
-> 1393         return decomposition_table[func](*args, **kwargs)
   1395 with self:
   1396     # Decomposes CompositeImplicitAutograd ops

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_decomp/decompositions.py:746, in slice_forward(self, dim, start, end, step)
    744     start_val += sizes[dim]
--> 746 if end_val < 0:
    747     end_val += sizes[dim]

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/__init__.py:374, in SymBool.__bool__(self)
    373 def __bool__(self):
--> 374     return self.node.bool_()

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py:432, in SymNode.bool_(self)
    431 def bool_(self):
--> 432     return self.guard_bool("", 0)

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py:374, in SymNode.guard_bool(self, file, line)
    371 def guard_bool(self, file, line):
    372     # TODO: use the file/line for some useful diagnostic on why a
    373     # guard occurred
--> 374     r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
    375     try:

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/fx/experimental/recording.py:231, in record_shapeenv_event.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
    225 if isinstance(args[0], ShapeEnv) and args[0].is_recording:  # type: ignore[has-type]
    226     # If ShapeEnv is already recording an event, call the wrapped
    227     # function directly.
    228     #
    229     # NB: here, we skip the check of whether all ShapeEnv instances
    230     # are equal, in favor of a faster dispatch.
--> 231     return fn(*args, **kwargs)
    233 # Retrieve an instance of ShapeEnv.
    234 # Assumption: the collection of args and kwargs may not reference
    235 # different ShapeEnv instances.

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py:4138, in ShapeEnv.evaluate_expr(self, orig_expr, hint, fx_node, expect_rational, size_oblivious, forcing_spec)
   4132         size_oblivious_result = self._maybe_evaluate_static(
   4133             expr,
   4134             expect_rational=expect_rational,
   4135             size_oblivious=True
   4136         )
-> 4138     raise self._make_data_dependent_error(
   4139         expr.xreplace(self.var_to_val),
   4140         expr,
   4141         size_oblivious_result=size_oblivious_result
   4142     )
   4143 expr = new_expr

GuardOnDataDependentSymNode: Could not guard on data-dependent expression u0 < 0 (unhinted: u0 < 0).  (Size-like symbols: none)

Potential framework code culprit (scroll up for full backtrace):
  File "/home/justinchu/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_decomp/decompositions.py", line 746, in slice_forward
    if end_val < 0:

For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing

User Stack (most recent call last):
  (snipped, see stack below for prefix)
  File "<ipython-input-2-3004064f32b4>", line 25, in forward
    return batch_from_ragged(text, batch_lengths)
  File "<ipython-input-3-05634c9d43b6>", line 6, in batch_from_ragged
    torch.tensor_split(text, batch_lengths),


The above exception was the direct cause of the following exception:

RuntimeError                              Traceback (most recent call last)
File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/utils.py:1656, in get_fake_value(node, tx, allow_non_graph_fake)
   1655     with tx.fake_mode, enable_python_dispatcher():
-> 1656         ret_val = wrap_fake_exception(
   1657             lambda: run_node(tx.output, node, args, kwargs, nnmodule)
   1658         )
   1659 except Unsupported:

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/utils.py:1190, in wrap_fake_exception(fn)
   1189 try:
-> 1190     return fn()
   1191 except UnsupportedFakeTensorException as e:

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/utils.py:1657, in get_fake_value.<locals>.<lambda>()
   1655     with tx.fake_mode, enable_python_dispatcher():
   1656         ret_val = wrap_fake_exception(
-> 1657             lambda: run_node(tx.output, node, args, kwargs, nnmodule)
   1658         )
   1659 except Unsupported:

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/utils.py:1782, in run_node(tracer, node, args, kwargs, nnmodule)
   1781     except Exception as e:
-> 1782         raise RuntimeError(make_error_message(e)).with_traceback(
   1783             e.__traceback__
   1784         ) from e
   1786 raise AssertionError(op)

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/utils.py:1764, in run_node(tracer, node, args, kwargs, nnmodule)
   1763 if op == "call_function":
-> 1764     return node.target(*args, **kwargs)
   1765 elif op == "call_method":

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_decomp/decompositions.py:1410, in tensor_split_tensor_indices_or_sections_py_impl(self, tensor_indices_or_sections, dim)
   1409 indices = [i.item() for i in tensor_indices_or_sections]
-> 1410 return self.tensor_split(indices, dim)

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/utils/_stats.py:20, in count.<locals>.wrapper(*args, **kwargs)
     19 simple_call_counter[fn.__qualname__] = simple_call_counter[fn.__qualname__] + 1
---> 20 return fn(*args, **kwargs)

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py:896, in FakeTensorMode.__torch_dispatch__(self, func, types, args, kwargs)
    895 try:
--> 896     return self.dispatch(func, types, args, kwargs)
    897 except TypeError:

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py:1241, in FakeTensorMode.dispatch(self, func, types, args, kwargs)
   1240 if self.cache_enabled:
-> 1241     return self._cached_dispatch_impl(func, types, args, kwargs)
   1242 else:

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py:974, in FakeTensorMode._cached_dispatch_impl(self, func, types, args, kwargs)
    973 if output is unassigned:
--> 974     output = self._dispatch_impl(func, types, args, kwargs)
    976 return output

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py:1393, in FakeTensorMode._dispatch_impl(self, func, types, args, kwargs)
   1392     with self:
-> 1393         return decomposition_table[func](*args, **kwargs)
   1395 with self:
   1396     # Decomposes CompositeImplicitAutograd ops

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_decomp/decompositions.py:746, in slice_forward(self, dim, start, end, step)
    744     start_val += sizes[dim]
--> 746 if end_val < 0:
    747     end_val += sizes[dim]

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/__init__.py:374, in SymBool.__bool__(self)
    373 def __bool__(self):
--> 374     return self.node.bool_()

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py:432, in SymNode.bool_(self)
    431 def bool_(self):
--> 432     return self.guard_bool("", 0)

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/fx/experimental/sym_node.py:374, in SymNode.guard_bool(self, file, line)
    371 def guard_bool(self, file, line):
    372     # TODO: use the file/line for some useful diagnostic on why a
    373     # guard occurred
--> 374     r = self.shape_env.evaluate_expr(self.expr, self.hint, fx_node=self.fx_node)
    375     try:

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/fx/experimental/recording.py:231, in record_shapeenv_event.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
    225 if isinstance(args[0], ShapeEnv) and args[0].is_recording:  # type: ignore[has-type]
    226     # If ShapeEnv is already recording an event, call the wrapped
    227     # function directly.
    228     #
    229     # NB: here, we skip the check of whether all ShapeEnv instances
    230     # are equal, in favor of a faster dispatch.
--> 231     return fn(*args, **kwargs)
    233 # Retrieve an instance of ShapeEnv.
    234 # Assumption: the collection of args and kwargs may not reference
    235 # different ShapeEnv instances.

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py:4138, in ShapeEnv.evaluate_expr(self, orig_expr, hint, fx_node, expect_rational, size_oblivious, forcing_spec)
   4132         size_oblivious_result = self._maybe_evaluate_static(
   4133             expr,
   4134             expect_rational=expect_rational,
   4135             size_oblivious=True
   4136         )
-> 4138     raise self._make_data_dependent_error(
   4139         expr.xreplace(self.var_to_val),
   4140         expr,
   4141         size_oblivious_result=size_oblivious_result
   4142     )
   4143 expr = new_expr

RuntimeError: Failed running call_function <built-in method tensor_split of type object at 0x7852861957c0>(*(FakeTensor(..., size=(1024,), dtype=torch.int64), FakeTensor(..., size=(6,), dtype=torch.int64)), **{}):
Could not guard on data-dependent expression u0 < 0 (unhinted: u0 < 0).  (Size-like symbols: none)

Potential framework code culprit (scroll up for full backtrace):
  File "/home/justinchu/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_decomp/decompositions.py", line 746, in slice_forward
    if end_val < 0:

For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing

User Stack (most recent call last):
  (snipped, see stack below for prefix)
  File "<ipython-input-2-3004064f32b4>", line 25, in forward
    return batch_from_ragged(text, batch_lengths)
  File "<ipython-input-3-05634c9d43b6>", line 6, in batch_from_ragged
    torch.tensor_split(text, batch_lengths),


During handling of the above exception, another exception occurred:

UserError                                 Traceback (most recent call last)
Cell In[18], line 1
----> 1 torch.export.export(model,(text, batch_lengths))

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/export/__init__.py:174, in export(mod, args, kwargs, dynamic_shapes, strict, preserve_module_call_signature)
    169 if not isinstance(mod, torch.nn.Module):
    170     raise ValueError(
    171         f"Expected `mod` to be an instance of `torch.nn.Module`, got {type(mod)}."
    172     )
--> 174 return _export(
    175     mod,
    176     args,
    177     kwargs,
    178     dynamic_shapes,
    179     strict=strict,
    180     preserve_module_call_signature=preserve_module_call_signature,
    181 )

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/export/_trace.py:635, in _log_export_wrapper.<locals>.wrapper(*args, **kwargs)
    628     error_type = t.__module__ + "." + t.__qualname__
    629     log_export_usage(
    630         event="export.error",
    631         type=error_type,
    632         message=str(e),
    633         flags=_EXPORT_FLAGS,
    634     )
--> 635     raise e
    636 finally:
    637     _EXPORT_FLAGS = None

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/export/_trace.py:618, in _log_export_wrapper.<locals>.wrapper(*args, **kwargs)
    616 try:
    617     start = time.time()
--> 618     ep = fn(*args, **kwargs)
    619     end = time.time()
    620     log_export_usage(
    621         event="export.time",
    622         metrics=end - start,
    623         flags=_EXPORT_FLAGS,
    624         **get_ep_stats(ep),
    625     )

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/export/exported_program.py:83, in _disable_prexisiting_fake_mode.<locals>.wrapper(*args, **kwargs)
     80 @functools.wraps(fn)
     81 def wrapper(*args, **kwargs):
     82     with maybe_disable_fake_tensor_mode():
---> 83         return fn(*args, **kwargs)

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/export/_trace.py:860, in _export(mod, args, kwargs, dynamic_shapes, strict, preserve_module_call_signature, pre_dispatch)
    838     _rewrite_non_persistent_buffers(mod, ep_non_strict.sig, ep_non_strict.constants)
    839     return ExportedProgram(
    840         root=gm,
    841         graph=gm.graph,
   (...)
    857         constants=ep_non_strict.constants,
    858     )
--> 860 gm_torch_level = _export_to_torch_ir(
    861     mod,
    862     args,
    863     kwargs,
    864     constraints,
    865     preserve_module_call_signature=preserve_module_call_signature,
    866     restore_fqn=False,  # don't need to restore because we will do it later
    867     _log_export_usage=False,
    868 )
    870 # We detect the fake_mode by looking at gm_torch_level's placeholders, this is the fake_mode created in dynamo.
    871 (
    872     fake_args,
    873     fake_kwargs,
    874     fake_params_buffers,
    875     dynamo_fake_mode,
    876 ) = _convert_input_to_fake(gm_torch_level, args, kwargs)

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/export/_trace.py:347, in _export_to_torch_ir(f, args, kwargs, constraints, preserve_module_call_signature, disable_constraint_solver, restore_fqn, _log_export_usage)
    343     module_call_specs: Dict[str, Dict[str, pytree.TreeSpec]] = {}
    344     with _wrap_submodules(
    345         f, preserve_module_call_signature, module_call_specs
    346     ), _ignore_backend_decomps():
--> 347         gm_torch_level, _ = torch._dynamo.export(
    348             f,
    349             constraints=constraints,  # type: ignore[arg-type]
    350             assume_static_by_default=True,
    351             tracing_mode="symbolic",
    352             disable_constraint_solver=disable_constraint_solver,
    353             _log_export_usage=_log_export_usage,
    354         )(
    355             *args,
    356             **kwargs,
    357         )
    358 except (ConstraintViolationError, ValueRangeError) as e:
    359     raise UserError(UserErrorType.CONSTRAINT_VIOLATION, str(e))  # noqa: TRY200

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py:1311, in export.<locals>.inner(*args, **kwargs)
   1309 # TODO(voz): We may have instances of `f` that mutate inputs, we should track sideeffects and reject.
   1310 try:
-> 1311     result_traced = opt_f(*args, **kwargs)
   1312 except ConstraintViolationError as e:
   1313     constraint_violation_error = e

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/nn/modules/module.py:1532, in Module._wrapped_call_impl(self, *args, **kwargs)
   1530     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1531 else:
-> 1532     return self._call_impl(*args, **kwargs)

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/nn/modules/module.py:1541, in Module._call_impl(self, *args, **kwargs)
   1536 # If we don't have any hooks, we want to skip the rest of the logic in
   1537 # this function, and just call forward.
   1538 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1539         or _global_backward_pre_hooks or _global_backward_hooks
   1540         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1541     return forward_call(*args, **kwargs)
   1543 try:
   1544     result = None

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py:451, in _TorchDynamoContext.__call__.<locals>._fn(*args, **kwargs)
    449 prior = set_eval_frame(callback)
    450 try:
--> 451     return fn(*args, **kwargs)
    452 finally:
    453     set_eval_frame(prior)

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/nn/modules/module.py:1532, in Module._wrapped_call_impl(self, *args, **kwargs)
   1530     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1531 else:
-> 1532     return self._call_impl(*args, **kwargs)

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/nn/modules/module.py:1541, in Module._call_impl(self, *args, **kwargs)
   1536 # If we don't have any hooks, we want to skip the rest of the logic in
   1537 # this function, and just call forward.
   1538 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1539         or _global_backward_pre_hooks or _global_backward_hooks
   1540         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1541     return forward_call(*args, **kwargs)
   1543 try:
   1544     result = None

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:921, in catch_errors_wrapper.<locals>.catch_errors(frame, cache_entry, frame_state)
    917             return hijacked_callback(frame, cache_entry, hooks, frame_state)
    919 with compile_lock, _disable_current_modes():
    920     # skip=1: skip this frame
--> 921     return callback(frame, cache_entry, hooks, frame_state, skip=1)

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:400, in convert_frame_assert.<locals>._convert_frame_assert(frame, cache_entry, hooks, frame_state, skip)
    386 compile_id = CompileId(frame_id, frame_compile_id)
    388 signpost_event(
    389     "dynamo",
    390     "_convert_frame_assert._compile",
   (...)
    397     },
    398 )
--> 400 return _compile(
    401     frame.f_code,
    402     frame.f_globals,
    403     frame.f_locals,
    404     frame.f_builtins,
    405     compiler_fn,
    406     one_graph,
    407     export,
    408     export_constraints,
    409     hooks,
    410     cache_size,
    411     frame,
    412     frame_state=frame_state,
    413     compile_id=compile_id,
    414     skip=skip + 1,
    415 )

File ~/anaconda3/envs/onnx/lib/python3.11/contextlib.py:81, in ContextDecorator.__call__.<locals>.inner(*args, **kwds)
     78 @wraps(func)
     79 def inner(*args, **kwds):
     80     with self._recreate_cm():
---> 81         return func(*args, **kwds)

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:676, in _compile(code, globals, locals, builtins, compiler_fn, one_graph, export, export_constraints, hooks, cache_size, frame, frame_state, compile_id, skip)
    674 fail_user_frame_lineno: Optional[int] = None
    675 try:
--> 676     guarded_code = compile_inner(code, one_graph, hooks, transform)
    677     return guarded_code
    678 except (
    679     Unsupported,
    680     TorchRuntimeError,
   (...)
    687     BisectValidationException,
    688 ) as e:

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/utils.py:262, in dynamo_timed.<locals>.dynamo_timed_inner.<locals>.time_wrapper(*args, **kwargs)
    260 with torch.profiler.record_function(f"{key} (dynamo_timed)"):
    261     t0 = time.time()
--> 262     r = func(*args, **kwargs)
    263     time_spent = time.time() - t0
    264 compilation_time_metrics[key].append(time_spent)

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:535, in _compile.<locals>.compile_inner(code, one_graph, hooks, transform)
    533 CompileContext.get().attempt = attempt
    534 try:
--> 535     out_code = transform_code_object(code, transform)
    536     break
    537 except exc.RestartAnalysis as e:

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py:1036, in transform_code_object(code, transformations, safe)
   1033 instructions = cleaned_instructions(code, safe)
   1034 propagate_line_nums(instructions)
-> 1036 transformations(instructions, code_options)
   1037 return clean_and_assemble_instructions(instructions, keys, code_options)[1]

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:165, in preserve_global_state.<locals>._fn(*args, **kwargs)
    163 cleanup = setup_compile_debug()
    164 try:
--> 165     return fn(*args, **kwargs)
    166 finally:
    167     cleanup.close()

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:500, in _compile.<locals>.transform(instructions, code_options)
    498 try:
    499     with tracing(tracer.output.tracing_context), tracer.set_current_tx():
--> 500         tracer.run()
    501 except exc.UnspecializeRestartAnalysis:
    502     speculation_log.clear()

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:2149, in InstructionTranslator.run(self)
   2148 def run(self):
-> 2149     super().run()

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:810, in InstructionTranslatorBase.run(self)
    805 try:
    806     self.output.push_tx(self)
    807     while (
    808         self.instruction_pointer is not None
    809         and not self.output.should_exit
--> 810         and self.step()
    811     ):
    812         pass
    813 except BackendCompilerFailed:

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:773, in InstructionTranslatorBase.step(self)
    769         unimplemented(f"missing: {inst.opname}")
    770     TracingContext.set_current_loc(
    771         self.f_code.co_filename, self.lineno, self.f_code.co_name
    772     )
--> 773     getattr(self, inst.opname)(inst)
    775     return inst.opname != "RETURN_VALUE"
    776 except Unsupported:

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:489, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
    485 try:
    486     TracingContext.set_current_loc(
    487         self.f_code.co_filename, self.lineno, self.f_code.co_name
    488     )
--> 489     return inner_fn(self, inst)
    490 except Unsupported as excp:
    491     if self.generic_context_manager_depth > 0:
    492         # We don't support graph break under GenericContextWrappingVariable,
    493         # If there is, we roll back to the checkpoint and fall back.

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:1802, in InstructionTranslatorBase.CALL(self, inst)
   1800     args = args + contents[2:]
   1801     kwargs = {}
-> 1802 self.call_function(fn, args, kwargs)
   1803 self.kw_names = None

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:674, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
    672 if inner_fn and callable(inner_fn) and is_forbidden(inner_fn):
    673     raise AssertionError(f"Attempt to trace forbidden callable {inner_fn}")
--> 674 self.push(fn.call_function(self, args, kwargs))

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py:289, in UserFunctionVariable.call_function(self, tx, args, kwargs)
    284 if self.is_constant:
    285     return invoke_and_store_as_constant(
    286         tx, self.fn, self.get_name(), args, kwargs
    287     )
--> 289 return super().call_function(tx, args, kwargs)

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py:90, in BaseUserFunctionVariable.call_function(self, tx, args, kwargs)
     87 def call_function(
     88     self, tx, args: "List[VariableTracker]", kwargs: "Dict[str, VariableTracker]"
     89 ) -> "VariableTracker":
---> 90     return tx.inline_user_function_return(
     91         self, list(self.self_args()) + list(args), kwargs
     92     )

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:680, in InstructionTranslatorBase.inline_user_function_return(self, fn, args, kwargs)
    676 def inline_user_function_return(self, fn, args, kwargs):
    677     """
    678     A call to some user defined function by inlining it.
    679     """
--> 680     return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:2285, in InliningInstructionTranslator.inline_call(cls, parent, func, args, kwargs)
   2282 @classmethod
   2283 def inline_call(cls, parent, func, args, kwargs):
   2284     with patch.dict(counters, {"unimplemented": counters["inline_call"]}):
-> 2285         return cls.inline_call_(parent, func, args, kwargs)

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:2399, in InliningInstructionTranslator.inline_call_(parent, func, args, kwargs)
   2397 try:
   2398     with strict_ctx:
-> 2399         tracer.run()
   2400 except exc.SkipFrame as e:
   2401     msg = f"SKIPPED INLINING {code}: {e}"

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:810, in InstructionTranslatorBase.run(self)
    805 try:
    806     self.output.push_tx(self)
    807     while (
    808         self.instruction_pointer is not None
    809         and not self.output.should_exit
--> 810         and self.step()
    811     ):
    812         pass
    813 except BackendCompilerFailed:

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:773, in InstructionTranslatorBase.step(self)
    769         unimplemented(f"missing: {inst.opname}")
    770     TracingContext.set_current_loc(
    771         self.f_code.co_filename, self.lineno, self.f_code.co_name
    772     )
--> 773     getattr(self, inst.opname)(inst)
    775     return inst.opname != "RETURN_VALUE"
    776 except Unsupported:

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:489, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
    485 try:
    486     TracingContext.set_current_loc(
    487         self.f_code.co_filename, self.lineno, self.f_code.co_name
    488     )
--> 489     return inner_fn(self, inst)
    490 except Unsupported as excp:
    491     if self.generic_context_manager_depth > 0:
    492         # We don't support graph break under GenericContextWrappingVariable,
    493         # If there is, we roll back to the checkpoint and fall back.

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:1802, in InstructionTranslatorBase.CALL(self, inst)
   1800     args = args + contents[2:]
   1801     kwargs = {}
-> 1802 self.call_function(fn, args, kwargs)
   1803 self.kw_names = None

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:674, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
    672 if inner_fn and callable(inner_fn) and is_forbidden(inner_fn):
    673     raise AssertionError(f"Attempt to trace forbidden callable {inner_fn}")
--> 674 self.push(fn.call_function(self, args, kwargs))

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/variables/torch.py:679, in TorchInGraphFunctionVariable.call_function(self, tx, args, kwargs)
    672                 if not isinstance(data_arg, TensorVariable) and check_any_unspec(
    673                     data_arg
    674                 ):
    675                     # This is slower and less canonical, so only use it if we
    676                     # have to
    677                     fn_ = torch._refs.tensor
--> 679             tensor_variable = wrap_fx_proxy(
    680                 tx=tx,
    681                 proxy=tx.output.create_proxy(
    682                     "call_function",
    683                     fn_,
    684                     *proxy_args_kwargs(args, kwargs),
    685                 ),
    686             )
    688             if (
    689                 isinstance(tensor_variable, TensorVariable)
    690                 and "requires_grad" in kwargs
    691                 and kwargs["requires_grad"].as_python_constant()
    692             ):
    693                 unimplemented(
    694                     """factory functions that return tensors that require grad are not supported.
    695 Either create the tensor outside the compiled region, or do not set the tensor to require_grad"""
    696                 )

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py:1330, in wrap_fx_proxy(tx, proxy, example_value, subclass_type, **options)
   1322 kwargs = {
   1323     "tx": tx,
   1324     "proxy": proxy,
   (...)
   1327     **options,
   1328 }
   1329 if subclass_type is None:
-> 1330     return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
   1331 else:
   1332     result = wrap_fx_proxy_cls(target_cls=TensorWithTFOverrideVariable, **kwargs)

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py:1415, in wrap_fx_proxy_cls(target_cls, tx, proxy, example_value, subclass_type, **options)
   1411 with preserve_rng_state():
   1412     if example_value is None:
   1413         # only allow_non_graph_fake in this instance because we handle the non-fake
   1414         # cases properly below.
-> 1415         example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
   1417     # Handle recursive calls here
   1418     elif maybe_get_fake_mode(example_value) is tx.fake_mode:

File ~/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_dynamo/utils.py:1704, in get_fake_value(node, tx, allow_non_graph_fake)
   1696     unimplemented(
   1697         f"unsupported operator: {cause.func} ({import_suggestion}see "
   1698         "https://docs.google.com/document/d/1GgvOe7C8_NVOMLOCwDaYV1mXXyHMXY7ExoewHqooxrs/edit#heading=h.64r4npvq0w0"
   1699         " for how to fix)"
   1700     )
   1701 elif isinstance(
   1702     cause, torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode
   1703 ):
-> 1704     raise UserError(  # noqa: TRY200
   1705         UserErrorType.CONSTRAINT_VIOLATION,
   1706         "Tried to use data-dependent value in the subsequent computation. "
   1707         "This can happen when we encounter unbounded dynamic value that is unknown during tracing time.  "
   1708         "You will need to explicitly give hint to the compiler. Please take a look at "
   1709         f"constrain_as_value OR constrain_as_size APIs.  {cause}",
   1710         case_name="constrain_as_size_example",
   1711     )
   1712 elif isinstance(cause, ValueRangeError):
   1713     raise UserError(UserErrorType.CONSTRAINT_VIOLATION, e.args[0]) from e

UserError: Tried to use data-dependent value in the subsequent computation. This can happen when we encounter unbounded dynamic value that is unknown during tracing time.  You will need to explicitly give hint to the compiler. Please take a look at constrain_as_value OR constrain_as_size APIs.  Could not guard on data-dependent expression u0 < 0 (unhinted: u0 < 0).  (Size-like symbols: none)

Potential framework code culprit (scroll up for full backtrace):
  File "/home/justinchu/anaconda3/envs/onnx/lib/python3.11/site-packages/torch/_decomp/decompositions.py", line 746, in slice_forward
    if end_val < 0:

For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing

User Stack (most recent call last):
  (snipped, see stack below for prefix)
  File "<ipython-input-2-3004064f32b4>", line 25, in forward
    return batch_from_ragged(text, batch_lengths)
  File "<ipython-input-3-05634c9d43b6>", line 6, in batch_from_ragged
    torch.tensor_split(text, batch_lengths),

For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#constrain-as-size-example

from user code:
   File "<ipython-input-2-3004064f32b4>", line 25, in forward
    return batch_from_ragged(text, batch_lengths)
  File "<ipython-input-3-05634c9d43b6>", line 6, in batch_from_ragged
    torch.tensor_split(text, batch_lengths),

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
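
For context, here is a minimal repro sketch consistent with the traceback above. Only the tensor_split call and the input shapes (text of size (1024,) and batch_lengths of size (6,), both int64) are confirmed by the user stack and the fake tensors in the error; the real body of batch_from_ragged is snipped in the IPython cells, so the wrapper below is a hypothetical stand-in:

```python
import torch

def batch_from_ragged(text, batch_lengths):
    # Only this call is confirmed by the user stack; the original function
    # wraps the result further (the IPython cells above are snipped).
    return torch.tensor_split(text, batch_lengths)

class Model(torch.nn.Module):
    def forward(self, text, batch_lengths):
        return batch_from_ragged(text, batch_lengths)

text = torch.randint(0, 100, (1024,), dtype=torch.int64)
batch_lengths = torch.tensor([100, 200, 300, 400, 500, 600])

# Raises the GuardOnDataDependentSymNode -> UserError chain shown above:
# the tensor_split decomposition calls .item() on each index, producing
# unbacked symints (u0, ...), and slice_forward then guards on `end_val < 0`.
torch.export.export(Model(), (text, batch_lengths))
```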
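The UserError points at the constrain-as-value / constrain-as-size hints. A hedged workaround sketch, not taken from the issue (torch._check_is_size and torch._check are the spellings of those hints in recent torch): extract the split indices yourself and mark each one as a size-like, bounded value before calling tensor_split, so the `u0 < 0` guard in slice_forward can be decided at trace time:

```python
import torch

def batch_from_ragged(text, batch_lengths):
    indices = []
    for i in batch_lengths:
        idx = i.item()
        # Mark idx as size-like (>= 0) so size-oblivious reasoning can
        # resolve the `u0 < 0` guard without a runtime hint.
        torch._check_is_size(idx)
        # Optional upper bound so downstream slices stay in range.
        torch._check(idx <= text.shape[0])
        indices.append(idx)
    return torch.tensor_split(text, indices)
```

Whether this fully unblocks the export depends on what the real batch_from_ragged does with the splits afterwards.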
