
Comments (6)

Ark-kun commented on June 23, 2024

You're right, the sync parameter is not used at all in this function.

The sync parameter is handled by the following decorator:

@base.optional_sync(return_input_arg="empty_batch_prediction_job")

from python-aiplatform.
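For intuition, the effect of such a decorator can be sketched with the standard library alone. This is a minimal sketch only: the real base.optional_sync is a decorator factory with options such as return_input_arg, and it also coordinates dependencies between resources; the names below are otherwise hypothetical.

```python
import functools
import threading

def optional_sync(func):
    """Sketch of a decorator that turns `sync` into a no-wait switch.

    With sync=True (the default) the wrapped call blocks as usual.
    With sync=False it runs in a background thread and the caller
    gets control back immediately.
    """
    @functools.wraps(func)
    def wrapper(*args, sync=True, **kwargs):
        if sync:
            return func(*args, **kwargs)
        thread = threading.Thread(
            target=func, args=args, kwargs=kwargs, daemon=True
        )
        thread.start()
        return thread  # caller may join() later if needed
    return wrapper
```

The key point is that `sync` is consumed by the wrapper, which is why the parameter never appears in the body of the decorated function itself.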

Ark-kun commented on June 23, 2024

In the next release we're adding the BatchPredictionJob.submit() method, which does not wait for job completion (there will be a job.wait_for_completion() for that).
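That submit-then-wait split is the standard fire-and-forget pattern. As a library-agnostic sketch using only the standard library (run_batch_job is a stand-in for a real batch prediction call; the Vertex AI method names above may still change before release):

```python
from concurrent.futures import ThreadPoolExecutor

def run_batch_job(payload):
    """Stand-in for a long-running batch prediction job."""
    return {"input": payload, "state": "SUCCEEDED"}

executor = ThreadPoolExecutor(max_workers=1)

# submit() returns a handle immediately, in the spirit of the planned
# BatchPredictionJob.submit(); the program keeps running.
future = executor.submit(run_batch_job, "gs://BUCKET_NAME/test_table.jsonl")

# ... do other work here ...

# Block only when the result is actually needed, in the spirit of
# job.wait_for_completion().
result = future.result()
print(result["state"])  # SUCCEEDED
```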


racinmat commented on June 23, 2024

If you look at the method definition in https://github.com/googleapis/python-aiplatform/blob/main/google/cloud/aiplatform/models.py#L3762 you will see the sync parameter. It isn't documented there, but it is passed straight through to jobs.BatchPredictionJob.create, and if you look at that method's definition you will see the sync parameter documented there.
So just call:

text_model.batch_predict(
  source_uri=["gs://BUCKET_NAME/test_table.jsonl"],
  destination_uri_prefix="gs://BUCKET_NAME/tmp/2023-05-25-vertex-LLM-Batch-Prediction/result3",
  # Optional:
  model_parameters={
      "maxOutputTokens": "200",
      "temperature": "0.2",
      "topP": "0.95",
      "topK": "40",
  },
  sync=False,
)
and it should not hang.
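If the flag still has no effect, a generic stopgap is to push the blocking call into a background thread yourself. This is a minimal sketch under that assumption; do_batch_predict stands in for the real text_model.batch_predict(...) call.

```python
import threading

def do_batch_predict(source_uri):
    """Stand-in for the blocking text_model.batch_predict(...) call."""
    return f"job for {source_uri} finished"

results = {}

def run():
    results["job"] = do_batch_predict("gs://BUCKET_NAME/test_table.jsonl")

# Start the blocking call in the background and return immediately.
worker = threading.Thread(target=run, daemon=True)
worker.start()

# ... the main program continues ...

worker.join()  # wait only when the result is needed
```

Note this only hides the hang from the main thread; the underlying call still blocks until the job completes.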


aylon11 commented on June 23, 2024

Hey,

Thank you very much for addressing this.
Unfortunately this does not solve the problem.

I actually tried that before opening the bug and tried again now.
After adding the sync=False parameter the program still hangs, and the parameter doesn't seem to have any effect.

Following the links you provided, I can see that the sync parameter is ultimately never checked before the _block_until_complete method is called here

Any ideas? Is this a bug?


racinmat commented on June 23, 2024

You're right, the sync parameter is not used at all in this function. Yes, I think it is a bug, because the behavior differs from what the docs describe.


Ark-kun commented on June 23, 2024

I'm not sure we can just switch TextGenerationModel.batch_predict to a non-blocking version, since that could be considered a breaking change. What do you think? Would such a change break existing workflows?

