instill-ai / community

All about our lovely community's feedback, feature requests, bug reports, whims, etc.
Home Page: https://www.instill.tech
The tagline in the hero section should be either:
The copy under the title 'Integration Vision AI into existing data stack in minutes' should be split into two sentences, not one.
The title 'Fetch, Deploy and Share with the community' needs a comma before 'and'.
If we run the make tests multiple times, there is a good chance the tests in model-backend-rest.js won't pass.
✓ Model Backend API: Load model online
✓ POST /models (multipart) cls response Status
✓ POST /models (multipart) cvtask cls response Name
✓ POST /models (multipart) cvtask cls response FullName
✓ POST /models (multipart) cvtask cls response CVTask
✓ POST /models (multipart) cvtask cls response Versions
✓ PATCH /models (multipart) cls response Status
✓ PATCH /models (multipart) cvtask cls response Name
✓ PATCH /models (multipart) cvtask cls response FullName
✗ PATCH /models (multipart) cvtask cls response CVTask
  ↳ 0% — ✓ 0 / ✗ 1
✗ PATCH /models (multipart) cvtask cls response Versions
  ↳ 0% — ✓ 0 / ✗ 1
✗ PATCH /models (multipart) cvtask cls response Version 1 Status
  ↳ 0% — ✓ 0 / ✗ 1
Currently, pipeline-backend only supports inference via multipart image upload; we want to also support inference by providing an image URL or a base64-encoded image.
{
  "contents": [
    {
      "url": "https://artifacts.instill.tech/dog.jpg"
    },
    {
      "base64": "/9j/4AAQ...Kkdf/9k="
    }
  ]
}
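As a sketch of how a client might assemble such a request body, here is a minimal Go example. The contents schema mirrors the JSON above; the Content struct and buildBody helper are illustrative assumptions, not part of the current API.

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// Content is one item in the proposed "contents" array: either a remote
// image URL or a base64-encoded image, matching the JSON example above.
type Content struct {
	URL    string `json:"url,omitempty"`
	Base64 string `json:"base64,omitempty"`
}

// buildBody assembles the request body from a remote URL and raw image bytes.
func buildBody(url string, img []byte) ([]byte, error) {
	body := map[string][]Content{
		"contents": {
			{URL: url},
			{Base64: base64.StdEncoding.EncodeToString(img)},
		},
	}
	return json.Marshal(body)
}

func main() {
	// A few placeholder bytes stand in for a real JPEG payload.
	b, err := buildBody("https://artifacts.instill.tech/dog.jpg", []byte{0xFF, 0xD8, 0xFF})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```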
Currently, the update-model-version method only supports updating the status (ONLINE/OFFLINE). But a model version also has a description field that explains what changed in that version. Users want to update the version description as well.
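A hedged sketch of what the extended update request body could look like, where the description field becomes updatable alongside status (field names are assumptions here, not the finalized API):

```json
{
  "status": "ONLINE",
  "description": "Describe what changed in this version."
}
```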
We will release a single, holistic version for all protocol buffer files across all backends.
This is a flavour decision for our API practice. We prefer a fine-grained version control package instill.pipeline over an imposed-upstream control package instill.pipeline.v1.
When protobufs is released, a GitHub Actions workflow will trigger auto-generation of all gRPC client/server code for all language repos (e.g., protogen-go) and also update all related gRPC/OpenAPI documents.
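For illustration, the preferred fine-grained scheme above would declare packages like the following (a sketch, not the actual file contents):

```protobuf
syntax = "proto3";

// Preferred: fine-grained package without an imposed version suffix.
package instill.pipeline;

// vs. the upstream-style alternative we decided against:
// package instill.pipeline.v1;
```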
Right now, the newsletter archive is statically generated (SSG); every time we release a new newsletter, we have to manually re-deploy the whole site, which causes overhead.
If I run the following script in quick start
# Deploy the model
go run deploy-model/main.go --model-path yolov4-onyx-cpu.zip --model-name yolov4
I get a model with model name yolov4 with 1 version.
2022/02/21 00:16:13 model has been created, the response is: id:1 name:"yolov4" full_Name:"local-user/yolov4" cv_task:DETECTION versions:{version:1 model_id:1 description:"YoloV4 for object detection" created_at:{seconds:1645402547 nanos:80792000} updated_at:{seconds:1645402547 nanos:80807000}}
The response is missing the status of version 1 of the model.
If I run the above script a second time, I get a model with model name yolov4 with 2 versions.
2022/02/21 00:20:07 model has been created, the response is: id:1 name:"yolov4" full_Name:"local-user/yolov4" cv_task:DETECTION versions:{version:1 model_id:1 description:"YoloV4 for object detection" created_at:{seconds:1645402547 nanos:80792000} updated_at:{seconds:1645402625 nanos:977017000} status:ONLINE} versions:{version:2 model_id:1 description:"YoloV4 for object detection" created_at:{seconds:1645402779 nanos:961661000} updated_at:{seconds:1645402779 nanos:961692000}}
The response only includes the status of version 1, but no status of version 2.
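One way the deploy script could avoid silently creating a second version is to check whether a model with the same name already exists before creating. A minimal sketch of that decision logic; the exists/allowNewVersion inputs are hypothetical, since the current script always creates:

```go
package main

import "fmt"

// deployAction decides what the deploy script should do, given whether a
// model with the same name already exists and whether the user explicitly
// asked for a new version. Both inputs are hypothetical flags; today the
// script always creates, which is how the duplicate version above appeared.
func deployAction(exists, allowNewVersion bool) string {
	switch {
	case !exists:
		return "create"
	case allowNewVersion:
		return "new-version"
	default:
		return "abort"
	}
}

func main() {
	// A second run without an explicit new-version flag should not
	// quietly add version 2.
	fmt.Println(deployAction(true, false))
}
```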
# Test the model
go run test-model/main.go --model-name yolov4 --test-image dog.jpg --model-version 2
I get the response:
2022/02/21 00:24:15 error when triggering predict: rpc error: code = Code(400) desc = {"status":400,"title":"PredictModel","detail":"Model is offline"}
Note: shouldn't we use status code 422 instead of 400 for the above scenario?
But when I GET /models/yolov4, the response is:
{
  "id": 1,
  "name": "yolov4",
  "full_Name": "local-user/yolov4",
  "cv_task": "DETECTION",
  "versions": [
    {
      "version": 1,
      "model_id": 1,
      "description": "YoloV4 for object detection",
      "created_at": "2022-02-21T00:15:47.080792Z",
      "updated_at": "2022-02-21T00:20:09.486272Z",
      "status": "ONLINE"
    },
    {
      "version": 2,
      "model_id": 1,
      "description": "YoloV4 for object detection",
      "created_at": "2022-02-21T00:19:39.961661Z",
      "updated_at": "2022-02-21T00:20:09.486272Z",
      "status": "ONLINE"
    }
  ]
}
The response shows both model versions are online.
Related to model backend issue
When we do
make all
make test
The tests won't pass because they rely on an existing online yolov4 model.
We should use a tiny dummy model for testing and fix this issue to make the tests self-contained.
Related PR instill-ai/protobufs#34
When running the quick start example, it sometimes produces errors, for example, a Triton server not-ready error (503). We need to improve the example or provide troubleshooting guidelines.
Currently, we only validate model in the recipe. We also need to validate source and destination.
Related issue: #202
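A minimal sketch of the kind of check we need; the Recipe shape here is an assumption for illustration, not the real recipe type in pipeline-backend:

```go
package main

import (
	"errors"
	"fmt"
)

// Recipe is a simplified stand-in for the real pipeline recipe.
type Recipe struct {
	Source      string
	Destination string
	Model       string
}

// validateRecipe extends the existing model-only check so that
// source and destination are required as well.
func validateRecipe(r Recipe) error {
	if r.Model == "" {
		return errors.New("recipe: model is required")
	}
	if r.Source == "" {
		return errors.New("recipe: source is required")
	}
	if r.Destination == "" {
		return errors.New("recipe: destination is required")
	}
	return nil
}

func main() {
	// A recipe with only a model should now be rejected.
	err := validateRecipe(Recipe{Model: "yolov4"})
	fmt.Println(err)
}
```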
Here we collect a list of tasks that need to be completed to release the VDP project
model-backend has the docker command "./openapi" at the moment. It should be model-backend, to be consistent with the other backends.
Image pulling is a one-time effort.
After creating an object detection pipeline by following the quick start tutorial, a user would want to deploy their own model to create a pipeline.
Given a model, we need to provide guidelines on
Add unit testing for all backends. Please tick off the checkbox when it is done.
Current redoc deploys the OpenAPI docs from the main branch of the protobufs. It should deploy the corresponding OpenAPI docs that are compatible with the pipeline-backend and model-backend in docker-compose.yml.
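As a sketch, the redoc service in docker-compose.yml could pin the spec to a tagged protobufs release instead of main. The Redoc Docker image reads the spec location from the SPEC_URL environment variable; the service layout, release tag, and file path below are assumptions:

```yaml
redoc:
  image: redocly/redoc
  environment:
    # Pin to a tagged protobufs release so the docs match the deployed
    # pipeline-backend and model-backend, rather than tracking main.
    SPEC_URL: https://raw.githubusercontent.com/instill-ai/protobufs/v0.1.0/openapi.yaml
```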
buf provides a list of rules and categories as well as style guidelines, which we should follow.
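Adopting those rules could be as simple as a buf.yaml at the repo root; a sketch using buf's standard DEFAULT lint category and FILE breaking-change category:

```yaml
# buf.yaml — adopt buf's standard lint and breaking-change rules
version: v1
lint:
  use:
    - DEFAULT
breaking:
  use:
    - FILE
```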
The behaviour of unload is different from load for an ensemble model. load will load all the ensemble model's dependent models into Triton (https://github.com/instill-ai/model-backend/blob/66b034d0f9ef9af61c2823ae01df111db6c5abaa/pkg/services/model.go#L368), while unload will unload only the ensemble itself and leave the dependents in Triton (https://github.com/instill-ai/model-backend/blob/66b034d0f9ef9af61c2823ae01df111db6c5abaa/pkg/services/model.go#L388).
This causes OOM after a few patches on an ensemble model if its dependents are not unloaded together or purged regularly.
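A sketch of making unload symmetric with load by also unloading the dependents. The client interface and fake below are hypothetical stand-ins for the real Triton client in pkg/services/model.go:

```go
package main

import "fmt"

// tritonClient is a hypothetical stand-in for the Triton management client.
type tritonClient interface {
	UnloadModel(name string) error
}

// fakeTriton records unload calls so the behaviour is easy to inspect.
type fakeTriton struct{ unloaded []string }

func (f *fakeTriton) UnloadModel(name string) error {
	f.unloaded = append(f.unloaded, name)
	return nil
}

// unloadEnsemble unloads all of the ensemble's dependent models and then
// the ensemble itself, mirroring what load already does for dependents,
// so repeated PATCHes no longer leak memory in Triton.
func unloadEnsemble(c tritonClient, ensemble string, dependents []string) error {
	for _, d := range dependents {
		if err := c.UnloadModel(d); err != nil {
			return err
		}
	}
	return c.UnloadModel(ensemble)
}

func main() {
	t := &fakeTriton{}
	_ = unloadEnsemble(t, "yolov4-ensemble", []string{"pre", "infer", "post"})
	fmt.Println(t.unloaded)
}
```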
The currently supported Computer Vision (CV) tasks are image classification and object detection.
We need to provide guidelines on
Test highlights (including but not limited to):
Right now, we have the IconBase component defined as below; we need to remove the default my-auto and make it more flexible.
import { FC } from "react";
import * as classNames from "classnames";

interface Props {
  viewBox: string;
  styleName: string;
}

export const IconBase: FC<Props> = ({ children, viewBox, styleName }) => {
  return (
    <svg
      xmlns="http://www.w3.org/2000/svg"
      viewBox={viewBox}
      className={classNames.default("fill-current my-auto", styleName)}
    >
      {children}
    </svg>
  );
};
The model type is mistakenly set as tensorrt.
https://github.com/instill-ai/model-backend/blob/9eb75d70b00198b6a0362e294104d07a082f7ba4/examples-go/grpc_client.go#L65
The model type is mistakenly set as tensorrt.
https://github.com/instill-ai/vdp/blob/814ea12e94b7ebb4656c7208f5c160c646f8523a/examples-go/deploy-model/main.go#L65
GSAP is an open-source package we use for the product website's animations, but it requires a paid license once we have paying users, so we need to consider adopting another animation package in the future.
The VDP project spans multiple repos, including
It would be nice if we provided docs explaining the role of each repo in the project, so the community has a better understanding of VDP.