techainer / mlchain-python
Auto-Magical Deploy AI model at large scale, high performance, and easy to use
Home Page: https://pypi.org/project/mlchain
License: MIT License
Hi there
I found a bug when running with mlchain run
on my MacBook Pro (macOS 10.15.6, but I suspect this problem can be reproduced on older versions).
When wrapper: None is configured in mlconfig and I run mlchain run,
there is no bug when calling the API.
When wrapper: gunicorn is configured in mlconfig and I run mlchain run,
there is a bug that prevents the function from being called:
objc[28043]: +[__NSCFConstantString initialize] may have been in progress in another thread when fork() was called.
objc[28043]: +[__NSCFConstantString initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to debug.
The problem disappears when I run this command before mlchain run
export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
From my testing:
On Windows: gunicorn cannot be installed there, so an alternative should be found.
On Linux: everything works fine.
As far as I know, mlchain natively supports serving with nginx on port 8080 through mlchain serve ...
However, when inspecting the code, there is no exposed socket for nginx to use.
Please add support for this (a rough sketch of the idea follows). Thank you.
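For what it's worth, a rough sketch of the idea (illustrative only, not mlchain's current API): bind the WSGI app that ServeModel exposes to a unix socket via gunicorn's custom-application interface, so nginx can proxy_pass to it.

from gunicorn.app.base import BaseApplication

class UnixSocketServer(BaseApplication):
    """Run a WSGI app on a unix socket that nginx can proxy_pass to."""

    def __init__(self, wsgi_app, sock_path="/tmp/mlchain.sock"):
        self.wsgi_app = wsgi_app       # hypothetical: the Flask app behind ServeModel
        self.sock_path = sock_path
        super().__init__()

    def load_config(self):
        self.cfg.set("bind", "unix:" + self.sock_path)
        self.cfg.set("workers", 2)

    def load(self):
        return self.wsgi_app

# UnixSocketServer(flask_app).run()
# nginx side (sketch): proxy_pass http://unix:/tmp/mlchain.sock;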
There are unnecessary and uncompressed JavaScript files:
docs/js/chat.js
docs/js/custom.js
docs/js/termynal.js
mlchain/server/templates/swaggerui/swagger-ui-standalone-preset.js
Sometimes the exception message from a server call is missing. This needs further investigation.
The CSS files should be compressed:
docs/css/custom.css
docs/css/style.css
docs/css/termynal.css
mlchain/server/static/Source-Sans-Pro.css
mlchain/server/templates/swaggerui/swagger-ui.css
Hi there, we are experimenting with this lib on ARM-based hardware (e.g. NVIDIA Jetson).
You should cover this in CI/CD and in the future plan.
Fix the Travis CI build error and add automatic deployment to PyPI when a new version is released.
Currently the docs are built and updated manually. We need CI/CD for documentation on release tags.
Hello there, as far as I know, mlchain supports many features. However, when using mlchain init,
I think the result could be better.
E.g.: mlchain-python/docs/css/custom.css (line 20 in 0dd5c5b)
Some may encounter this when running mlchain run:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/gunicorn/arbiter.py", line 583, in spawn_worker
worker.init_process()
File "/usr/local/lib/python3.7/dist-packages/gunicorn/workers/gthread.py", line 92, in init_process
super().init_process()
File "/usr/local/lib/python3.7/dist-packages/gunicorn/workers/base.py", line 119, in init_process
self.load_wsgi()
File "/usr/local/lib/python3.7/dist-packages/gunicorn/workers/base.py", line 144, in load_wsgi
self.wsgi = self.app.wsgi()
File "/usr/local/lib/python3.7/dist-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/usr/local/lib/python3.7/dist-packages/mlchain/cli/run.py", line 205, in load
serve_model = get_model(entry_file, serve_model=True)
File "/usr/local/lib/python3.7/dist-packages/mlchain/cli/run.py", line 310, in get_model
module = importlib.import_module(import_name)
File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 962, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'code.server'; 'code' is not a package
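A likely cause (my assumption, not stated in the issue): the entry file lives in a folder named code, and Python resolves code to the standard-library code module instead of the local folder, hence "'code' is not a package". Renaming the folder, or adding an __init__.py so it becomes a regular package, usually fixes it. A quick check:

import code
# if this prints a path inside the standard library, the local "code" folder is being shadowed
print(code.__file__)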
Hello there,
as far as I know, exported environment variables are treated as strings in Python.
Mlchain already supports type indicators on functions to convert, for example, image binary to a numpy array. However, when I export an int8 value, I still get a string in my code.
Please add support for this; it would be a nice feature.
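A minimal sketch of what the conversion could look like, using a hypothetical typed_env helper (the name and signature are mine, not mlchain's API):

import os

def typed_env(name, cast=str, default=None):
    """Read an environment variable and coerce it, since env vars always arrive as strings."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    if cast is bool:
        return raw.lower() in ("1", "true", "yes")
    return cast(raw)

# usage (illustrative):
batch_size = typed_env("BATCH_SIZE", cast=int, default=8)
threshold = typed_env("THRESHOLD", cast=float, default=0.5)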
Switch CI to GitHub Actions with a full x86 matrix build. Including:
Priority order:
mlchain run arg > mlconfig.yaml value
Note: the default group in mlconfig should be affected by this as well.
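A minimal sketch of that precedence, with illustrative names (resolve_setting is not an existing mlchain function):

def resolve_setting(cli_value, config_value, default=None):
    """A value passed to mlchain run wins over the mlconfig.yaml value, which wins over the default."""
    if cli_value is not None:
        return cli_value
    if config_value is not None:
        return config_value
    return default

# e.g. port = resolve_setting(args.port, mlconfig.get("port"), 8080)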
As reported in: Techainer/mnist-mlchain-examples#1
When using mlchain on Windows with the gunicorn wrapper,
you will encounter this problem:
ModuleNotFoundError: No module named 'fcntl'
Due to this line: mlchain-python/mlchain/cli/run.py (line 204 in 3edbebc)
The environment variable CUDA_VISIBLE_DEVICES
has no effect when running with MLChain. Will fix this soon. cc @vuonghoainam
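As a stopgap (my suggestion, not the upcoming fix): set the variable inside the entry file before the deep-learning framework is imported, since CUDA only reads CUDA_VISIBLE_DEVICES when it is first initialized.

import os

# must run before tensorflow / torch is imported, because CUDA reads
# CUDA_VISIBLE_DEVICES once, when the driver is first initialized
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "0")

import tensorflow as tf  # or torch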
@vuonghoainam reported a bug when installing mlchain 0.1.7
and running it with mlchain run:
from .h2 import H2Protocol
File "/usr/local/lib/python3.7/site-packages/hypercorn/protocol/h2.py", line 4, in <module>
import h2.connection
File "/usr/local/lib/python3.7/site-packages/h2/connection.py", line 33, in <module>
from .frame_buffer import FrameBuffer
File "/usr/local/lib/python3.7/site-packages/h2/frame_buffer.py", line 9, in <module>
from hyperframe.exceptions import InvalidFrameError, InvalidDataError
ImportError: cannot import name 'InvalidDataError' from 'hyperframe.exceptions' (/usr/local/lib/python3.7/site-packages/hyperframe/exceptions.py)
Mlchain installation problems with virtual environment (anaconda)
ERROR: Failed building wheel for bottleneck
Failed to build bottleneck
ERROR: Could not build wheels for bottleneck which use PEP 517 and cannot be installed directly
Ubuntu: 18.04.5 LTS
python : 3.6
conda : 4.8.3
Traceback (most recent call last):
File "/home/techainer_docker/miniconda/bin/mlchain", line 5, in <module>
from mlchain.cli.main import main
File "/home/techainer_docker/miniconda/lib/python3.8/site-packages/mlchain/__init__.py", line 34, in <module>
from .client import Client
File "/home/techainer_docker/miniconda/lib/python3.8/site-packages/mlchain/client/__init__.py", line 3, in <module>
from .grpc_client import GrpcClient
File "/home/techainer_docker/miniconda/lib/python3.8/site-packages/mlchain/client/grpc_client.py", line 4, in <module>
from .base import MLClient
File "/home/techainer_docker/miniconda/lib/python3.8/site-packages/mlchain/client/base.py", line 10, in <module>
from httpx import (
ImportError: cannot import name 'ResponseClosed' from 'httpx' (/home/techainer_docker/miniconda/lib/python3.8/site-packages/httpx/__init__.py)
An internal Techainer project discovered a bug: when using python3 server.py
with content like this:
from mlchain.base import ServeModel
from model import Model
from mlchain import mlconfig
# mlconfig.load_config('mlconfig.yaml')

model = Model(weight_path=mlconfig.weight,
              debug=mlconfig.debug)
model = ServeModel(model)

if __name__ == "__main__":
    from mlchain.rpc.server.flask_server import FlaskServer
    FlaskServer(model).run(bind=['127.0.0.1:8004'], gunicorn=True)
it causes the self.sess.run call
inside the model class to hang forever, while using the mlchain run
CLI does not.
Note that this model uses TensorFlow 1.14; 1.15 suffers from the same problem.
This DOES NOT affect production usage, since we only use mlchain run,
but this bug is worth more examination; a possible cause is sketched below.
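A guess at the cause (an assumption, not confirmed): with gunicorn=True the TensorFlow session is built in the parent process and the workers are forked afterwards, and TensorFlow/CUDA state generally does not survive fork(), which can make sess.run block forever. A sketch of a lazy-initialization workaround, with illustrative names:

import threading

class Model:
    def __init__(self, weight_path, debug=False):
        self.weight_path = weight_path
        self.debug = debug
        self._sess = None                 # do NOT build the session here
        self._lock = threading.Lock()

    def _ensure_session(self):
        # build the graph and session lazily, inside the worker process,
        # so nothing TensorFlow-related exists before gunicorn forks
        if self._sess is None:
            with self._lock:
                if self._sess is None:
                    import tensorflow as tf
                    self._sess = tf.compat.v1.Session()
                    # build self.input_tensor / self.output_tensor and
                    # load weights from self.weight_path here

    def predict(self, image):
        self._ensure_session()
        # illustrative: run whatever fetches/feeds the real model defines
        return self._sess.run(self.output_tensor, feed_dict={self.input_tensor: image})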
When I ran QuartServer, I got an exception.
Traceback (most recent call last):
File "/mnt/hdd/spaces/miles/.local/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py", line 396, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/mnt/hdd/spaces/miles/.local/lib/python3.7/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
return await self.app(scope, receive, send)
File "/mnt/hdd/spaces/miles/.local/lib/python3.7/site-packages/quart/app.py", line 2117, in __call__
await self.asgi_app(scope, receive, send)
File "/mnt/hdd/spaces/miles/.local/lib/python3.7/site-packages/quart/app.py", line 2140, in asgi_app
await asgi_handler(receive, send)
File "/mnt/hdd/spaces/miles/.local/lib/python3.7/site-packages/quart/asgi.py", line 33, in __call__
_raise_exceptions(done)
File "/mnt/hdd/spaces/miles/.local/lib/python3.7/site-packages/quart/asgi.py", line 256, in _raise_exceptions
raise task.exception()
File "/mnt/hdd/spaces/miles/.local/lib/python3.7/site-packages/quart/asgi.py", line 84, in handle_request
await asyncio.wait_for(self._send_response(send, response), timeout=timeout)
File "/usr/lib/python3.7/asyncio/tasks.py", line 442, in wait_for
return fut.result()
File "/mnt/hdd/spaces/miles/.local/lib/python3.7/site-packages/quart/asgi.py", line 98, in _send_response
async for data in body:
File "/mnt/hdd/spaces/miles/.local/lib/python3.7/site-packages/quart/wrappers/response.py", line 129, in _aiter
for data in iterable: # type: ignore
TypeError: 'coroutine' object is not iterable
My environment:
MlChain 0.1.9
Flask 1.1.2
Quart 0.14.1
I've recently picked up this framework. I can't find instructions on how to define a specific URL for the API, other than IP:port. What I want to do is deploy an API at a URL like IP:port/route/to/api. How do I achieve that? Thank you.
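Not something the issue confirms mlchain supports; a generic WSGI workaround (a sketch, assuming you can reach the underlying Flask app) is to mount the app under a prefix with Werkzeug's DispatcherMiddleware:

from flask import Flask
from werkzeug.exceptions import NotFound
from werkzeug.middleware.dispatcher import DispatcherMiddleware

app = Flask(__name__)  # stand-in for the Flask app that serves the model

@app.route("/predict")
def predict():
    return "ok"

# serve the whole app under /route/to/api; anything outside the prefix returns 404
app.wsgi_app = DispatcherMiddleware(NotFound(), {"/route/to/api": app.wsgi_app})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # /route/to/api/predict is now reachable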
Starlette is more stable than Quart, so we are considering removing Quart and integrating Starlette as the default server (currently it is Flask).
How to scale the system up to hundreds of server nodes :)
At mlchain 0.1.8rc1, I encountered the following bug during the initialization phase with mlchain. Please help resolve this. Thank you.
This link should be helpful, I guess: Stackoverflow
Traceback (most recent call last):
File "/usr/local/bin/mlchain", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/dist-packages/mlchain/cli/main.py", line 50, in main
cli.main(args=sys.argv[1:], prog_name="python -m mlchain" if as_module else None)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/mlchain/cli/run.py", line 104, in run_command
config = mlconfig.load_file(config)
File "/usr/local/lib/python3.7/dist-packages/mlchain/config.py", line 152, in load_file
return load_yaml(path)
File "/usr/local/lib/python3.7/dist-packages/mlchain/config.py", line 144, in load_yaml
return yaml.load(f, Loader=yaml.FullLoader)
AttributeError: module 'yaml' has no attribute 'FullLoader'
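yaml.FullLoader was only added in PyYAML 5.1, so the image most likely ships an older PyYAML; upgrading PyYAML should resolve it. A small compatibility sketch (illustrative, not the actual mlchain patch):

import yaml

def load_yaml(path):
    with open(path, "r") as f:
        # PyYAML < 5.1 has no FullLoader; fall back to the safe loader
        loader = getattr(yaml, "FullLoader", yaml.SafeLoader)
        return yaml.load(f, Loader=loader)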