
chat-with-mlx's People

Contributors

amrrs, dependabot[bot], lynncc6, onuralpszr, qnguyen3


chat-with-mlx's Issues

chat templates?

Thanks for building this; it looks nice on a quick first try. For some models, though, you might want to check the prompt template and stop parameters, or parse per-model YAML template files the way e.g. ooba (text-generation-webui) does, because the output can otherwise come out malformed, as in the screenshot below.
(screenshot attached: 2024-02-29 at 08:14)
(Sorry for the quick, lazy issue note; I'm short on time at the moment.)
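For context, a per-model template file of the kind suggested here might look roughly like the sketch below. This is purely illustrative; the key names (system_prefix, user_prefix, assistant_prefix, stop_strings) are hypothetical and match neither text-generation-webui's actual schema nor anything chat-with-mlx ships:

# Hypothetical per-model chat template (illustrative key names only)
name: mistral-instruct
system_prefix: "<s>[INST] "
user_prefix: "[INST] "
assistant_prefix: " [/INST]"
stop_strings:
  - "</s>"
  - "[INST]"

Parsing one such file per model would let the app build the prompt and set stop parameters without hard-coding them.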

I suppose this should be local only?

  File "/opt/homebrew/lib/python3.10/site-packages/chat_with_mlx/app.py", line 166, in chatbot
    response = client.chat.completions.create(
  File "/opt/homebrew/lib/python3.10/site-packages/openai/_utils/_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
  File "/opt/homebrew/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 663, in create
    return self._post(
  File "/opt/homebrew/lib/python3.10/site-packages/openai/_base_client.py", line 1200, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/opt/homebrew/lib/python3.10/site-packages/openai/_base_client.py", line 889, in request
    return self._request(
  File "/opt/homebrew/lib/python3.10/site-packages/openai/_base_client.py", line 942, in _request
    return self._retry_request(
  File "/opt/homebrew/lib/python3.10/site-packages/openai/_base_client.py", line 1013, in _retry_request
    return self._request(
  File "/opt/homebrew/lib/python3.10/site-packages/openai/_base_client.py", line 942, in _request
    return self._retry_request(
  File "/opt/homebrew/lib/python3.10/site-packages/openai/_base_client.py", line 1013, in _retry_request
    return self._request(
  File "/opt/homebrew/lib/python3.10/site-packages/openai/_base_client.py", line 952, in _request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.

Connection refused

Very excited to try this. I created a conda environment and installed the code in editable mode with `pip install -e .`. I loaded the Mistral 7B model, uploaded and indexed a doc file, and asked a simple question. I got the following error:

File "/Users/nmadnani/anaconda/envs/mlxchat/lib/python3.11/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
    yield
  File "/Users/nmadnani/anaconda/envs/mlxchat/lib/python3.11/site-packages/httpx/_transports/default.py", line 233, in handle_request
    resp = self._pool.handle_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/nmadnani/anaconda/envs/mlxchat/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
    raise exc from None
  File "/Users/nmadnani/anaconda/envs/mlxchat/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
    response = connection.handle_request(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/nmadnani/anaconda/envs/mlxchat/lib/python3.11/site-packages/httpcore/_sync/connection.py", line 99, in handle_request
    raise exc
  File "/Users/nmadnani/anaconda/envs/mlxchat/lib/python3.11/site-packages/httpcore/_sync/connection.py", line 76, in handle_request
    stream = self._connect(request)
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/nmadnani/anaconda/envs/mlxchat/lib/python3.11/site-packages/httpcore/_sync/connection.py", line 122, in _connect
    stream = self._network_backend.connect_tcp(**kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/nmadnani/anaconda/envs/mlxchat/lib/python3.11/site-packages/httpcore/_backends/sync.py", line 205, in connect_tcp
    with map_exceptions(exc_map):
  File "/Users/nmadnani/anaconda/envs/mlxchat/lib/python3.11/contextlib.py", line 158, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/Users/nmadnani/anaconda/envs/mlxchat/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ConnectError: [Errno 61] Connection refused
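Both this error and the one in the previous issue mean the app's OpenAI-compatible client could not reach the local model server it expects on localhost. A minimal reachability probe, assuming the server follows the usual OpenAI-compatible convention of serving /v1 on a local port (the port here is a placeholder; check the app's startup logs for the real one):

# Minimal probe for a local OpenAI-compatible server (stdlib only).
# The port 8080 is an assumption for illustration, not necessarily
# the port chat-with-mlx actually uses.
import urllib.error
import urllib.request

BASE_URL = "http://127.0.0.1:8080/v1"  # hypothetical local endpoint

try:
    with urllib.request.urlopen(f"{BASE_URL}/models", timeout=5) as resp:
        print("server reachable, HTTP", resp.status)
except urllib.error.URLError as err:
    # "Connection refused" here means the model server never started or died.
    print("server not reachable:", err)

If the probe fails, the problem is in the server startup (model load), not in the chat code that raised the connection error.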


I installed the package with pip on my M2 MacBook Air, but after entering a prompt and clicking submit, the following error is reported in the background:

❯ chat-with-mlx -h
You try to use a model that was created with version 2.4.0.dev0, however, your version is 2.4.0. This might cause unexpected behavior or errors. In that case, try to update to the latest version.

Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Traceback (most recent call last):
  File "/opt/homebrew/lib/python3.11/site-packages/gradio/queueing.py", line 495, in call_prediction
    output = await route_utils.call_process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/gradio/route_utils.py", line 235, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/gradio/blocks.py", line 1627, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/gradio/blocks.py", line 1185, in call_function
    prediction = await utils.async_iteration(iterator)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/gradio/utils.py", line 514, in async_iteration
    return await iterator.__anext__()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/gradio/utils.py", line 640, in asyncgen_wrapper
    response = await iterator.__anext__()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/gradio/chat_interface.py", line 490, in _stream_fn
    first_response = await async_iteration(generator)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/gradio/utils.py", line 514, in async_iteration
    return await iterator.__anext__()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/gradio/utils.py", line 507, in __anext__
    return await anyio.to_thread.run_sync(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/gradio/utils.py", line 490, in run_sync_iterator_async
    return next(iterator)
           ^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/chat_with_mlx/app.py", line 166, in chatbot
    response = client.chat.completions.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/_utils/_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 663, in create
    return self._post(
           ^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/_base_client.py", line 1200, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/_base_client.py", line 889, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/_base_client.py", line 965, in _request
    return self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/_base_client.py", line 1013, in _retry_request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/_base_client.py", line 965, in _request
    return self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/_base_client.py", line 1013, in _retry_request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/_base_client.py", line 980, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Error code: 503

Error reported when executing 'chat-with-mlx' command

OS: MacBook Pro 2021, macOS Sonoma 14.2.1

ERROR:

Traceback (most recent call last):
  File "/Users/hibisicus/miniforge3/lib/python3.8/site-packages/langchain_community/embeddings/huggingface.py", line 59, in __init__
    import sentence_transformers
  File "/Users/hibisicus/miniforge3/lib/python3.8/site-packages/sentence_transformers/__init__.py", line 3, in <module>
    from .datasets import SentencesDataset, ParallelSentencesDataset
  File "/Users/hibisicus/miniforge3/lib/python3.8/site-packages/sentence_transformers/datasets/__init__.py", line 3, in <module>
    from .ParallelSentencesDataset import ParallelSentencesDataset
  File "/Users/hibisicus/miniforge3/lib/python3.8/site-packages/sentence_transformers/datasets/ParallelSentencesDataset.py", line 4, in <module>
    from .. import SentenceTransformer
  File "/Users/hibisicus/miniforge3/lib/python3.8/site-packages/sentence_transformers/SentenceTransformer.py", line 24, in <module>
    from .evaluation import SentenceEvaluator
  File "/Users/hibisicus/miniforge3/lib/python3.8/site-packages/sentence_transformers/evaluation/__init__.py", line 3, in <module>
    from .BinaryClassificationEvaluator import BinaryClassificationEvaluator
  File "/Users/hibisicus/miniforge3/lib/python3.8/site-packages/sentence_transformers/evaluation/BinaryClassificationEvaluator.py", line 5, in <module>
    from sklearn.metrics.pairwise import paired_cosine_distances, paired_euclidean_distances, paired_manhattan_distances
  File "/Users/hibisicus/miniforge3/lib/python3.8/site-packages/sklearn/__init__.py", line 82, in <module>
    from .base import clone
  File "/Users/hibisicus/miniforge3/lib/python3.8/site-packages/sklearn/base.py", line 17, in <module>
    from .utils import _IS_32BIT
  File "/Users/hibisicus/miniforge3/lib/python3.8/site-packages/sklearn/utils/__init__.py", line 21, in <module>
    from scipy.sparse import issparse
  File "/Users/hibisicus/miniforge3/lib/python3.8/site-packages/scipy/sparse/__init__.py", line 283, in <module>
    from . import csgraph
  File "/Users/hibisicus/miniforge3/lib/python3.8/site-packages/scipy/sparse/csgraph/__init__.py", line 182, in <module>
    from ._laplacian import laplacian
  File "/Users/hibisicus/miniforge3/lib/python3.8/site-packages/scipy/sparse/csgraph/_laplacian.py", line 7, in <module>
    from scipy.sparse.linalg import LinearOperator
  File "/Users/hibisicus/miniforge3/lib/python3.8/site-packages/scipy/sparse/linalg/__init__.py", line 120, in <module>
    from ._isolve import *
  File "/Users/hibisicus/miniforge3/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/__init__.py", line 4, in <module>
    from .iterative import *
  File "/Users/hibisicus/miniforge3/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/iterative.py", line 9, in <module>
    from . import _iterative
ImportError: dlopen(/Users/hibisicus/miniforge3/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/_iterative.cpython-38-darwin.so, 0x0002): Library not loaded: @rpath/liblapack.3.dylib
  Referenced from: <4FB2C529-F29B-3B2A-AA66-BBB0F1340F73> /Users/hibisicus/miniforge3/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/_iterative.cpython-38-darwin.so
  Reason: tried: '/Users/hibisicus/miniforge3/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/../../../../../../liblapack.3.dylib' (no such file), '/Users/hibisicus/miniforge3/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/../../../../../../liblapack.3.dylib' (no such file), '/Users/hibisicus/miniforge3/lib/liblapack.3.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/Users/hibisicus/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/hibisicus/miniforge3/lib/liblapack.3.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/Users/hibisicus/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/hibisicus/miniforge3/bin/../lib/liblapack.3.dylib' (no such file), '/Users/hibisicus/miniforge3/lib/liblapack.3.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/Users/hibisicus/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/hibisicus/miniforge3/lib/liblapack.3.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/Users/hibisicus/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/hibisicus/miniforge3/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file, not in dyld cache)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/hibisicus/miniforge3/bin/chat-with-mlx", line 5, in <module>
    from chat_with_mlx.app import main
  File "/Users/hibisicus/miniforge3/lib/python3.8/site-packages/chat_with_mlx/app.py", line 26, in <module>
    emb = HuggingFaceEmbeddings(model_name='nomic-ai/nomic-embed-text-v1.5', model_kwargs={'trust_remote_code':True})
  File "/Users/hibisicus/miniforge3/lib/python3.8/site-packages/langchain_community/embeddings/huggingface.py", line 62, in __init__
    raise ImportError(
ImportError: Could not import sentence_transformers python package. Please install it with `pip install sentence-transformers`.

I tried `pip3 install --upgrade pip`, `pip3 install torch`, and `pip3 install transformers==4.7.0`, but the error still exists.
How can I solve it?
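Before reinstalling packages at random, it can help to find the first broken link in the import chain the traceback walks through. A small smoke test, independent of chat-with-mlx:

# Import smoke test: tries the same dependency chain the traceback shows,
# so the first failure points at what actually needs reinstalling.
import importlib

for name in ("scipy", "scipy.sparse.linalg", "sklearn", "sentence_transformers"):
    try:
        importlib.import_module(name)
        print(f"OK   {name}")
    except Exception as exc:  # ImportError, dlopen failures, etc.
        print(f"FAIL {name}: {exc}")
        break

Given the dlopen error above, the scipy build in that miniforge environment has lost its LAPACK dylib; reinstalling scipy (and its conda-forge dependencies) in that environment is the usual fix, rather than upgrading pip or pinning transformers.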

Error on startup: name 'sys_prompt' is not defined

Hi @qnguyen3, I'm trying to use your project. I followed the instructions for cloning and installing the CLI, then started it with chat-with-mlx, but I get the following error when I try to use the chat:

<All keys matched successfully>
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Traceback (most recent call last):
  File "/Users/anbhatta/DeepLearning/chat-with-mlx/venv/lib/python3.11/site-packages/gradio/queueing.py", line 495, in call_prediction
    output = await route_utils.call_process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/anbhatta/DeepLearning/chat-with-mlx/venv/lib/python3.11/site-packages/gradio/route_utils.py", line 235, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/anbhatta/DeepLearning/chat-with-mlx/venv/lib/python3.11/site-packages/gradio/blocks.py", line 1627, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/anbhatta/DeepLearning/chat-with-mlx/venv/lib/python3.11/site-packages/gradio/blocks.py", line 1185, in call_function
    prediction = await utils.async_iteration(iterator)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/anbhatta/DeepLearning/chat-with-mlx/venv/lib/python3.11/site-packages/gradio/utils.py", line 514, in async_iteration
    return await iterator.__anext__()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/anbhatta/DeepLearning/chat-with-mlx/venv/lib/python3.11/site-packages/gradio/utils.py", line 640, in asyncgen_wrapper
    response = await iterator.__anext__()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/anbhatta/DeepLearning/chat-with-mlx/venv/lib/python3.11/site-packages/gradio/chat_interface.py", line 490, in _stream_fn
    first_response = await async_iteration(generator)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/anbhatta/DeepLearning/chat-with-mlx/venv/lib/python3.11/site-packages/gradio/utils.py", line 514, in async_iteration
    return await iterator.__anext__()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/anbhatta/DeepLearning/chat-with-mlx/venv/lib/python3.11/site-packages/gradio/utils.py", line 507, in __anext__
    return await anyio.to_thread.run_sync(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/anbhatta/DeepLearning/chat-with-mlx/venv/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/anbhatta/DeepLearning/chat-with-mlx/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "/Users/anbhatta/DeepLearning/chat-with-mlx/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/anbhatta/DeepLearning/chat-with-mlx/venv/lib/python3.11/site-packages/gradio/utils.py", line 490, in run_sync_iterator_async
    return next(iterator)
           ^^^^^^^^^^^^^^
  File "/Users/anbhatta/DeepLearning/chat-with-mlx/chat_with_mlx/app.py", line 151, in chatbot
    if sys_prompt is not None:
       ^^^^^^^^^^
NameError: name 'sys_prompt' is not defined

Am I doing something wrong?
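For anyone hitting this: the failure is a plain NameError inside chatbot(); the code reads a sys_prompt variable that was never assigned on this code path. Below is a minimal sketch of the kind of guard that avoids it; the parameter name and message shape are illustrative, not the project's actual signature:

# Hypothetical sketch of a defensive default, not the project's actual code.
def chatbot(message, history, sys_prompt=None):
    messages = []
    if sys_prompt:  # only prepend a system message when one was actually set
        messages.append({"role": "system", "content": sys_prompt})
    messages.append({"role": "user", "content": message})
    return messages

The real fix in the app is simply to make sure sys_prompt is defined (or defaulted) before line 151 runs.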

NameError: name 'sys_prompt' is not defined

I got an error when launching chat-with-mlx:

You try to use a model that was created with version 2.4.0.dev0, however, your version is 2.4.0. This might cause unexpected behavior or errors. In that case, try to update to the latest version.

Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Traceback (most recent call last):
File "/Users/liqiang/.venv/lib/python3.12/site-packages/gradio/queueing.py", line 495, in call_prediction
output = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/liqiang/.venv/lib/python3.12/site-packages/gradio/route_utils.py", line 235, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/liqiang/.venv/lib/python3.12/site-packages/gradio/blocks.py", line 1627, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/liqiang/.venv/lib/python3.12/site-packages/gradio/blocks.py", line 1185, in call_function
prediction = await utils.async_iteration(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/liqiang/.venv/lib/python3.12/site-packages/gradio/utils.py", line 514, in async_iteration
return await iterator.anext()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/liqiang/.venv/lib/python3.12/site-packages/gradio/utils.py", line 640, in asyncgen_wrapper
response = await iterator.anext()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/liqiang/.venv/lib/python3.12/site-packages/gradio/chat_interface.py", line 490, in _stream_fn
first_response = await async_iteration(generator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/liqiang/.venv/lib/python3.12/site-packages/gradio/utils.py", line 514, in async_iteration
return await iterator.anext()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/liqiang/.venv/lib/python3.12/site-packages/gradio/utils.py", line 507, in anext
return await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/liqiang/.venv/lib/python3.12/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/liqiang/.venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/Users/liqiang/.venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 851, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/liqiang/.venv/lib/python3.12/site-packages/gradio/utils.py", line 490, in run_sync_iterator_async
return next(iterator)
^^^^^^^^^^^^^^
File "/Users/liqiang/Projects/llm/chat-with-mlx/chat_with_mlx/app.py", line 151, in chatbot
if sys_prompt is not None:
^^^^^^^^^^
NameError: name 'sys_prompt' is not defined. Did you mean: 'get_prompt'?

Q: Upload to PyPI via GitHub Actions

I understand you currently upload manually, but you could just create tags to publish to PyPI automatically; a sketch of such a workflow is below. If you need help, please let me know. Also, could you add me as a co-maintainer? I would like to help as well.
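A tag-triggered publish workflow along these lines could look roughly like the sketch below; it assumes a PyPI API token stored as a PYPI_API_TOKEN repository secret, and the file path and Python version are just examples:

# .github/workflows/publish.yml -- sketch of a tag-triggered PyPI release
name: Publish to PyPI
on:
  push:
    tags:
      - "v*"
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: python -m pip install build && python -m build
      - uses: pypa/gh-action-pypi-publish@release/v1
        with:
          password: ${{ secrets.PYPI_API_TOKEN }}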

Failed after downloading one model

It seems something on my Mac is blocking writes to a certain path. What should I do next?

Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
config.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.37k/2.37k [00:00<00:00, 8.51MB/s]
README.md: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.16k/1.16k [00:00<00:00, 4.41MB/s]
added_tokens.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 51.0/51.0 [00:00<00:00, 187kB/s]
special_tokens_map.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 557/557 [00:00<00:00, 1.89MB/s]
model.safetensors.index.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 52.3k/52.3k [00:00<00:00, 730kB/s]
.gitattributes: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.52k/1.52k [00:00<00:00, 7.67MB/s]
tokenizer_config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.60k/1.60k [00:00<00:00, 4.53MB/s]
tokenizer.model: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 493k/493k [00:00<00:00, 3.40MB/s]
tokenizer.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.80M/1.80M [00:00<00:00, 3.60MB/s]
model.safetensors: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.26G/4.26G [02:53<00:00, 24.6MB/s]
Fetching 10 files: 40%|██████████████████████████████████████████████ | 4/10 [02:54<04:22, 43.73s/it]
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/gradio/queueing.py", line 495, in call_prediction
    output = await route_utils.call_process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/gradio/route_utils.py", line 233, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/gradio/blocks.py", line 1608, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/gradio/blocks.py", line 1176, in call_function
    prediction = await anyio.to_thread.run_sync(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/gradio/utils.py", line 689, in wrapper
    response = f(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^
  File "/Volumes/Thunder_1T/chat-with-mlx/chat_with_mlx/app.py", line 40, in load_model
    snapshot_download(repo_id=mlx_config[model_name], local_dir=local_model_dir)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/huggingface_hub/_snapshot_download.py", line 308, in snapshot_download
    thread_map(
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/tqdm/contrib/concurrent.py", line 69, in thread_map
    return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/tqdm/contrib/concurrent.py", line 51, in _executor_map
    return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/tqdm/std.py", line 1181, in __iter__
    for obj in iterable:
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 619, in result_iterator
    yield _result_or_cancel(fs.pop())
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 317, in _result_or_cancel
    return fut.result(timeout)
           ^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 456, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/huggingface_hub/_snapshot_download.py", line 283, in _inner_hf_hub_download
    return hf_hub_download(
           ^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/huggingface_hub/file_download.py", line 1481, in hf_hub_download
    _create_symlink(blob_path, local_dir_filepath, new_blob=False)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/huggingface_hub/file_download.py", line 901, in _create_symlink
    _support_symlinks = are_symlinks_supported(commonpath)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/huggingface_hub/file_download.py", line 117, in are_symlinks_supported
    with SoftTemporaryDirectory(dir=cache_dir) as tmpdir:
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 137, in __enter__
    return next(self.gen)
           ^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/huggingface_hub/utils/_fixes.py", line 54, in SoftTemporaryDirectory
    tmpdir = tempfile.TemporaryDirectory(prefix=prefix, suffix=suffix, dir=dir, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/tempfile.py", line 866, in __init__
    self.name = mkdtemp(suffix, prefix, dir)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/tempfile.py", line 368, in mkdtemp
    _os.mkdir(file, 0o700)
OSError: [Errno 30] Read-only file system: '/tmpxb88izet'
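The OSError at the bottom is the key detail: huggingface_hub probes symlink support by creating a temporary directory, and tempfile resolved to an unwritable path ('/tmpxb88izet'). A plausible workaround, assuming a broken TMPDIR in the launch environment, is to point TMPDIR at a writable directory before starting the app:

# Sketch: force a writable temp directory before huggingface_hub runs its
# symlink-support probe. Python's tempfile module honors TMPDIR.
import os
import tempfile

tmp = os.path.expanduser("~/tmp")
os.makedirs(tmp, exist_ok=True)
os.environ["TMPDIR"] = tmp
tempfile.tempdir = None  # force tempfile to re-read TMPDIR

print(tempfile.gettempdir())  # should now print the writable directory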

raise APIConnectionError(request=request) from err

  File "/opt/homebrew/Caskroom/miniforge/base/envs/mlx/lib/python3.11/site-packages/openai/_base_client.py", line 952, in _request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.

download fail

I started downloading a model, but the download failed due to a broken connection. What should I do?

I closed the web page and restarted from the command line, then loaded the model again; it shows the model as loaded, but I am sure the download had not completed yet.

So I have to delete all the files and start over.

Please do not use symlinks; they are annoying.
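If a download breaks partway, re-running the download into the same local directory generally skips files that are already complete and re-fetches the rest, so deleting everything should not be necessary. A sketch, with placeholder repo id and path (local_dir_use_symlinks=False materializes real files instead of the symlinks complained about above; the parameter existed in huggingface_hub at the time of writing but has since been deprecated):

# Sketch: resume an interrupted snapshot into the same folder.
# Repo id and path are placeholders, not what chat-with-mlx uses internally.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="mlx-community/quantized-gemma-2b-it",  # example repo
    local_dir="models/download/quantized-gemma-2b-it",
    local_dir_use_symlinks=False,  # real files instead of symlinks
)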

Cannot launch chat-with-mlx

I cannot launch chat-with-mlx. Can anyone help?

I have finished the manual pip installation

and saved the model files to the directory quantized-gemma-2b-it.

(screenshots attached: 2024-03-04 15:58)

Updated layout. Hi, I added a nice new layout that you might like.

Amazing work on this project! I really love the interface. Below is the style I used:

from __future__ import annotations

from typing import Iterable

from gradio.themes.base import Base
from gradio.themes.utils import colors, fonts, sizes


class GusStyle(Base):
    def __init__(
        self,
        *,
        primary_hue: colors.Color | str = colors.sky,
        secondary_hue: colors.Color | str = colors.blue,
        neutral_hue: colors.Color | str = colors.gray,
        spacing_size: sizes.Size | str = sizes.spacing_md,
        radius_size: sizes.Size | str = sizes.radius_md,
        text_size: sizes.Size | str = sizes.text_lg,
        font: fonts.Font
        | str
        | Iterable[fonts.Font | str] = (
            fonts.GoogleFont("Quicksand"),
            "ui-sans-serif",
            "sans-serif",
        ),
        font_mono: fonts.Font
        | str
        | Iterable[fonts.Font | str] = (
            fonts.GoogleFont("IBM Plex Mono"),
            "ui-monospace",
            "monospace",
        ),
    ):
        super().__init__(
            primary_hue=primary_hue,
            secondary_hue=secondary_hue,
            neutral_hue=neutral_hue,
            spacing_size=spacing_size,
            radius_size=radius_size,
            text_size=text_size,
            font=font,
            font_mono=font_mono,
        )

Here is the shifted arrangement. I feel like this gives the user more room to see chat history.

with gr.Blocks(fill_height=True, theme=GusStyle()) as demo:
    with gr.Row():
        with gr.Column(scale=2):
            temp_slider = gr.State(0.2)
            max_gen_token = gr.State(512)
            freq_penalty = gr.State(1.05)
            retrieve_docs = gr.State(3)
            language = gr.State("default")
            gr.ChatInterface(
                chatbot=gr.Chatbot(height=800, render=False),
                fn=chatbot,  # Function to call on user input
                title="🍎 MLX Chat",  # Title of the web page
                retry_btn='Retry',
                undo_btn='Undo',
                clear_btn='Clear',
                additional_inputs=[temp_slider, max_gen_token, freq_penalty, retrieve_docs],
            )
        with gr.Column(scale=1):
            ## SELECT MODEL
            model_name = gr.Dropdown(
                label="Select Model",
                info="Select your model",
                choices=sorted(model_list),
                interactive=True,
                render=False,
            )
            model_name.render()
            language = gr.Dropdown(
                label="Language",
                choices=sorted(SUPPORTED_LANG),
                info="Chose Supported Language",
                value="default",
                interactive=True,
            )
            btn1 = gr.Button("Load Model", variant="primary")
            btn3 = gr.Button("Unload Model", variant="stop")

            # FILE
            mode = gr.Dropdown(
                label="Dataset",
                info="Choose your dataset type",
                choices=["Files (docx, pdf, txt)", "YouTube (url)"],
                scale=5,
            )
            url = gr.Textbox(
                label="URL",
                info="Enter your filepath (URL for Youtube)",
                interactive=True,
            )
            upload_button = gr.UploadButton(
                label="Upload File", variant="primary"
            )
            # MODEL STATUS
            # data = gr.Textbox(visible=lambda mode: mode == 'YouTube')
            model_status = gr.Textbox("Model Not Loaded", label="Model Status")
            index_status = gr.Textbox("Not Index", label="Index Status")
            btn1.click(
                load_model,
                inputs=[model_name, language],
                outputs=[model_status],
            )
            btn3.click(kill_process, outputs=[model_status])
            upload_button.upload(
                upload, inputs=upload_button, outputs=[url, index_status]
            )

            index_button = gr.Button("Start Indexing", variant="primary")
            index_button.click(
                indexing, inputs=[mode, url], outputs=[index_status]
            )
            stop_index_button = gr.Button("Stop Indexing")
            stop_index_button.click(kill_index, outputs=[index_status])


    with gr.Accordion("Advanced Setting", open=False):
        with gr.Row():
            with gr.Column(scale=1):
                temp_slider = gr.Slider(
                    label="Temperature",
                    value=0.2,
                    minimum=0.0,
                    maximum=1.0,
                    step=0.05,
                    interactive=True,
                )
                max_gen_token = gr.Slider(
                    label="Max Tokens",
                    value=512,
                    minimum=512,
                    maximum=4096,
                    step=256,
                    interactive=True,
                )
            with gr.Column(scale=1):
                freq_penalty = gr.Slider(
                    label="Frequency Penalty",
                    value=1.05,
                    minimum=-2,
                    maximum=2,
                    step=0.05,
                    interactive=True,
                )
                retrieve_docs = gr.Slider(
                    label="No. Retrieval Docs",
                    value=3,
                    minimum=1,
                    maximum=10,
                    step=1,
                    interactive=True,
                )
(screenshot attached: 2024-05-05 at 10:16 PM)

The GUI isn't very intuitive?

I mean, how do I use it? Select and load the model -> upload a file -> then what?

After I selected the model and clicked Load, there's a processing number running, but I have no idea what it means.
(screenshot attached)

Issue with ForwardRef._evaluate()

Dear Developer,

I encountered an issue when trying to run the chat-with-mlx project. When I open the app in the terminal, I get the following error message:

ForwardRef._evaluate() missing 1 required keyword-only argument: 'recursive_guard'

Could you please provide guidance on how to resolve this issue?

Thank you for your assistance.

Best regards

Time out

You try to use a model that was created with version 2.4.0.dev0, however, your version is 2.4.0. This might cause unexpected behavior or errors. In that case, try to update to the latest version.

/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/torch/cuda/__init__.py:141: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 11040). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)
  return torch._C._cuda_getDeviceCount() > 0
Traceback (most recent call last):
  File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
    yield
  File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/httpx/_transports/default.py", line 113, in __iter__
    for part in self._httpcore_stream:
  File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 367, in __iter__
    raise exc from None
  File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 363, in __iter__
    for part in self._stream:
  File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/httpcore/_sync/http11.py", line 349, in __iter__
    raise exc
  File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/httpcore/_sync/http11.py", line 341, in __iter__
    for chunk in self._connection._receive_response_body(**kwargs):
  File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/httpcore/_sync/http11.py", line 210, in _receive_response_body
    event = self._receive_event(timeout=timeout)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/httpcore/_sync/http11.py", line 224, in _receive_event
    data = self._network_stream.read(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/httpcore/_backends/sync.py", line 124, in read
    with map_exceptions(exc_map):
  File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/contextlib.py", line 158, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ReadTimeout: timed out

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/queueing.py", line 495, in call_prediction
output = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/route_utils.py", line 233, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/blocks.py", line 1608, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/blocks.py", line 1188, in call_function
prediction = await utils.async_iteration(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 513, in async_iteration
return await iterator.anext()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 639, in asyncgen_wrapper
response = await iterator.anext()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/chat_interface.py", line 487, in _stream_fn
first_response = await async_iteration(generator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 513, in async_iteration
return await iterator.anext()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 506, in anext
return await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 851, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 489, in run_sync_iterator_async
return next(iterator)
^^^^^^^^^^^^^^
File "/workspace/user/projects/chat-with-mlx/app.py", line 173, in chatbot
for chunk in response:
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_streaming.py", line 44, in iter
for item in self._iterator:
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_streaming.py", line 56, in stream
for sse in iterator:
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_streaming.py", line 48, in _iter_events
yield from self._decoder.iter(self.response.iter_lines())
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_streaming.py", line 224, in iter
for line in iterator:
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/httpx/_models.py", line 861, in iter_lines
for text in self.iter_text():
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/httpx/_models.py", line 848, in iter_text
for byte_content in self.iter_bytes():
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/httpx/_models.py", line 829, in iter_bytes
for raw_bytes in self.iter_raw():
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/httpx/_models.py", line 883, in iter_raw
for raw_stream_bytes in self.stream:
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/httpx/_client.py", line 126, in iter
for chunk in self._stream:
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/httpx/_transports/default.py", line 112, in iter
with map_httpcore_exceptions():
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/contextlib.py", line 158, in exit
self.gen.throw(typ, value, traceback)
File "/workspace/user/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ReadTimeout: timed out
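Since the request dies during the streaming read rather than on connect, one mitigation is to give the client a longer read timeout when it is constructed. The openai v1 client accepts a timeout argument; the base URL and port below are placeholders:

# Sketch: generous read timeout for slow local generation.
# Base URL/port are placeholders; a local server ignores the api_key,
# but the client requires one.
import httpx
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:8080/v1",  # hypothetical local endpoint
    api_key="not-needed",
    timeout=httpx.Timeout(600.0, connect=10.0),
)

(The CUDA warning at the top is likely unrelated noise from torch; chat-with-mlx itself targets Apple silicon.)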

Trouble loading a custom model

Hi.

I want to load a custom model.

The YAML configuration is:

original_repo: ilsp-Meltemi-7B-Instruct-v1-4bit # The original HuggingFace Repo, this helps with displaying
mlx-repo: mlx-community/ilsp-Meltemi-7B-Instruct-v1-4bit # The MLX models Repo, most are available through mlx-community
quantize: 4bit # Optional: [4bit, 8bit]
default_language: multi # Optional: [en, es, zh, vi, multi]

Error log:

Starting MLX Chat on port 7860
Sharing: False
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Traceback (most recent call last):
File "/Users/ktsi/Documents/Development/PyCode/chat-with-mlx/.venv/lib/python3.9/site-packages/gradio/queueing.py", line 501, in call_prediction
output = await route_utils.call_process_api(
File "/Users/ktsi/Documents/Development/PyCode/chat-with-mlx/.venv/lib/python3.9/site-packages/gradio/route_utils.py", line 258, in call_process_api
output = await app.get_blocks().process_api(
File "/Users/ktsi/Documents/Development/PyCode/chat-with-mlx/.venv/lib/python3.9/site-packages/gradio/blocks.py", line 1710, in process_api
result = await self.call_function(
File "/Users/ktsi/Documents/Development/PyCode/chat-with-mlx/.venv/lib/python3.9/site-packages/gradio/blocks.py", line 1250, in call_function
prediction = await anyio.to_thread.run_sync(
File "/Users/ktsi/Documents/Development/PyCode/chat-with-mlx/.venv/lib/python3.9/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/Users/ktsi/Documents/Development/PyCode/chat-with-mlx/.venv/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
return await future
File "/Users/ktsi/Documents/Development/PyCode/chat-with-mlx/.venv/lib/python3.9/site-packages/anyio/_backends/asyncio.py", line 851, in run
result = context.run(func, *args)
File "/Users/ktsi/Documents/Development/PyCode/chat-with-mlx/.venv/lib/python3.9/site-packages/gradio/utils.py", line 693, in wrapper
response = f(*args, **kwargs)
File "/Users/ktsi/Documents/Development/PyCode/chat-with-mlx/chat_with_mlx/app.py", line 58, in load_model
directory_path, "models", "download", model_name_list[1]
IndexError: list index out of range

What am I doing wrong?
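A likely cause, judging from the IndexError at model_name_list[1]: the loader appears to split a name on "/" and expects an owner/repo pair, while the original_repo value above contains no "/". If that reading is right, using the full HuggingFace id should avoid the crash, e.g.:

original_repo: ilsp/Meltemi-7B-Instruct-v1 # full owner/name, so splitting on "/" yields two parts
mlx-repo: mlx-community/ilsp-Meltemi-7B-Instruct-v1-4bit
quantize: 4bit
default_language: multi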

Error: Could not find a version that satisfies the requirement mlx>=0

I followed the installation instructions provided in the documentation, but encountered an error.

ERROR: Could not find a version that satisfies the requirement mlx>=0.1 (from mlx-lm) (from versions: none)
ERROR: No matching distribution found for mlx>=0.1

conda 24.1.2
pip 23.3.1 (python 3.11)
Apple M1 Max
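mlx wheels are published only for Apple-silicon builds of Python, so this error on an M1 machine usually means pip is running under an x86_64 (Rosetta) interpreter. A quick check:

# If this prints "x86_64", the conda env's Python is an Intel build running
# under Rosetta and pip will find no mlx wheels; recreate the env with an
# osx-arm64 Python.
import platform

print(platform.machine())     # expect "arm64" on Apple silicon
print(platform.mac_ver()[0])  # mlx also requires a reasonably recent macOS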

with model mistralai/Mixtral-8x7B-Instruct-v0.1- (🌍, 4bit) errors

Traceback (most recent call last):
File "/Users/liwei/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/chat_interface.py", line 490, in _stream_fn
first_response = await async_iteration(generator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/liwei/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 514, in async_iteration
return await iterator.anext()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/liwei/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 507, in anext
return await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/liwei/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/liwei/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/Users/liwei/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 851, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/liwei/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 493, in run_sync_iterator_async
raise StopAsyncIteration() from None
StopAsyncIteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/Users/liwei/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/queueing.py", line 495, in call_prediction
output = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/liwei/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/route_utils.py", line 235, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/liwei/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/blocks.py", line 1627, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/liwei/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/blocks.py", line 1185, in call_function
prediction = await utils.async_iteration(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/liwei/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 514, in async_iteration
return await iterator.anext()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/liwei/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 640, in asyncgen_wrapper
response = await iterator.anext()
^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: async generator raised StopAsyncIteration

openai.NotFoundError - For mlx-community/Phi-3-mini-128k-instruct-4bit

I am trying to add a custom model: microsoft/Phi-3-mini-128k-instruct. After it loads successfully, when I try to use the model in the chat, I get the following error. I am wondering why the chat UI is connecting to OpenAI.

Traceback (most recent call last):
File "/Users/arjun/LLM/MLX-LM/chat-with-mlx/.venv_mlx/lib/python3.12/site-packages/gradio/queueing.py", line 527, in process_events
response = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/arjun/LLM/MLX-LM/chat-with-mlx/.venv_mlx/lib/python3.12/site-packages/gradio/route_utils.py", line 261, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/arjun/LLM/MLX-LM/chat-with-mlx/.venv_mlx/lib/python3.12/site-packages/gradio/blocks.py", line 1786, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/arjun/LLM/MLX-LM/chat-with-mlx/.venv_mlx/lib/python3.12/site-packages/gradio/blocks.py", line 1350, in call_function
prediction = await utils.async_iteration(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/arjun/LLM/MLX-LM/chat-with-mlx/.venv_mlx/lib/python3.12/site-packages/gradio/utils.py", line 583, in async_iteration
return await iterator.anext()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/arjun/LLM/MLX-LM/chat-with-mlx/.venv_mlx/lib/python3.12/site-packages/gradio/utils.py", line 709, in asyncgen_wrapper
response = await iterator.anext()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/arjun/LLM/MLX-LM/chat-with-mlx/.venv_mlx/lib/python3.12/site-packages/gradio/chat_interface.py", line 545, in _stream_fn
first_response = await async_iteration(generator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/arjun/LLM/MLX-LM/chat-with-mlx/.venv_mlx/lib/python3.12/site-packages/gradio/utils.py", line 583, in async_iteration
return await iterator.anext()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/arjun/LLM/MLX-LM/chat-with-mlx/.venv_mlx/lib/python3.12/site-packages/gradio/utils.py", line 576, in anext
return await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/arjun/LLM/MLX-LM/chat-with-mlx/.venv_mlx/lib/python3.12/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/arjun/LLM/MLX-LM/chat-with-mlx/.venv_mlx/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/Users/arjun/LLM/MLX-LM/chat-with-mlx/.venv_mlx/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 851, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/arjun/LLM/MLX-LM/chat-with-mlx/.venv_mlx/lib/python3.12/site-packages/gradio/utils.py", line 559, in run_sync_iterator_async
return next(iterator)
^^^^^^^^^^^^^^
File "/Users/arjun/LLM/MLX-LM/chat-with-mlx/chat_with_mlx/app.py", line 203, in chatbot
response = client.chat.completions.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/arjun/LLM/MLX-LM/chat-with-mlx/.venv_mlx/lib/python3.12/site-packages/openai/_utils/_utils.py", line 277, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/arjun/LLM/MLX-LM/chat-with-mlx/.venv_mlx/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 581, in create
return self._post(
^^^^^^^^^^^
File "/Users/arjun/LLM/MLX-LM/chat-with-mlx/.venv_mlx/lib/python3.12/site-packages/openai/_base_client.py", line 1232, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/arjun/LLM/MLX-LM/chat-with-mlx/.venv_mlx/lib/python3.12/site-packages/openai/_base_client.py", line 921, in request
return self._request(
^^^^^^^^^^^^^^
File "/Users/arjun/LLM/MLX-LM/chat-with-mlx/.venv_mlx/lib/python3.12/site-packages/openai/_base_client.py", line 1012, in _request
raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError:
<TITLE>Web Site does not exist</TITLE>
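On the question of why the UI seems to connect to OpenAI: the openai package here is only used as a client for the local MLX server, which speaks the same HTTP API; nothing should leave the machine when the base URL points at localhost. A NotFoundError whose body is an HTML "Web Site does not exist" page suggests the client's requests are being answered by something other than the local server (a proxy, for example). A sketch of what the wiring conventionally looks like (the port is a placeholder):

# Sketch: an OpenAI-style client aimed at a local server, not api.openai.com.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:8080/v1",  # local MLX server (placeholder port)
    api_key="not-needed",                 # ignored by a local server
)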

localhost not accessible in gradio

Traceback (most recent call last):
  File "/opt/anaconda3/envs/mlx-chat/bin/chat-with-mlx", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/Users/tangmin/chat-with-mlx/chat_with_mlx/app.py", line 239, in main
    demo.launch(inbrowser=True)
  File "/opt/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/blocks.py", line 2064, in launch
    raise ValueError(
ValueError: When localhost is not accessible, a shareable link must be created. Please set share=True or check your proxy settings to allow access to localhost.
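Gradio raises this when a system-wide proxy intercepts its startup request to 127.0.0.1. A common workaround, assuming a proxy is configured in the shell environment, is to exempt localhost before launching:

# Sketch: exempt localhost from any configured HTTP(S) proxy so Gradio's
# startup self-check can reach its own server.
import os

os.environ["no_proxy"] = "localhost,127.0.0.1,::1"
os.environ["NO_PROXY"] = os.environ["no_proxy"]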

Failed to launch chat-with-mlx, requests.exceptions.SSLError

No sentence-transformers model found with name nomic-ai/nomic-embed-text-v1.5. Creating a new one with MEAN pooling.
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py", line 467, in _make_request
self._validate_conn(conn)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py", line 1099, in _validate_conn
conn.connect()
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connection.py", line 653, in connect
sock_and_verified = _ssl_wrap_socket_and_match_hostname(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connection.py", line 806, in ssl_wrap_socket_and_match_hostname
ssl_sock = ssl_wrap_socket(
^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/util/ssl
.py", line 465, in ssl_wrap_socket
ssl_sock = ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/util/ssl
.py", line 509, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 517, in wrap_socket
return self.sslsocket_class._create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 1104, in _create
self.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 1382, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py", line 793, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py", line 491, in _make_request
raise new_e
urllib3.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py", line 847, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /nomic-ai/nomic-embed-text-v1.5/resolve/main/config.json (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/bin/chat-with-mlx", line 5, in
from chat_with_mlx.app import main
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/chat_with_mlx/app.py", line 26, in
emb = HuggingFaceEmbeddings(model_name='nomic-ai/nomic-embed-text-v1.5', model_kwargs={'trust_remote_code':True})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain_community/embeddings/huggingface.py", line 67, in init
self.client = sentence_transformers.SentenceTransformer(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/sentence_transformers/SentenceTransformer.py", line 198, in init
modules = self._load_auto_model(
^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/sentence_transformers/SentenceTransformer.py", line 1063, in _load_auto_model
transformer_model = Transformer(
^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/sentence_transformers/models/Transformer.py", line 35, in init
config = AutoConfig.from_pretrained(model_name_or_path, **model_args, cache_dir=cache_dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 1111, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/transformers/configuration_utils.py", line 633, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/transformers/configuration_utils.py", line 688, in _get_config_dict
resolved_config_file = cached_file(
^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/transformers/utils/hub.py", line 398, in cached_file
resolved_file = hf_hub_download(
^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1261, in hf_hub_download
metadata = get_hf_file_metadata(
^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1667, in get_hf_file_metadata
r = _request_wrapper(
^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 385, in _request_wrapper
response = _request_wrapper(
^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 408, in _request_wrapper
response = get_session().request(method=method, url=url, **params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 67, in send
return super().send(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/adapters.py", line 517, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /nomic-ai/nomic-embed-text-v1.5/resolve/main/config.json (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)')))"), '(Request ID: 641d4e89-0e17-470d-9f14-2edb0f158e6d)')
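
For anyone else hitting this: the failure is in fetching nomic-ai/nomic-embed-text-v1.5 from huggingface.co, and CERTIFICATE_VERIFY_FAILED on macOS usually means the Python build cannot find the system root certificates. A minimal workaround sketch, assuming certifi is installed; treat it as a sketch, not the project's fix:

```python
# Hedged workaround sketch: point Python's SSL stack at certifi's CA bundle
# so urllib3/requests (used by huggingface_hub) can verify huggingface.co.
# Run this before importing chat_with_mlx, or export the variables in your shell.
import os

import certifi

os.environ["SSL_CERT_FILE"] = certifi.where()       # read by Python's ssl module
os.environ["REQUESTS_CA_BUNDLE"] = certifi.where()  # read by requests/urllib3
```

On python.org installs, running the bundled Install Certificates.command in the Python 3.11 application folder is the usual permanent fix.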

openai.InternalServerError: Error code: 502

I have already loaded the model (see screenshot).

Traceback (most recent call last):
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/queueing.py", line 495, in call_prediction
output = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/route_utils.py", line 235, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/blocks.py", line 1627, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/blocks.py", line 1185, in call_function
prediction = await utils.async_iteration(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 514, in async_iteration
return await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 640, in asyncgen_wrapper
response = await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/chat_interface.py", line 490, in _stream_fn
first_response = await async_iteration(generator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 514, in async_iteration
return await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 507, in __anext__
return await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 851, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 490, in run_sync_iterator_async
return next(iterator)
^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/chat_with_mlx/app.py", line 166, in chatbot
response = client.chat.completions.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_utils/_utils.py", line 275, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 663, in create
return self._post(
^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_base_client.py", line 1200, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_base_client.py", line 889, in request
return self._request(
^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_base_client.py", line 965, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_base_client.py", line 1013, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_base_client.py", line 965, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_base_client.py", line 1013, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_base_client.py", line 980, in _request
raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Error code: 502

How to run?

Based on the README.md file:

Start the app: chat-with-mlx

But chat-with-mlx is a folder. How do I run it?
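
Not an official answer, but the tracebacks above show the installed console script is a thin wrapper around `from chat_with_mlx.app import main`. A minimal fallback sketch, assuming `main()` takes no required arguments:

```python
# Hedged fallback: launch the app from Python if the `chat-with-mlx`
# console script is not on your PATH (e.g. the package was installed
# into a different environment). Assumes main() needs no arguments.
from chat_with_mlx.app import main

main()
```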

openai.InternalServerError: Error code: 503

chat-with-mlx (mlx-chat) 16:45:31

Starting MLX Chat on port 7860
Sharing: False
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Fetching 6 files: 100%|██████████████████████████████████████████████████████████| 6/6 [00:00<00:00, 6.34it/s]
Traceback (most recent call last):
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/queueing.py", line 527, in process_events
response = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/route_utils.py", line 261, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/blocks.py", line 1786, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/blocks.py", line 1350, in call_function
prediction = await utils.async_iteration(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 583, in async_iteration
return await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 709, in asyncgen_wrapper
response = await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/chat_interface.py", line 545, in _stream_fn
first_response = await async_iteration(generator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 583, in async_iteration
return await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 576, in __anext__
return await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 851, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/gradio/utils.py", line 559, in run_sync_iterator_async
return next(iterator)
^^^^^^^^^^^^^^
File "/Users/qian/projects/github/chat-with-mlx/chat_with_mlx/app.py", line 203, in chatbot
response = client.chat.completions.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_utils/_utils.py", line 275, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 667, in create
return self._post(
^^^^^^^^^^^
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_base_client.py", line 1213, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_base_client.py", line 902, in request
return self._request(
^^^^^^^^^^^^^^
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_base_client.py", line 978, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_base_client.py", line 1026, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_base_client.py", line 978, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_base_client.py", line 1026, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/Users/qian/miniconda3/envs/mlx-chat/lib/python3.11/site-packages/openai/_base_client.py", line 993, in _request
raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Error code: 503

The model has loaded, but I get this error whenever I ask a question. Changing the model gives the same error. I installed the manual way, using the main branch.

feature request

I set up an API server in LM Studio and hope to link its API with chat-with-mlx.

Also, is it possible to chat with multiple PDFs at the same time?
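
Not a maintainer reply, but since the app already speaks the OpenAI API (the tracebacks all go through `client.chat.completions.create`), pointing an OpenAI client at LM Studio's local server is straightforward. A sketch, assuming LM Studio's default port 1234; the model name is a placeholder:

```python
# Hedged sketch: talk to LM Studio's OpenAI-compatible server directly.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")  # key is unused locally

response = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio serves whichever model is loaded
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```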

do not create symlink

Please download the model files directly instead of creating symlinks.

If I clean the files in .cache, the app stops functioning properly and will not re-download the files.

I want to back up the model files, so please do not create symlinks. Thanks.
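
Until the download behavior changes, one possible workaround is to fetch the weights yourself with huggingface_hub, which can write real files instead of .cache symlinks; the repo id and target folder below are examples only:

```python
# Hedged workaround sketch: download real, backup-friendly files instead of
# symlinks into ~/.cache. local_dir_use_symlinks=False is a huggingface_hub
# option; the target path mirrors the app's models/download layout (assumed).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="mlx-community/Mistral-7B-Instruct-v0.2-4bit",  # example model
    local_dir="chat_with_mlx/models/download/Mistral-7B-Instruct-v0.2-4bit",
    local_dir_use_symlinks=False,
)
```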

Method Not Allowed

Loaded a model and tried a test message, but any message gets an error...

--
raise self._make_status_error_from_response(err.response) from None
openai.APIStatusError: Error code: 405 - {'detail': 'Method Not Allowed'}
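
A 405 means the request reached a server, but that server does not accept POSTs at that path, which often indicates something other than the MLX server is listening on the port. A quick probe sketch; the host/port are assumptions, so match whatever the app logs (e.g. 127.0.0.1:8080):

```python
# Hedged probe: an OpenAI-compatible server should answer GET /v1/models.
# A 404/405 or an HTML response here suggests the wrong process owns the port.
import requests

r = requests.get("http://127.0.0.1:8080/v1/models", timeout=5)
print(r.status_code, r.text[:200])
```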

Cannot run local MLX model not in mlx-community?

  1. download a Hugging Face LLM model (Qwen2-7B-Instruct)
  2. convert it to an MLX model (Qwen2-7B-Instruct => Qwen2-7B-Instruct-MLX)
  3. copy the MLX model to **/chat_with_mlx/models/download/Qwen2-7B-Instruct-MLX
  4. add a .yaml config under /chat_with_mlx/models/configs/

If a model (like Qwen2-7B-Instruct-MLX) does not exist in mlx-community, is it impossible to run even after modifying the yaml?
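
For reference, step 2 can be done with mlx-lm's converter; a sketch below, with the function name and arguments taken from mlx-lm's documented API (treat the exact signature as an assumption if your version differs):

```python
# Hedged sketch of the HF -> MLX conversion step with quantization.
from mlx_lm import convert

convert(
    hf_path="Qwen/Qwen2-7B-Instruct",
    mlx_path="Qwen2-7B-Instruct-MLX",
    quantize=True,  # 4-bit groups by default in mlx-lm's converter
)
```

Whether chat-with-mlx then picks the model up from models/download depends on how its yaml config resolves repos, so the question above still stands.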

chat-with-mlx works with Gemma-2b-it here, but error for MoE 8*7B

The full error is below:

taozhiyu@TAOZHIYUdeMBP chat-with-mlx % chat-with-mlx

Starting MLX Chat on port 7860
Sharing: False
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
yield
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpx/_transports/default.py", line 233, in handle_request
resp = self._pool.handle_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
raise exc from None
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
response = connection.handle_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/_sync/connection.py", line 99, in handle_request
raise exc
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/_sync/connection.py", line 76, in handle_request
stream = self._connect(request)
^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/_sync/connection.py", line 122, in _connect
stream = self._network_backend.connect_tcp(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/_backends/sync.py", line 205, in connect_tcp
with map_exceptions(exc_map):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/contextlib.py", line 158, in exit
self.gen.throw(typ, value, traceback)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.ConnectError: [Errno 61] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/_base_client.py", line 918, in _request
response = self._client.send(
^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpx/_client.py", line 914, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpx/_client.py", line 942, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpx/_client.py", line 979, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpx/_client.py", line 1015, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpx/_transports/default.py", line 232, in handle_request
with map_httpcore_exceptions():
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/contextlib.py", line 158, in exit
self.gen.throw(typ, value, traceback)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ConnectError: [Errno 61] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/queueing.py", line 501, in call_prediction
output = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/route_utils.py", line 253, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/blocks.py", line 1695, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/blocks.py", line 1247, in call_function
prediction = await utils.async_iteration(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/utils.py", line 516, in async_iteration
return await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/utils.py", line 642, in asyncgen_wrapper
response = await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/chat_interface.py", line 493, in _stream_fn
first_response = await async_iteration(generator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/utils.py", line 516, in async_iteration
return await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/utils.py", line 509, in __anext__
return await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 851, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/utils.py", line 492, in run_sync_iterator_async
return next(iterator)
^^^^^^^^^^^^^^
File "/Users/taozhiyu/Downloads/chat-with-mlx/chat_with_mlx/app.py", line 203, in chatbot
response = client.chat.completions.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/_utils/_utils.py", line 275, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 663, in create
return self._post(
^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/_base_client.py", line 1200, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/_base_client.py", line 889, in request
return self._request(
^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/_base_client.py", line 942, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/_base_client.py", line 1013, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/_base_client.py", line 942, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/_base_client.py", line 1013, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/_base_client.py", line 952, in _request
raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.

Screenshot 2024-03-14 10 28 06
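
For what it's worth, [Errno 61] Connection refused means nothing is listening on the local server port at all, which fits an 8x7B MoE being killed for memory before the server came up. A minimal probe sketch, assuming the 127.0.0.1:8080 address that the app logs elsewhere ("Starting httpd at 127.0.0.1 on port 8080..."):

```python
# Hedged probe: check whether the local MLX server socket accepts connections
# before sending chat requests; False here matches the Connection refused above.
import socket

def server_is_up(host: str = "127.0.0.1", port: int = 8080, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(server_is_up())
```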

I ran it in GitHub Codespaces, and there is an error shown below:

Traceback (most recent call last):
File "/home/codespace/.local/lib/python3.10/site-packages/httpcore/_exceptions.py", line 10, in map_exceptions██▉| 4.26G/4.26G [00:56<00:00, 77.3MB/s]
yield
File "/home/codespace/.local/lib/python3.10/site-packages/httpcore/_backends/sync.py", line 206, in connect_tcp
sock = socket.create_connection(
File "/usr/local/python/3.10.13/lib/python3.10/socket.py", line 845, in create_connection
raise err
File "/usr/local/python/3.10.13/lib/python3.10/socket.py", line 833, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/codespace/.local/lib/python3.10/site-packages/httpx/_transports/default.py", line 67, in map_httpcore_exceptions
yield
File "/home/codespace/.local/lib/python3.10/site-packages/httpx/_transports/default.py", line 231, in handle_request
resp = self._pool.handle_request(req)
File "/home/codespace/.local/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 268, in handle_request
raise exc
File "/home/codespace/.local/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 251, in handle_request
response = connection.handle_request(request)
File "/home/codespace/.local/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 99, in handle_request
raise exc
File "/home/codespace/.local/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 76, in handle_request
stream = self._connect(request)
File "/home/codespace/.local/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 124, in _connect
stream = self._network_backend.connect_tcp(**kwargs)
File "/home/codespace/.local/lib/python3.10/site-packages/httpcore/_backends/sync.py", line 205, in connect_tcp
with map_exceptions(exc_map):
File "/usr/local/python/3.10.13/lib/python3.10/contextlib.py", line 153, in exit
self.gen.throw(typ, value, traceback)
File "/home/codespace/.local/lib/python3.10/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.ConnectError: [Errno 111] Connection refused

httpx.ReadError: [Errno 54] Connection reset by peer

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.11/site-packages/httpx/_transports/default.py", line 67, in map_httpcore_exceptions
yield
File "/opt/anaconda3/lib/python3.11/site-packages/httpx/_transports/default.py", line 111, in iter
for part in self._httpcore_stream:
File "/opt/anaconda3/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 361, in iter
for part in self._stream:
File "/opt/anaconda3/lib/python3.11/site-packages/httpcore/_sync/http11.py", line 337, in iter
raise exc
File "/opt/anaconda3/lib/python3.11/site-packages/httpcore/_sync/http11.py", line 329, in iter
for chunk in self._connection._receive_response_body(**kwargs):
File "/opt/anaconda3/lib/python3.11/site-packages/httpcore/_sync/http11.py", line 198, in _receive_response_body
event = self._receive_event(timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/httpcore/_sync/http11.py", line 212, in _receive_event
data = self._network_stream.read(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/httpcore/_backends/sync.py", line 124, in read
with map_exceptions(exc_map):
File "/opt/anaconda3/lib/python3.11/contextlib.py", line 158, in exit
self.gen.throw(typ, value, traceback)
File "/opt/anaconda3/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.ReadError: [Errno 54] Connection reset by peer

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.11/site-packages/gradio/queueing.py", line 495, in call_prediction
output = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/gradio/route_utils.py", line 235, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/gradio/blocks.py", line 1627, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/gradio/blocks.py", line 1185, in call_function
prediction = await utils.async_iteration(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/gradio/utils.py", line 514, in async_iteration
return await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/gradio/utils.py", line 640, in asyncgen_wrapper
response = await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/gradio/chat_interface.py", line 490, in _stream_fn
first_response = await async_iteration(generator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/gradio/utils.py", line 514, in async_iteration
return await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/gradio/utils.py", line 507, in __anext__
return await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2134, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 851, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/gradio/utils.py", line 490, in run_sync_iterator_async
return next(iterator)
^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/chat_with_mlx/app.py", line 166, in chatbot
response = client.chat.completions.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/openai/_utils/_utils.py", line 275, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 663, in create
return self._post(
^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/openai/_base_client.py", line 1200, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/openai/_base_client.py", line 889, in request
return self._request(
^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/openai/_base_client.py", line 965, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/openai/_base_client.py", line 1013, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/openai/_base_client.py", line 965, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/openai/_base_client.py", line 1013, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/openai/_base_client.py", line 977, in _request
err.response.read()
File "/opt/anaconda3/lib/python3.11/site-packages/httpx/_models.py", line 811, in read
self._content = b"".join(self.iter_bytes())
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/httpx/_models.py", line 829, in iter_bytes
for raw_bytes in self.iter_raw():
File "/opt/anaconda3/lib/python3.11/site-packages/httpx/_models.py", line 887, in iter_raw
for raw_stream_bytes in self.stream:
File "/opt/anaconda3/lib/python3.11/site-packages/httpx/_client.py", line 124, in iter
for chunk in self._stream:
File "/opt/anaconda3/lib/python3.11/site-packages/httpx/_transports/default.py", line 110, in iter
with map_httpcore_exceptions():
File "/opt/anaconda3/lib/python3.11/contextlib.py", line 158, in exit
self.gen.throw(typ, value, traceback)
File "/opt/anaconda3/lib/python3.11/site-packages/httpx/_transports/default.py", line 84, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ReadError: [Errno 54] Connection reset by peer

error after switching models

If I load, unload, load another one, unload, load again... and retry, I get an error:

Starting httpd at 127.0.0.1 on port 8080...
Model Killed
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
yield
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpx/_transports/default.py", line 233, in handle_request
resp = self._pool.handle_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
raise exc from None
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
response = connection.handle_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/_sync/connection.py", line 99, in handle_request
raise exc
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/_sync/connection.py", line 76, in handle_request
stream = self._connect(request)
^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/_sync/connection.py", line 122, in _connect
stream = self._network_backend.connect_tcp(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/_backends/sync.py", line 205, in connect_tcp
with map_exceptions(exc_map):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/contextlib.py", line 158, in exit
self.gen.throw(typ, value, traceback)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.ConnectError: [Errno 61] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/_base_client.py", line 918, in _request
response = self._client.send(
^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpx/_client.py", line 914, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpx/_client.py", line 942, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpx/_client.py", line 979, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpx/_client.py", line 1015, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpx/_transports/default.py", line 232, in handle_request
with map_httpcore_exceptions():
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/contextlib.py", line 158, in exit
self.gen.throw(typ, value, traceback)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ConnectError: [Errno 61] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/queueing.py", line 501, in call_prediction
output = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/route_utils.py", line 253, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/blocks.py", line 1695, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/blocks.py", line 1247, in call_function
prediction = await utils.async_iteration(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/utils.py", line 516, in async_iteration
return await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/utils.py", line 642, in asyncgen_wrapper
response = await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/chat_interface.py", line 493, in _stream_fn
first_response = await async_iteration(generator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/utils.py", line 516, in async_iteration
return await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/utils.py", line 509, in __anext__
return await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 851, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gradio/utils.py", line 492, in run_sync_iterator_async
return next(iterator)
^^^^^^^^^^^^^^
File "/Users/taozhiyu/Downloads/chat-with-mlx/chat_with_mlx/app.py", line 203, in chatbot
response = client.chat.completions.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/_utils/_utils.py", line 275, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 663, in create
return self._post(
^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/_base_client.py", line 1200, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/_base_client.py", line 889, in request
return self._request(
^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/_base_client.py", line 942, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/_base_client.py", line 1013, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/_base_client.py", line 942, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/_base_client.py", line 1013, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/_base_client.py", line 952, in _request
raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.
Starting httpd at 127.0.0.1 on port 8080...
