
Comments (14)

CyberTea0X commented on May 13, 2024

@Antwa-sensei253
Had the same issue on Ubuntu 22.04.3. I manually installed llvm, cmake, gcc, and clang, then manually installed llama-cpp-python and it worked:

sudo apt install gcc cmake llvm clang
pip3 install llama-cpp-python
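
If the build still fails after that, it can help to confirm the toolchain is actually on the PATH before retrying (a quick sanity check, not part of the original fix):

gcc --version
cmake --version
clang --version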

alexgit2k commented on May 13, 2024

Had the same problem. As said in #39 (comment), llama-cpp-python has to be installed first. Got it working in the Python Docker image (docker run -it --rm python /bin/bash) with the following commands:

pip install llama-cpp-python
pip install open-interpreter
interpreter --local
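
For a reproducible setup, the same steps can be captured in a Dockerfile (a sketch based on the commands above; the python:3.11 tag is an assumption):

# Sketch: bake the two installs into an image instead of running them by hand
FROM python:3.11
RUN pip install llama-cpp-python \
 && pip install open-interpreter
ENTRYPOINT ["interpreter", "--local"]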

jordanbtucker commented on May 13, 2024

@AlvinCZ Try running the following command on your Mac.

/Applications/Python\ 3.11/Install\ Certificates.command

Source
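
To verify the certificates are now being picked up, this should print usable CA bundle paths (a quick check, assuming Python 3):

python3 -c "import ssl; print(ssl.get_default_verify_paths())"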

@Antwa-sensei253 @Cafezinho Ensure you have the proper build tools, like cmake and build-essential (or your distribution's equivalent; see the Fedora note below). For Ubuntu, run this:

sudo apt update
sudo apt install build-essential cmake
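
On RPM-based distributions the equivalent would be something like the following (an assumption for Fedora; group and package names vary by distro):

sudo dnf groupinstall "Development Tools"
sudo dnf install cmake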

KillianLucas commented on May 13, 2024

Hi @AlvinCZ! Thanks for trying this out + for the kind words about the project.

Does this error happen for all models, e.g. if you try to download the low-quality 7B vs. the medium-quality 7B? I'm also on an M2 Mac with Python 3.11, so it should work. Maybe one of the download URLs has some weird SSL certificate issue and we just need to find a new URL.

Antwa-sensei253 commented on May 13, 2024

I'm having the same issue on Linux. This is my log:

interpreter --local

Open Interpreter will use Code Llama for local execution. Use your arrow keys to set up the model.

[?] Parameter count (smaller is faster, larger is more capable): 7B
 > 7B
   13B
   34B

[?] Quality (lower is faster, higher is more capable): Low | Size: 3.01 GB, RAM usage: 5.51 GB
 > Low | Size: 3.01 GB, RAM usage: 5.51 GB
   Medium | Size: 4.24 GB, RAM usage: 6.74 GB
   High | Size: 7.16 GB, RAM usage: 9.66 GB

[?] Use GPU? (Large models might crash on GPU, but will run more quickly) (Y/n): y

[?] `Code-Llama` interface package not found. Install `llama-cpp-python`? (Y/n): y

Collecting llama-cpp-python
  Using cached llama_cpp_python-0.1.83.tar.gz (1.8 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in ./miniconda3/lib/python3.11/site-packages (from llama-cpp-python) (4.7.1)
Requirement already satisfied: numpy>=1.20.0 in ./miniconda3/lib/python3.11/site-packages (from llama-cpp-python) (1.24.4)
Collecting diskcache>=5.6.1 (from llama-cpp-python)
  Using cached diskcache-5.6.3-py3-none-any.whl (45 kB)
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [112 lines of output]


      --------------------------------------------------------------------------------
      -- Trying 'Ninja' generator
      --------------------------------
      ---------------------------
      ----------------------
      -----------------
      ------------
      -------
      --
      CMake Deprecation Warning at CMakeLists.txt:1 (cmake_minimum_required):
        Compatibility with CMake < 3.5 will be removed from a future version of
        CMake.

        Update the VERSION argument <min> value or use a ...<max> suffix to tell
        CMake that the project does not need compatibility with older versions.

      Not searching for unused variables given on the command line.

      -- The C compiler identification is unknown
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - failed
      -- Check for working C compiler: /usr/bin/cc
      -- Check for working C compiler: /usr/bin/cc - broken
      CMake Error at /tmp/pip-build-env-76a7bker/overlay/lib/python3.11/site-packages/cmake/data/share/cmake-3.27/Modules/CMakeTestCCompiler.cmake:67 (message):
        The C compiler

          "/usr/bin/cc"

        is not able to compile a simple test program.

        It fails with the following output:

          Change Dir: '/tmp/pip-install-f91tf6td/llama-cpp-python_c2997b3bb92647b7838d93aba07777dc/_cmake_test_compile/build/CMakeFiles/CMakeScratch/TryCompile-Ex1lli'

          Run Build Command(s): /tmp/pip-build-env-76a7bker/overlay/lib/python3.11/site-packages/ninja/data/bin/ninja -v cmTC_66c96
          [1/2] /usr/bin/cc    -o CMakeFiles/cmTC_66c96.dir/testCCompiler.c.o -c /tmp/pip-install-f91tf6td/llama-cpp-python_c2997b3bb92647b7838d93aba07777dc/_cmake_test_compile/build/CMakeFiles/CMakeScratch/TryCompile-Ex1lli/testCCompiler.c
          FAILED: CMakeFiles/cmTC_66c96.dir/testCCompiler.c.o
          /usr/bin/cc    -o CMakeFiles/cmTC_66c96.dir/testCCompiler.c.o -c /tmp/pip-install-f91tf6td/llama-cpp-python_c2997b3bb92647b7838d93aba07777dc/_cmake_test_compile/build/CMakeFiles/CMakeScratch/TryCompile-Ex1lli/testCCompiler.c
          cc: fatal error: cannot execute 'as': execvp: No such file or directory
          compilation terminated.
          ninja: build stopped: subcommand failed.





        CMake will not be able to correctly generate this project.
      Call Stack (most recent call first):
        CMakeLists.txt:3 (ENABLE_LANGUAGE)


      -- Configuring incomplete, errors occurred!
      --
      -------
      ------------
      -----------------
      ----------------------
      ---------------------------
      --------------------------------
      -- Trying 'Ninja' generator - failure
      --------------------------------------------------------------------------------



      --------------------------------------------------------------------------------
      -- Trying 'Unix Makefiles' generator
      --------------------------------
      ---------------------------
      ----------------------
      -----------------
      ------------
      -------
      --
      CMake Deprecation Warning at CMakeLists.txt:1 (cmake_minimum_required):
        Compatibility with CMake < 3.5 will be removed from a future version of
        CMake.

        Update the VERSION argument <min> value or use a ...<max> suffix to tell
        CMake that the project does not need compatibility with older versions.

      Not searching for unused variables given on the command line.

      CMake Error: CMake was unable to find a build program corresponding to "Unix Makefiles".  CMAKE_MAKE_PROGRAM is not set.  You probably need to select a different build tool.
      -- Configuring incomplete, errors occurred!
      --
      -------
      ------------
      -----------------
      ----------------------
      ---------------------------
      --------------------------------
      -- Trying 'Unix Makefiles' generator - failure
      --------------------------------------------------------------------------------

                      ********************************************************************************
                      scikit-build could not get a working generator for your system. Aborting build.

                      Building Linux wheels for Python 3.11 requires a compiler (e.g gcc).
      But scikit-build does *NOT* know how to install it on arch

      To build compliant wheels, consider using the manylinux system described in PEP-513.
      Get it with "dockcross/manylinux-x64" docker image:

        https://github.com/dockcross/dockcross#readme

      For more details, please refer to scikit-build documentation:

        http://scikit-build.readthedocs.io/en/latest/generators.html#linux

                      ********************************************************************************
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
Error during installation with OpenBLAS: Command '['/home/antwa/miniconda3/bin/python', '-m', 'pip', 'install',
'llama-cpp-python']' returned non-zero exit status 1.

Failed to install Code-LLama.

**We have likely not built the proper `Code-Llama` support for your system.**

(Running language models locally is a difficult task! If you have insight into the best way to implement this
across platforms/architectures, please join the Open Interpreter community Discord and consider contributing to
the project's development.)

Please press enter to switch to `GPT-4` (recommended).
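
The telling line in this log is cc: fatal error: cannot execute 'as': the assembler from binutils is missing, so the C compiler itself is broken. Since scikit-build reports an Arch system, something like this should restore a working toolchain before retrying (a suggestion based on the log, not a confirmed fix from the thread):

sudo pacman -S --needed base-devel cmake
pip3 install llama-cpp-python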

Cafezinho commented on May 13, 2024

Error for me too...

Defaulting to user installation because normal site-packages is not writeable
Collecting llama-cpp-python
Using cached llama_cpp_python-0.1.83.tar.gz (1.8 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting typing-extensions>=4.5.0
Using cached typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Collecting numpy>=1.20.0
Using cached numpy-1.25.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.2 MB)
Collecting diskcache>=5.6.1
Using cached diskcache-5.6.3-py3-none-any.whl (45 kB)
Building wheels for collected packages: llama-cpp-python
Building wheel for llama-cpp-python (pyproject.toml) ... error
error: subprocess-exited-with-error

× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [184 lines of output]

  --------------------------------------------------------------------------------
  -- Trying 'Ninja' generator
  --------------------------------
  ---------------------------
  ----------------------
  -----------------
  ------------
  -------
  --
  CMake Deprecation Warning at CMakeLists.txt:1 (cmake_minimum_required):
    Compatibility with CMake < 3.5 will be removed from a future version of
    CMake.

    Update the VERSION argument <min> value or use a ...<max> suffix to tell
    CMake that the project does not need compatibility with older versions.

  Not searching for unused variables given on the command line.

  -- The C compiler identification is GNU 11.4.0
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Check for working C compiler: /usr/bin/cc - skipped
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- The CXX compiler identification is GNU 11.4.0
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Check for working CXX compiler: /usr/bin/c++ - skipped
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  -- Configuring done (4.9s)
  -- Generating done (0.0s)
  -- Build files have been written to: /tmp/pip-install-zb5bu9st/llama-cpp-python_24b7e33fea9a4111bd59bb4482c69f2e/_cmake_test_compile/build
  --
  -------
  ------------
  -----------------
  ----------------------
  ---------------------------
  --------------------------------
  -- Trying 'Ninja' generator - success
  --------------------------------------------------------------------------------

  Configuring Project
    Working directory:
      /tmp/pip-install-zb5bu9st/llama-cpp-python_24b7e33fea9a4111bd59bb4482c69f2e/_skbuild/linux-x86_64-3.11/cmake-build
    Command:
      /tmp/pip-build-env-8te9r42p/overlay/local/lib/python3.11/dist-packages/cmake/data/bin/cmake /tmp/pip-install-zb5bu9st/llama-cpp-python_24b7e33fea9a4111bd59bb4482c69f2e -G Ninja -DCMAKE_MAKE_PROGRAM:FILEPATH=/tmp/pip-build-env-8te9r42p/overlay/local/lib/python3.11/dist-packages/ninja/data/bin/ninja --no-warn-unused-cli -DCMAKE_INSTALL_PREFIX:PATH=/tmp/pip-install-zb5bu9st/llama-cpp-python_24b7e33fea9a4111bd59bb4482c69f2e/_skbuild/linux-x86_64-3.11/cmake-install -DPYTHON_VERSION_STRING:STRING=3.11.4 -DSKBUILD:INTERNAL=TRUE -DCMAKE_MODULE_PATH:PATH=/tmp/pip-build-env-8te9r42p/overlay/local/lib/python3.11/dist-packages/skbuild/resources/cmake -DPYTHON_EXECUTABLE:PATH=/usr/bin/python3 -DPYTHON_INCLUDE_DIR:PATH=/usr/include/python3.11 -DPYTHON_LIBRARY:PATH=/usr/lib/x86_64-linux-gnu/libpython3.11.so -DPython_EXECUTABLE:PATH=/usr/bin/python3 -DPython_ROOT_DIR:PATH=/usr -DPython_FIND_REGISTRY:STRING=NEVER -DPython_INCLUDE_DIR:PATH=/usr/include/python3.11 -DPython3_EXECUTABLE:PATH=/usr/bin/python3 -DPython3_ROOT_DIR:PATH=/usr -DPython3_FIND_REGISTRY:STRING=NEVER -DPython3_INCLUDE_DIR:PATH=/usr/include/python3.11 -DCMAKE_MAKE_PROGRAM:FILEPATH=/tmp/pip-build-env-8te9r42p/overlay/local/lib/python3.11/dist-packages/ninja/data/bin/ninja -DCMAKE_BUILD_TYPE:STRING=Release

  Not searching for unused variables given on the command line.
  -- The C compiler identification is GNU 11.4.0
  -- The CXX compiler identification is GNU 11.4.0
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Check for working C compiler: /usr/bin/cc - skipped
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Check for working CXX compiler: /usr/bin/c++ - skipped
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  -- Configuring done (3.9s)
  -- Generating done (0.0s)
  -- Build files have been written to: /tmp/pip-install-zb5bu9st/llama-cpp-python_24b7e33fea9a4111bd59bb4482c69f2e/_skbuild/linux-x86_64-3.11/cmake-build
  [1/2] Generating /tmp/pip-install-zb5bu9st/llama-cpp-python_24b7e33fea9a4111bd59bb4482c69f2e/vendor/llama.cpp/libllama.so
  I llama.cpp build info:
  I UNAME_S:  Linux
  I UNAME_P:  x86_64
  I UNAME_M:  x86_64
  I CFLAGS:   -I.            -O3 -std=c11   -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS
  I CXXFLAGS: -I. -I./common -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS
  I LDFLAGS:
  I CC:       cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
  I CXX:      g++ (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0

  g++ -I. -I./common -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -c llama.cpp -o llama.o
  cc  -I.            -O3 -std=c11   -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS   -c ggml.c -o ggml.o
  cc -I.            -O3 -std=c11   -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS   -c -o k_quants.o k_quants.c
  k_quants.c:182:14: warning: ‘make_qkx1_quants’ defined but not used [-Wunused-function]
    182 | static float make_qkx1_quants(int n, int nmax, const float * restrict x, uint8_t * restrict L, float * restrict the_min,
        |              ^~~~~~~~~~~~~~~~
  cc  -I.            -O3 -std=c11   -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS   -c ggml-alloc.c -o ggml-alloc.o
  g++ -I. -I./common -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -shared -fPIC -o libllama.so llama.o ggml.o k_quants.o ggml-alloc.o
  [1/2] Install the project...
  -- Install configuration: "Release"
  -- Installing: /tmp/pip-install-zb5bu9st/llama-cpp-python_24b7e33fea9a4111bd59bb4482c69f2e/_skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/libllama.so

  copying llama_cpp/utils.py -> _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/utils.py
  copying llama_cpp/llama_grammar.py -> _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/llama_grammar.py
  copying llama_cpp/llama_cpp.py -> _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/llama_cpp.py
  copying llama_cpp/llama_types.py -> _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/llama_types.py
  copying llama_cpp/__init__.py -> _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/__init__.py
  copying llama_cpp/llama.py -> _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/llama.py
  creating directory _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/server
  copying llama_cpp/server/__main__.py -> _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/server/__main__.py
  copying llama_cpp/server/app.py -> _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/server/app.py
  copying llama_cpp/server/__init__.py -> _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/server/__init__.py
  copying /tmp/pip-install-zb5bu9st/llama-cpp-python_24b7e33fea9a4111bd59bb4482c69f2e/llama_cpp/py.typed -> _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/py.typed

  running bdist_wheel
  running build
  running build_py
  creating _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11
  creating _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/utils.py -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/llama_grammar.py -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/llama_cpp.py -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/llama_types.py -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/__init__.py -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/llama.py -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp
  creating _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp/server
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/server/__main__.py -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp/server
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/server/app.py -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp/server
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/server/__init__.py -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp/server
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/py.typed -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/libllama.so -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp
  running build_ext
  running install
  running install_lib
  Traceback (most recent call last):
    File "/usr/lib/python3/dist-packages/pip/_vendor/pep517/in_process/_in_process.py", line 363, in <module>
      main()
    File "/usr/lib/python3/dist-packages/pip/_vendor/pep517/in_process/_in_process.py", line 345, in main
      json_out['return_val'] = hook(**hook_input['kwargs'])
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/usr/lib/python3/dist-packages/pip/_vendor/pep517/in_process/_in_process.py", line 261, in build_wheel
      return _build_backend().build_wheel(wheel_directory, config_settings,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/usr/lib/python3/dist-packages/setuptools/build_meta.py", line 230, in build_wheel
      return self._build_with_temp_dir(['bdist_wheel'], '.whl',
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/usr/lib/python3/dist-packages/setuptools/build_meta.py", line 215, in _build_with_temp_dir
      self.run_setup()
    File "/usr/lib/python3/dist-packages/setuptools/build_meta.py", line 158, in run_setup
      exec(compile(code, __file__, 'exec'), locals())
    File "setup.py", line 8, in <module>
      setup(
    File "/tmp/pip-build-env-8te9r42p/overlay/local/lib/python3.11/dist-packages/skbuild/setuptools_wrap.py", line 781, in setup
      return setuptools.setup(**kw)  # type: ignore[no-any-return, func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^
    File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 153, in setup
      return distutils.core.setup(**attrs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/core.py", line 148, in setup
      return run_commands(dist)
             ^^^^^^^^^^^^^^^^^^
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/core.py", line 163, in run_commands
      dist.run_commands()
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/dist.py", line 967, in run_commands
      self.run_command(cmd)
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/dist.py", line 986, in run_command
      cmd_obj.run()
    File "/tmp/pip-build-env-8te9r42p/overlay/local/lib/python3.11/dist-packages/skbuild/command/bdist_wheel.py", line 33, in run
      super().run(*args, **kwargs)
    File "/usr/lib/python3/dist-packages/wheel/bdist_wheel.py", line 335, in run
      self.run_command('install')
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/dist.py", line 986, in run_command
      cmd_obj.run()
    File "/usr/lib/python3/dist-packages/setuptools/command/install.py", line 68, in run
      return orig.install.run(self)
             ^^^^^^^^^^^^^^^^^^^^^^
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/command/install.py", line 622, in run
      self.run_command(cmd_name)
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/dist.py", line 985, in run_command
      cmd_obj.ensure_finalized()
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/cmd.py", line 107, in ensure_finalized
      self.finalize_options()
    File "/tmp/pip-build-env-8te9r42p/overlay/local/lib/python3.11/dist-packages/skbuild/command/__init__.py", line 34, in finalize_options
      super().finalize_options(*args, **kwargs)
    File "/usr/lib/python3/dist-packages/setuptools/command/install_lib.py", line 17, in finalize_options
      self.set_undefined_options('install',('install_layout','install_layout'))
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/cmd.py", line 290, in set_undefined_options
      setattr(self, dst_option, getattr(src_cmd_obj, src_option))
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/cmd.py", line 103, in __getattr__
      raise AttributeError(attr)
  AttributeError: install_layout. Did you mean: 'install_platlib'?
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects

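This build gets further (the compiler works) but dies at the bdist_wheel step with AttributeError: install_layout, which looks like Debian/Ubuntu's patched setuptools clashing with scikit-build. Building inside a clean virtual environment with up-to-date tooling usually sidesteps that (a suggestion, not a fix confirmed in the thread):

python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip setuptools wheel
pip install llama-cpp-python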

hcymysql commented on May 13, 2024

Open Interpreter will use Code Llama for local execution. Use your arrow keys to set up the model.

[?] Parameter count (smaller is faster, larger is more capable): 7B
 > 7B
   13B
   34B

[?] Quality (lower is faster, higher is more capable): Low | Size: 3.01 GB, RAM usage: 5.51 GB
 > Low | Size: 3.01 GB, RAM usage: 5.51 GB
   Medium | Size: 4.24 GB, RAM usage: 6.74 GB
   High | Size: 7.16 GB, RAM usage: 9.66 GB

[?] Use GPU? (Large models might crash on GPU, but will run more quickly) (Y/n): y

[?] This instance of `Code-Llama` was not found. Would you like to download it? (Y/n): y


▌ Failed to install Code-LLama.                                                                                                                                      

We have likely not built the proper Code-Llama support for your system.                                                                                                

( Running language models locally is a difficult task! If you have insight into the best way to implement this across platforms/architectures, please join the Open Interpreter community Discord and consider contributing to the project's development. )

Please press enter to switch to GPT-4 (recommended).   

Cafezinho commented on May 13, 2024

Does not work.

AlvinCZ commented on May 13, 2024

> Hi @AlvinCZ! Thanks for trying this out + for the kind words about the project.
>
> Does this error happen for all models, e.g. if you try to download the low-quality 7B vs. the medium-quality 7B? I'm also on an M2 Mac with Python 3.11, so it should work. Maybe one of the download URLs has some weird SSL certificate issue and we just need to find a new URL.

This error happens for all models, and I still can't download any model on my Mac, but I managed to download models on my Windows machine (which used to have this same issue) by closing my proxy app (yes, Windows can download the models under the same internet conditions as my Mac?!). By the way, if someone has problems installing llama-cpp-python on Windows, try installing Microsoft Visual Studio first (this module needs a C++ compiler); see the check below.
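
On Windows, one way to confirm the compiler is actually reachable before retrying pip is to open a "Developer Command Prompt for VS" and check both tools (a hypothetical sanity check, not from the comment above):

cl
cmake --version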

Silversith commented on May 13, 2024

Well, that took a second to figure out. You need to use 64-bit (x64) Python, not 32-bit (x86).
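
A quick way to confirm which build you have is to ask Python for its pointer width; it prints 64 on an x64 build and 32 on x86 (a standard check):

python -c "import struct; print(struct.calcsize('P') * 8)"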

robert-pattern commented on May 13, 2024

I'm running into the same issues here. On a PC I was able to get it to download the packages, but then it still wouldn't work. On my M2 Mac I can't even get it to download the models.

iplayfast commented on May 13, 2024

On Linux it installs interpreter at ~/.local/bin/interpreter, with ~/.local/share/Open Interpreter/models/ as the location of the models.
I keep my models in ~/ai/data/models, so I'm going to move things around and symlink from the old model location; a sketch follows.
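
A sketch of that move, assuming the default paths above (the directory is quoted because of the space in "Open Interpreter"):

mkdir -p ~/ai/data/models
mv "$HOME/.local/share/Open Interpreter/models/"* ~/ai/data/models/
rmdir "$HOME/.local/share/Open Interpreter/models"
ln -s ~/ai/data/models "$HOME/.local/share/Open Interpreter/models"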

jordanbtucker commented on May 13, 2024

I'm closing this issue. Feel free to open it back up if the issue is not resolved.

Emojigit commented on May 13, 2024

Installing packages through the prompt did not work for me, but I was able to install the package manually by running pip directly in Open Interpreter's virtual environment created by pipx:

$HOME/.local/pipx/venvs/open-interpreter/bin/python -m pip install llama-cpp-python
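
Recent pipx releases also expose a shortcut for running pip inside one of their managed venvs, which should be equivalent (assuming a pipx version that ships runpip):

pipx runpip open-interpreter install llama-cpp-python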
