
Comments (7)

katusk commented on June 8, 2024

The Examples section begins with "You can download NuGet package dependencies and let CMake know about CMake export files in those packages via nuget_add_dependencies() calls:", followed by the code snippet you posted above. The CMAKE_PREFIX_PATHS parameter provides the root directory within the package under which the CMake export files for that package, and whatever target names they expose, can be found. There is no standardized way to put CMake export files in NuGet packages, which is why you need to provide this explicitly. See https://cmake.org/cmake/help/latest/variable/CMAKE_PREFIX_PATH.html for more; this is what is used in the background.
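
For a package that does ship CMake export files, the intended flow is roughly this (a sketch only: the package id, version, export-file location, and target names below are all made up for illustration):

nuget_initialize()
# Hypothetical package that ships its CMake export files under <package root>/cmake/
nuget_add_dependencies(
    PACKAGE Some.Native.Package VERSION 1.0.0 CMAKE_PREFIX_PATHS "cmake"
)
# The export files are then found through CMAKE_PREFIX_PATH behind the scenes
find_package(SomeNativePackage CONFIG REQUIRED)
target_link_libraries(your_app PRIVATE SomeNativePackage::SomeNativePackage)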

I looked into the Microsoft.ML.OnnxRuntime.Gpu NuGet package with the NuGet Package Explorer and I see no CMake export files, so in its current form I am assuming you probably cannot consume it in a cross-platform way via CMake. And I see no C/C++ binary artifacts either -- this CMake scripting was originally intended to be used only with native binary artifacts and their corresponding header files.

However, as you already noticed, further down the Examples section you can import .targets files when you use a Visual Studio generator with CMake; the .targets file is expected to be located under build/native/. So in theory you could write something like this:

nuget_initialize()
nuget_add_dependencies(
    PACKAGE Microsoft.ML.OnnxRuntime.Gpu VERSION 1.17.1 IMPORT_DOT_TARGETS_AS Microsoft.ML.OnnxRuntime.Gpu
)

Then you could use the Microsoft.ML.OnnxRuntime.Gpu target name in target_link_libraries() calls, as a dependency only -- but since there are no binaries to link, that does not make much sense to me. Plus a seemingly minor problem: the .targets file under the package root directory is actually located under buildTransitive/native/ instead of build/native/.
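
Just to make that concrete, "as a dependency only" would look roughly like this (demo is a made-up target name here):

add_executable(demo main.cpp)
# With a Visual Studio generator this only pulls the imported .targets file into the
# generated project; there are no native binaries in this package to actually link
target_link_libraries(demo PRIVATE Microsoft.ML.OnnxRuntime.Gpu)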

One other important thing: there is no transitive dependency management at all in this scripting; you have to explicitly list every NuGet package you need -- otherwise it will not get downloaded, nor will its build targets be imported. I see now that Microsoft.ML.OnnxRuntime.Gpu is really an umbrella package for other packages. If I look into one of its dependencies, e.g. microsoft.ml.onnxruntime.gpu.windows.1.17.1.nupkg, it looks way better, as it actually contains binaries.
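
So with the current scripting you would have to spell out the platform-specific packages yourself, something along these lines (untested sketch; whether CMAKE_PREFIX_PATHS or IMPORT_DOT_TARGETS_AS is the right argument for each package depends on what it actually ships):

nuget_initialize()
nuget_add_dependencies(
    PACKAGE Microsoft.ML.OnnxRuntime.Gpu VERSION 1.17.1 CMAKE_PREFIX_PATHS "../../.."
    PACKAGE Microsoft.ML.OnnxRuntime.Gpu.Windows VERSION 1.17.1 CMAKE_PREFIX_PATHS "../../.."
    PACKAGE Microsoft.ML.OnnxRuntime.Gpu.Linux VERSION 1.17.1 CMAKE_PREFIX_PATHS "../../.."
)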

Hmm, but... I need to think about this more; I am afraid your intended use-case, as I see it now, is currently not supported by the scripting. The runtimes/win-x64/native location within microsoft.ml.onnxruntime.gpu.windows.1.17.1.nupkg is a standard NuGet location, but the CMake scripting is currently not aware of it: we expect the targets and the binaries to be linked to be described in the .targets file within the NuGet package -- and that file sits under buildTransitive/native/ instead of build/native/. OK, making the scripting also look into the buildTransitive dir would be a minor change, that is fine. The rest I need to think about, and I want to solve it as simply as possible. I see that the .targets file comes with an accompanying .props file that actually describes how to handle the binaries under runtimes/win-x64/native. Maybe the current scripting automatically handles accompanying .props files; I need to check. I use a CMake built-in call for hooking up .targets files from the NuGet packages when you are rolling with a Visual Studio generator...

Can you please provide a bit more info about your use-case? For example, do you have C# projects as well, or do you only want to build C/C++? What is your build and project descriptor environment, and what target platforms do you have in mind?


vipcxj commented on June 8, 2024

I just want to build a C/C++ project.

This is my workaround:

# Download the NuGet packages into a project-local directory
file(MAKE_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/nuget/packages")
set(NUGET_PACKAGES_DIR "${CMAKE_CURRENT_SOURCE_DIR}/nuget/packages")
nuget_initialize()
set(ONNX_VERSION "1.17.1")
nuget_add_dependencies(
    PACKAGE Microsoft.ML.OnnxRuntime.Gpu VERSION ${ONNX_VERSION} CMAKE_PREFIX_PATHS "../../.."
)

if(UNIX)
    # The linked executable ends up requiring libonnxruntime.so.<version>, which the
    # package does not ship, so create it as a symlink to libonnxruntime.so
    if(NOT EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/nuget/packages/Microsoft.ML.OnnxRuntime.Gpu.Linux/runtimes/linux-x64/native/libonnxruntime.so.${ONNX_VERSION}")
        execute_process(
            COMMAND ${CMAKE_COMMAND} -E create_symlink libonnxruntime.so "libonnxruntime.so.${ONNX_VERSION}"
            WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/nuget/packages/Microsoft.ML.OnnxRuntime.Gpu.Linux/runtimes/linux-x64/native/"
        )
    endif()
endif()


find_package(onnxruntime CONFIG REQUIRED COMPONENTS onnxruntime_providers_cuda OPTIONAL_COMPONENTS onnxruntime_providers_tensorrt)

set(SRC_PATH "${CMAKE_CURRENT_SOURCE_DIR}/src")

add_executable(demo "${SRC_PATH}/main.cpp")
target_link_libraries(demo PRIVATE onnxruntime::onnxruntime onnxruntime::onnxruntime_providers_cuda)

And onnxruntimeConfig.cmake should be put under the project root dir:

if(WIN32)
    set(onnxruntime_ROOT "${CMAKE_CURRENT_LIST_DIR}/nuget/packages/Microsoft.ML.OnnxRuntime.Gpu.Windows")
elseif(UNIX)
    set(onnxruntime_ROOT "${CMAKE_CURRENT_LIST_DIR}/nuget/packages/Microsoft.ML.OnnxRuntime.Gpu.Linux")
else()
    message(FATAL_ERROR "NuGet package only supports Windows and Linux")
endif()

# Check that onnxruntime_ROOT contains the NuGet package and extract its version number
file(GLOB __nupkg_file "${onnxruntime_ROOT}/Microsoft.ML.OnnxRuntime.*.nupkg")
list(GET __nupkg_file 0 __nupkg_file)
string(REGEX MATCH "[0-9]+\\.[0-9]+\\.[0-9]+" onnxruntime_VERSION "${__nupkg_file}")
message(STATUS "Loading using nuget")

set(ORT_INCLUDE_DIR "${onnxruntime_ROOT}/buildTransitive/native/include/")
if(WIN32)
  set(ORT_LIBRARY_DIR "${onnxruntime_ROOT}/runtimes/win-x64/native/")
elseif(UNIX)
  set(ORT_LIBRARY_DIR "${onnxruntime_ROOT}/runtimes/linux-x64/native/")
else()
  message(FATAL_ERROR "NuGet package only supports Windows and Linux")
endif()
message("ORT_INCLUDE_DIR: ${ORT_INCLUDE_DIR}")
set(onnxruntime_FOUND FALSE)
if(EXISTS ${ORT_INCLUDE_DIR} AND EXISTS ${ORT_LIBRARY_DIR})
  set(onnxruntime_FOUND TRUE)
endif()
if(onnxruntime_FOUND)
  add_library(onnxruntime::onnxruntime SHARED IMPORTED)
  set_target_properties(onnxruntime::onnxruntime PROPERTIES
    IMPORTED_CONFIGURATIONS RELEASE
    INTERFACE_INCLUDE_DIRECTORIES "${ORT_INCLUDE_DIR}"
  )
  if(WIN32)
    set_target_properties(onnxruntime::onnxruntime PROPERTIES
      IMPORTED_IMPLIB "${ORT_LIBRARY_DIR}onnxruntime.lib"
      IMPORTED_LOCATION "${ORT_LIBRARY_DIR}onnxruntime.dll"
    )
  elseif(UNIX AND NOT APPLE)
    set_target_properties(onnxruntime::onnxruntime PROPERTIES IMPORTED_LOCATION "${ORT_LIBRARY_DIR}libonnxruntime.so")
  endif()
endif()

if(onnxruntime_FOUND)
  # The execution providers are separate shared libraries that onnxruntime loads at
  # runtime, so import them as MODULE targets rather than linkable libraries
  function(import_providers provider_name)
    if(WIN32)
      set(lib_path "${ORT_LIBRARY_DIR}onnxruntime_providers_${provider_name}.dll")
    elseif(UNIX AND NOT APPLE)
      set(lib_path "${ORT_LIBRARY_DIR}libonnxruntime_providers_${provider_name}.so")
    endif()
    if(EXISTS ${lib_path})
        add_library(onnxruntime::onnxruntime_providers_${provider_name} MODULE IMPORTED)
        set_target_properties(onnxruntime::onnxruntime_providers_${provider_name} PROPERTIES
          IMPORTED_CONFIGURATIONS RELEASE
          INTERFACE_INCLUDE_DIRECTORIES ${ORT_INCLUDE_DIR}
          IMPORTED_LOCATION ${lib_path}
        )
      set(onnxruntime_onnxruntime_providers_${provider_name}_FOUND TRUE PARENT_SCOPE)
    endif()
  endfunction(import_providers)
  set(ORT_PROVIDERS cuda;tensorrt)
  foreach(__component ${ORT_PROVIDERS})
    import_providers(${__component})
  endforeach()
  include(FindPackageHandleStandardArgs)
  find_package_handle_standard_args(
    onnxruntime
    REQUIRED_VARS onnxruntime_ROOT
    FOUND_VAR onnxruntime_FOUND
    VERSION_VAR onnxruntime_VERSION
    HANDLE_COMPONENTS
  )
endif()

There are still quite a few problems with this solution, such as the strange linking error I encountered. I was linking against libonnxruntime.so, but the output executable actually requires libonnxruntime.so.1.17.1 at runtime. Since libonnxruntime.so.1.17.1 does not exist in the package, I had to create a symlink named libonnxruntime.so.1.17.1 pointing to libonnxruntime.so in CMakeLists. I also used ldd to look at all the dependent libraries: even though I specified onnxruntime_providers_cuda and onnxruntime_providers_tensorrt in target_link_libraries, neither of them shows up as a dependency of demo. The demo does use the GPU properly, though, so they are probably loaded at runtime. The question is how they find the corresponding .so files, and whether the version number is automatically appended when they look for them.
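
Maybe putting that native directory on the executable's RPATH would help the dynamic loader at runtime; a minimal, untested sketch assuming the demo target and the package layout from the workaround above (whether the provider libraries that onnxruntime loads by itself also honour this is just my assumption):

if(UNIX)
    # Let the loader search the NuGet package's native directory at runtime
    set_target_properties(demo PROPERTIES
        BUILD_RPATH "${CMAKE_CURRENT_SOURCE_DIR}/nuget/packages/Microsoft.ML.OnnxRuntime.Gpu.Linux/runtimes/linux-x64/native"
    )
endif()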

I've been using Java, JS, then Python and Golang, and in the last 1-2 months I've started to use C and C++, and C++'s dependency management has been a disaster. Maven and npm are so handy; pip and conda are a bit lacking, but still far better than C++. Golang didn't have any dependency management before, but now its dependency management is already very mature.


katusk commented on June 8, 2024

Yeah, C/C++ dependency management is not easy at all, especially if you want to do it cross-platform. Look, I would advise you to use vcpkg -- search for onnx here: https://vcpkg.io/en/packages -- or Conan -- https://conan.io/ -- if you want proper cross-platform C/C++ library management that integrates quite well with CMake and other build systems. If you want something more lightweight, there is also https://github.com/cpm-cmake/CPM.cmake
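
If you go the vcpkg route, the consuming CMake side is roughly this (a sketch only: the exact package and imported target names depend on the port, so onnxruntime and onnxruntime::onnxruntime below are assumptions -- check what the port actually exports):

# Configure with the vcpkg toolchain, for example:
#   cmake -B build -S . -DCMAKE_TOOLCHAIN_FILE=<vcpkg-root>/scripts/buildsystems/vcpkg.cmake
find_package(onnxruntime CONFIG REQUIRED)
target_link_libraries(your_app PRIVATE onnxruntime::onnxruntime)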

And I have also heard about the following, but I have not tried them myself yet: https://github.com/cpp-pm/hunter, https://build2.org/, and https://spack.io/ -- the latter currently targets only supercomputers, Linux, and macOS, but there were initiatives to make it work on Windows as well; I don't know the progress there.

NuGet is not really designed primarily with C/C++ in mind as far as I can see; it is more of a C# thing that also lets you ship native binaries along with C# wrapper DLLs. You can sort of use it for C/C++ in certain cases, if you cannot avoid it. If you have a primarily C/C++ environment, then I think you are better off without NuGet packages.

OK, but apart from the above advice: I will look into your NuGet use case just for fun. No promises, maybe not this week, but I will :)


vipcxj commented on June 8, 2024

@katusk I was using vcpkg before and submitted several PRs, and I happened to see a WIP onnxruntime in the PR list, so I thought that onnxruntime was not available on vcpkg. It turns out that although there is no onnxruntime, there is an onnxruntime-gpu. I'll try using onnxruntime-gpu directly from vcpkg now! Thanks.


vipcxj commented on June 8, 2024

@katusk It seems that onnxruntime-gpu on vcpkg doesn't do what I need; it only supports Windows, not Linux.


vipcxj commented on June 8, 2024

@katusk I created a vcpkg overlay port to get onnxruntime-cuda12, and it works perfectly. However, there is no compiled TensorRT provider in onnxruntime-linux(win)-x64-cuda12-1.17.1.tgz(.zip), so it does not support TensorRT, which is unacceptable to me. Now I'm going back to the original method. However, I did find a TensorRT provider that supports CUDA 12 in the NuGet package. I wonder if there is a way for vcpkg to download the packages I need via NuGet.
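
For reference, hooking an overlay port into a CMake build via the vcpkg toolchain looks roughly like this (a sketch; the overlay-ports path is only an example, and the variable has to be set before project() so the toolchain sees it):

cmake_minimum_required(VERSION 3.21)
# Tell the vcpkg toolchain where the overlay ports live (example path)
set(VCPKG_OVERLAY_PORTS "${CMAKE_SOURCE_DIR}/overlay-ports" CACHE STRING "")
project(demo LANGUAGES CXX)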


katusk commented on June 8, 2024

@vipcxj Good question; you should ask them whether that is possible, or what the proper way to use that particular library via vcpkg would be. I know that vcpkg can export parts of its install tree to a self-created NuGet package, which you can then use via CMake (or via CMakeNuGetTools if you want to avoid some scripting) or MSBuild (it has both CMake exports and an MSBuild-only .targets file), and it can also use self-created individual NuGet packages for binary caching purposes -- EDIT: the latter is handled entirely by vcpkg, you do not need other tooling for that, apart from setting up NuGet of course. But I am not aware of vcpkg handling NuGet packages out in the wild that were not created by vcpkg itself; I do not think it was created for that, or that they had that in mind -- the project scope would be very different then. EDIT: Please note that I also just happened to use vcpkg before; I do not know its full feature set -- I am just a user like you.

