
This project forked from rocm/hcc


HCC is an Open Source, Optimizing C++ Compiler for Heterogeneous Compute currently for the ROCm GPU Computing Platform

Home Page: https://github.com/RadeonOpenCompute/hcc/wiki

License: Other

Languages: C++ 95.95%, CMake 1.24%, Shell 1.10%, Python 0.84%, Perl 0.57%, C 0.26%, Makefile 0.02%, Gnuplot 0.01%

hcc's Introduction

HCC: An open source C++ compiler for heterogeneous devices

This repository hosts the HCC compiler implementation project. The goal is to implement a compiler that takes a program conforming to a parallel programming standard such as HC or the C++17 Parallel STL and transforms it into the AMD GCN ISA.

The project is based on LLVM and Clang. For more information, please visit the hcc wiki:

https://github.com/RadeonOpenCompute/hcc/wiki

Deprecation Notice

AMD is deprecating HCC to put more focus on HIP development and on other languages supporting heterogeneous compute. We will no longer develop any new features in HCC, and we will stop maintaining HCC after its final release, which is planned for June 2019. If your application was developed with the hc C++ API, we encourage you to transition it to other languages supported by AMD, such as HIP or OpenCL. HIP and the hc language share the same compiler technology, so many hc kernel language features (including inline assembly) are also available through the HIP compilation path.

Download HCC

The project now employs git submodules to manage the external components it depends upon. It is advised to add --recursive when you clone the project so that all submodules are fetched automatically.

For example:

# automatically fetches all submodules
git clone --recursive -b clang_tot_upgrade https://github.com/RadeonOpenCompute/hcc.git

For more information about git submodules, please refer to git documentation.

Build HCC from source

To configure and build HCC from source, use the following steps:

# create an out-of-source build directory
mkdir -p build; cd build
# configure a Release build of the HCC source tree
cmake -DCMAKE_BUILD_TYPE=Release ..
make

To install it, use the following steps:

sudo make install

Use HCC

For HC source codes:

hcc -hc foo.cpp -o foo
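
As an illustration, here is a minimal sketch of what foo.cpp might contain. The file is not part of this repository, and the hc::array_view / hc::parallel_for_each usage below follows the HC programming model documented in the hcc wiki; check the wiki for the exact API.

// foo.cpp: minimal SAXPY written against the HC API (sketch only).
#include <hc.hpp>
#include <vector>
#include <cstdio>

int main() {
  const int n = 1024;
  const float a = 3.0f;
  std::vector<float> x(n, 1.0f), y(n, 2.0f);

  // array_view wraps host memory and makes it visible to the accelerator.
  hc::array_view<float, 1> av_x(n, x);
  hc::array_view<float, 1> av_y(n, y);

  // One work-item per element; [[hc]] marks the lambda as device code.
  hc::parallel_for_each(av_y.get_extent(), [=](hc::index<1> i) [[hc]] {
    av_y[i] = a * av_x[i] + av_y[i];
  });

  av_y.synchronize();  // copy the results back into the host vector
  printf("y[0] = %f\n", y[0]);
  return 0;
}

Compiling this with hcc -hc foo.cpp -o foo as shown above produces a host executable with the GPU kernel embedded.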

Multiple ISA

HCC now supports embedding multiple GCN ISAs in a single executable file. This can be done in the following ways:

Use the --amdgpu-target= command-line option

It's possible to specify the --amdgpu-target= option multiple times. Example:

# ISA for Fiji(gfx803) and Vega10(gfx900) would 
# be produced
hcc -hc \
    --amdgpu-target=gfx803 \
    --amdgpu-target=gfx900 \
    foo.cpp

Configure HCC with the CMake HSA_AMDGPU_GPU_TARGET variable

If you build HCC from source, it's possible to configure it to automatically produce multiple ISAs via the HSA_AMDGPU_GPU_TARGET CMake variable.

Use ; to delimit each AMDGPU target. Example:

# ISA for Fiji(gfx803) and Vega10(gfx900) would 
# be produced by default
cmake \
    -DCMAKE_BUILD_TYPE=Release \
    -DHSA_AMDGPU_GPU_TARGET="gfx803;gfx900" \
    ../hcc

CodeXL Activity Logger

To enable the CodeXL Activity Logger, use the USE_CODEXL_ACTIVITY_LOGGER CMake variable.

Configure the build in the following way:

cmake \
    -DCMAKE_BUILD_TYPE=Release \
    -DUSE_CODEXL_ACTIVITY_LOGGER=1 \
    <ToT HCC checkout directory>

In your application compiled using hcc, include the CodeXL Activity Logger header:

#include <CXLActivityLogger.h>

For information about the usage of the Activity Logger for profiling, please refer to its documentation.
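
As a rough illustration only, an application might bracket a region of interest with markers. The amdtBeginMarker/amdtEndMarker entry points below are assumptions based on the CodeXL Activity Logger SDK, not something defined by this repository; consult the Activity Logger documentation for the exact signatures.

// Hypothetical sketch: emit begin/end markers around a region of interest.
#include <CXLActivityLogger.h>

void launch_kernels() {
  // Marker name, group name, optional user string (names assumed from the SDK).
  amdtBeginMarker("launch_kernels", "MyApp", nullptr);
  // ... enqueue hc kernels here ...
  amdtEndMarker();
}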

HCC with ThinLTO Linking

To enable ThinLTO at link time, use the KMTHINLTO environment variable.

Set up your environment in the following way:

export KMTHINLTO=1

ThinLTO Phase 1 - Implemented

For applications compiled using hcc, ThinLTO could significantly improve link-time performance. This implementation keeps kernels in their .bc file format, creates module summaries for each, performs llvm-lto's cross-module function importing, and then performs clamp-device (which uses the opt and llc tools) on each of the kernel files. These files are linked with lld into one .hsaco per specified target.

ThinLTO Phase 2 - Under development

This ThinLTO implementation will use the llvm-lto LLVM tool to replace the clamp-device bash script. It adds an optllc option to ThinLTOGenerator, which will perform in-program opt and codegen in parallel.

To use HCC Printf Functions

Set the environment variable:

export HCC_ENABLE_PRINTF=1

Then compile the printf kernel with the HCC_ENABLE_ACCELERATOR_PRINTF macro defined.

~/build/bin/hcc -hc -DHCC_ENABLE_ACCELERATOR_PRINTF -lhc_am -o printf.out ~/hcc/tests/Unit/HSA/printf.cpp

For more examples on how to use printf, see tests in tests/Unit/HSA/printf*.cpp.
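
A minimal sketch of such a kernel is shown below. The file is hypothetical, and it assumes device-side printf becomes available through hc.hpp when HCC_ENABLE_ACCELERATOR_PRINTF is defined, as in the bundled tests.

// Each work-item prints its index from the GPU (sketch only).
#include <hc.hpp>
#include <cstdio>

int main() {
  hc::extent<1> e(4);
  // .wait() blocks until the kernel finishes so the device output is flushed.
  hc::parallel_for_each(e, [=](hc::index<1> i) [[hc]] {
    printf("hello from work-item %d\n", i[0]);
  }).wait();
  return 0;
}

Build it with a command like the one shown above (including -lhc_am and the HCC_ENABLE_ACCELERATOR_PRINTF macro), with HCC_ENABLE_PRINTF=1 exported as described at the start of this section.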

hcc's People

Contributors

whchung, scchan, rocm-hcc, bensander, huimcw, david-salinas, unclehandsome, lyh-kernel-mcw, kwu91, yan-ming, aaronenyeshi, alexvlx, changchengwang, yaoxiaocc, pfultz2, tstellaramd, facao, jeffdaily, sunway513, alexratcliff-mcw, yxsamliu, gargrahul, arsenm, dfukalov, xiangfeng2006, vsytch, xatier, aditya4d1, alexvoicu, mangupta
