Heterogeneous System Architecture

Computing system


Heterogeneous System Architecture (HSA) is a cross-vendor set of specifications that allows central processing units and graphics processors to be integrated on the same bus, with shared memory and tasks. HSA is developed by the HSA Foundation, whose members include (among many others) AMD and ARM. The platform's stated aim is to reduce communication latency between CPUs, GPUs and other compute devices, and to make these devices more compatible from a programmer's perspective, relieving the programmer of explicitly planning data movement between the devices' disjoint memories (as must currently be done with OpenCL or CUDA).

CUDA and OpenCL, as well as most other fairly advanced programming frameworks, can use HSA to increase their execution performance. Heterogeneous computing is widely used in system-on-chip devices such as tablets, smartphones, other mobile devices, and video game consoles.

Rationale

The rationale behind HSA is to ease the burden on programmers when offloading calculations to the GPU. Originally driven solely by AMD and called FSA, the idea was extended to encompass processing units other than GPUs, such as other manufacturers' DSPs.

[Figure: Steps performed when offloading calculations to the GPU on a non-HSA system]

[Figure: Steps performed when offloading calculations to the GPU on an HSA system, using the HSA functionality]

Modern GPUs are very well suited to single instruction, multiple data (SIMD) and single instruction, multiple threads (SIMT) workloads, while modern CPUs remain optimized for branch-heavy code.

Overview

Originally introduced by embedded systems such as the Cell Broadband Engine, sharing system memory directly between multiple system actors makes heterogeneous computing more mainstream. Heterogeneous computing itself refers to systems that contain multiple processing units: central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), or any type of application-specific integrated circuit (ASIC). The system architecture allows any accelerator, for instance a graphics processor, to operate at the same processing level as the system's CPU.

Among its main features, HSA defines a unified virtual address space for compute devices: where GPUs traditionally have their own memory, separate from the main (CPU) memory, HSA requires these devices to share page tables so that devices can exchange data by sharing pointers. This is to be supported by custom memory management units. To render interoperability possible and also to ease various aspects of programming, HSA is intended to be ISA-agnostic for both CPUs and accelerators, and to support high-level programming languages.

So far, the HSA specifications cover:

HSA Intermediate Layer

HSAIL (Heterogeneous System Architecture Intermediate Language), a virtual instruction set for parallel programs

  • similar to LLVM Intermediate Representation and SPIR (used by OpenCL and Vulkan)
  • finalized to a specific instruction set by a JIT compiler
  • allows late decisions on which core(s) should run a task
  • explicitly parallel
  • supports exceptions, virtual functions and other high-level features
  • debugging support

HSA memory model

  • compatible with C++11, OpenCL, Java and .NET memory models
  • relaxed consistency
  • designed to support both managed languages (e.g. Java) and unmanaged languages (e.g. C)
  • will make it much easier to develop 3rd-party compilers for a wide range of heterogeneous products programmed in Fortran, C++, C++ AMP, Java, et al.

HSA dispatcher and run-time

  • designed to enable heterogeneous task queueing: a work queue per core, distribution of work into queues, load balancing by work stealing
  • any core can schedule work for any other, including itself
  • significant reduction of overhead of scheduling work for a core

Mobile devices are one of the HSA's application areas, in which it yields improved power efficiency.

Block diagrams

The illustrations below compare CPU-GPU coordination under HSA versus under traditional architectures.

[Figure: Standard architecture with a discrete GPU attached to the PCI Express bus. Zero-copy between the GPU and CPU is not possible due to distinct physical memories.]

[Figure: HSA brings unified virtual memory and facilitates passing pointers over PCI Express instead of copying the entire data.]

[Figure: In partitioned main memory, one part of the system memory is exclusively allocated to the GPU. As a result, zero-copy operation is not possible.]

[Figure: Unified main memory, where GPU and CPU are HSA-enabled. This makes zero-copy operation possible.]

[Figure: The CPU's MMU and the GPU's IOMMU must both comply with HSA hardware specifications.]

Software support

Some of the HSA-specific features implemented in the hardware need to be supported by the operating system kernel and specific device drivers. For example, support for AMD Radeon and AMD FirePro graphics cards, and for APUs based on Graphics Core Next (GCN), was merged into version 3.19 of the Linux kernel mainline, released on 8 February 2015; the amdkfd kernel driver provides the required support.

Additionally, amdkfd supports heterogeneous queuing (HQ), which aims to simplify the distribution of computational jobs among multiple CPUs and GPUs from the programmer's perspective. Support for heterogeneous memory management (HMM), suited only for graphics hardware featuring version 2 of AMD's IOMMU, was accepted into the Linux kernel mainline in version 4.14.

Integrated support for HSA platforms has been announced for the "Sumatra" release of OpenJDK, due in 2015.

AMD APP SDK is AMD's proprietary software development kit targeting parallel computing, available for Microsoft Windows and Linux. Bolt is a C++ template library optimized for heterogeneous computing.

GPUOpen comprises several other software tools related to HSA. CodeXL version 2.0 includes an HSA profiler.

Hardware support

AMD

Initially, only AMD's "Kaveri" A-series APUs (cf. "Kaveri" desktop processors and "Kaveri" mobile processors) and Sony's PlayStation 4 allowed the integrated GPU to access memory via version 2 of AMD's IOMMU. Earlier APUs (Trinity and Richland) included the version 2 IOMMU functionality, but only for use by an external GPU connected via PCI Express.

Post-2015 Carrizo and Bristol Ridge APUs also include the version 2 IOMMU functionality for the integrated GPU.

ARM

ARM's Bifrost microarchitecture, as implemented in the Mali-G71, is fully compliant with the HSA 1.1 hardware specifications. However, ARM has not announced software support that would use this hardware feature.

References


  1. Tarun Iyer (30 April 2013). "AMD Unveils its Heterogeneous Uniform Memory Access (hUMA) Technology".
  2. George Kyriazis (30 August 2012). "Heterogeneous System Architecture: A Technical Review". AMD.
  3. "What is Heterogeneous System Architecture (HSA)?". AMD.
  4. Joel Hruska (26 August 2013). "Setting HSAIL: AMD explains the future of CPU/GPU cooperation". Ziff Davis.
  5. Linaro (21 March 2014). "LCE13: Heterogeneous System Architecture (HSA) on ARM". slideshare.net.
  6. "Heterogeneous system architecture: Multicore image processing using a mix of CPU and GPU elements".
  7. (15 January 2014). "Kaveri microarchitecture". SemiAccurate.
  8. (13 November 2017). "Linux Kernel 4.14 Announced with Secure Memory Encryption and More".
  9. Alex Woodie (26 August 2013). "HSA Foundation Aims to Boost Java's GPU Prowess".
  10. (11 January 2022). "Bolt on GitHub".
  11. AMD GPUOpen (19 April 2016). "CodeXL 2.0 includes HSA profiler".
  12. (30 May 2016). "ARM Bifrost GPU Architecture".
  13. "Computer memory architecture for hybrid serial and parallel computing systems", US patents 7,707,388 (2010) and 8,145,879 (2012). Inventor: Uzi Vishkin.
Source

This article was imported from Wikipedia and is available under the Creative Commons Attribution-ShareAlike 4.0 License. Original contributors can be found on the article history page.
