OpenAI Releases Triton, An Open-Source Python-Like GPU Programming Language For Neural Networks – MarkTechPost


OpenAI has released its newest language, Triton. This open-source, Python-compatible programming language enables researchers to write highly efficient GPU code for AI workloads: a user with no CUDA experience can write a kernel in as few as 25 lines of code and achieve results on par with what an expert could produce. OpenAI claims this makes it possible to reach peak hardware performance without much effort, making it easier than ever to build more complex workflows.


Researchers in the field of deep learning often rely on native framework operators. This can be problematic, however, because such operators require many temporary tensors, which may hurt performance at scale for neural networks. Writing specialized GPU kernels is an alternative, but it is surprisingly difficult because of the intricacies of programming GPUs. Finding a system that provides the required flexibility and speed while remaining easy for developers to understand has long been a challenge. This led researchers at OpenAI to improve Triton, a project originally created by one of their team members.

The architecture of modern GPUs can be broken down into three major components: DRAM, SRAM, and ALUs. Each must be considered when optimizing CUDA code, and one cannot overlook the challenges that come with GPU programming: memory transfers from DRAM should be coalesced to leverage the large bus widths of today's memory interfaces, and data must be manually stashed in SRAM before it is reused, taking care to avoid shared-memory bank conflicts upon retrieval.
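Triton is designed to automate much of this memory management. As a rough illustration, here is a minimal vector-addition kernel sketched after the examples in OpenAI's public Triton tutorials; the kernel uses Triton's published API (`triton.jit`, `tl.load`, `tl.store`), while the `add` wrapper and the `BLOCK_SIZE` value are illustrative choices, not prescribed by the library. Coalesced loads and stores are handled by the compiler rather than written by hand.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    # Mask out-of-bounds lanes so the last block is safe.
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Illustrative launcher: one kernel instance per block of 1024 elements.
    out = torch.empty_like(x)
    n = x.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

Running such a kernel requires the `triton` package and a CUDA-capable GPU; the compiler takes care of scheduling the blocks across the hardware.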


Triton simplifies the development of specialized kernels that can be much faster than those in general-purpose libraries. The compiler automatically optimizes and parallelizes the code, converting it into a form that executes on recent NVIDIA GPUs. Triton has its origins in a 2019 paper presented at the International Workshop on Machine Learning and Programming Languages (MAPL), and its creator is now part of the OpenAI team.

Paper: http://www.eecs.harvard.edu/~htk/publication/2019-mapl-tillet-kung-cox.pdf

Github: https://github.com/openai/triton

Source: https://openai.com/blog/triton/
