The growing ranks of programmers using the open-source Python language can now take full advantage of GPU acceleration for their high performance computing (HPC) and big data analytics applications by using the NVIDIA® CUDA® parallel programming model, NVIDIA today announced.

Easy to learn and use, Python is among the top 10 programming languages with more than three million users. It enables users to write high-level software code that captures their algorithmic ideas without delving deep into programming details. Python’s extensive libraries and advanced features make it ideal for a broad range of HPC science, engineering and big data analytics applications.

Support for NVIDIA CUDA parallel programming comes from NumbaPro, a Python compiler in the new Anaconda Accelerate product from Continuum Analytics.

“Hundreds of thousands of Python programmers will now be able to leverage GPU accelerators to improve performance on their applications,” said Travis Oliphant, co-founder and CEO at Continuum Analytics. “With NumbaPro, programmers have the best of both worlds: they can take advantage of the flexibility and high productivity of Python with the high performance of NVIDIA GPUs.”
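As an illustration of this programming model, the sketch below writes a small vector-addition kernel in pure Python. The decorator and launch syntax shown follow the open-source Numba package into which NumbaPro's CUDA support was later folded; the original NumbaPro interface may differ in detail.

    import numpy as np
    from numba import cuda  # NumbaPro's CUDA features were later folded into open-source Numba

    @cuda.jit
    def add_kernel(x, y, out):
        # Each GPU thread handles one element of the arrays.
        i = cuda.grid(1)
        if i < x.shape[0]:
            out[i] = x[i] + y[i]

    n = 1_000_000
    x = np.arange(n, dtype=np.float32)
    y = 2.0 * x
    out = np.empty_like(x)

    # Launch enough blocks of 256 threads to cover all n elements;
    # the NumPy arrays are copied to and from the GPU automatically.
    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    add_kernel[blocks, threads_per_block](x, y, out)

The key point is that the kernel is ordinary Python syntax: the compiler, not the programmer, handles the translation to GPU machine code.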

Expanded Access to Accelerated Computing Via LLVM

This new support for GPU-accelerated application development is the result of NVIDIA's contribution of CUDA compiler source code to the core and parallel thread execution (PTX) backend of LLVM, a widely used open-source compiler infrastructure.

Continuum Analytics’ Python development environment uses LLVM and the NVIDIA CUDA compiler software development kit to deliver GPU-accelerated application capabilities to Python programmers.

The modularity of LLVM makes it easy for language and library designers to add support for GPU acceleration to a wide range of general-purpose languages like Python, as well as to domain-specific programming languages. LLVM’s efficient just-in-time compilation capability lets developers compile dynamic languages like Python on the fly for a variety of architectures.
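For example, that just-in-time path can be sketched with a NumPy ufunc that is compiled for the GPU the first time it is called. The @vectorize decorator and 'cuda' target shown here are those of the open-source Numba package, used as a stand-in for the original NumbaPro interface.

    import numpy as np
    from numba import vectorize

    # The scalar function below is compiled through LLVM into a GPU ufunc
    # at its first call, rather than ahead of time.
    @vectorize(['float32(float32, float32)'], target='cuda')
    def gpu_mul(a, b):
        return a * b

    a = np.linspace(0.0, 1.0, 1_000_000, dtype=np.float32)
    b = np.linspace(1.0, 2.0, 1_000_000, dtype=np.float32)
    c = gpu_mul(a, b)  # JIT compilation and GPU execution happen on this call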

“Our research group typically prototypes and iterates new ideas and algorithms in Python and then rewrites the algorithm in C or C++ once the algorithm is proven effective,” said Vijay Pande, professor of Chemistry and of Structural Biology and Computer Science at Stanford University. “CUDA support in Python enables us to write high-performance code while maintaining the productivity offered by Python.”

Anaconda Accelerate is available for Continuum Analytics’ Anaconda Python offering, and as part of the Wakari browser-based data exploration and code development environment.

About CUDA

CUDA is a parallel computing platform and programming model developed by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of GPUs. With more than 1.7 million downloads and support for more than 220 leading engineering, scientific and commercial applications, the CUDA programming model is the most popular way for developers to take advantage of GPU-accelerated computing.

More information about NVIDIA CUDA GPUs is available at the NVIDIA Tesla® GPU website. To learn more about CUDA or download the latest version, visit the CUDA website.
