NVIDIA® Tesla™ GPU Technology
NVIDIA Tesla computing solutions enable users to process large datasets with a massively multi-threaded computing architecture. By developing a parallel architecture from the ground up, NVIDIA has designed its Tesla computing products to meet the requirements of HPC software. The Tesla 20-Series introduces the next-generation CUDA architecture (codenamed "Fermi") and with it a powerful new array of features, summarized in the table below.
In addition to the power of GPU parallel processing, you can benefit from the CUDA software development environment for parallel programming (including support for C, C++, Fortran, OpenCL, and DirectCompute) and a steadily expanding spectrum of high-performance computing applications.
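As a concrete illustration (not part of the original page), a minimal CUDA C program might look like the sketch below. It launches a simple vector-add kernel on the GPU; the kernel name `vecAdd`, the array size, and the launch configuration are all illustrative choices, and the code assumes the CUDA toolkit (`nvcc`) and a CUDA-capable device.

```cuda
// Illustrative sketch: a minimal CUDA C vector-add program.
// Assumes the CUDA toolkit (nvcc) and a CUDA-capable GPU.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host buffers with known test data.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    // Device buffers; copy inputs to the GPU.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // 256 threads per block; enough blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[42] = %f\n", hc[42]);  // 42 + 84 = 126

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The same programming model extends to the Fortran, OpenCL, and DirectCompute paths mentioned above; only the host-side language and API differ.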

| Feature | Tesla 20-Series |
| --- | --- |
| Processing Cores | 448 |
| Double-Precision Floating-Point Capability | 515 Gflops |
| Single-Precision Floating-Point Capability | 1030 Gflops |
| Memory¹ | 3 GB GDDR5 (or 2.625 GB GDDR5 w/ ECC) |
| L1 Cache (per streaming multiprocessor) | Configurable 48 KB or 16 KB |
| L2 Cache | 768 KB |
| ECC Memory Support¹ | Yes |
| Concurrent Kernels | Up to 16 |

¹ With ECC on, a portion of the dedicated memory is used for ECC bits, so available user memory is reduced by 12.5%. For example, 3 GB of total memory yields 2.625 GB of user-available memory.
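Two of the table's features are directly visible to the programmer: the configurable L1/shared-memory split and concurrent kernel execution via streams. The hedged sketch below (not from the original page; the kernel `busyKernel` and all sizes are hypothetical) shows the standard CUDA runtime calls for both: `cudaFuncSetCacheConfig` to request the 48 KB L1 configuration, and per-stream kernel launches that Fermi-class hardware may overlap, up to the 16 concurrent kernels listed above.

```cuda
// Illustrative sketch: configuring the L1 cache preference and launching
// kernels into separate streams so they can run concurrently on
// Fermi-class GPUs. Assumes the CUDA toolkit and a CUDA-capable device.
#include <cuda_runtime.h>
#include <stdio.h>

// A small arithmetic kernel used only to occupy the GPU.
__global__ void busyKernel(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        for (int k = 0; k < 100; ++k)
            data[i] = data[i] * 1.0001f + 0.5f;
}

int main(void) {
    const int n = 1 << 18;
    const int nStreams = 4;  // hypothetical; up to 16 kernels may overlap

    // Ask for the 48 KB L1 / 16 KB shared-memory split for this kernel
    // (the per-SM configuration listed in the table above).
    cudaFuncSetCacheConfig(busyKernel, cudaFuncCachePreferL1);

    float *buf[nStreams];
    cudaStream_t streams[nStreams];
    for (int i = 0; i < nStreams; ++i) {
        cudaMalloc(&buf[i], n * sizeof(float));
        cudaMemset(buf[i], 0, n * sizeof(float));
        cudaStreamCreate(&streams[i]);
        // Kernels launched into different streams have no ordering
        // dependency and may execute concurrently.
        busyKernel<<<(n + 255) / 256, 256, 0, streams[i]>>>(buf[i], n);
    }
    cudaDeviceSynchronize();  // wait for all streams to finish

    for (int i = 0; i < nStreams; ++i) {
        cudaStreamDestroy(streams[i]);
        cudaFree(buf[i]);
    }
    printf("done\n");
    return 0;
}
```

Whether the kernels actually overlap depends on resource availability per streaming multiprocessor; the stream API only removes the ordering constraint.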
Read the product brief for the NVIDIA® Tesla™ C2050 cGPU.