Nvidia’s GPU Technology Conference

Highlights of the CEO's Partnership Announcements

  • PGI CUDA-x86 – a compiler for deploying CUDA applications on virtually any computer or server in the world, whether GPU-accelerated, multicore, or single-processor. The PGI CUDA C compiler for x86 platforms will allow developers using CUDA to compile and optimize CUDA applications to run on x86-based workstations, servers, and clusters with or without an NVIDIA GPU accelerator, essentially making the language non-platform-specific and able to be shared throughout an enterprise without any worries about architecture compatibility (a minimal illustration follows this list).
  • MathWorks announced a CUDA-accelerated parallel-computing toolbox for MATLAB. MATLAB, used by over one million researchers worldwide, will support CUDA across a cluster of GPUs, with speedups of up to 40x over CPU clusters. It is used by some of the most renowned researchers in the world, including scientists at NASA, MIT, Boeing, Ford, Toyota, and Motorola, to name a few. (#1 in numerical calculations)
  • Multi-GPU support has arrived in AMBER 11, thanks to work by UCSD Professor Ross Walker and his team. This tool, used for biomolecular simulations including protein force fields, shows NVIDIA multi-GPU scaling outperforming the supercomputer Kraken: 192 quad-core CPUs process only 46 ns/day, while four server racks sporting an 8-Fermi-GPU solution process 52 ns/day (based on the JAC benchmark), proving the idea that supercomputer-class processing does not require a huge footprint and can be both readily available and affordable. (#1 in molecular dynamics; AMBER stands for Assisted Model Building with Energy Refinement)
  • Ansys announced beta-testing results using an NVIDIA solution: up to 2x faster runs in-house, before any optimization work has even started. That may not seem like a huge gain until you consider that simulating an airplane braking system takes 60+ CPU hours; the same job can now be done in just 30+ hours with a GPU solution. Although time is an important factor in design, what this really equates to is the ability to run as many simulations as possible within a project's deadline and so arrive at the best possible solution, even when a customer changes the design specs, perhaps to a variant you would never have had a chance to preview with older, more time-consuming CPU-only solutions. (Leading provider of simulation-driven product development solutions)
  • Autodesk announced that iray and PhysX will be available to subscription users of 3ds Max next week. Iray renders every photon by simulating light physically, something that can only be done with an enormous number of floating-point calculations (a toy illustration of that workload appears after this list). Upcoming research around 3ds Max also tackles a familiar dilemma: a client likes a design but would like to see it rendered from a different perspective, or with an object added, say a reflective vase. The solution is to publish the scene data to the cloud, allowing the first fully interactive, photorealistic rendering to be displayed and updated in real time for the customer. The demo shown during GTC had a browser displaying rendering performed by 32 Fermi processors at a supercomputing center in Toronto, over 2,600 miles away, driven only by the positional and object data sent via the cloud. (Leading solutions for 3D digital content creation)
  • Announcement of new Tesla OEM providers: IBM BladeCenter, Cray XE6 (10 of the top 50 corporations as customers), and T-Platforms TB2 (50% of the FLOPS of Russia's Top 50). In total, these vendors account for about 84% of high-performance computing usage among the world's Top 500. (IBM BladeCenter is the #1 HPC provider, with 196 of the top 500 companies as clients)
  • Adobe offered a peek into the future of GPU rendering and its ability to solve some of the problems with current photography. Those problems date back to film, when you would not know whether you had a decent shot until after the film was processed; even with digital photography and the instant display of photos, a shot can still be ruined by the wrong object being in focus. The solution is a plenoptic lens, computational photography, and moving the image processing and optics into the computer/GPU. Adobe showed a 4D light field that became a realistic option once the algorithm was moved from the CPU to the GPU, yielding up to a 500x increase in image-processing speed, even on stereoscopic 3D and 120 Hz displays. The demonstration eliminated focus problems by letting the user choose whether to bring the background, the subject, or anything in between into focus (a minimal sketch of the refocusing idea appears below). Adobe also hinted at upcoming solutions for removing motion blur from videos and photos, which would in turn revolutionize the video industry.
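
A Few Illustrative Code Sketches

To make the PGI CUDA-x86 point concrete, here is a minimal sketch of ordinary CUDA C. Nothing in it is tied to a particular processor: the same source that nvcc builds for an NVIDIA GPU is what a CUDA-x86 compiler would retarget to a multicore CPU. The SAXPY example is our own illustration, not code from the announcement.

```
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// y = a*x + y, computed one element per thread
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host data
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device data (error checking omitted to keep the sketch short)
    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // 256 threads per block, enough blocks to cover n elements
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expected 4.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```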
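The iray bullet claims that physically simulating light comes down to an enormous amount of floating-point work. The toy kernel below, our own illustration and in no way iray's actual algorithm, shows why: even a crude soft-shadow estimate fires hundreds of random rays per pixel, and every ray test is a small pile of floating-point arithmetic. Scale that up to full global illumination and the 32 Fermi processors in the demo start to make sense.

```
#include <stdio.h>
#include <cuda_runtime.h>

// Tiny per-thread LCG returning a float in [0, 1)
__device__ float rnd(unsigned int *state)
{
    *state = *state * 1664525u + 1013904223u;
    return (*state >> 8) * (1.0f / 16777216.0f);
}

// For each pixel on a ground plane, estimate how much overhead light
// survives a unit sphere floating at (0, 1.5, 0) by firing random rays.
__global__ void shade(float *image, int w, int h, int samples)
{
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= w || py >= h) return;

    float x = (px - w * 0.5f) / (w * 0.25f);   // plane point under the sphere
    float z = (py - h * 0.5f) / (h * 0.25f);
    unsigned int seed = py * w + px + 1u;

    float lit = 0.0f;
    for (int s = 0; s < samples; ++s) {
        // Random upward direction toward a broad overhead light
        float dx = rnd(&seed) - 0.5f, dy = 1.0f, dz = rnd(&seed) - 0.5f;
        // Ray-sphere test: form the quadratic and check its discriminant
        float ox = x, oy = -1.5f, oz = z;      // ray origin minus sphere center
        float a = dx * dx + dy * dy + dz * dz;
        float b = 2.0f * (ox * dx + oy * dy + oz * dz);
        float c = ox * ox + oy * oy + oz * oz - 1.0f;
        if (b * b - 4.0f * a * c < 0.0f)
            lit += 1.0f;                       // ray misses: light gets through
    }
    image[py * w + px] = lit / samples;        // fraction of unblocked light
}

int main(void)
{
    const int w = 256, h = 256, samples = 256;
    float *d_img;
    cudaMalloc((void **)&d_img, w * h * sizeof(float));

    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    shade<<<grid, block>>>(d_img, w, h, samples);

    float center;
    cudaMemcpy(&center, d_img + (h / 2) * w + w / 2, sizeof(float),
               cudaMemcpyDeviceToHost);
    printf("light visibility at image center: %.3f\n", center);

    cudaFree(d_img);
    return 0;
}
```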
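Adobe's after-the-fact refocusing has a well-known textbook core: treat the plenoptic capture as a 4D light field L(u, v, x, y) of sub-aperture views, shift each view in proportion to its offset from the lens center, and average ("shift-and-add"). The sketch below implements that scheme over made-up synthetic data, with a hypothetical alpha parameter selecting the focal plane; Adobe's actual pipeline is certainly more sophisticated, but this is the per-pixel-parallel shape of work that maps so well onto a GPU.

```
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Shift-and-add refocus: average all sub-aperture views of a 4D light
// field, shifting each view by alpha times its offset from the lens center.
__global__ void refocus(const float *lf, float *out,
                        int nu, int nv, int w, int h, float alpha)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    float sum = 0.0f;
    for (int v = 0; v < nv; ++v)
        for (int u = 0; u < nu; ++u) {
            int sx = x + (int)(alpha * (u - nu / 2));   // per-view shift
            int sy = y + (int)(alpha * (v - nv / 2));
            sx = min(max(sx, 0), w - 1);                // clamp at the borders
            sy = min(max(sy, 0), h - 1);
            sum += lf[((v * nu + u) * h + sy) * w + sx];
        }
    out[y * w + x] = sum / (nu * nv);   // the average is the refocused pixel
}

int main(void)
{
    const int nu = 5, nv = 5, w = 64, h = 64;   // tiny synthetic light field
    const size_t lfBytes = (size_t)nu * nv * w * h * sizeof(float);
    const size_t imBytes = (size_t)w * h * sizeof(float);

    // Stand-in pixel data; a real capture would come from a plenoptic camera
    float *h_lf = (float *)malloc(lfBytes);
    for (size_t i = 0; i < lfBytes / sizeof(float); ++i)
        h_lf[i] = (float)(i % 97) / 96.0f;

    float *d_lf, *d_out;
    cudaMalloc((void **)&d_lf, lfBytes);
    cudaMalloc((void **)&d_out, imBytes);
    cudaMemcpy(d_lf, h_lf, lfBytes, cudaMemcpyHostToDevice);

    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    refocus<<<grid, block>>>(d_lf, d_out, nu, nv, w, h, 0.5f);

    float probe;
    cudaMemcpy(&probe, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("refocused pixel (0,0): %.3f\n", probe);

    cudaFree(d_lf); cudaFree(d_out);
    free(h_lf);
    return 0;
}
```

Different alpha values place the focal plane at different depths, which is exactly the "bring the background or the subject into focus" control Adobe demonstrated.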