Changes, they just keep a-coming...
The world of computing has certainly changed over the years. I still remember the days when programming was done on punch cards, and graphics were drawn with simple commands like BK 100, RT 90, and FD 50. (That's an L, in old-school Logo commands, for those who don't recognize the syntax.)
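For anyone curious what those commands actually did, here is a tiny Python sketch of a Logo-style turtle; the class is my own minimal illustration (coordinate math only, no drawing), showing how BK 100, RT 90, FD 50 traces out an L:

```python
import math

class Turtle:
    """Minimal Logo-style turtle: tracks only position and heading."""
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 90.0  # degrees; Logo turtles start facing "up"

    def fd(self, dist):
        """FD: move forward along the current heading."""
        rad = math.radians(self.heading)
        self.x += dist * math.cos(rad)
        self.y += dist * math.sin(rad)

    def bk(self, dist):
        """BK: move backward (forward with a negated distance)."""
        self.fd(-dist)

    def rt(self, angle):
        """RT: rotate clockwise by the given angle in degrees."""
        self.heading -= angle

t = Turtle()
t.bk(100)  # stroke straight down 100 units
t.rt(90)   # turn to face right
t.fd(50)   # stroke across 50 units -- the two strokes form an L
```

The turtle ends up at roughly (50, -100): one long vertical stroke and one short horizontal one, the shape of a capital L.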
Fast forward to today's world, a much more complex place, with over a thousand different programming languages spanning web, database, and graphic design, and many more added every year. Change is happening not only in how we program, but, more to the point, in what we program, which is the very topic of this article.
Back to the question at hand: "What's the big deal?"
GPU programming is being touted as the wave of the future, as can be seen in the many recent announcements of combined GPU/CPU architectures, including AMD's Fusion processor "Zacate" and Intel's own "Sandy Bridge" architecture over the last few weeks. Truth be told, the power of many-core versus multi-core really comes into focus when you see a 192 quad-core supercomputer (aka the Kraken) beaten in floating-point operations per second (FLOPS) by a simple 4-rack, 8-GPU Fermi solution.

The reason GPUs are being touted as the next generation of computing really comes down to architecture: CPUs were designed with a focus on sequential processing, as opposed to the parallel processing that GPUs are optimized for. This has proven highly advantageous for single-precision calculations, as some of the demos from GTC have shown. Nvidia is currently working on optimizing double-precision floating-point calculations, and when that work is complete, I have no doubt we will see even more impressive performance numbers compared with the plain CPU architectures we have been used to seeing.

Also of note is Tegra, Nvidia's combined CPU/GPU chip planned for mobile devices. If successful, this chip could revolutionize mobile computing as we know it. No longer will we have handheld phones with an OS and some programs able to run on them, but rather handheld computers with phone capabilities. People will view today's mobile phones the way we think of the big old cell phones of the '80s, you know the ones I'm referring to, with the suitcase-style attachment you had to carry around just to power the things.
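To make the sequential-versus-parallel distinction concrete, here is a minimal Python sketch (my own illustration, not anything shown at GTC): the same element-wise job done by one worker in order, and then split across a pool of workers because each element is independent. That independence is exactly the property a GPU exploits with its many lightweight cores.

```python
from concurrent.futures import ThreadPoolExecutor

def scale_sequential(values, factor):
    """CPU-style: a single worker walks the data in order."""
    out = []
    for v in values:
        out.append(v * factor)
    return out

def scale_parallel(values, factor):
    """GPU-style in miniature: each element's result is independent,
    so the work splits cleanly across workers -- the same property a
    GPU exploits with hundreds of simple cores instead of a few big,
    branch-heavy ones."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(lambda v: v * factor, values))

data = list(range(1000))
assert scale_sequential(data, 2.0) == scale_parallel(data, 2.0)
```

The point of the sketch is not thread performance but the shape of the workload: when no element depends on any other, throwing more (and simpler) cores at the problem scales almost for free, which is why data-parallel tasks map so well onto GPUs.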
"What is this change in technology and programming focus, and how is it going to affect me?"
The GTC conference announced several breakthrough partnerships between Nvidia and leading solution providers in several areas, including mathematical and engineering calculations, research, 3D modeling, production simulations, and even a peek into their collaboration on photography and video processing. These partnerships, described in more detail below, will affect everyone in the world in one way or another. Solutions for stereoscopic 3D medical imagery, fluid dynamics, genome research, facial and object recognition, real-time rendering, and even the landscape calculations used by unmanned vehicles are just a few of the fields whose research stands to accelerate, which in turn will help solve some of today's problems and make the world a smarter and safer place.
Sure, a lot of these things are high level, but even the average consumer stands to gain from the technology. Glasses-free stereoscopic 3D on your handheld devices, in movies, and in video games is just the beginning. Architects will be able to show you a real-time photorealistic view of your newly designed home; interior decorators will be able to show photorealistic imagery of your old or newly purchased furniture arranged in your home; and if you're just a hobbyist photographer, can you imagine never having a bad picture again? Video producers and production companies will literally be able to remove motion blur entirely, meaning less cost and less time for movie production. In truth, the possibilities are as endless as the human imagination and the solutions we can come up with.

The CEO of Nvidia opened the conference by stating that it was not so much Nvidia's conference as a conference to celebrate its users and developers, and I think he was right: we should all celebrate the ingenuity we have accomplished over the years. Some of this technology is amazing in all fields. I'll use photography as an example because it's so easy to visualize: just look at the leaps and bounds made in that field in only 10-15 years, from film as the primary medium to today's endless possibilities from a single image. Really breathtaking, if you think of the grand scheme of things.
The CEO also announced a roadmap of future Nvidia chipsets, including their code names and expected performance per watt. It was interesting to note that all the names were drawn from famous scientists. For comparison, the CEO used Tesla as the baseline at 1x performance per watt, with Fermi at 2x, Kepler (28 nm) at 5x in 2011, and Maxwell (22 nm) at 16x by 2013. Also mentioned were architectural enhancements: Fermi introduced ECC capabilities, and the CEO said we should expect further enhancements including virtual memory, preemption (priority task scheduling), and non-blocking of CPU processes, which will in turn speed up transfers between CPU and GPU and increase the significance of parallel computing. There was a hint of a mid-life Fermi kicker chip to be announced before Kepler's release in 2011, so it will be interesting to see what is unveiled after AMD makes its announcements this coming month.
Oct 06, 2015