Here are a couple of articles from a blog by a guy called Tim Dettmers.
This first one is from 2015, but I think the overview is still useful - it's still getting comments in 2017, and Tim is still responding to them...
http://timdettmers.com/2015/03/09/deep-learning-hardware-guide/
If you read that first one you'll get the idea that GPUs are the way to go, and Nvidia seems to be his and his readers' weapon of choice. Since Nvidia brought out their new range around six months back, there are probably some bargains to be had in second-hand kit - I know my old 970 was snapped up pretty quickly.
And that brings us to the second article, from 2017, which goes into more detail about choosing GPUs... again he picks Nvidia:
NVIDIA’s standard libraries made it very easy to establish the first deep learning libraries in CUDA, while there were no such powerful standard libraries for AMD’s OpenCL. Right now, there are just no good deep learning libraries for AMD cards – so NVIDIA it is. Even if some OpenCL libraries would be available in the future I would stick with NVIDIA: The thing is that the GPU computing or GPGPU community is very large for CUDA and rather small for OpenCL. Thus, in the CUDA community, good open source solutions and solid advice for your programming is readily available.
http://timdettmers.com/2017/04/09/which-gpu-for-deep-learning/
I think it might interest some of you. Korrelan, I think this is getting into your territory.
Personally I've only dabbled briefly with parallel computing on a GPU, and I didn't have much luck. That was back when I was first running morphs through my 3D characters - I was using the GPU to calculate vertex positions. An interesting tangent that I went off on, which was soon better solved another way.
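For anyone curious, the maths behind that kind of vertex morphing is a classic fit for a GPU because every vertex can be computed independently. Here's a rough sketch of the idea in plain NumPy (the mesh, morph names, and weights are made up for illustration - this isn't my original GPU code):

```python
import numpy as np

def blend_morphs(base, targets, weights):
    """Linear blend-shape morphing.
    base: (V, 3) rest-pose vertex positions
    targets: list of (V, 3) morph-target positions
    weights: one blend weight per target
    """
    out = base.copy()
    for target, w in zip(targets, weights):
        # Each vertex moves by a weighted offset towards the target shape.
        out += w * (target - base)
    return out

# Dummy 4-vertex mesh with one hypothetical "smile" morph at half strength.
base = np.zeros((4, 3))
smile = np.ones((4, 3))
result = blend_morphs(base, [smile], [0.5])
```

On a GPU the per-vertex additions would all happen in parallel, which is the appeal - though as I said, there turned out to be easier ways to get the same result.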
Still, interesting stuff.