WC4 Fan Remake Launches Full-Featured Website (July 23, 2020)

Yeah, I had a feeling it was all built-in. So I'm guessing the reason you don't have a Threadripper CPU (yet?) is that you already splurged on a Titan Xp. XD Sadly, I think the GPU industry is going back to segregating graphics-oriented ('gaming') cards from compute-focused ones (and the latter are really expensive compared to the former), which is a pity because I really enjoyed being able to use my GPUs for more than just playing games.

Clearly NNs have produced impressive results, both here and in dozens of examples touted around the web already. But they still need help from humans in judging what is good versus what is not as part of their learning. For simple, fixed-rule learning (like Chess, Super Mario Bros, etc.), sure, but visual interpolation is so much deeper than that.
 
Pretty much, yes - for the most part. One method that's being widely employed is to train the NN on things that you already have the correct results for. For example, in the case of the AI upscaling models we've been talking about, I believe they're trained by having high resolution images for the NN to compare against as "correct" results, then feeding it low resolution versions of the same image set and allowing it to compare its results against the originals for the "keep/discard" process. That way, direct human intervention isn't required to tell the NN if it's getting it right - but by the same token, the software itself doesn't "know" if it's getting things right.
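To make that concrete, a toy version of the training loop might look something like the sketch below. This is purely illustrative - it isn't the code any of the actual upscaling projects use, the tiny network is just a stand-in for a real ESRGAN-class model, and all the names and numbers are made up - but it shows how the high resolution originals double as the "answer key":

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUpscaler(nn.Module):
    """Toy 2x super-resolution network (a stand-in for a real upscaling model)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * 4, 3, padding=1),  # 4 = 2x2 upscale factor
            nn.PixelShuffle(2),                  # rearrange channels into 2x resolution
        )

    def forward(self, x):
        return self.body(x)

model = TinyUpscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    # Pretend these are crops from the high resolution training set ("correct" results).
    hires = torch.rand(8, 3, 64, 64)
    # The low resolution input is just a downscaled copy of the ground truth.
    lores = F.interpolate(hires, scale_factor=0.5, mode="bicubic", align_corners=False)

    pred = model(lores)
    loss = F.l1_loss(pred, hires)  # "compare its result against the original"

    opt.zero_grad()
    loss.backward()
    opt.step()
```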

Certainly when I was training the AI model for that DeepFake example, I didn't have to tell the NN what to keep or what to discard. It just kept getting better on its own merits. It was calculating 3D data (head position, inclination, rotation, etc.) from the "target" sequence and checking facial fidelity from the "source" sequence. The downside was just the sheer amount of time involved to go through all the iterations. The only reason the results were so poor was that after three weeks of training, I gave up (I wanted my computer back!). If I'd had the time (or a spare device) to leave it running for a lot longer, I'm confident it would eventually have come up with something pretty convincing on its own.
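For the curious, my understanding is that the face-swap tools work roughly like the sketch below: a shared encoder with one decoder per identity. I'm assuming the usual open-source faceswap/DeepFaceLab-style setup here, since that's the common approach, and every layer size and name is invented purely for illustration. The point is the same as above - nothing ever asks a human to keep or discard anything, because the reconstruction loss does the judging:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def down(cin, cout):   # halves resolution
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1), nn.LeakyReLU(0.1))

def up(cin, cout):     # doubles resolution
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1), nn.LeakyReLU(0.1))

class Encoder(nn.Module):   # shared: learns pose, lighting, expression
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(down(3, 32), down(32, 64), down(64, 128))

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):   # one per identity: learns that face's appearance
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(up(128, 64), up(64, 32),
                                 nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, z):
        return self.net(z)

enc, dec_source, dec_target = Encoder(), Decoder(), Decoder()
params = list(enc.parameters()) + list(dec_source.parameters()) + list(dec_target.parameters())
opt = torch.optim.Adam(params, lr=5e-5)

for step in range(100):
    source = torch.rand(8, 3, 64, 64)  # aligned face crops of the source person
    target = torch.rand(8, 3, 64, 64)  # aligned face crops of the target person

    # Each decoder only ever reconstructs its own identity; the loss is the judge.
    loss = (F.l1_loss(dec_source(enc(source)), source) +
            F.l1_loss(dec_target(enc(target)), target))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The actual swap: encode a target frame, then decode it with the source decoder.
with torch.no_grad():
    swapped = dec_source(enc(torch.rand(1, 3, 64, 64)))
```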
 
Wow, did your process suck up all the computing power? I thought you could reduce its priority level or otherwise cut back on the cores allocated to it (something like the sketch below). Anyway, sounds like a learning experience.

And yes, good point about NN being useful if you already have a good set of 'destination' samples ready.
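For reference, this is the sort of thing I meant by dialling the priority and core count back - it uses the third-party psutil package, and the niceness value and core list are just arbitrary examples:

```python
import psutil

p = psutil.Process()          # the training process (here, the current process)
p.nice(10)                    # lower priority via Unix niceness; on Windows use
                              # e.g. psutil.BELOW_NORMAL_PRIORITY_CLASS instead
p.cpu_affinity([0, 1, 2, 3])  # restrict it to the first four cores (Linux/Windows only)
```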
 
Well, the deep learning process is handled almost entirely by the GPU. The CPU impact was negligible. When I say "I wanted my computer back," I mean "I wanted to play games!"
 
Ah, of course! Yeah, it's a pity that GPGPU doesn't have priority levels in the same way that regular processes on the CPU do, but I understand that's supposed to be changing...
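From what I've read, the closest thing on offer today is CUDA stream priorities, but those only bias scheduling between streams inside your own process - they're nothing like OS-level process priorities, so they wouldn't let a game pre-empt a training run. A rough sketch via PyTorch (the sizes and priority values here are arbitrary):

```python
import torch

assert torch.cuda.is_available()

# Lower numbers mean higher priority; 0 is the default.
high_prio = torch.cuda.Stream(priority=-1)  # e.g. latency-sensitive kernels
default   = torch.cuda.Stream(priority=0)   # bulk work

x = torch.randn(4096, 4096, device="cuda")

with torch.cuda.stream(default):
    bulk = x @ x       # long-running matmul on the default-priority stream

with torch.cuda.stream(high_prio):
    quick = x.sum()    # small kernel the scheduler may favour

torch.cuda.synchronize()
print(bulk.shape, quick.item())
```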
 