I'm sharing a file that takes a 1280x1280 texture and converts it to CHOPs.
Please share your results: cook time, plus your CPU and GPU models.
Hopefully it will give some estimate and benchmark of GPU memory to system memory transfer on different hardware (Intel vs AMD vs NVIDIA / Quadro vs RTX/GeForce).
Give it 10 seconds and stay on the same window for that duration.
This is a great idea, but you might get a larger set of results if you drop the test to 1280x1280 so anyone can test on any machine easily using Non-Commercial. It’s still large enough to get a good idea of relative performance.
This test actually doesn't really reflect GPU->CPU time particularly well. The TOP to CHOP node is waiting for multiple things to be able to complete its cook.
In particular, it will need to wait for the GPU to complete any pending operations, including latency for ones that have been queued but not yet sent to the GPU, and only then can it wait for the download.
So in this example it’s going to need to wait for the GPU to receive any commands that haven’t arrived yet, which may have latency, then it needs to calculate the Noise TOP, which can be very expensive, then it can begin the download.
A better test would involve the source TOP not cooking, and instead the TOP to CHOP being forced to cook every frame using an Execute DAT calling cook(force=True) on it each frame.
Overall you’re always going to have some latency between when you ask the GPU to do something, and when it actually starts the operation, which will still hide the GPU->CPU bandwidth somewhat.
OK!
I will create another .toe file reflecting your comments.
Thanks, Malcolm.
Update:
After creating another version, I guess the difference will mainly show on weaker hardware, where the time spent generating the noise on the GPU is significant, or at higher resolutions.