
What we think about it:
Car-face - "The performance delta between the 16 GB and 8 GB models is within 0.5%"
sluuuurp - "Depends on your purpose. If you’re doing machine learning inference of large language models, that extra 8GB will make a huge difference."
FUTURE10S - "No, but he's saying the performance loss from the extra VRAM is within statistical error. Yeah, if you're doing stuff where you're capped by VRAM, you'll lose performance on the 8 GB card, but that wasn't what these tests were about; they were to see whether the 4060 Ti 16 GB is slower than the 4060 Ti 8 GB."
_Lucille_ - "Someone doing enough ML wouldn't be using a 4060."
sluuuurp - "Some people want to do ML without spending a ton of money. I do it on my laptop."
_Lucille_ - "They will def want at least a card with 256bit bus."
Moscato359 - "Most ML work is not as intensive as you think it is"
Oscarcharliezulu - "It’s a big jump over older cards on a dollar-for-dollar basis. I’m getting by with an A2000 12GB - it’s a lot better than not having one!"
bbatwork - "Someone like me who doesn't want to shell out $1200 for a better card just might."
_Lucille_ - "There is a giant gap between 4060ti and a $1200 card."
Moscato359 - "Buying used graphics cards is like gambling"
_Lucille_ - "Used cards are commonly traded and documented, esp crypto miners who normally undervolt the cards."
rileyphone - "Would be using a lab at their college."
LimerickJim - "*cluster"
[deleted] - "Well you don't use clusters for experiments, you use them for simulations..."
Moscato359 - "That is absolutely false, I've already spoken to multiple people with plans to do so."
homer_3 - "No, you could buy 2 of the exact same model card and get more than a 0.5% difference. Not every card performs exactly the same."
RiftHunter4 - "Yeah, I was going to bring this up. Stable Diffusion and some other locally run AI models run much better on 12 or 16 GB."
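For context on RiftHunter4's point: on 8 GB cards, Stable Diffusion users typically enable memory-saving modes that trade speed for VRAM headroom, which 12-16 GB cards can usually skip. A sketch using the Hugging Face diffusers API; the model ID and prompt are illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load in half precision to roughly halve the weights' VRAM footprint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Memory-saving modes an 8 GB card often needs; both cost generation speed.
# On a 12-16 GB card they can be left off, hence "runs much better".
pipe.enable_attention_slicing()    # compute attention in smaller chunks
pipe.enable_model_cpu_offload()    # park idle submodules in system RAM

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("out.png")
```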
Omikron - "That's probably an insanely small percentage of users."