AMD Ryzen Threadripper 1900X 8-core CPU for 549 USD

>The 8-core Threadripper has 200 MHz higher base clock than Ryzen 7 1800X. AMD will offer Threadripper 1900X for only 50 USD more, but considering the cost of the whole X399 platform, it may actually make sense to keep the difference low.

>This means that AMD has three Threadrippers ready, with a strong likelihood of introducing non-X parts soon.

>The 16-core Ryzen Threadripper 1950X and 12-core 1920X will be available on August 10th, whereas the 8-core 1900X will be available on August 31st.

videocardz.com/71449/amd-ryzen-threadripper-1900x-price-leaked

Call the ambulance! Intel is sick!


Also

>64 lanes
>quad channel
>no bullshit raid keys

It still made sense to go for a non-X 1700 build IMO.
The 1700 went for $269 and the mobo was $100, so $369 total.

2 x 1700 = $538. So I'd have to pay twice as much for a Threadripper 1900X.
Not going to get twice as much performance. I do like the higher clocks on Threadripper; I'm assuming that's because they are using 2 dies... which is comforting, because each die has its own memory controller and 2 channels of DDR4.
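
Just to lay the napkin math out, here's a quick sketch in Python (prices are only the ones quoted in this thread, nothing else assumed):

# Build-cost comparison using the prices quoted above.
cpu_1700, mobo_b350 = 269, 100
build_1700 = cpu_1700 + mobo_b350   # $369 for CPU + board
tr_1900x_cpu = 549                  # 1900X CPU alone, X399 board not included
print(f"1700 build:      ${build_1700}")
print(f"Two 1700s:       ${2 * cpu_1700}")
print(f"1900X CPU alone: ${tr_1900x_cpu}")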

So, I think a 1700 build is solid for testing out the Ryzen ecosystem.
If it checks out, I'd go straight to a 1950X on X399.
I don't see the value at any other point in the stack.
Either I need a hell of a lot more compute and will pay the premium, or I can be comfy at 1700-scale compute.

Way less cost and risk doing so.
Someone will say just buy a 1900X and X399, but I'd be paying double the price on the CPU and likely triple or quadruple the price on the mobo. It just doesn't work out for the same number of cores. I doubt 8 cores will be memory-starved on dual-channel DDR4. So, there's that.

$369 for a mobo/cpu with all other parts being transferable to X399 is comfy

Extremely impressive clockspeeds there.

Two dies with 4 cores each means lower heat density, so they can push the clocks harder.

STOOOOOOOPPPPPPP

GLUED TOGETHER

You forgot the 64 pcie lanes, quad channel...

For example: I do not need a 12 or 16 core CPU, but I need multiple GPUs installed on the system and multiple NVMe disks.

The only ones worth buying are the 16-core and 32-core, fuck off with this gay shit.

Are threadrippers better for virtualization?

Falling for the PCIe lane meme and having no use for it...
I'm sorry, but even a power user would be hard-pressed to max out dual channel and 24 PCIe lanes. x8/x8 + x4 M.2 and x4 chipset is more than enough: dual GPU, M.2 NVMe, and 32GB of dual-channel RAM.

Multiple GPUs... more than 2? Multiple NVMe? For what exactly? No doubt some people need this, but most don't, and feeding all of that to 8 cores is a bit of a joke.

I don't feel most people are capable of evaluating what exactly their needs are, and thus they overspec because they have the money and can. It's your money, so I have no comment to make. However, one can quickly look at the balance of things and tell someone is gunning for an unbalanced system with bottlenecks, and thus can't be that serious about properly evaluating their needs.

If all you're doing is loading a shit ton of stuff into the GPU and letting it churn away, you don't need x16, much less x8, which is why miners run on 6 baby PCIe lanes and have 6 GPUs hanging off a basic-bitch mobo/processor with 8GB of RAM.

Multiple NVMe? As opposed to just getting a larger-capacity NVMe drive?
You're talking about a system that is clearly in the $2-3k range. Then you say "I have no need for a 12/16-core CPU"... LMFAO, then get a Bitcoin-mining mobo that supports 6 GPUs on riser cards.

Again, I doubt people really understand their needs.

Don't get the 8-core Threadripper.
The cores will be split across two different dies.

>4ghz turbo

i-it's just as I imagined

Brainlets aren't thinking properly..
> Muh quad channel DDR4
> Muh cross-die hop to get to the other 2 channels
> Muh I need 64 PCI-E Lanes but will only feed it to 8 cores...

1950x + gtx 2080 when it comes out

Fuck yus

are you going to wait for 10 years to build your PC?

4 graphics cards for meme learning. Lots of RAM for a ZFS fileserver and NVME drives for caching. There are enough uses.

>200MHz XFR
MUH DICK

Unless they call Pascal 2 the 1180.

>Call the ambulance! Intel is sick!

youtube.com/watch?v=4D8dzlgqEOo

>multiple GPUs? More than 2?

What is Blender? What is Octane? What is raytracing? What is deep learning?

No, it's not about the platform cost, even if it were 500 bucks more...
AMD is simply filling the gaps so that Intel can't find a way to launch a competitive product (HA! competitive...) at a lower price.

Oh yeah, AMD glues dies together.
Meanwhile, at Intel:
intel.com/content/www/us/en/foundry/emib.html
An Elegant Interconnect for a More Civilized Age
Embedded Multi-die Interconnect Bridge (EMIB) is an elegant and cost-effective approach to in-package high density interconnect of heterogeneous chips.

Is that name seriously a reference to Star Wars?

you fail to see the wording

>An Elegant Interconnect for a More Civilized Age

They are so full of themselves I bet no one at Intel realised that Ryzen has launched.

mfw you need such a config to detect a hotdog because your algos/approach are shit-tier

Yes, I was hoping for a 64 PCIe lane X399 version of the 1800X.

>his algorithm needs 1.5-3 years of training to learn to walk even on advanced neural network hardware

>lane X399 version of the 1800X.
It's a 1500X x2.

...

> What is an 8-core bottleneck
> What are multi-die hops for memory access
> What is not needing x16 PCIe to feed the GPU
> What is claiming you're a professional who does such work, yet buying the cheapest processor in the stack, one that costs less than a single GPU
> A $500 difference between the 8-core and 16-core processors, but you go for the 8-core while talking about 4 GPUs: $500 x 4 = $2,000, plus $1,400 in RAM and $600 in NVMe drives.

Sorry, I'm not buying it. And how is everyone suddenly a professional render specialist and deep learning guru? LOL

>Multiple NVMe? for what exactly?

It's cheaper and faster.

2x 512GB 960 Pro: 2 x $279.99 = $559.98
1x 1TB 960 Pro: $632.10
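
Per GB it works out like this (quick sketch; nothing assumed beyond the two listings above):

# $/GB for the two 960 Pro options quoted above.
two_512 = 2 * 279.99
one_1tb = 632.10
print(f"2x 512GB: ${two_512:.2f} total, ${two_512 / 1024:.3f}/GB")
print(f"1x 1TB:   ${one_1tb:.2f} total, ${one_1tb / 1024:.3f}/GB")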

> mfw bruteforce learning needs housefire hardware because no one stopped to think about how to fundamentally approach the problem.
> muh search and convergence algo
> Muh 2TB worth of hotdog pics on a ZFS file system
> converging converging ...

> mfw when brainlets call this AI

>> What is an 8-core bottleneck

>You don't need more than dual core ~ intel shill
>Wait, you need more than 8 cores ~ intel shill

Make up your damn mind.

Silicon is the problem, and considering how much graphene costs, we probably won't get anything out of it for many years to come.

NVMe seems fitting for either a main drive or a cache drive. For bulk storage, I'd expect SSDs followed by mechanical drives. It's called a memory hierarchy. Only brainlets fall for maxing out the top tier of storage. They don't even do this in the enterprise...

Tbh, it seems a lot of people have no clue how to configure these enterprise-grade systems. Thus, you're going to see all sorts of wacky configs, bottlenecks, and choke points.

Balance:
> 8-core, x8/x8 GPU + NVMe + SATA
> Dual channel

> HEDT
> 8-core, x16/x16/x8/x8 + 3 NVMe + SATA
> Quad channel
> Multi-die hop to access PCIe and DRAM hanging off the other die

I'm guessing you brainlets didn't do any due diligence w.r.t. understanding NUMA architectures.

That non-HEDT 8-core is likely going to kick the pants off your HEDT 8-core simply because all the cores are on the same die, as is the DRAM/PCIe lane access...

Enjoy the disappointment when you come to realize what NUMA is.

Shit inefficient algos require lots of compute power.
> But but...
> silicon is the problem

Keep waiting on that silicon, machine learning brainlet.

>inb4 unlocking TR 1900X to 32C/64T
It's 2009 all over again!

Oh yeah, that's the problem...

Not possible if TR is tied to 2 dies.
The other 2 dies literally lead to null traces on the motherboard.
There are no PCIe or DRAM traces to feed them.

I guess they don't teach you the fundamentals behind things anymore....

en.wikipedia.org/wiki/Stochastic_optimization
Brute force statistical optimization...

> Muh silicon

Aren't all 4 dies tied together over IF? I'd like to think the other two dies can access resources over Infinity Fabric. Doesn't give you additional DRAM channels or PCIe lanes, but does give you moar coars.

The performance/latency would be shit, but hey it's two extra dies for free.

It only has 2 dies and 2 spacers. The only time it will have more is if, after bonding 2 dies, one of them is a dud (fewer than 4 working cores). At best you might unlock 2 to 4 more cores.

There better be something in that big ass box besides a processor and a mounting harness.

2 quad core CPUs were a thing just a few years ago.

4? TR doesn't have 4 active dies. Only EPYC does, which is why it has 128 PCIe lanes and octa-channel memory.

TR has 2 dies. In order to get 64 PCI-E lanes, you need 2 active dies. Thus, you're in a proper NUMA config.
- 32 PCI-E lanes and 2 DDR channels feed one die
- 32 PCI-E lanes and 2 DDR channels feed the other die

They are connected via IF. There is extra latency to cross dies. Going multi-die 8 core vs. sticking to one die 8 core is not proper scaling.
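
If you want to see that split on an actual box, here's a minimal sketch (Linux only; it just reads the standard sysfs NUMA nodes, so a single-die Ryzen shows one node and a TR/EPYC in NUMA mode shows one per die):

# List NUMA nodes with their CPUs and local memory by reading Linux sysfs.
import glob, os

for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    with open(os.path.join(node, "cpulist")) as f:
        cpus = f.read().strip()
    with open(os.path.join(node, "meminfo")) as f:
        total_kb = next(int(line.split()[-2]) for line in f if "MemTotal" in line)
    print(f"{os.path.basename(node)}: CPUs {cpus}, {total_kb // 1024} MiB local RAM")

Pinning a job to one node with numactl --cpunodebind=0 --membind=0 is the usual way to dodge the cross-die hop.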

It's comments like yours that show a number of people have no clue what they're buying, and thus can't possibly be serious about performance and scaling.

They were... now you have a single-socket, single-die 8-core. Going 8-core split-die without carefully considering what exactly your workload consists of means you're not serious about why you are doing it.

If someone told me they were going 16core HEDT threadripper to take advantage of the higher scaling, I'd understand.

>4 cores windows OS with 16GB
>4 cores linux VM with 16GB

Nice and comfy.

August 10 confirmed.

I would like to know what the average user runs, outside of video editing, that uses more than 1-2 cores.

The average user is better off just using a phone. Nobody besides gamers and power users is building their own PC.

I CAN'T TAKE THIS ANYMOOOOOOOOOOOORE

now imagine a phone with a downclocked and undervolted ryzen

Every thread
Every single thread

If all you need is 1 or 2 cores then buy a 40 dollar Pentium and fuck off.
YOU DO NOT BELONG HERE.

Better yet, call Intel and tell them they can stop making everything other than Pentiums. Turns out all that R&D was pointless; all anyone anywhere will ever need in all of human existence is 1 or 2 cores, so they can just stop that as well. I'm sure they will happily oblige and watch AMD crash and burn in their futile efforts to increase core counts and lower prices and TDP, while casuals, gamers, HEDT enthusiasts and professionals alike flock to buy Intel's bargain-priced $800 Pentiums, laughing at AMD's vain efforts.

Your just so smart aren't you, you fucking moron.

Henkuma ;∆

I just wish it were actually useful over a 1700 for people who aren't running a business with them and consider the price a negligible investment.

You're*

Google Chrome/Chromium
they spawn a large number of processes for sandboxing purposes.
Firefox is now beginning to go multicore too.
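
Easy enough to check yourself; rough sketch (Linux only, scans /proc; "chrome" is just the example process name):

# Count how many processes a browser actually spawns by scanning /proc.
import os

def count_procs(name: str) -> int:
    count = 0
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                if name in f.read():
                    count += 1
        except OSError:
            continue  # process exited mid-scan
    return count

print("chrome processes:", count_procs("chrome"))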

If you listen to music while word processing and running background programs (like BitTorrent), you can also benefit from more cores.

What's the advantage of the 1900x Kikeripper compared to the Ryzen 1800x?

Moar memory.
Moar lanes.

4x the PCIe lanes of a non-HEDT 8c/16t (64 vs 16)
Quad channel memory

That's really about it. Well worth it if you have need of many PCIe-based devices. I'm personally waiting on benchmarks of the 16c/32t Threadripper before I jump.

>16c all at 4.0ghz

One can dream. Wonder what kind of cooler I'll need?

cinebench

ok where to start

First things first: for the most part, quad channel vs dual channel is mostly a meme. The difference is there on paper and in some workloads, but the extra speed is really only beneficial on tasks that a server would be better suited for.

Now, the 8-core with 4 cores per die being slow? Yes, in gaming it would likely shit itself with inter-die latency. However, most applications where you would be considering a high-end desktop over the consumer platform don't rely on inter-core communication so much; these applications are the ones that show off per-core scaling better than anything else.

Now, PCIe lanes: if you are rendering 3D, this isn't a question. If CUDA accelerates it, 4 980s would be ~8,000 cores and cost $500-600;
4 980 Tis would be ~10,000 cores for ~$1,000.

Now, to get even close to 8,000 you would need 4 1070s, and does that even work? If it does, you need to spend ~$1,200 on it, and to break 10,000 you would be spending over $2,000. And if you can't do more than 2-way, to even approach 8,000 cores you would need to spend ~$2,400.
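
Putting that core-count math in one place (sketch; the CUDA core counts are the published specs, the dollar figures are this post's rough prices):

# CUDA cores per dollar for the multi-GPU options discussed above.
cards = {
    "4x GTX 980":    (4 * 2048, 600),   # ~8,200 cores for ~$600 used
    "4x GTX 980 Ti": (4 * 2816, 1000),  # ~11,300 cores for ~$1,000 used
    "4x GTX 1070":   (4 * 1920, 1200),  # ~7,700 cores for ~$1,200
}
for name, (cores, usd) in cards.items():
    print(f"{name}: {cores} cores, ~{cores / usd:.1f} cores per dollar")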

Now let's say your workload needs high amounts of RAM, which 3D rendering can demand; Threadripper would allow for a cheap path to 128GB of RAM.

Then you have storage. While I would personally wait till NVMe gets larger capacities before I buy another, if you need the space now, you can use it now. RAID 0 them all and you effectively have near 10GB/s read speed, which is orgasmic for video work, which again is better offloaded to the GPU than done on the CPU.

Someone delidded a Threadripper and it had 4 dies on it; whether this is going to be the norm or was just an engineering sample, we don't know.

>Firefox is now beginning to go multicore too
The future is multi-core.
AMD was right all along.

Jokes on you, I'm getting an i9!

pic related

...

Stay mad boxlets.

>$100 for 0.2ghz
>can easily be OCd

> quad vs dual a meme except for specific work loads
agreed

> If the application doesn't require inter-core communication you're fine
Many do, and I question whether the large number of brainlets interested in this platform at 8 cores aren't going to be using it in cases that require exactly that. They are already having issues on Ryzen in cases where processes end up on the other CCX. So, for the professional who knows what this platform is and why their workload will run well on it, they're fine. For the brainlet getting this over the Ryzen 8-core because it has more PCIe lanes and DDR slots, they're fucked and in for a surprise.

> Now, PCIe lanes: if you are rendering 3D, this isn't a question.
Explain to me how this works. Is the CPU maxed out at the same time as the GPU for GPU-bound rendering? No... you're simply piping crap over to the GPU and letting it run, with little traffic back and forth. That's why miners use shit-tier processors with 8GB of RAM and PCIe x2 risers attached via USB cables to PCIe 3.0 x16 cards... you ship the task over to the GPU and run it. You don't need x16 for that. Correct me if I'm wrong.
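
For the bandwidth side of that question, the raw link numbers are easy to eyeball (sketch; ~985 MB/s per PCIe 3.0 lane after 128b/130b encoding is the usual figure):

# Rough effective PCIe 3.0 bandwidth per link width.
PER_LANE_GB_S = 0.985  # GB/s per lane after encoding overhead
for lanes in (1, 4, 8, 16):
    print(f"PCIe 3.0 x{lanes}: ~{lanes * PER_LANE_GB_S:.1f} GB/s")

Whether a render or training job ever saturates x16 is exactly the workload question being argued here; mining famously doesn't come close.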

> Now let's say your workload needs high amounts of RAM, which 3D rendering can demand; Threadripper would allow for a cheap path to 128GB of RAM.
Now we're getting somewhere. But why would you need high amounts of RAM for GPU-bound rendering? You use GPU memory, no? Give me some good sauce on what GPU rendering traffic actually looks like between the CPU, GPUs, and RAM.

> RAID 0 NVMe
Please stop with this bullshit.

> which again is better offloaded to the GPU than done on the CPU.
Take a hint from miners for GPU-bound workloads: you don't need big-ass communication channels to the GPUs. And if your CPU isn't doing that much work, neither do you need shit tons of RAM.

Again, correct me if I'm wrong here... maybe I'll learn something I didn't know.

Serious question, why get Ryzen when a 7700k still outperforms them in video games?

Non-vidya-related tasks.
Why do people buy platforms that are not suited for their particular use case? Brainlets spend little time researching/understanding the nuances. They see a spec sheet, get hyped by an e-celeb, and are sold.

That being said, there's also future-proofing. Vidya is going to get tuned past quad cores in due time.

Thread brapper's appeal is the platform and lanes.

Miners solved the lane issue by understanding that nothing they run uses anywhere near x16 worth of lane traffic, and thus went for boards with x4 slots and risers to x16-capable cards. I'm wondering when so-called professional brainlets with GPU-bound compute tasks are going to catch up to this revelation. If you're getting Threadripper for moar x16 lanes to GPUs for basic-bitch rendering, you're a brainlet.

And when the GPU's local memory is maxed out and it has to constantly push and pull from main system memory, what then?

It wasn't supposed to end like this, not like this...

Just drop it on the son of a bitch.

Skylake-X/Purley Xeon's mesh needs NUMA optimization as well; stop pretending this is somehow an AMD-only problem.

Why are you even posting this in an HEDT thread?

>Serious question, why get Ryzen when a 7700k still outperforms them in video games?
Not with 3600mhz ram, it doesn't.

Most applications you are going to want more PCIe for, over more cores, are not going to hammer the CPU the same way, say, a video game does, where it hammers thread 0 and then uses some of 1-12. If you send it the task of rendering something out, it's handled on-core; it's why these loads can skew more toward multicore than single-core. But as I said before, you are likely going to be able to offload it to something like CUDA, so the PCIe is beneficial there.
Not going to lie, with rendering I'm not sure how necessary it is to have it at x16; however, I do know there are real-time demos that use 48GB of VRAM, so it's possible data is piped to the GPU.
Now, on another note, there are also physics engines and such that I'm not sure accelerate on GPUs, since you set the detail in them stupidly high and bake the physics. Then there are programs like ZBrush, damn near the only program of its kind, which I believe uses the CPU exclusively, so there is merit in having a good CPU there; that, and the extra RAM quad channel allows would let you keep more data loaded at a time.
For 3D modeling, unless you are working toward optimization, you can easily end up with 16+ textures, all between 512 and 32k, all in uncompressed formats, for each and every object. That adds up fairly fast. Then you also have the model itself, which contains the data for every single point, and unless you optimized everything it's likely 1 million polys or thereabouts, which itself is fairly large.
And with video editing, to load large videos (hundreds of GB for ~30 minutes), fast storage is king, which is why NVMe RAID 0 is a thing. You likely won't have the RAM to house the videos, and while editing, jumping around is going to happen a lot. Then not everything in the process is GPU-accelerated, so it's a mixed workload, but generally the GPU does the very heavy lifting here. Now streaming: I know CPU >>>>>> GPU for quality per bit, but I'm unsure if that's also the case for properly rendered video.
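
To put the "textures add up fast" point in numbers (rough sketch; the map counts and resolutions are illustrative assumptions, not figures from this post):

# Uncompressed RGBA8 texture memory: 4 bytes per texel.
def tex_mib(side_px: int) -> float:
    return side_px * side_px * 4 / 2**20  # MiB for one square map

for side in (512, 2048, 4096, 8192):
    per_map = tex_mib(side)
    print(f"{side}x{side}: {per_map:,.0f} MiB per map, "
          f"{16 * per_map / 1024:,.1f} GiB if an object carries 16 such maps")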

That works when you are sending small data to the GPU for the GPU to crunch, then sending a small result back.

If you are sending big data regularly, it's not an option that makes sense.

Outperforming... at 5 GHz...

Just add the luck, plus the money for cooling solutions against such a housefire, plus the power consumption,

et voilà

this is why AMD Ryzen is beating Intel all the way up the arse.

Clearly this is a GPU bottleneck. You don't do CPU comparisons on Ultra graphics settings.

Because I have shit constantly processing shit, and when 20% of an Intel CPU is used, that's 20% shaved off all max frame rates and shitting the bed on the small ones.

However, that 20% on Intel becomes 10% on Ryzen, and then running a game it never even wants to touch the core that's using 10% anyway, so my gameplay is left unfucked with.

Should note that the AMD is also overclocked, but it affects the processor less than what it takes to get 5GHz out of Intel.

I was going to make another thread for this but whatever.
The 1900X is $550 and comes with quad channel memory support, and hopefully supports higher memory frequencies as well.
Infinity Fabric is reliant on memory frequency; it runs at half your RAM's rated transfer speed, i.e. at the actual memory clock.
We know Ryzen is amazing at 60Hz, but the 7700K is still superior for 144Hz gaming, and when you get your FPS that high you generally run into memory speed as a limiting factor. Upgrading to 4000MHz DDR4 was huge for me on my 6700K 144Hz machine.
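
The relation being described is simple enough to write down (sketch; assumes the usual first-gen Ryzen behaviour where the fabric clock tracks the real memory clock):

# DDR rating -> memory clock -> Infinity Fabric clock, as described above.
for mts in (2400, 2933, 3200, 3600):
    memclk = mts // 2  # DDR is double data rate, so the real clock is half the MT/s rating
    fclk = memclk      # Infinity Fabric runs at the memory clock on Ryzen
    print(f"DDR4-{mts}: MEMCLK {memclk} MHz, Infinity Fabric {fclk} MHz")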

How do you guys think a 4.2GHz (they said it does that stock with XFR) 8-core Ryzen with quad channel memory will perform for high-refresh gaming? I think it's going to outperform the 1700/1700X/1800X by a fair bit, and those PCIe lanes are fukkin juicy.

That's right. But 5 GHz on Intel is much harder to obtain than 3.9 GHz on a Ryzen 7. Also, the Ryzen won't heat up that much, and the Intel will only be able to withstand this heat when delidded.

must be opposite day

>You don't do CPU comparisons on Ultra graphics settings.
Ultra adds additional CPU overhead, dipshit. Stop with this fucking "hurr durr benchmark at 720p low" meme shit.

>1080 Ti
>1080p
>GPU Bottleneck
?

I think it's too early to jump on the octacore bandwagon

An overclocked i7 (even an i5) will shit all over Ryzen for another couple of years yet

Obviously "media creators" are a different case

>shit all over
>20 extra FPS in GTA V
>at well over 100 FPS
>at 1080p with a 1080 ti

Fuck off.

>An overclocked i7 (even an i5) will shit all over Ryzen for another couple of years yet
R7 with 3600mhz ram basically ties a 5ghz 7700k in Triple A games.

>MUH BATTLENAWCKZ!
The test was done with a 1080 Ti

The i5 7400 outperforms all the Ryzens in GTA V at 4K.

For God's sake, are you a retard or something?

There is no difference at 4k, you're completely GPU bottlenecked at that point.

Epyc was already tested and kicked serious ass, and TR is basically the same, so I'm afraid you'll have to put your FUD to rest. anandtech.com/show/11544/intel-skylake-ep-vs-amd-epyc-7000-cpu-battle-of-the-decade

Yes, every GPU can become a bottleneck at every resolution if the CPU is strong enough.