How much does an IBM z cost?

Can one gut it out and turn it into a treehouse?

greater than $100,000 for the lowest end
And it probably ranges to $5,000,000+ for the highest end.

It phones home to IBM. You get billed by the amount of CPU time you've used.

You pay for the hardware one time at acquisition if you buy the LinuxONE variety.

It won't run z/OS or IBM mainframe workloads though, just thousands of Linux VMs.

I want an IBM treehouse...

>You pay for the hardware one time at acquisition if you buy the LinuxONE variety
And how much is that?

At least $300,000.

these can run minecraft btw.

IBM hardware can run any Java shit totally fine.

And what kind of Linux does it run?
I really like how IBM is a closed box... you can't fucking find out how any of its shit works.

>How much does an IBM z cost?
easily a million or more if you buy it outright, but most companies just lease them for god knows how much
>Can one gut it out and turn it into a treehouse?
it would be a tiny ass treehouse and a waste of really neat hardware desu

some day I want to buy a used S/390 or Z and set it up for public educational use

Ubuntu, Suse, or Red Hat.

>Ubuntu
they have a port? seems kind of odd

>some day I want to buy a used S/390 or Z and set it up for public educational use
This would be great. I want to program for mainframes, but I need some type of sandbox to try things out in.

The CPU instruction set is publicly documented; it's got a software development manual. Therefore GCC can compile to it and Linux can be targeted at it.

Serious question: what are mainframes used for? I didn't realize these were still relevant after the '60s. Why would you use one of those instead of AWS or something? Just price? Seems like a bad idea to put all your eggs in one very expensive physical basket instead of a distributed virtual one.

Banks, PoS processing, credit cards. Anything that requires a centralized database and enough performance to handle millions of transactions per second.

I want to buy one, install Gentoo on it, rice the shit out of it and use it to generate tripcodes all day. I would become the ultimate meme master on Sup Forums

Banks and government agencies have existing programs from the mainframe era.

It would cost too much money to rewrite and re-certify them.

Also, IBM mainframes are incredibly fast at transaction processing.

why not just run a server farm? I agree it seems kind of strange to put all your stuff in one place instead of distributed.

can it run crysis?

probably not, considering it has no GPU.

Security, for one

Shit looks like it came out of Deus Ex.

A server farm would probably end up being more expensive than a centralized system. All that data getting passed around adds up.

What if this fails though? I mean, with a server farm, if one server fails the system can keep going and the servers can be replaced/fixed, but with this the risk is totally consolidated.

why does the front look so silly? IBM is supposed to be the gods of classy, understated hardware

I think it looks pretty cool honestly, like out of a sci-fi movie

IBM z has ridiculous amount of redundancy.

It would take the building housing it collapsing for it to fail.

They have videos where they do earthquake tests on it. A System Z is probably some of the most reliable hardware out there.
youtube.com/watch?v=kmMn5Q_lnkk

I'm genuinely interested, how do they ensure the reliability?

computers shouldn't look "cool" they should look professional

I get the whole understated thing, and I hate the way gaming computers look now, but this is sort of flashy and understated at the same time

Backup hardware that goes online when components fail is one way they do it.

So is the IBM z sort of a small server cluster? (I mean small as in the space it occupies, not the data it can process.)

Kind of, it also has much higher single thread performance than any x86 system. Also the programming model is pretty different.

youtu.be/ErZRSQoTFXw?t=11m32s

They put the whole thing on an earthquake simulator and shake it to find out what component fails first then fix the design of those components.

www-01.ibm.com/common/ssi/cgi-bin/ssialias?subtype=SP&infotype=PM&htmlfid=LXD02020USEN&attachment=LXD02020USEN.PDF

same, it would probably be pretty expensive to run, but big iron seems like a pretty fun novelty for teaching and introducing people to the weirder IBM ecosystem, and it's also something that needs to be better preserved for historical reasons

they're good at doing a shit ton of jobs concurrently, correctly and securely, with decade-long uptimes and ridiculous failure tolerance. pretty much everything on them is redundant; they can tolerate multiple failures of critical components, and even some system functions are redundant as well. IIRC modern Zs execute the same instructions multiple times and compare the results to ensure the integrity of the result

on top of that they're also ridiculously compatible and can run unmodified code going all the way back to the S/360
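
That lockstep idea can be sketched in a few lines. This is just a toy Python model, not how the hardware actually wires it up; real z cores compare results in silicon and retry or fail over to spare hardware on a mismatch:

```python
def lockstep(op, *args):
    """Run op redundantly and cross-check the results, lockstep-style."""
    a = op(*args)  # result from "unit" A
    b = op(*args)  # result from "unit" B
    if a != b:
        # Real hardware would retry the instruction or fail over to a
        # spare core; in this sketch a mismatch just raises.
        raise RuntimeError("lockstep mismatch: possible hardware fault")
    return a

print(lockstep(lambda x, y: x + y, 2, 3))  # 5
```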

g-dspeed

well, think about it: is it really a better deal in the end to invest in a datacenter's worth of servers, racks, individual software licenses, peripherals, cooling equipment, failure points, etc., when you can squish the whole thing into a single system that can virtualize all of it in a rack or two and do it right?

how does it have such high single thread performance? I thought the highest would be on something like a 7700K because of the clock speed. Is it water cooled?

>clock speed
well Zs were clocking at 4.5 GHz.... three years ago? who knows what the latest generation runs at now

it's got 4 GB of CPU cache for one; for comparison, I think the top x86 chips have 20 MB of cache. Also they use far more power than a typical x86 chip and have a bigger die size, allowing for much faster clocks.

at least 5 GHz

It has an insane CISC instruction set and it's able to do in one or two cycles what takes x86 processors 10+.
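
A concrete example of that kind of instruction: z has TR (TRanslate), which maps every byte of a storage operand through a 256-byte table in a single instruction, something x86 would classically loop over byte by byte. Python's `bytes.translate` has the same semantics, so a rough sketch of what one TR does:

```python
# Build a 256-byte table that uppercases ASCII, like the table operand
# of the z TR instruction. A single TR then translates the whole string.
uppercase_table = bytes(
    b - 32 if 0x61 <= b <= 0x7A else b  # 'a'-'z' -> 'A'-'Z', rest unchanged
    for b in range(256)
)

data = b"ibm z13 mainframe"
print(data.translate(uppercase_table))  # b'IBM Z13 MAINFRAME'
```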

>4GB of CPU cache
Jesus

I work at a company that concerns itself with IBM mainframe software. Ask me anything.

What sort of work is typically done on a mainframe these days? I imagine financial stuff, but what else? Anything that I might not expect?

Do you have to use Eclipse for everything since it's IBM shit?

Are mainframes actually secure? No one has done many pentests or hacking on them since they are so expensive.

I'm sure I'm breaking my NDA here, but fuck it. The devs are still writing assembly code on a fucking terminal emulator to this day. I'm actually in the process of writing an Eclipse plugin for System/370 assembly, because we're too cheap to actually buy the Rational suite.

We're actually a middle-ware provider. Our core product is a database and a query programming language hybrid that is essentially a better COBOL. I'm not quite sure what people use it for, but our biggest client is a massive huehue bank.

As far as I know, there are hardware protections against stack corruptions or some shit.

>The devs are still writing assembly code on a fucking terminal emulator to this day.
Is that because it's too hard to write a decent compiler for such a CISC architecture? I can't even imagine writing the LLVM or GCC optimizations for that.

>and can run unmodified code going all the way back to the S/360
That's gotta be a bitch to do on IBM's end

where did you get the 4GB figure?
en.wikipedia.org/wiki/IBM_z13_(microprocessor) says 64MB L3 cache

>(((O MY GOD, spot the jew cap at 0:35)))

It's doable. Just look at x86, they can still run 8086 code.

Saw it in the marketing video posted by

It's because our software still has legacy assembly code from the 1970s, even though IBM provides C and C++ compilers (sadly only C++98, which is a complete joke). I'm pretty certain that the majority of the development team doesn't even know any high-level languages well enough to replace the assembly code. They all believe that they can optimize better than any compiler ever could. Given that some of them have been there for 20 to 30 years, I'm just going to believe them. Hell, they know what the code does by looking at the machine code itself, they can disassemble inside their heads.

>They all believe that they can optimize better than any compiler ever could. Given that some of them have been there for 20 to 30 years, I'm just going to believe them. Hell, they know what the code does by looking at the machine code itself, they can disassemble inside their heads.
Any of them named Mel?

Sounds like you work for the cool part of IBM. I'm ex-IBM myself but quit because I was in a soul-sucking Java monkey job with coworkers who only lived for coffee breaks and maybe chatting about football occasionally. It was a pity, because I knew that at least some parts of IBM did cool stuff.

No? Is that supposed to be some sort of pop culture reference or something?

I don't work for IBM.

that's the total cache for the whole frame, not for each cpu

That yamulke-wearing Jew is everywhere if you search for IBM z/Systems on YouTube.

>Is that supposed to be some sort of pop culture reference or something?
catb.org/jargon/html/story-of-mel.html

Probably to pander to their israeli customers.

Oh, I'm an idiot, I guess it would make 100x more sense to encounter someone who worked somewhere that used IBM products than someone who actually works on z. Too much vodka I guess

>No one has done many pentests or hacking on them since they are so expensive.
That's kind of a ludicrous statement considering most companies running mainframes are pretty high-profile targets that are definitely going to be at the receiving end of many attacks. Pentesting is also a pretty routine practice that companies who know what they're doing conduct on themselves regularly.

IBM implements a lot of stuff in hardware, like crypto acceleration, more advanced memory protection and virtualization-centric enhancements that other platforms don't have.

>Is that because it's too hard to write a decent compiler for such a CISC architecture?
How would it really be difficult? Having a wider range of specialized instructions to accomplish specific tasks makes the job easier if anything, instead of figuring out how to break it down into smaller operations.

The bigger advantages of a reduced, simpler instruction set were simplicity and efficiency more than anything; older CISC designs really tried to make it easy on the compiler writers by making the ISA more like a high-level language.

>How would it really be difficult? Having a wider range of specialized instructions to accomplish specific tasks makes the job easier if anything, instead of figuring out how to break it down into smaller operations
Theoretically, yes. But a compiler needs to semantically understand each instruction, and what operations map to it during optimization. The more complex the instruction set, the harder it is to write the optimizations.

>How would it really be difficult?
Don't know about these in particular, but some of the most complex architectures (including IA-64, I think) had instruction sets that could pack multiple "instructions" into a single very long instruction (called VLIW if you want to look it up). This could lead to good optimization, but would require some serious loop unrolling which a lot of compilers simply couldn't do that well.
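
The bundling half of that is easy to sketch; the hard part is unrolling loops far enough to find independent work to fill the slots. A toy Python packer, assuming a made-up 3-slot bundle and ops given as (destination, sources):

```python
BUNDLE_WIDTH = 3  # hypothetical 3-slot VLIW bundle

def pack(ops):
    """Greedily pack (dest, srcs) ops into bundles, breaking on hazards."""
    bundles, current, written = [], [], set()
    for dest, srcs in ops:
        # Hazard: this op needs a result produced inside the current bundle.
        hazard = any(s in written for s in srcs)
        if current and (len(current) == BUNDLE_WIDTH or hazard):
            bundles.append(current)
            current, written = [], set()
        current.append(dest)
        written.add(dest)
    if current:
        bundles.append(current)
    return bundles

# a and b are independent and share a bundle; c depends on a, so it
# has to wait for the next bundle.
print(pack([("a", []), ("b", []), ("c", ["a"])]))  # [['a', 'b'], ['c']]
```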

how come consumer CPUs don't use this instruction set?

I could see it, though I'm not terribly knowledgeable about the deeper aspects of compiler voodoo.

VLIW is a totally different animal from something like z/Arch or x86 though; IA-64, as far as I'm aware, was mostly shitty because it relied too much on "compiler magic" to predict things that were far too hard to predict effectively.

How much would one use power?

>I could see it, though I'm not terribly knowledgeable about the deeper aspects of compiler voodoo.
A compiler is just a pattern matcher. Since you can write code which is functionally equivalent to CISC instructions in a lot of different ways (potentially unlimited), you need to do some serious number crunching to ensure the pattern matches and is correct. Theoretically you can get perfect compilation with a theorem prover like Z3 combined with instruction synthesis, but that is NP-complete. So instead compilers rely on shortcuts and heuristics, which means the more complex the architecture is, the more heuristics need to be written.
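
A toy version of that pattern matching, with a made-up IR and instruction table. The point is that every extra complex instruction is another pattern (plus the analysis to prove the match is semantically safe, which this sketch skips entirely):

```python
# Longest patterns first, like a peephole instruction selector would order them.
PATTERNS = [
    (("load", "add", "store"), "ADD_MEM"),  # one CISC memory-to-memory add
    (("add",), "ADD_REG"),
    (("load",), "LOAD"),
    (("store",), "STORE"),
]

def select(ir):
    """Greedily tile a list of IR ops with the patterns above."""
    out, i = [], 0
    while i < len(ir):
        for pat, insn in PATTERNS:
            if tuple(ir[i:i + len(pat)]) == pat:
                out.append(insn)
                i += len(pat)
                break
        else:
            raise ValueError(f"no pattern for {ir[i]}")
    return out

print(select(["load", "add", "store", "add"]))  # ['ADD_MEM', 'ADD_REG']
```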

Can't think of a VLIW chip that has ever been good; the Itanium especially was miserable in consumer roles and only reasonable for supercomputing/HPC, where its kickass FP performance made up for the braindead compilers.

Sounds pretty interesting, I'm assuming in essence you're saying that to take advantage of more specific instructions it also requires more specific cases and thus more work to figure out what works and doesn't work with those cases?

>I'm assuming in essence you're saying that to take advantage of more specific instructions it also requires more specific cases and thus more work to figure out what works and doesn't work with those cases?
Exactly.

so it technically is only leased then?

there are too many branches

they have the money and time to write their archs and design their silicon

because you would need to rewrite everything to get the full advantage of that arch and no one is going to bother unless it gets mass adoption

youtube.com/watch?v=vuXrsCqfCU4

For anyone interested in seeing what goes into an "older" mainframe from about a decade ago.

>this is ibm what do you need?
>user needs to buy the z model for tripcodes.
>*click

it's kind of sad to see stuff like this being called "junk" even if it's definitely a little difficult to justify using for anything other than historical purposes anymore

the engineering and design work that goes into high-end stuff like this is incredible and even years after its heyday it's nothing to laugh at

How do I get a job at IBM?

Be Indian or a lawyer.

or a sys admin specializing in OSX

Most hospitals still use them. We just got a new one desu.

Been "getting off mainframe" for 7 years now.

t. healthcare IT professional

>Been "getting off mainframe" for 7 years now
Is it starting to become cheaper to just use outsourced clusters nowadays?

massive redundancy
like, you have 2 (or 3, 4, 5) PSUs; if one fails the others can cover the load easily.
3 fans, if one fails it doesn't matter since you only need two.
RAID 1, etc.
multiple CPUs.
so if one component fails it doesn't matter because the other(s) are plenty. You can also hot-swap pretty much everything, i.e. install a new CPU while the system is running.
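
That N+1 arithmetic is simple enough to model. A toy availability check in Python, with made-up unit counts matching the examples above:

```python
SUBSYSTEMS = {  # name: (units installed, units required to stay up)
    "psu":  (3, 1),
    "fan":  (3, 2),  # three fans, only need two
    "disk": (2, 1),  # RAID 1 pair
}

def still_up(failures):
    """failures: dict of subsystem -> number of failed units."""
    return all(
        installed - failures.get(name, 0) >= required
        for name, (installed, required) in SUBSYSTEMS.items()
    )

print(still_up({"psu": 2, "fan": 1}))  # True: 1 PSU and 2 fans still cover it
print(still_up({"fan": 2}))            # False: only 1 of the 2 required fans left
```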

I think the plan is to move everything into (((Epic Systems™))).

We already do lots of stuff through it now.

so the correct meme word would be IaaS (infrastructure as a service)?

Nah, we run our own servers, but (((Epic Systems))) comes and sets everything up or does big upgrades. They have a team that just flies around the country doing this shit. I heard those guys get burned out really fast.

We run Epic through Citrix desktop because it's cheaper to run than fat clients. Works like shit too.

Eventually I think they want to do everything financial through Epic instead of mainframe.

>but would require some serious loop unrolling
what makes you think that?
x86 is so popular because it's x86, you can use existing software without recompiling (and porting), not necessarily because it's the fastest. Note that due to patents only Intel, AMD and VIA are allowed to build x86 CPUs

>That's kind of a ludicrous statement considering most companies running mainframes are pretty high-profile targets that are definitely going to be at the receiving end of many attacks.
It's true though, there is little pentesting for mainframes. There's a lot of vendor trust ("IBM does everything right") among other things.
Philip Young gave a lot of talks about it:
youtube.com/watch?v=SjtyifWTqmc

>IBM does everything right
When will this meme end?

probably never, they have very good PR apparently

tell me about this, what's cool about it, and how many FPS could I get in CSGO? also why does IBM seem to do fuck all

>tfw you fell for the 10 TB meme

Better comparison would be that 4 GB is not unusual for the total RAM of a PC in 2016

Back in like the 80s or so, the machine instruction encoding was still dictated by the hardware architecture; in other words, the binary code for an instruction was selected based on what was easy to implement efficiently in hardware, with yes/no branches depending on whether a bit was set in the instruction word. Nowadays that's less of a consideration, so it's fairly easy to have "hardware-level emulation" where a processor can execute the same instruction set (not just the same assembly instructions, but the same binary encoding) even if it's a different underlying architecture.
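
The S/360 RR format is a nice illustration of such a pinned-down encoding: one halfword holding an 8-bit opcode and two 4-bit register numbers, the same bits on every machine in the family. A minimal Python decoder for a few opcodes (0x18, 0x1A and 0x1B are the documented LR, AR and SR opcodes):

```python
MNEMONICS = {0x18: "LR", 0x1A: "AR", 0x1B: "SR"}  # tiny subset of RR opcodes

def decode_rr(halfword):
    """Decode an S/360 RR-format instruction: opcode(8) r1(4) r2(4)."""
    opcode = (halfword >> 8) & 0xFF
    r1 = (halfword >> 4) & 0xF
    r2 = halfword & 0xF
    return f"{MNEMONICS[opcode]} R{r1},R{r2}"

print(decode_rr(0x1A12))  # AR R1,R2 -- add register 2 into register 1
```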

>tfw you fell for the 100 TB meme
source: ganglia.wikimedia.org
note that the cpu count likely refers to CPU cores, not individual CPUs

I was going to tl;dr that shit but I'm about 10 minutes into it and it's pretty cool. Hell, just all the different outward-facing systems he's coming up with in the slide backgrounds is kind of cool to look at.

To be fair, he did seem to say that the use of 0-days was pretty much unprecedented, and since the world seems to keep turning without collapsing into upheaval from frequent breaches and abuse of those breaches, they seem to be doing something right, or at the very least security through obscurity is working very much in their favor.

It still seems pretty strange though, I still just can't imagine that there are major banks and other companies with valuable information on these systems that aren't throwing even a bit of money at pentesting.

>It still seems pretty strange though, I still just can't imagine that there are major banks and other companies with valuable information on these systems that aren't throwing even a bit of money at pentesting.
In another talk he mentioned an anecdote that went something like this
"hey, can i portscan our mainframe"
"sure, but if it goes down you're fired"
they do throw tons of money at pentesting, just not at pentesting their mainframe, since the "IBM does everything right" mentality is very widespread. In another talk he talked a lot about how it's a very obscure system (unique terms for various stuff and often unique formats as well), and the communities are often private or hostile and unhelpful. Additionally I'm guessing it's hard to (illegally) find up-to-date versions of z/OS and mainframe software.