>>58783754

>Cisco identified a problem
>Will replace items under warranty
>They're finished and bankrupt
OP proved how retarded he was yet again

>major core network products
>many people will not see this notice
>routers and switches randomly dying and being bricks
>man-hours to replace devices

Cisco will forever be known as that one company that made brick routers after 18 months of uptime

>many people will not see this notice
>because they never notify people with TAC contracts

They'll send emails out, tech news sites will cover it, spread by word of mouth, etc

i work for cisco
i have one of the largest accounts impacted by this in the usa
everything is fine
it's not a cisco issue, it's a third party component.
i'd be more worried about other gear from vendors who haven't said anything

Ditto
Working on scheduling so many RMAs for the coming weeks on my accounts.
The only thing that this is, is slightly annoying.

avaya switches had a software glitch where, after like 370 days of uptime, they would lock up. guess how we found out about that on our core switches?
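if you don't want to find out the same way, something that yells when sysUpTime gets close to the magic number is cheap insurance. rough sketch only - the hostnames and community string are made up, and it assumes net-snmp's snmpget is installed:

#!/usr/bin/env python3
# sketch: flag switches whose SNMP sysUpTime is creeping toward a known
# lock-up threshold (the ~370 day figure above). hosts/community are placeholders.
import subprocess

THRESHOLD_DAYS = 370       # claimed lock-up point
WARN_MARGIN_DAYS = 30      # start nagging a month out
HOSTS = ["core-sw-1.example.net", "core-sw-2.example.net"]   # placeholders

def uptime_days(host, community="public"):
    # sysUpTime.0 (1.3.6.1.2.1.1.3.0) comes back in hundredths of a second;
    # -Oqvt should make snmpget print just the raw tick count
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", community, "-Oqvt", host, "1.3.6.1.2.1.1.3.0"],
        text=True)
    return int(out.strip()) / 100 / 86400

for host in HOSTS:
    days = uptime_days(host)
    flag = "schedule a reload" if days >= THRESHOLD_DAYS - WARN_MARGIN_DAYS else "fine for now"
    print(f"{host}: {days:.0f} days up - {flag}")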

but holy shit i feel bad for everyone who has to replace all of their affected cisco gear in production, what a fucking nightmare

if you lived through the VCCP memory crunch of 2014, this isn't so bad
there's still a plaque in the woodcliff lake office that says 'thanks for the memorys - VCCP 2014' on it. makes me lulz.

RMAs are customer scheduled though, aren't they?

it's specifically a mid-range branch router. the richest companies will have two of these per site, but by far the median is 1. if you have a 4200 or a 4400 or a G2 you're unaffected. it's actually not as many boxes as you'd think, is what i'm getting at
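if you want to know whether a given box is even in scope, pull 'show inventory' and check the PID against the field notice. rough sketch - the AFFECTED_PIDS entry below is a placeholder, fill it in from cisco's actual notice:

import re

# placeholder only - populate this from the real field notice
AFFECTED_PIDS = {"ISR4331/K9"}

def pids_from_inventory(show_inventory_text):
    # inventory lines look like: PID: ISR4331/K9     , VID: V01 , SN: FDO12345678
    return set(re.findall(r"PID:\s*([^\s,]+)", show_inventory_text))

def is_impacted(show_inventory_text):
    return bool(pids_from_inventory(show_inventory_text) & AFFECTED_PIDS)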

>tfw 5 year old switches at work
lel, feels good knowing our shit won't just up and die.

Just a red badge for a partnered managed service provider.
We have to do all the scheduling.

can't differentiate a router from a switch
stay helpdesk user

>switches aren't affected
Kill yourself.

This is a good point. If it's just an issue with a standard component... who else uses the component?

>CISCO

Their shortening is shit anyway, I always buy store brand. Makes better biscuits.

if you have a 9504 enhanced switch in a closet, you're a fucking retard. it's a datacenter spine, not an access switch. if an entire enterprise had more than a few of them i'd be surprised, given one pair can handle 144 40G leaf nodes
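(for the math, assuming the 36-port 40G line cards: 4 payload slots x 36 ports = 144 x 40G per chassis, so a redundant pair gives each of 144 leaf switches one uplink to each spine)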

one of my customers uses them but they're not using the enhanced model so they're not impacted
stay basic

Slightly annoying is having an SFP fail.

Replacing a fucking core switch is not.

>it's a big deal to copy a config file and move a couple dozen to a couple hundred cables during a maintenance window
the hardest part would be lifting the 9K out of the rack
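and the config-copy half really is the trivial bit. minimal sketch, assuming netmiko (pip install netmiko) - host and creds are placeholders:

from datetime import date
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_nxos",     # a 9K here; use "cisco_ios" for the ISRs
    "host": "old-9504.example.net",  # placeholder
    "username": "admin",             # placeholder
    "password": "changeme",          # placeholder
}

conn = ConnectHandler(**device)
config = conn.send_command("show running-config")
conn.disconnect()

# stash a dated copy before you pull the old chassis
with open(f"old-9504-{date.today()}.cfg", "w") as fh:
    fh.write(config)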

>a couple hundred
>not a big deal

these posts do nothing but show how few people actually work in the real world

but i guess going in to work at 2am on a sunday morning for hours to replace 2 fuckhuge boxes is 'not a big deal'

>but i guess going in to work at 2am on a sunday morning for hours to replace 2 fuckhuge boxes is 'not a big deal'
data center ops guys are usually hourly. i'm sure they'll be happy for some overtime pay.

>t. i don't have a job

maybe the world in which you work isn't real enough, pussy

like i said, only people who are jobless would think it would be fun to change out just one of those stacks.

I'm very confused as to what is even going on in this picture. You have a couple of X2 modules on the supervisor of one 6504 going to another which doesn't even have cables running to its line cards. And then all the copper lines are going to other 6504s which seem to be otherwise mostly unconnected to anything else.

just so we're clear, a 9504 is a 4-card machine with a max of 48 ports per card. that's 192 cables if it's fully loaded, and let's be real here, nobody on Sup Forums works for a company with that much money. if you do, this is likely an outsource or partner job and you don't care either way. with two chassis, that's 384 cables. with two people working, you should be done in three hours if you're not retarded.
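(checking the rate: 384 cables / 2 people / 3 hours is 64 cables per person-hour, roughly one a minute - doable if everything is labeled and pre-staged, optimistic if it isn't)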

this was a lab for one of the world's largest casinos, I assure you it was doing something. we might have half torn it down at this point, but I don't think so

Until they turn everything back on and it turns out you made a mistake and large segments of the network are either down or fucked up in any number of possible ways.

>inb4 "I would never make a mistake because X Y Z"

there's no way to protect against this
you just trust
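you can do a bit better than trust - snapshot the port and neighbor state before the cutover and diff it after. minimal sketch, same netmiko assumption as above, host/creds are placeholders:

import difflib
from netmiko import ConnectHandler

COMMANDS = ["show interface status", "show cdp neighbors"]   # whatever state you care about

def snapshot(host):
    conn = ConnectHandler(device_type="cisco_nxos", host=host,
                          username="admin", password="changeme")   # placeholders
    state = "\n".join(conn.send_command(cmd) for cmd in COMMANDS)
    conn.disconnect()
    return state

before = snapshot("core-9504.example.net")   # run before the swap
# ... replace the chassis, move the cables ...
after = snapshot("core-9504.example.net")    # run after, then eyeball the diff

for line in difflib.unified_diff(before.splitlines(), after.splitlines(),
                                 fromfile="before", tofile="after", lineterm=""):
    print(line)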

>labeling cables is hard

i've got 2 VSPs at work, and it's all fiber so not many cables. that doesn't change the fact that replacing them would be a giant fucking pain in the ass.

but again, if you like going into work at 2am on weekends, good for you

>tfw none of your cisco products in the field are impacted.