How four rotten packets broke CenturyLink's network for 37 hours, knackering 911 calls, VoIP, broadband
A handful of bad network packets triggered a massive chain reaction that crippled the entire network of US telco CenturyLink for roughly a day and a half.
This is according to the FCC's official probe into the December 2018 super-outage, during which CenturyLink's broadband internet and VoIP services fell over and stayed down for a total of 37 hours. This meant subscribers couldn't, among other things, call 911 over VoIP at the time – which is a violation of FCC rules, and triggered a formal investigation.
"This outage was caused by an equipment failure catastrophically exacerbated by a network configuration error," America's communications regulator said in its summary of its inquiry, published yesterday.
"It affected communications service providers, business customers, and consumers who directly or indirectly relied upon CenturyLink’s transport services, which route communications traffic from various providers to locations across the country, resulting in extensive disruptions to phone service, including 911 calling."
CenturyLink has six long-haul networks that make up the backbone of its digital empire, interconnecting regions of America. These networks use Infinera-built nodes to switch packets over high-speed optic fiber: data flowing into each node is directed to other nodes, ultimately pumping VoIP, regular internet traffic, and more, across the nation as needed.
Each dodgy packet would arrive at a node, sail through its chain of filters because it appeared legitimate, get injected into a proprietary management channel, and be handed on to all connecting nodes. A flow diagram in the FCC's report shows how the corrupted packets ended up being forwarded to all neighbouring nodes, and so on and so on, producing a growing chain reaction of corrupted packets.
"Due to the packets’ broadcast destination address, the malformed network management packets were delivered to all connected nodes. Consequently, each subsequent node receiving the packet retransmitted the packet to all its connected nodes, including the node where the malformed packets originated," the FCC said in its report.
"Each connected node continued to retransmit the malformed packets across the proprietary management channel to each node with which it connected because the packets appeared valid and did not have an expiration time. This process repeated indefinitely."
As you might imagine, the exponentially growing storm of packets soon overwhelmed CenturyLink's optic-fiber backbone, causing regular traffic to stop flowing: VoIP phones stopped working, internet access ground to a halt, and so on. Folks in New Orleans were the first to spot their connections stalling, at roughly 0356 EST on December 27.
Here is where things went from really, really bad to terrible: the nodes along the fiber network were so flooded that they could not be reached by their administrators to troubleshoot the issue. It wasn't until some 15 hours later that the techies finally tracked down the single errant node in Colorado responsible for sparking the deluge, though removing it didn't immediately help. The packet tsunami was still washing back and forth, knocking nodes over.
"At 2102 on December 27, CenturyLink network engineers identified and removed the module that had generated the malformed packets," the report noted. "The outage, however, did not immediately end; the malformed packets continued to replicate and transit the network, generating more packets as they echoed from node to node."
It would be another three hours before CenturyLink's network admins could begin to get through to the other nodes and get them to kill off the spread of bad packets. It took until 1130 on December 28 to get visibility of the network back, and it wasn't until 2336 that all nodes had been restored. On December 29, just after midday, CenturyLink finally declared the crisis over.
"The event caused a nationwide voice, IP, and transport outage on CenturyLink’s fiber network. CenturyLink estimates that 12,100,108 calls were blocked or degraded due to the incident," the FCC said.
"Where long-distance voice callers experienced call quality issues, some customers received a fast-busy signal, some received an error message, and some just had a terrible connection with garbled words."
The outage also knackered local governments and telcos that relied on the CenturyLink network for portions of their services. State governments in Illinois, Kansas, Minnesota, and Missouri all had portions of their networks down for roughly 36 hours thanks to CenturyLink, and phone services sold by Comcast, Verizon, TeleCommunication Systems, General Dynamics IT, and West Safety Services – including 911 call centers – saw connectivity interrupted for some or all of the outage period.
As to what can be done to prevent similar failures, the FCC is recommending CenturyLink and other backbone providers take some basic steps, such as disabling unused features on network equipment, installing and maintaining alarms that warn admins when memory or processor use is reaching its peak, and having backup procedures in the event networking gear becomes unreachable.
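As a rough illustration of the alarm recommendation, the sketch below shows the kind of threshold check such monitoring might perform. The thresholds, node names, and readings are hypothetical, and real transport gear would expose these counters through its own management interfaces rather than a Python dataclass.

```python
# A minimal sketch of a utilisation alarm of the sort the FCC recommends:
# warn operators before memory or processor use on a node reaches its peak.
# Thresholds and sample readings are hypothetical.
from dataclasses import dataclass

MEMORY_WARN_PCT = 85.0   # hypothetical warning thresholds
CPU_WARN_PCT = 90.0

@dataclass
class NodeReading:
    node_id: str
    memory_pct: float    # current memory utilisation, 0-100
    cpu_pct: float       # current processor utilisation, 0-100

def check_reading(reading: NodeReading) -> list[str]:
    """Return human-readable alarms for any resource nearing saturation."""
    alarms = []
    if reading.memory_pct >= MEMORY_WARN_PCT:
        alarms.append(f"{reading.node_id}: memory at {reading.memory_pct:.0f}%")
    if reading.cpu_pct >= CPU_WARN_PCT:
        alarms.append(f"{reading.node_id}: CPU at {reading.cpu_pct:.0f}%")
    return alarms

if __name__ == "__main__":
    samples = [NodeReading("node-denver-01", 97.0, 99.0),
               NodeReading("node-omaha-02", 40.0, 35.0)]
    for r in samples:
        for alarm in check_reading(r):
            print("ALERT:", alarm)
```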
"Currently, CenturyLink is in the process of updating its nodes’ Ethernet policer to reduce the chance of the transmission of a malformed packet in the future," the report notes. "The improved ethernet policer quickly identifies and terminates invalid packets, preventing propagation into the network. This work is expected to be completed in Fall, 2019."
Source: The Register
