Mon Aug 5 10:37:06 PDT 2002 — At approximately 8:30 AM we lost our BGP session with our Cable and Wireless upstream router. As a result, all traffic flowing to the Internet was routed over our UUNet circuit, causing high latency. We worked with Cable and Wireless’ NOC to get the circuit back up as fast as possible. The circuit is now up and passing traffic. -Scott, Kelsey, and Nathan
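
For context on the failover: a dual-homed edge like ours runs one eBGP session per transit provider, so when one session drops, those routes are withdrawn and all outbound traffic shifts to the surviving upstream. A minimal IOS-style sketch of the arrangement (all addresses and AS numbers below are invented for illustration):

```
! Hypothetical dual-upstream eBGP edge; addresses and ASNs are placeholders
router bgp 64512
 neighbor 192.0.2.1 remote-as 64513       ! Cable & Wireless transit session
 neighbor 198.51.100.1 remote-as 64514    ! UUNet transit session
! When one session fails, BGP withdraws its routes and the remaining
! session carries the full outbound load, as happened this morning.
```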

Mon Aug 5 09:05:15 PDT 2002 — At approximately 8:30 AM this morning we lost our BGP session with our Cable and Wireless upstream router. As a result, all traffic flowing to the Internet is routing over our UU.net circuit, which is saturated under the load. We are currently working with Cable and Wireless’ NOC to get our BGP session back online as soon as possible. -Kelsey and Nathan

Update: Cable and Wireless has an engineer working on the situation now. -Scott

Mon Aug 5 04:17:00 PDT 2002 — Night Operations Complete: We have completed the redundant L2 core reconfiguration of the SMS 1800 and our 7507 customer router. Extensive testing shows that the redundant L2 networks are functioning as expected.

We also overhauled gale, our Diablo-based NNTP feeder server, adding disks for its spools and a third Ethernet card. The changes have made a remarkable difference in the quality of our inbound NNTP feeds. If the current trends hold, we expect to process more than 300GB today, significantly more than we’ve been able to handle in the past. Gale’s increased performance should result in a higher multi-part completion rate on our news server.
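
As a rough sanity check on that figure, 300GB in a day corresponds to an average sustained feed rate of about 28 Mbit/s (a back-of-envelope calculation, not a measured rate):

```python
# Back-of-envelope: average bit rate implied by 300 GB/day of NNTP feeds
bytes_per_day = 300 * 10**9              # 300 GB, decimal gigabytes
seconds_per_day = 86_400
avg_mbps = bytes_per_day * 8 / seconds_per_day / 10**6
print(f"{avg_mbps:.1f} Mbit/s")          # about 27.8 Mbit/s on average
```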

Zeke, Kevan and Tony relocated all of our leased Sun Cobalt colocations. They are now up and operating normally.

-Kelsey, Nathan, Zeke, Kevan and Tony

Sat Aug 3 02:49:11 PDT 2002 — We have just finished moving our network monitoring and notification servers to our new facility. This leaves only a few odds and ends at our old facility which we expect to decommission by the end of this month once all customer colocations have been moved.

Earlier today we brought up our first three peers on the public switch at our colocation in Equinix’s San Jose IX. A direct peer to Yahoo! should be online by Monday morning. Peering at Equinix decreases utilization on our two T3 transit links while also lowering latency and improving throughput.
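
Mechanically, the offload works because peer-learned routes are typically tagged with a higher BGP local-preference than transit routes, so they win route selection and that traffic never touches the T3s. An IOS-style sketch (route-map name, addresses, and ASNs are hypothetical):

```
! Hypothetical inbound policy preferring Equinix peer routes over transit
route-map FROM-PEER permit 10
 set local-preference 200        ! transit routes keep the default of 100
!
router bgp 64512
 neighbor 203.0.113.10 remote-as 64515
 neighbor 203.0.113.10 route-map FROM-PEER in
```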

Monday morning at 12:01 AM we will be migrating Mega, one of our 7507 customer routers, as well as our SMS 1800, which terminates all PacBell DSL, to our redundant meshed L2 core. We do not anticipate any loss of service while we bring up the second links to these two customer routers. -Kelsey, Nathan and Zeke

Fri Aug 2 18:21:58 PDT 2002 — Leased Cobalt customer move. Monday morning at 12:01 AM we will be moving our leased Cobalt customers to the new datacenter. This will not affect any of our other services or customers. Downtime for the leased servers is expected to be less than two hours. -Zeke, Nathan, Kevan and Jared

Thu Aug 1 11:50:42 PDT 2002 — Our SQL server experienced intermittent problems this morning. This caused Member Tools to be briefly inaccessible. No data was lost and all normal services are now restored. -Chris, Kelsey, Eli

Thu Aug 1 10:40:49 PDT 2002 — Broadlink move preparation. Broadlink will be doing some prep work on their wireless backhaul today at 11am. In order to move the backhaul link between 300 B Street and their network, they will be moving customers to a temporary 10Mb wireless link. This will allow them to move the 100 Mb link that currently carries backhaul traffic. The move will not cause a service interruption; however, peak usage on the current link is 7 Mb/s, and the interim 10 Mb link will create a temporary ceiling for traffic bursts. -Sonic.net and Broadlink
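
To put that ceiling in concrete terms: at the 7 Mb/s observed peak, the interim 10 Mb link would run at about 70% utilization, leaving roughly 3 Mb/s for bursts (simple arithmetic on the figures above):

```python
# Headroom on the interim 10 Mb wireless link at the observed 7 Mb/s peak
link_mbps = 10.0
peak_mbps = 7.0
utilization = peak_mbps / link_mbps       # 0.70 -> 70% utilized at peak
headroom_mbps = link_mbps - peak_mbps     # 3 Mb/s left for traffic bursts
print(f"{utilization:.0%} utilized, {headroom_mbps:.0f} Mb/s headroom")
```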

Wed Jul 31 04:23:05 PDT 2002 — We brought the additional 75GB disk in typhoon live today, gaining another 60+ gigs of usable spool. This additional spool will help increase our binary retention and reduce I/O contention on the other disks, improving overall performance. -Kelsey and Nathan

Wed Jul 31 13:18:13 PDT 2002 — Our Pac West (530-xxx-0174) numbers started returning ‘All Circuits Busy’ messages about 30 minutes ago, and we’ve tracked it to an issue with the telco. Pac West’s engineers have not given us an ETR, but I will keep this space updated. — Eli, Stephanie

Update: The problem is more widespread than initially perceived, and affects all of our xxx-0174 numbers served from Stockton. This is a good portion of Northern California (excluding the Bay Area), and we’re working with Pac West to get this repaired as quickly as possible.

Update, 15:05hrs: The problem was caused by an administrative fumble at Pac West. Our backhaul circuit was mistaken for another, and disconnected. Service is fully restored, and we’re in deep dialogue with Pac West. — Eli