Chronic packet loss/latency in LAX (4 providers potentially involved)

Hi folks,

For the past 3-4 weeks (likely longer), there has been a peering point in
Los Angeles that has been repeatedly overtaxed every night for roughly 5-6
hours, with high latency and packet loss.

It tends to start at roughly the same time every day (1800 PDT) and tends to
dissipate around the same time as well (2300 PDT or so) -- i.e. prime time.
This is not ICMP prioritisation -- this is real packet loss (I can tell via
SSH, fetching mail (IMAP), etc.). However, in recent days (the past week or
so) the issue has persisted as late as 0100 PDT, and today (for example) it
has been present since as early as 1500 PDT. So, as I said, the situation is
getting worse.

Because of asymmetrical routing, it's impossible for me to tell where,
peering-wise, the actual issue is:

a) between Comcast and GTT/nLayer
b) between Comcast and Tata/AS6453
c) between NTT/Verio and Tata/AS6453

Of those 4 providers I have a relationship with only one; getting all four to
talk simultaneously, while the issue is happening, is virtually impossible.
However, it happens every night like clockwork, for a long period of time, so
resolution shouldn't be all that difficult.

I do periodic mtrs (think traceroute + ping combined) between my home
connection and my VPS. Probe durations are 40 seconds, at intervals of 60
seconds. I store roughly 2-3 months of data, and I can make all this data
available.

Here are examples taken from tonight, in both directions. Source and
destination IPs are provided.

Source IP: 67.180.84.87 (Comcast; Mountain View, CA)
Dest IP:   206.125.172.42 (ARP Networks; Sylmar, CA)

=== Mon May 27 19:36:00 PDT 2013 (1369708560)
HOST: icarus.home.lan    Loss%   Snt   Rcv  Last   Avg  Best  Wrst
  1.|-- 192.168.1.1       0.0%    40    40   0.3   0.3   0.2   0.4
  2.|-- 67.180.84.1       0.0%    40    40  11.4  24.9  10.1  61.7
  3.|-- 68.85.191.249     0.0%    40    40  11.3  11.7   8.9  27.7
  4.|-- 69.139.199.106    0.0%    40    40  14.6  13.3  10.4  16.1
  5.|-- 68.86.91.45       0.0%    40    40  15.9  19.6  13.1  34.3
  6.|-- 68.86.88.58       0.0%    40    40  25.0  22.8  20.0  27.2
  7.|-- 68.86.88.190      0.0%    40    40  21.5  21.9  19.6  38.2
  8.|-- 173.167.57.138   10.0%    40    36  68.5  79.4  54.1  92.3
  9.|-- 69.31.127.141     0.0%    40    40  81.4  95.1  64.5 128.8
 10.|-- 69.31.127.130    35.0%    40    26  69.5  69.6  43.0  96.1
 11.|-- 69.174.121.73     0.0%    40    40  67.0  72.3  39.6  90.4
 12.|-- 67.199.135.102   10.0%    40    36  39.1  49.2  36.7 249.8
 13.|-- 206.125.172.42    5.0%    40    38  37.1  36.0  34.2  49.9
=== END

DNS resolution for relevant hops:

$ host 68.86.88.190
190.88.86.68.in-addr.arpa domain name pointer pos-0-4-0-0-pe01.600wseventh.ca.ibone.comcast.net.
$ host 173.167.57.138
Host 138.57.167.173.in-addr.arpa. not found: 3(NXDOMAIN)
$ host 69.31.127.141
141.127.31.69.in-addr.arpa domain name pointer ae0-110g.cr1.lax1.us.nlayer.net.

Source IP: 206.125.172.42 (ARP Networks; Sylmar, CA)
Dest IP:   67.180.84.87 (Comcast; Mountain View, CA)

=== Mon May 27 19:37:00 PDT 2013 (1369708620)
HOST: omake.koitsu.org   Loss%   Snt   Rcv  Last   Avg  Best  Wrst
  1.|-- 206.125.172.41    0.0%    40    40   1.5  12.8   0.9 181.8
  2.|-- 208.79.88.135     0.0%    40    40   0.6   1.0   0.5  11.6
  3.|-- 129.250.198.185   0.0%    40    40   0.8   0.9   0.7   1.4
  4.|-- 129.250.2.221     0.0%    40    40   1.0   1.0   0.9   1.5
  5.|-- 64.86.252.65     32.5%    40    27   0.5   0.8   0.5   8.6
        `|-- 216.6.84.65
  6.|-- 173.167.59.185   55.0%    40    18  19.6  19.3  15.7  75.1
  7.|-- 68.86.82.61      40.0%    40    24  17.9  17.5  15.9  19.5
  8.|-- 68.86.88.57       5.0%    40    38  25.2  25.7  24.1  27.5
  9.|-- 68.86.90.94       7.5%    40    37  25.8  26.6  24.7  28.7
 10.|-- 69.139.198.81     5.0%    40    38  26.6  26.6  26.3  27.0
 11.|-- 68.85.191.254     7.5%    40    37  47.9  46.0  30.5  50.5
 12.|-- 67.180.84.87      7.5%    40    37  36.0  36.1  34.0  50.3
=== END

DNS resolution for relevant hops (hop #5 indicates round-robining between two
different interfaces, presumably on the same Tata/AS6453 router):

$ host 129.250.2.221
221.2.250.129.in-addr.arpa domain name pointer ae-3.r05.lsanca03.us.bb.gin.ntt.net.
$ host 64.86.252.65
65.252.86.64.in-addr.arpa domain name pointer ix-11-2-1-0.tcore2.LVW-LosAngeles.as6453.net.
$ host 216.6.84.65
65.84.6.216.in-addr.arpa domain name pointer ix-9-1-2-0.tcore2.LVW-LosAngeles.as6453.net.
$ host 173.167.59.185
185.59.167.173.in-addr.arpa domain name pointer xe-1-2-0-0-pe01.onewilshire.ca.ibone.comcast.net.

If relevant providers are on this list and can respond, that would be awesome.
If they can't respond but can instead start the ball rolling on rectifying
this, I think myself and many other 'netizens would really appreciate it.

--
| Jeremy Chadwick                                   jdc@koitsu.org |
| UNIX Systems Administrator                http://jdc.koitsu.org/ |
| Mountain View, CA, US                                            |
| Making life hard for others since 1977.             PGP 4BD6C0CB |
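For anyone wanting to reproduce this kind of data collection, here is a
minimal sketch of a periodic mtr logger along the lines Jeremy describes
above. The output directory and filenames are placeholders, not his actual
setup; only the 40-cycle/roughly-once-a-minute cadence is taken from his
description:

    #!/bin/sh
    # Probe a fixed remote endpoint roughly once a minute and append
    # report-mode output to a log for later analysis.
    DST=206.125.172.42            # remote endpoint (the VPS, in this case)
    OUTDIR=$HOME/mtr-logs         # placeholder log location
    mkdir -p "$OUTDIR"

    while :; do
        {
            echo "=== $(date) ($(date +%s))"
            # 40 cycles at the default 1-second spacing ~= a 40-second probe
            mtr --report --report-wide --no-dns --report-cycles 40 "$DST"
            echo "=== END"
        } >> "$OUTDIR/mtr-to-$DST.log"
        sleep 20                  # ~40s probe + ~20s pause ~= one run per minute
    done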

Oh yes, I've been seeing this for some time, probably since mid-April.

At times we've seen Comcast prepending their AS multiple times on
announcements through Level3, causing outbound-to-Comcast traffic to take
congested routes via Tata rather than what would otherwise be the shorter,
uncongested route via Level3. (Return traffic from Comcast in my case returns
via Level3.)

For traffic from a server in San Francisco connected via Level3 and XO,
connecting to two Comcast cable modems in San Francisco, I see traffic go out
via XO, then Tata, then Comcast via LA -- with what looks like major
congestion on the Tata<->Comcast link -- and then back up to SF. I don't have
any traffic which visibly traverses NTT or nLayer, so I couldn't say whether
the scope goes beyond the Tata-Comcast link.

It's particularly annoying receiving LA latency plus 5-6% packet loss going
to/from well-connected locations a couple of miles apart in SF. I'm
experiencing it now, composing this message over ssh.

Server in SF to Comcast in SF:

...
 5. te1-0-0d0.mcr1.fremont-ca.xo.net      0.0%    1.4    5.9    1.2   82.0   15.8
 6. vb1500.rar3.sanjose-ca.us.xo.net      0.0%   11.1    8.4    1.5   13.3    3.5
 7. 207.88.14.226.ptr.us.xo.net           6.0%   25.3    8.0    1.8   41.9   12.9
 8. if-10-12.icore1.SQN-SanJose.as6453   44.9%    6.0    7.0    2.0   14.9    4.3
 9. Vlan3260.icore2.SQN-SanJose.as6453    0.0%    3.1    6.6    2.3   13.5    3.4
10. if-4-2145.tcore1.SQN-SanJose.as645    0.0%    3.4    3.7    2.7    8.0    0.9
11. if-3-2.tcore2.LVW-LosAngeles.as645    0.0%   32.4   27.9   14.7   57.1    9.7
12. xe-0-2-0-0-pe01.onewilshire.ca.ibo    8.2%   39.4   30.3   28.3   72.3    7.1
13. te-2-0-0-6-cr01.losangeles.ca.ibon    6.1%   32.2   30.7   28.6   32.6    1.2
14. he-0-11-0-0-cr01.sacramento.ca.ibo    4.1%   30.6   31.5   29.6   33.3    1.0
15. he-0-4-0-0-ar01.oakland.ca.sfba.co   10.2%   30.5   32.3   30.4   34.3    1.1
16. te-8-4-ur04.santaclara.ca.sfba.com    8.2%   32.6   33.0   32.5   41.9    1.4
17. te-17-10-cdn08.sf19th.ca.sfba.comc    8.2%   43.0   60.5   31.2  830.2  117.4
...

The ComcastSF -> ServerInSF direction goes out via Level3 and sees no loss
until the final hop, suggesting the problem is with the reverse path, above.
(Most of the time, anyway -- sometimes the first Comcast hops going out are
very congested too, though that's not usually the case.)

And here are two smokepings of the issue, in which you can clearly see the
timing you mentioned. SF server to SF cable modem, 2 mi apart:

http://cl.ly/image/040U3O2F0810 (last 30 hours)
http://cl.ly/image/1D053a0T1k2I (last 10 days)
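A quick way to see which transit AS carries each direction, and whether a
prepend is steering traffic the long way around, is to annotate hops with
their origin ASN and to check advertised paths on a public route server. A
rough sketch -- the target address below is just a placeholder, and the exact
route-server login/prompt varies:

    # mtr's -z/--aslookup (or traceroute -A on Linux) tags each hop with its
    # origin ASN, making the hand-off from XO (AS2828) into Tata (AS6453) and
    # then Comcast (AS7922) easy to spot.
    mtr -z -w -n -c 20 --report 198.51.100.25

    # For the prepending question, a public route server such as
    # route-views.routeviews.org can show the AS_PATHs being advertised;
    # repeated 7922 entries on the Level3 (AS3356) path would indicate prepends:
    #   telnet route-views.routeviews.org
    #   route-views> show ip bgp <Comcast prefix of interest>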

I don't think the issue is limited to just the providers listed.

Over the last several days we have been getting reports of long load times for
sites hosted with us in Washington State. One specific customer is just north
of SF and is a Comcast customer. Below is a traceroute from here, taken before
we worked with one of our upstreams today to route around Cogent; currently
the traffic is going over Level 3 and it is better. I also took that provider
down for a while last night, and Comcast carried the traffic up to Seattle to
my other provider, where it was handed off. That link also worked very well.
However, Comcast to Cogent in Southern CA seems to be having issues, best I
can tell.

Sorry, I don't think I have an MTR feed. I don't think any of my techs have
saved one yet going from us back down to my customer's location.

Also, I took out the beginning and ending of this for privacy, but the bulk of
the traceroute is here. I find the hop from 68.86.87.6 to 154.54.11.109
interesting, in that there seems to be a good jump in latency there.

 3  te-7-3-ur01.novato.ca.sfba.comcast.net (68.87.196.201)  26.191 ms  12.109 ms  10.970 ms
 4  te-8-2-ur02.sanrafael.ca.sfba.comcast.net (68.87.192.145)  10.665 ms  14.917 ms  10.483 ms
 5  te-9-3-ur01.sanrafael.ca.sfba.comcast.net (68.87.192.141)  24.769 ms  10.678 ms  10.807 ms
 6  te-1-10-0-13-ar01.sfsutro.ca.sfba.comcast.net (68.87.226.122)  16.090 ms  18.981 ms  16.005 ms
 7  he-1-7-0-0-cr01.sanjose.ca.ibone.comcast.net (68.86.90.153)  24.986 ms  16.506 ms  48.016 ms
 8  pos-0-2-0-0-pe01.529bryant.ca.ibone.comcast.net (68.86.87.6)  14.379 ms  14.265 ms  13.569 ms
 9  te0-7-0-12.ccr21.sjc04.atlas.cogentco.com (154.54.11.109)  59.591 ms  63.475 ms  63.213 ms
10  be2018.mpd22.sfo01.atlas.cogentco.com (154.54.28.81)  59.691 ms
    be2015.ccr21.sfo01.atlas.cogentco.com (154.54.7.173)  65.196 ms
    be2018.mpd22.sfo01.atlas.cogentco.com (154.54.28.81)  66.781 ms
11  te0-7-0-6.ccr22.sea01.atlas.cogentco.com (154.54.86.41)  74.862 ms
    te0-7-0-0.ccr22.sea01.atlas.cogentco.com (154.54.86.37)  74.722 ms
    te0-7-0-6.ccr22.sea01.atlas.cogentco.com (154.54.86.41)  74.747 ms
12  te0-6-0-4.ccr21.sea02.atlas.cogentco.com (154.54.85.182)  75.177 ms
    te0-1-0-4.ccr21.sea02.atlas.cogentco.com (154.54.85.186)  75.314 ms
    te0-1-0-5.ccr21.sea02.atlas.cogentco.com (154.54.85.178)  76.532 ms
13  te7-8.mag01.sea02.atlas.cogentco.com (154.54.41.213)  146.612 ms
    te4-8.mag01.sea02.atlas.cogentco.com (154.54.41.209)  116.713 ms
    te7-8.mag01.sea02.atlas.cogentco.com (154.54.41.213)  146.250 ms
14  38.104.127.10 (38.104.127.10)  74.407 ms  73.262 ms  75.154 ms
15  core01-grassi-eth1-6.net.pocketinet.com (64.185.97.125)  80.996 ms  80.480 ms  80.905 ms

I hope this info is helpful. I have more data I can provide. We have been
trying to work up some evidence on what is happening, and to try to pinpoint
the problem that is causing issues. It seems to be intermittent, but like you
said, I think it does tend to be worse during "prime time".

Sincerely,

Mark
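One way to read that hop-8 -> hop-9 jump is to probe the two router interfaces
directly and compare against the later hops. This is just a sketch using the
addresses from the trace above, and the interpretation is hedged because
routers often deprioritize ICMP aimed at themselves:

    # If 154.54.11.109 answered direct pings at ~60 ms while later hops did
    # not inherit the extra latency, the jump would just be control-plane
    # throttling on that router. Since hops 10-15 above stay in the 60-80 ms
    # range, the added delay looks like it is on the forwarding path at or
    # after the Comcast->Cogent hand-off.
    for ip in 68.86.87.6 154.54.11.109 154.54.28.81; do
        echo "== $ip"
        ping -q -c 20 "$ip" | tail -2
    done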

This isn't surprising at all. Comcast is notorious for running their links
into the ground. The Comcast<->TATA links have been congested for a while now.

Some graphs from late 2010:

http://kornholi.net/lol/ntoday.gif
http://kornholi.net/lol/sqnday.gif
http://kornholi.net/lol/ntomonth.gif

Pretty sure they do this so content providers have no other choice but to pay
them...

-Kornelijus

Adding to the collective troubleshooting below (which mirrors my recent
observations), I'd offer Tata being congested to various US-based peers as an
additional "x factor". That is to say, even if Tata-Comcast isn't congested in
a given market, there is a possibility that Tata-[provider x] might be.

Welcome to an era in which Buster Poindexter's hit song "Hot, Hot, Hot" is
popular listening music for capacity planning sessions.

$0.02,
-a

On Tue, May 28, 2013 at 12:37 AM, Mark Keymer <mark@viviotech.net> wrote:

A follow-up to this: I received *many* off-list mails about this, but nothing
official. There was lots of (justified) speculation about which peering points
and/or providers are involved, some of which I responded to, but nothing
conclusive.

Without going into details (respecting folks' privacy), I can say that the
NTT/Verio<->Tata peering point has been ruled out as the cause.

I should also note that the ingress path (Comcast-->ARP) has changed due to
what appears to be some peering adjustments done by Comcast last night
(specifically May 27th at 23:22 PDT or thereabouts). This will be obvious
below in a pair of live mtrs done by me manually, with DNS resolution enabled.
(Someone off-list asked for this.)

My gut feeling right now is that the issue is at the Comcast<->Tata peering
point (hops #5 and #6 in the 2nd mtr), or that a Comcast router at One
Wilshire is overwhelmed (hop #6 in the 2nd mtr). The reason I want to rule out
ingress (Comcast->ARP) is that the path today looks different than it did
yesterday (Comcast<->GTT/nLayer is no longer involved) yet the issue remains;
thus something on the return path looks more likely.

Anyway, here are the mtrs, taken about 10 minutes ago:

Source IP: 67.180.84.87 (Comcast; Mountain View, CA)
Dest IP:   206.125.172.42 (ARP Networks; Sylmar, CA)

Host                                           Loss%  Snt  Rcv  Last   Avg  Best  Wrst
 1. gw.home.lan                                  0.0%  128  128   0.2   0.2   0.2   0.5
 2. c-67-180-84-1.hsd1.ca.comcast.net            0.0%  128  128  25.6  25.9  10.5  66.4
 3. te-0-0-0-12-ur05.santaclara.ca.sfba.comcast  0.0%  128  128  10.6  11.4   8.8  15.6
 4. te-1-1-0-13-ar01.sfsutro.ca.sfba.comcast.ne  0.0%  128  128  11.3  14.8  10.2  19.7
 5. he-3-9-0-0-cr01.sanjose.ca.ibone.comcast.ne  0.0%  128  128  20.3  20.7  12.4  40.6
 6. be-13-pe02.11greatoaks.ca.ibone.comcast.net  0.0%  127  127  20.7  18.6  14.5  68.8
 7. 173.167.59.82                                0.0%  127  127  19.2  23.8  14.3 194.4
 8. 63-218-212-14.static.pccwglobal.net          0.0%  127  127  24.8  27.1  23.1  78.8
 9. cxa.r6.lax2.trit.net                         0.0%  127  127  24.4  25.8  22.6  31.3
10. arpnetworks-lax2-gw.cust.trit.net            1.6%  127  125  40.9  55.1  37.2 210.4
11. omake.koitsu.org                            10.2%  127  114  41.8  39.3  35.6  52.5

Source IP: 206.125.172.42 (ARP Networks; Sylmar, CA)
Dest IP:   67.180.84.87 (Comcast; Mountain View, CA)

Host                                           Loss%  Snt  Rcv  Last   Avg  Best  Wrst
 1. 206.125.172.41                               0.0%  116  116   7.4  10.0   1.0 111.1
 2. s7.lax.arpnetworks.com                       0.0%  116  116   0.7   8.0   0.4 168.3
 3. ge-0-7-0-24.r04.lsanca03.us.bb.gin.ntt.net   0.0%  116  116   0.7   0.8   0.6   1.5
 4. ae-3.r05.lsanca03.us.bb.gin.ntt.net          0.0%  116  116   1.1   1.0   0.8   2.3
 5. ix-9-1-2-0.tcore2.LVW-LosAngeles.as6453.net  0.0%  116  116   0.5   2.1   0.4  52.0
    ix-11-2-1-0.tcore2.LVW-LosAngeles.as6453.net
 6. xe-1-2-0-0-pe01.onewilshire.ca.ibone.comcas  4.3%  116  111  15.7  16.4  15.5  42.9
 7. te-2-0-0-7-cr01.losangeles.ca.ibone.comcast  6.0%  116  109  17.8  17.6  15.7  19.9
 8. pos-2-9-0-0-cr01.sanjose.ca.ibone.comcast.n  5.2%  116  110  28.8  27.9  25.8  29.9
 9. he-0-4-0-0-ar01.sfsutro.ca.sfba.comcast.net  3.4%  116  112  28.0  28.2  25.9  30.0
10. te-0-6-0-0-ur05.santaclara.ca.sfba.comcast.  5.2%  116  110  29.9  29.7  29.1  37.0
11. te-18-10-cdn31.santaclara.ca.sfba.comcast.n  3.4%  116  112  41.7  47.9  31.2  54.3
12. c-67-180-84-87.hsd1.ca.comcast.net           3.4%  116  112  38.1  37.4  35.1  72.2

--
| Jeremy Chadwick                                   jdc@koitsu.org |
| UNIX Systems Administrator                http://jdc.koitsu.org/ |
| Mountain View, CA, US                                            |
| Making life hard for others since 1977.             PGP 4BD6C0CB |
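Capturing both directions over the same time window is what makes that
forward-versus-return reasoning possible. A rough sketch of doing it in one
shot follows; the ssh account on the VPS is an assumption, and the hosts/IPs
are simply the ones from the mtrs above:

    #!/bin/sh
    # Run mtr from both ends at (roughly) the same moment, so loss seen only
    # on the return path (VPS -> home) stands out for the same time window.
    HOME_TARGET=206.125.172.42        # VPS, probed from the home connection
    VPS_HOST=omake.koitsu.org         # VPS shell account (assumed reachable via ssh)
    VPS_TARGET=67.180.84.87           # home Comcast IP, probed from the VPS

    ts=$(date +%Y%m%d-%H%M%S)
    mtr --report --report-wide --report-cycles 120 "$HOME_TARGET" > "fwd-$ts.txt" &
    ssh "$VPS_HOST" "mtr --report --report-wide --report-cycles 120 $VPS_TARGET" > "rev-$ts.txt" &
    wait
    # Loss that shows up only in rev-*.txt, starting at the as6453 -> comcast
    # hops, points at the return path -- consistent with the reasoning above.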