
I can confirm I have also seen a few podcasts experience the same thing; I have had to force the download over and over until they finally complete. I have not done any technical searching or captures yet.

-Mike

-----Original Message-----
From: Outages [mailto:outages-bounces@outages.org] On Behalf Of Jeremy Chadwick via Outages
Sent: Friday, July 11, 2014 4:26 PM
To: jrk1231-outml@nym.hush.com
Cc: outages@outages.org
Subject: Re: [outages] Parital iTunes Podcast Outage?

On Fri, Jul 11, 2014 at 03:18:10PM -0400, jrk1231-outml@nym.hush.com wrote:
> Sigh... Went back and tried pcap on a refresh several times, and got a ton of out-of-order and duplicate packets every time, followed by a fail like previous. I never saw the certificate, but I did see several "Encrypted Alert" messages that Wireshark would not further detail.
>
> Jeremy, I will send you the small pcap off list if you don't mind, and maybe you can decipher what is going on.
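(A quick aside on those "Encrypted Alert" frames: Wireshark can't say more because the alert payload itself is encrypted, but you can at least pull out every packet carrying a TLS alert record. A minimal sketch, assuming the capture is saved as capture.pcap and a tshark new enough to take -Y display filters:

    # list every frame carrying an SSL/TLS alert record (content type 21);
    # the alert level/description stay opaque once the session is encrypted
    tshark -r capture.pcap -Y "ssl.record.content_type == 21"

The field prefix is "ssl.*" in the Wireshark releases current as of this thread; newer versions rename it to "tls.*".)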
I reviewed the pcap. I'm going to ignore the out-of-order complaints because a NAT'd router is involved (not saying it's to blame, though; it could be upstream network stuff, Internet stuff, or close-to-server-side stuff). The important part is that it doesn't impact the end goal in this case (figuring out SSL cert validity). I will say that the out-of-order stuff is a bit annoying here (especially right before socket closure).

The certificate is in packet #13. The cert consists of a two-stage VeriSign CA plus the itunes.apple.com cert itself. If you expand each of those (signedCertificate -> validity) you'll see the dates printed in YY-MM-DD hh:mm:ss format per UTC time. The dates are as follows:

  itunes.apple.com          - expires 2016-04-20
  VeriSign Class 3 Extended - expires 2016-11-07
  VeriSign Class 3 Public   - expires 2021-11-07

I also verified this via openssl s_client -connect itunes.apple.com:443 (see madboa.com for details). So it's not an expired cert that's causing this behaviour, and there's more to verify that:

In the same TCP session I was looking at ("tcp.stream eq 0", communication between lanip:55217 and 23.72.242.217:443), packets 21 through 29 seem to indicate (to me) that there is an actual conversation going on between client and server. The segments are smaller than MSS/MTU and there's no reassembly. If I had to take a guess, it looks like after cipher negotiation the client submits its HTTP headers + payload (packets 21-23), the server responds with some HTTP headers + possibly a small payload (packet 25), followed by the server issuing a TLS Encrypted Alert message (packet 27) which the client sends back (packet 29), followed by the client sending FIN+ACK (30), the server sending FIN+ACK (31), then out-of-order stuff gets in the way (32, but it's in reply to sequence 1854, which is packet 30), followed by the server sending 3 TCP RSTs 0.043 seconds later (which may be normal for SSL).

The oddity of the last bits of communication in session 0 made me look at session 1 ("tcp.stream eq 1"), which looks like a significantly clearer "things are working" conversation up until a certain point where I can tell the capture snaplen was probably too small (packet 71), followed by a bunch of duplicate ACKs and retransmissions.

The behaviour in session 1 is indicative of some kind of network-level anomaly. I see things like the client repeatedly sending duplicate ACKs back to the server for ack seq 22480, but the server never sends that segment / acts in some way like it did (but the client never got it). This sort of behaviour continues for quite some time, combined with further TCP retransmits. Like I said, this kind of behaviour seems to imply some kind of network-level anomaly and possibly packet loss, but I couldn't tell you where. The TCP aspect of things is anomalous enough to make me think the true issue is at a lower layer.

If everything looked "mostly" as clean as session 0, I'd be more concerned with the payload that's being encrypted (and that's the biggest problem with use of SSL: if there is something application-level / layer 7 going on, you have virtually no way of debugging it. I often laugh at SSL for this reason -- great, you have your security, good for you and the assumption that nothing will ever go wrong + troubleshooting will never be needed other than at the physical client application + physical server application layer).

Sorry I can't be of more help, but I can at least rule out server certificate expiry as the root cause. :-)
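P.S. For anyone who wants to reproduce the expiry check without the pcap, here's a minimal sketch of the openssl route mentioned above (same host/port; -servername just makes sure SNI is sent, in case the Akamai frontend serves different certs per name):

    # fetch the server certificate and print its subject, issuer and validity window
    echo | openssl s_client -connect itunes.apple.com:443 -servername itunes.apple.com 2>/dev/null \
      | openssl x509 -noout -subject -issuer -dates

The notBefore/notAfter lines it prints should match the dates listed above.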
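Similarly, the per-stream anomalies can be pulled out of the capture on the command line; a rough sketch, again assuming the file is saved as capture.pcap:

    # everything in the second TCP session (the one with the snaplen/retransmit mess)
    tshark -r capture.pcap -Y "tcp.stream eq 1"

    # just the duplicate ACKs and retransmissions, across the whole capture
    tshark -r capture.pcap -Y "tcp.analysis.duplicate_ack || tcp.analysis.retransmission"

Those tcp.analysis.* fields are the same expert flags Wireshark shows in the GUI, so the hits should line up with the packets described above.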
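And since the truncation around packet 71 points at the capture snaplen, any follow-up capture is worth taking at full packet length. A tcpdump sketch (the interface name is a placeholder; adjust for your machine):

    # -s 0 = capture full packets rather than a truncated snaplen,
    # so Wireshark can reassemble the TLS records properly
    tcpdump -i en0 -s 0 -w itunes.pcap host itunes.apple.com and port 443

Note the host filter resolves once at capture start; with Akamai in front, the traffic may end up going to a different address, in which case filtering on port 443 alone is safer.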
-- 
| Jeremy Chadwick                                   jdc@koitsu.org |
| UNIX Systems Administrator                http://jdc.koitsu.org/ |
| Making life hard for others since 1977.             PGP 4BD6C0CB |

_______________________________________________
Outages mailing list
Outages@outages.org
https://puck.nether.net/mailman/listinfo/outages