Everything is broken
I'd like to tell you a story. Like all good stories, it starts with a stack trace obtained from a thread dump:
"jenkins.util.Timer [#6]" daemon prio=5 tid=34 RUNNABLE
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:129)
    at com.sun.net.ssl.internal.ssl.InputRecord.readFully(InputRecord.java:293)
    ...
    at com.sun.net.ssl.internal.ssl.AppInputStream.read(AppInputStream.java:75)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    ...
    at java.io.FilterInputStream.read(FilterInputStream.java:116)
    at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:2676)
    at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264)
    ...
    at java.io.InputStreamReader.read(InputStreamReader.java:167)
    at java.io.Reader.read(Reader.java:123)
    at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:2001)
    at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1980)
    at org.apache.commons.io.IOUtils.copy(IOUtils.java:1957)
    at org.apache.commons.io.IOUtils.toString(IOUtils.java:819)
    at org.kohsuke.github.Requester.parse(Requester.java:384)
    ...
    at org.kohsuke.github.Requester$1.hasNext(Requester.java:295)
    at org.kohsuke.github.PagedIterator.fetch(PagedIterator.java:44)
    at org.kohsuke.github.PagedIterator.hasNext(PagedIterator.java:32)
    at org.kohsuke.github.PagedIterable.asList(PagedIterable.java:19)
    at org.kohsuke.github.GHRepository.getPullRequests(GHRepository.java:504)
    at org.jenkinsci.plugins.ghprb.GhprbRepository.check(GhprbRepository.java:82)
    at org.jenkinsci.plugins.ghprb.Ghprb.run(Ghprb.java:101)
    at org.jenkinsci.plugins.ghprb.GhprbTrigger.run(GhprbTrigger.java:149)
    at hudson.triggers.Trigger.checkTriggers(Trigger.java:265)
    at hudson.triggers.Trigger$Cron.doRun(Trigger.java:214)
    at hudson.triggers.SafeTimerTask.run(SafeTimerTask.java:51)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    ...
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Obviously it is a Java stack trace, with
java.net.SocketInputStream.socketRead0 at the top, then some I/O layers, then some GitHub layers, then some ghprb (GitHub pull-request builder) plugin layers, then
hudson.triggers.Trigger$Cron.doRun, and finally some Java threading layers. What isn't shown is the C part of the stack above
socketRead0, but all that's there is a call to
recv, and after that it is Linux kernel stuff.
Let's start with Jenkins, the continuous integration server formerly known as Hudson. I don't have much love for it, but it is something that we use and live with. One of its responsibilities is to run jobs periodically, which it does by waking up once per minute, checking if there are any jobs which need doing, and if so, scheduling them. In other words, it reimplements
cron. I'm telling you this because it is the background for the story: we observed that our Jenkins instance stopped scheduling periodic jobs, which is a pretty serious problem. The stack trace was taken from our Jenkins instance, and from it, we can deduce why Jenkins broke.
The hudson.triggers.Trigger$Cron.doRun frame on the stack means that we're looking at the code which runs every minute to check whether any jobs need doing. The frames immediately above it are GitHub-related, so clearly our Jenkins instance is talking to our GitHub Enterprise instance in order to establish whether any jobs need doing (because someone somewhere decided that a particular job should be triggered by events on GitHub, and decided to configure it via polling rather than webhooks). The frames above those are what you'd expect to see when Java code reads from a secure socket. Finally, in the piece of stack trace you can't see, the call to
recv is blocking.
The brokenness starts with
hudson.triggers.Trigger$Cron.doRun. If for some reason
doRun doesn't terminate, then Jenkins'
cron breaks. Given that
doRun can end up calling almost anything, and does so in the same thread without any explicit timeout, this seems like a pretty big problem.
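To make that failure mode concrete, here is a minimal sketch of the obvious defence: run each trigger check on a worker thread and bound it with a timeout, so one stuck check cannot wedge the scheduler. (This is not Jenkins code; CronSketch and runWithTimeout are hypothetical names.)

```java
import java.util.concurrent.*;

public class CronSketch {
    // Runs the task on a worker thread. Returns true if the task finished
    // within the timeout, false if it was still running when time ran out.
    public static boolean runWithTimeout(Runnable task, long timeoutMillis) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<?> f = pool.submit(task);
        try {
            f.get(timeoutMillis, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            f.cancel(true);   // interrupt the stuck check; the scheduler moves on
            return false;
        } catch (Exception e) {
            return true;      // the task ran and threw -- but it did terminate
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        boolean fast = runWithTimeout(() -> { /* pretend to poll GitHub */ }, 1000);
        boolean slow = runWithTimeout(() -> {
            try { Thread.sleep(60_000); } catch (InterruptedException ignored) {}
        }, 100);
        System.out.println("fast check finished: " + fast + ", slow check finished: " + slow);
    }
}
```

As the stack trace shows, doRun does nothing of the sort: it invokes each trigger directly on the timer thread, so a single blocked read stops the clock for every job.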
OK, so, why isn't
doRun terminating? We can see that the ghprb plugin is trying to communicate with GitHub over a socket. Let's give the plugin the benefit of the doubt and assume that it configures its sockets with a read-timeout so that reading won't block forever. After working our way through various layers, we end up at
socketRead0 - a function which accepts a socket and a timeout value and tries to read from the socket. At this point, we hit our next piece of brokenness: JDK-8049846, summarised as "the timeout argument to
socketRead0 is sometimes ignored", or alternatively summarised as "Java's socket I/O is broken".
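For contrast, here is a minimal sketch of how a read timeout is *supposed* to behave (ReadTimeoutDemo is a made-up name, not plugin code): the peer accepts the connection but never sends anything, and setSoTimeout makes the read fail fast with SocketTimeoutException instead of blocking forever. JDK-8049846 is precisely the case where this promise is silently not kept.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class ReadTimeoutDemo {
    // Returns true if the read timed out as expected.
    public static boolean readTimesOut() throws IOException {
        try (ServerSocket server = new ServerSocket(0);                    // ephemeral port
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket accepted = server.accept()) {                          // accept, then send nothing
            client.setSoTimeout(200);                                      // 200 ms read timeout
            try {
                client.getInputStream().read();                            // no data will ever arrive
                return false;
            } catch (SocketTimeoutException expected) {
                return true;                                               // the contract we rely on
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("read timed out: " + readTimesOut());
    }
}
```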
OK, so, why is
socketRead0 blocking forever even though it was passed a timeout? As noted at JDK-8049846, the implementation of
socketRead0 is basically:
if (poll(socket, POLLIN, timeout))
    recv(socket, buffer);
In other words, ask
poll to wait (with a timeout) until the socket is readable, and then read from it. No need to set
O_NONBLOCK on the socket, because
poll is used to check that a read won't block. What could possibly go wrong? Well, let's check the BUGS section of the man page for poll:
See the discussion of spurious readiness notifications under the BUGS section of select(2).
OK then, let's check the man page for select:
select() may report a socket file descriptor as "ready for reading", while nevertheless a subsequent read blocks. This could for example happen when data has arrived but upon examination has wrong checksum and is discarded. There may be other circumstances in which a file descriptor is spuriously reported as ready. Thus it may be safer to use
O_NONBLOCK on sockets that should not block.
In other words, Linux's
poll is broken: it can report that a socket is ready for reading when, in fact, it isn't.
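The man page's suggested defence translates directly to Java NIO (a minimal sketch; NonBlockingReadDemo is a hypothetical name): put the channel in non-blocking mode, and a read with no data available returns 0 immediately rather than blocking. In C the equivalent is setting O_NONBLOCK with fcntl, after which recv fails with EAGAIN instead of hanging on a spuriously-"ready" socket.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;

public class NonBlockingReadDemo {
    // Returns the number of bytes read from an empty non-blocking channel.
    public static int readFromEmptyChannel() throws IOException {
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);                  // the crucial step
        int n = pipe.source().read(ByteBuffer.allocate(16));     // nothing has been written
        pipe.source().close();
        pipe.sink().close();
        return n;                                                // 0: "not ready", but we didn't block
    }

    public static void main(String[] args) throws IOException {
        System.out.println("bytes read: " + readFromEmptyChannel());
    }
}
```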
OK, so, why did
poll give us a spurious readiness notification on a socket? The socket in question was talking to our GitHub Enterprise instance, and, uh, GHE seems kind of flaky on the network side of things, so I wouldn't be surprised if it sent bad packets. In other words, GitHub Enterprise is broken (if anyone from GitHub is reading, please advise on how best to diagnose GitHub Enterprise being flaky). Also, while I'm wishing for things which won't happen: if any JDK maintainers are reading, please note that the current classification of JDK-8049846 as P4 (Minor loss of function, or other problem where easy workaround is present) is rather inaccurate: there is no workaround, and when it hits, the entire calling thread ceases to function. Of course, I'd be happier if you fixed the bug rather than fiddling around with priorities, but I'll take what I can get.
To conclude, everything is broken:
- Jenkins' cron implementation is broken (one bad virtual function call can stop all cron activity).
- Java's socket I/O is broken (JDK-8049846).
- poll is broken (documented in the man pages).
- GitHub Enterprise is broken (flaky network interface).
When all of these things break at the same time, your continuous integration server stops continuously integrating.