Stressing the Plumber

Reproduced it with only 3 API containers and only 10 concurrent users:

It took a fair amount of testing to reproduce it again; is it possible I'm getting buildup somewhere?

To trigger it, I needed to use a URL file with a good number of bad requests (502 errors, 500 errors, you name it); successful requests don't seem to cause it.
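For anyone trying to reproduce: a siege URLs file is just one URL per line. The hosts and paths below are placeholders for illustration, not my actual endpoints:

```
http://api.example.com/endpoint-that-500s
http://api.example.com/endpoint-that-502s
http://api.example.com/healthy-endpoint
```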

So I used something containing mostly bad requests and ended up with this:

siege aborted due to excessive socket failure; you
can change the failure threshold in $HOME/.siegerc

I updated my failure threshold to 15000 and then was able to reproduce the error.

After that, the error seemed easier to reproduce.
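For reference, the setting I changed in $HOME/.siegerc (assuming I'm remembering the directive name correctly) was:

```
# abort the run only after this many socket failures
failures = 15000
```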

siege -c10 -b -i -furl_for_siege.txt -lsiege.log
** SIEGE 3.0.8
** Preparing 10 concurrent users for battle.
The server is now under siege...
[Wed, 2018-08-08 20:47:27] HTTP/1.1 200   0.38 secs:   41802 bytes ==> GET  http://xxxxx
[Wed, 2018-08-08 20:47:27] HTTP/1.1 200   0.47 secs:   41802 bytes ==> GET  http://xxxxx
[alert] socket: 1342109440 select timed out: Connection timed out
[alert] socket: 1325324032 select timed out: Connection timed out
[alert] socket: 1316931328 select timed out: Connection timed out
[alert] socket: 1333716736 select timed out: Connection timed out
[alert] socket: 1300145920 select timed out: Connection timed out
[alert] socket: 1350502144 select timed out: Connection timed out
[alert] socket: 1291753216 select timed out: Connection timed out
[alert] socket: 1283360512 select timed out: Connection timed out
[alert] socket: 1308538624 select timed out: Connection timed out
[alert] socket: 1358894848 select timed out: Connection timed out

Are bad requests using up sockets?
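One way to check would be to count sockets by state on the host while the test runs; a pile-up of CLOSE-WAIT or TIME-WAIT would point at connections not being cleaned up. A sketch (the heredoc stands in for real `ss -tan` output so it can be run anywhere; on a live box, pipe `ss -tan` into the same awk instead):

```shell
# Tally TCP connections by state. A growing CLOSE-WAIT count would suggest
# the application side never closes sockets the peer has already shut down.
# The heredoc simulates `ss -tan` output for a self-contained example.
awk 'NR > 1 { n[$1]++ } END { for (s in n) print s, n[s] }' <<'EOF'
State      Recv-Q Send-Q Local-Address:Port Peer-Address:Port
ESTAB      0      0      10.0.0.5:8000      10.0.0.9:51200
CLOSE-WAIT 0      0      10.0.0.5:8000      10.0.0.9:51201
CLOSE-WAIT 0      0      10.0.0.5:8000      10.0.0.9:51202
TIME-WAIT  0      0      10.0.0.5:8000      10.0.0.9:51203
EOF
```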
