[FrameworkBenchmarks] gerardroche opened pull request #1148: Fix zend framework 2 db multi queries should default to min/max 1/500 (master...zf2-fix-db-multi) http://git.io/evWFig
[FrameworkBenchmarks] donovanmuller opened pull request #1149: Added missing tests, new configuration with HikariCP and bumped Play version (master...master) http://git.io/LkP3WA
[FrameworkBenchmarks] zloster opened pull request #1150: Dropwizard/plaintext resource now returning precomputed byte array (master...dropwizard-plaintext) http://git.io/nO6WIg
[FrameworkBenchmarks] msmith-techempower closed pull request #1149: Added missing tests, new configuration with HikariCP and bumped Play version (master...master) http://git.io/LkP3WA
[FrameworkBenchmarks] msmith-techempower pushed 3 new commits to master: http://git.io/ORMKag
FrameworkBenchmarks/master 4fa160b Tor Hveem: Lapis: Fix object inside of array
FrameworkBenchmarks/master 0ee1591 Tor Hveem: Lapis: split /db and /queries into different handlers
FrameworkBenchmarks/master e0e8fd1 Mike Smith: Merge pull request #1138 from torhve/lapis...
Grrr. Did tons of data collection on wsgi, was really confused by the sad performance numbers (like 100 requests/second). Turns out it's internally killing/restarting worker processes like crazy
alex__
yikes
alex__ is now known as alex-techempower
xt
like a prefork model?
hamiltont
no, like an "every other worker was throwing an exception and dying" model :-P
Think that's a new paradigm for serving web requests
GHellings
It's an AntiPattern known as the Guillotine Pattern. Threaten to behead some of the threads and others will respond by acting more quickly.
hamiltont
lols
alex-techempower
decimation pattern? 9 out of 10 threads to ruthlessly murder the 10th?
hamiltont
alex-techempower: PS - if your team is waiting on gh#1145 at all, please just merge it and fix up the last two problems yourself. I can help on IRC but no time at the moment to update the PR
alex-techempower: ha ha, that problem is one of the ones I fixed in 1145
alex-techempower
ugh
hamiltont
look at line 684
there's your problem ;-)
In all seriousness, you should merge 1145 before you update any of the __forcibly... stuff
or at the least consult the PR and duplicate my work
alex-techempower
hamiltont: all right
I'll try to get that done today
hamiltont just compressed 11GB of TFB result data into 300MB. Thank the lord for compression!
how?
well, I know the bulk of it was a couple of problem frameworks
hamiltont
how what?
alex-techempower
how'd you compress it?
hamiltont
oh just gzip
alex-techempower
oh
hamiltont
nothing fancy
alex-techempower
you didn't change the output or anything
to make it better
hamiltont
just lots of repeated strings in the various raw files and results.json files
nah
just needed to download it all to my computer for plotting and stuff
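An 11GB-to-300MB ratio is believable for that kind of data: wrk raw output and results.json files repeat the same framework names, header lines, and latency columns over and over, which is exactly what DEFLATE eats for breakfast. A quick stdlib sketch (the sample JSON line is made up, not actual TFB output) shows how well repeated lines compress under plain gzip:

```python
import gzip

# Highly repetitive text, like raw benchmark logs with the same
# line shape repeated thousands of times, compresses extremely well.
raw = ('{"framework": "lapis", "requests_per_sec": 35500}\n' * 10000).encode()

compressed = gzip.compress(raw)
ratio = len(raw) / len(compressed)
print(f"{len(raw)} -> {len(compressed)} bytes ({ratio:.0f}x)")
```

Nothing fancy needed: no custom dictionary, no output-format changes, just gzip over the existing files.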
alex-techempower
hey, did you ever hear back from travis re capacity issues?
hamiltont
After the initial email+response, no
I suspect they would contact us again before any hard-kill, but I'm trying to cancel unneeded stuff where I can to be safe
gh#1029 is actually pretty simple to implement in the code, but it will likely have all sorts of tricky cascade effects (things like log output limits being hit unexpectedly), so I just don't have time to do that + debug
Does anyone know what happens when you give nginx way too many workers? Like, 3 to 10 times the number of cores?
I can't find any info on if this is handled gracefully (e.g. it's not a negative, there's just no benefit) or if you will see a performance drop
xt
think there will be just overhead
but no insane drop either
just what i think though :-)
only tried with a few too many
hamiltont
that's what I think too, but there's no hard data to be found
hamiltont dances happy dance
Finally figured out the bug causing my docker branch all kinds of pain
yay
Turns out it was mostly not running git clean before ADDing files to the container. All those files added inside various TROOT places have a way of screwing up builds when you suddenly change the entire OS :-p
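The failure mode here is generic to Docker: `ADD`/`COPY` pick up whatever is sitting in the build context, including untracked build artifacts left over from native runs, so an image built from a dirty tree inherits binaries compiled for the wrong OS. A sketch of the fix (paths and base image are hypothetical, not the actual branch):

```dockerfile
# Hypothetical sketch. ADD copies the entire build context, so any stale
# artifacts under the framework (TROOT) directories end up in the image.
# Run `git clean -xdf` (remove untracked AND ignored files) in the repo
# before `docker build` so only tracked sources get ADDed.
FROM ubuntu:14.04
ADD . /FrameworkBenchmarks
WORKDIR /FrameworkBenchmarks
```

A `.dockerignore` covering the build-output directories would catch the same problem without needing a clean step before every build.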
xt: Some initial numbers show that pinning nginx to 1 core and using 24 workers gives me 33k requests/sec, whereas using 1 worker gives 35.5k requests/sec
so there's a difference that's non trivial in the absolute, but fairly trivial as a percentage of total requests / sec
rough guess is at most 10% performance loss if you give it way too many workers
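Those numbers fit nginx's event model: each worker runs its own event loop, so 24 workers contending for one core mostly just add scheduler churn rather than any hard failure. A minimal sketch of the two configurations being compared (exact pinning mechanism, taskset vs. the affinity directive, is an assumption):

```nginx
# The ~35.5k req/s case: one worker, pinned to a single core.
worker_processes    1;
worker_cpu_affinity 0001;   # Linux-only directive

events {
    worker_connections 16384;
}

# The ~33k req/s case oversubscribed the same core:
# worker_processes 24;   # 24 event loops contending for one core
```

Common guidance is `worker_processes auto;` (one worker per core); going far above that buys nothing and, per the numbers above, costs up to roughly 10%.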