I haven't found this before.. this works very well on startup,
and containers that were running with Weave before a reboot get a Weave IP back after the reboot. So it seems to work very nicely.
And with the fast datapath I get acceptable performance.. looks really nice now
fons
ok, great, that aligns with what we would expect. We recently made some improvements to make Weave re-attach containers to the network on restarts, see https://github.com/weaveworks/weave/pull/1518
(which is part of Weave 1.2.0)
seti_k: Can you elaborate on your use case? In particular, why wasn't performance acceptable before, and why is it only "acceptable" now?
seti_k: Also, regarding Cockpit: if you are starting containers with it (i.e. not using it just for monitoring) you will need to set its DOCKER_HOST at startup to point to the Weave proxy.
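(Editor's note: a minimal sketch of what "pointing DOCKER_HOST at the Weave proxy" looks like. 12375 is the Weave proxy's default port; the address is an assumption to adjust for your setup, and `eval $(weave env)` achieves the same thing.)

```shell
# Point a Docker client (e.g. the one Cockpit uses) at the Weave proxy
# instead of the local Docker daemon. 12375 is the proxy's default port;
# 127.0.0.1 is a placeholder for wherever the proxy is listening.
export DOCKER_HOST=tcp://127.0.0.1:12375
echo "$DOCKER_HOST"
```

Containers started through this endpoint are attached to the Weave network automatically.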
seti_k
From Cockpit there isn't an option to create new containers, only to manage existing ones: stop / restart / start.
About performance:
If I lose about 95% of the throughput it's not acceptable. I tested it with qperf, so I haven't really tested in a "near production environment".
At the moment I'm still losing about 40%. I don't know exactly why, since it was described as nearly native, but it's a lot better now.
The use case is to cluster some hosts for Tomcat clustering with cache / session replication,
and connections to Solr servers or clusters on some other hosts.
Sorry for my typos today..
dpw
Hi seti_k
You are losing 40% network throughput compared to host-host performance?
seti_k
Yes.. one moment.. I'll retest it
[root@fedsrv01 ~]# qperf fedbox2 tcp_bw tcp_lat
tcp_bw:
bw = 228 MB/sec
tcp_lat:
latency = 90.9 us
[root@fedsrv01 ~]# docker run --rm -ti arjanschaaf/centos-qperf qperf1 tcp_bw tcp_lat
tcp_bw:
bw = 108 MB/sec
tcp_lat:
latency = 96 us
These are my 2 tests.
Is there something wrong?
The 2 boxes are both VirtualBox VMs on the same host.
The latency seems really good.
dpw
How is the CPU use on that host while the test runs? With fast datapath, weave CPU use is lower than it was, but it is still higher than host networking. Is it possible that the CPU is the bottleneck in the weave test?
seti_k
On the host - host test I got a max CPU usage of 36%.
On the docker - weave - docker test I got a max CPU usage of about 45%
on the host.
Do you think I'd get different values in a different environment?
I can push something to the data center to test it,
but my first tests are always on my workstation.
dpw
If you are testing between virtual machines on a single host, then the virtual network may be passing 64KB segments between the VMs (I'm not sure whether this is true for virtualbox or not). So the vxlan might be having a bigger impact there than it does on a real network. Results on a real network might be different.
seti_k
Ok, I'm trying ;)
But what is a "real network" today? ;) Most of them are also virtual.
dpw
We've had some other reports where there is a big gap between host-host throughput and weave fast datapath throughput. We're looking into them.
seti_k
I'll get some results in a few minutes..
Hm ok, same thing in the DC:
2 Fedora boxes on 2 different hardware machines.
host - host:
qperf fedbox02 tcp_bw tcp_lat
tcp_bw:
bw = 777 MB/sec
tcp_lat:
latency = 115 us
docker - weave - docker :
docker run --rm -ti arjanschaaf/centos-qperf qperf1 tcp_bw tcp_lat
tcp_bw:
bw = 255 MB/sec
tcp_lat:
latency = 137 us
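(Editor's note: the relative bandwidth loss implied by the qperf figures quoted above can be computed directly; this is a quick sketch using only the numbers from the two test runs.)

```shell
# Throughput loss of the docker-weave-docker path relative to host-host,
# from the qperf tcp_bw results quoted above.
awk 'BEGIN {
  printf "virtualbox: %.1f%% loss\n", (228 - 108) / 228 * 100
  printf "datacenter: %.1f%% loss\n", (777 - 255) / 777 * 100
}'
```

So the measured gap is noticeably larger than the "about 40%" estimated earlier, and larger still on real hardware in the DC.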
dpw
Hmmm
seti_k
But it's a lot faster than in earlier releases without fast datapath :)
Any hints? Other testing methods? Or something else I could test?
dpw
I don't have anything to suggest right now. Check back tomorrow?
seti_k
I'm in here tomorrow, so tell me if there is something to test ;)