I want to know why the stub layer is used like a wrapper around the call layer.
lucha has quit
zls joined the channel
zls
hey all, I'm trying to use gRPC for transfer of a large amount of data (gigabytes or so). It appears that if the client doesn't consume the data from the iterator fast enough, it never sees the rest of the data. I guess the client has some sort of buffer where it stores the messages, and if that fills up it just drops messages on the ground
what's the right thing for me to do here? I'm surprised it didn't do the right thing out of the box
this is with grpc 1.2.0
sorry make that 1.10.0
publio
zls: There's a default max message size of around 4MB
Instead of raising it, you could modify your request to ask for a chunk (e.g. a subset of rows), or use plain HTTP if it's a file
zls
publio: I should have been more clear, I'm transferring a 5GB file in 2MB chunks using the "naive" implementation. It looks like the server spits out the full 5GB as fast as it can, regardless of how quickly the client can process it. The client gets left behind and then hangs because it never sees any of the data after the first 1GB, nor the onComplete
publio
zls: You could be running into some sort of other memory limit in go/the os? But it sounds like the lack of error could be a bug
zls
this is actually in java. It's even worse than a lack of error, the client hangs waiting for input that it doesn't understand it's already received
publio
Have you tried fiddling with the vm mem settings?
zls
yeah, but not to great effect unfortunately :(
I've managed to make it work by using onReady, but I'm still surprised that the default behavior was to blow up the client. thanks for your help publio
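For context, the onReady fix zls describes is grpc-java's manual flow-control pattern: cast the response observer to `io.grpc.stub.ServerCallStreamObserver`, register a `setOnReadyHandler`, and only call `onNext` while `isReady()` returns true, so a slow client exerts backpressure instead of being flooded. Below is a minimal sketch of that drain loop. So it can run without the gRPC jar, it uses a hypothetical stand-in interface (`ChunkObserver`) mimicking the relevant slice of `ServerCallStreamObserver`, and a `simulate` driver standing in for gRPC re-firing the handler; those names are illustrative, not part of the gRPC API.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class OnReadySketch {
    // Stand-in for io.grpc.stub.ServerCallStreamObserver, so this sketch
    // compiles without grpc-java on the classpath (hypothetical shim).
    interface ChunkObserver {
        boolean isReady();          // does the transport's send buffer have room?
        void onNext(byte[] chunk);  // enqueue one message for sending
        void onCompleted();         // signal end of stream
    }

    // The onReady drain loop: emit chunks only while the transport is ready,
    // then return. gRPC re-invokes the registered handler once the client
    // catches up, so the server never buffers the whole file ahead of a
    // slow consumer.
    static void drain(ChunkObserver obs, Queue<byte[]> chunks) {
        while (obs.isReady() && !chunks.isEmpty()) {
            obs.onNext(chunks.poll());
        }
        if (chunks.isEmpty()) {
            obs.onCompleted();
        }
    }

    // Simulates gRPC firing the onReady handler each time the client frees
    // up bufferBudget messages' worth of space. Returns total chunks sent.
    static int simulate(int totalChunks, int bufferBudget) {
        Queue<byte[]> chunks = new ArrayDeque<>();
        for (int i = 0; i < totalChunks; i++) chunks.add(new byte[2048]);
        final int[] budget = {0};
        final int[] sent = {0};
        final boolean[] completed = {false};
        ChunkObserver obs = new ChunkObserver() {
            @Override public boolean isReady() { return budget[0] > 0; }
            @Override public void onNext(byte[] c) { budget[0]--; sent[0]++; }
            @Override public void onCompleted() { completed[0] = true; }
        };
        while (!completed[0]) {
            budget[0] = bufferBudget; // client drained some of its buffer
            drain(obs, chunks);       // gRPC would re-fire the handler here
        }
        return sent[0];
    }

    public static void main(String[] args) {
        System.out.println("sent=" + simulate(10, 4));
    }
}
```

In real grpc-java code the same loop body goes inside `serverCallStreamObserver.setOnReadyHandler(...)`, and the key property is the one zls hit: without it, every `onNext` is accepted immediately and queued in server memory regardless of what the client has consumed.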