Today JumpToApplicationThreadServerStreamListener leaks server state by transmitting details about uncaught StatusRuntimeException throwables to the client. This is a security problem.
This PR ensures that uncaught exceptions always close the ServerCall without leaking any state information. Users running in a trusted environment who want to transmit error details can install the TransmitStatusRuntimeExceptionInterceptor.
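For reference, a minimal sketch of opting back in to transmitting error details in a trusted environment; `MyServiceImpl` is a hypothetical service implementation, but the interceptor API is the one named above:
```java
import io.grpc.Server;
import io.grpc.ServerBuilder;
import io.grpc.ServerInterceptors;
import io.grpc.util.TransmitStatusRuntimeExceptionInterceptor;

public class TrustedServer {
  public static void main(String[] args) throws Exception {
    Server server = ServerBuilder.forPort(50051)
        // Opt in explicitly: uncaught StatusRuntimeExceptions thrown by the
        // service are transmitted to the client instead of being masked.
        .addService(ServerInterceptors.intercept(
            new MyServiceImpl(), // hypothetical service implementation
            TransmitStatusRuntimeExceptionInterceptor.instance()))
        .build()
        .start();
    server.awaitTermination();
  }
}
```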
Fixes #2189
Interning strings isn't needed much anymore since keys are now
compared by byte value. Also, intern is slow (about 150 ns per call),
which affects code that creates keys often. Lastly, older versions of
Java don't garbage collect interned strings, which hurts application
stability.
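As a rough illustration of byte-value comparison (a hypothetical class, not the actual Metadata.Key implementation):
```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

/** Hypothetical sketch: comparing names by their ASCII bytes makes
 *  String.intern() (and its identity comparison) unnecessary. */
final class KeySketch {
  private final byte[] nameBytes;

  KeySketch(String name) {
    // Store the bytes once; no intern() call, no reliance on the intern pool.
    this.nameBytes = name.getBytes(StandardCharsets.US_ASCII);
  }

  @Override
  public boolean equals(Object o) {
    return o instanceof KeySketch
        && Arrays.equals(nameBytes, ((KeySketch) o).nameBytes);
  }

  @Override
  public int hashCode() {
    return Arrays.hashCode(nameBytes);
  }
}
```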
The current implementation has a bug where certain methods are not forwarded to the delegate.
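As a generic illustration of the forwarding pattern involved (the wrapped class here is hypothetical, not the one fixed by this PR), every overridden method must call through to the delegate:
```java
import io.grpc.ClientCall;
import io.grpc.ForwardingClientCall.SimpleForwardingClientCall;

/** Hypothetical forwarding wrapper: any overridden method that does not call
 *  through to the delegate silently drops behavior, which is the kind of bug
 *  this change fixes. */
final class LoggingClientCall<ReqT, RespT> extends SimpleForwardingClientCall<ReqT, RespT> {
  LoggingClientCall(ClientCall<ReqT, RespT> delegate) {
    super(delegate);
  }

  @Override
  public void sendMessage(ReqT message) {
    System.out.println("sending message");
    super.sendMessage(message); // must forward to the delegate
  }
}
```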
This is essentially the same as e4f1f39, which was merged to the v1.4.x branch. This PR uses the new license header.
Fixes #3061
The current check in ServerCallImpl is theoretically unsafe (#3059). Move that check into the stub, and expand the unit tests to cover other interesting edge cases on the server side (a sketch of the stub-side check follows the list):
- client sends one, but zero requests received at onHalfClose
- client sends one, but > 1 requests received at onHalfClose
- server sends one, but zero responses sent at onComplete
- server sends one, but > 1 responses sent via onNext
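A rough sketch of the kind of stub-side check involved (simplified and hypothetical, not the actual ServerCalls code):
```java
import io.grpc.Metadata;
import io.grpc.ServerCall;
import io.grpc.Status;

/** Hypothetical listener enforcing "exactly one request" for a unary call. */
final class UnaryRequestListener<ReqT, RespT> extends ServerCall.Listener<ReqT> {
  private final ServerCall<ReqT, RespT> call;
  private ReqT request;

  UnaryRequestListener(ServerCall<ReqT, RespT> call) {
    this.call = call;
  }

  @Override
  public void onMessage(ReqT message) {
    if (request != null) {
      // More than one request for a unary method: fail the call in the stub.
      call.close(Status.INTERNAL.withDescription("Too many requests"), new Metadata());
      return;
    }
    request = message;
  }

  @Override
  public void onHalfClose() {
    if (request == null) {
      // Client half-closed without ever sending a request.
      call.close(Status.INTERNAL.withDescription("Half-closed without a request"),
          new Metadata());
      return;
    }
    // ... invoke the application method with `request` and send the response ...
  }
}
```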
Fixes #2243
Fixes #3059
Add some example tests for the simpler fields in AbstractManagedChannelImplBuilder.
Many fields are no longer Nullable, in order to move logic from construction to
mutation, which eases testing and simplifies cross-class interactions.
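A tiny hypothetical sketch of the pattern (not the actual builder fields):
```java
/** Hypothetical builder fragment: a non-null default at declaration plus any
 *  special handling in the setter keeps the constructor trivial, so each
 *  setter can be exercised in isolation by a small unit test. */
final class BuilderSketch {
  // Previously this might have been a @Nullable field resolved at construction.
  private String userAgent = "";

  BuilderSketch userAgent(String userAgent) {
    // Logic lives at mutation time rather than construction time.
    this.userAgent = userAgent == null ? "" : userAgent;
    return this;
  }
}
```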
The nameResolverFactory comment starting "Avoid loading the provider unless
necessary" was outdated; that has not been a concern since #2071, which
swapped to a hard-coded list on Android.
We don't want people to be desensitized to the Internal annotation.
NameResolverProvider should probably have been ExperimentalApi from the
start, but it was copied from ManagedChannelProvider.
Theoretically, ManagedChannelProvider could be marked ExperimentalApi as
well, but there are no known use cases for doing so, since making an
alternative ManagedChannel implementation is quite difficult and we
aren't aware of anybody interested in doing so. That isn't the case for
NameResolverProvider; NameResolverProvider is core to 'targets' being
useful.
Creating the SslContext can throw, generally due to broken ALPN. We want
that to propagate to the caller of build() instead of failing inside the
channel, where it could easily cause hangs.
We still delay creation until build() time, since TLS is not guaranteed
to work and the application may later configure plaintext or similar
before calling build(), in which case the SslContext is unnecessary.
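A rough sketch of the intent (a hypothetical builder, not the real NettyChannelBuilder):
```java
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import javax.net.ssl.SSLException;

/** Hypothetical builder illustrating the intent: SslContext creation is still
 *  deferred until build(), but a failure (e.g. broken ALPN) surfaces to the
 *  caller of build() rather than later inside the channel. */
final class TlsChannelBuilderSketch {
  private boolean useTls = true;

  TlsChannelBuilderSketch usePlaintext() {
    this.useTls = false;
    return this;
  }

  Object build() {
    SslContext sslContext = null;
    if (useTls) {
      try {
        // Created eagerly here so the exception propagates out of build().
        sslContext = SslContextBuilder.forClient().build();
      } catch (SSLException e) {
        throw new RuntimeException("Failed to create SslContext", e);
      }
    }
    return createTransport(sslContext); // hypothetical
  }

  private Object createTransport(SslContext sslContext) {
    return new Object();
  }
}
```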
The only externally-visible change should be the exception handling.
I'd add a test, but the things throwing are static and trying to inject
them would be pretty messy.
Fixes #2599
Migrated simple tests to use `GrpcServerRule`.
Kept the low-level in-process channel setup and teardown code for the RouteGuide example, to show how users can use in-process transport directly to set more custom channel builder options when needed.
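For reference, a migrated test can look roughly like this; `GreeterGrpc` and `GreeterImpl` stand in for the hello-world example's generated stub and service:
```java
import io.grpc.testing.GrpcServerRule;
import org.junit.Rule;
import org.junit.Test;

public class HelloWorldServerTestSketch {
  // Starts an in-process server and channel per test, torn down automatically.
  @Rule
  public final GrpcServerRule grpcServerRule = new GrpcServerRule().directExecutor();

  @Test
  public void greet() {
    // Register the service under test against the rule's registry.
    grpcServerRule.getServiceRegistry().addService(new GreeterImpl());
    // Build a stub against the rule's in-process channel and exercise it.
    GreeterGrpc.GreeterBlockingStub stub =
        GreeterGrpc.newBlockingStub(grpcServerRule.getChannel());
    // ... call stub methods and assert on the responses ...
  }
}
```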
Resolves #2490
Calling CancellableContext#close() multiple times does not cause problems, but is unnecessary here. The return value of getListener() is a ServerStreamListenerImpl, which already calls CancellableContext#cancel() in the finally block of ServerStreamListener#closed().
When a cancellation happens, the ServerCall and Context get notified. Rather than serializing on the normal work queue (which may be doing user computation), we should execute the notification immediately, thereby allowing the user computation to see the cancellation.
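A simplified, hypothetical sketch of the idea (not the actual ServerImpl code):
```java
import io.grpc.Context;
import io.grpc.Status;
import java.util.concurrent.Executor;

/** Hypothetical sketch: propagate cancellation to the Context immediately,
 *  while the listener callback still goes through the serialized executor. */
final class CancellationSketch {
  private final Executor callExecutor;                  // serializes user work
  private final Context.CancellableContext callContext; // per-call context
  private final Runnable onCancelCallback;              // e.g. listener::onCancel

  CancellationSketch(Executor callExecutor,
                     Context.CancellableContext callContext,
                     Runnable onCancelCallback) {
    this.callExecutor = callExecutor;
    this.callContext = callContext;
    this.onCancelCallback = onCancelCallback;
  }

  void transportCancelled(Status status) {
    // Cancel immediately so user code already running on callExecutor can
    // observe Context.current().isCancelled() without waiting in the queue.
    callContext.cancel(status.asRuntimeException());
    // Deliver the listener notification in order with other call events.
    callExecutor.execute(onCancelCallback);
  }
}
```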
The code uses std::back_insert_iterator, which is declared in the <iterator>
header, but the header is not included.
We (Bazel) noticed the build failing on Windows when using the Visual Studio
C++ compiler.
This is a minor change setting the size of data frames sent when
interleaving RPCs. The size was ~1024 bytes previously, which
resulted in the `writev` syscalls sending many smaller chunks
before hitting the low-water mark. The net effect of this change is
larger calls to `writev`, as seen with strace.
The effect is noticeable when sending a lot of data: when sending
as many 1 MB messages as possible, it nearly doubles the rate.
Before:
```
INFO: single throughput GRPC
50.0%ile Latency (in nanos): 280856575
90.0%ile Latency (in nanos): 349618175
95.0%ile Latency (in nanos): 380444671
99.0%ile Latency (in nanos): 455172095
99.9%ile Latency (in nanos): 537198591
100.0%ile Latency (in nanos): 566886399
QPS: 346
Count: 103984
```
After:
```
gRPC
50.0%ile Latency (in nanos): 125948927
90.0%ile Latency (in nanos): 166322175
95.0%ile Latency (in nanos): 177276927
99.0%ile Latency (in nanos): 193840127
99.9%ile Latency (in nanos): 226841599
100.0%ile Latency (in nanos): 256110591
QPS: 774
Count: 232340
```