For Bazel, we upgrade to protobuf 3.6.1.2 and javalite HEAD to fix
incompatibilities in newer Bazel releases.
compiler/Dockerfile is unused, so it was removed instead of being updated.
protoc no longer includes codegen for nano, so we remain on the older protoc
any time nano is used.
Protobuf now requires C++11 when compiling, so Windows was swapped to
VC 14.
This will allow enabling Error Prone on JDK 10+ (after
updating the net.ltgt.errorprone plugin), and is also a
prerequisite to that plugin update.
Also remove the net.ltgt.apt plugin, as Gradle has native
support for annotationProcessor.
This PR adds an automatic Gradle format checker and reformats all the *.gradle files. After this, new changes to *.gradle files will fail the build if not properly formatted, just like a checkStyle failure.
Using NIO does not imply using Jetty ALPN; the only reason to use Jetty
ALPN is to test OkHttp. We don't need to disable ciphers to test Java 7
(except for OkHttp, which we don't care about on Java 7 and which wasn't
plumbed already), and we _really_ don't want people to copy the code
that does so. useTransportSecurity()/usePlaintext() are preferred over
the transport-specific NegotiationType.
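A minimal sketch of the preferred style (the host, port, and certificate file names are hypothetical):
```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.Server;
import io.grpc.ServerBuilder;
import java.io.File;
import java.io.IOException;

class TransportSecurityExample {
  static ManagedChannel newChannel() {
    // Transport-agnostic TLS, instead of the Netty-specific
    // negotiationType(NegotiationType.TLS).
    return ManagedChannelBuilder.forAddress("example.com", 443)
        .useTransportSecurity()
        .build();
  }

  static Server newServer() throws IOException {
    // Transport-agnostic TLS on the server side as well.
    return ServerBuilder.forPort(50051)
        .useTransportSecurity(new File("server.crt"), new File("server.key"))
        .build()
        .start();
  }
}
```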
This moves away from the global String-based Span name registry which
is not as flexible as we desire.
Also renamed the option to be more accurate. This is not
API-breaking because the original addition to MethodDescriptor and
code-gen didn't make it into the 1.7.0 release.
* MethodDescriptor is lazy loaded, so protobuf loading only happens on demand. This also means tracing registration happens on demand.
* The names of the getters all begin with `method`. This makes it harder for autocomplete to pick them up.
* A new field is used, which matches the getter name. Rather than make the new-getters reference the old-fields, make the old-fields reference the new getters. This makes removal of the old-fields a simple operation.
* The getters may not be inlineable, but that's an easy fix if it ends up being a problem. Not worth premature optimization (but it is worth future work).
The expected timeline for this is adding this to the 1.8 cut, and deprecating the old-fields. They will be removed in 1.9.
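A hedged sketch of the generated-code layout described above; the Echo names, the message classes, and the buildEchoMethod() helper are hypothetical placeholders:
```java
import io.grpc.MethodDescriptor;

final class EchoServiceGrpc {
  static final class EchoRequest {}
  static final class EchoResponse {}

  // Old field, deprecated in 1.8: it now just references the new getter,
  // so removing it in 1.9 is a one-line deletion. Code still using the
  // field forces creation at class load; once the field is gone, creation
  // is fully on demand.
  @Deprecated
  public static final MethodDescriptor<EchoRequest, EchoResponse> METHOD_ECHO = getEchoMethod();

  private static MethodDescriptor<EchoRequest, EchoResponse> echoMethod;

  // New getter, matching the new field name; the descriptor (and its
  // tracing registration) is only built on first demand.
  public static synchronized MethodDescriptor<EchoRequest, EchoResponse> getEchoMethod() {
    if (echoMethod == null) {
      echoMethod = buildEchoMethod();
    }
    return echoMethod;
  }

  private static MethodDescriptor<EchoRequest, EchoResponse> buildEchoMethod() {
    throw new UnsupportedOperationException("placeholder for the generated marshaller plumbing");
  }
}
```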
This is a more favorable approach than #3467. Doing the registration
in MethodDescriptor should allow us to deregister in case the
generated stub and its MethodDescriptors are garbage-collected
routinely, e.g., if they are loaded by a separate ClassLoader.
The benchmarks should be close to the code they're benchmarking, like
we do with tests.
This includes a bugfix to SerializingExecutorBenchmark to let it run.
The io.grpc.benchmarks.netty benchmarks in benchmarks/ depend on
ByteBufOutputMarshaller from benchmarks's main, so they were not moved.
This bump changelist is applied a bit late with respect to the
1.6.0 branch cut. Look at the 1.6.0 branch to see the source of truth
for where it was cut. Do not assume it is the commit that precedes
this one.
```
:grpc-benchmarks:compileJava
C:\jenkins\workspace\gRPC-Java-PR-Windows\benchmarks\src\main\java\io\grpc\benchmarks\qps\AbstractConfigurationBuilder.java:58: warning: [MissingOverride] getDefaultValue implements method in Param; expected @Override
public String getDefaultValue() {
^
(see http://errorprone.info/bugpattern/MissingOverride)
Did you mean '@Override public String getDefaultValue() {'?
error: warnings found and -Werror specified
1 error
1 warning
FAILED
FAILURE: Build failed with an exception.
```
Moved the following APIs from `io.grpc.testing.TestUtils` to `io.grpc.internal.TestUtils`:
`InetSocketAddress testServerAddress(String host, int port)`
`InetSocketAddress testServerAddress(int port)`
`List<String> preferredTestCiphers()`
`File loadCert(String name)`
`X509Certificate loadX509Cert(String fileName)`
`SSLSocketFactory newSslSocketFactoryForCa(Provider provider, File certChainFile)`
`void sleepAtLeast(long millis)`
APIs not to be moved:
`ServerInterceptor recordRequestHeadersInterceptor()`
`ServerInterceptor recordServerCallInterceptor()`
* Upgrade netty to 4.1.11.Final
* Upgrade netty-tcnative to 2.0.1.Final
* Remove `FixedHttp2ConnectionDecoder` as it's no longer needed
* Use new, extensible `DefaultHttp2HeadersDecoder` for custom headers handling
Background
==========
LoadBalancer needs to track RPC measurements and status for
load-reporting, so we need to introduce a "Tracer" API for that.
Since such an API is very close to the current
Census(instrumentation)-based stats reporting mechanism in terms of
what is recorded, we will migrate the Census-based stats reporting
under the new Tracer API.
Alternatives
============
We considered plumbing the LB-related information from the LoadBalancer
to the core, and recording that information along with the currently
recorded stats to Census. The LB-related information, such as the LB_ID
and the reason for dropping requests, would be added to the Census
StatsContext as tags.
Since tags are held by StatsContext before eventually being recorded by
providing the measurements, and StatsContext is immutable, this would
require a way for LoadBalancer to override the StatsContext, which means
the LoadBalancer API would have a direct reference to the Census
StatsContext. This is undesirable because the Census API is not stable yet.
Part of the LB-related information is whether the client has received
the initial headers from the server. While such information can be
grabbed by implementing a ClientInterceptor, it must be recorded along
with other information such as LB_ID to be useful, and LB_ID is only
available in GrpclbLoadBalancer.
Bottom line: trying to use solely the Census StatsContext API to record
LB load information would require extra data-plumbing channels between
ClientInterceptor, LoadBalancer and the gRPC core, as well as exposing
the Census API on the gRPC API. Even with those extensive changes, we
have yet to find a working solution. Therefore, we abandoned this idea
and propose this PR.
Summary of changes
==================
API summary
-----------
Introduce "StreamTracer" API, a callback interface for receiving stats
and tracing related updates concerning **a single stream**.
"ClientStreamTracer" and "ServerStreamTracer" add side-specific
events. A stream can have zero or more tracers and report to all of
them.
On the client-side, CallOptions now takes a list of
ClientStreamTracer.Factory. Upon creating a ClientStream, each of the
factories creates a ClientStreamTracer for the stream. This allows
ClientInterceptors to install their own tracer factories by overriding
the CallOptions.
Since a StreamTracer only tracks the span of a stream, tracking of a
ClientCall needs to be done in a ClientInterceptor. By installing its
own StreamTracer when a ClientCall is created, a ClientInterceptor can
associate the updates for a Call with the updates for the Streams
created for that Call. This is how we keep the existing Census
reporting mechanism in CensusStreamTracerModule.
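A minimal sketch of this pattern; the factory callback signature is the one from the initial version of the API and may have evolved since:
```java
import io.grpc.CallOptions;
import io.grpc.Channel;
import io.grpc.ClientCall;
import io.grpc.ClientInterceptor;
import io.grpc.ClientStreamTracer;
import io.grpc.Metadata;
import io.grpc.MethodDescriptor;
import io.grpc.Status;

final class TracingClientInterceptor implements ClientInterceptor {
  private final ClientStreamTracer.Factory tracerFactory = new ClientStreamTracer.Factory() {
    @Override
    public ClientStreamTracer newClientStreamTracer(Metadata headers) {
      return new ClientStreamTracer() {
        @Override
        public void streamClosed(Status status) {
          // Record the final status of this single stream.
        }
      };
    }
  };

  @Override
  public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(
      MethodDescriptor<ReqT, RespT> method, CallOptions callOptions, Channel next) {
    // Appending the factory gives every stream created for this call a tracer.
    return next.newCall(method, callOptions.withStreamTracerFactory(tracerFactory));
  }
}
```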
On the server-side, ServerStreamTracer.Factory is added through the
ServerBuilder, and is used to create ServerStreamTracers for every
ServerStream.
The Tracer API supports propagation of stats/tracing information
through the Context and metadata. Both client-side and server-side
tracer factories have access to the headers object. The client-side
tracer relies on an interceptor to read the Context, while the
server-side tracer has a filterContext() method that can override the
Context.
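A sketch of the server-side hookup; the Context key is hypothetical:
```java
import io.grpc.Context;
import io.grpc.Metadata;
import io.grpc.ServerStreamTracer;

final class MethodNameTracerFactory extends ServerStreamTracer.Factory {
  // Hypothetical Context key, for illustration only.
  static final Context.Key<String> FULL_METHOD_NAME = Context.key("full-method-name");

  @Override
  public ServerStreamTracer newServerStreamTracer(String fullMethodName, Metadata headers) {
    return new ServerStreamTracer() {
      @Override
      public Context filterContext(Context context) {
        // Override the Context that the application sees for this stream.
        return context.withValue(FULL_METHOD_NAME, fullMethodName);
      }
    };
  }
}
// Installed once per server:
//   serverBuilder.addStreamTracerFactory(new MethodNameTracerFactory());
```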
Implementation details
----------------------
Only real streams report stats. Pseudo streams, such as the delayed
stream and the failing stream, don't report. InProcess transport
streams currently don't report stats.
"StatsTraceContext" which used to receive updates from core and report
directly to Census (StatsContext), now delegates to the StreamTracers of
a stream. On the client-side, the scope of a StatsTraceContext reduces
from ClientCall to a ClientStream to match the scope of StreamTracer.
The Census-specific logic that was in StatsTraceContext is moved into
CensusStreamTracerModule, which produces factories for StreamTracers
that report to Census.
Reporting with StatsTraceContext is moved out of the Channel/Call layer
into Transport/Stream layer, to match the scope change of
StatsTraceContext.
Bug fixed
---------
The end of a server-side call was reported in ServerCallImpl's
ServerStreamListenerImpl.closed(), which was wrong, because closed()
receiving OK doesn't necessarily mean the RPC ended with OK. Instead it
means the server has successfully sent the final status, which may be
non-OK, to the client.
Now the end of the call is reported both in ServerStream.close(any
Status) and before calling ServerStreamListener.closed(non-OK).
Whichever happens first provides the reported status.
TODOs
=====
A follow-up change to the LoadBalancer API will add a
ClientStreamTracer.Factory to the PickResult to complete the API needed
by load-reporting.
This removes a needless warning, and isn't much slower. Also this
includes a benchmark for StatsTraceContext to measure the overhead
for creation. It adds about 40ns per RPC. Optimization will come
after structural changes are made to break the dependency on
Census.
This appears to have been broken by 3df1446 (which was reverted and
later rolled forward again in 66ab956).
Without this fix, the ServerServiceDefinition.Builder realizes that a
method is registered that isn't in the ServiceDescriptor. Swapping to a
different constructor causes the builder to generate the
ServiceDescriptor for us.
java.lang.IllegalStateException: No entry in descriptor matching bound method E6Cq77iKGNKVCGyVOqq8DqEazX9AcBdPNoMj86c3I5zo4Tv77U/vLe7QS7mhUfaooN7eYdBW7gd9oyV.kc9I0zJumfuUbhyb7SR1u
at io.grpc.ServerServiceDefinition$Builder.build(ServerServiceDefinition.java:164)
at io.grpc.benchmarks.netty.HandlerRegistryBenchmark.setup(HandlerRegistryBenchmark.java:107)
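For reference, a hedged sketch of the builder swap; the service name and handler plumbing are hypothetical:
```java
import io.grpc.MethodDescriptor;
import io.grpc.ServerCallHandler;
import io.grpc.ServerServiceDefinition;

final class RegistrySetup {
  // builder(String) derives the ServiceDescriptor from the methods added to
  // the builder, so a generated method name can never mismatch the
  // descriptor. builder(ServiceDescriptor) instead validates every bound
  // method against the pre-built descriptor, which is what threw above.
  // The method's full name must still belong to the named service.
  static ServerServiceDefinition buildService(
      MethodDescriptor<byte[], byte[]> method, ServerCallHandler<byte[], byte[]> handler) {
    return ServerServiceDefinition.builder("benchmark.service")
        .addMethod(method, handler)
        .build();
  }
}
```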
ErrorProne provides static analysis for common issues, including
misuse of variables guarded by locks (@GuardedBy).
This increases build time by 60% for parallel builds and 30% for
non-parallel, so I've provided a way to disable the check. It is on by
default though and will be run in our CI environments.
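For illustration, a minimal example of the kind of @GuardedBy misuse the check catches:
```java
import javax.annotation.concurrent.GuardedBy;

class Counter {
  private final Object lock = new Object();

  @GuardedBy("lock")
  private int count;

  void increment() {
    synchronized (lock) {
      count++;  // OK: the guarding lock is held.
    }
  }

  int read() {
    return count;  // Error Prone: [GuardedBy] access should be guarded by 'lock'.
  }
}
```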
Futures should almost universally be handled in some way when
returned, either to receive the value or to cancel scheduled tasks to
prevent leaks.
Netty is a bit of a special case though, since it constantly returns
futures that you ignore (even adding a listener returns the "this"
future). So we want to suppress the warning for code using Netty instead
of trying to fix it. When we enable ErrorProne in the build, we should
start passing -Xep:FutureReturnValueIgnored:OFF in the compilerArgs.
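A small example of the leak this check guards against; the names are hypothetical:
```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

class TimeoutExample {
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  ScheduledFuture<?> scheduleTimeout(Runnable onTimeout) {
    // If a caller dropped this return value, the pending task could never
    // be cancelled; that is exactly what FutureReturnValueIgnored flags.
    return scheduler.schedule(onTimeout, 10, TimeUnit.SECONDS);
  }

  void onRpcCompleted(ScheduledFuture<?> timeout) {
    // Handling the future: cancel the scheduled task to prevent the leak.
    timeout.cancel(false);
  }
}
```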
This is needed because in interceptor tests the types often cannot
be changed. The void methods stay for users who are writing tests
where they actually don't care about types. The noop methods
require types to be specified; they are for users who don't care
about the implementation. These represent different levels of
commitment.
This eases the transition of code mocking MethodDescriptor, which
breaks in this release.
If a LoadBalancer2 is passed in, the builder will create ManagedChannelImpl2 instead of ManagedChannelImpl. This allows us to test the LBv2 classes on a large scale.
It is not necessary to synchronize on every call to
getServiceDescriptor() if the descriptor has already been created;
go with the double-checked locking idiom.
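A minimal sketch of the idiom, with newDescriptor() as a hypothetical stand-in for the actual construction:
```java
import io.grpc.ServiceDescriptor;

final class ServiceDescriptorHolder {
  // volatile is essential: it makes the unsynchronized first read safe.
  private static volatile ServiceDescriptor serviceDescriptor;

  static ServiceDescriptor getServiceDescriptor() {
    ServiceDescriptor result = serviceDescriptor;
    if (result == null) {  // First check: no locking once initialized.
      synchronized (ServiceDescriptorHolder.class) {
        result = serviceDescriptor;
        if (result == null) {  // Second check: only one thread creates.
          serviceDescriptor = result = newDescriptor();
        }
      }
    }
    return result;
  }

  private static ServiceDescriptor newDescriptor() {
    throw new UnsupportedOperationException("placeholder");
  }
}
```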
Perhaps by accident, each LoadClient channel got its own executor
pool rather than sharing between all of them. The benchmarks are
currently set up with C++ idioms, creating 64 "channels" per
client. C++ uses one thread per channel, while Java does not,
leading to a threading model mismatch.
ForkJoinPool is the current executor because it has good contention
characteristics. However, FJP is also very sensitive to the
number of actively running tasks. Too many, and the contention
will go up. Too few, and threads will repeatedly go to sleep and
wake up.
This change basically makes the FJP shared for all clients in the
same process. It doesn't try at all to shut it down properly or
be generally useful; it's just a benchmark. Additionally, this
also groups the Netty and OkHttp specific channel options into
helper functions. This means OkHttp benchmarks will also use
the correct executor now.
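A sketch of the sharing pattern; the names are hypothetical:
```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ForkJoinPool;

final class SharedExecutorChannels {
  // One process-wide pool: enough active tasks to keep FJP threads awake,
  // and few enough pools to keep contention down.
  static final ExecutorService SHARED_POOL = new ForkJoinPool();

  static ManagedChannel newChannel(String target) {
    return ManagedChannelBuilder.forTarget(target)
        .executor(SHARED_POOL)  // share, instead of one pool per channel
        .build();
  }
}
```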
A quick test shows QPS going from ~107kqps to 149kqps.
Fixes: #2406
core: adds @Nullable Object getAttachedObject() to ServiceDescriptor
compiler: adds the plumbing necessary to access proto file descriptors
via the reflection service
This doesn't impact test behavior per se, but causes it to produce
less useless log output of the form:
java.lang.IllegalStateException: call was half-closed
at com.google.common.base.Preconditions.checkState(Preconditions.java:174)
at io.grpc.internal.ClientCallImpl.sendMessage(ClientCallImpl.java:380)
at io.grpc.stub.ClientCalls$CallToStreamObserverAdapter.onNext(ClientCalls.java:299)
at io.grpc.benchmarks.driver.LoadClient$AsyncPingPongWorker$1.onNext(LoadClient.java:406)
at io.grpc.benchmarks.driver.LoadClient$AsyncPingPongWorker$1.onNext(LoadClient.java:400)
at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onMessage(ClientCalls.java:382)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1MessageRead.runInContext(ClientCallImpl.java:473)
... 7 more
Fixes #2372
The DefaultHttp2Headers class is a general-purpose Http2Headers
implementation and provides much more functionality than we need in
gRPC. In gRPC, when reading headers off the wire, we only inspect a
handful of them before converting to Metadata.
This commit introduces an Http2Headers implementation that aims for
insertion efficiency, a low memory footprint and fast conversion to
Metadata.
- Header names and values are stored in plain byte[].
- Insertion is O(1), while lookup is now O(n).
- Binary header values are base64 decoded as they are inserted.
- The byte[][] returned by namesAndValues() can directly be used to construct
a new Metadata object.
- For HTTP/2 request headers, the pseudo headers are no longer carried over to
Metadata.
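A simplified sketch of the storage scheme described above; the class and method bodies are illustrative, not the actual gRPC implementation:
```java
import java.util.Arrays;

final class GrpcHttp2HeadersSketch {
  // Names and values interleaved: [name0, value0, name1, value1, ...].
  private byte[][] namesAndValues = new byte[8][];
  private int count;  // number of array slots in use

  // O(1) insertion: append, doubling the backing array when full.
  void add(byte[] name, byte[] value) {
    if (count + 2 > namesAndValues.length) {
      namesAndValues = Arrays.copyOf(namesAndValues, namesAndValues.length * 2);
    }
    namesAndValues[count++] = name;
    namesAndValues[count++] = value;  // binary values are base64-decoded on insertion
  }

  // O(n) lookup: a linear scan over the names, cheap for the handful gRPC reads.
  byte[] get(byte[] name) {
    for (int i = 0; i < count; i += 2) {
      if (Arrays.equals(namesAndValues[i], name)) {
        return namesAndValues[i + 1];
      }
    }
    return null;
  }

  // The interleaved array can be handed directly to Metadata construction.
  byte[][] namesAndValues() {
    return namesAndValues;
  }
}
```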
A microbenchmark aiming to replicate the usage of Http2Headers in NettyClientHandler
and NettyServerHandler shows decent throughput gains when compared to DefaultHttp2Headers.
```
Benchmark                                             Mode  Cnt     Score    Error  Units
InboundHeadersBenchmark.defaultHeaders_clientHandler  avgt   10   283.830 ±  4.063  ns/op
InboundHeadersBenchmark.defaultHeaders_serverHandler  avgt   10  1179.975 ± 21.810  ns/op
InboundHeadersBenchmark.grpcHeaders_clientHandler     avgt   10   190.108 ±  3.510  ns/op
InboundHeadersBenchmark.grpcHeaders_serverHandler     avgt   10   561.426 ±  9.079  ns/op
```
Additionally, the memory footprint is reduced by more than 50%!
```
gRPC Request Headers:    864 bytes
Netty Request Headers:  1728 bytes
gRPC Response Headers:   216 bytes
Netty Response Headers:  528 bytes
```
Furthermore, this change does most of the gRPC groundwork necessary to be able
to cache higher ordered objects in HPACK's dynamic table, as discussed in [1].
[1] https://github.com/grpc/grpc-java/issues/2217