Overview
This is a draft specification of batch CNP requests and responses.
Batch requests can be used to request multiple pieces of content without making multiple connections and CNP round trips.
This addresses the same problem that keep-alive connections or request pipelining would, while still keeping the entire exchange a single CNP round trip in which each endpoint sends or receives its entire side of the transmission at once.
This solution is significantly simpler to implement than those alternatives and solves most of the same problems. The main downside is that all content to be requested must be known ahead of time: loading a page with embedded content means first making a normal request connection for the page and then a batch request connection for all the embedded content that should be loaded immediately, instead of using a single keep-alive connection. However, the number of round trips remains the same as with a pipelined keep-alive connection.
Specification
A batch request would use the keyword `batch` instead of the request intent, the number of batched requests in the `count` parameter, and CNP requests as the data. The `length` parameter is not necessary when `count` is provided in a supported context, such as the `batch` request; however, each sub-request must use the `length` parameter if it has any data.
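The draft does not restate the underlying CNP wire syntax, so the header format in this sketch (a `CNP/<version> <intent> [key=value …]` line) is an illustrative assumption; only the batch framing rules come from this section. Composing a batch request might look like:

```python
# Hypothetical CNP header syntax: "CNP/<version> <intent> [key=value ...]".
# Only the batch framing rules (count parameter, per-sub-request length,
# trailing line feed on every request) come from the draft.

def encode_request(intent: str, params: dict[str, str], data: bytes = b"") -> bytes:
    """Encode a single CNP request; sub-requests with data must carry `length`."""
    if data:
        params = {**params, "length": str(len(data))}
    header = " ".join(["CNP/0.1", intent, *(f"{k}={v}" for k, v in params.items())])
    # every request ends with a line feed, followed by its `length` bytes of data
    return header.encode() + b"\n" + data

def encode_batch(requests: list[bytes]) -> bytes:
    # the batch uses the keyword `batch` instead of a request intent,
    # a `count` parameter, and the concatenated sub-requests as data
    header = f"CNP/0.1 batch count={len(requests)}"
    return header.encode() + b"\n" + b"".join(requests)
```

Note that `encode_batch` carries no `length` parameter of its own: `count` delimits the batch data instead.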
Note that the data of one request in the batch and the next request are not separated by a line feed; a request in the batch therefore does not always start on a new line, but instead begins immediately after the previous request's `length` of data has been processed. All requests in the batch still end with a line feed.
Only one request for each individual host+path pair (i.e. request intent) may be present in the batch request. If multiple requests in the batch share the same intent, it is an `invalid` error.
Requests in the batch may not themselves be batch requests. If any of the sub-requests is a batch request, an `invalid` error should be returned.
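The framing, duplicate-intent, and no-nesting rules above can be sketched as a parser. The header line format (`CNP/<version> <intent> [key=value …]`) is an assumed stand-in for the real CNP syntax, which the draft does not restate:

```python
# Sketch of parsing the sub-requests inside a batch. The header line format
# ("CNP/<version> <intent> [key=value ...]") is an assumption; the framing,
# duplicate-intent, and no-nesting rules come from the draft.

def parse_batch_data(data: bytes, count: int) -> list[tuple[str, dict[str, str], bytes]]:
    requests, pos = [], 0
    for _ in range(count):
        # every request ends with a line feed...
        end = data.index(b"\n", pos)
        _version, intent, *rest = data[pos:end].decode().split(" ")
        params = dict(p.split("=", 1) for p in rest)
        if intent == "batch":
            raise ValueError("invalid: sub-requests may not be batch requests")
        # ...followed by exactly `length` bytes of data, with no separating
        # line feed before the next request
        length = int(params.get("length", "0"))
        requests.append((intent, params, data[end + 1 : end + 1 + length]))
        pos = end + 1 + length
    if len({intent for intent, _, _ in requests}) != count:
        raise ValueError("invalid: duplicate request intent in batch")
    return requests
```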
The response would use the keyword `batch` as the intent, with the `count` parameter set to the number of responses. Each response in the batch must include the `length` parameter, even if the length is zero, and the `intent` parameter set to the intent of the corresponding request in the batch request. Individual responses in the batch are parsed the same way as batch requests.
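Mirroring the request side, a batch response might be composed as follows. The per-response status keyword (`ok` here) and the header syntax are assumptions; the mandatory `length` (even when zero) and the `intent` parameter come from this section:

```python
# Sketch of encoding a batch response. The "ok" status keyword and the
# "CNP/<version> ..." header syntax are assumptions; the mandatory `length`
# and `intent` parameters come from the draft.

def encode_batch_response(responses: list[tuple[str, bytes]]) -> bytes:
    """`responses` pairs each request intent with its response body."""
    parts = []
    for intent, body in responses:
        # `length` is required even for empty bodies; `intent` ties the
        # response back to the matching request in the batch
        header = f"CNP/0.1 ok intent={intent} length={len(body)}"
        parts.append(header.encode() + b"\n" + body)
    header = f"CNP/0.1 batch count={len(responses)}"
    return header.encode() + b"\n" + b"".join(parts)
```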
Batch responses do not have to come in the same order as the requests and may be processed in arbitrary order, including concurrently. The client must not rely on the requests being executed in order, and thus should not issue non-idempotent batch requests that interact with other resources requested in the same batch.
A batch response is not required to contain responses to all requests in the batch, but it is recommended to do so. Any omitted responses can be re-requested individually to ensure the server answers them.
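On the client side, these rules suggest a small correlation step: match responses to requests by the `intent` parameter regardless of order, and collect any omitted intents for individual re-requests. A sketch, with illustrative names:

```python
# Correlate out-of-order batch responses with their requests via the
# mandatory `intent` parameter, and report any the server omitted.
# Function and variable names here are illustrative, not from the draft.

def correlate(requested_intents: list[str], responses):
    """`responses` holds (params, body) pairs in arbitrary order."""
    by_intent = {params["intent"]: body for params, body in responses}
    answered = {i: by_intent[i] for i in requested_intents if i in by_intent}
    # intents with no response can be re-requested individually
    missing = [i for i in requested_intents if i not in by_intent]
    return answered, missing
```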
Possible changes
The CNP version field is unnecessary in messages in the batch and could be elided. However, this may require special handling by parsers, so it may be simpler to just leave it there.
If a message in the batch has a non-zero length body, a line feed could be appended to the end. This would produce more human-readable requests and responses, but may be slightly more complicated to parse, since the line feed character would have to be consumed before the next message is parsed.
Batch requests with a body could be forbidden. This would solve the problem of non-idempotent requests conflicting with each other, but the same can happen with multiple concurrent connections anyway.
Force the response to have the same number of messages as the request, with any omitted responses being `error` (perhaps with a new `reason` parameter value). This risks discarding mostly complete responses, though.

Force the batch response to retain the ordering of requests. This may make responses easier to parse, but would introduce head-of-line blocking. The `intent` parameter would still be required, since it ensures that the response can be understood even without the context of the request body (i.e. the batch of requests).