IESG Narrative Minutes
Narrative Minutes of the IESG Teleconference on 2017-05-11. These are not an official record of the meeting.
Narrative scribe: John Leslie and Ignas Bagdonas (The scribe was sometimes uncertain who was speaking.)
Corrections from: (none)
1. Administrivia
2. Protocol actions
2.1 WG submissions
2.1.1 New items
Telechat:
Telechat:
Telechat:
2.1.2 Returning items
Telechat:
2.2 Individual submissions
2.2.1 New items
2.2.2 Returning items
2.3 Status changes
2.3.1 New items
2.3.2 Returning items
3. Document actions
3.1 WG submissions
3.1.1 New items
Telechat:
Telechat:
Telechat:
3.1.2 Returning items
3.2 Individual submissions via AD
3.2.1 New items
3.2.2 Returning items
3.3 Status changes
3.3.1 New items
3.3.2 Returning items
3.4 IRTF and Independent Submission stream documents
3.4.1 New items
Telechat:
3.4.2 Returning items
4. Working Group Actions
4.1 WG Creation
4.1.1 Proposed for IETF Review
4.1.2 Proposed for Approval
4.2 WG Rechartering
4.2.1 Under evaluation for IETF Review
4.2.2 Proposed for Approval
5. IAB News We can use
6. Management Issues
Telechat:
Telechat:
Telechat:
Telechat:
Telechat:
7. Working Group News
7a. Other Business
1053 EDT Adjourned
(at 2017-05-11 06:00:03 PDT)
draft-ietf-core-coap-tcp-tls
I have many of the same concerns as others, but see no need to hold a DISCUSS myself.
1) This draft removes the reliability and ordering features of CoAP when used over reliable transports, under the assumption that the transport will provide them. But the draft also includes the assumption that CoAP proxies exist. This has the potential for creating a problem, since the transport can only guarantee reliable delivery and ordering to the next hop. Once you have a proxy in play, you lose that guarantee end to end. This is further complicated because this draft contemplates cross-transport proxies, where one side may be over WebSocket (and I assume might be over TCP) and the other side over UDP. If the client sends via TCP but a proxy changes it to UDP, the client has no way to specify the reliability properties to be used on the UDP connection. If one imagines a client that uses UDP to a forward proxy, which speaks TCP to a reverse proxy, which then switches back to UDP, any reliability properties specified by the client will get lost. Also, a proxy can potentially reorder messages, even if it uses TCP on both sides. If one leaves ordering to the transport, then one needs to add rules about proxies maintaining that order.

2) It seems problematic to encode the transport choice in the URI scheme. Section 7 says "They are hosted in distinct namespaces because each URI scheme implies a distinct origin server." IIUC, this means any given resource can only be reached over a specific transport. That seems to break the idea of cross-transport proxies as discussed in section 7. It also does not seem to fit with a primary motivation for this draft: one might want to use TCP because of local NAT/FW issues. But if there is a resource with a "coap" scheme, I cannot switch to TCP when I'm behind a problematic middlebox and have an expectation of reaching the same resource.
Substantive:

3.2: I agree with Adam that this length scheme seems very complex for the return.

3.3: Since the initiator can start sending messages before receiving a CSM from the responder, how long should the initiator wait for a CSM before bailing?

3.4: Can you offer any guidance about how often to send keep-alives? I note that these keepalives are not necessarily bi-directional. Aren't there some NAT/FW cases where bi-directional traffic is needed to keep bindings from timing out? This and other places explicitly mention that in-flight messages may be lost when the transport is closed or reset. This creates uncertainty about whether such messages have been processed or not. Is that really okay?

4: After the discussion resulting from Mark's Art-Art review, I expected to see more emphasis on WebSocket being intended for browser-based clients. There are a couple of in-passing mentions of browser clients buried in the text; I would have expected something more up front.

4.2: Is it really worth making the framing code behave differently for WebSocket than for TCP?

5.3: Do I understand correctly that once an option is established, it cannot be removed unless replaced? (Short of tearing down the connection and starting over, anyway.)

7.2: The text mentions 443 as a default port, but really seems to make 5684 the default. If 443 is really a default, then this needs discussion about why, and why it's okay to squat on HTTPS. The text about whether ALPN is required is confusing. Why not just require ALPN and move on, rather than special-casing it by port choice? (There seems to be some circular logic about requiring 5685 to support clients that don't do ALPN, then saying clients MUST do ALPN unless they are using port 5685.)

7.3: I agree with Adam's DISCUSS comment. And even if people decide that the well-known bit can be specified in CORE, I think it does future users of well-known URIs for ws a disservice to make them dig through this spec to find the update to 6455.
It would be better to pull that into a separate draft. That's also a material addition post IETF last call, so we should consider repeating the LC.

10.2: Is the registration policy "analogous to" that of [RFC7252] S12.2, or "identical to" it? If the answer is not "identical", then the policy should be detailed here.

Editorial:

Figures 7 and 8: "Payload (if any)" - Can we assume that if one uses either extended length format, one has a payload?

3.3: Is the guidance about what errors to return if you don't implement a server any different here than for UDP?

4.3 and 4.4 seem to primarily repeat details that are the same for WS as for TCP, even though the introduction to the WS part says that it won't do that :-)

5.3: "One CSM MUST be sent by both endpoints...": s/both/each

7.6: The "updates" in this section are confusing. I understand this to mean that the procedures for TCP and WS are identical to those for UDP except for the mentioned steps. But language of the form "This step from [RFC7252] is updated to:" makes it sound like this intends to actually change the language in 7252 to this new language. If the latter, then that effectively removes UDP support from 7252 as updated. This could easily be fixed by changing it to something to the effect of "When using TCP, this step changes to ..."

Appendix A: Why is this an appendix? Updates to a standards track RFC seem to warrant a more prominent position in the draft.
I strongly agree with Adam's point about the default port for the coaps+tcp URI scheme. Also, the following comments from IANA should be looked at:

While we have an approval from the well-known URI expert, we're still waiting for a response from the expert for ALPN Protocol IDs. Also, Graham Klyne, who's traveling, sent the response below to our request for a URI scheme review. When we register this URI scheme, can/should we add the note Graham proposes to the registry, and call it an "IESG Note"? thanks, Amanda Baber, Lead IANA Services Specialist

==

I am concerned that these scheme registrations (with multiple schemes for the same resource accessed using a different protocol) present an "antipattern" that was controversial when a similar proposal was raised about 18 months ago; e.g. see this earlier comment from Roy Fielding: https://mailarchive.ietf.org/arch/msg/uri-review/ZXTfNQ7PDxHBSccrqrrbGH5N-Ko , specifically this: [[ A URI scheme should define what it names and how that naming maps to the URI syntax. There is nothing wrong with using separate schemes for different transports if those transports are essential parts of the name (e.g., if something named Fred at TCP:80 is different from something named Fred at UDP:89898). [...] In short, I think you need to better document what each URI scheme means from the perspective of a server and then what the client is expected to do with such a URI. ]]

I was hoping that the URI-review list would pick up on this and provide some further discussion. I've seen a couple of private messages that seem to express similar concerns. In summary, I would have responded on the URI review list as an individual if I had been in a position to do so. But if this comes back to me as a registration request that has passed WG last call, then I see no grounds for refusal, even though I think the design is misguided (or at least not adequately explained in the registration templates).
In this situation, I might feel inclined to request adding an "IESG Note" to the registration along the following lines (if this is deemed acceptable for a request that has passed an IETF last-call review): [[ The CoAP protocol registers different URI schemes for accessing CoAP resources via different protocols. This runs counter to the principle that a URI identifies a resource, and that multiple URIs for identifying the same resource should be avoided. URIs should be used to hide rather than expose the purely technical mechanisms used when accessing a resource. ]]
I share a lot of the concerns raised in the DISCUSSes and I look forward to their resolution.
Agree with the concerns raised in the DISCUSSes, looking forward to their resolution.
Watching all discussions
I agree with EKR's technical comment that MTI cipher suites need to be defined.
After reading Mark Nottingham's review, I'm persuaded we should at least discuss the relationship of this work to the parallel work in HTTP.
Document: draft-ietf-core-coap-tcp-tls-08.txt

TECHNICAL

You need to specify MTI cipher suites. I don't think that the ones you specified in 7925 are very useful for TLS. Is this really what you want?

S 3.2. Having the lengths offset by 13 bytes is, IMO, pretty silly. I realize it avoids duplication, but it also makes the packets hard to read for not much value. As a practical matter, it expands the 1-byte length for the range 256-268, for a savings of less than .5% even on those packets and on average far less.

S 4.1. "The WebSocket client MUST include the subprotocol name "coap" in the list of protocols, which indicates support for the protocol defined in this document. Any later, incompatible versions of CoAP or CoAP over WebSockets will use a different subprotocol name." This doesn't make much sense, because you are willing to have incompatible protocols for TCP, where you use CSM to distinguish them, and you do the same thing with ALPN.

S 5.5. These release semantics seem quite problematic. In particular, when people want an orderly close, they typically want the other side to process all the outstanding requests and then return them, but this doesn't seem to do that (note that just because the responses need to be *delivered* in order doesn't mean they need to be generated in order). So, for instance, say I have the following sequence of events:

   C                 S                 DB
   GET /a    ->
                     Request A  ->
   Release   ->
   FIN       ->
                              <-  Response A

It seems like the only difference between Abort and Release is that the sender is saying "don't expect that I processed any of your messages", but in at least a lot of scenarios (e.g., where the initiator is basically just a client), this doesn't actually tell the server much about sequence because the responses aren't ordered wrt Release AFAICT.

"... Release message by closing the TCP/TLS connection. Messages may be in flight when the sender decides to send a Release message. The general expectation is that these will still be processed."
This is not really useful language.

"For CoAP over reliable transports, the recipient rejects such messages by sending an Abort message and otherwise ignoring the message. No specific option has been defined for the Abort message in this case, as the details are best left to a diagnostic payload."

I don't understand this text. Abort seems to mean "I'm done", but then how am I ignoring the message?

S 6. I found this section pretty confusing. In 7959, when M=0 you need to stay *under* the block boundary, but here you say:

"In descriptive usage, a BERT Option is interpreted in the same way as the equivalent Option with SZX == 6, except that the payload is also allowed to contain a multiple of 1024 bytes (non-final BERT block) or more than 1024 bytes (final BERT block)."

And your examples pretty clearly show it being >> 1024. What's the reasoning here?

"In control usage, a BERT option is interpreted in the same way as the equivalent Option with SZX == 6, except that it also indicates the capability to process BERT blocks."

But:

"Block-wise Transfer Option. If a Max-Message-Size Option is indicated with a value that is greater than 1152 (in the same or a different CSM message), the Block-wise Transfer Option also indicates support for BERT (see Section 6). Subsequently, if the Max-Message-"

Is this an instruction to set the BTO to be 7? Or redundancy?

EDITORIAL

S 3.2. "Length (Len): 4-bit unsigned integer. A value between 0 and 12 directly indicates the length of the message in bytes starting" -- I think you want to say "0 and 12 inclusive".

S 5.3.1. "These are not default values for the option, as defined in Section 5.4.4 in [RFC7252]. A default value would mean that an empty Capabilities and Settings message would result in the option being set to its default value." This is pretty confusing text.
I take it that it means that if the base values of both A and B are 0, then:

   Start       // A=0, B=0
   CSM[A=1]    // A=1, B=0
   CSM[B=2]    // A=1, B=2

Whereas if these were default values, then this would be:

   Start       // A=0, B=0
   CSM[A=1]    // A=1, B=0
   CSM[B=2]    // A=0, B=2   <- A resets to default

If that's so, perhaps you could say: "These are not default values for the option, as defined in Section 5.4.4 in [RFC7252], because default values apply on a per-message basis and thus reset when the value is not present in a given CSM."
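The offset length encoding criticized in the S 3.2 comments above can be reduced to a short sketch. This is a minimal illustration of the Len-nibble rules as described in the draft (values 0-12 encode the length directly; 13, 14, and 15 select 8-, 16-, and 32-bit extended lengths offset by 13, 269, and 65805 respectively), not a full message codec:

```python
def encode_len(n: int):
    """Return (len_nibble, extended_length_bytes) for message length n."""
    if n < 13:
        return n, b""                              # direct encoding
    if n < 269:
        return 13, (n - 13).to_bytes(1, "big")     # 8-bit extended, offset 13
    if n < 65805:
        return 14, (n - 269).to_bytes(2, "big")    # 16-bit extended, offset 269
    return 15, (n - 65805).to_bytes(4, "big")      # 32-bit extended, offset 65805

def decode_len(nibble: int, ext: bytes) -> int:
    if nibble < 13:
        return nibble
    offset = {13: 13, 14: 269, 15: 65805}[nibble]
    return int.from_bytes(ext, "big") + offset

# EKR's point: the offsets let lengths 256-268 still fit the one-byte
# extended form -- a one-byte saving on only a narrow band of sizes.
assert encode_len(268) == (13, b"\xff")
assert decode_len(*encode_len(1024)) == 1024
```

The asserts at the end show the narrow band the offsets buy: 268 still fits the 8-bit extended form, which is the <0.5% saving the review questions.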
ISSUE 1: WebSockets and .well-known - Part of the document is outside the scope of the charter of the WG which requested its publication

While I understand that this document requires a WebSockets mechanism for .well-known, and that such a mechanism doesn’t yet exist, it seems pretty far out of scope for the CORE working group to take on defining this itself (unless I missed something in its charter, which is entirely possible: it’s quite long). Specifically, I fear that this venue is unlikely to bring such a change to the attention of those people best positioned to comment on whether .well-known is appropriate for WebSockets. Even if this is in scope for CORE, it really needs to be its own document. If some future document comes along at a later point and wants to make use of its own .well-known path with WebSockets, it would be really quite strange to require it to reference this document in describing .well-known for WS.

==================================================

ISSUE 2: Assignment of port 443 as default - Widespread deployment would be damaging to the Internet or an enterprise network for reasons of congestion control, scalability, or the like.

I'd like to thank the authors for helping me to understand the intention with the use of port 443 more clearly. Based on their clarifications, I need to move my issue about assigning a default of port 443 to coaps+tcp from my Comment into the Discuss, as it does have implications for the Internet at large that will have long-term damaging effects. The rationale being offered for using the already-assigned port 443 as a default is that it tends to go through firewalls that other ports may not, and that doing so is fine because ALPN makes it possible. These arguments, if we accept them, are manifestly true for all future TLS-using protocols.
Allowing CoAP to re-use an assigned port on this basis establishes precedent for pretty much all future protocols to do so, effectively moving the protocol demux point for future protocols from port numbers to ALPN IDs (all over port 443). It is hard to imagine an outcome *other* *than* firewall manufacturers starting to whitelist desired ALPN IDs, which effectively ossifies the available set of IDs to whatever is defined at that moment, destroying the future utility of the mechanism. There are other issues having to do with software architecture, protocol demultiplexing in user space rather than kernel space, and operational considerations that come into play as well, but they don't technically fall under discuss criteria.
General — this is a very bespoke approach to what could have been mostly solved with a single four-byte “length” header; it is complicated on the wire, and in implementation; and the format variations among CoAP over UDP, coap+tls, and coap+ws are going to make gateways much harder to implement and less efficient (as they will necessarily have to disassemble messages and rebuild them to change between formats). The protocol itself mentions gateways in several places, but does not discuss how they are expected to map among the various flavors of CoAP defined in this document. Some of the changes seem unnecessary, but it could be that I’m missing the motivation for them. Ideally, the introduction would work harder at explaining why CoAP over these transports is as different from CoAP over UDP as it is, focusing in particular on why the complexity of having three syntactically incompatible headers is justified by the benefits provided by such variations. Additionally, it’s not clear from the introduction what the motivation for using the mechanisms in this document is as compared to the techniques described in section 10 (and its subsections) of RFC 7252. With the exception of subscribing to resource state (which could be added), it seems that such an approach is significantly easier to implement and more clearly defined than what is in this document; and it appears to provide the combined benefits of all four transports discussed in this document. My concern here is that an explosion of transport options makes it less likely that a client and server can find two in common: the limit of the probability of two implementations having a transport in common as the number of transports approaches infinity is zero. 
Due to this likely decrease in interoperability, I’d expect to see some pretty powerful motivation in here for defining a third, fourth, fifth, and sixth way to carry CoAP when only TCP is available (I count RFC 7252 http and https as the first and second ways in this accounting). I’m also a bit puzzled that CoAP already has an inherent mechanism for blocking messages off into chunks, which this document circumvents for TCP connections (by allowing Max-Message-Size to be increased), and then is forced to offer remedies for the resultant head-of-line blocking issues. If you didn’t introduce this feature, messages with a two-byte token add six bytes of overhead for every 1024 bytes of content — less than 0.6% size inflation. It seems like a lot of complicated machinery — which has a built-in foot-gun that you have to warn people about misusing — for a very tiny gain. I know it’s relatively late in the process, but if these trade-offs haven't had a lot of discussion yet, it’s probably worth at least giving them some additional thought. I’ll note that the entire BERT mechanism seems to fall into the same trap of adding extra complexity for virtually nonexistent savings. CoAP headers are, by design, tiny. It seems like a serious over-optimization to try to eliminate them in this fashion. In particular, you’re making the actual implementation code larger to save a trivial number of bits on the wire; I was under the impression that many of the implementation environments CoAP is intended for had some serious on-chip restrictions that would point away from this kind of additional complexity. Specific comments follow. Section 3.3, paragraph 3 says that an initiator may send messages prior to receiving the remote side’s CSM, even though the message may be larger than would be allowed by that CSM. What should the recipient of an oversized message do in this case? 
In fact, I don’t see in here what a recipient of a message larger than it allowed for in its CSM is supposed to do in response at *any* stage of the connection. Is it an error? If so, how do you indicate it? Or is the Max-Message-Size option just a suggestion for the other side? This definitely needs clarification. (Aside — it seems odd and somewhat backwards that TCP connections are provided an affordance for fine-grained control over message sizes, while UDP communications are not.) Section 4.4 has a prohibition against using WebSockets keepalives in favor of using CoAP ping/pong. Section 3.4 has no similar prohibition against TCP keepalives, while the rationale would seem to be identical. Is this asymmetry intentional? (I’ll also note that the presence of keepalive mechanisms in both TCP and WebSockets would seem to make the addition of new CoAP primitives for the same purpose unnecessary, but I suspect this has already been debated). Section 5 and its subsections define a new set of message types, presumably for use only on connection-oriented protocols, although this is only implied, and never stated. For example, some implementors may see CSM, Ping, and Pong as potentially useful in UDP; and, finding no prohibition in this document against using them, decide to give it a go. Is that intended? If not, I strongly suggest an explicit prohibition against using these in UDP contexts. Section 5.3.2 says that implementations supporting block-wise transfers SHOULD indicate the Block-wise Transfer Option. I can't figure out why this is anything other than a "MUST". It seems odd that this document would define a way to communicate this, and then choose to leave the communicated options as “YES” and “YOUR GUESS IS AS GOOD AS MINE” rather than the simpler and more useful “YES” and “NO”. 
I find the described operation of the Custody Option in the operation of Ping and Pong to be somewhat problematic: it allows the Pong sender to unilaterally decide to set the Custody Option, and consequently quarantine the Pong for an arbitrary amount of time while it processes other operations. This seems impossible to distinguish from a failure-due-to-timeout from the perspective of the Ping sender. Why not limit this behavior only to Ping messages that include the Custody Option? [Moved from Comment to Discuss: I find the unmotivated definition of the default port for “coaps+tcp” to 443 — a port that is already assigned to https — to be surprising, to put it mildly. This definitely needs motivating text, and I suspect it's actually wrong.] I am similarly perplexed by the hard-coded “must do ALPN *unless* the designated port takes the magical value 5684” behavior. I don’t think I’ve ever seen a protocol that has such variation based on a hard-coded port number, and it seems unlikely to be deployed correctly (I’m imagining the frustration of: “I changed both the server and the client configuration from the default port of 5684 to 49152, and it just stopped working. Like, literally the *only* way it works is on port 5684. I've checked firewall settings everywhere and don't see any special handling for that port -- I just can't figure this out, and it's driving me crazy.”). Given the nearly universal availability of ALPN in pretty much all modern TLS libraries, it seems much cleaner to just require ALPN support and call it done. Or *don’t* require ALPN at all and call it done. But *changing* protocol behavior based on magic port numbers seems like it’s going to cause a lot of operational heartburn. The final paragraph of section 8.1 is very confusing, making it somewhat unclear which of the three modes must be implemented on a CoAP client, and which must be implemented on a CoAP server.
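The port-conditional ALPN requirement being criticized here reduces to a sketch like the following (5684 and the "coap" ALPN ID are the values under discussion; the helper names are hypothetical, for illustration only):

```python
import ssl

NO_ALPN_PORT = 5684  # the draft's designated "ALPN not required" port

def client_requires_alpn(port: int) -> bool:
    # The special case at issue: ALPN is mandatory except when the
    # client is configured with the one magic port number.
    return port != NO_ALPN_PORT

def make_client_context(port: int) -> ssl.SSLContext:
    ctx = ssl.create_default_context()
    if client_requires_alpn(port):
        ctx.set_alpn_protocols(["coap"])
    return ctx

# Adam's scenario: moving both ends from 5684 to any other port
# silently flips the ALPN requirement on.
assert client_requires_alpn(5684) is False
assert client_requires_alpn(49152) is True
```

The conditional is exactly the kind of port-keyed behavior change the review argues will misdeploy; requiring ALPN unconditionally would delete the branch entirely.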
Read naïvely, this sounds like clients are required to do only one (but one of their choosing) of these three, while servers are required to also do only one (again, of their choosing). It seems that the chance of finding devices that could interoperate under such circumstances is going to be relatively low: to work together, you would have to find a client and a server that happened to make the same implementation choice among these three. What I’m used to in these kinds of cases is: (a) server must implement all, client can choose to implement only one (or more), (b) client must implement all, server can choose to implement only one (or more), or (c) client and server must implement a specifically named lowest-common denominator, and can negotiate up from there. Pretty much anything else (aside from strange “everyone must implement two of three” schemes) will end up with interop issues. Although the document clearly expects the use of gateways and proxies between these connection-oriented usages of CoAP and UDP-based CoAP, Appendix A seems to omit discussion or consideration of how this gatewaying can be performed. The following list of problems is illustrative of this larger issue, but likely not exhaustive. (I'll note that all of these issues evaporate if you move to a simpler scheme that merely frames otherwise unmodified UDP CoAP messages) Section A.1 does not indicate what gateways are supposed to do with out-of-order notifications. The TCP side requires these to be delivered in-order; so, does this mean that gateways observing a gap in sequence numbers need to quarantine the newly received message so that it can deliver the missing one first? Or does it deliver the newly-received message and then discard the “stale” one when it arrives? I don’t think that leaving this up to implementations is particularly advisable. Section A.3 is a bit more worrisome.
I understand the desired optimization here, but where you reduce traffic in one direction, you run the risk of exploding it in the other. For example, consider a coap+tcp client connecting to a gateway that communicates with a CoAP-over-UDP server. When that client wants to check the health of its observations, it can send a Ping and receive a Pong that confirms that they are all alive and well. In order to be able to send a Pong that *means* “all your observations are alive and well,” the gateway has to verify that all the observations are alive and well. A simple implementation of a gateway will likely check on each observed resource individually when it gets a Ping, and then send a Pong after it hears back about all of them. So, as a client, I can set up, let’s say, two dozen observations through this gateway. Then, with each Ping I send, the gateway sends two dozen checks towards the server. This kind of message amplification attack is an awesome way to DoS both the gateway and the server. I believe the document needs a treatment of how UDP/TCP gateways handle notification health checks, along with techniques for mitigating this specific attack. Section A.4 talks about the rather different ways of dealing with unsubscribing from a resource. Presumably, gateways that get a reset to a notification are expected to synthesize a new GET to deregister on behalf of the client? Or is it okay if they just pass along the reset, and expect the server to know that it means the same thing as a deregistration? Without explicit guidance here, I expect server and gateway implementors to make different choices and end up with a lack of interop. From i-d nits (this appears to be in reference to Figure 1): ** There is 1 instance of too long lines in the document, the longest one being 3 characters in excess of 72.
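One possible shape for the requested mitigation of the Ping-driven amplification above is for the gateway to rate-limit upstream health checks and answer subsequent Pings from cached state. This is a hypothetical sketch (nothing in the draft specifies it; all names and the interval are invented for illustration):

```python
import time

class GatewayHealth:
    """Hypothetical health cache for a CoAP TCP/UDP gateway."""

    def __init__(self, min_check_interval: float = 30.0, clock=time.monotonic):
        self.min_check_interval = min_check_interval
        self.clock = clock
        self.last_check = float("-inf")
        self.all_healthy = True

    def on_ping(self, check_observations) -> str:
        now = self.clock()
        if now - self.last_check >= self.min_check_interval:
            # At most one round of upstream checks per interval, so N
            # client Pings cannot fan out into N * (number of observed
            # resources) UDP-side checks.
            self.all_healthy = check_observations()
            self.last_check = now
        return "Pong" if self.all_healthy else "Pong+error"

checks = []
gw = GatewayHealth(min_check_interval=30.0, clock=lambda: 0.0)
gw.on_ping(lambda: checks.append(1) or True)
gw.on_ping(lambda: checks.append(1) or True)
assert len(checks) == 1  # second Ping answered from cache
```

The point of the sketch is only that the Ping-to-upstream-checks fan-out needs an explicit damping mechanism somewhere; where that guidance belongs is exactly the gap the review identifies.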
1) My general concern is that, while I don't necessarily want to block the proposed format, I would like to understand further before publication why this approach was chosen. Similar to Ben's DISCUSS, I don't understand why the format was chosen so differently. You could just use the format (plus a new length option) as defined for UDP and just never have any retransmission or reordering, but be more flexible on the lower-layer transport to use. However, if you actually prefer a new format (to save space), then that sounds like a new version to me, while the draft says: "CoAP is defined in [RFC7252] with a version number of 1. At this time, there is no known reason to support version numbers different from 1." However, in this case it could even have made sense to define a new format/version that could be used for both underlying protocols and either have a length option or a message type and IP option. Further, I also don't understand why on the other hand the TCP CoAP framing is re-used for websockets, because websockets already provides message framing and a length field. Also in line with Ben's DISCUSS, the use of the Block option for CoAP/TCP is not very clear to me. The draft says: "a UDP-to-TCP gateway may simply not have the context to convert a message with a Block Option into the equivalent exchange without any use of a Block Option (it would need to convert the entire blockwise exchange from start to end into a single exchange)" However, given that the CoAP/TCP and CoAP/UDP formats are so different, it's in any case a more complex conversion than just sticking another transport underneath. The argument for HOL blocking due to e.g. upgrades is also not clear to me because you should probably better just use a different TCP connection for that, as it really seems to be a different use case. For me this draft looks like you are defining basically a new protocol version and not just CoAP over TCP.
Again, I don't necessarily want to block this but I would like to understand why the proposed approach was chosen. 2) Comments from the tsv-art review need to be addressed as well (thanks to Yoshi Nishida for the review!). Here is the review text for your convenience: "Summary: This document is well-written. It is almost ready to be published as a PS draft once the following points are addressed. 1: It is not clear how the protocol reacts to errors from transport layers (e.g. connection failure). Will the protocol just inform apps of the events and the app will decide what to do, or will the protocol itself do something? 2: There will be situations where the app layer is freezing while the transport layer is still working. Since transport layers cannot detect this type of failure, there should be some mechanisms for it somewhere in the protocol or in the app layer. The doc needs to address this point. For example, what will happen when a PONG message is not returned for a certain amount of time? 3: Since this draft defines a new SZX value, I think the doc needs to update RFC7959. This point should be clarified more in the doc." 3) And in line with Yoshi's comment, I don't think this part in section 3.3 is well specified; especially I don't understand how these two things fit together: "To avoid unnecessary latency, a Connection Initiator MAY send additional messages without waiting to receive the Connection Acceptor's CSM; ..." and "Endpoints MUST treat a missing or invalid CSM as a connection error and abort the connection (see Section 5.6)." Also, how long should I wait until I abort the connection?
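On the "how long should I wait until I abort" question: the draft gives no number, so a conforming initiator presumably needs something like the following. The 30-second figure and all names here are arbitrary placeholders, not from the draft:

```python
import time

CSM_WAIT = 30.0  # hypothetical; the draft specifies no value

class Initiator:
    """Sketch of the unstated CSM-wait logic a client would need."""

    def __init__(self, timeout: float = CSM_WAIT, clock=time.monotonic):
        self.deadline = clock() + timeout
        self.clock = clock
        self.csm_received = False
        self.aborted = False

    def on_message(self, kind: str):
        # "Endpoints MUST treat a missing or invalid CSM as a connection
        # error and abort" -- so the first inbound message must be a CSM.
        if not self.csm_received and kind != "CSM":
            self.aborted = True
        elif kind == "CSM":
            self.csm_received = True

    def tick(self):
        # Without a timer like this, a "missing" CSM on an otherwise
        # silent connection is never detected at all.
        if not self.csm_received and self.clock() > self.deadline:
            self.aborted = True

t = [0.0]
i = Initiator(timeout=30.0, clock=lambda: t[0])
i.tick(); assert not i.aborted
t[0] = 31.0
i.tick(); assert i.aborted
```

The sketch just makes the question concrete: the MUST-abort requirement is only enforceable if some deadline exists, and the draft never says what it is.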
draft-ietf-6man-rfc1981bis
I'm watching the many e-mail threads on IESG ballot positions for this draft, but don't have anything to add.
The OpsDir review (https://datatracker.ietf.org/doc/review-ietf-6man-rfc1981bis-04-opsdir-lc-hares-2017-03-04/ ) raises an interesting point, which had not occurred to me. If the MTU for a path is 1440 bytes, and ND/RA suddenly says that the interface MTU is only 1400 bytes, what should implementations do? I'd expect something like decreasing the path MTU (for all paths) to min(new link MTU, current path MTU), but it isn't (AFAICT) specified. It's entirely possible that this is a: already covered and / or b: covered at a different layer / different protocol; happy to be hit with a clue bat...
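For concreteness, the expected-but-unspecified behavior described above would look something like this (a hypothetical sketch of clamping a per-destination PMTU cache; nothing here is from the draft):

```python
def on_link_mtu_decrease(pmtu_cache: dict, new_link_mtu: int) -> dict:
    # Clamp every cached path MTU that uses this interface to
    # min(new link MTU, current path MTU), as suggested above.
    for dest in pmtu_cache:
        pmtu_cache[dest] = min(new_link_mtu, pmtu_cache[dest])
    return pmtu_cache

cache = {"2001:db8::1": 1440, "2001:db8::2": 1380}
on_link_mtu_decrease(cache, 1400)
assert cache == {"2001:db8::1": 1400, "2001:db8::2": 1380}
```

Entries already below the new link MTU are untouched; only paths whose estimate now exceeds the interface MTU get reduced.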
Please also see the OpsDir review.... I sent email about this to the authors on Feb 23rd - I seem to still have many of the same questions...

Comments:

1: Sec 1: "Path MTU Discovery relies on such messages to determine the MTU of the path." -- it is unclear which "such" refers to. Perhaps s/such/ICMPv6/ (or PTB).

2: Sec 3: "Upon receipt of such a message, the source node reduces its assumed PMTU for the path based on the MTU of the constricting hop as reported in the Packet Too Big message" -- this says that it reduces it *for the path*. But (as somewhat alluded to later in the draft) the node doesn't know what the path *is* -- it can decrease for the destination, or flow, or even interface, but (unless it is strict source routing) it doesn't control or really know the path (see also #4).

3: Sec 4: "The recommended setting for this timer is twice its minimum value (10 minutes)." - as above. This was from 1996 - were these metrics discussed at all during the -bis? I suspect that the average flow is much shorter these days (more web traffic, fatter pipes, etc) and so a flow of 10 minutes seems really long (to me at least).

4: Sec 5.2: "The packetization layers must be notified about decreases in the PMTU. Any packetization layer instance (for example, a TCP connection) that is actively using the path must be notified if the PMTU estimate is decreased. Note: even if the Packet Too Big message contains an Original Packet Header that refers to a UDP packet, the TCP layer must be notified if any of its connections use the given path." - this is related to #2 -- I don't know *which* path my packets take - once I launch them into the void, they may be routed purely based upon destination IP address, or they may be hashed based upon some set of header fields to a particular ECMP link or LSP. Once packets hit a load balancer, it is probably even *likely* that the UDP and TCP packets end up on different things.
So, if I get a PTB from a router somewhere, I can probably guess that other packets to the same destination address will also follow that path, but I cannot know that for sure. I'm fine to decrease MTU towards that destination IP, but is that what this is suggesting? If so, please say that. If not, please let me know what I should do. The above is even more tricky / fun when I'm using flow id as the flow identifier -- if I get a PTB for flow 0x1234, what do I do? 5: Sec 5.3: "Once a minute, a timer-driven procedure runs through all cached PMTU values, and for each PMTU whose timestamp is not "reserved" and is older than the timeout interval ...". Please consider providing clarifications here. The wording implies that I should set a timer to fire on the minute, and trigger the behavior. If all of the (NTP synced!) machines in my datacenter do this, and all try send bigger packets (on 1/10th of long flows) their first hop router will get many, many over-sized packets and it will severely rate-limit the PTBs. Nits (Some of these are purely academic.) I understand that you are trying to limit the changes, so feel free to ignore these: 1: "A node sending packets much smaller than the Path MTU allows is wasting network resources and probably getting suboptimal throughput." - the "much" confuses me. If I'm using anything less than the MTU I'm wasting network resources and getting suboptimal throughput - I might not care, but if (used MTU) < (path MTU) I'm wasting resources. 2: "Nodes implementing Path MTU Discovery and sending packets larger than the IPv6 minimum link MTU are susceptible to problematic connectivity if ICMPv6 [ICMPv6] messages are blocked or not transmitted." The "implementing Path MTU Discovery and" seems redundant. ALL nodes sending packets larger than minimum MTU are "susceptible to problematic connectivity if ICMPv6 [ICMPv6] messages are blocked or not transmitted.". 
I get what you are trying to say, but my OCD tendencies would not allow me to ignore this... 3: "In the case of multipath routing (e.g., Equal Cost Multipath Routing, ECMP),"- this is vague / confusing -- (Equal Cost Multipath Routing, ECMP) makes it sound like either ECMP is an acronym for Equal Cost Multipath Routing, or that ECMP is something different to Equal Cost Multipath Routing. I'd suggest just dropping the "ECMP" (or, "Equal Cost Multipath (ECMP) routing", but that seems clumsy)
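One way to address the synchronized-sweep concern in comment 5 would be to jitter the aging timer. A hypothetical sketch (the interval and jitter values are illustrative only, not from the draft):

```python
import random

def next_aging_sweep(base_interval: float = 60.0, jitter_frac: float = 0.25) -> float:
    """Return the delay until the next PMTU aging sweep, randomized so
    that NTP-synced hosts do not all probe with larger packets (and
    elicit rate-limited PTB messages) at the same instant."""
    return base_interval * (1.0 + random.uniform(-jitter_frac, jitter_frac))
```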
I'm putting this point in as a DISCUSS because I think that the current text may be confusing and vague. As others have pointed out, this document includes rfc2119-like language, both capitalized and not. I realize that rfc1981 was published before rfc2119 and that no expectation on the language existed then. However, we're at a point in time where not only is rfc2119 in place, but draft-leiba-rfc2119-update (which clarifies that only uppercase language has special meaning) is in AUTH48. I think that this leads to the possibility that the average reader may interpret the requirements in this document in a way that wasn't intended. While I would prefer that this document be consistent (and either use capitalized rfc2119 language as intended, or not use it at all), I understand the intent of not changing some of the original text. I would be happy with a note like this one: "Note: This document is an update to RFC1981 that was published prior to RFC2119 being published. Consequently while it does use "should/must" style language in upper and lower case, the document does not cite the RFC2119 definitions. This update does not change that." [I borrowed this text from the INTDIR review thread. [1]] I find that including a note in the Shepherd's write-up is not enough, because the average reader/implementer will not consult it. [1] https://mailarchive.ietf.org/arch/msg/int-dir/bVH_0ydVdGssOiszJKhQXLYPuXY/?qid=4000f8a954b226266f429842911101f5
I agree with Alvaro's DISCUSS point about 2119 language.
"Nodes not implementing Path MTU Discovery MUST use the IPv6 minimum link MTU defined in [I-D.ietf-6man-rfc2460bis] as the maximum packet size." I searched for "IPv6 minimum link MTU" in draft-ietf-6man-rfc2460bis-09, and could not find that term. Even if unlikely at this point in the IPv6 implementation cycle, we don't want readers to believe that they should look at the minimum of the device's IPv6 link MTU(s). Proposal: define "IPv6 minimum link MTU" as 1280 octets in 2460bis, or in both documents.
In this document, I see: "IPv6 nodes SHOULD implement Path MTU Discovery in order to discover and take advantage of paths with PMTU greater than the IPv6 minimum link MTU [I-D.ietf-6man-rfc2460bis]. A minimal IPv6 implementation (e.g., in a boot ROM) may choose to omit implementation of Path MTU Discovery." In draft-ietf-6man-rfc2460bis-09: "It is strongly recommended that IPv6 nodes implement Path MTU Discovery [RFC1981], in order to discover and take advantage of path MTUs greater than 1280 octets. However, a minimal IPv6 implementation (e.g., in a boot ROM) may simply restrict itself to sending packets no larger than 1280 octets, and omit implementation of Path MTU Discovery." So a SHOULD in one document versus "strongly recommended" in the other. We should reconcile the two texts. Note: "may" and "may" are consistent. ICMPv6 PTB => ICMPv6 Packet Too Big (PTB)
Thanks for the agreed text update from the SecDir review that should show up in the next revision: https://mailarchive.ietf.org/arch/msg/secdir/TSP93gEx0QW9WDOHUK3X3ipGiMk I also agree with others that use of RFC2119 and ensuring consistent use of normative language would be helpful.
Document: draft-ietf-6man-rfc1981bis-06.txt

OVERALL: I see in the shepherd's writeup that you have opted not to cite RFC 2119, but that makes the mixed-case use of SHOULD/MUST even more confusing. I would suggest that at minimum you go through the document and evaluate whether each should/must should be capitalized, though I would prefer a cite to 2119. For instance: "changed. Therefore, attempts to detect increases in a path's PMTU should be done infrequently." Is this normative? I also share the concerns others have raised about whether, given the actual state of PMTU discovery, this is something we should be making an Internet Standard, but I'm willing to bow to the majority here.

S 3. "Note that Path MTU Discovery must be performed even in cases where a node "thinks" a destination is attached to the same link as itself." I think you need to qualify this must, because you just said above that you don't need to if you use the minimum. Perhaps: "Note that even when a node "thinks" a destination is attached to the same link as itself, it might have a PMTU lower than the link MTU..."

S 4. "Nodes SHOULD appropriately validate the payload of ICMPv6 PTB messages to ensure these are received in response to transmitted traffic (i.e., a reported error condition that corresponds to an IPv6 packet actually sent by the application) per [ICMPv6]." This seems like it ought to be a MUST. Is there a good reason why it is not? Perhaps also a cite to how one validates.

"When a node receives a Packet Too Big message, it MUST reduce its..." -- suggest "a valid Packet Too Big message", I think, because in graf 2 you say you should validate.

"elicit Packet Too Big messages. Since each of these messages (and the dropped packets they respond to) consume network resources, the node MUST force the Path MTU Discovery process to end." It's not clear to me what the requirement is.

"Nodes using Path MTU Discovery MUST detect decreases in PMTU as fast as possible. Nodes MAY detect increases in PMTU, but because doing..." Same thing: what are you requiring? How could I be nonconformant to this?

S 5. "This section discusses a number of issues related to the implementation of Path MTU Discovery. This is not a specification, but rather a set of notes provided as an aid for implementers." However, this section contains a lot of normative language. Is that all non-normative?

S 5.3. "If the stale PMTU value is too large, this will be discovered almost immediately once a large enough packet is sent on the path. No such mechanism exists for realizing that a stale PMTU value is too small, so an implementation SHOULD "age" cached values. When a PMTU value has not been decreased for a while (on the order of 10 minutes), the PMTU estimate should be set to the MTU of the first-hop link, and the packetization layers should be notified of the change. This will cause the complete Path MTU Discovery process to take place again." Is this really good advice for TCP? It seems like if you have a situation where it required several attempts to get the true PMTU (for instance, if you have successively narrower tunnels), then a PMTU reset could have a pretty material impact on throughput.

S 6. "dropped. A node, however, should never raise its estimate of the PMTU based on a Packet Too Big message, so should not be vulnerable to this attack." I get that this is now not a normative statement but rather a claim about nodes that follow the MUST NOT in S 4, but it might still be better to make it a MUST to avoid confusion.
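For reference, the Section 5.3 aging procedure being questioned here amounts to something like the following sketch (the data layout and function name are hypothetical): stale estimates are reset to the first-hop link MTU, which restarts discovery and can cause the throughput dip the reviewer describes.

```python
PMTU_AGE_LIMIT = 600.0  # "on the order of 10 minutes" (Section 5.3)

def age_pmtu_cache(cache: dict, first_hop_mtu: int, now: float) -> list:
    """Reset any PMTU estimate that has not been decreased within
    PMTU_AGE_LIMIT back to the first-hop link MTU; the caller would
    then notify the packetization layers for the returned paths."""
    reset_paths = []
    for dest, entry in cache.items():
        ts = entry["timestamp"]
        if ts != "reserved" and now - ts > PMTU_AGE_LIMIT:
            entry["pmtu"] = first_hop_mtu
            entry["timestamp"] = "reserved"  # until a full-size packet is sent
            reset_paths.append(dest)
    return reset_paths
```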
I'd like to add my voice to the concerns expressed regarding RFC2119 language. I understand the desire to only deal with the changed parts of the specification; but the current process as I understand it is that bis versions of documents are expected to be "brought up to code" according to modern IETF document practices. Perhaps we should have a conversation about whether that practice is in need of revising, but I'm not sure making piecemeal exceptions is the best way to go about starting that conversation. In any case, I believe that current practice is that new RFCs cite 2119 and adhere to its definitions when using these specific terms in all-caps; and that this will be published as a new RFC. Minor technical comment: Like others, I also had a really hard time with the paragraph in section 4 concluding with "MUST force the Path MTU Discovery process to end." It's difficult to read this as anything *other* than "you get one Packet Too Big, and just shut down discovery," but that's clearly not what the rest of the document says. (I'll also note that if we are treating capital "MUST" as normative, then "MUST attempt to" is kind of meaningless.) Minor technical comment: Section 5.4 has three paragraphs, starting with "Alternatively, the retransmission could be done in immediate response to a notification", that propose a more aggressive means of dealing with packets lost to PMTU issues; most of this text is a warning about how this can go awry and (if you'll excuse a bit of hyperbole) melt the Internet. Given that the first alternative works just fine and appears to be much safer, is this alternative actually something we want to recommend for today's implementations? The remainder of my comments are editorial. The second-to-last paragraph of the introduction uses the phrase "such messages" in a way that makes the antecedent difficult to find. I spent a while trying to figure out how PMTUD used TCP three-way-handshakes or blackholed TCP packets to determine PMTU.
Suggest: "..relies on ICMPv6 messages to determine..." The abbreviation "PTB" appears in the second paragraph of section 4. I would ordinarily suggest expanding on first use; but as this is the first and only use, I suggest simply replacing it with the long form used in the rest of the document. Section 5.1 introduces the term MMS_S, and relates it to EMTU_S. I note that the former is not in the terminology section, while the latter is -- I suspect that they should both be present. Section 5.2 uses the acronym "ECMP". I would suggest citing the related document in which ECMP is defined and optionally expanding the acronym. Section 5.2 indicates: "Also, the instance that sent the packet that elicited the Packet Too Big message should be notified that its packet has been dropped, even if the PMTU estimate has not changed, so that it may retransmit the dropped data." It is quite nonintuitive how this situation could arise: if the packet is of size X, and the PMTU has not changed, then it follows that X <= PMTU (as the packet would have been reduced in size otherwise). If the PMTU has not changed, it also follows that PMTU <= MTU (from the Packet Too Big message). So X <= MTU, and the packet really should not have been dropped unless the corresponding router has an implementation flaw of some kind. More importantly, it would seem that an attempt to transmit another packet of size X at this point would run an overwhelmingly high chance of triggering another Packet Too Big for whatever errant reason caused the first one to be sent. I'm sure this naïve explanation overlooks whatever nonintuitive situation is envisioned by this paragraph. It would be quite helpful if such a situation were described: I, as an implementor, would look at this and say "What? No. I'm not doing that. It's extra work for no benefit."
I suspect it will be dealt with by the RFC editor, but the first normative reference seems to have some kind of issue with the production of the authors' names. I see several acknowledgements in section B.1 that will be removed prior to publication. The authors may wish to consider moving these names to section 7 for the sake of posterity.
I know this is a bis document and my DISCUSS does not address any text that changed in the bis version, but given all previous discussion, I would like to discuss the following text passages on retransmissions, which don't seem appropriate for this document and are partly even wrong. In general it does not make a lot of sense to talk about retransmission semantics in this document, because this really depends on the upper layer protocol, and I'm really not sure if any implementation of a reliable transport sends retransmissions based on the reception of PTB messages (if exposed) rather than relying only on its own loss detection mechanism. This DISCUSS concerns a few sentences all over the document and most parts of section 5.4. Detailed proposals below:

I propose to either remove this sentence, or at least reword it to the following (or something similar):

OLD: "Retransmission should be done for only those packets that are known to be dropped, as indicated by a Packet Too Big message."

NEW: "The IP layer may indicate loss to the upper layer protocol of those packets that are known to be dropped, as indicated by a Packet Too Big message." Or MAY or SHOULD or MUST...?

Subsequently the following sentence should be removed as well: "An upper layer must not retransmit data in response to an increase in the PMTU estimate, since this increase never comes in response to an indication of a dropped packet."

And here is the bigger change in section 5.4:

OLD: "When a Packet Too Big message is received, it implies that a packet was dropped by the node that sent the ICMPv6 message. It is sufficient to treat this in the same way as any other dropped segment, and will be recovered by normal retransmission methods. If the Path MTU Discovery process requires several steps to find the PMTU of the full path, this could delay the connection by many round-trip times. Alternatively, the retransmission could be done in immediate response to a notification that the Path MTU has changed, but only for the specific connection specified by the Packet Too Big message. The packet size used in the retransmission should be no larger than the new PMTU."

NEW: "When a Packet Too Big message is received, it implies that a packet was dropped by the node that sent the ICMPv6 message. A reliable upper layer protocol will detect the loss of this segment, and recover it by its normal retransmission methods. Depending on the loss detection method that is used by the upper layer protocol, this could delay the connection by many round-trip times. Alternatively, the retransmission could be done in immediate response to a notification that the Path MTU was decreased, but only for the specific connection specified by the Packet Too Big message. The packet size used in the retransmission should be no larger than the new PMTU."

I don't understand the following paragraph. Can this be removed? "Note: A packetization layer must not retransmit in response to every Packet Too Big message, since a burst of several oversized segments will give rise to several such messages and hence several retransmissions of the same data. If the new estimated PMTU is still wrong, the process repeats, and there is an exponential growth in the number of superfluous segments sent."

The following text is fine, but probably is not needed if the whole document is reworded accordingly to ensure that retransmissions are solely the responsibility of the upper layer protocol: "Retransmissions can increase network load in response to congestion, worsening that congestion. Any packetization layer that uses retransmission is responsible for congestion control of its retransmissions. See [RFC8085] for more information."

This can also be removed, because a reliable protocol that detected loss and decided to send a retransmission should and will do the same processing as for all other retransmissions, e.g. reset the retransmission timer in TCP. Mentioning this separately is rather confusing: "This means that the TCP layer must be able to recognize when a Packet Too Big notification actually decreases the PMTU that it has already used to send a packet on the given connection, and should ignore any other notifications."

And this is even incorrect. Slow start means that you will increase the congestion window exponentially. Only sending one segment means setting the congestion/sending window to one. I propose the following change:

OLD: "Many TCP implementations incorporate "congestion avoidance" and "slow-start" algorithms to improve performance [CONG]. Unlike a retransmission caused by a TCP retransmission timeout, a retransmission caused by a Packet Too Big message should not change the congestion window. It should, however, trigger the slow-start mechanism (i.e., only one segment should be retransmitted until acknowledgements begin to arrive again)."

NEW: "A loss caused by a PMTU probe indicated by the reception of a Packet Too Big message MUST NOT be considered as a congestion notification, and hence the congestion window may not change."

And I also don't understand this sentence: "TCP performance can be reduced if the sender's maximum window size is not an exact multiple of the segment size in use (this is not the congestion window size)."
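The distinction the reviewer is drawing could be sketched like this (purely illustrative; the field names and header-size assumption are hypothetical): a drop signalled by a validated Packet Too Big message shrinks only the segment size, while an ordinary loss triggers multiplicative decrease of the congestion window.

```python
def on_segment_loss(conn: dict, caused_by_ptb: bool, new_pmtu: int = 0) -> None:
    """Handle a detected loss: a loss signalled by a (validated) Packet
    Too Big message is not a congestion signal, so only the segment
    size shrinks; ordinary loss halves the congestion window."""
    if caused_by_ptb:
        conn["mss"] = min(conn["mss"], new_pmtu - 60)  # 40B IPv6 + 20B TCP header
    else:
        conn["cwnd"] = max(1, conn["cwnd"] // 2)

conn = {"mss": 1400, "cwnd": 10}
on_segment_loss(conn, caused_by_ptb=True, new_pmtu=1280)
# conn["mss"] is now 1220; conn["cwnd"] stays 10
```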
1) I agree with Ekr on this sentence: "Nodes SHOULD appropriately validate the payload of ICMPv6 PTB messages to ensure these are received in response to transmitted traffic (i.e., a reported error condition that corresponds to an IPv6 packet actually sent by the application) per [ICMPv6]." This sounds like it should be a MUST, but I guess it depends on the upper layer protocol whether such a validation is possible or not, e.g. if information is available that can be used for validation. Maybe you can be more explicit here and even say something like: PMTU discovery should/must only be used if the upper layer protocol provides means for validation of the ICMP payload (like a sequence number in TCP)...? Further, also note that if the upper layer does the validation while the IP layer maintains EMTU_S, there must be an interface from the upper layer to the IP layer to tell if a packet is valid or not before the IP layer updates the MTU estimate. This is actually more complicated than this one sentence indicates.

2) Also as Ekr says, I have problems fully understanding this normative text in section 4: "After receiving a Packet Too Big message, a node MUST attempt to avoid eliciting more such messages in the near future. The node MUST reduce the size of the packets it is sending along the path. Using a PMTU estimate larger than the IPv6 minimum link MTU may continue to elicit Packet Too Big messages. Since each of these messages (and the dropped packets they respond to) consume network resources, the node MUST force the Path MTU Discovery process to end. Nodes using Path MTU Discovery MUST detect decreases in PMTU as fast as possible." I especially don't understand the first part, given that a PTB message may still indicate an MTU that is larger than the minimum link MTU, which then may cause another PTB message later on the path. This text reads as if, when you receive one PTB message, you should better end discovery and fall back to the minimum link MTU to avoid any further PTB messages and not waste any resources. I don't think that's the intention, and as such I don't understand when it is recommended to end discovery here...?

3) Section 5.2 seems to be written with only single-homed hosts in mind. It might be good to advise that the PMTU information should always be stored on a per-interface basis...?

4) Also section 5.2: You only advise to store information per flow ID; however, if the flow label is not used, wouldn't it really make sense to just use the 5-tuple instead? Also note that ECMP is often done based on the 5-tuple or even 6-tuple (with the ToS field).

5) And more in section 5.2: "When a Packet Too Big message is received, the node determines which path the message applies to based on the contents of the Packet Too Big message." MAYBE: "When a valid Packet Too Big message is received, the node determines which path the message applies to based on the contents of the Packet Too Big message." And further on: "If the tentative PMTU is less than the existing PMTU estimate, the tentative PMTU replaces the existing PMTU as the PMTU value for the path." This doesn't cover the case where a PMTU probe with a larger size was sent and the PTB message returns a larger value than stored. Maybe state this explicitly. This applies similarly to this sentence in section 6: OLD: "A node, however, should never raise its estimate of the PMTU based on a Packet Too Big message, so should not be vulnerable to this attack." NEW: "A node, however, MUST NOT raise its estimate of the PMTU based on a Packet Too Big message that is not a (validated) response to a PMTU probe that was previously sent by this node, so should not be vulnerable to this attack."

6) Further, section 5.2: Should this statement maybe be upper-case MUST: "The packetization layers must be notified about decreases in the PMTU."

7) Technical comment on section 5.3 in general: There is a difference in aging depending on whether a flow is active or not. While I maybe don't want to probe again for this connection, because my application already decided to use a mode where it can live with the current PMTU and it's too much effort to switch, I really want to probe again at the beginning of the next connection to check if I can use a different mode now. While the IP layer does not have a notion of connection, it can observe if packets are frequently sent with the same 5-tuple and reset the cached PMTU after a certain idle time.

8) Section 5.4: Should this maybe be normative, at least the last MUST NOT (be fragmented): "A packetization layer (e.g., TCP) must track the PMTU for the path(s) in use by a connection; it should not send segments that would result in packets larger than the PMTU, except to probe during PMTU discovery (this probe packet must not be fragmented to the PMTU)."

Nit: The abbreviation PTB is only used once in section 4 (and never expanded).
I agree with Alvaro's proposed resolution to the 2119 issue, which was also raised by the Gen-ART reviewer.
draft-ietf-dprive-dtls-and-tls-profiles
As a current WG chair I am recusing myself from this ballot.
I do have a concern regarding section 7.3, as it is not clear what really is being requested on the DHCP front here. While an IP address and an FQDN are generally both possible choices when providing configuration options using DHCP, the use of FQDNs for acquiring trusted DNS servers seems problematic. We have spent a great deal of effort writing up some of the potential issues in Section 8 of RFC7227. It would be good if you could take a look and clarify what is required from a potential new DHCP option and how the failure modes are expected to be handled.
I'm balloting "yes", but I do have some comments: Substantive: 5: "Clients using Opportunistic Privacy SHOULD try for the best case..." When might it be reasonable _not_ to try for the best case? (That is, why not MUST)? 5.1: What's a reasonable granularity for the profile selection? The text suggests that decision is on a per-query basis; is that the intent? I assume you don't expect a user to make a decision for each query. 6.5: The statement that a client using OP "MAY" try to authenticate seems inconsistent with the "SHOULD try for the best case" statement in S5. (But seem my comment above about that.) 13.2: [I-D.ietf-dprive-dnsodtls] is referenced using 2119 keywords, so it should be a normative reference. (Note that this would be a downref.) Editorial: 2: "MUST implement DNS-over-TLS [RFC7858] and MAY implement DNS- over-DTLS [I-D.ietf-dprive-dnsodtls]." Unless these are new-to-this-draft requirements, please use descriptive (non-2119) language. (Especially in a definition). 5: "Strict Privacy provides the strongest privacy guarantees and therefore SHOULD always be implemented in DNS clients along with Opportunistic Privacy." Does that mean "SHOULD implement both strict and opportunistic privacy" or "If you implement opportunistic you SHOULD also implement strict?" 6.2: Should list item "2" be "ADN+IP", like in the table? 11: Is "SHOULD consider implementing" different than "SHOULD implement"? If so, please consider dropping the 2119 "SHOULD" when talking about what people think about.
Please consider the editorial comments in the Gen-ART review: https://datatracker.ietf.org/doc/review-ietf-dprive-dtls-and-tls-profiles-09-genart-telechat-dupont-2017-05-10/
Here is Eric Vyncke's OPS DIR review: From the abstract: "This document discusses Usage Profiles, based on one or more authentication mechanisms, which can be used for DNS over Transport Layer Security (TLS) or Datagram TLS (DTLS). This document also specifies new authentication mechanisms." DPRIVE (DNS Private Exchange) aims at enhancing DNS privacy by encrypting the DNS traffic (DNSSEC only provides authentication/integrity). There are two profiles: strict and opportunistic. The latter allows normal DNS operations as a fallback, which is key for successful deployment. This document in section 6 compares the SIX different authentication mechanisms and gives some guidelines with a lot of SHOULD and MAY and little MUST. Unsure whether it makes the implementers' task easy. Section 8 is more directive and more useful. Section 7.3 is mainly about the legacy DHCP server for legacy IPv4. No word about IPv6 and no word about RFC 8106 (DNS info for SLAAC). Overall, there is no discussion about the performance (latency, load of clients/servers) of one authentication mechanism compared to the others, and no discussion about resilience (i.e. if one server fails, for example in the PKIX cert chains); I believe that performance and resilience to network error could be useful for the implementer/architect. As a reader, I regret that this document combines two aspects: description of the profiles but also how to extend one TLS authentication method to DTLS... I would have preferred having two documents. But this is mainly about readability.
I agree with EKR's and Mirja's comments and see they are being addressed.
S 9 mandates RFC 7250: o Raw public keys [RFC7250] which reduce the size of the ServerHello, and can be used by servers that cannot obtain certificates (e.g., DNS servers on private networks). This needs to be updated to indicate that the client MUST NOT offer 7250 unless it has a preconfigured SPKI, otherwise you're going to have interop problems.
TECHNICAL

S 5. subsequently connect. The rationale for this is that requiring Strict Privacy for such meta queries would introduce significant deployment obstacles. This profile provides strong privacy guarantees to the client. This Profile is discussed in detail in Section 6.6.

This point seems unclear. If you do these queries unprotected, then you don't have strong privacy guarantees. I think you mean that you get them via some trusted mechanism such as DHCP.

widespread adoption of Strict Privacy. It should be employed when the DNS client might otherwise settle for cleartext; it provides the maximum protection available.

I don't think this statement is accurate. It provides the best protection that the attacker will allow.

Table 1 seems to have N and D paired, so maybe you can coalesce them?

S 6.4. A DNS client that is configured with both an authentication domain name and a SPKI pinset for a DNS server SHOULD match on both a valid credential for the authentication domain name and a valid SPKI pinset (if both are available) when connecting to that DNS server. The overall authentication result should only be considered successful if both authentication mechanisms are successful.

You should cover the topic of user-defined trust anchors. Here's the relevant text from 7469: "For example, a UA may disable Pin Validation for Pinned Hosts whose validated certificate chain terminates at a user-defined trust anchor, rather than a trust anchor built-in to the UA (or underlying platform)."

EDITORIAL

Do you want to cite TLS 1.3 at this point?

S 3. o Any server identifier other than domain names, including IP address, organizational name, country of origin, etc.

Suggest "addresses", "names", for parallel structure with "domain names".

S 9. There are known attacks on (D)TLS, such as machine-in-the-middle and protocol downgrade. These are general attacks on (D)TLS and not specific to DNS-over-TLS; please refer to the (D)TLS RFCs for discussion of these security issues.

This text seems pretty unhelpful. Given that you are specifying 1.2, if you also require the relevant strong algorithms, then there should not be downgrade or MITM attacks.
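For context on the Section 6.4 discussion, a combined ADN + pinset check could be sketched as follows. The pin computation follows the RFC 7469 convention (base64 of the SHA-256 digest of the DER-encoded SubjectPublicKeyInfo); the function names are hypothetical.

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """RFC 7469-style pin: base64(SHA-256(SubjectPublicKeyInfo DER))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

def authenticated(adn_valid: bool, server_spki_der: bytes, pinset: set) -> bool:
    """Sketch of Section 6.4: when both an authentication domain name
    and a SPKI pinset are configured, BOTH checks must succeed."""
    return adn_valid and spki_pin(server_spki_der) in pinset
```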
The final paragraph of section 6.6 says "The system MUST alert by some means that the DNS is not private during such bootstrap." -- presumably, this means "client" where it says "system" (as opposed to any other part of the infrastructure) -- but I'm having a hard time envisioning how this gets practically implemented, given that the functionality described by this section is going to be implemented in DNS stub resolver libraries, which tend to be pretty far removed from any user interface. Given that this is a MUST-strength requirement, I think it would be very useful to describe what this alerting might look like for (a) interactive applications like a web browser; (b) commandline utilities like curl; and (c) background tasks like software update daemons. This would provide some context for the library implementors to provide the proper hooks to enable this "MUST" to be satisfied. Section 11.1 mentions that it will describe techniques for thwarting DNS traffic analysis, including "monitoring of resolver-to-authoritative traffic". I see that there have been measures added to prevent authoritative servers from determining the identity of the client; but given the phrasing I cite above, I was expecting a description of how to prevent eavesdroppers who can see both incoming and outgoing traffic from the recursive resolver from correlating the encrypted packets I send to that resolver with the plaintext queries it emits for non-cached results. As far as I can tell, there are no described counter-measures against such an attack (aside from hoping that volume of traffic to the resolver is too great to perform such correlation with any real precision), right? If such measures have been defined, I imagine a citation would be warranted. If not, the above phrasing should probably be qualified; e.g., "monitoring of resolver-to-authoritative traffic alone." 
Nits: draft-ietf-dprive-dnsodtls has been published as RFC 8094 The draft header indicates that this document updates RFC7858, but the abstract doesn't seem to mention this, which it should.
My main comment is on this part related to profile selection: Section 5.1 says: "A DNS client SHOULD select a particular Usage Profile when resolving a query." but then section 6.5 says: "This information can provide a basis for a DNS client to switch to (preferred) Strict Privacy where it is viable." I assume the latter sentence is supposed to mean switching for the next query to the same resolver? Regarding the sentence in section 5.1, it is also not fully clear to me why you use profiles on a per-query basis.

However, to me the sentence in section 6.5 does not really make sense, given that Opportunistic Privacy means that I want the best privacy possible but will not fail if that is not available. Why would I change this policy? I guess you mean to say something like: if authentication worked once to a certain resolver, and it is therefore known that the server supports DNS-over-(D)TLS, one could consider switching to Strict Privacy for future requests, because the likelihood that a subsequent authentication failure is an attack is high. Not sure if that's true, though; I think this needs a separate discussion. In general, it might be worth better distinguishing in this document between the case where encryption or authentication is not even offered/available for any reason, and the case where it fails (and therefore there might or might not be a (passive) attack).

Minor comments, but related:

1) "A DNS client that implements DNS-over-(D)TLS SHOULD NOT default to the use of clear text (no privacy)." I'm not sure I understand this sentence. What are the implications here? Does this mean that if you have implemented DNS-over-(D)TLS, you have to use it? Not sure that this is something that can or should be specified in an RFC.

2) section 6.5: "In this case, whilst a client cannot know the reason for an authentication failure, from a privacy standpoint the client should consider an active attack in progress and proceed under that assumption." What does this mean?
Does this lead to any meaningful actions, like logging? More advice would be helpful here.
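For illustration, the Strict vs. Opportunistic distinction under discussion might be sketched as follows. This is a minimal sketch of my reading of the profiles, not code from the draft; the function and parameter names are my own.

```python
# Hypothetical sketch of the profile-selection logic discussed above.
# Names are illustrative, not taken from the draft.

def proceed_with_resolver(profile, tls_established, authenticated):
    """Return True if the client should send queries to this resolver."""
    if profile == "strict":
        # Strict Privacy: require both encryption and authentication;
        # otherwise fail closed (no fallback to clear text).
        return tls_established and authenticated
    if profile == "opportunistic":
        # Opportunistic Privacy: use the best protection available,
        # but do not fail if encryption or authentication is missing.
        return True
    raise ValueError("unknown profile: %s" % profile)
```

Under this reading, the section 6.5 question above amounts to: when, if ever, should a client flip the `profile` argument from "opportunistic" to "strict" for a given resolver?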
(I just updated both my DISCUSS and my comment section.) I would like to ballot YES on this document, but I would like to discuss the following: Sorry for being DownRef police, but RFC 7918 is clearly Normative (because there is a SHOULD level requirement), but it is listed as Informative reference. It would be a DownRef once it is made Normative, unless the procedure from RFC 8067 is used. Is RFC 7918 a suitable DownRef? Is it widely implemented?
Please make RFC 7918 and RFC 7924 Normative references, as they are mentioned in SHOULD level requirements. I agree with Ekr's comments. I also agree with Mirja's first and last comments. The section on a future DHCP extension is a bit "hand-wavy". Is any work on this planned? I see that Suresh raised a DISCUSS on this point, so I am happy for him to hold it.
draft-ietf-bess-evpn-vpws
[ For -11 / -12 ] This document is very heavy on the acronyms, and could do with some expanding of these -- for example, the document starts out with "This document describes how EVPN can be used...". I'm no MPLS VPN person, so much time was spent searching to try to figure out what everything meant. I also agree with Spencer's "In multihoming scenarios, both B and P flags MUST NOT be both set." being hard to parse, and disagree with Acee that it is clear.

[ For -13 ] The draft was revised to address Alia's DISCUSS, and also Spencer's "traditional way" and "both B and P flags MUST NOT be both set" comments, but still does not expand EVPN; I also agree with Spencer that it would be helpful to expand P2P on first use. I reread the document and have some additional comments - note that these are only comments, but I think addressing them would make the document more readable...

1: Introduction: "that in EVPN terminology would mean a single pair of Ethernet Segments ES(es)." - I'm confused by the 'ES(es)' - guessing this was an editing failure and 'Ethernet Segments (ES)' was intended? You also use both "Ethernet AD" and "Ethernet A-D" - please choose one and stick with it.

1.1: Terminology: "EVI: EVPN Instance." -- Ok, but EVPN is still not defined / referenced.

3.1 EVPN Layer 2 attributes extended community: "A PE that receives an update with both B and P flags set MUST treat the route as a withdrawal. If the PE receives a route with both B and P clear, it MUST treat the route as a withdrawal from the sender PE." Do the above 2 sentences say the same thing? It sure sounds like repetition; if so, removing one would make this less confusing. If not, please explain the difference.

Figure 3: EVPN-VPWS Deployment Model: You use the terms / labels "PSN1", "PSN2" - what are these? "Provider <something> Network"?
I agree with people that the document is rather heavy on acronyms.
I did have some non-Discuss questions that you might wish to think about before the document is approved ...

In the Abstract: "This document describes how EVPN can be used to support Virtual Private Wire Service (VPWS) in MPLS/IP networks. EVPN enables the following characteristics for VPWS: single-active as well as all-active multi-homing with flow-based load-balancing, eliminates the need for traditional way of Pseudowire (PW) signaling, and provides fast protection convergence upon node or link failure." Everything is exceptionally clear, except that I don't know what the "traditional way" of signaling means.

The same phrase appears in Section 1 Introduction: "This document describes how EVPN can be used to support Virtual Private Wire Service (VPWS) in MPLS/IP networks. The use of EVPN mechanisms for VPWS (EVPN-VPWS) brings the benefits of EVPN to P2P services. These benefits include single-active redundancy as well as all-active redundancy with flow-based load-balancing. Furthermore, the use of EVPN for VPWS eliminates the need for traditional way of PW signaling for P2P Ethernet services, as described in section 4." with the addition of "as described in section 4", but I didn't see an explicit statement in Section 4 that explained what was replacing the "traditional way". Even a clear reference to an RFC where the "traditional way" was defined would be helpful.

It would probably be helpful to expand acronyms like "P2P" on first use. I immediately thought "peer to peer?" but I bet you didn't mean that. Yes, there's a terminology section, but it's three and a half pages into the document.

In this text: "For EVPL service, the Ethernet frames transported over an MPLS/IP network SHOULD remain tagged with the originating VLAN-ID (VID) and any VID translation MUST be performed at the disposition PE." why is this a SHOULD? I guess my first question should be "does this still work if you don't?"
In this text, In multihoming scenarios, both B and P flags MUST NOT be both set. the double both(s) made this difficult to parse. Is it saying In multihoming scenarios, the B and P flags MUST be cleared. or something else? But I'm guessing, and the rest of that paragraph made me doubt my guesses.
The shepherd write-up says: "Two IPR discussions from Juniper & Cisco respectively: https://datatracker.ietf.org/ipr/search/?submit=draft&id=draft-ietf-bess-evpn-vpws Haven't seen WG discussion on that." Can we confirm that the WG is aware of the IPRs before publication?

Other minor comments:

1) Agree with Warren that all the acronyms make it hard to read. Please check that you've spelled out all acronyms at their first occurrence in the intro accordingly, including EVPN.

2) section 3.1: Is the B flag even needed? Doesn't P=0 indicate that this is the Backup PE?

3) I would maybe move section 5 right after the intro because it provides some background on the benefits of this extension.

4) Are you sure there are no additional security considerations based on the information provided in this extension? E.g. an attacker indicates being the primary PE and thereby causes a conflict, or problems based on the indication of a small MTU by an attacker? Not sure if there is any risk or if that is covered somewhere else...?
I'm interested to see the response to Mirja's comment #4. Glad to see #1 is okay.
Thanks for addressing my Discuss on clarifying the use of VXLAN & that it is the tunneling technology and how it pertains to the services supported.
Looking at the Shepherd write up and the Ballot, I see no mention of the normative reference to RFC 7348, which is informational and part of the Independent Submission stream. As I mention in my comments below, I can't fully follow the technical contents of this document, but this seems like a red flag to me and -- as far as I can tell -- it hasn't been discussed yet. It's possible that the reference just ended up in the wrong section (and should actually be informative), but it's not immediately obvious on a casual examination whether that's true.
I strongly second Mirja's comment requesting positive confirmation from the WG that it is collectively aware of the associated IPR declarations.

From https://www.rfc-editor.org/materials/abbrev.expansion.txt:

> It is common in technical writing to abbreviate complex technical terms
> by combining the first letters. The resulting abbreviations are often
> called acronyms. Editorial guidelines for the RFC series generally
> require expansion of these abbreviations on first occurrence in a
> title, in an abstract, or in the body of the document.

These guidelines go on to point out which acronyms are common enough not to require expansion on first use. The following acronyms are not considered common, and require expansion on first use *and* in the title (I'm being very careful to cite only those which are *not* expanded on first use, so each of these should be actionable):

- VPWS
- EVPN
- P2P
- MAC (which is, itself, ambiguous without an expansion on first use)
- PE
- CE
- L2
- DF
- iBGP
- eBGP

I don't think "encap" is a word.

I have not made a complete effort to understand the technical aspects of this document as its acronym use is literally too thick for me to read and comprehend its contents. I presume it is readable to its community of interest (as three implementations already exist); but finding ways to write in prose instead of acronyms where possible would be highly welcome.
From Roni's Gen-ART review: Nits/editorial comments: In section 1 second paragraph "[RFC7432] provides the ability " looks like the reference is not a link to RFC7432.
draft-ietf-manet-olsrv2-multipath
I am not a MANET person, and know very little about the Optimized Link State Routing Protocol; however, I found this document to be very vague and poorly worded in many places. At some point I simply gave up trying to understand it, but have concerns that it is not sufficiently clear for independent implementations. I almost made these a DISCUSS, but, as I said, I'm not an OLSR person, and so I'm trusting Alvaro to know if it is deployable / implementable.

Comments:

S1.1: "The multi-path extension for OLSRv2 is expected to be revised and improved to the Standard Track," - I'm not sure an extension can be "improved to the Standard Track" - perhaps you mean that the documents will be improved and published as Standards Track? Or that once implementations are more stable they will be documented on the Standards Track?

"Although with existing experience, multiple paths can be obtained even with such partial information, the calculation might be impacted, depending on the MPR selection algorithm used." - I don't understand the "with existing experience", and this sentence is a fragment. I suspect that removing "with existing experience," would make this cleaner, but I don't really understand what you are trying to say...

"Different algorithms to obtain multiple paths, other than the default Multi-path Dijkstra algorithm introduced in this specification." - this should have a reference to somewhere in the document.

5.1: "CUTOFF_RATIO The ratio that defines the maximum metric of a path compared to the shortest path kept in the OLSRv2 Routing Set. For example, the metric to a destination is R_metric based on the Routing Set." - I don't understand what the last sentence is trying to say.

"CUTOFF_RATIO MUST be greater than or equal to 1. Note that setting the value to 1 means looking for equal length paths, which may not be possible in some networks." -- surely setting it to 2 (or any other number) will also end up looking for paths which might not be possible?
E.g:

          ┌──┐  ┌──┐  ┌──┐  ┌──┐
    ┌────▶│R1│─▶│R2│─▶│R3├─▶│R4│─────┐
    │     └──┘  └──┘  └──┘  └──┘     ▼
    ┌───┐            ┌──┐            ┌───┐
    │ S │───────────▶│R6│───────────▶│ D │
    └───┘            └──┘            └───┘

"SR_HOLD_TIME_MULTIPLIER The multiplier to calculate the minimal time that a SR-OLSRv2 Router Tuple SHOULD be kept in the SR-OLSRv2 Router Set. It is the value of the Message TLV with Type = SOURCE_ROUTE." - this is vague / confusing. I think that you need a reference to Sec 6.1.1.

9. Configuration Parameters: "the users of this protocol are also encouraged to explore different parameter setting in various network environments, and provide feedback." -- where?

12. IANA Considerations: "This section adds one new Message TLV, allocated as a new Type Extension to an existing Message TLV." -- this section seems to be missing some important information, like which registry this updates Message Type 7 in.

Nits:

S1.1: "Because the packet drop is normally bursty in a path" -- "Because packet drops on a path are normally bursty"...

"Other than general experiences including the protocol specification and interoperability with base OLSRv2 implementations, the experiences in the following aspects are highly appreciated:" s/ experiences including/ experiences, including / (grammar) s/ the experiences / experiences / (grammar)

"Although with existing experience, multiple paths can be obtained even with such partial information, the calculation might be impacted, depending on the MPR selection algorithm used." s/Although with existing experience/Although, with existing experience/ (grammar)

"In scenarios where the length of the source routing header is critical, the loose source routing can be considered." s/ the loose source / loose source /

"for example, the paths with lower metrics (i.e., higher quality) can transfer more datagrams compared to paths with higher metrics."
-- nit: many people (perhaps incorrectly) associate 'datagram' with 'UDP' - you might want to clarify (or just say packet)

S3: "MP-OLSRv2 is designed for networks with dynamic topology by avoiding single route failure." - this makes it sound like it was *designed* by avoiding single route failure.

"in IPv4 networks the interoperability is achieved by using loose source routing header;" - "in IPv4 networks interoperability is achieved using loose source routing headers;" (or "by using the loose...")

S4: "The reactive operation is local in the router" - "local to the router"

S5.1: "All the intermediate routers MUST be included in the source routing header, which makes the number of hops to be kept a variable." -- I don't understand how "the number of hops to be kept" is "a variable"; this makes it sound like I can set the number of hops to be kept. Perhaps you meant "a variable number of hops" or "the number of hops changes"?
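On the CUTOFF_RATIO point raised in the comments above: for what it's worth, my reading of the parameter is sketched below. This is illustrative only; the function name and data layout are my own assumptions, not from the draft.

```python
# Sketch of how I read CUTOFF_RATIO: a candidate path is kept only if
# its metric is within CUTOFF_RATIO times the shortest path's metric.
# Illustrative only; names are mine, not from the draft.

def paths_within_cutoff(paths, cutoff_ratio):
    """paths: list of (route, metric) tuples; returns the kept routes."""
    assert cutoff_ratio >= 1, "CUTOFF_RATIO MUST be >= 1"
    shortest = min(metric for _, metric in paths)
    return [route for route, metric in paths
            if metric <= cutoff_ratio * shortest]
```

Note that under this reading, a ratio of 1 keeps only equal-metric paths, while any larger ratio merely widens the window; either way the draft should make explicit what happens when no additional path falls inside the window.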
I find it really strange that this document uses an experimental Routing header type codepoint (254) but requires the processing to be same as the RPL Routing header (Type 3). Is there a reason things are done this way instead of just using the Type 3 header as is?
The following text in section 4 seems to indicate that scheduling is done on a per-packet basis: "When there is a datagram to be sent to a destination, the source router acquires a path from the Multi-path Routing Set (MAY be Round-Robin, or other scheduling algorithms)." This does not seem appropriate, as e.g. TCP packets routed over links with largely different delays may suffer performance degradation. ECMP usually hashes the 5-tuple or 6-tuple (incl. DiffServ Codepoint) to set up state and routes all packets belonging to the same flow on the same route. I recommend applying the same here.

Also related is this text in section 8.4, which should explain Round-Robin on a per-flow basis instead. Further, this should only be an example scheduling algorithm, while the text below seems to assume that Round-Robin is always used: "If a matching Multi-path Routing Tuple is obtained, the Path Tuples of the Multi-path Routing Tuple are applied to the datagrams using Round-robin scheduling. For example, there are 2 path Tuples (Path-1, Path-2) for destination router D. A series of datagrams (Packet-1, Packet-2, Packet-3, ... etc.) are to be sent router D. Path-1 is then chosen for Packet-1, Path-2 for Packet-2, Path-1 for Packet 3, etc. Other path scheduling mechanisms are also possible and will not impact the interoperability of different implementations."

Related is this text in section 8.4: "If datagrams without source routing header need to be forwarded using multiple paths (for example, based on the information of DiffServ Code Point [RFC2474])" RFC 2474 does not specify any application requirements on multipath use, and as such the DiffServ field should not be used to determine if a flow can be routed on multiple paths. The ability to profit from multipath routing depends not only on the application and protocols used but also on the characteristics of the multipath link(s), so it's hard to make any implicit assumptions here.
However, if routing were only recommended on a per-flow basis, this problem would not occur and the brackets above could be removed. Further, if routing were done on a per-flow basis, DiffServ could actually be used to decide which path to use, if e.g. one path has a lower delay, but that seems to need further discussion as well.
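The per-flow alternative suggested above can be sketched as follows: hash the 5-tuple so that all packets of a flow take the same path, instead of per-packet Round-Robin. This is an illustrative sketch of the general ECMP technique, not text from the draft; names are mine.

```python
import hashlib

# Sketch of per-flow path selection: packets of the same flow always
# map to the same path, avoiding reordering across paths with
# different delays. Illustrative only.

def path_for_flow(five_tuple, paths):
    """five_tuple: (src, dst, proto, sport, dport); paths: non-empty list."""
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(paths)
    return paths[index]
```

A real implementation would typically use a keyed hash to avoid predictability, but the interoperability point stands either way: the scheduling choice is local to the source router.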
Minor comments/questions: 1) section 8.4: this sentence is not clear: "It is RECOMMENDED to use MTU sizes considering the source routing header to avoid fragmentation." MAYBE "It is NOT RECOMMENDED to fragment the IP packet if the packet with the source routing header would exceed the minimum MTU along the path. In this case source routing and therefore the additional path calculated by MP-OLSRv2 SHOULD NOT be used." 2) section 9: "For IPv6 networks, it MUST be set to 0, i.e., no constraint on maximum number of hops." Why is that? 3) Not sure why section 12.1. is there? Can this be removed?
I reviewed the -12 version of this document, and had a comment I was going to make about dropping packets when no contiguous path of source-routing capable routers existed between the endpoints; but when I went to quote the offending text, discovered that it has been fixed in the ink-is-still-wet -13 version of the document, dropped one day before the telechat. To highlight for anyone else who has similarly reviewed the -12 version: the only other non-editorial change I find is that avoiding fragmentation has been demoted from normative to non-normative (see the last two paragraphs of section 8.4). My intuition is that fragmentation is sufficiently disruptive that normative language is called for here, but I don't feel strongly about it.
draft-ietf-i2nsf-problem-and-use-cases
I appreciate the work that has gone into the document for the exercise of defining the problem and use-cases, however I question the value of publishing this document as an RFC, but I will not block its path should other ADs consider it of use. I am taking an abstain position.
Similar to other Abstains, I won't block publication, but I question the value, especially of publishing the current version at this time. The document rambles in its descriptions and is not concise about the problem to be addressed by i2nsf. I recommend holding off on publication until it can be fine-tuned; it currently appears to be a cut-and-paste of many documents. Agree with others that the document should be Informational, not Standards Track. For example, section 5 seems to summarize that i2nsf will only focus on policy provisioning. Yet section 3.4 discusses capability negotiation, and 3.1.2 discusses monitoring mechanisms and the execution status of NSF capabilities. Other sections also imply much more, describing expectations of security controller functionality. There are several rather overzealous claims: section 4.4 "botnet attacks could be easily prevented by provisioning security policies using the i2nsf..interface" and section 4.5 "security controller would keep track of ..if there is any policy violation ..proof..in full compliance with the required regulations". Several sentences don't parse, e.g. "thereby raising concerns about the ability of SDN computation logic to send security policy-provisioning information to the participating NSFs".
I found the document to provide a useful overview and introduction - I think that documents which provide an introduction to a technology are useful, as they set the stage for users and implementers to understand how everything ties together. I thank the authors for writing it. I also note that this is part of the I2NSF charter, and was written to satisfy this.
Agree with my Abstaining co-ADs and don't think this document should be published on the Standards track but I will not stand in the way of publication.
I don't see value in the publication of this document in the RFC series. I can see that this document was useful for discussion in the working group, but I don't know why it needs to be published as an RFC. Also, there is quite some redundancy throughout the document, as well as between the problem statement and the use case parts. Spelling out requirements for the protocol design based on the analysis of these problems and use cases (which was already attempted a bit from time to time in the doc) would have been more useful, but would still not have the archival value that justifies publication as an RFC in the IETF stream (indicating IETF consensus).
I agree with the various abstains about this draft not appearing to have archival value. I chose not to ballot "abstain" because I think it's best to handle that issue at charter or adoption time rather than doing so this close to the finish line. (I note that the WG charter explicitly says that the WG may choose not to publish, so this is a borderline case.) If there really are good reasons to expect archival value, it would be helpful to include a paragraph early in the document describing those reasons. [Update: Thanks for addressing my other comment.]
[Mirja beat me to the DISCUSS; FWIW, I completely agree that, if published, this document should not be in the Standards Track.] Because this document only provides background on the problem space and some use cases, I don't think it has the long standing value to be published as an RFC (of any maturity level). Having a clear understanding of the problem and of the use cases is important for the eventual development of a solution, but in this case no specific path is clearly marked: the language includes a lot of "may be required/need/etc" not resulting in a strong basis to build a solution. I know that the i2nsf Charter gave the WG the option to not publish this document, and that it is being published anyway...so I won't stand in the way of publication and am ABSTAINing instead.
The document has been changed to Informational. The ballot writeup was not changed, as that would have reset all of the ballots. The edits made from the initial IESG review have, IMO, significantly helped to improve the document; it reads more like a problem statement/overview now and is hopefully a helpful document for anyone coming into the working group or using the work later.
Given that the WG charter gave i2nsf the decision about whether to publish as an RFC and that this is Informational, I am fine with this document being published as an RFC. I think that it will serve as useful background to folks considering the i2nsf work and understanding the motivations for standardizing interfaces that have previously been vendor-specific.
I don't have any problem with this document per se, but it's a little odd how it's written in a vacuum, as if there weren't already technologies which do a lot of the things you are talking about here (e.g., YANG) and which the WG intends to use. I think this document would be a lot stronger if it didn't act as if the WG was agnostic, and instead called out what solutions the WG intends to adopt for these.

I'm also somewhat surprised this is being advanced as Standards Track, given that it doesn't have any normative content, and because the writeup says that there isn't commitment to implement this. I won't hold a DISCUSS on this, but I would suggest it be Informational.

S 2. "Flow-based NSF: An NSF which inspects network flows according to a security policy. Flow-based security also means that packets are inspected in the order they are received," This seems over-specific, because sometimes firewalls and the like will store packets so that they can reassemble them, in which case they inspect packets in logical, not time, order.

S 3.1.7. "Different policies might need different signatures or profiles. Today, the construction and use of black list databases can be a win-win strategy for all parties involved." Well, except for attackers. They are involved.

S 3.1.9; bullet 3. Symmetric keys and group keys are not the same type of category, so I can't read this section. What are you trying to say here?

S 3.5. "xamine" and "scnearios" are misspelled.

S 3.6. ToR seems to be undefined.

Figure 3. I think this dotted circle-thing is intended to tell me that the operator controls the stuff inside the circle, but I'm not sure. Maybe some labels would help.
I agree with Mirja's DISCUSS, which appears to have been mostly addressed. The IESG writeup appears to need updating to match the new document's intended status. I am voting No Objection rather than Abstaining for the reasons Ben outlines in his No Objection.
I agree with Alvaro and Mirja.
draft-ietf-grow-large-communities-usage
Overall the document was well written and easy to read. I did have one question, though. It is not clear how the values for Local Data Part 1 are matched up to functions and communicated between the peer ASes. Is this going to stay purely a local matter between ASes, or is there going to be a movement towards some sets of well-known functions (e.g. the BGP blackhole community, RFC 7999)?
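For context, the structure in question (per RFC 8092) is three unsigned 32-bit values: the Global Administrator (an ASN), Local Data Part 1 (commonly used as a function identifier), and Local Data Part 2 (a parameter). A minimal sketch of the wire format and canonical representation, with function names of my own choosing:

```python
import struct

# Sketch of the RFC 8092 BGP Large Community structure: three 4-octet
# values, canonically rendered as "GlobalAdmin:LocalData1:LocalData2".
# The semantics of the two Local Data parts are entirely up to the
# Global Administrator's AS, which is the point of the question above.

def encode_large_community(global_admin, local1, local2):
    """Pack the three fields into the 12-octet wire form."""
    return struct.pack("!III", global_admin, local1, local2)

def decode_large_community(wire):
    """Render the 12-octet wire form in canonical notation."""
    ga, l1, l2 = struct.unpack("!III", wire)
    return "%d:%d:%d" % (ga, l1, l2)
```

Since only the Global Administrator field has defined semantics, any shared meaning for Local Data Part 1 (e.g. a blackhole function analogous to RFC 7999) would have to come from out-of-band coordination or a future registry, which is exactly the open question.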
Introduction: "BGP Large Communities [RFC8092] provide a mechanism to signal opaque information between Autonomous Systems (ASs)." Not only "between": BGP communities might also be used inside an AS, as you describe for the "informational communities".

As mentioned by Jouni in his OPS-DIR review: one minor nit relates to the management & administration of the large communities functions and the description of their semantics. Are those maintained somewhere? If there are existing repositories, documentation, etc., it would be nice to point those out. The document currently only hints at NANOG and NLNOG.
Thanks for addressing the SecDir review (and GenArt). https://mailarchive.ietf.org/arch/msg/secdir/T4xX2o_TrMIRVQ1Z2u25wGvxR0U
So this is basically for attaching arbitrary tags to BGP routes? Privacy Considerations probably need to be added to this document.
conflict-review-pantos-http-live-streaming
A few comments/questions on the draft itself:

- The draft contains the following addition to the copyright notice: "This document may not be modified, and derivative works of it may not be created, and it may not be published except as an Internet-Draft." While this is up to the ISE to decide, it seems like that notice may preclude publication as an RFC.

- I am concerned at the lack of a privacy discussion in the draft. Authors, please consider adding a privacy considerations section to address questions like the following: What can an on-path third party (e.g. an ISP) learn about the media consumption habits of a person using this mechanism? Can that be mitigated? If so, how?

- application/vnd.apple.mpegurl is already registered. This appears to be an update to that registration. Has (or will) this update go through the expert review required for the registration of vendor-tree media types?
I don't object to the final result, i.e. that the relationship to the work does not prevent publication. However, the text is not one of the RFC 5742 options; I don't think we need to qualify the relationship, since it either is related or it isn't.
I agree with Ben that the updated MIME registration template should go through MIME expert reviewer. I don't see anything particularly wrong with the new template, so this shouldn't be a problem.
Sounds like we have a resolution regarding the copyright notice, thanks.
I agree with Ben's comment that the following statement does not seems to be compatible with the IETF copyright: "This document may not be modified, and derivative works of it may not be created, and it may not be published except as an Internet-Draft."