IESG Narrative Minutes
Narrative Minutes of the IESG Teleconference on 2014-08-21. These are not an official record of the meeting.
Narrative scribe: John Leslie and Susan Hares (The scribe was sometimes uncertain who was speaking.)
Corrections from: Cindy
2. Protocol Actions
2.1 WG Submissions
2.1.1 New Items
2.1.2 Returning Items
2.2 Individual Submissions
2.2.1 New Items
2.2.2 Returning Items
2.3 Status Changes
2.3.1 New Items
2.3.2 Returning Items
3. Document Actions
3.1 WG Submissions
3.1.1 New Items
3.1.2 Returning Items
3.2 Individual Submissions Via AD
3.2.1 New Items
3.2.2 Returning Items
3.3 Status Changes
3.3.1 New Items
3.3.2 Returning Items
3.4 IRTF and Independent Submission Stream Documents
3.4.1 New Items
3.4.2 Returning Items
1211 EDT no break
4 Working Group Actions
4.1 WG Creation
4.1.1 Proposed for IETF Review
4.1.2 Proposed for Approval
4.2 WG Rechartering
4.2.1 Under evaluation for IETF Review
4.2.2 Proposed for Approval
5. IAB News We can use
6. Management Issues
7. Agenda Working Group News
1228 EDT Adjourned
(at 2014-08-21 07:30:04 PDT)
In section 3.2: In the event that the resource changes in a way that would cause a normal GET request at that time to return a non-2.xx response (for example, when the resource is deleted), the server sends a notification with an appropriate response code (such as 4.04 Not Found) and removes all clients from the list of observers of the resource. Would the 4.04 message be confirmable in cases where a 2.05 would be? If so, does the removal happen when the confirmation is received, or immediately? Also, this text implies that the server sends one message and then removes all the clients from the list of observers; wouldn't it make more sense to say that the server sends one message and removes the client to which it sent the message from the list of observers? Otherwise it seems as if only one client would be notified.
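The ambiguity raised above can be made concrete with a short sketch. This is not from the draft; it takes the "notify every observer, then remove them all immediately" reading of the quoted Section 3.2 text, which is exactly the point the reviewer asks about. All names here are illustrative.

```python
# Sketch of one reading of the Section 3.2 behaviour under discussion:
# on deletion, the server sends a 4.04 notification to every observer
# and clears the observer list immediately. Whether removal should
# instead wait for confirmation is the reviewer's open question.
def resource_deleted(observers, send):
    for client in list(observers):
        send(client, code="4.04")   # notification with an error response code
    observers.clear()               # all observers removed at once
```

The alternative reading (remove each client only after its 4.04 is confirmed) would need per-client state instead of a single `clear()`.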
I liked this document more than the number of questions I have might make you guess ... These aren't blocking, and "Spencer doesn't grok CoAP" could be a reasonable response to most of them, but I'd ask that you consider them along with any other comments you might collect during IESG evaluation. In this text: 3.2. Notifications Notifications typically have a 2.05 (Content) response code. They include an Observe Option with a sequence number for reordering detection (see Section 3.4), and a payload in the same Content-Format as the initial response. If the client included one or more ETag Options in the request (see Section 3.3), notifications can also have ^^^^ a 2.03 (Valid) response code. I would read “also” as implying “simultaneously”, and I bet that’s not true. If it’s not, would “notifications would have a 2.03 (Valid) response code rather than 2.05” be clearer? I mention this mostly because CoAP is the same as HTTP except when it isn’t, so I don’t know that you don’t mean “simultaneously” without going to look :-) In this text: 3.3.1. Freshness To make sure it has a current representation and/or to re-register its interest in a resource, a client MAY issue a new GET request with the same token as the original at any time. All options MUST be identical to those in the original request, except for the set of ETag Options. It is RECOMMENDED that the client does not issue the request while it still has a fresh notification/response for the resource in its cache. Additionally, the client SHOULD at least wait for a random amount of time between 5 and 15 seconds after Max-Age expired to avoid synchronicity with other clients. Am I reading this correctly as “wait between 5 and 15 seconds after Max-Age expires to send a GET and re-register”? If so, you folk are the experts, but is this making it more likely that the client will miss state changes if the GET to re-register is dropped? Thanks for the shout-out to randomness, of course. In this text: 4.3.1. 
Freshness After returning the initial response, the server MUST try to keep the ^^^^^^^^^^^^^^^ returned representation current, i.e., it MUST keep the resource state observed by the client as closely in sync with the actual resource state as possible. and in at least one other place in Section 4 that talk about trying to keep the client in sync, it looks like you’re using RFC 2119 language to describe what the protocol designers are thinking (“we MUST make sure that happens”), in ways that can’t be tested and don't impact interoperability. The second MUST seems more reasonable (squishy, but I wouldn't complain about it). In this text: 4.5.1. Congestion Control The server SHOULD NOT send more than one non-confirmable notification ^^^^^^^^^^ per round-trip time (RTT) to a client on average. If the server cannot maintain an RTT estimate for a client, it SHOULD NOT send more than one non-confirmable notification every 3 seconds, and SHOULD use an even less aggressive rate when possible (see also Section 3.1.2 of RFC 5405 [RFC5405]). could you give some guidance on violating the SHOULD, and when/why that would be a great idea? The rest of the congestion control section seemed very reasonable. Thank you. In this text: 5. Intermediaries To perform this task, the intermediary SHOULD make use of the ^^^^^^ protocol specified in this document, taking the role of the client and registering its own interest in the target resource with the next hop towards the server. I find myself wondering why this isn’t a MUST.
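The congestion-control SHOULD quoted above (Section 4.5.1) can be sketched as a simple per-client pacer. This is an illustrative sketch, not text from the draft; the class and attribute names are assumptions.

```python
import time

# Illustrative sketch of the quoted SHOULD: at most one non-confirmable
# notification per RTT on average, falling back to one every 3 seconds
# when no RTT estimate is available for the client.
FALLBACK_INTERVAL = 3.0   # seconds, per the quoted text

class NotificationPacer:
    def __init__(self):
        self.last_sent = {}   # client -> time of last non-confirmable notification
        self.rtt = {}         # client -> smoothed RTT estimate, if any

    def may_send(self, client, now=None):
        now = time.monotonic() if now is None else now
        interval = self.rtt.get(client, FALLBACK_INTERVAL)
        last = self.last_sent.get(client)
        if last is not None and now - last < interval:
            return False          # suppress or coalesce this notification
        self.last_sent[client] = now
        return True
```

Guidance on when violating the SHOULD is sensible (the reviewer's question) would amount to stating when `may_send` may be bypassed.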
I have no objection to the publication of this document, but I note a number of issues below that may be documentation concerns or may be wrinkles in the protocol. I leave the authors, shepherd, and AD to work out if any action is needed. --- Section 1.1 needs to explain what is a "resource". There is a special meaning in the context of this document (I think) that is not the same as the "network resource" that other people working in constrained networks might consider. You should be able to slot this in to... The model of REST is that of a client exchanging representations of resources with a server, where a representation captures the current or intended state of a resource and the server is the authority for representations of the resources in its namespace. A client interested in the state of a resource initiates a request to the server; the server then returns a response with a representation of the resource that is current at the time of the request. --- I was doing well in understanding how this protocol was a trade-off in optimization. A repeated GET/response exchange is heavy on network resources. A register/push system (like here) addresses that but trades it for state on the server. You can't win, but you can choose, and this document appears to make a choice. And then, in Section 1.2... A client remains on the list of observers as long as the server can determine the client's continued interest in the resource. The interest is determined from the client's acknowledgement of notifications sent in confirmable CoAP messages by the server. When the client deregisters, rejects a notification, or the transmission of a notification times out after several transmission attempts, the client is considered no longer interested and is removed from the list of observers by the server. So this has two problems: 1. 
It appears to say that the GET/response mode is replaced with register/push/ack which does not reduce the message flows and causes the server to retain even more state :-( 2. "client's acknowledgement of notifications sent in confirmable CoAP messages by the server" is ambiguous. It could be read to say that the acknowledgements are sent in confirmable messages by the server! I think you need some more clarity in this paragraph. How about... A client remains on the list of observers as long as the server can determine the client's continued interest in the resource. The server may send a notification in a confirmable CoAP message to request an acknowledgement by the client. When the client deregisters, rejects a notification, fails to respond to a confirmable CoAP message, or when the transmission of a notification by the server times out after several transmission attempts, the client is considered to be no longer interested and is removed from the list of observers by the server. See also comments on Section 3.5. Maybe the document is also missing guidance about how often to seek confirmation. An acknowledgement message signals to the server that the client is alive and interested in receiving further notifications; if the server does not receive an acknowledgement in reply to a confirmable notification, it will assume that the client is no longer interested and will eventually remove the associated entry from the list of observers. --- Section 2 jumps in a little with some assumptions of how much the reader knows about CoAP. Of course, it is reasonable to assume familiarity, but maybe some references for what an Option is and how it is encoded? --- Section 3.1 A client ... MUST NOT register more than once for the same target resource. So, suppose a client does register a second time for the same resource. The server still has to handle it, notwithstanding the "MUST NOT". 
It can handle it by saying: - I see it is a duplicate, I'll ignore it or - I see it is a duplicate, I'll treat it as a protocol violation and reject it. But in Section 4.1 If an entry with a matching endpoint/token pair is already present in the list (which, for example, happens when the client wishes to reinforce its interest in a resource), the server MUST NOT add a new entry but MUST replace or update the existing one. So, you have written text to describe how a server handles this case and you have even described why a client might send a second registration. Can you clarify? --- Section 3.3.1 A client MAY store a notification like a response in its cache and use a stored notification that is fresh without contacting the server. This reads very much like an implementation detail rather than a protocol specification. From a protocol point of view the information in the notification is fresh until it times out. What use the client makes of that is surely up to the client. --- Section 3.5 An acknowledgement message signals to the server that the client is alive and interested in receiving further notifications; if the server does not receive an acknowledgement in reply to a confirmable notification, it will assume that the client is no longer interested and will eventually remove the associated entry from the list of observers. Now. Suppose the notification or acknowledgement is lost (I think message loss is possible in CoAP, right?). Or suppose there is reordering so that the confirmable notification is overtaken by a subsequent non-confirmable notification? Shouldn't the server have a slightly more rigorous approach to determining that a client is no longer interested in notifications to avoid falsely removing an interested client? Perhaps that is what "eventually" is supposed to convey, but that is not a suitable word for including in a protocol spec.
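The Section 4.1 replace-or-update behaviour discussed above (a second registration with the same endpoint/token pair MUST NOT create a new entry) can be sketched briefly. This is an illustrative sketch only; the class and method names are assumptions, not from the draft.

```python
# Minimal sketch of the Section 4.1 rule under discussion: keying the
# observer list by the (endpoint, token) pair makes a re-registration
# replace the existing entry rather than add a duplicate, which is how
# a client "reinforces" its interest in a resource.
class ObserverList:
    def __init__(self):
        self.entries = {}   # (endpoint, token) -> registration details

    def register(self, endpoint, token, details):
        # Replaces any existing entry for this endpoint/token pair.
        self.entries[(endpoint, token)] = details

    def deregister(self, endpoint, token):
        self.entries.pop((endpoint, token), None)
```

Under this reading, the apparent conflict is only in Section 3.1's "MUST NOT register more than once", not in the server's handling, which is well defined.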
As usual, I'm impressed by the high quality of documents from the CORE WG. Thanks! There are just a few points that need clarification. One very temporary point: Has anyone done a comparison between this work and the server push aspects of HTTP/2? Is there any value in trying to align the two? It's unclear to me from the text of Section 2 how the 0/1 register/deregister values are used. Are these reserved values out of the sequence number space? Or are they carried somewhere else in the option? I infer from Section 3.6 that the answer is the former, but Section 2 should be explicit about this. In fact, it seems like it's not necessary to reserve the value 1 at all, since the server must interpret any positive value as deregistration. Calling out 1 as special invites server implementations to screw this up. "the time elapsed between the two incoming messages is not so large that the difference between V1 and V2 has become larger than the largest integer that it is meaningful to add to a 24-bit serial number" The text seems confused about whether the value of the Observe option is a serial number or a time value. The definition says that it's a serial number, but this sentence implies that it's somehow related to time. In order to avoid clients making unwarranted assumptions about the value of the Observe option, it seems important to clarify this.
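The serial-number-versus-time confusion raised above is easiest to see in the comparison rule itself, which mixes both: a 24-bit serial comparison plus a time-based escape hatch. The sketch below is a hedged reading of the rule the quoted text describes; the constant and function names are illustrative.

```python
# Hedged sketch of the 24-bit Observe reordering check the quoted text
# describes: V2 is "newer" than V1 if it is ahead by less than half the
# 24-bit space (handling wraparound), or if so much time has passed that
# the serial comparison is no longer meaningful.
HALF = 1 << 23            # largest meaningful forward distance in a 24-bit space
MAX_LATENCY = 128         # seconds; time-based escape hatch (an assumption here)

def is_newer(v1, t1, v2, t2):
    """True if notification (value v2, arrival time t2) is newer than (v1, t1)."""
    return ((v1 < v2 and v2 - v1 < HALF) or
            (v1 > v2 and v1 - v2 > HALF) or
            t2 > t1 + MAX_LATENCY)
```

This makes the reviewer's point concrete: the value is a serial number, but a client that applies the rule must also track arrival times, so clarifying which assumptions clients may make about the value seems worthwhile.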
"And third, the server may erroneously come to the conclusion that the client is no longer interested" To mitigate this, might it be useful for a client to sometimes send "gratuitous ACKs"? That is, to periodically re-ACK the last notification to re-confirm its interest? "If the server returns a 2.xx response that includes an Observe Option as well..." Does the value of this option matter at all? Could the server, for example, simply mirror the client's option? "Notifications are additional responses..." Might be helpful to re-word to emphasize that the only difference between a "notification" and a normal response is the presence of the Observe option. "Non-2.xx responses do not include an Observe Option..." Should this be a MUST NOT? It seems like an interop requirement, in the sense of maintaining a consistent view of subscription state between server and client.
- You probably won't want to but I'll ask anyway, just in case:-) The timing and sizes of notifications could expose sensitive information to a network attacker even if encrypted. TLS 1.3 is considering providing padding but TLS 1.2 and earlier don't really. HTTP/2.0 is also considering allowing padding. So should CoAP allow for padding in general, and if so, should this extension also? Or, is there a way to get the same result by sending out-of-date or other notifications that won't be accepted by the observer? If so, might it be worth documenting that? (However, it'd probably be better if both sides knew what was going on.) - I expected to see something about DTLS in section 7. Is there really nothing to be said about session lifetimes or expiry or keep-alives? Has anyone tried this protocol over DTLS in the interops?
Two points/questions: - I wondered what happens in the case when a server is sending notifications too frequently to the client and had to read through to Section 4.5.1. Do you mind adding a reference at the end of Section 1.4 to Section 4.5.1? Just to get an early heads-up to the interested reader. - I have a headache with the model used in Section 3.6, i.e., that the client just forgets its wish to receive notifications and relies solely on the transport, i.e., sending the Reset message. The second part, i.e., describing how to explicitly remove notifications, looks like the much more straightforward way of removing the notifications from the server. Your first approach looks much more like a last-resort handling. Especially since the Reset messages can get lost, and it will take a very long time in this case until the server stops sending notifications. And by the way: thanks for the considerations about congestion control!
Thank you for this excellent and much needed work. I'm happy to recommend the publication of this specification as an RFC:
3.3.1: The server uses the Max-Age Option to indicate an age up to which it is acceptable that the observed state and the actual state are inconsistent. If the age of the latest notification becomes greater than its indicated Max-Age, then the client MUST NOT assume that the enclosed representation reflects the actual resource state. The first sentence never defines "acceptable" or "inconsistent" in this context. It sounds like there is no guarantee that if Max-Age hasn't expired, the observed state is identical to the actual state. If so, then does the server merely decide what is acceptable for itself? The semantics of Max-Age aren't clear. The second sentence just seems wrong: Unless you want to say that the client MUST/SHOULD ignore the value if it is beyond Max-Age (in which case say *that*), I'd change the sentence to "The client can use Max-Age to determine if the latest notification received by the client reflects the actual resource state." 3.4: Since the goal is to keep the observed state as closely in sync with the actual state as possible, a client MUST NOT consider a notification fresh that arrives later than a newer notification. First, I think the MUST NOT is too strong. Also, "consider" is not an actionable implementation. How about: Since the goal is to keep the observed state as closely in sync with the actual state as possible, a client SHOULD discard an older notification that arrives later than a newer notification. 4.3.1: After returning the initial response, the server MUST try to keep the returned representation current, i.e., it MUST keep the resource state observed by the client as closely in sync with the actual resource state as possible. Neither of those MUSTs are reasonable. Saying "the server MUST do its best" is silly. The server MAY wish to prevent that by sending a new notification with the unchanged representation and a new Max-Age just before the Max-Age indicated earlier expires. 
If you actually want this to be an option, s/The server MAY wish to prevent that by sending/To prevent that, the server MAY send/. However, I don't understand why that isn't a SHOULD. If the server knows the client has stale data, SHOULDn't the server refresh the client? 4.5: A server that transmits notifications mostly in non-confirmable messages MUST send a notification in a confirmable message instead of a non-confirmable message at least every 24 hours. This prevents a client that went away or is no longer interested from remaining in the list of observers indefinitely. That can't possibly be a protocol requirement. If I am a server that has plenty of space in my table and there are infrequent enough changes, I don't have to do this every 24 hours. I can choose to do it every minute, every day, or every week, as I see fit.
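The Max-Age semantics discussed above (Section 3.3.1) amount to a simple client-side freshness check. The sketch below is one hedged reading of that text, not the draft's definition; the class and field names are assumptions.

```python
import time

# Hedged client-side reading of the Section 3.3.1 Max-Age text: a cached
# notification is treated as fresh until its age exceeds the Max-Age it
# carried, after which the client can no longer assume the enclosed
# representation reflects the actual resource state.
class CachedNotification:
    def __init__(self, payload, max_age, received_at=None):
        self.payload = payload
        self.max_age = max_age           # seconds, from the Max-Age Option
        self.received_at = time.monotonic() if received_at is None else received_at

    def is_fresh(self, now=None):
        now = time.monotonic() if now is None else now
        return (now - self.received_at) <= self.max_age
```

Note this only says when the client may stop trusting the cached state; it says nothing about how far the observed and actual states may diverge before Max-Age expires, which is the gap the review points out.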
I think it is worth adding a mention of Section 9 applying in addition to section 11 as a reference for security considerations in 7252. This would only add a couple of words and make it clear that you've covered session encryption options with explanations of why each option exists and the risks.
I do agree with Adrian's comments and concerns though.
This is a cool document—I'm glad the IETF is working on this. Please don't take the following comments as anything other than an attempt to address what I think are some relatively minor issues with the document, hopefully easily addressed. 4.1 says "Some regulatory domains may have specific rules regarding how long the spectrum data remains valid in these cases." It might be worth adding that in such cases the initialization procedure is required. This is addressed to some extent in section 4.3, but I'm not sure it's completely addressed. The potential problem I see is that a device might be used in a regulatory regime where this is required, but might have pre-configured values for some other regulatory regime. If these values are not valid in the regime that applies for its current location, it might have a too-long refresh interval configured, fail to detect that this is the case, and as a result accidentally break the law. Can a scenario like this occur in practice? Section 5.2 says that the manufacturer ID and model ID are optional, but may be required by some rulesets. Section 4.3.1 does not say what error is returned when a deviceDesc that does not contain one or both of these parameters is sent to the server, but the matching ruleset requires either parameter. I suspect the answer is straightforward, but I think it needs to be stated explicitly in 4.3.1. E.g., if the optional but required parameters are accepted in the initialization but rejected in the registration, it would be good to say so explicitly. In 4.4.2, I think that if none of the rulesets are accepted, the intent is that the database should return a REGISTRATION_RESP with the error element, and will not return a rulesetInfos list. However, the specification does not make an exception for this case when it says "A RulesetInfo list MUST be included" and "The list MUST contain at least one entry". 
I can't think of another valid interpretation, but you've stated a MUST, so you need to say that it doesn't apply in the case of the error. In 4.5: If some locations within a batch request are outside the regulatory domain supported by the Database, the Database MAY return an OK response with available spectrum for only the valid locations; otherwise, if all locations within a batch request are outside the regulatory domain, the Database MUST respond with an OUTSIDE_COVERAGE error. What should the database do if it doesn't follow the MAY? Should it return an OUTSIDE_COVERAGE error? It seems to me that this MAY is going to require some unclear heuristics on the side of the master device. Why isn't this a MUST? I think if this were a MUST, it would be clear how implementations should behave; by making it a MAY, there's a big gap that I think will lead to interoperability problems as different implementors make different choices about how to treat this situation.
Section 3 describes spectrum-usage messages as purely informational. Section 4.5 says that the database can require that such messages be sent. Which is it: is it sometimes required, or always purely informational? Or does "purely informational" just mean that the protocol provides no mechanism for tracking such notifications? It would be helpful if this were clearer. In 4.1, the database URI change spec seems incomplete. Is there a time when a database starts announcing its URI change after which it need no longer provide service on the old URI? I see Alissa asked a similar question, so I imagine you could address both questions together. In 4.1 and 5.17, the meaning described for UNSUPPORTED appears to be somewhat fluid. I think you intend for it to mean one specific thing, which might imply one of several other things. But you don't say the one thing that it means: instead you mention possible implications. It would be helpful to try to make this more explicit. I think Stephen noticed this same problem, but to be clear, it appears that section 4.5 says that when the master is sending a request on behalf of a slave, it sends its own location. But section 4.5.1 says that the master sends the slave's location. It appears that there are two parameters, one being the master location, which is required, and the other being the slave location, which is optional. However, this is never really stated explicitly; it might be helpful if it were. I have to confess that the use of "master" and "slave" throughout this document makes me quite uncomfortable in reading it. It would really be preferable to use terms that have less painful history associated with them, like "core" and "leaf" or "coordinator" and "supplicant." I realize this is a matter of taste, and I do not at all mean to suggest that there's anything wrong with the use of these terms, which are certainly common in the computing industry. This is just a comment, and whether you address it is entirely up to you. 
In 4.6.1, it would help to have text that explains what it means to validate a device. It's certainly not required for interoperability, but might be helpful for implementors of database servers. In 5.3, it's a little puzzling that antenna direction, radiation pattern, gain and polarization aren't defined, because they seem like properties that should have universal applicability, even if they are not required in all cases. Is this being left as future work, or is there some more subtle reason why these haven't been defined explicitly? In 5.11, power is expressed in dbm, which leads me to wonder if this is the product of the antenna gain and the input power to the output amplifier, or whether it's just the input power to the amplifier, or whether it's being left intentionally ambiguous.
Stephen asked a geolocation question about RFC 6953 as it was being processed to try to establish whether the location information supplied for a query needed to be the location of the querier or the location about which the query is being made. Going back to the text in 1.2 of 6953, it is pretty clear that the intent is to issue a query that relates to the whitespace at a location. Given this, I agree with Stephen that including location info in the INIT_REQ and REGISTRATION_REQ seems unnecessary. It seems to be being used as some form of authorisation check, and I don't see how that is safe or valid. But also I don't see what stops the sender from lying. Surely the important thing is about where the whitespace is available, not from where the requester operates? But even in 4.5.1 there is some confusion... location: The GeoLocation (Section 5.1) for the device requesting available spectrum. The location SHOULD be the current location of the device, but more precisely, the location of the radiation center of the device's antenna. When the request is made by the Master Device on its own behalf, the location is that of the Master Device and it is REQUIRED. When the request is made by the Master Device on behalf of a Slave Device, the location is that of the Slave Device and it is OPTIONAL (see also masterDeviceLocation). The location may be an anticipated position of the device to support mobile devices, but its use depends on the ruleset. If the location specifies a region, rather than a point, the Database MAY return an error with the UNIMPLEMENTED (Table 1) code, if it does not implement query by region. I don't see why you have SHOULD given the subsequent lowercase may. Surely you need "the location is the location about which the enquiry is being made." 
Reading all this and going back to 6953 I am not sure I understand the difference between having permission to operate on a frequency at a location (which seems to be granted by registration) and discovering which frequencies are available (which seems to be determined by AVAIL_SPECTRUM_RESP). Clearly I am missing something in my read through. If you think it is already covered, that's fine. But if not, perhaps you could add some text to explain things.
Hi, I've a few things to discuss, but I think only the first is possibly tricky... (1) 4.4.1 - why are location and deviceOwner required? The FCC may require those but why does the protocol? I don't see that those are needed for interop. Isn't the latter the right criterion for inclusion as a required field? I could see a reason for the location-from-which-I-want-to-use-WS but that's not what is described I think. The discuss here is both relating to the privacy issues with requiring exposure of identifying information but also relating to the criteria used to determine required vs. optional and how those map to interop vs. to current known sets of regulatory rules. (Same for 5.2 serialNumber, though there a dynamic/random choice may be needed instead.) (2) 4.4.1 - is the location here the location of the device or the location from which spectrum is to be used? I think you need to disambiguate those. I continue to think there should be a way to ask for spectrum in London, tomorrow; even whilst still in Dublin, today. 4.5.1 seems to imply this may be allowed sometimes, but I'm not clear if that works - how would the "tomorrow" value be sent? (3) Section 7: I think it'd be a good idea to either reference the UTA TLS BCP for ciphersuites etc or else (if you'd rather not have a dependency on another WG's I-D, which I'd understand) to specify some here. The MTIs from 5246 are looking pretty jaded right now. You may also want to mandate use of OCSP or even stapling - there are a few more TLS things you can do that help interop and to speed things up. I'm not trying to insist you do/document all of those things, but would like to chat about it for a bit. (4) Section 7: You are assuming some kind of PKI is used to authenticate DBs. Now that might work with the current Web PKI, except that then any public Web PKI CA can fake any DB but are regulators OK with that? 
Secondly, TLS requires that the DB nominate the acceptable CAs for client auth, so there's a bit of specification missing here which is maybe that a DB's description needs to include information about its acceptable trust anchors. I don't think you necessarily need to fix all of that in this document, but what is the plan here? (And maybe some of it ought to be fixed here, not sure.)
- write-up: it's a pity that coders haven't gotten together more openly and done interop, but I guess different businesses are different. - section 1, last para: I realise authorized devices is what the WG are interested in, but the protocol ought not require that, so the last sentence here is wrong - it surely should be: s/device is authorized to operate/device operates/ - Ruleset: I hope there's a NULL, meaning "no rules":-) - 4.4.1 - nothing stops a device lying about location, right? - 4.5 - the slave location vs. master location seems unclear to me. Can you clarify? - 4.5.1 - timestamp has to be UTC right? You only seem to indicate that via the "Z" in the timestamp format which I expect could be easily missed. Suggest you emphasise that. You should probably also say if truncated timestamps are ok, for example just to the minute granularity without specifying seconds. I assume that's not allowed? And lastly, please specify if the start (resp. end) of the second (or whatever) unit is when a device gains (resp. loses) spectrum. (Or add a global statement on timezones where you earlier said that identifiers are case sensitive by default.) Some of this is in 5.14, but I'm not sure if that's enough. (It could be.) - 5.2 - I don't get why you need X.520 here. - 5.5 - could a vCard value just be (the moral equivalent of) "Internet" or "I'm not telling"? - section 7: Saying the master device MUST implement server auth is confusing, since the master device is the TLS client, right? - Section 10: Under the privacy bullet you should also recognise that an authorized entity can be privacy invasive (e.g. selling contact information, sending all on to law enforcement without permission). - Section 10: Given diginotar and similar (incl. by nation states), having the master device send its identifying information in its first message means that simply saying "use TLS" is not enough. You need to say "TLS, assuming the PKI used is ok,..." or similar.
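The timestamp point above (UTC "Z" suffix, and whether truncated forms are allowed) is the kind of rule that only becomes testable once pinned down. The sketch below takes the strict reading that only full-precision UTC timestamps are valid; the format string and function name are assumptions, not from the draft.

```python
from datetime import datetime, timezone

# Hedged sketch of the strict reading of the timestamp question raised
# above: accept only full 'YYYY-MM-DDThh:mm:ssZ' UTC timestamps and
# reject truncated forms (e.g. minute granularity without seconds).
def parse_paws_timestamp(value):
    dt = datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ")  # raises ValueError otherwise
    return dt.replace(tzinfo=timezone.utc)
```

If the draft instead intends to allow truncation, the same approach works with a list of permitted format strings tried in order.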
""" [RFC Editor: In the Author's Addresses section, please list "iconectiv" as "iconectiv (formerly Telcordia Interconnection Solutions). One occurrence."] """ Just make the edit yourself? The document uses the terms "primary user" and "secondary user". It would be helpful to define them. "... the Master Device may verify with the Database that the Slave Device is valid" How is "valid" being used here? It seems a little constraining to describe this as a protocol between the Master Device and the database. Might this protocol also be used between a Slave Device and a Master Device? Section 5.1 seems to imply that a Geolocation object MUST NOT include both "point" and "region" components. Is that case? If so, that seems unfortunate; since region support is optional, it seems like it would be helpful for a client to be able to include a point as a fall-back.
Thanks for doing this work, the draft looks good and my discuss should be easy enough to address as I am just looking for some clarifying information that may be helpful if I didn't miss the answer somewhere else. Can clients query any database entries or is the interface restricted to the list of supported interactions? I assume the answer is that it is limited to the set of database interactions defined, but could not find any statement saying that in this draft or the prior requirements in RFC6953. Authentication is only a MAY in the Security Considerations Section, which raises another possible concern for me. Since clients can get back pretty much all of the defined datatypes (DeviceDescriptor is one example) and authentication is not required, there should be a discussion on the risks of revealing this information for both the privacy reasons Stephen and Alissa outlined as well as possible security concerns. I think this should be on a field basis in terms of sensitive elements where relevant. I could see how you might want/need the types of information gathered within an administrative domain or accessed by a restricted set of users, but revealing data like what is contained in deviceDescriptor (includes model) as well as sensitive fields in other classes (AntennaCharacteristics) seems like a risk as it could be used in targeted attacks if there are known vulnerabilities to those devices. The attacks could target specific regions at specific times to affect events or to be used as part of some larger attack (could include physical). This may sound crazy, but layered attacks are very real. Is there anything that prevents a client from fingerprinting? Perhaps recommendations at the field level would help implementors understand these risks (privacy & security) and then they may be more motivated to enable authenticated and encrypted access. 
This wouldn't be necessary for all fields, just the ones that could be used in attacks or reveal privacy related information. Implementers may take the optional field use more seriously or create options in application interfaces so that users are then aware of the risks with these fields and make different choices. Ideally, there would be a limited set of data returned based on role information so that devices or other clients only get what they need as opposed to what is available. I didn't see any mention of restrictions on who could access what (role based access), is that possible? I'm not sure if the primary & secondary users allow for this? Thanks in advance.
Vince handled all my comments during last call, and I'm happy with this version. Thanks, especially, for all the work that went into re-doing Section 6.
I support Alissa's discuss - clarifying what is in the location, privacy around why point is required, and other privacy concerns. Also - I note the 3 IPR claims for a, sorry, not particularly innovative (to my eyes) protocol - where the terms are RAND with reciprocity and RAND with possible royalty. Those look like they'd kill any open source implementation?
I'm glad this work is being done in the IETF, thanks for all your effort. A couple of my points below have overlaps with some of Stephen's points. = Shepherd write-up = "Yes, there were 2 IPR disclosures filed that reference this document. They were discussed in the WG, and nobody came forward to say that they'd like to change anything in the document because of the disclosures." But there are 3 IPR disclosures in the tracker, not 2. Were all three discussed in the WG? = Section 4.1 = "A Database MAY change its URI, but before it changes its URI, it MUST indicate so by including the URI of one or more alternate databases using DbUpdateSpec (Section 5.7) in its responses to devices." The behavior here seems ambiguous. Does the database need to send responses containing the updated URI to all devices it has ever communicated with, or all registered devices, before it can actually change the URI? Does the database even maintain such lists? If not, how many such responses does it need to send out before changing its URI, or how long does it need to wait? Just saying that sending the new URI needs to happen "before" the URI changes does not seem specific enough for devices to know whether the URI has changed, or for databases to know when they can disable an old URI or stop sending the DbUpdateSpec indication. = Section 4.5 = These two sentences seem to contradict each other: "The device identifier, capabilities, and characteristics communicated in the AVAIL_SPECTRUM_REQ message MUST be those of the Slave Device, but the location MUST be that of the Master Device." "When the request is made by the Master Device on behalf of a Slave Device, the location is that of the Slave Device and it is OPTIONAL (see also masterDeviceLocation)." Perhaps this can be solved by making the reference to "the location" in the first sentence more specific -- the masterDeviceLocation -- but I'm not really sure from reading the text. 
Then later in the section, I started getting even more confused by this: "When the request is made by the Master Device on behalf of a Slave Device, the location is that of the Slave Device and it is OPTIONAL (see also masterDeviceLocation). ... masterDeviceLocation: When the request is made by the Master Device on behalf of a Slave Device, the Master Device MAY provide its own GeoLocation (Section 5.1)." Does this mean it's acceptable for a Master device acting on behalf of a Slave device to send neither the Slave device location (in the location parameter) nor the Master device location (in the masterDeviceLocation parameter)? I believe that the current text allows this. If so, why is location required in the registration step (and in batch available spectrum requests) but not in the available spectrum step? Seems like it should be the other way around -- that a device could register without specifying its location, but not request available spectrum without it. If the above interpretation was not intended, something needs to be fixed in one part or the other of the above text to indicate that either the Master device location or the Slave device location (or both) must be present in the request. The same issue arises in Section 4.5.5. = Section 5.1 = I'd like to discuss why the single point location format needs to be supported here. Is it really the case that a portion of whitespace spectrum will ever be available only at a single point, as opposed to a region? If not, it seems like sending a point (and, moreover, allowing region to be unsupported but not point) divulges more precise information about the requesting device than is ever actually necessary to fulfill the goals of this protocol. Do regulators require a single point? Why? 
= Section 5.2 = I'd like to discuss why the device serial number needs to be included in the device descriptor, rather than some (perhaps persistent) randomly generated device identifier that is used only in the context of this protocol (which would better protect the privacy of the user of the device, since the whitespaces database administrator wouldn't be able to correlate the device's spectrum requests with other activities linked to the serial number). It's not really clear why serial number is collected since both this document and RFC 6953 note the protocol does not defend against abuse or mis-use of spectrum. I'm asking the above two questions in light of requirement P.7 from RFC 6953, "The PAWS protocol SHOULD support privacy-sensitive handling of device-provided data where such protection is feasible, allowed, and desired." A separate interesting question that does not seem to be addressed anywhere in the draft is whether a device can be fingerprinted by the database operator by virtue of the collection of elements it sends (rulesetIds, manufacturer, model, antenna characteristics, device capabilities, etc.) even if it doesn't send a serial number or device owner information that uniquely identify it. That seems worth discussion in Section 10.
= Shepherd write-up= "An in-depth review by a JSON expert might be useful." Did that happen? = Section 1 = "It opens the door for innovations in spectrum management that can incorporate a variety of parameters, including user location and time. In the future, it also can include other parameters, such as user priority, time, signal type and power, spectrum supply and demand, payment or micro-auction bidding, and more." Time seems to be listed both as a current parameter and a future one, which is confusing. = Section 4.4 = "FCC rules, for example, require that a 'Fixed Device' register its owner and operator contact information, its device identifier, its location, and its antenna height." It would be nice to have a citation for the rules referenced here. = Section 5.1 = Feel free to ignore this if it's completely misguided, but does altitude really not matter? Are we sure this protocol won't be re-used for devices on airplanes trying to find available spectrum? (I note that in RFC 6953, requirement D.1 specifies that the data model must support "the height and its uncertainty" -- I have no idea what "the height" means or if it is related to altitude.) = Section 10 = I agree with Stephen that the database operator should be considered as a potential adversary from the standpoint of potentially being able to create a fine-grained database that tracks the locations and spectrum use patterns of individual devices. That data could certainly be abused.
I agree that this shouldn't be on the Standards Track.
This is a very fine document that does what needs to be done. Please don't take any further comments as any sort of criticism of that. Please note and consider the substantive comments after the rant in the following paragraph. As we've noted before, the IESG itself doesn't know what to do with this sort of thing, but I think this is a perfect example of a document that "updates" a Standards Track document, but should not, itself, be Standards Track. Informational is the correct status of this document, and I urge the IESG to make it so. I see no reason to *require* all updates to Standards Track documents to be Standards Track, and this document changes nothing that would indicate that status. If it defined new values, it probably should be Standards Track. But as it just creates the registry and registers what was already defined, it should not. Now, substantive comments -- not blocking (note the "Yes" ballot), but please consider making these changes: The allocation policy for values 0x00 to 0xFA is IETF Review. Values 0xFB to 0xFE are experimental and are not to be assigned. 0xFF is reserved. 1. I think you need a citation to RFC 5226 here, and a normative reference. 2. FB to FE are not to be assigned; what about FF? I suggest "0xFF is reserved for possible extensibility, and may only be assigned via Standards Action [RFC5226]." 3. For the values you register from 6514, you give the reference as "[RFC 6514] [RFC-to-be]". I suggest just "[RFC 6514]", as this RFC says nothing substantive that would be useful to someone looking up what, say, 0x03 means. 4. I don't think Section 2 has any value, and I would simply remove it. Section 4 says all that's needed.
I agree with Barry's point that this document does not need to be Standards Track.
One major note and a couple of minor notes: (0) In line 7 of the pseudocode, s/Key/Name/ (1) Is there any concern about confusion due to the fact that patches are syntactically indistinguishable from JSON? Presumably this is mitigated by the use of the media type, but it might be worth a mention, e.g., in the Security Considerations. (2) Thanks for the note about the fact that this document assumes that an attribute being absent and an attribute being null are equivalent. This might be worth reprising in the Security Considerations, in case there are usages of JSON that use this distinction to express security-relevant information. For example, if presence of an element is used to signal support for a feature, but null is allowed as a value.
I didn't get why, if Patch is not an object, MergePatch(foo, Patch) returns Patch.
No objection from my side; my checks were solely related to any transport-layer issues.
I am certainly willing to be talked out of this, but I am concerned about the implementability and would like to DISCUSS it a bit. In section 2: To apply the merge patch document to a target resource, the system realizes the effect of the following function, described in pseudocode. For this description, the function is called MergePatch, and it takes two arguments: the target resource document and the merge patch document. The Target argument can be any JSON value, or undefined. The Patch argument can be any JSON value. It took me repeated reading of the pseudocode (and may I mention that I *hate* languages which rely on indentation ;-) ) to figure out that: - If the Patch is not an object, the result *is* the Patch - The Patch can't act on the internals of an array; it can only replace the whole thing - The Patch cannot replace objects with new objects. Can't we *please* have a textual description of this protocol in addition to a (recursive!) pseudocode function? I am not convinced that an implementer will get their implementation right just from the pseudocode.
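To check that reading, here is a direct, non-normative Python transcription of the MergePatch pseudocode as we understand it; it exhibits all three of the behaviors inferred above, and also makes the earlier question concrete (a non-object Patch simply replaces the Target):

```python
def merge_patch(target, patch):
    """Non-normative sketch of the MergePatch pseudocode.

    JSON null is represented as Python None. As the draft notes,
    a null member in the patch means "remove this member", so a
    patch cannot set a value *to* null.
    """
    if isinstance(patch, dict):
        if not isinstance(target, dict):
            target = {}  # non-object targets are replaced by an empty object
        for name, value in patch.items():
            if value is None:
                target.pop(name, None)  # null removes the member if present
            else:
                target[name] = merge_patch(target.get(name), value)
        return target
    else:
        # A non-object patch (array, string, number, boolean, null)
        # replaces the target wholesale; arrays cannot be edited in place.
        return patch
```

For example, `merge_patch({"a": [1, 2], "b": "x"}, {"a": [3], "b": None})` yields `{"a": [3]}`: the array is replaced as a whole, and the null member deletes "b".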
Thanks for the discussion on checking the integrity of received patches, it was helpful.
Thanks for bringing this forward as Experimental. Thanks also for Section 7 which is really helpful.
general: would this work with CGAs? I guess, only for a short while, but that's probably ok for consenting peers, right? Presumably not worth a mention. possible new security consideration: if an application supports TFO and might include sensitive data in the SYN (think cookies) then this seems to provide a new and slightly more efficient way to steal that sensitive data. I don't think this is worth noting to be honest, but raise it just in case you do. I think the bad actor here has to write less code maybe compared to the situation without TFO. 6.3.4 - I wondered how TFO would affect page load times if all links are accessed via TLS. Any info on that? The author affiliations on page 1 don't quite match those at the end. The secdir review  noted some nits and was ack'd but I think the nits are maybe still there (one anyway).  https://www.ietf.org/mail-archive/web/secdir/current/msg04708.html
It may be worth noting for completeness that a server could implement a scheme where cookie values are entirely random, with state stored on the server. And also worth noting the (many) reasons that this is a bad idea. In section 4.1.2, I would expand the second property to note that this implies that the cookie must be generated with a cryptographic operation using a secret key specific to the server, such as encryption, MAC, or signature.
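As a sketch of the keyed construction suggested above — a MAC over the client's IP address under a server-specific secret — the following illustrates the idea; the function names, the 8-byte truncation, and the choice of HMAC-SHA-256 are ours, not the draft's:

```python
import hmac
import hashlib
import ipaddress

def make_cookie(server_secret: bytes, client_ip: str, length: int = 8) -> bytes:
    """Derive a cookie as a truncated MAC over the client IP (illustrative)."""
    ip_bytes = ipaddress.ip_address(client_ip).packed
    tag = hmac.new(server_secret, ip_bytes, hashlib.sha256).digest()
    return tag[:length]

def check_cookie(server_secret: bytes, client_ip: str, cookie: bytes) -> bool:
    """Validate statelessly by recomputing; constant-time compare."""
    expected = make_cookie(server_secret, client_ip, len(cookie))
    return hmac.compare_digest(cookie, expected)
```

The point of the construction is that the server holds no per-client state: a cookie presented from a different source address fails validation, which is what ties the cookie to the client without a server-side table. The fully random-cookie-plus-server-state scheme mentioned above trades that statelessness away.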
I think this is a Truly Fine thing to experiment with, and I'm very eager to see more deployment of it. I have a number of comments about the document. None of these are blocking, but some of them are quite significant and I urge you to consider them and to chat with me about them if it will help. Thanks. To the document shepherd: thanks especially for the detailed answer to question 9 in the shepherd writeup. -- Section 4.1.1 -- You talk about two options (Fast Open Cookie and Fast Open Cookie Request), but they're really the same option, so that's a little confusing. To make it more confusing, the document at least once refers just to a "Fast Open option". I suggest that you consistently refer to this as the "Fast Open Option" throughout the document, and say that a Fast Open request is made using a Fast Open Option with a length of 2, and a Fast Open cookie is sent using a Fast Open Option with a length greater than 2. I think this will make things much more consistent, and clearer. -- Section 4.1.2 -- The server is in charge of cookie generation and authentication. The cookie SHOULD be a message authentication code tag with the following properties: Why "SHOULD"? What might be a reason for a server not to do this? -- Section 4.1.3 -- Please expand "MSS" on first use. The "RECOMMEND" in the second paragraph doesn't appear to be a protocol requirement, and should probably be lower case, not a 2119 key word. In particular it's known an IPv4 receiver advertised MSS less than 536 bytes would result in transmission of an unexpected large segment. I can't parse that. Can you rephrase, please? In general, this section has quite a number of English problems that would benefit from a quick editing pass by a native speaker. -- Section 4.2.1 -- Is the double "SHOULD" in bullet 2 really what you want? It seems to me that what this means to say, 2119-wise, is that when the server responds with a SYN-ACK, the SYN-ACK SHOULD [be set up as specified].
The last two paragraphs seem out of place here. I suggest putting them into a new Section 4.2.1.1, clearly labelled as an alternative solution that could be explored if problems develop with this one. Or perhaps even put this into the "related work" section (currently Section 8, but see my comment below). -- Section 4.2.2 -- 5. Send the SYN-ACK packet. The packet MAY include a Fast Open Option. Doesn't that MAY conflict with the SHOULD that I complain about above, in Section 4.2.1 ? -- Section 6 -- I think the "i.e." here is not correct, and that you mean "e.g." (This is one reason I recommend that people avoid using these Latin abbreviations.) -- Section 6.3.1 -- Although not all GET requests are idem-potent Really? According to RFC 7231, Sections 4.2.2 and 4.2.1, GET is an idempotent method. RFC 7231, Section 4.2.2: Of the request methods defined by this specification, PUT, DELETE, and safe request methods are idempotent. RFC 7231, Section 4.2.1: Of the request methods defined by this specification, the GET, HEAD, OPTIONS, and TRACE methods are defined to be safe. And "idempotent" is one word, not hyphenated; please fix that here and in Section 6.3.3. -- Section 6.3.2 -- What does this section have to do with this spec? -- Section 7.1 -- Further study is required to evaluate the performance impact of these malicious drop behaviors. It may work against this protocol, but is there actually evidence that there's malicious intent here? If not, I would avoid describing it as "malicious". Dropping the word (if not the packets) seems like the right thing. Another interesting study is the (loss of) TFO performance benefit behind certain carrier-grade NAT. Why the parentheses? Shouldn't it just be "the loss of TFO performance benefit", without the parens? -- Section 7.2 -- The implementation can provide an experimental feature to allow zero length, or null, cookies as opposed to the minimum 4 bytes cookies.
Thus the server may return a null cookie and the client will send data in SYN with it subsequently. Haven't you cut yourself off here by using a zero-length cookie to mean a cookie request? How would this work? -- Section 8 -- A very small point: I would make this an Appendix, rather than a mainline section.
Please do consider Barry's questions - I'm not repeating them, but I had several of them myself. For Martin: this specification points out that TFO can be used to speed up TLS (piggybacking on the SYN exchange). Is it obvious whether TCPINC could also use TFO?
I agree with Barry that this is a useful experimental document. I have a few things for the authors/WG to consider as a part of this experimentation. 1. How does this functionality work in the presence of an anycast IP address for the server? 2. Are there functional changes needed to work with load balancers?
Glad this has been documented.
I have no strong objection to the publication of this document although there is to me a faint whiff of what a sceptic might call snake oil. Some of that arises from an imbalance of language ("advantages" against "caveats" rather than "opportunities" against "disadvantages") and some of it could have been dispelled by answering the shepherd write-up question on implementation by describing the existing deployments that use this technique. Anyway, here are two editorial issues for you to consider... Are the last two paragraphs of 2.2 in the right section? They do not appear to describe "advantages" of the proposed scheme. The text "using only link-local addresses on infrastructure links" is lumpy to read, but does convey exactly what you mean. There is a temptation to read it as "using link-local addresses only on infrastructure links" and you will need to watch the RFC Editor to make sure that bug doesn't creep in. And you will need to fix Section 3 where you have Using LLAs only on infrastructure links reduces the attack surface of a router
nitty nits only: section 1: "attack horizon" isn't the usual phrase, "attack surface" is I think more common (and is used later for this). section 1: "The deployment of this technique is appropriate where it is found to be necessary" seems to be a tautology. 2.4: I think uRPF and PMTUd are used without expansion. (And why the small 'd' in PMTUd, don't recall that before.)
During WG and IETF last call the technical correctness of the document has been reviewed, however debate exists as to whether to recommend this technique. The deployment of this technique is appropriate where it is found to be necessary. Wow. The above (especially the second sentence), along with the shepherd writeup, does make one wonder whether the WG really wanted to publish this document. I'm not about to stand in the way, but to say that the "technique is appropriate where it is found to be necessary" is not a very meaningful claim.
Overall, I think this is a well written draft and think the security benefits could be very positive. In section 2.2, could you move up the reference to RFC6752 and then you can avoid the last sentence in this section. I think it makes it cleaner and leads you right to the detailed description for iACL. Suggest change from: "This may ease protection measures, such as infrastructure access control lists (iACL)." To: "This may ease protection measures, such as infrastructure access control lists (iACL). [RFC6752]" I agree with the point made in this paragraph and think another advantage is that you can define ACLs for the pass through traffic at this point that is 'invisible' for direct attacks. Some firewalls operate in what they call bridge mode for that purpose. Please see the recommendation in the SecDir review to include references to security considerations sections in previously mentioned RFCs in the draft. Here's a link in case you didn't see it. https://www.ietf.org/mail-archive/web/secdir/current/msg04709.html
Thank you for documenting what many folk (including me) partially understand!
I believe there is an additional issue that should be discussed in this document, but I want to get the authors' opinions on it. The document makes the claim that using LLAs between infrastructure devices reduces the attack surface of the network. However, I think there is an adverse side effect that needs to be discussed. If every device in a network follows this advice and only uses a globally-scoped address on its loopback interface, it seems easier to map the topology of the network. Since all responses to pings/traceroutes from a single router will have the same global address, an attacker can map adjacent devices by probing from different points outside the network. In the non-LLA case, responses to pings/traceroutes typically use the interface addresses, which would vary in the same type of probing. By reducing the number of addresses that can be used in ping/traceroute responses, the LLA-only network is more vulnerable to network mapping.
- It would be useful if the document actually defined "infrastructure link" as a network link with no endpoints/hosts.
= Section 2.3 = If it seems reasonable, might it be possible to say "LLAs have usually been EUI-64 based" rather than "LLAs are usually EUI-64 based" given that there is some movement away from embedding hardware addresses in IIDs (e.g., draft-ietf-6man-default-iids)?
- 2.4: defining e2e security as just meaning data integrity without confidentiality is unusual enough that it should probably be noted. Separately I'm surprised that you don't include some form of origin authentication in your concept of e2e security - why is that? - 3.2: possible threat - ensure specific client(s) are offset by X (different X for each set you need to track) in order to spot (or reduce search space for) those clients in other protocols when timestamps are sent. Worth adding? I'm not sure if a mechanism meeting the 5.9 requirement would or would not be sure to mitigate this. (You could also advise protocols emitting timing information to slightly perturb any time signals they emit, to disguise any small but detectable offset from the wall-clock time.) - 3.2: another possible threat: if a mobile node sends time protocol requests at a specific frequency (e.g. every N seconds, at 283 ms past the second) then that can be used to identify (or reduce the search space for) the mobile node irrespective of crypto or address changes. (A similar thing has been a real concern in vehicular networks btw. with the basic safety message). Those are probably not that big a deal here and the mitigation is probably just to tell implementers to not do that, which is pretty simple:-) - 3.2 - Similarly, if a node sends out complex time protocol messages those might allow fingerprinting of the node regardless of other changes. For example, it could be easy to track a Brazilian node that's in Europe if it sends queries out saying it mostly trusts something in .br. Not sure if that's as easy to deal with, perhaps the requirement there is just that protocol developers think about it. (This relates to Kathleen's discuss also probably.) - 5.6.1: that requirement is stated as an operational requirement, don't you need a protocol requirement here i.e. to say that it MUST be possible to ensure keys are fresh?
- 5.10 - I wonder if there's not a case to be made for an opportunistic mode, e.g. where one learns that some master can be authenticated and thereafter requires that. In this document I think such an opportunistic mode would maybe be a MAY - the WG can think later if they figure that'd help enough to be worthwhile. The reason to raise it now is thus so as to not rule it out for later. I think this is different from, and possibly much better than the hybrid thing you have now and ought be much more deployable than a "secure" mode (as in 5.10.1, and that's a bad term for that section/mode btw). - 7.5: Kerberos is notoriously more time-sensitive than PKI stuff - why not mention it?
Thanks for addressing the SecDir review. I just have a question on the Confidentiality (5.8) part of the Security Considerations section, it says: "Requirement Level The requirement level of this requirement is 'MAY' since it does not prevent severe threats, as discussed below." That reads a bit oddly to me and I am wondering if there is a typo, maybe presents instead of prevents?
I think that IEEE1588 and NTPv4 are normative references.
I think Barry is likely correct about the time protocols themselves being normative references. This document was a pleasure to review - very clear and well-organized for One Unskilled In The Art ...
Neither 3.2.7 nor 3.2.2, 3.2.4, or 5.3 describe actual DoS attacks using the NTP protocol. Those do involve spoofing the client source address (of the victim) but rely on nothing other than an asymmetry in the size of the response relative to the query.
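The asymmetry referred to above is usually quantified as a bandwidth amplification factor: the attacker spoofs the victim's source address on a small query, and the victim receives a much larger response. The byte counts below are purely hypothetical, chosen only to illustrate the arithmetic:

```python
# Illustrative amplification arithmetic for a reflection attack that
# depends only on response/query size asymmetry. Sizes are invented
# for illustration; real protocols and queries vary widely.

query_bytes = 64          # small spoofed query sent by the attacker (assumed)
response_bytes = 4800     # large response delivered to the victim (assumed)

amplification_factor = response_bytes / query_bytes
print(amplification_factor)  # 75.0
```

Every spoofed byte the attacker sends thus costs the victim `amplification_factor` bytes, which is why limiting response size (or requiring round trips before large responses) is the usual mitigation.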
The abstract on this document is about three paragraphs too long. Is there any way to shorten it?
I have no objection to the publication of this document, but I don't think it is appropriate to say (as in 3.1.1) what RFC 6304 does. This document entirely replaces 6304. It would be fine (desirable) to have a section somewhere (probably in App A) that captures the changes from 6304, but this document should otherwise simply describe AS112 Nameserver Operations so that there is no need to feel dependent on the old RFC.
Seems like a fine document. A few comments: 1. This document seems like a fine set of operational guidelines that have community consensus. Why isn't it being published as a BCP? Seems like AS112 in general should get its own BCP number and these documents ought to be published under it. Yeah, I know that 6304 was Informational, but we don't need to repeat mistakes, eh? (Perhaps we need a new designation: Operational Practices and Guidelines.) 2. Logging is mentioned in one of the configuration examples, but it sure would be nice to have a few sentences on it. I could see saying something like, "Keeping a log of entities that are improperly querying would allow for the wagging finger of shame to be shaken in front of bad implementers. You probably only want a single log entry per bad actor; they will send you lots of queries, and no need to have huge logs." Etc. 3. "The IANA is directed…" Pushy, aren't we? :-) I generally say, "IANA is requested…" or the like. No, it doesn't really make a difference.
I think this draft is a good idea and it makes perfect sense to blackhole traffic like this. I was glad to see the security consideration for leaking host information. I didn't see anywhere that such queries are logged and think a statement that they are not logged would be helpful (assuming that is the case). Keeping such data in an aggregated spot would only amplify the concern. If I missed it, maybe repeating that point in the security considerations section would be helpful. Thank you.
I have no objection to the publication of this document, but time to stop "proposing" things, and start "defining" or "describing". This document proposes a more flexibl approach for sinking queries And other examples. Oh, and s/flexibl/flexible/ --- In Section 2 Some or all of the existing AS112 nodes should be extended to support these new nameserver addresses, and to host the EMPTY.AS112.ARPA zone. I wondered about "should" in this sentence. Is there an objective to this, or a philosophical reason, or...?
I have one possibly dumb question. I don't think any change is likely needed, but I'd like to check. Let's imagine a horrible thing happens and some company decide to fire one of their DNS ops people. On the way out the door, that person installs a DNAME RR for (some of) the company's addresses that sends reverse lookups of those addresses to AS112. Is that something new either in terms of being harder to detect or to fix? Or could our now ex-employee also have sent other queries (e.g. forward lookups) to AS112 as well and would that be easily spotted and fixed? If none of this is really new or interesting or harder to detect, (as I suspect), then we're done. But like I said, I wanted to check in case there actually is a new security consideration which doesn't seem that unlikely since we're adding another level of indirection and those often do add a security wrinkle or two. This also relates to the secdir review  I guess, though the reviewer was happy with the responses he got thus increasing the probability that my question is dumb:-)  https://www.ietf.org/mail-archive/web/secdir/current/msg04956.html
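For concreteness, the kind of record imagined in the scenario above would look something like the following zone-file fragment; the owner name is hypothetical (the IPv4 documentation prefix), while the EMPTY.AS112.ARPA target is the one defined by the draft:

```
; Hypothetical: a DNAME installed in a company's reverse zone that
; redirects all reverse lookups for 192.0.2.0/24 into the AS112 sink.
; Any PTR query under this owner is rewritten to a name under
; EMPTY.AS112.ARPA and answered (emptily) by AS112 nodes.
2.0.192.in-addr.arpa.  IN  DNAME  EMPTY.AS112.ARPA.
```

A single record like this silently diverts an entire /24's worth of reverse lookups, which is what makes the detectability question above worth asking.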
A strictly procedural point that I haven't seen anyone else comment on. Section 7 says that the address delegation will require IAB approval. Is the IESG tracking that, or is that an IANA thing to track, or is it done, or what?
In the document writeup, the shepherd suggests that there was some ambivalence regarding the status of this document, and suggested that perhaps Experimental would be appropriate, even though they landed on Informational. I think Experimental is exactly right, with an eye toward moving this to BCP eventually. (See my comments on 6304bis.)
Once my questions in RFC6304bis are addressed, I think I'm good with this draft. Thanks for your work on the draft!
Some very minor nits: In this text in the abstract: The AS112 project does not accommodate the addition and removal of DNS zones elegantly. Since additional zones of definitively local significance are known to exist, this presents a problem. This document describes modifications to the deployment and use of AS112 infrastructure that will allow zones to be added and dropped much more easily. when this draft is approved, the first sentence won’t be true, will it? Could you think about how you want the RFC to say this? The corresponding text in the Introduction may also benefit from the same thinking: AS112 nameserver operators are only loosely-coordinated, and hence adding support for a new zone (or, correspondingly, removing support for a zone that is no longer delegated to the AS112 nameservers) is difficult to accomplish with accuracy; testing AS112 nameservers remotely to see whether they are configured to answer authoritatively for a particular zone is similarly challenging since AS112 nodes are distributed using anycast [RFC4786]. Nit: s/flexibl /flexible / In this text: 6. DNAME Deployment Considerations DNAME was specified a significant time following the original ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ implementations of [RFC1035], and hence universal deployment cannot be expected. This didn’t parse well for me. Perhaps “was specified years after the original implementations of”? In this text: 9. Security Considerations This document presents no known additional security concerns to the Internet. For security considerations relating to AS112 service in general, see [RFC6304bis]. is the first paragraph saying “no additional considerations beyond http://tools.ietf.org/html/draft-ietf-dnsop-rfc6304bis-00#section-8” or "no additional considerations beyond [RFC6304]"? I’m not sure how to read it.
From Scott Bradner's opsdir review: in the 2nd paragraph of section 3.3, remove the comment about the IPv6 testing environment and the note about the IANA request; in the 2nd paragraph of section 4.2, add a forward reference to section 4.4, Routing Software.
I have no objection to the publication of this document. Here are two editorial notes for you to consider. --- The use of upper case "MUST" in a pseudo-quote from RFC5766 seems unnecessary. --- A little work on abbreviations is needed... I don't believe TURN or STUN are "well-known abbreviations" according to http://www.rfc-editor.org/rfc-style-guide/abbrev.expansion.txt so you should expand them in: - document title - Abstract - main document on first use (you do this for TURN, but note that the referenced 5766 uses a different expansion from the one you use!) Is there some inconsistency between WebRTC and RTCWEB? ICE should be expanded on first use. You have expanded it two paragraphs later.
A bunch of comments, but all non-blocking or really less. That is, feel free to ignore these unless you think they make the doc better. I'd rather this got out there and we moved on to solutions more quickly compared to waiting for the perfect version of this. section 1: 1st bullet - where is it defined how to use a TURN server for privacy? And I don't actually see how that'd work, unless the caller trusts the TURN server to do it. Might well be my ignorance of TURN though. section 4, point 7: malicious JS is not required, any browser can view the source, including the STUN pwd. section 4: A possible 8th point is that a STUN/TURN server for WebRTC could have to work for many many browsers at which point assuming any secret known to all is a secret is just silly. section 4: Another possible new point is that STUN auth was meant for the WebRTC equivalent of the browser but in fact the web site (via the site's JS) is in control and hence it really ought be the web site that authenticates to the STUN server and not (supposedly) the WebRTC browser. section 4: Yet another - if a WebRTC server sets "per-user" STUN passwords via some form of co-ordination with some STUN/TURN servers, then that will assist in tracking individual user behaviour and so may be a privacy concern. section 4 (or 1): Current STUN/TURN auth essentially forces some centralisation into WebRTC where that ought not be needed. For example, models based on a leap-of-faith could actually be good enough to prevent widespread abuse of resources.
I am interested to see the responses to a few of Stephen's questions and don't have any more to add. Thanks for your work on the draft.
Martin has a good catch about updates to 7252 and I support his Discuss. --- It would have been nice to have heard about implementations of CoAP that do group communication and whether they consider themselves consistent with this I-D.
Given the complete lack of security mechanisms, I would be happier if this were Experimental, to be promoted to Standards Track once the group keying issues are addressed.
Updated: I have objections about this draft being published in its current form and with its current status. The draft updates protocol behavior of RFC 7252 in at least two Sections: 2.5 and also 2.7, but it never says so and, formally speaking, is also not requesting any update of RFC 7252. Section 2.5 updates the handling of tokens for the multicast usage. Section 2.7 adds a completely new functionality to the protocol. This has also been nicely pointed out by the GENART reviewer: http://www.ietf.org/mail-archive/web/core/current/msg05535.html Probably the document is heading for standards track instead of informational? Further, I do see multiple places where the RFC 2119 language is inappropriate: 2.2. Group Definition and Naming [...] All CoAP server nodes SHOULD join the "All CoAP Nodes" multicast group ([RFC7252], Section 12.8) by default to enable CoAP discovery. For IPv4, the address is 224.0.1.187 and for IPv6 a server node joins at least both the link-local scoped address FF02::FD and the site-local scoped address FF05::FD. IPv6 addresses of other scopes MAY be enabled. Since this is (hopefully) just repeating what RFC 7252 says, it is just good to say: "All CoAP server nodes should/are supposed to join the "All CoAP Nodes" multicast" Also the "MAY be" is a "can be" in reality. 2.7.2. Membership Configuration RESTful Interface The "an OPTIONAL CoAP membership configuration RESTful" is misplaced, as this is an informational draft, but this goes back to my general DISCUSS point above. Further: To access this interface a client MUST use unicast CoAP methods (GET/PUT/POST/DELETE). This interface is a [...] This MUST is not correct here. A sentence saying that only the unicast methods are useful in this respect is sufficient.
And here the contradiction is happening, as I do not see a MUST for DTLS in RFC 7252, and this text below also does not add anything useful to the discussion of what is needed to achieve interoperability, i.e., a MUST is inappropriate in this place: Also, a form of authorization (making use of DTLS-secured CoAP [RFC7252]) MUST be used such that only authorized controllers are allowed by an endpoint to configure its group membership.
Thank you for writing this document on this important topic. It is indeed something that many users of CoAP technology are wondering about. For context, the comments below are based on the in-depth Gen-ART review for this document, by Ben Campbell - thank you! I have also read the document myself. I have also implemented multicast-based functionality for CoAP devices. The document contains a lot of very useful material. I do agree though with Ben and my esteemed Area Director colleagues who have raised Discusses: the document would be better classified as Experimental at this stage. The re-classification though is something that the IESG can easily do. But Martin and Ben, for instance, have provided additional useful suggestions on how to change the text itself; some changes are also necessary in my opinion. A few more detailed comments. These are provided as-is, in the hopes that they are useful. At this moment I have no specific Discuss-level requirements for the draft, mainly because Brian and Martin already hold Discusses that I would have otherwise held. all.bldg6.example.com "all nodes in building 6" all.west.bldg6.example.com "all nodes in west wing, I have found that it is often useful to separate communications addressing and grouping from conceptual or application-specific grouping. The gathering of nodes in a group - and the division of a network into subnets - should be done based on communications convenience and efficiency, but it should not necessarily dictate all application-level groups that may be needed. To do so might in some cases lead to inflexibility and undue constraints on the network design. It may not be convenient to put all nodes in building 6 in one group, for instance, for various reasons. 
I would like to advocate a two-level approach, multicast for efficient reachability, but the application decisions are still based on information about the resources (such as location), rather than who answered a particular query on this multicast address. This also makes it easy to perform queries of any complexity in a reasonable manner, e.g., "all sensors that have a reading higher than 27C". The non-idempotent CoAP method, POST, may only be used for group communication if the resource being POSTed to has been designed to cope with the unreliable and lossy nature of IP multicast. This is certainly possible, and in my experience, very useful. The document would be most helpful if it could provide more guidance regarding when a design is safe for use of POST. In the third case, typical in scenarios such as building control, a dynamic commissioning tool determines to which group a sensor or actuator node belongs, and writes this information to the node, which Please make sure the document recognises the possibility that a device may simultaneously belong to multiple groups. Also, a form of authorization (making use of DTLS-secured CoAP [RFC7252]) MUST be used such that only authorized controllers are allowed by an endpoint to configure its group membership. Is this "MUST use a form, e.g., DTLS" or "MUST use DTLS, a form"? I'd advocate the former... o A server should not accept an IP multicast request that cannot be "authenticated" in some way (cryptographically or by some multicast boundary limiting the potential sources) [RFC7252]. See Section 5.3 for examples of multicast boundary limiting methods. It is difficult for a server to know what boundary limitations may be in place in other devices around it. 2.9. Congestion Control This section talks mainly about what the servers should do. There is only one rule about clients. 
Perhaps some additional guidance for the frequency that clients can send multicast requests would be advisable, unless it is already specified somewhere else. 3. Use Cases and Corresponding Protocol Flows 3.1. Introduction I would have thought discovery of devices in the other direction, i.e., a GET /.well-known to all-coap-nodes would have been an interesting use case as well. 5. Security Considerations I'm looking in particular at Sections 5.1 through 5.3. I'd like to say that I have had very positive experiences with using group communication with data object security, which is quite a lot better suited for group communication than transport layer solutions. No individual channels have to be established with the many group members. Messages can be signed with the authority of the sender rather than tying the security to the receiver. Forwarding through intermediaries retains security properties. And so on. Perhaps this alternative should be mentioned. Indeed, other than for low-security applications such as discovering network components, it is difficult to imagine completely unsecured operation, particularly considering that many if not most networks have a multitude of different devices in them, and it is not in my opinion a workable long-term solution to assume a firewall-based security model. 8. References The group communication draft does not mention the observe draft, and the observe draft does not mention multicast. Are there any feature interaction issues that you might want to cover?
I agree with others: I sure would like to see this be Experimental, see some implementation experience, and, when security issues are sussed out, see it moved onto the Standards Track. A few small comments; nothing earth-shattering: 22.214.171.124/126.96.36.199/188.8.131.52/184.108.40.206: ...the endpoint MUST effect registration...as soon as possible. "MUST" do something "as soon as possible" does not seem like a testable requirement. What purpose is it serving? 2.8: These normative behaviors and guidelines look like they could use 2119 keywords. In particular, quite a few of those "should not"s look like they are meant as "SHOULD NOT"s. They obviously don't need to be, but I was wondering if there was something I was missing.
Overall, the draft is well written and although security is not present, the threats and proposed future solution discussion is good. Section 5.4 is specific to pervasive monitoring, but I think monitoring is better as it casts a wider net and could include pervasive monitoring concerns, but other concerns as well related to monitoring. Monitoring could be targeted to observe specific organizations, people, or devices, that could also be used as part of a targeted attack. Monitoring could also lead to privacy concerns if patterns of behavior are observed for individuals. Thanks.
I support the comments by others on the document not really being informational. The lack of security controls is an issue, experimental would be good until it is resolved as there is a lot of work to be done in this space and it is active.
I share the concerns Martin listed in his Discuss, but that conversation is already underway. I saw that Martin is questioning whether this document should be Informational, but 1.2. Scope While [RFC7252] supports various modes of DTLS based security for CoAP over unicast IP, it does not specify any security modes for CoAP over IP multicast. That is, [RFC7252] assumes that CoAP over IP multicast is not encrypted, nor authenticated, nor access controlled. This document assumes the same security model (see Section 5.1). However, there are several promising security solutions being developed that should be considered in the future for protecting CoAP group communications (see Section 5.3.3). would make me uneasy about publishing it as standards-track in 2014. If this specification was Experimental, I'd feel better, but as the specification itself correctly points out: 5.4. Pervasive Monitoring Considerations A key additional threat consideration for group communication is pointed to by [RFC7258] which warns of the dangers of pervasive monitoring. CoAP group communication which is built on top of IP multicast should pay particular heed to these dangers. This is because IP multicast is easier to intercept (e.g., and to secretly record) compared to unicast traffic. Also, CoAP traffic is meant for the Internet of Things. This means that CoAP traffic is often used for the control and monitoring of critical infrastructure (e.g., lights, alarms, etc.) which may be prime targets for attack. Approving it as a Proposed Standard seems to be begging for someone to deploy it without reading the warning labels ... would anyone who's planning to use CoAP group communications without security (beyond suggestions such as enabling WiFi security), be unwilling to use it at Experimental? In this text: 2.3. Port and URI Configuration A CoAP server that is a member of a group listens for CoAP messages on the group's IP multicast address, usually on the CoAP default UDP port, 5683. 
If the group uses a specified non-default UDP port, be careful to ensure that all group members are configured to use that same port. Therefore different ports for the same IP multicast address cannot be used to specify different CoAP groups. I'm probably missing something, but I'm not understanding this. If I have two groups of IoT devices configured to use the same IP multicast address, and each group uses its own UDP port, what doesn't work?
I have no issues with the goal of this document, but I do have some items I would like to talk about: 1. The text in section 1.2 implies that the primary multicast mode of operation is source-specific, but doesn't explicitly state that. Is it intended that multicast is only done in an SSM mode? If so, it would be good to provide explicit discussion of that (including references to relevant SSM RFCs such as 3569, 4604, and 4607). If the goal is to also support ASM, that should be stated explicitly. 2. Does the multicast proxy forwarding described in RFC 4605 apply in section 2.1? 3. I am a little concerned with the SHOULD used in section 2.2 when discussing nodes joining the All-CoAP-Nodes multicast address. What happens if a device does not join that group? Will the protocol break? 4. Section 2.3 should allow a device to obtain the port number for the group from the URI scheme referenced in section 2.2. 5. I see that the socket() API (RFC 3542) is referenced, but it is not discussed in terms of CoAP interacting with the group management protocols. Is it assumed that implementers know that they are to use that API to indicate changes in group membership state to IGMP/MLD?
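On point 5 above: in practice the CoAP implementation never speaks IGMP/MLD itself; the kernel emits the membership report as a side effect of a single socket option. A minimal sketch of the IPv4 case (the All-CoAP-Nodes address and default port are from RFC 7252; the interface choice and error handling are illustrative assumptions):

```python
import socket
import struct

ALL_COAP_NODES_V4 = "224.0.1.187"  # "All CoAP Nodes", RFC 7252 Section 12.8
COAP_DEFAULT_PORT = 5683

# ip_mreq: 4-byte group address followed by 4-byte local interface
# address (INADDR_ANY lets the kernel pick the interface).
mreq = struct.pack("4s4s",
                   socket.inet_aton(ALL_COAP_NODES_V4),
                   socket.inet_aton("0.0.0.0"))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", COAP_DEFAULT_PORT))

try:
    # This setsockopt is what triggers the host's IGMP implementation to
    # send a membership report; IP_DROP_MEMBERSHIP reverses it.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
except OSError:
    # Containers and constrained hosts may lack a multicast-capable
    # interface; a real CoAP server would treat this as fatal.
    pass
```

The RFC 3542 API referenced in the draft provides the analogous IPv6 join (IPV6_JOIN_GROUP with an ipv6_mreq), which triggers MLD in the same implicit way.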
The various discussions of reliable multicast in the document may benefit from an informative reference to a technique such as RFC 3940.
I agree with Richard/Martin that this doesn't seem like it is informational, and that experimental is probably the appropriate status.
This is not a DISCUSS as I think more analysis is needed on the threats faced by BFD than is presented here (apologies if that's elsewhere and I'm ignorant) and a later solutions draft that doesn't do that can always attract a DISCUSS. Anyway, the point is that simply complaining that the current auth schemes don't work doesn't tell you what to fix. (Note: I am not saying that GMAC is the wrong answer, I'm saying that there's not enough presented here to know, so a solution draft that says "look there - GMAC has to be right" could well attract such a DISCUSS.) As a separate general point, I have to say that this document doesn't explain to the reader why there's a real problem with crypto for BFD. And doing an HMAC in microseconds is not a problem - openssl tells me that it takes about 8 microseconds for a hmac-md5 over 1024 bytes on my laptop in pure s/w - 10 milliseconds for 3 instances of HMAC is ages. I think you're neglecting here to say that there are 1000's of parallel sessions or something, but in any case, as presented, the timing constraints do not seem to be at all hard, which makes the document less convincing than I believe it ought to be. (Note - I do believe from chatting with folks that there is some real problem with BFD security, but this document does not capture that well as far as I can see.) section 2: MD5 and SHA-1 are used with HMAC, right? With HMAC, collision resistance for the hash function is not the required property, but rather pre-image resistance, and we don't know that SHA-1 is weak in that respect, and even HMAC-MD5 is still ok. There are still good reasons to want new algorithms, but I don't think that lack of collision resistance for the hash function used in HMAC is one of them. section 3 says: "Echo packets are not defined in the BFD specification, though they can keep the BFD session up. 
The format of the echo packet is local to the sending side and there are no guidelines on the properties of these packets beyond the choice of the source and destination addresses." That seems really weird. The "add security but we're not saying how" that follows is even weirder.
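Stephen's openssl timing figure is easy to sanity-check with the standard library; a rough sketch (the key and payload contents are arbitrary, and absolute numbers will vary by machine):

```python
import hashlib
import hmac
import time

key = b"0123456789abcdef"   # arbitrary 16-byte key
payload = b"\x00" * 1024    # 1024-byte message, matching the quoted benchmark

# Time many iterations to average out timer granularity.
N = 10_000
start = time.perf_counter()
for _ in range(N):
    hmac.new(key, payload, hashlib.md5).digest()
per_op_us = (time.perf_counter() - start) / N * 1_000_000

# On commodity hardware this lands in the single-digit-microsecond range,
# consistent with the ~8 microsecond openssl figure quoted above.
print(f"HMAC-MD5 over 1024 bytes: ~{per_op_us:.1f} us per operation")
```

Which supports the point being made: even a software-only HMAC is three orders of magnitude faster than a 10-millisecond budget, so any real constraint must come from session counts or hardware limits the document does not describe.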
Is this text: There are several requirements described in section 3 of The Threat Analysis and Requirements for Cryptographic Authentication of Routing Protocols' Transports [RFC6862] that BFD does not currently meet: still going to be true when it appears in an RFC? Perhaps “that BFD as defined in [RFC5880] does not meet”? or whatever the right reference would be …
It would be helpful to distinguish between cases where a replay attack requires the attacker to be on-link (e.g., some link-layer encapsulations) versus cases such as IPv4/IPv6, and maybe MPLS, where packets could potentially be injected/spoofed from off-link.
I agree with Barry. I suspect we'd not standardise exactly this but no harm that someone plays with it.
I think this doesn't add enough that's useful beyond what S/MIME already does, with "The sending client MAY wrap a full MIME [RFC 2045] message in a message/rfc822 wrapper in order to apply S/MIME security services to header fields," and I don't think the document makes a compelling case. I don't think this proposal is likely to see much implementation. All that said, I don't think there's any harm, and some people might find it useful. No conflict with IETF work that I can see, so let's give it a shot.