Minutes of the ISSLL Working Group from the 46th IETF meeting in Washington, DC.
November 11, 1999, 1530-1730. Minutes recorded by Eric Crawley.

The agenda for this meeting is available at:
http://ana.lcs.mit.edu/ISSLL/meetings/11.99/agenda.pdf
These minutes are also available at:
http://ana.lcs.mit.edu/ISSLL/meetings/11.99/minutes.txt

John Wroclawski went over the status of the current drafts:

- draft-ietf-issll-is802-sbm-09.txt is going to the IESG. The latest version was in response to comments from the AD and other editorial comments. This version will be submitted with the other IS802 documents to the IESG immediately.

- draft-ietf-issll-is802-sbm-mib-01.txt has received no comments on the mailing list. The author was not available for an update, but it is likely that we will send this to the IESG soon.

The majority of the activity in the WG is centered around the IntServ over DiffServ area.

- draft-ietf-issll-diffserv-rsvp-03.txt, the framework document, is getting a few last comments before going to the IESG.

- draft-ietf-issll-dclass-01.txt, the DCLASS document, is complete and will be going to the IESG shortly.

- draft-ietf-issll-nullservice-00.txt, the Null Service specification, has some comments that are being incorporated and is in WG last call. The last call will end 11/25/99. [ed. this has slipped; it will now end sometime in December]

Bruce Davie gave an update on the RSVP Aggregation draft (draft-ietf-issll-rsvp-aggr-00.txt):
SLIDES: http://ana.lcs.mit.edu/ISSLL/meetings/11.99/bsd-rsvp-aggr.pdf

A small number of changes have been made, including the renaming of the draft. Bruce proposed that a clarification be made regarding aggregate reservations; they can be used to manage BW in a DiffServ cloud independent of e2e RSVP reservations, much like a tunnel. He proposed to add text to cover this case. This draft can then be put forward for WG last call. Any questions and issues can be raised on the mailing list.

Next, John Wroclawski discussed DCLASS Marking Negotiation.
SLIDES: http://ana.lcs.mit.edu/ISSLL/meetings/11.99/jtw-dclass-neg.pdf

The problem is: what happens when the host doesn't know what a DCLASS object is and can't mark the traffic? Another instance of this problem is DCLASS marking negotiation for aggregates. There are a few options available; we can:

- Ignore the issue
- Describe it and handle it locally by domain-specific configuration
- Develop a simple yes/no protocol to send responses upstream. This protocol can be implemented as RSVP objects, made into a separate protocol, or something else.

The discussion that followed noted that this is a similar problem to that of the TCLASS object used by the IS802 SBM. There was a question about where, in terms of documents, such a specification would belong. John felt that it should be a separate document. There was concern that all the negotiation would come down to a least common DCLASS, which is true. There was more discussion about the use of RSVP for this function; it was pointed out that this approach requires some form of acknowledgement, which is not very strongly supported in RSVP, but the responses are only needed when using RSVP.

The general consensus was that this problem needs to be documented but not solved immediately. If it is determined that the problem needs to be solved, it will be done in separate documents and will not impede the current set of documents. The problem may be a lot harder than we think right now.

The remainder of the meeting was spent on an excellent discussion of IntServ/DiffServ service mapping. John Wroclawski introduced the problem.
SLIDES: http://ana.lcs.mit.edu/ISSLL/meetings/11.99/jtw-svc-map.pdf

The purpose of IntServ/DiffServ service mapping is to make a DiffServ cloud act like an IntServ "network element". This means the DiffServ cloud must:

- Implement a scheduling service (G, CL)
- Perform admission control
- Export some information (delay info for G)
- Provide a control interface (E2E RSVP, SNMP, etc.)
For a DiffServ cloud we must:

- Select appropriate PHBs
- Select edge behavior (metering, policing, remarking)
- Select admission control algorithm(s)
- Collect and export required information (CL and G terms)

These topics must be documented in draft-ietf-issll-isdif-svc-map-xx, which does not yet exist. It must also answer the top-level questions of: Which services? How accurately? What approach?

The discussion next turned to mapping specific IntServ services. John Wroclawski started off with Controlled Load (CL). CL requires an assurance that BW is available and that a flow's expected queuing delay is related to its own burst size. A CL node must carry over-TSpec traffic when capacity permits, and CL traffic should not disrupt adaptive best-effort traffic. In DiffServ, this means we must:

- Use AF PHBs
- Use different AF classes to handle traffic with different delay expectations (burst sizes)
- Police flows at the cloud edge, where in-spec traffic is marked AFx1 and out-of-spec traffic is marked AFx2
- Set AF class BW to carry expected traffic
- Set AF class dropping parameters to limit delay

See John's presentation (http://ana.lcs.mit.edu/ISSLL/meetings/11.99/jtw-svc-map.pdf) for an example picture using multiple queues and weight or priority across AF classes.

In such a mapping, admission control is a must. There are three current methods for admission control: parameter based, measurement based, and experience based.

The open questions/issues are:

- Do we require/recommend a specific number of delay classes?
- Do we require/recommend a particular output scheduling behavior?
- What should be said about admission control?
- What should be said about alternate mappings?

In the discussion that followed, questions about the aggregate and the delays experienced by the aggregate of the bursts were raised. This can be handled by multiple AF classes, and the aggregate usually wins statistically.
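The edge policing described above (in-spec traffic marked AFx1, out-of-spec traffic marked AFx2) can be sketched with a token-bucket meter. This is an illustrative sketch, not something from the minutes or slides; the rate and burst values are made-up examples, and only the AF11/AF12 codepoints are taken from RFC 2597.

```python
# Illustrative edge marker for the CL mapping: a token-bucket meter marks
# conforming packets AFx1 and non-conforming packets AFx2 (remark, not drop).
# AF11/AF12 DSCPs are from RFC 2597; all parameter values are hypothetical.

AF11, AF12 = 0b001010, 0b001100  # AF class 1, drop precedence 1 and 2

class TokenBucketMarker:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0   # token fill rate in bytes per second
        self.burst = burst_bytes     # bucket depth (the flow's burst size)
        self.tokens = burst_bytes    # bucket starts full
        self.last = 0.0              # time of last update, seconds

    def mark(self, size_bytes, now):
        """Return the DSCP for a packet of size_bytes arriving at time now."""
        # Refill tokens for the elapsed interval, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:   # in spec: conforms to the TSpec
            self.tokens -= size_bytes
            return AF11
        return AF12                     # out of spec: remarked, still carried

marker = TokenBucketMarker(rate_bps=1_000_000, burst_bytes=1500)
print(marker.mark(1500, now=0.0))    # a first burst-sized packet is in spec
print(marker.mark(1500, now=0.001))  # an immediate second packet is not
```

Remarking rather than dropping out-of-spec traffic matches the CL requirement above that over-TSpec traffic be carried when capacity permits.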
Another question was raised about burst sizes and how they are specified. This is really a problem for the IntServ models as they exist. The application programmers have to figure this out, or it is available via policy servers.

There were further questions and discussion about the use of DiffServ PHBs, which are specifically *not* services, to construct IntServ *services*. Yes, DiffServ PHBs do not define services, but by using a DiffServ network as an IntServ node, we are able to develop services.

There was a concern that combining multiple burst sizes together would either run out of AF classes or not get the service desired. This is a case where the classes have to be provisioned properly and admission control is used to control the resources.

There was a concern that if we specify the behavior, we may end up specifying more of AF than is already specified. John Wroclawski pointed out that we would only be talking about weighting between AF classes and not inter-PHB behavior.

There was a comment that we have gone from a very specific micro-flow monitored network to one that can scale by aggregating, but we lose some of the detail, so a bit of over-provisioning is needed. However, this allows a statistical multiplexing gain that is good for aggregates.

There was a question about just recommending AF when one could also use EF. John pointed out that EF will throw away over-spec traffic while AF won't, so we *must* use AF for Controlled Load.

A concern was voiced about ordering the AF classes, but it is likely that we can only recommend the order.

John Wroclawski next gave an introduction to Guaranteed Service mapping. The key difference between G and CL is that there is a mathematical guarantee of worst-case delay. This means that the nodes must generate and export "error terms". There are some tradeoffs that must be considered, because the mathematical guarantee and error terms impose difficult implementation requirements. Handling over-TSpec traffic well is "very hard".
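The "error terms" mentioned above are the C and D terms that Guaranteed Service nodes export; they enter the end-to-end worst-case delay bound defined in RFC 2212. A sketch of that bound, assuming the standard RFC 2212 formula (the minutes themselves do not restate it); all numeric values below are made-up examples.

```python
# End-to-end queuing delay bound for Guaranteed Service (RFC 2212).
# Ctot/Dtot are the sums of per-node C (bytes) and D (seconds) error terms
# that each node along the path generates and exports.

def g_delay_bound(b, r, p, M, R, Ctot, Dtot):
    """Worst-case queuing delay (seconds) for a Guaranteed reservation.

    b: token bucket depth (bytes)    r: token rate (bytes/s)
    p: peak rate (bytes/s)           M: maximum packet size (bytes)
    R: reserved rate (bytes/s), with R >= r
    Ctot, Dtot: summed rate-dependent and rate-independent error terms.
    """
    if p > R:  # peak rate exceeds the reservation: burst term applies
        return (b - M) / R * (p - R) / (p - r) + (M + Ctot) / R + Dtot
    return (M + Ctot) / R + Dtot  # reservation covers the peak rate

# Hypothetical example: 16 KB burst, 1 Mbyte/s reserved rate, small error
# terms; the bound comes out to roughly 20 ms.
print(g_delay_bound(b=16000, r=125_000, p=10_000_000, M=1500,
                    R=1_000_000, Ctot=3000, Dtot=0.002))
```

The formula makes the tradeoff above concrete: every node's C and D terms add directly to the bound, which is why generating and exporting accurate error terms for a whole DiffServ cloud is a difficult implementation requirement.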
John presented the following strawman:

- Shape all G traffic at ingress to CBR
- Use EF to carry G traffic
- Optionally reshape G traffic at exit
- Overall delay bound is the sum of delays

Anna Charny gave a presentation on a mathematical model for Guaranteed traffic in a DiffServ network.
SLIDES: http://ana.lcs.mit.edu/ISSLL/meetings/11.99/charny-g-aggr.pdf

The assumptions in the model are:

- Backbone implements priority FIFO for EF
- Ingress node performs per-"flow" shaping
- All devices in the backbone are output-buffered (standard in IntServ and DiffServ)

The basic formula is:

  delay = ingress shaping delay + backbone delay + egress shaping delay

For more details see Anna's slides. She has worked out a proof for the formula, but it was done recently and hasn't been completely checked yet. Anna's model showed that delay is very sensitive to priority traffic utilization, and also sensitive to the accuracy of shaping at all ingress points as well as the smallest flow rate across the cloud. The formula applies to arbitrary topologies too. For more examples, please see Anna's slides.

Anna noted that delay is inversely proportional to the smallest rate you shape at. She showed some startling numbers about the maximum delay. There was lots of discussion on what these numbers and models mean. The interesting and counter-intuitive result is that aggregation actually reduces the delay!

Observations:

- There is a reasonable area in parameter space that gives reasonable delay bounds, meaning that G service can be provided.
- The ability to provide G service requires cooperation of all edge and core equipment, not just along the path.
- High utilization of delay-sensitive traffic is not feasible.
- Appropriate aggregation (on an edge-to-edge basis) *decreases* delay bounds!
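The basic formula above can be sketched numerically. This is a loose illustration, not Anna's actual bound: it assumes the standard worst-case token-bucket shaping delay b/r for draining a burst b at shaped rate r, takes the backbone term as a given input (the minutes do not state its closed form; see Anna's slides), and assumes the optional egress reshaper adds the same worst-case term. All flow parameters are hypothetical.

```python
# Sketch of: delay = ingress shaping delay + backbone delay + egress shaping
# delay, illustrating the observation that the bound is inversely
# proportional to the smallest rate you shape at.

def shaping_delay(burst_bytes, rate_bytes_per_s):
    # Worst case to drain a burst when shaping to a constant bit rate.
    return burst_bytes / rate_bytes_per_s

def e2e_delay_bound(flows, backbone_delay_s):
    """flows: list of (burst_bytes, shaped_rate_bytes_per_s) at the ingress.

    The ingress term is driven by the slowest-draining flow, so for equal
    bursts it scales as 1 / (smallest shaped rate). The egress reshaping
    term is assumed equal to the ingress term (worst case).
    """
    ingress = max(shaping_delay(b, r) for b, r in flows)
    egress = ingress
    return ingress + backbone_delay_s + egress

# Two flows with equal bursts: 1 Mbit/s and 100 kbit/s. The 100 kbit/s flow
# dominates the shaping terms; halving its rate roughly doubles the bound.
flows = [(1500, 125_000), (1500, 12_500)]
print(e2e_delay_bound(flows, backbone_delay_s=0.005))
```

Even this crude sketch reproduces the qualitative behavior in the minutes: the smallest shaped rate, not the aggregate rate, controls the shaping contribution to the bound.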