CURRENT MEETING REPORT

Reported by Robert Reschly/US Army Research Laboratory

Minutes of the Network Status Reports Working Group (NETSTAT)

The session was chaired by Marsha Perrott in the absence of Gene Hastings.
The following presentations were given:

   o CoREN Update - Scott Bradner
   o NC-REN (formerly CONCERT) - Tim Seaver
   o ``Transition from NSFnet Backbone to the NAPland'' - Susan Hares
   o NSFnet Statistics - Guy Almes
   o CA*net - Eric Carroll
   o MAE-East Evolution - Andrew Partan

CoREN

Scott Bradner briefed the status of the CoREN Network Services Request
for Proposals (RFP) process.  Scott emphasized one key feature of this
RFP: it will result in a contract to provide services to the regionals,
not in a contract to build a backbone to interconnect regionals.  Since
they are buying a service, CoREN expects to be one customer among many
using the same service.  CoREN does not want to have to rely on the
NAPs for everything.  CoREN feels NAPs and RAs are a good idea, but....

Scott observed that dollars flow from the NSF to the regionals to fully
connected network service providers (NSPs) to the NAPs.  The only NSPs
eligible to provide connectivity paid for by NSF funding are those
which connect to all of the primary NAPs (NY, IL, CA).  The CoREN
provider will establish connectivity to all primary NAPs, MAE-East, and
the CIX.

Scott was asked about planned NOC responsibilities: NOC integration and
coordination is being worked on.  Discussion points are relative
responsibilities, e.g., NEARnet vs. CoREN provider hand-off.  When
asked for information on non-CoREN American provider plans, Scott knew
of at least two providers who will be at other NAPs.  Scott indicated
MCI will be at the Sprint NAP soon, others later.

As for the CoREN RFP evaluation, more than one of the proposals was
close from a technical perspective, and they were close financially.
The selected provider came out ahead in both measurements and
additionally offered to support a joint technical committee to provide
a forum for working issues as they arise.  In particular, early efforts
will focus on quantifying QOS issues, as those were intentionally left
out of the specification so they can be negotiated as needed (initially
and as the technology changes).

The circuits are coming in, and routers (Cisco 7000s) are being
installed in the vendor's PoPs this week.  First bits will be flowing
by 1 August.  Line and router loading and abuse testing is expected to
commence by 15 August, and production testing should be underway by
15 September.  Cutover is expected before 31 October.  Someone noted
there may be some sort of problem related to route cache flushing in
the current Cisco code which could impact deployment.

NC-REN (Formerly CONCERT)

Presented by Tim Seaver.

   o CONCERT is a statewide video and data network operated by MCNC.
      - primary funding from the State of NC
      - currently 111 direct, 32 dialup, and 52 UUCP connections
      - 30K+ hosts
      - 4.5Mbps inverse multiplexed 3xDS1 link to the ANS PoP in
        Greensboro, NC

   o Replaced by NC-REN
      - expands to North Carolina Research and Education Network
      - DNS name is changing from concert.net to ncren.net

   o Service changes
      - dropping commercial services
      - concentrating on R&E
      - focus on user help

   o Main reason for name change
      - British Telecom and MCI wanted the CONCERT name.  MCNC never
        registered CONCERT.
   o In return, MCNC management wanted
      - the NC service community more prominent
      - alignment with the NREN
      - emphasis on R&E

   o Press release 15 April

   o Conversion to ncren.net in progress
      - Domain registered February 1994
      - Local changes simple but time-consuming
      - Remote changes hard and time-consuming
      - Targeting 1 October completion; fairly sure of conversion by
        31 October
      - Decommission CONCERT by 1 January 1995

   o Existing service problems
      - Help desk overloaded from dialup UNIX shell accounts
      - Commercial providers springing up everywhere
      - The Umstead Act (a NC state law) says state funds cannot
        subsidize competition with commercial services.
      - CONCERT had sufficient non-governmental funding to cover
        commercial services, but accounting practices could not prove
        separation, so they decided to just stop.

   o Service changes
      - Turned over dialup UNIX shell connectivity to Interpath in
        March 1994
      - Planning to stop providing commercial IP and UUCP services by
        October 1994
      - Planning to stop providing commercial direct services by
        1 January 1995
      - Will continue direct connects, IP, and UUCP for government,
        research, and education customers.

   o Plans
      - Pursuing new R&E customers:
         * Remaining private colleges
         * Community colleges
         * K-12 schools
         * State and local government
         * Libraries
      - Providing security services: firewalls, Kerberos, PEM, secure
        DNS, secure routing.
      - Expanding information services: m-bone, NC state government
        documents, WWW services, and consultation -- to provide more
        access
      - Internet connection will be upgraded to 45Mbps in October 1994
      - Work on the NC Information Highway (NCIH)

In response to a question about NC microwave trunking, Tim noted that
the Research Triangle Park area is at 45Mbps and remote areas are at
25Mbps.  In passing he noted that ATM interaction with the research
community is an interesting opportunity, indicating Southern Bell, GTE,
and Carolina Telephone are working on ATM infrastructure.

In response to a question about the number of sites changing to NC-REN,
he stated there were about 20 R&E direct connections which would move,
and that the narrowed focus of the NC-REN would not change the cash
flow model significantly.

``Transition from NSFnet Backbone to the NAPland''

Sue Hares encouraged mid-level networks to send her information
concerning any aspects of their plans to transition.  She would need an
indication of what can be published, and will respect confidentiality
requirements.  Information is desperately needed about local and
regional plans so the transition can be managed for NSF.

Sue presented slides which gave information on NAP on-line dates and
three categories of organizations to move.  Category 1 is primarily
CoREN, category 2 is the other regionals, and category 3 includes
supercomputer sites and less firmly planned sites.  Her slides follow
these minutes.

It should be noted that the information presented concerns currently
scheduled NSFnet service turn-down and does not say anything about
tangible infrastructure changes, only NSFnet service plans.  That is,
NSF says they intend to stop paying for the forwarding of traffic via
the indicated ENSSs, no more, no less.  In conversation it was reported
that PREPnet is not to use the PSC connection for access after
1 October.  The real message is that these dates are ``official
notification'' for management planning.  It was recommended to ``flick
the lights'' before actual turn-off, i.e., install the replacement
connectivity and turn off the NSFnet connection to see what breaks.

Sue reiterated that the ``decommissionings'' are simply changes in an
organization's status as a recipient of NSFnet services.  It would be a
good idea for each affected organization to talk to any or all service
providers between the organization and the NSFnet for details about
other aspects of the connection.  When asked about the time-lines for
the various categories, it was stated that NSF wants to have the
category 1 sites switched off the NSFnet by 31 October.  Beyond that,
it is currently phrased as a best-effort task.

There was some discussion about CoREN test and transition plans.  Load
and trans-NAP plans are still being worked.  There appears to be
significant concern about not taking any backwards steps.  One
suggestion was to work out bilateral testing agreements.  This provoked
discussion of a tool called offnet* (and some nice tools Hans-Werner
Braun has written).  Some or all of these tools will be made available
by Merit; however, it was stressed that use of these tools by the
regionals is intended to instrument local sites, and Merit cannot allow
additional connections to NSFnet backbone monitoring points.

* [Offnet was/is a program which tracked/tracks the nets which are
configured but not heard.  This is used by Enke Chen (enke@merit.edu)
in generating reports about the number of configured vs. heard nets
(difference = ``silent nets'').  There is a constantly-increasing
number of nets which have been configured but are not actually
announced to the NSFNET.  Anyone wanting the OFFNET code should contact
Susan Hares and cc: merit-ie@merit.edu.]
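[The offnet code itself was not distributed with these minutes.  As a
rough illustration of the comparison the footnote describes, the sketch
below computes ``silent nets'' as the difference between the set of
configured nets and the set of nets actually heard.  The file names and
formats are assumptions for the example, not details of the actual
Merit tool.]

    # Hypothetical sketch of the configured-vs-heard comparison that
    # offnet is described as performing; file names and formats are
    # assumptions, not details of the actual Merit tool.

    def read_nets(path):
        """Read one network number per line, skipping blanks and comments."""
        with open(path) as f:
            return {line.strip() for line in f
                    if line.strip() and not line.startswith("#")}

    configured = read_nets("configured_nets.txt")  # nets in the config
    heard = read_nets("heard_nets.txt")            # nets actually announced

    silent = configured - heard                    # configured but never heard
    print("configured=%d heard=%d silent=%d"
          % (len(configured), len(heard), len(silent)))
    for net in sorted(silent):
        print(net)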
NSFnet Statistics

Guy Almes reported that traffic is still doubling!  Traffic topped 70
gigapackets per month in May and June.  Guy noted that the December
1994 chart will be interesting: how to measure, and what makes sense to
measure, is new in a backboneless regime.  There will be a transition
from traffic into the backbone to traffic into multiple whatevers.
Should any resulting numbers be counted?  It was observed that it would
be hard to avoid double counting in such an environment.  The general
consensus was that there is a need to pick an appropriate set of
collection points, e.g., a transition from BARRnet-to/from-NSF data to
BARRnet-to/from-CoREN-provider data.  One position contends that we
really want customer-to-BARRnet data rather than BARRnet-to-CoREN-
provider data.  However, it was observed that this is not tractable or
trackable.

Other statistics show:

   o 952 aggregates currently configured in AS690
   o 751 announced to AS690
   o 6081 class-based addresses represented

There were two additional slides depicting:

   1) IBGP stability: the solid line is the percentage of IBGP sessions
      which have transitions during the measurement intervals, and
   2) External route stability: the solid line is external peers.

Data collection is once again in place on the backbone and has been
operational since 1 June.  In conversation, it was noted that the Route
Servers will be gathering statistics from the NAPs.  The Route Servers
will be gated engines and will be located at the NAPs.

Updates on ANS router software activity.  Software enhancements:

   o RS960 buffering and queuing microcode updated
      - increased number of buffers; also went from max-MTU-sized
        buffers to 2+kB chainable buffers (a maximum-size FDDI frame
        will fit in two buffers with room to spare)
      - dynamic buffer allocation within the card
      - the two together really improve dynamic burst performance

   o Design for improved end-to-end performance
      - Based on the Van Jacobson and Sally Floyd random early drop
        work.
      - End-to-end performance is limited by the bandwidth-delay
        product.
      - Current protocols deal gracefully with a single packet drop,
        but multiple dropped packets push the algorithm into slow
        start.

With ``current'' Van Jacobson code, even brief congestion in the path
will cause things to back off under even low end loadings.  Work shows
that Random Early Drop slows things just enough to avoid congestion
without putting particular flows into slow start.  In passing, Guy
noted that he figures the speed of light as roughly 125 mi/ms on
general phone company facilities.
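[For readers unfamiliar with the two ideas above, the sketch below is a
minimal illustration, not the actual RS960 or ANS code.  It first works
out the bandwidth-delay product for a cross-country DS3 path using
Guy's 125 mi/ms figure and an assumed 3000-mile one-way distance, then
shows the basic random-early-drop decision: drop probability ramps up
with the average queue depth so individual flows are nudged to back off
before the queue overflows and forces multiple drops (and slow start)
on everyone.  All distances, thresholds, and weights are assumptions
for the example.]

    import random

    # Bandwidth-delay product for a cross-country DS3 path (assumed
    # figures: ~3000 mi one way, Guy's 125 mi/ms propagation estimate).
    one_way_ms = 3000 / 125            # about 24 ms coast to coast
    rtt_s = 2 * one_way_ms / 1000.0    # about 48 ms round trip
    ds3_bps = 45e6
    bdp_bytes = ds3_bps * rtt_s / 8    # bytes in flight needed to fill the pipe
    print("RTT ~%.0f ms, bandwidth-delay product ~%.0f kB"
          % (rtt_s * 1000, bdp_bytes / 1e3))

    # Minimal RED-style early-drop decision (thresholds and weight are
    # illustrative only, not the deployed values).
    MIN_TH, MAX_TH, MAX_P, WEIGHT = 5, 15, 0.02, 0.002
    avg_queue = 0.0

    def red_drop(current_queue_len):
        """Return True if an arriving packet should be dropped early."""
        global avg_queue
        # Exponentially weighted moving average of the queue depth.
        avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * current_queue_len
        if avg_queue < MIN_TH:
            return False        # queue is short: never drop
        if avg_queue >= MAX_TH:
            return True         # sustained congestion: drop every arrival
        # In between, drop with probability rising linearly toward MAX_P,
        # so flows slow down one at a time instead of all at once.
        p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
        return random.random() < p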
The conditions and results were summarized on two slides:

   + Single flow, Van Jacobson code with random early drop: 41Mbps at
     384k MTU cross-country (PSC to SDSC?).  This code (V4.20L++) is
     likely to be deployed in a month or so.  By way of comparison,
     Maui Supercomputer Center to SDSC was 31Mbps using an earlier
     version of the code with 35 buffers.  Windowed ping with the same
     code did 41Mbps.

   + Four flows, Van Jacobson code with random early drop: 42Mbps at
     96kB MTU.

All the numbers are with full forwarding tables in the RS960s.

In other news:

   o SLSP support for broadcast media completed.
   o Eliminated the fake AS requirement for multiply connected peers.
   o Implemented an IBGP server.

Pennsauken (the Sprint NAP) is an FDDI in a box.

CA*net

Eric Carroll reported that all but three backbone links are now at T1,
and there are dual T1s to each US interconnect.  CA*net has pulled in
the Canadian government networks and is using Ciscos to build the
network.  They are still seeing 8-10x US costs for service.  CA*net
will grow to DS3 when they can get it and afford it(!).

The numbers on the map slide are percentage utilization.  Note that 12
routers were installed between mid-March and the end of April, and
these are early numbers.  Note also that the British Columbia to NWnet
T1 link went to saturation in 5 hours.  This appears to be due to
pent-up demand, not particular users or programs.  The 7010 roll-out
had a lot of support from Cisco.  They ran into some problems with the
queuing discipline on FT1 lines.  CA*net is still doing NNSTAT on an RT
for now, but is working with an RMON vendor to get equipment for the
new implementation.

When asked about using inverse multiplexors for increased bandwidth,
Eric indicated CA*net was currently just using Cisco's load sharing to
the US; however, inverse multiplexors would be considered when needed.

A question was raised about CA*net connectivity plans in light of the
impending NSF transition.  Currently international connectivity is just
to the US, specifically to the US R&E community.  There is some
interest in, and discussion of, other international connectivity, but
cost and other factors are an issue.  CA*net hopes to place its NSF
connectivity order by next week.  The biggest concern is the risk of
becoming disconnected from what Eric termed the R&E affinity group.

CA*net currently carries 1000 registered and 900 active networks.
CA*net is not AUP-free; instead it is based on a transitive AUP
``consenting adults'' model.  If two Canadian regionals or providers
agree to exchange a particular kind of traffic, then CA*net has no
problem.  CA*net just joined the CIX, which prompted a question as to
whether ONet is a CIX member.  In response, Eric characterized CA*net
as a cooperative transit backbone for regional members.  Therefore
CA*net joining the CIX is somewhat meaningless in and of itself, and,
by implication, is only meaningful in the context of the regionals and
providers interacting via CA*net.  In response to another question,
Eric indicated that CA*net is still seeing growth.
MAE-East Evolution

(MAE == Metropolitan Area Ethernet)

Andrew Partan volunteered to conduct an impromptu discussion of
MAE-East plans.  There is an effort underway to install a FDDI ring at
the MFS Gallows Rd PoP and connect that ring to MAE-East using a Cisco
Catalyst box.  MAE-East folks are experimenting with GDC switches.

Is there a transition from MAE-East to the SWAB?  Unknown.
(SWAB == SMDS Washington [DC] Area Backbone)

The MFS DC NAP is proposing to implement using NetEdge equipment.  Any
MAE-East plans to connect to the MFS NAP?  Unknown.

ALTERnet is currently using a Cisco Catalyst box and is happy.

Time-frame for implementing MAE-East FDDI?  Not yet; they still need
management approval.  They hope to have a start in the next several
weeks.

Those interested in MAE-East goings-on and discussions with members
should join the mailing list MAE-East[-request]@uunet.uu.net.

For what it may be worth, they ``had to interrupt MAE-LINK for 5
seconds this week to attach an MCI connection.''

In summary (in response to a question): one would contract with MFS for
connectivity to MAE-East.  Then one would need to individually
negotiate pairwise arrangements with the other providers with which
there was an interest in passing traffic.  As far as is known there are
no settlements currently, but no one could say for sure.

Random Bits

SWAB (SMDS Washington Area Backbone): In response to a point of
confusion, it was stated that the SWAB bilateral agreement template is
just a sample, not a requirement.

CIX: The CIX router is getting a T3 SMDS connection into the PacBell
fabric.  ALTERnet and PSI are doing so too.  CERFnet is currently on.
Noted in passing: each SMDS access point can be used privately, to
support customers, to enhance a backbone, etc.  This could have serious
implications for other provider agreements.

CERFnet: Pushpendra Mohta is reported to be happy, but the group
understood that most CERFnet CIRs are at 4Mbps over T3 entrance
facilities.  PacBell was reportedly running two switches with 200Mbps
backplane capacity, interconnected with a single T3.  They are planning
to increase provisioning -- they already have a lot of demand.

[Pushpendra adds: PacBell operates two switches in the Bay Area, one in
San Francisco and one in Santa Clara.  The former is practically full,
and the latter is brand new.  All new T3 orders will end up on the
Santa Clara switch.  It IS true that the backplane of the switch is
only 200Mbps.  Because the Santa Clara switch is new, the switches are
interconnected by only one T3 link.  However, the switches are capable
of more than one T3 link, and the product manager at PacBell (Dick
Shimizu) has assured me that enough demand would warrant a new T3
between the switches, etc.  Providers thinking of buying T3-level
services should specify the Santa Clara switch, although it should end
up being used anyway.  I have alerted the product manager and he will
ensure that T3 circuits are on the SC switch.  A new switch is being
planned for early next year, although enough demand will accelerate
that deployment as well.]

[Addendum from PSI: In addition, PSI installed an SMDS switch in Santa
Clara several weeks ago which has a gigabit backplane.  So, if there is
a problem with CIX SMDS throughput, there is a ``net'': using
point-to-point T3s and multiple SNIs on the CIX (PacBell) SMDS
connection, remapped into another (PSI's) switch.  Marty]