*** This file is venera.isi.edu:mbone/faq.txt ***
*** Corrections and Additions Requested ***
The MBONE is a virtual network. It is layered on top of portions of the physical Internet to support routing of IP multicast packets since that function has not yet been integrated into many production routers. The network is composed of islands that can directly support IP multicast, such as multicast LANs like Ethernet, linked by virtual point-to-point links called "tunnels". The tunnel endpoints are typically workstation-class machines having operating system support for IP multicast and running the "mrouted" multicast routing daemon.
Previous versions of the IP multicast software (before March 1993) used a different method of encapsulation based on an IP Loose Source and Record Route option. This method remains an option in the new software for backward compatibility with nodes that have not been upgraded. In this mode, the multicast router modifies the packet by appending an IP LSRR option to the packet's IP header. The multicast destination address is moved into the source route, and the unicast address of the router at the far end of the tunnel is placed in the IP Destination Address field. The presence of IP options, including LSRR, may cause modern router hardware to divert the tunnel packets through a slower software processing path, causing poor performance. Therefore, use of the new software and the IP encapsulation method is strongly encouraged.
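For illustration (a rough sketch, not taken from the software release),
a multicast packet from source S to group G crossing a tunnel from
router A to router B looks approximately like this in the two modes
(protocol number 4 is the standard IP-in-IP protocol):

    IP-in-IP:  | IP hdr: src=A dst=B proto=4 | IP hdr: src=S dst=G | data |
    LSRR:      | IP hdr: src=S dst=B, LSRR option carrying G | data |

In both cases the router at B recovers the group address G and resumes
normal multicast forwarding.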
Between continents there will probably be only one or two tunnels, preferably terminating at the closest point on the MBONE mesh. In the US, this may be on the Ethernets at the two FIXes (Federal Internet eXchanges) in California and Maryland. But since the FIXes are fairly busy, it will be important to minimize the number of tunnels that cross them. This may be accomplished using IP multicast directly (rather than tunnels) to connect several multicast routers on the FIX Ethernet.
The intent is that when a new regional network wants to join, its operators will make a request on the appropriate MBONE list; participants at "close" nodes will then answer and cooperate in setting up the ends of the appropriate tunnels. To keep fanout down, sometimes this will mean breaking an existing tunnel to insert a new node, so three sites will have to work together to set up the tunnels.
To know which nodes are "close" will require knowledge of both the MBONE logical map and the underlying physical network topology, for example, the physical T3 NSFnet backbone topology map combined with the network providers' own knowledge of their local topology.
Within a regional network, the network's own staff can independently manage the tunnel fanout hierarchy in conjunction with end-user participants. New end-user networks should contact the network provider directly, rather than the MBONE list, to get connected.
Note that the design bandwidth must be multiplied by the number of tunnels passing over any given link since each tunnel carries a separate copy of each packet. This is why the fanout of each mrouted node should be no more than 5-10 and the topology should be designed so that at most 1 or 2 tunnels flow over any T1 line.
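As a worked example: if the design bandwidth is, say, 500 kb/s (roughly
one video stream plus one audio stream), two tunnels crossing the same
T1 link already consume about 1 Mb/s of its 1.5 Mb/s capacity, and a
third tunnel would leave essentially nothing for unicast traffic.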
While most MBONE nodes should connect with lines of at least T1 speed, it will be possible to carry restricted traffic over slower lines. Each tunnel has an associated threshold against which the packet's IP time-to-live (TTL) value is compared. By convention in the IETF multicasts, higher-bandwidth sources such as video transmit with a smaller TTL so they can be blocked, while lower-bandwidth sources such as compressed audio are allowed through.
It is best if the workstations can be dedicated to the multicast routing function, both to avoid interference from other activities and so there will be no qualms about installing kernel patches or new code releases on short notice. Since most MBONE nodes other than endpoints will have at least three tunnels, and each tunnel carries a separate (unicast) copy of each packet, it is also useful, though not required, to have multiple network interfaces on the workstation so it can be installed in parallel with the unicast router at sites with configurations like this:
                     +----------+
                     | Backbone |
                     |   Node   |
                     +----------+
                          |
    ------------------------------------------ External DMZ Ethernet
         |                          |
    +----------+               +----------+
    |  Router  |               | mrouted  |
    +----------+               +----------+
         |                          |
    ------------------------------------------ Internal DMZ Ethernet

(The "DMZ" Ethernets borrow that military term to describe their role as interface points between networks and machines controlled by different entities.) This configuration allows the mrouted machine to connect with tunnels to other regional networks over the external DMZ and the physical backbone network, and to connect with tunnels to the lower-level mrouted machines over the internal DMZ, thereby splitting the load of the replicated packets. (The mrouted machine would not do any unicast forwarding.)
Note that end-user sites may participate with as little as one workstation that runs the packet audio and video software and has a tunnel to a network-provider node.
To set up and run an mrouted machine will require the knowledge to build and install operating system kernels. If you would like to use a hardware platform other than those currently supported, then you might also contribute some software implementation skills!
We will depend on participants to read mail on the appropriate mbone mailing list and respond to requests from new networks that want to join and are "nearby" to coordinate the installation of new tunnel links. Similarly, when customers of the network provider make requests for their campus nets or end systems to be connected to the MBONE, new tunnel links will need to be added from the network provider's multicast routers to the end systems (unless the whole network runs MOSPF).
Part of the resources that should be committed to participate would be for operations staff to be aware of the role of the multicast routers and the nature of multicast traffic, and to be prepared to disable multicast forwarding if excessive traffic is found to be causing trouble. The potential problem is that any site hooked into the MBONE could transmit packets that cover the whole MBONE, so if it became popular as a "chat line", all available bandwidth could be consumed. Steve Deering plans to implement multicast route pruning so that packets only flow over those links necessary to reach active receivers; this will reduce the traffic level. This problem should be manageable through the same measures we already depend upon for stable operation of the Internet, but MBONE participants should be aware of it.
    Machines                Operating Systems         Network Interfaces
    --------                -----------------         ------------------
    Sun SPARC               SunOS 4.1.1,2,3           ie, le, lo
    Vax or Microvax         4.3+ or 4.3-tahoe         de, qe, lo
    Decstation 3100,5000    Ultrix 3.1c, 4.1, 4.2a    ln, se, lo
    Silicon Graphics        All ship with multicast
There is an interested group at DEC that may get the software running on newer DEC systems with Ultrix and OSF/1. Also, some people have asked about support for the RS-6000 and AIX or other platforms. Those interested could use the mbone list to coordinate collaboration on porting the software to these platforms!
An alternative to running mrouted is to run the experimental MOSPF software in a Proteon router (see MOSPF question below).
    ipmulti-pmax31c.tar
    ipmulti-sunos41x.tar.Z              Binaries & patches for SunOS 4.1.1,2,3
    ipmulticast-ultrix4.1.patch
    ipmulticast-ultrix4.2a-binary.tar
    ipmulticast-ultrix4.2a.patch
    ipmulticast.README                  [** Warning: out of date **]
    ipmulticast.tar.Z                   Sources for BSD

You don't need kernel sources to add multicast support. Included in the distributions are files (sources or binaries, depending upon the system) to modify your BSD, SunOS, or Ultrix kernel to support IP multicast, including the mrouted program and special multicast versions of ping and netstat.
Silicon Graphics includes IP multicast as a standard part of their operating system. The mrouted executable and ip_mroute kernel module are not installed by default; you must install the eoe2.sw.ipgate subsystem and "autoconfig" the kernel to be able to act as a multicast router. In the IRIX 4.0.x release, there is a bug in the kernel code that handles multicast tunnels; an unsupported fix is available via anonymous ftp from sgi.com in the sgi/ipmcast directory. See the README there for details on installing it.
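As a hedged sketch of those steps on IRIX (the exact inst invocation
depends on where your software distribution lives):

    inst            (select and install the eoe2.sw.ipgate subsystem)
    autoconfig      (rebuild the kernel so ip_mroute is configured in)
    reboot          (boot the new kernel)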
IP multicast is also included in Sun's Solaris 2.1 and in BSD 4.4 when/if it is released.
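Whatever the platform, applications reach the multicast service through
a few new socket options added by these kernel extensions. As a minimal
sketch in C (the group address and port below are made up for
illustration, not assigned to any real session), joining a group looks
roughly like this:

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int s;
        struct sockaddr_in sin;
        struct ip_mreq mr;

        s = socket(AF_INET, SOCK_DGRAM, 0);        /* UDP socket */

        /* bind the session's UDP port (a made-up example port) */
        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(4589);
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(s, (struct sockaddr *)&sin, sizeof(sin));

        /* join a multicast group (a made-up example group address);
           the kernel maps the group to an Ethernet multicast address
           and delivers packets sent to it on this socket */
        mr.imr_multiaddr.s_addr = inet_addr("224.2.0.1");
        mr.imr_interface.s_addr = htonl(INADDR_ANY);
        if (setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                       (char *)&mr, sizeof(mr)) < 0)
            perror("IP_ADD_MEMBERSHIP");

        /* ... recvfrom() on s now receives the group's traffic ... */
        return 0;
    }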
The most common problem encountered when running this software is with hosts that respond incorrectly to IP multicasts. These responses typically take the form of ICMP network unreachable, redirect, or time-exceeded error messages, which are a nuisance but mostly harmless until we get several such hosts each sending a packet in response to 50 packets per second of packet audio. These responses are in violation of the current IP specification and, with luck, will disappear over time.
Multicast routing algorithms are described in the paper "Multicast Routing in Internetworks and Extended LANs" by S. Deering, in the Proceedings of the ACM SIGCOMM '88 Conference.
There is an article in the June 1992 ConneXions about the first IETF audiocast from San Diego, and a later version of that article is in the July 1992 ACM SIGCOMM CCR. A reprint of the latter article is available by anonymous FTP from venera.isi.edu in the file pub/ietf-audiocast-article.ps. There is no article yet about later IETF audio/videocasts.
    /pub/net-research/mbone-map-{big,small}.ps

The small one fits on one page and the big one is four pages that have to be taped together for viewing. This map is produced from topology information collected automatically from all MBONE nodes running the up-to-date release of the mrouted program (some are not yet updated, so links beyond them cannot be seen). Pavel Curtis at Xerox PARC has added the mechanisms to automatically collect the map data and produce the map. (Thanks also to Paul Zawada of NCSA who manually produced an earlier map of the MBONE.)
The advantages of linking DVMRP with MOSPF are fewer configured tunnels and less multicast traffic on the links inside the MOSPF domain. There are also a couple of potential drawbacks: increasing the size of DVMRP routing messages, and increasing the number of external routes in the OSPF systems. However, it should be possible to alleviate these drawbacks by configuring area address ranges and by judicious use of MOSPF default routing.
    AlterNet     ops@uunet.uu.net
    CERFnet      mbone@cerf.net
    CICNet       mbone@cic.net
    CONCERT      mbone@concert.net
    Cornell      swb@nr-tech.cit.cornell.edu
    JvNCnet      multicast@jvnc.net
    Los Nettos   prue@isi.edu
    NCAR         mbone@ncar.ucar.edu
    NCSAnet      mbone@cic.net
    NEARnet      nearnet-eng@nic.near.net
    OARnet       oarnet-mbone@oar.net
    PSCnet       pscnet-admin@psc.edu
    PSInet       mbone@nisc.psi.net
    SESQUINET    sesqui-tech@sesqui.net
    SDSCnet      mbone@sdsc.edu
    SURAnet      multicast@sura.net
    UNINETT      mbone-no@uninett.no
    SuperJANET   uk-mbone@mhs-relay.ac.uk
If you are a network provider, send a message to the -request address of the mailing list for your region to be added to that list for purposes of coordinating setup of tunnels, etc.:
    mbone-eu:    mbone-eu-request@sics.se               Europe
    mbone-jp:    mbone-jp-request@wide.ad.jp            Japan
    mbone-korea: mbone-korea-request@mani.kaist.ac.kr   Korea
    mbone-na:    mbone-na-request@isi.edu               North America
    mbone-oz:    mbone-oz-request@internode.com.au      Australia
    mbone:       mbone-request@isi.edu                  other

These lists are primarily aimed at network providers who would be the top level of the MBONE organizational and topological hierarchy. The mailing list is also a hierarchy; mbone@isi.edu forwards to the regional lists, then those lists include expanders for network providers and other institutions. Mail of general interest should be sent to mbone@isi.edu, while regional topology questions should be sent to the appropriate regional list.
Individual networks may also want to set up their own lists for their customers to request connection of campus mrouted machines to the network's mrouted machines. Some that have done so were listed above.
STEP 2: Set up an mrouted machine, build a kernel with IP multicast extensions added, and install the kernel and mrouted; or, install MOSPF software in a Proteon router.
STEP 3: Send a message to the mbone list for your region asking to hook in, then coordinate with existing nodes to join the tunnel topology.
    phyint <local-addr> [disable] [metric <m>] [threshold <t>]

    tunnel <local-addr> <remote-addr> [metric <m>] [threshold <t>]

The phyint command can be used to disable multicast routing on the
physical interface identified by local IP address <local-addr>, or to
associate a non-default metric or threshold with that interface.

The tunnel command can be used to establish a tunnel link between
local IP address <local-addr> and remote IP address <remote-addr>,
and to associate a non-default metric or threshold with that tunnel.
The metric is the "cost" associated with sending a datagram on the
given interface or tunnel; it may be used to influence the choice
of routes. The metric defaults to 1. Metrics should be kept as
small as possible, because mrouted cannot route along paths with a
sum of metrics greater than 31. When in doubt, the following
metrics are recommended:
    LAN, or tunnel across a single LAN:                  1
    any subtree with only one connection point:          1
    serial link, or tunnel across a single serial link:  1
    multi-hop tunnel:                                    2 or 3
    backup tunnels:          sum of metrics on primary path + 1

The threshold is the minimum IP time-to-live required for a
multicast datagram to be forwarded to the given interface or
tunnel. It is used to control the scope of multicast datagrams.
(The TTL of forwarded packets is only compared to the threshold;
it is not decremented by the threshold. Every multicast router
decrements the TTL by 1.) The default threshold is 1.

Since the multicast routing protocol implemented by mrouted does
not yet prune the multicast delivery trees based on group
membership (it does something called "truncated broadcast", in
which it prunes only the leaf subnets off the broadcast trees), we
instead use a kludge known as "TTL thresholds" to prevent
multicasts from traveling along unwanted branches. This is NOT
the way IP multicast is supposed to work; MOSPF does it right, and
mrouted will do it right some day.

Before the November 1992 IETF we established the following
thresholds. The "TTL" column specifies the originating IP
time-to-live value to be used by each application. The "thresh"
column specifies the mrouted threshold required to permit passage
of packets from the corresponding application, as well as packets
from all applications above it in the table:

                                        TTL  thresh
                                        ---  ------
    IETF chan 1 low-rate GSM audio      255    224
    IETF chan 2 low-rate GSM audio      223    192
    IETF chan 1 PCM audio               191    160
    IETF chan 2 PCM audio               159    128
    IETF chan 1 video                   127     96
    IETF chan 2 video                    95     64
    local event audio                    63     32
    local event video                    31      1

It is suggested that a threshold of 128 be used initially, and then
raised to 160 or 192 only if the 64 Kb/s PCM voice is excessive (GSM
voice is about 18 Kb/s), or lowered to 64 to allow video to be
transmitted into the tunnel.
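As a worked example of these conventions: an IETF channel 1 video
packet originates with TTL 127; after crossing three multicast routers
its TTL is 124, so it passes a tunnel configured with threshold 96
(124 >= 96) but is dropped at a tunnel with threshold 128. A channel 1
PCM audio packet originating with TTL 191 would still pass the
threshold-128 tunnel.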
Mrouted will not initiate execution if it has fewer than two
enabled vifs, where a vif (virtual interface) is either a physical
multicast-capable interface or a tunnel. It will log a warning if
all of its vifs are tunnels, based on the reasoning that such an
mrouted configuration would be better replaced by more direct
tunnels (i.e., eliminate the middle man). However, to create a
hierarchical fanout for the MBONE, we will have mrouted
configurations that consist only of tunnels.
Once you have edited the mrouted.conf file, you must run mrouted
as root. See ipmulticast.README for more information.
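For illustration, a minimal mrouted.conf for a leaf site might look
like this (the addresses are made up; choose metrics and thresholds
from the guidelines above):

    # disable multicast routing on a second physical interface
    phyint 128.9.160.4 disable

    # tunnel across a single serial link to the provider's mrouted:
    # metric 1 per the list above; threshold 128 passes IETF audio
    # but blocks IETF video
    tunnel 128.9.160.3 192.31.4.7 metric 1 threshold 128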
A pre-release of the LBL audio tool "vat" is available by
anonymous FTP from ftp.ee.lbl.gov in the file vat.tar.Z. Included
are a binary suitable for use on any version of SPARCstation, and
a manual entry. Also available are dec-vat.tar.Z for the DEC 5000
and sgi-vat.tar.Z for the SGI Indigo. The authors, Van Jacobson
and Steve McCanne, say the source will be released "soon". You
may find that the vat tar file includes a patch for the kernel
file in_pcb.c. This has been superseded by a patch that is now
included in the IP multicast software release for SunOS. These
patches allow demultiplexing of separate multicast addresses so
that multiple copies of vat can be run for different conferences
at the same time.
In addition, a beta release of both binary and source for the
UMass audio tool NEVOT, written by Henning Schulzrinne, is
available by anonymous FTP from gaia.cs.umass.edu in the pub/nevot
directory (the filename may change from version to version).
NEVOT runs on the SPARCstation and on the SGI Indigo.
You can test vat or NEVOT point-to-point between two hosts with a
standard SunOS kernel, but to conference with multiple sites you
will need a kernel with IP multicast support added. IP multicast
invokes Ethernet multicast to reach hosts on the same subnet; to
link multiple subnets you can set up tunnels, assuming sufficient
bandwidth exists.
Once you build the SunOS kernel, you should make sure that the
kernel audio buffer size variable is patched from the standard
value of 1024 to be 160 decimal to match the audio packet size for
minimum delay. The IP multicast software release includes patched
versions of the audio driver modules, but if for some reason you
can't use them, you can use adb to patch the kernel as shown
below. These instructions are for SunOS 4.1.1 and 4.1.2; change
the variable name to amd_bsize for 4.1.3, or Dbri_recv_bsize for
the SPARC 10:
    adb -k -w /vmunix /dev/mem
    audio_79C30_bsize/W 0t160       (to patch the running kernel)
    audio_79C30_bsize?W 0t160       (to patch kernel file on disk)

If the buffer size is incorrect, there will be bad breakup when
sound from two sites gets mixed for playback.
The video we used for the July 1992 IETF was the DVC (desktop
video conferencing) program from BBN, written by Paul Milazzo and
Bob Clements. This program has since become a product, called
PictureWindow. Contact picwin-sales@bbn.com for more information.
For the November 1992 IETF and several events since then, we have
used two other programs. The first is the "nv" (network video)
program from Ron Frederick at Xerox PARC, available from
parcftp.xerox.com in the file pub/net-research/nv.tar.Z. An 8-bit
visual is recommended to see the full image resolution, but nv
also implements dithering of the image for display on 1-bit
visuals (monochrome displays). Shared memory will be used if
present for reduced processor load, but display to remote X
servers is also possible. On the SPARCstation, the VideoPix card
is required to originate video. Sources are also available,
as are binary versions for the SGI Indigo and DEC 5000 platforms.
Also available from INRIA is the IVS program written by Thierry
Turletti and Christian Huitema. It uses a more sophisticated
compression algorithm, a software implementation of the H.261
standard. It produces a lower data rate, but because of the
processing demands the frame rate is much lower and the delay
higher. System requirements: SUN SPARCstation or SGI Indigo,
video grabber (VideoPix Card for SPARCstations), video camera,
X-Windows with Motif or Tk toolkit. Binaries and sources are
available for anonymous ftp from avahi.inria.fr in the file
pub/videoconference/ivs.tar.Z or ivs_binary_sparc.tar.Z.
Schedules for IETF audio/videocasts and some other events are
announced on the IETF mailing list (send a message to
ietf-request@cnri.reston.va.us to join). Some events are also
announced on the rem-conf mailing list, along with discussions of
protocols for remote conferencing (send a message to
rem-conf-request@es.net to join).
IP multicast host extensions are being added to some vendors'
operating systems. That's one of the first steps. Proteon has
announced IP multicast support in their routers. No network
provider is offering production IP multicast service yet.
What hardware and software is required to receive audio?
The platform we've been primarily using is the Sun SPARCstation,
but also the SGI Indigo. You don't need any additional hardware
(assuming yours is new enough that it came with a microphone, else
you have to buy one). The audio coding is provided by the
built-in 64 Kb/s audio hardware plus software compression for
reduced data rates (32 Kb/s ADPCM, 13 Kb/s GSM, and 4.8 Kb/s LPC).
The software for packet audio and video is available by anonymous
FTP. In the future, we expect this or similar software to be
available for other platforms such as NeXT or Macintosh. One key
requirement, however, is that the host machine have IP multicast
software added to its kernel. You can add it now to SunOS 4.1.1,
4.1.2, or 4.1.3; Sun includes it in the standard kernel with Solaris
2.1 (though these programs may not yet run on Solaris).
What hardware and software is required to receive video?
No special hardware is required to receive the slow-frame-rate
video prevalent on the MBONE because the decoding and display is
done all in software. The data rate is typically 25-150 Kb/s. To
be able to send video requires a camera and a frame grabber. Any
camcorder with a video output will do. For monitor-top mounting,
the wide-angle range is most important. There is also a small
(about 2x2x5 inches) monochrome CCD camera suitable for desktop
video conference applications available for around $200 from
Stanley Howard Associates, Thousand Oaks, CA, phone 805-492-4842.
Subjectively, it seems to give a picture somewhat less crisp than
a typical camcorder, but sufficient for 320x240 resolution
software video algorithms. There are also color and infrared (for
low light, with IR LED illumination) models. The programs listed
below use the Sun VideoPix card to input video on the SPARCstation.
How can I find out about teleconference events?
Many of the audio and video transmissions over the MBONE are
advertised in "sd", the session directory tool developed by
Van Jacobson at LBL. Session creators specify all the address
parameters necessary to join the session, then sd multicasts the
advertisement to be picked up by anyone else running sd. The
audio and video programs can be invoked with the right parameters
by clicking a button in sd. From ftp.ee.lbl.gov, get the file
sd.tar.Z or sgi-sd.tar.Z or dec-sd.tar.Z.
Have there been any movements towards productizing any of this?
The network infrastructure will require resource management
mechanisms to provide low delay service to real-time applications
on any significant scale. That will take a few years. Until that
time, product-level robustness won't be possible. However,
vendors are certainly interested in these applications, and
products may be targeted initially to LAN operation.