Akamai Multicast Interconnect for Carriers (AMIC) - Lab Setup

Document created by Cheng Jin on May 31, 2017

Intro

We introduced the recipe for connecting to the Akamai Multicast Backbone in https://community.akamai.com/docs/DOC-8118

In this document, we describe how one can build a lab setup to connect to our backbone and receive the test Multicast stream.

Testing and Debugging

Multicast Receiver Directly Attached to AMT Gateway

Setup

We built a simple topology in the lab to connect to and verify the Akamai Multicast backbone.  We set up an Ubuntu 16.04 KVM host (using the Ubuntu cloud image from http://cloud-images.ubuntu.com/releases/) with two guests: the qcow2 version of the Cisco CSR1000v, release 3.16.4bS (the qcow2 image is named csr1000v-universalk9.03.16.04b.S.155-3.S4b-ext.qcow2), and an Ubuntu 16.04 guest acting as the Multicast receiver.

In addition to the AMT configuration specified in the previous section, the CSR1000v is configured with 192.168.122.2 on GigabitEthernet1 and 10.100.200.1 on GigabitEthernet2.
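The resulting topology can be sketched as follows (the receiver-side interface name is an assumption based on the guest configuration shown later):

```
        virbr0 (192.168.122.1, NAT to outside)
                      |
      GigabitEthernet1 (192.168.122.2)
               +-------------+
               |  CSR1000v   |  (AMT gateway)
               +-------------+
      GigabitEthernet2 (10.100.200.1)
                      |
              virbr1 (isolated1)
                      |
             ens6 (10.100.200.100)
               +-------------+
               | Ubuntu 16.04|  (Multicast receiver)
               +-------------+
```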

Lab Config Instructions 

First, one needs to set up another network, for example isolated1 (see the XML definition below), using a new virtual bridge interface, virbr1.

user@u1604:~$ cat isolated1.xml
<network ipv6='yes'>
  <name>isolated1</name>
  <bridge name='virbr1' stp='on' delay='0'/>
</network>
user@u1604:~$ virsh net-define isolated1.xml
Network isolated1 defined from isolated1.xml
user@u1604:~$ virsh net-autostart isolated1
Network isolated1 marked as autostarted
user@u1604:~$ virsh net-start isolated1
Network isolated1 started
user@u1604:~$ virsh attach-interface u1604.2 bridge virbr1
Interface attached successfully
user@u1604:~$ virsh attach-interface csr1k bridge virbr1
Interface attached successfully
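Note that virsh attach-interface without flags only modifies the running domain, so the NIC disappears on the next guest restart.  A sketch, assuming the domain names used above: the attach commands can be issued with --live --config instead so the attachment also lands in the persistent domain XML, and the bridge state can then be verified from the host.

```shell
# Attach to the running domain AND persist in the domain definition:
virsh attach-interface u1604.2 bridge virbr1 --live --config
virsh attach-interface csr1k bridge virbr1 --live --config

# Verify the network is active and the bridge has the guest interfaces:
virsh net-list --all
brctl show virbr1
```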


Second, one needs to configure the newly available interfaces with the intended network settings on both the CSR1000v and the Ubuntu KVM guest.

For CSR1000v:

csr1# config t
Enter configuration commands, one per line.  End with CNTL/Z.
csr1(config)#interface GigabitEthernet2
csr1(config-if)#ip address 10.100.200.1 255.255.255.0
csr1(config-if)#ip pim passive
csr1(config-if)#ip igmp version 3
csr1(config-if)#ip igmp explicit-tracking
csr1(config-if)#no shutdown
csr1(config-if)#negotiation auto
csr1(config-if)#exit
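After the configuration is applied, the interface state and IGMP settings can be checked with standard IOS-XE show commands (exact output varies by release):

```
csr1# show ip interface brief | include GigabitEthernet2
csr1# show ip igmp interface GigabitEthernet2
```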

For Ubuntu KVM guest:

user@u1604:~$ cat /etc/network/interfaces.d/50-cloud-init.cfg
auto ens6
iface ens6 inet static
    address 10.100.200.100
    netmask 255.255.255.0
    gateway 10.100.200.1
    dns-nameservers 8.8.8.8
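Once the file is in place, the interface can be brought up and first-hop connectivity verified from the guest.  This is a sketch assuming ens6 is the interface name, as in the config above:

```shell
# Apply the static configuration and confirm the address took effect:
sudo ifdown ens6 2>/dev/null; sudo ifup ens6
ip addr show dev ens6

# The CSR1000v's GigabitEthernet2 should answer:
ping -c 3 10.100.200.1
```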


Third, one also needs to add static routes on the KVM host so that the subnets 10.100.200.0/24 and 10.100.201.0/24 are reachable from the KVM host.  I have experienced failures in establishing the tunnel because these static routes were not properly configured, and wasted a lot of time suspecting an incomplete iptables configuration instead.

sudo route add -net 10.100.200.0 netmask 255.255.255.0 gw 192.168.122.2
sudo route add -net 10.100.201.0 netmask 255.255.255.0 gw 192.168.122.2
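On newer systems the iproute2 equivalents work as well, and ip route get confirms which next hop the host will use.  A sketch, to be run after (or instead of) the route commands above:

```shell
sudo ip route add 10.100.200.0/24 via 192.168.122.2
sudo ip route add 10.100.201.0/24 via 192.168.122.2

# Should report the route via 192.168.122.2 on the virbr0 bridge:
ip route get 10.100.200.100
```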


Lastly, both the 10.100.200.0/24 and 10.100.201.0/24 networks need to be NAT'ed to the outside via the virbr0 interface (which has the 192.168.122.1 IP).  libvirt sets up NAT for the 192.168.122.0/24 network automatically, but one has to add the iptables rules for the 10.100.200.0/24 and 10.100.201.0/24 networks manually, modeled after the existing 192.168.122.0/24 rules.

In the nat table:

# add DNAT entry for ssh
#-A PREROUTING -p tcp --dport 2200 -j LOG --log-prefix "ssh-"
-A POSTROUTING -s 10.100.200.0/24 -d 224.0.0.0/24 -j RETURN
-A POSTROUTING -s 10.100.200.0/24 -d 255.255.255.255/32 -j RETURN
-A POSTROUTING -s 10.100.200.0/24 ! -d 10.100.200.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 10.100.200.0/24 ! -d 10.100.200.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 10.100.200.0/24 ! -d 10.100.200.0/24 -j MASQUERADE
  
-A POSTROUTING -s 10.100
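Since the rules for the two lab subnets mirror each other line for line, a small generator script avoids copy-paste mistakes.  This is a sketch; feed its output into your distribution's iptables persistence mechanism (e.g. an iptables-save style rules file).

```shell
# Emit the five POSTROUTING NAT rules for each lab subnet.
rules=""
for net in 10.100.200.0/24 10.100.201.0/24; do
  rules="$rules
-A POSTROUTING -s $net -d 224.0.0.0/24 -j RETURN
-A POSTROUTING -s $net -d 255.255.255.255/32 -j RETURN
-A POSTROUTING -s $net ! -d $net -p tcp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s $net ! -d $net -p udp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s $net ! -d $net -j MASQUERADE"
done
printf '%s\n' "$rules"
```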