Netflix – Application-based routing over another VPN
In my last blogpost I already showed how you could use a VPN connection for one application, while using your normal WAN for all other applications. Although I’m using Netflix as an example, you could use the same mechanism for virtually any other IP-based application.
[h2]The idea[/h2]
For many people the solution suggested there will suffice. However, like more and more people these days, I proxy all my traffic through my (EU-based) VPS. This is perfectly doable with the setup I described in my last blogpost, but it becomes tedious to set up once you have multiple devices. Therefore, I want to do all my routing on my VPS, as outlined in this diagram:
[pre]
                                      +----------+     +----------+
+----------+     +----------+    /--->|   VPN    |---->| Netflix  |
|          |     |          |---/     +----------+     +----------+
| Desktop  |---->|   VPS    |
|          |     |          |---\      /-----^-----\
+----------+     +----------+    \--->|  Internet   |
                                      \             /
                                       \-----------/
[/pre]
The connection between my Desktop and VPS is an OpenVPN tunnel. The connection between the VPS and the VPN is also an OpenVPN tunnel, provided by Private Internet Access (PIA). We cannot simply duplicate the logic from my last blogpost in this situation, because it is based on the UID of the Netflix application, and in the current setup the routing is done on a different machine than the one the Netflix application runs on. In this particular example the advantages of routing to the VPN service from PIA are limited, but once you get multiple devices (laptop, mobile phone…) on the left side of the VPS, and start using multiple VPN services or locations, it starts to pay off.
[h2]IPIP[/h2]
We need a way to tell the VPS which gateway to use. It would of course be possible to set up another VPN connection between my Desktop and VPS, but that becomes tedious once the number of clients and gateways increases. In my current setup OpenVPN is configured in routed mode (using TUN interfaces rather than TAP), which means we cannot use any layer-2 specific features (like abusing the QoS bit to indicate the desired gateway). I decided to go with an IPIP (IP-over-IP) tunnel. When an IP packet is sent through an IPIP tunnel, the only thing that happens is that an extra outer IP header (with its own source and destination address) is prepended to the original packet.
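To get a feel for what this looks like, you can create a throwaway IPIP tunnel and inspect it. This is just a hedged sketch: the interface name and the 192.0.2.x endpoints are made up for illustration, and the commands need root:

```shell
# Hypothetical demo tunnel; 192.0.2.x are documentation-only addresses.
ip tunnel add demo0 mode ipip remote 192.0.2.2 local 192.0.2.1
# Shows mode ip/ip with the chosen endpoints.
ip -d tunnel show demo0
# Any traffic sent through demo0 appears on the underlying interface as
# IP protocol 4 (IPIP): the original packet wrapped in one extra IP header.
ip tunnel del demo0
```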
[h2]Setting it up[/h2]
In the steps outlined below I’ll assume you’ve already set up a VPN connection between the Desktop and the VPS. If you have not, you could take a look at this blogpost. On both the client and server the interfaces are called tun0. Furthermore, I’ll assume you’ve already configured a VPN on the VPS towards the US, called tunUS, as described in my last post.
[h3]Creating the IPIP-tunnel[/h3]
Because the IPIP-tunnel can only be initialized once the OpenVPN connection between the Desktop (client) and VPS (server) has been established, I put the logic for creating it in a script that OpenVPN calls once the connection is initialized (using the OpenVPN ‘up’ directive).
On the client side:
#!/bin/bash
# make sure all routes have been initialized
sleep 5
ip tunnel add tunUS-in-tun0 mode ipip remote 172.31.254.1 local 172.31.254.10
ip address add 172.31.252.2/30 dev tunUS-in-tun0
ip link set tunUS-in-tun0 up
# the connected route must exist in table 2 before the default route can use it
ip route add 172.31.252.0/30 dev tunUS-in-tun0 proto kernel scope link src 172.31.252.2 table 2
ip route add default via 172.31.252.1 src 172.31.252.2 table 2
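Once the up script has run, you can sanity-check the result before going any further. These are just inspection commands; the exact output shape depends on your iproute2 version:

```shell
# Should report an ipip tunnel with the endpoints configured above.
ip -d link show tunUS-in-tun0
# Should list the connected 172.31.252.0/30 route and the default
# route via 172.31.252.1.
ip route show table 2
```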
The 172.31.254.1 and .10 addresses are the OpenVPN gateway and client IP. I’d have preferred to use the variables that OpenVPN passes to this script, but for some reason the gateway IP that is supplied is wrong (or technically right, but unusable for this purpose). On the server side we have a script that is almost the same:
#!/bin/bash
# make sure all routes have been initialized
sleep 5
ip tunnel add tunUS-in-tun0 mode ipip remote 172.31.254.10 local 172.31.254.1
ip address add 172.31.252.1/30 dev tunUS-in-tun0
ip link set tunUS-in-tun0 up
ip route add 172.31.252.0/30 dev tunUS-in-tun0 proto kernel scope link src 172.31.252.1 table 2
[h3]Directing traffic on the client[/h3]
Next up, we need to direct the traffic from the Netflix application into the IPIP-tunnel. You could very well add this to the same script that creates the IPIP-tunnel on the client:
iptables -t mangle -A OUTPUT -m owner --gid-owner netflix -j MARK --set-mark 2
iptables -t nat -A POSTROUTING -o tunUS-in-tun0 -j SNAT --to 172.31.252.2
ip rule add fwmark 2 table 2
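For the --gid-owner match to fire, the application actually has to run with that group. A minimal sketch, assuming a dedicated ‘netflix’ group as set up in the previous post (the group name is an assumption here, and creating it needs root):

```shell
# One-time setup: create the group the iptables owner match looks for
# (hypothetical name, matching the --gid-owner rule above).
groupadd netflix
# Run a command with 'netflix' as its effective group, so its outgoing
# packets get fwmark 2 and are routed through the IPIP tunnel.
sg netflix -c 'curl -s https://example.com >/dev/null'
```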
To verify that everything so far works, start a ping as user netflix on the client to a random (public) IP, and verify with tcpdump on the server that the traffic comes in from the right IP on the tunUS-in-tun0 interface.
[h3]Forwarding traffic from Client to VPN on VPS[/h3]
To get the traffic from the VPS to Netflix via the VPN service, we still need to route all traffic from the tunUS-in-tun0 interface to the tunUS interface on the VPS:
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 172.31.252.0/24 -o tunUS -j SNAT --to-source 10.198.1.6
ip rule add from 172.31.252.0/24 lookup 2
ip route add default via 10.198.1.5 dev tunUS src 10.198.1.6
In this case, 10.198.1.5 and 10.198.1.6 are the gateway and client IP of my VPN service; I retrieved them using:
# ip a s tunUS
17: tunUS:
link/none
inet 10.198.1.6 peer 10.198.1.5/32 scope global tunUS
You could probably retrieve these from the environment variables OpenVPN exports when invoking a script through the ‘up’ configuration directive. Beware: in the case of PIA these addresses change about once a day, so hardcoding them may not be the way to go.
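Rather than hardcoding the addresses, a small helper can pull them out of the `ip address show` output shown above. This is a sketch under the assumption that the ‘inet … peer …’ line keeps the shape shown earlier; the function name is my own:

```shell
# tun_addrs: read `ip address show <dev>` output on stdin and print
# "<local> <peer>", stripping the prefix length from the peer address.
tun_addrs() {
  awk '/inet .* peer / { sub(/\/.*/, "", $4); print $2, $4 }'
}
```

You could then call it as `addrs=$(ip address show tunUS | tun_addrs)` inside the up script, instead of pasting in the addresses by hand.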
[h3]RP_Filter[/h3]
Linux has a feature called reverse-path filtering. It drops incoming packets whose source address would not be routed back out through the interface they arrived on. Because it does not take the use of multiple routing tables into account, we need to disable it on both the client and the server:
for f in /proc/sys/net/ipv4/conf/*/rp_filter; do
echo 0 > $f
done
[h2]Troubleshooting the setup[/h2]
As with all debugging, start by initiating traffic on the client, as the Netflix application would:
client# sudo -u netflix ping 8.8.8.8
Then, you should see the traffic going out on the right interface:
client# sudo tcpdump -i tunUS-in-tun0 -ns0 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tunUS-in-tun0, link-type RAW (Raw IP), capture size 65535 bytes
03:13:47.599418 IP 172.31.252.2 > 8.8.8.8: ICMP echo request, id 25294, seq 115, length 64
03:13:47.835465 IP 8.8.8.8 > 172.31.252.2: ICMP echo reply, id 25294, seq 115, length 64
Please take note of the IP addresses. If these don’t match, things are bound to go wrong.
Once you’ve confirmed the traffic leaves your desktop through the right interface using the right IP addresses, it’s time to hop onto the VPS and see whether the traffic comes in there:
server# sudo tcpdump -i tunUS-in-tun0 -ns0 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tunUS-in-tun0, link-type RAW (Raw IP), capture size 65535 bytes
03:12:38.890815 IP 172.31.252.2 > 8.8.8.8: ICMP echo request, id 25294, seq 64, length 64
03:12:39.104200 IP 8.8.8.8 > 172.31.252.2: ICMP echo reply, id 25294, seq 64, length 64
03:12:39.894865 IP 172.31.252.2 > 8.8.8.8: ICMP echo request, id 25294, seq 65, length 64
03:12:40.108261 IP 8.8.8.8 > 172.31.252.2: ICMP echo reply, id 25294, seq 65, length 64
The traffic you see here should match 1:1 with the output of the previous command. If it does not (or you have no traffic at all on this interface), there’s something wrong with your IPIP tunnel.
The last step is to verify that the traffic towards the VPN service leaves your VPS correctly:
server# sudo tcpdump -i tunUS -ns0 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tunUS, link-type RAW (Raw IP), capture size 65535 bytes
03:17:38.141751 IP 10.198.1.6 > 8.8.8.8: ICMP echo request, id 25294, seq 363, length 64
03:17:38.355633 IP 8.8.8.8 > 10.198.1.6: ICMP echo reply, id 25294, seq 363, length 64
03:17:39.136641 IP 10.198.1.6 > 8.8.8.8: ICMP echo request, id 25294, seq 364, length 64
03:17:39.350106 IP 8.8.8.8 > 10.198.1.6: ICMP echo reply, id 25294, seq 364, length 64
The outgoing IP here should match the tunUS address. If you don’t see any traffic at all, it’s likely that one of your iptables rules, ip rules, or routes was dropped. This can happen when your tunUS interface is removed (for example when you restart OpenVPN – or PIA does).
[h3]RP_Filter[/h3]
If you do see traffic coming in at one place, but it doesn’t reach the application or the next hop, it may not be related to any of your firewall rules, ip rules, or routes: it probably means reverse-path filtering is kicking in. You can debug this using:
echo 1 >/proc/sys/net/ipv4/conf/
Edit September 24, 2013: I did some more digging. It turns out rp_filter will by definition have to be disabled. The post has been modified to reflect this.