How to configure multiple VLANs on QNAP TS-869U

It's unbelievable that QNAP still doesn't support multiple VLANs on a single bond0 interface via the GUI, even though they have just released QTS v4.1.0, the NAS operating system for QNAP. The underlying Linux OS does support it, and there should at least not be any problems with Intel chipsets. Some people are reporting problems with Marvell chipsets, but I haven't tried those.

I wanted to use the QNAP as iSCSI storage for my lab over a second interface, with full redundancy and maximum bandwidth (2x 1GbE) towards my ESXi hosts, and I didn't want that interface routed. At the same time I of course need to be able to manage the QNAP via the main interface, which is routed.

This CLI "hack" will at configure the QNAP for a second VLAN interface that will be persistent during reboots. It's not been verified that it works after an upgrade of the firmware, but I presumed it will.

To get this to work I presume you already have the following working:
  • Network configured with LACP, with an IP for management, gateway etc.
  • A small DataVolume created in your StoragePool.
    • I've created a volume of 10GB called 'SysVol01'
    • In this volume there should be a .qpkg folder: /share/CACHEDEV1_DATA/.qpkg (see the quick check after this list)
  • QTS v4.1.0 installed (only version I've tested - 2014-06-26)
  • Know how to log in with SSH and edit files with 'vi'.
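
Before starting, it can be worth confirming that the hidden .qpkg folder actually exists on the data volume (the path matches my setup; adjust CACHEDEV1_DATA if your volume is mounted elsewhere):

    # Check that the hidden .qpkg folder is present on the first data volume
    ls -ld /share/CACHEDEV1_DATA/.qpkg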

The configuration and output below assume that I already have VLAN 10 for management; I'm adding VLAN 20 for iSCSI without a gateway and enabling Jumbo Frames.

CLI Configuration

  1. Log into your QNAP with SSH.
  2. Edit the QPKG config file to define your own "network package"

    vi /etc/config/qpkg.conf

    Input the following:

    [my-network]
    Name = autorun
    Version = 0.1
    Author = joffer
    Date = 2014-06-26
    Shell = /share/CACHEDEV1_DATA/.qpkg/my-network/autorun.sh
    Install_Path = /share/CACHEDEV1_DATA/.qpkg/my-network
    QPKG_File = my-network.qpkg
    Enable = TRUE
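
    To double-check that QTS can read the new section, the getcfg utility included in the firmware can query it (section and field names as defined above):

    getcfg my-network Enable -f /etc/config/qpkg.conf
    # Should print TRUE (the value of Enable set above)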

  3. Create the folder for your qpkg

    mkdir /share/CACHEDEV1_DATA/.qpkg/my-network/

  4. Create the autorun.sh script that will configure your additional network settings

    vi /share/CACHEDEV1_DATA/.qpkg/my-network/autorun.sh

    Populate the script:

    #!/bin/bash
    
    # Configure MTU JumboFrames on bond0
    # (disabled here, done in WebGUI for main interface) 
    #/sbin/ifconfig bond0 mtu 9000
    
    # Configure a new VLAN - VLAN ID 20
    /usr/local/bin/vconfig add bond0 20
    
    # Configure IP address for this VLAN (not routed) and enable JumboFrames
    /sbin/ifconfig bond0.20 10.10.20.101 broadcast 10.10.20.255 netmask 255.255.255.0
    /sbin/ifconfig bond0.20 mtu 9000
    
    # General Network Tuning
    # Source: http://wiki.qnap.com/wiki/Running_Your_Own_Application_at_Startup#Optimized_networking
    /sbin/ifconfig eth0 txqueuelen 50000
    /sbin/ifconfig eth1 txqueuelen 50000
    echo 1 > /proc/sys/net/ipv4/tcp_rfc1337
    echo 2 > /proc/sys/net/ipv4/tcp_frto
    echo 2 > /proc/sys/net/ipv4/tcp_frto_response
    echo 1 > /proc/sys/net/ipv4/tcp_mtu_probing
    echo 1 > /proc/sys/net/ipv4/tcp_window_scaling
    echo 1 > /proc/sys/net/ipv4/tcp_workaround_signed_windows
    echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
    echo 0 > /proc/sys/net/ipv4/tcp_tw_recycle
    echo 1 > /proc/sys/net/ipv4/tcp_low_latency
    echo 1 > /proc/sys/net/ipv4/tcp_ecn
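
    Depending on the firmware it may also be necessary to make the script executable, and it doesn't hurt to run it once by hand before rebooting to catch typos:

    chmod +x /share/CACHEDEV1_DATA/.qpkg/my-network/autorun.sh
    # Run the script manually and check that the new interface shows up
    sh /share/CACHEDEV1_DATA/.qpkg/my-network/autorun.sh
    ifconfig bond0.20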

  5. Log out and restart to verify that it works. You need to log into the box with SSH and run 'ifconfig' or 'ip a' to see the extra network configuration; the WebGUI only shows the default configuration. Example (#6 is the new VLAN interface):

    [~] # ip a
    
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
    2: eth1: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 9000 qdisc pfifo_fast master bond0 state DOWN qlen 50000
        link/ether 00:08:9b:d4:ff:32 brd ff:ff:ff:ff:ff:ff
    3: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master bond0 state UP qlen 50000
        link/ether 00:08:9b:d4:ff:32 brd ff:ff:ff:ff:ff:ff
    4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP
        link/ether 00:08:9b:d4:ff:32 brd ff:ff:ff:ff:ff:ff
    5: bond0.10@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
        link/ether 00:08:9b:d4:ff:32 brd ff:ff:ff:ff:ff:ff
        inet 10.10.10.101/24 brd 10.10.10.255 scope global bond0.10
    6: bond0.20@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP
        link/ether 00:08:9b:d4:ff:32 brd ff:ff:ff:ff:ff:ff
        inet 10.10.20.101/24 brd 10.10.20.255 scope global bond0.20
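
    From an ESXi host with a VMkernel port on the iSCSI VLAN you can also verify end-to-end jumbo frame support before pointing the software iSCSI initiator at the new address (8972 bytes of ICMP payload plus headers makes a full 9000-byte frame; the IP is the one configured above):

    # Run on the ESXi host: -d sets "don't fragment", -s the payload size
    vmkping -d -s 8972 10.10.20.101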

Comments

Anonymous said…
Hello,

thank you for the great howto.
Do you know a way to bind a specific service to one VLAN? At the moment all services are on all VLANs. In the GUI the second one is not visible, so I can't do any configuration there.

Thank you.

Fabian
Joffer said…
No, sorry. I don't have a QNAP to test with either... it's been a long time since I used QNAP.
Dale said…
Yep I am having the same issue... I can't believe that QNAP does not support multiple VLANs over a LACP link... it's quite a joke!!!
Unknown said…
With firmware version 4.3.6.0993 (at least) on a TS-431XeU, the suggested configuration doesn't work anymore.

Starting with 4.3.3, QNAP provides a new way of enabling autostart scripts.
It's described here: https://wiki.qnap.com/wiki/Running_Your_Own_Application_at_Startup

So, the short way:

Enable autorun.sh:
Control Panel → Hardware → General: Run user defined startup processes (autorun.sh)

Enable Jumbo Frames up to MTU 9000:
Control Panel → network → Interfaces → Interface config → set MTU value to 9000

Mount the config storage (valid for AL-based NAS (TS-x31+ and TS-x31X) and the TS-x31; for other models see the wiki link):
ubiattach -m 6 -d 2
/bin/mount -t ubifs ubi2:config /tmp/config

Create / edit autorun.sh:
vi /tmp/config/autorun.sh

Content:
#!/bin/bash

# Configure MTU JumboFrames on bond0
# (Do it in WebGUI for main interface)

# Configure VLAN ID 90
ip link add link bond0 name bond0.90 type vlan id 90

# Configure IP address for this VLAN (not routed) and enable JumboFrames
/sbin/ifconfig bond0.90 10.10.90.10 broadcast 10.10.90.255 netmask 255.255.255.0
/sbin/ifconfig bond0.90 mtu 9000

# General Network Tuning
/sbin/ifconfig eth0 txqueuelen 50000
/sbin/ifconfig eth1 txqueuelen 50000
echo 1 > /proc/sys/net/ipv4/tcp_rfc1337
echo 2 > /proc/sys/net/ipv4/tcp_frto
echo 1 > /proc/sys/net/ipv4/tcp_mtu_probing
echo 1 > /proc/sys/net/ipv4/tcp_window_scaling
echo 1 > /proc/sys/net/ipv4/tcp_workaround_signed_windows
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
echo 0 > /proc/sys/net/ipv4/tcp_tw_recycle
echo 1 > /proc/sys/net/ipv4/tcp_low_latency
echo 1 > /proc/sys/net/ipv4/tcp_ecn

Make autorun.sh executable and unmount config storage:
chmod +x /tmp/config/autorun.sh
echo .
echo "unmounting /tmp/config..."
umount /tmp/config
ubidetach -m 6
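
After a reboot you can check that the VLAN interface exists with the expected ID and MTU (bond0.90 is the name used in the script above):

# Show VLAN details for the new interface
ip -d link show bond0.90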
Unknown said…
I also didn't find a way to bind iSCSI to just one IP.
Enabling service binding via the WebUI and disabling iSCSI on the trunk where both VLANs are configured disables the binding for both IPs, even though only the first IP (the one configured via the WebUI) is visible.

Perhaps it would be possible to develop a QPKG package capable of adding multiple VLAN interfaces to the WebUI. This might be a good starting point: https://wiki.qnap.com/wiki/QPKG_Development_Guidelines
