However, it's a NAS, how hard can it be to make it work and just leave it alone?
Well, it seems like it's quite hard. For some reason known only to themselves and their crazy programmers, Thecus had hard-coded the number of iSCSI connections that the NAS could handle to 8. With multipathing, that meant that you could only connect 4 devices to it, and no more.
That left me somewhat disheartened with the device and I exported a few disks via NFS (which was already unusually slow), but pretty much gave up on it.
That was, until Ubuntu 16.04 turned up, with built in ZFS and clever things like that, *and* a nice easy way to install it. So here's the walkthrough.
Parts Required
1 x N8800
1 x Screwdriver to undo the screws holding the top on
1 x USB Device to turn into a Ubuntu 16.04 Installer
1 x USB Device to run the system from
1 x Ubuntu Server 16.04 LTS amd64 ISO file
1 x Something to talk to the serial port of the Thecus
I happened to have a couple of identical 16GB USB2 thumb drives lying around, but it looks like even a 4GB drive would be sufficient. Note that a drive that small is likely to be USB1 rather than USB2, and will take *forever* to install from. Go spend a few dollars and get a few USB3 thumb drives. They won't work at USB3 speed, but they'll at least be faster than that USB1 thing you found behind the couch.
Serial Port?
Yeah. Serial port. Unless you're extremely lucky, your Thecus doesn't have a VGA port. If you don't have anything that can talk serial, your other option is grabbing a cheap VGA card and plugging it into the onboard PCIe slot. (If you do that, just install as per normal, and you can skip down to the 'Plug the HDDs back in and Configure ZFS' section below. If you're not comfortable messing around with serial connections, that may be the best idea.)
This is actually the fiddliest bit if you don't have the correct cable. If you don't, but you can join a few wires together, you can make a very basic null modem cable: join pin 3 on one end to pin 2 on the other end (in both directions), and pin 5 to pin 5. That's all you need. (That connects each side's 'Transmit' to the other side's 'Receive', and joins up the ground wires.)
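Once the cable is wired up, any serial terminal program will do. As a sketch, assuming you're on a Linux machine with a USB serial adapter that shows up as /dev/ttyUSB0 (check dmesg for your adapter's actual name):

```shell
# Talk to the Thecus console from another Linux machine.
# /dev/ttyUSB0 is an assumption -- check dmesg for your adapter's device name.
# The installer will be configured for 115200 baud, 8 data bits, no parity.
screen /dev/ttyUSB0 115200
# Or, if you prefer minicom:
# minicom -D /dev/ttyUSB0 -b 115200
```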
Preparing your N8800
Remove all the HDDs in it. You *will* be able to access the data on them when you finish, but as soon as you do, you're going to blow it all away anyway, so make sure everything's backed up and copied across somewhere else. You then need to open it up and remove the little flash drive. This is what it looks like:
Just pull it directly up, wiggling it from side to side. It'll pop off. Discard it, you'll never need it again. It's an interesting design, actually. It's an old-style Parallel ATA (PATA) interface, with two 128MB drives as the master and slave.
These days you'd just use a USB device.. Oh wait, that's what we're doing next!
Create a bootable USB drive
This is pretty easy. Download the Ubuntu 16.04 ISO and then (if you're on Windows) follow the 'How to make a bootable USB drive' instructions here. Don't use the DD method, as you need to edit some files slightly.
Enable serial installation on the drive
This is extremely simple. When you look at your USB drive, it'll have a file called 'syslinux.cfg' in the root. Add this line to the start:
SERIAL 0 115200 0x003
That must be the first line in the file, before the 'DEFAULT loadconfig' option.
Then, because I was lazy, I bypassed all the menu options, because I knew what I wanted to do. I changed the 'CONFIG' line to point straight to the 'txt.cfg' file. This is what my 'syslinux.cfg' file looked like:
SERIAL 0 115200 0x003
DEFAULT loadconfig
LABEL loadconfig
CONFIG /isolinux/txt.cfg
APPEND /isolinux
Then opening up the /isolinux/txt.cfg file, I had to tell Linux that I wanted to use a serial console, too:
default install
label install
  menu label ^Install Ubuntu Server
  kernel /install/vmlinuz
  append file=/cdrom/preseed/ubuntu-server.seed initrd=/install/initrd.gz console=ttyS0,115200n8 ---
Those are the only changes needed. (Note that the 'append' line is a single line, starting with 'file=' and ending with '---', even if it wraps on your screen.)
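If you're making these edits from a Linux box rather than Windows, the same two tweaks can be scripted. This is just a sketch: it assumes the USB stick's first partition is mounted at /mnt/usb, which is my assumption, not from the original instructions.

```shell
# Assumes the USB stick is mounted at /mnt/usb -- adjust paths to suit.
cfg=/mnt/usb/syslinux.cfg
txt=/mnt/usb/isolinux/txt.cfg

# The SERIAL directive must be the very first line of syslinux.cfg
sed -i '1i SERIAL 0 115200 0x003' "$cfg"

# Add the serial console argument to the kernel command line in txt.cfg
sed -i 's|initrd=/install/initrd.gz|& console=ttyS0,115200n8|' "$txt"
```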
Boot from it!
All you need to do now is plug it in and wait for it to boot. Do not plug in your other drive just yet. You don't want ANY other drives plugged in that could confuse the machine into booting from something else (you DID remove your HDDs earlier, right?).
You may also note that I disabled 'quiet' mode, because I really want to be sure it's booting when I'm sitting in front of a USB stick that's flashing without any other explanation! You should see a bunch of things fly up the console, and then it'll ask you what language you want to use. Only after you see that question should you plug your second USB drive in.
Install as per normal.
This is pretty uneventful. Install as per normal, and don't forget to turn on SSH when you're at the package selection page, otherwise you'll have a really bad time trying to log into it!
That was the hardest bit. Honestly. Now it's EASY!
Install our needed packages
We want zfsutils-linux, of course, plus the iSCSI target tools.
root@thecus:~# apt-get install zfsutils-linux iscsitarget targetcli
Plug the HDDs back in and Configure ZFS
The HDDs that I used were still perfectly visible, with all the data on them. Once I plugged them back in, they all came back up and re-established the RAID settings they were using previously. This is not what I wanted. I had to remove the RAID partitions and volumes manually, using 'vgremove' (run 'vgdisplay' to get a list; you'll have at least vg0, and maybe vg1 and vg2).
root@thecus:~# vgremove vg1
Do you really want to remove volume group "vg1" containing 3 logical volumes? [y/n]: y
Do you really want to remove and DISCARD active logical volume syslv? [y/n]: y
Logical volume "syslv" successfully removed
Do you really want to remove and DISCARD active logical volume lv0? [y/n]: y
Logical volume "lv0" successfully removed
Do you really want to remove and DISCARD active logical volume iscsi0? [y/n]: y
Logical volume "iscsi0" successfully removed
Volume group "vg1" successfully removed
root@thecus:~#
Then stop the RAIDs. (Note: I've edited out a lot of output here. I had to delete three RAIDs, and there are 7 HDDs in this machine, but this should be enough for you to get the idea.)
root@thecus:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md125 : active (auto-read-only) raid10 sdh2[0] sdg2[1] sdf2[2] sde2[3]
972674688 blocks super 1.0 64K chunks 2 near-copies [4/4] [UUUU]
root@thecus:~# mdadm --manage /dev/md125 --stop
mdadm: stopped /dev/md125
root@thecus:~# mdadm --zero-superblock /dev/sdh2 /dev/sdg2 /dev/sdf2 /dev/sde2
root@thecus:~#
You will probably have to repeat that for the other md12? arrays.
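If you have several arrays, the stop-and-wipe steps can be looped. Treat this as a sketch and double-check the member list first, since zeroing superblocks is destructive:

```shell
# Stop every remaining md array and wipe its members' RAID superblocks.
# DESTRUCTIVE: make sure you really want to erase these arrays first!
for md in $(awk '/^md/ {print $1}' /proc/mdstat); do
    # Grab the member partitions before stopping (the list vanishes afterwards)
    members=$(ls /sys/block/"$md"/slaves/ | sed 's|^|/dev/|')
    mdadm --manage "/dev/$md" --stop
    mdadm --zero-superblock $members
done
```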
Then nuke the partitions on those drives (where we're going, we don't NEED partitions!)
root@thecus:~# fdisk /dev/sdh

Welcome to fdisk (util-linux 2.27.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): d
Partition number (1,2, default 2):

Partition 2 has been deleted.

Command (m for help): d
Selected partition 1
Partition 1 has been deleted.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

root@thecus:~#
And finally, you can now assign them to a ZFS pool! (Yes, use raidz2, there's no reason not to. That's perfect for a device of this size).
Just fill it up with as many disks as you can get your hands on, and upgrade the disks as you want to. The pool will automatically grow as you replace disks with larger ones!
root@thecus:~# zpool create -o ashift=12 n8800pool raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
root@thecus:~#
You now have a ZFS pool that's ready to go! (You almost always want 'ashift=12': even if you're adding an old 512-byte-sector disk, writing to THAT in 4K chunks won't slow it down, but writing to a NEW 4K-sector disk in 512-byte chunks WILL.)
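It's worth checking the pool's health at this point. One hedged caveat about the pool "automatically growing" as you swap in bigger disks: on ZFS on Linux that only happens if the pool's autoexpand property is on, and it defaults to off, so you may want to set it now:

```shell
# Confirm the pool came up healthy with all seven disks
zpool status n8800pool
# autoexpand defaults to 'off'; turn it on so the pool grows when you
# replace disks with larger ones
zpool set autoexpand=on n8800pool
```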
Create an iSCSI target for your new ZFS pool.
This is the easy part. I'm now going to create a 1TB iSCSI volume that we'll actually use to store our data.
zfs create -o compression=off -o dedup=off -o volblocksize=32K -V 1024G n8800pool/iscsi-1
zfs set sync=disabled n8800pool/iscsi-1
Now all we need to do is set up the iSCSI target on our Thecus. Note that I'm disabling all authentication in targetcli because I'm comfortable that this machine will never be accessible to any nefarious hacker. You may not want to do that.
/> cd backstores/iblock
/backstores/iblock> create name=iscsi1 dev=/dev/zvol/n8800pool/iscsi-1
Created iblock storage object iscsi1 using /dev/zvol/n8800pool/iscsi-1.
/backstores/iblock> cd /iscsi
/iscsi> create
Created target iqn.2003-01.org.linux-iscsi.thecus.x8664:sn.0a4b33134abb.
Selected TPG Tag 1.
Created TPG 1.
/iscsi> cd iqn.2003-01.org.linux-iscsi.thecus.x8664:sn.0a4b33134abb/tpg1/
Remember, you can use tab expansion here. You don't need to copy and paste that huge string. The keypresses I used were 'create[enter]cd iq[tab][tab][enter]'.
You now need to link the target to the physical device you created earlier.
/iscsi/iqn.20...33134abb/tpg1> cd luns
/iscsi/iqn.20...abb/tpg1/luns> create /backstores/iblock/iscsi1
Selected LUN 0.
Created LUN 0.
/iscsi/iqn.20...abb/tpg1/luns> cd ..
/iscsi/iqn.20...33134abb/tpg1> set attribute authentication=0 demo_mode_write_protect=0 generate_node_acls=1 cache_dynamic_acls=1
Parameter demo_mode_write_protect is now '0'.
Parameter authentication is now '0'.
Parameter generate_node_acls is now '1'.
Parameter cache_dynamic_acls is now '1'.
/iscsi/iqn.20...33134abb/tpg1> cd portals
/iscsi/iqn.20.../tpg1/portals> create
Using default IP port 3260
Automatically selected IP address 10.91.80.189.
Created network portal 10.91.80.189:3260.
/iscsi/iqn.20.../tpg1/portals> saveconfig
Save configuration? [Y/n]:
Saving new startup configuration
/iscsi/iqn.20.../tpg1/portals> exit
Comparing startup and running configs...
Startup config is up-to-date.
One thing I have noticed is that the 'demo_mode' setting sometimes won't start working until you reboot the machine. If you DO want to enable authentication, then configure that as per normal; but if you do NOT want auth and you discover lines like this in your 'dmesg', you'll need to reboot the Thecus:
[ 6779.673114] iSCSI Initiator Node: iqn.1998-01.com.vmware:ssd-4142c0bc is not authorized to access iSCSI target portal group: 1.
[ 6779.684631] iSCSI Login negotiation failed.
Congratulations, you're done!
That is 100% it. You now have a 1TB iSCSI target being served from your Thecus. You can also do other things, like create an NFS store as part of your pool, but that's well documented elsewhere. From here, you have a fully functional Ubuntu 16.04 machine, with ZFS and super fast iSCSI.
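As a quick sketch of that NFS option, ZFS can export a dataset directly via its sharenfs property. The dataset name and network range below are example values I've made up, not part of the original setup:

```shell
# Install the NFS server, then share a new dataset from the pool.
# 'shared' and the 10.91.80.0/24 range are examples -- adjust to suit.
apt-get install nfs-kernel-server
zfs create n8800pool/shared
zfs set sharenfs='rw=@10.91.80.0/24' n8800pool/shared
# A client on that network can then mount it:
# mount -t nfs thecus:/n8800pool/shared /mnt
```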
Enjoy!