Build log: Titus/Andronicus

Discussion in 'Cases / Modding Discussion' started by stdPichu, Jul 19, 2013.

  1. stdPichu

    Evening gents. Been a while since I posted here but hopefully there's still a few interested people around :)

    Not sure this is the right forum since there isn't really any modding going on here yet, but I figured this was the best place. This is a wee server build detailing my new backup server (soon to be followed up by a new file server). This build came about because of a) my disappointment with my QNAP and b) finding that there was finally a case on the market of the same form factor as all those little NAS units. I duly informed the ever-reliable Kustom who were so kind as to go through an excessive amount of badly translated verbiage and timelines that were more like mirages through Alibaba until, finally, a small consignment of these boxes landed here in Blighty. Here beginneth the build log. But first some background.

    Back in the misty days of yore, I had a large, hulking and above all noisy Debian 4U file server that I eventually replaced with a quite ridiculously overpriced QNAP. But the QNAP was small, dead silent, fast enough, and used very little power, and professed to be linux with even less maintenance than usual. So far so good, and indeed I've been running it for three years now, having discounted the Synology models which were much less hackable when it came to getting third-party software running on them.

    However, there's been trouble in paradise. QNAP have not only been keeping woefully outdated pieces of software in the toolchain (many of which contain public and remotely exploitable vulnerabilities), but in their efforts to modernise their OS they've run into an absolute cavalcade of crap, a veritable river of regressions. Suffice to say I chose not to supply them with any more of my money and went out to build my own linux file server once more, since I have more than enough linux-fu to do so (not that you need much - setting up your own FreeNAS or suchlike should be boneheadedly simple).

    This has turned into a two-stage project - I'm first replacing my backup server (a bog-standard linux machine eating up a lot of space in a venerable Lian Li PC75) and, assuming that goes well, then the QNAP. Before we delve into the details of the build, here's the hardware I settled on for the backup server:
    CPU: i5-3470T
    Mobo: Intel DH77DF
    RAM: 4GB DDR3
    HSF: Akasa AK-CCE-7106HP
    Boot: 32GB Crucial M4 mSATA
    HBA: IBM M1015 reflashed to LSI 9211-8i IT-mode (no boot ROM)
    PSU: Seasonic SS-350M1U
    Cables: 2x SFF 8087 -> SATA forward breakout

    ...and of course the case, the U-NAS 800.

    Even with the over-inflated specs, the whole shebang is still about half of what QNAP/Synology will charge for similar.

    A quick note on the HBA; there isn't a single mini-ITX mobo out there with enough SATA ports to drive all 8 bays in this case (plus boot drive(s)), so I went looking for a cheap RAID card or HBA. I discounted most RAID cards because they're a) bloody expensive and b) typically, if the RAID card fails, you need to buy another RAID card of the same type to be able to read your discs again (since proprietary RAID cards tend to write their own signature to the start of the disc; been there done that with my ye olde 3ware), so for my purposes it's best to use linux software RAID (aka mdadm, and more on its awesome flexibility later) and a dumb SATA controller. LSI make a bunch of nice SAS/SATA HBAs that are well supported under Linux, but they're still quite expensive.
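
    For the uninitiated, here's a minimal sketch of what creating such an array with mdadm looks like - device names are illustrative, so substitute your own:

    Code:
    # create a RAID6 array across eight discs
    mdadm --create /dev/md2 --level=6 --raid-devices=8 /dev/sd[c-j]
    # put a filesystem on it and persist the array definition
    mkfs.ext4 /dev/md2
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf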

    Enter the IBM M1015. It's essentially a rebadged LSI 9211-8i but with a different board layout and custom firmware. Thankfully, it can be reflashed into a standard LSI, and curiously it can be bought brand-new for about half the cost of the 9211-8i - and if you're willing to buy one of the never-used models from eBay that have been ripped out of a brand-new IBM xSeries, a quarter of the cost. Reflashing is a simple matter of a DOS/UEFI boot stick, zeroing out the ROM and installing a new one. Using IT mode without a boot ROM means that a) the HBA just passes the connected discs straight through to the OS and b) there's no waiting for the BIOS to kick in, resulting in a near-instant boot. Seriously, when the IBM boot ROM was loading I was worried I was going to be waiting until the heat death of the universe. Using IR (Integrated RAID) mode gives you "hardware" RAID functions, but since the M1015 doesn't have a dedicated RAID controller there's little point (other than restricting yourself to another LSI card if your HBA ends up face down in the dirt); even with the feature key (a hardware dongle that enables RAID5 in the firmware), RAID5/6 performance on the hardware RAID is poor. Software RAID performance however is excellent, as we'll see later. Happy to post more details on the M1015 if anyone's interested.
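
    For the curious, the reflash boils down to something like the following from the DOS stick. The tool and file names here are from the versions I used and will vary, so treat this as a sketch rather than gospel:

    Code:
    rem wipe IBM's SBR and the existing flash first
    megarec -writesbr 0 sbrempty.bin
    megarec -cleanflash 0
    rem reboot back to the stick, then flash the IT firmware; leaving out
    rem -b mptsas2.rom is what gives you the no-boot-ROM setup
    sas2flsh -o -f 2118it.bin
    rem restore the SAS address from the sticker on the card
    sas2flsh -o -sasadd 500605bxxxxxxxxx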

    The rest of the hardware should all be self-evident; for the backup server, I just need enough CPU grunt for RAID6, rsync and SSH at about 100MB/s (essentially the limit of 1Gbps ethernet). Given that my previous backup hardware, a lowly 1.66GHz C2D, was able to reach 75MB/s across much inferior SATA controllers when using only a single core for both the RAID and SSH/rsync stuff, you can see why it made sense to go for the power-sipping 3470T. Why not an i3? Mostly thanks to Intel's stupid CPU segmentation, since most of those didn't contain AES-NI or crc32 hardware. An Intel mobo was chosen because they're typically the only ones that come with Intel NICs (the only ones worth bothering with in my opinion) and both these NICs and the mobos themselves usually have the best Linux support bar none. The HSF is one of the lowest-profile coolers you can get for the 1155 socket (important, as we'll see later), and I picked an mSATA drive so I'd have as few cables around as possible (honestly, for the kind of linux install I'm doing here I could have got away with a 4GB drive). And of course a 1U PSU from the ever-wonderful SeaSonic range. The RAM was plucked out of a box I have that's full of RAM, and then doused in chicken blood so as to appease the dark lords of Not Running Memtest Like A Proper Builder Should. Only kidding - of course I ran memtest. The chicken blood is just to make it go fasta.

    Anyway, onto the pics and the step-by-step.
     
  2. stdPichu

    [​IMG]
    Here's the box the U-NAS 800 comes in after you remove the impervious bubblewrap Kustom covered it in :)

    [​IMG]
    Unboxed.

    [​IMG]
    Showing some leg.

    [​IMG]
    Here are the components - motherboard/CPU/RAM, the case, and the various doohickeys the case comes with, namely screws and zip ties, plus the PCIe riser on the right.

    [​IMG]
    Here's the drive bay with the trays taken out.

    [​IMG]
    Side view showing where the motherboard will go; note the slightly odd backwards mounting points. The top two are hidden behind the bar at the top and fasten the "right" way around.

    [​IMG]
    Here's a picture I took while in the process of getting the cables wrong; I had to undo these and try again. The motherboard slides in from the top and won't go in from the side as I was trying to do here.

    [​IMG]
    That's better. But still slightly hair-raising. Note the plastic sheath that comes with the case to protect the rear of the motherboard from shorts.

    [​IMG]
    Finally in, but still rather a fustercluck of cables.

    [​IMG]
    To help with cable management, I decided to take the back plate off. Here you can see the SATA backplane with the associated cables.

    [​IMG]
    Marco is not impressed at not getting enough attention.

    [​IMG]
    PCIe riser fitted. As it went in, the cable buckled and got in the way of the CPU fan...

    [​IMG]
    ...thankfully it wasn't an issue to pull it out.

    [​IMG]
    Here's the M1015 in place as I wanted to experiment with some cable routing.

    [​IMG]
    The plan is to have the SFF 8087 cable route the other way and go underneath the PSU, meaning less congestion in the motherboard area and less cable-wrangling to get it to do 180s.

    I basically decided to stop here for now because a) the PSU doesn't fit and b) the cables from the PSU don't reach either :) The PSU is quite tricksy; there's more on that to come.

    [​IMG]
    I put the backplate back on; note how the SATA cables now stick into the blades of the fans. These will need to be tied back or the fans will need grilles fitted.

    [​IMG]
    It's a warm Friday evening and that can mean only one thing... martini time! 8 parts gin to 1 part vermouth and a dash of orange bitters and as much ice as you can find. Do NOT shake it!

    [​IMG]
    Cheers until next time :)
     
  3. Archaon

    Looking to do something similar myself. The HP P410 looks like a viable option as well; I've seen them for ~£70 with 256MB cache. The P400 is even cheaper.
     
  4. stdPichu

    Indeed, I've been helping a mate with his HP Microserver build. There's plenty of options for hacking it (not least of which is a proper BIOS to enable hotplug and a few other goodies) and for the price it utterly knocks the stuffing out of the 4-bay QNAP/Syno models. Although I think they've been EOL'ed now to make way for the updated models so those cheap prices won't be around for long.

    For people who need more power or flexibility though, I don't think these U-NAS cases can be beaten.

    Edit: d'oh, comprehension fail - P410 is a RAID card. Don't know where I got the idea of the Microserver from.

    As I've noted above though, I'm not going to be touching hardware RAID for the foreseeable future I think. Since you usually can't just pluck a drive from one RAID controller and plug it into another (or indeed non-RAID) controller, they're painful from a recovery perspective unless you can guarantee you'll always have another of the RAID cards lying around. As it is, I've saved a lot of people's bacon when their NAS has blown up out of warranty and they couldn't afford to replace it; you can just plug the drives into any PC running linux (assuming it has enough SATA ports) and pull your files off.
     
  5. stdPichu

    A minor update in lieu of pictures...!

    After many intervening holidays and other distractions, I've finally finished this first build. I've had to swap PSUs from the Seasonic to the FSP 300W which uses a standard 1U mounting as opposed to the slightly different FlexATX mounting that I've found the Seasonic uses.

    http://www.kustompcs.co.uk/acatalog/info_16101.html

    As any potential buyers may have noticed, the Kustom folks are creating a custom bracket to better enable the use of PSUs like the Seasonic. The Seasonic is quieter than the FSP (the FSP has a small whine which you can hear when you get close enough whereas the Seasonic is almost totally inaudible), but its cables are not long enough to reach the motherboard, so anyone following in my footsteps will probably also need some further molex extensions - some more pics on that to follow in the next section.

    I did initially have concerns that the 12V rails on the 300W FSP wouldn't supply enough spin-up current to the hard drives, but they appear to work without any issue at all. I now have a RAID6 array of eight 2TB drives which will happily give me 400MB/s sequential reads and 300MB/s sequential writes.

    Total cost without the hard drives: about £700.

    The base components for the next non-backup NAS-box are a lot more enterprisey (and hence enterpricey) but not beyond the dreams of avarice.
    CPU: Xeon E3 1265-v2
    Motherboard: Intel S1200KPR
    Memory: 2x4GB Crucial ECC
    Boot: 2x128GB Crucial M4's to be put in software RAID1
    HBA: Yet another IBM M1015 flashed to factory LSI 9211-8i status

    Build incoming as soon as Graeme can get some more of these cases in stock :)
     
  6. stdPichu

    Just a quick update as I'm waiting on the arrival of my second case.

    Given the new low-power Haswell systems that are coming out, I decided to switch my build to one of those and relegate the S1200KPR build to backup server duty, as my paranoia has decided I must have ECC on both my servers. The DH77DF will probably be re-purposed as an HTPC.

    So on to the new build specs:
    CPU: Xeon E3 1230v3
    Motherboard: ASRock E3C226D2I
    Memory: 2x8GB Crucial 1.35v ECC
    HSF: Noctua NH-L9i
    PSU: Seasonic SS 350M1U
    HBA: Yet another IBM M1015 reflashed to an LSI 9211-8i
    Boot drives: 2x128GB Crucial M4

    The case, of course, remains the quietly excellent NSC 800, plus I'll be upgrading the front USB port to USB3 via this adapter. I did toy with the idea of punching an eSATA port through the back of the motherboard IO shield but I figure USB3 will be good enough for my purposes. I also purchased an additional 2.5" mounting plate and some M3 spacers so the system can hold two SSDs.

    Now a note on the power supply; it's difficult info to find, but the Seasonic 350 is a format called FlexATX which differs very slightly from a standard 1U power supply, most notably in the mounting holes. However, the Seasonic 350W is one of the most efficient and quietest power supplies available that will fit in this case, so Kustom have been awesome enough to make a made-to-measure adapter plate. I'm happily running the FSP 300 in my backup box at present but its dual 40mm fans, whilst very quiet, do produce a perceptible hum in a quiet room. The leads from the SS 350 aren't long enough to reach the molex plugs on the motherboard so I've also thrown in an ATX extension. Yes, it's a lot of sacrifices to make for a PSU but quietness is of utmost priority for me. The SeaSonic SS400 would otherwise look like an ideal candidate, what with having the right mounting holes, but by Graeme's measurements and mine it's just a tad too long to fit inside the confines of the NSC 800.

    My first build used the perfectly-good K25 775, which is still present on my backup system, but once again I got fed up with its reliance on Intel's daft push-pin system (which managed to pop out twice during motherboard installation), so this time I've opted for the Noctua which uses a much more robust bolt-through design and should be quieter to boot. Whilst slightly taller, this cooler still leaves about 5mm of clearance between the top of the HSF and the drive cage inside the case.

    Speshul mention has to go to the unexpectedly great E3C226D2I - ECC support, dual Intel i210 NICs and a baseboard management controller. Users looking for ECC support but not wanting to splash out on a Xeon can pick up one of the dual-core Celeron or Pentium chips which, stupidly, also support ECC whilst the i3 and i5 ranges do not. Thanks for gouging on market segmentation Intel!

    Of course, those of you without any high CPU requirements at all can also go for the ASRock Avoton platforms such as the C2750D4I, although the onboard controllers there are a) mostly the rather poor Marvell ones and b) not a great match for the SFF 8087 connectors inside the NSC 800, so you'll probably still be pairing those with a decent HBA.

    Just waiting on the rest of my shipment now, and this time I swear I'll upload some pics of the inside! I've also got a host of tips'n'tricks for the Debian build that'll be onboard, for those of you fancying some tinker time.
     
  7. stdPichu

    OK! After what feels like an aeon, the build is finally nearing completion. And yes, as promised, I've finally taken some pics of the inside and how I've organised the cables. And believe me, with this case you'll need to organise the cables :)

    But firstly I've made a couple of changes from what I posted above; instead of using two M4's, I've switched to the M500, the M4's successor. This was done for a couple of reasons - partly due to the M500 being only 7mm tall, but mostly due to the inbuilt power-protection capacitors. Performance is as close to identical as makes no odds, and the recent bargain-basement prices on them meant I could easily switch to the 240GB model. The extra space might well be used as an SSD cache for the main array via dm-cache.

    The other change I've made is replacing the supplied fans with BeQuiet PWM fans. The supplied fans are quiet enough, but the BeQuiet fans are quieter still and support PWM fan control, ideal for plugging into the two PWM ports on the ASRock mobo. More on the fan control in a bit; first, here's a pic of the backplane with the new fans attached. Bonus points to BeQuiet for using push-pin mounting with rubberised grommets to reduce vibration.

    [​IMG]

    Here's a pic of the USB3 front-panel module. Since I've reluctantly decided to dispense with eSATA, I figured a USB3 port on the front of the case would be useful. It's very simple to fit: just unscrew those four screws on the existing USB2 module and screw this one in.

    [​IMG]

    Here's the Seasonic 350W PSU fitted with Kustom's FlexATX to 1U adapter.

    [​IMG]

    Like I did on my previous build, I've rewired the backplane so the SATA cables go in the opposite direction towards the other side of the case; this makes cable routing much easier later on and reduces the amount of clutter coming into the motherboard area.

    [​IMG]

    As you can see from this shot down the length of the backplane, the supplied SATA cables are easily thin enough to bend so that they're nearly flush - there's plenty of headroom between the cables and the edge of the fans so I've no need to use fan guards.

    [​IMG]

    ...and here's everything fitted together with the backplate attached.

    [​IMG]

    Now to gingerly fit the motherboard... here's a shot of the rear I/O panels and a top-down view. The HSF is the rather superbly engineered Noctua.

    [​IMG]

    [​IMG]

    I'll save you the hair-raising shots of the motherboard being squeezed in, but here's a top-down shot showing about 5mm of clearance between the top of the Noctua and the side of the HD cage.

    [​IMG]

    One of the things I loved most about this motherboard was that I only needed to fit the 24-pin power cable into it before placing it inside the case; as you can see here all of the other connectors are more-or-less easily accessible after it's screwed in. Clockwise from top left - case buttons/LEDs, USB3 internal header, SATA ports, 4-pin power, USB3 external header.

    [​IMG]

    Now the new improved PCIe riser is fitted and the SFF 8087 connectors plugged in - much, MUCH easier to handle when the cables come from the other side. You can see the 4-pin mobo power going underneath the M1015 and the first M500 SSD mounted underneath.

    [​IMG]

    A better shot of the M500. Using a 9mm SSD with a metal body would have gotten me uncomfortably close to the underside of the M1015, to the extent that other people doing similar builds have used electrical tape or similar as an insulation layer. Thanks to the M500's slim profile, I just stuck on one of the plastic spacers. Here you can also see a better view of the 20mm tall M3 female/female spacers I yanked out of my toolbox.

    [​IMG]

    If you have the eyes of an eagle and look in the centre of the pic, you can just make out that there's about 2mm of clearance between the top of the SSD and the bottom of the M1015.

    [​IMG]

    For the top-mounted SSD, I did add some tape to the bottom of the mounting plate Just In Case it interfered with the M1015.

    [​IMG]

    Here's all the front I/O plugged in.

    [​IMG]

    My Cunning Plan has worked! The SATA backplane requires two molex connectors - but the Seasonic only has one. So I used a SATA -> Molex adapter that runs under the PSU to connect the second which you can see...

    [​IMG]

    ...here, complete with the double-decker SSDs. And no, I'm not worried about daisy-chaining the SSDs off of that power connector either since SSDs don't really use anything in the way of startup current like wot platter-based drives do. If you look carefully you can also see the super-short SATA cables in black'n'blue leading from the motherboard to the SSDs.

    [​IMG]

    Lastly, a view from the mobo side showing the gymnastics I've forced all the cabling into so as to produce no kinks, obstructions to airflow, or pressure on other cables/connectors. Importantly, there's plenty of space/airflow above the M1015 and motherboard area.

    [​IMG]
     
  8. stdPichu

    Walls-of-text time now.

    So! Now onto the config side of things, but first a quick note about power - yes, I'm using an overpowered CPU, but I am going to be doing some fairly heavyweight gubbins in this box... and I have still tried to keep power consumption low. The config you see above - namely the CPU, motherboard, SSDs and HBA - has just been clocked idling at 35W, so I expect that once 6-8 hard drives are added it will tick along at about 50W. Compare with my QNAP TS-659 Pro+ which idles at 60W despite having an anaemic Atom processor. Peak power will obviously be much higher but I don't expect this will happen often.

    So - a quick summary of what this server will be doing:
    • File serving over NFS and CIFS
    • Samba4 active directory domain controller
    • DHCP server
    • DNS server (both local and caching)
    • get_iplayer PVR and transcoder
    • Bittorrent client
    • Mail server for local network
    • IMAP server
    • Miniature VM test lab via KVM
    • Web server hosting a wiki and various other programs
    • rsync server (both for receiving backups from other machines and copying the backups to my backup server)
    • OpenVPN server (indispensable if you ever have to use public wifi)

    But all of that is of course dependent on the base install, which is ol' faithful Debian, in my instance the Testing branch currently named Jessie. Now first off, I had no end of problems getting my install to boot thanks to the not-very-well-documented topic of installing under UEFI in Debian. At the time of install (which was a while ago TBH), as soon as the Debian installer detected a UEFI BIOS, it would insist on installing grub-pc-efi... even if you wanted it to install ye olde BIOS-based boot (namely grub-pc); this is doable for me since my S1200KPR and E3C226D2I both have a UEFI CSM (Compatibility Support Module) that allows UEFI to boot from an MBR bootloader. If you're also using the E3C22xD2I, I don't think this was added until the 1.7 BIOS.

    Why did I insist on using MBR boot? Well, because I found absolutely no way I could boot Debian from UEFI if I'm using a RAID1 array as my boot drive. UEFI boot apparently requires a small FAT32 partition to boot the system from, and if you try and create one of these on a RAID1 disc, UEFI won't be able to see it. Since I'm not using any >2TB discs to boot from, it was simply far easier for me to fall back on the tried and trusted MBR boot (which still works perfectly with RAID1). So when I installed Debian, I dropped to a shell, removed grub-pc-efi and manually installed grub-pc instead. There's been a major release of Debian Installer since so hopefully that bug has been squashed.
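
    From memory, the shell detour went something like the below. A sketch, not gospel - the EFI grub package may be named grub-efi-amd64 rather than grub-pc-efi depending on your installer version, so check what's actually installed first:

    Code:
    chroot /target /bin/bash
    apt-get purge grub-efi-amd64    # whichever EFI flavour the installer pulled in
    apt-get install grub-pc
    grub-install /dev/sda           # install the MBR bootloader to BOTH halves
    grub-install /dev/sdb           # of the RAID1 so either disc can boot alone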

    As always, I use the Debian netinstall process which installs a very barebones system with the minimum of cruft. During installation, I formatted both of the 128GB SSDs in the following fashion:
    sdX1: 512MB RAID1 partition
    sdX2: 512MB swap partition
    sdX3: extended partition (container for the logical partitions)
    sdX5: 127GB RAID1 partition

    Both halves of sdX1 were then combined into softraid RAID1 at /dev/md0 and then formatted/mounted as /boot. sdX2 wasn't RAIDed; instead linux will stripe swap across both drives, effectively a RAID0. Linux will almost never write out to swap if it doesn't need to (it's not uncommon for a machine that's been up for six months with its RAM at 95% full to have less than 1MB written into swap) and I won't be using hibernate so there's no need for a "full size" swap partition. The final sdX5 partitions were transformed into a RAID1 at /dev/md1 and then formatted as an LVM physical volume. From that I carved out 8GB logical volumes for /, /usr, /var, /home and /var/log and the rest is left empty for the time being; total installation footprint plus a bunch of the above programs is about 1.5GB.
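
    The installer's partitioner did the legwork for me, but done by hand the same layout would assemble roughly like this (a sketch, not a transcript):

    Code:
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
    mkfs.ext4 /dev/md0                  # becomes /boot
    pvcreate /dev/md1                   # md1 becomes the LVM physical volume
    vgcreate vg_root /dev/md1
    lvcreate -L 8G -n lv_root vg_root   # ditto for usr, var, home and varlog
    mkswap /dev/sda2; mkswap /dev/sdb2  # give both swap partitions pri=1 in
                                        # fstab and the kernel stripes them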

    Code:
    root@titus:~# cat /proc/mdstat
    Personalities : [raid1]
    md1 : active raid1 sdb5[0] sda5[1]
          233365080 blocks super 1.2 [2/2] [UU]
          bitmap: 0/1 pages [0KB], 131072KB chunk
    
    md0 : active raid1 sdb1[0] sda1[1]
          498368 blocks super 1.2 [2/2] [UU]
    
    unused devices: <none>
    Code:
    root@titus:~# df -h
    Filesystem                      Size  Used Avail Use% Mounted on
    /dev/mapper/vg_root-lv_root     7.3G  343M  6.5G   5% /
    udev                             10M     0   10M   0% /dev
    tmpfs                           1.6G  416K  1.6G   1% /run
    tmpfs                           5.0M     0  5.0M   0% /run/lock
    tmpfs                           3.4G     0  3.4G   0% /run/shm
    /dev/md0                        464M   47M  389M  11% /boot
    /dev/mapper/vg_root-lv_home     7.3G   18M  6.9G   1% /home
    /dev/mapper/vg_root-lv_usr      7.3G  793M  6.1G  12% /usr
    /dev/mapper/vg_root-lv_var      7.3G  506M  6.4G   8% /var
    /dev/mapper/vg_root-lv_varlog   7.3G  123M  6.9G   3% /var/log
    /dev/mapper/vg_root-lv_storage   63G   52M   60G   1% /storage
    After I decided to upgrade to the M500s, I used dd to clone each of the M4's to the M500's and then used parted to shunt the end of sdX5 to the end of the disc, then booted back into Debian and extended the LVM physical volume to the end of the disc. Anything that goes onto the SSDs in future will be provisioned out of this PV.
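
    For anyone replicating the clone-and-grow trick, the rough sequence is below; sdOLD/sdNEW are placeholders, and the partition numbering follows the layout above, so double-check yours before letting dd and parted loose:

    Code:
    dd if=/dev/sdOLD of=/dev/sdNEW bs=1M  # clone the M4 onto the M500
    parted /dev/sdNEW                     # interactively move the end of the
                                          # extended and logical partitions out
    mdadm --grow /dev/md1 --size=max      # grow the mirror into the new space
    pvresize /dev/md1                     # then let LVM see the bigger PV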

    Both softraid and LVM have a small overhead that can reduce throughput, but in practice it's small enough to be negligible. Even on these not-that-fast SSDs I still get ~480MB/s reads. hdparm's a crap non-benchmark however so I'll do some proper IO testing later.

    Code:
    root@titus:~# hdparm -tT --direct /dev/md1
    
    /dev/md1:
     Timing O_DIRECT cached reads:   946 MB in  2.00 seconds = 472.80 MB/sec
     Timing O_DIRECT disk reads: 1440 MB in  3.00 seconds = 479.76 MB/sec
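    When I do get around to that proper testing it'll probably be with fio; here's a minimal sketch of a random-read run (the parameters are just a starting point, and --readonly stops it scribbling on the array):

    Code:
    fio --name=randread --filename=/dev/md1 --readonly --direct=1 \
        --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --runtime=60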
    So far, so hoopy. Now onto getting the hardware prodded...
     
  9. stdPichu

    OK, before getting the hardware prodded I've come across two problems - one was the box rebooting all the time, and the other was the box not rebooting or shutting down properly. Thankfully these were reasonably easy to diagnose and the box is now up and running again.

    Firstly, the random reboots - not really what you want in a box designed for 24/7 use. Since this is a server motherboard, I have both ECC memory and an event log in the BMC telling me about hardware events. A quick glance at this showed I was getting hundreds of correctable (single-bit) memory errors every few minutes - these are the sort of errors that ECC recovers from (although your home machines running standard RAM won't, and this may cause your computer to crash or, worse, silently corrupt data)... and when a double-bit error comes along, ECC can't correct it and so the motherboard automatically reboots. Clearly my RAM was not behaving, and a quick install and test of memtest86+ confirmed my suspicions. So, erm, argh! But a quick shimmy around with a torch showed that one of my memory modules wasn't seated correctly (one of the downsides of using memory slots that don't have clips on both sides); a little jimmying about later there were no more memory errors, and the machine looped through about 20 runs of memtest86+ overnight.

    Secondly, the not rebooting properly. Symptoms would be that when I issued a "reboot" command, 9 times out of 10 the box would instead power off. Powering it back up again, it would work fine... but still wouldn't reboot. The poweroff command worked as expected.

    Thankfully, 1 time out of 10 the box did reboot, and then failed as soon as it tried to boot debian with the following errors visible on the console:
    wait hw ready failed
    waiting for mei start failed
    link layer initialization failed
    init hw_failure

    Success! I mean, sort of - it succeeded in failing with a repeatable error. A quick google shows those errors are related to the Intel Management Engine, which is governed by the "mei" and "mei_me" modules in linux. As a test, I unloaded those modules from memory and then rebooted the box... and it went without a hitch. So until this gets fixed properly, I've blacklisted those kernel modules from being loaded:
    Code:
    echo "blacklist mei" >> /etc/modprobe.d/ime-blacklist.conf
    echo "blacklist mei_me" >> /etc/modprobe.d/ime-blacklist.conf
    Those modules are still present in the initrd so let's rebuild that now:
    Code:
    update-initramfs -u -k $(uname -r)
    One reboot later and no more hang-on-shutdown problems.
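
    If you want to double-check the blacklist stuck, this should now come back empty:

    Code:
    lsmod | grep mei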
     
  10. stdPichu

    So now we've got the OS and stuff working right, let's look at hardware monitoring. Let's install lm-sensors and see what we can see; this board uses the Nuvoton NCT6775 chip, which sensors-detect was happy enough to load for me.

    Code:
    root@titus:~# cat /etc/modules
    # /etc/modules: kernel modules to load at boot time.
    #
    # This file contains the names of kernel modules that should be loaded
    # at boot time, one per line. Lines beginning with "#" are ignored.
    # Parameters can be specified after the module name.
    
    
    # Generated by sensors-detect on Thu Jan 30 04:04:13 2014
    # Chip drivers
    coretemp
    nct6775
    After some fiddling, I found an ASRock-provided tweak file for lm-sensors somewhere on t'interwebs; I created the following:

    Code:
    root@titus:~# cat /etc/sensors.d/asrock_e3c226d2i
    # From ASRock technical support :
    # * Scaling : VALUE * (R1+R2)/R2 * 8mV
    #   (*8 already done by driver)
    # * +3.30V : register 0x23 R1=34 and R2=34
    #   (integrated resistors, scaling already done by driver)
    # * +5.00V : register 0x25 R1=20 and R2=10
    # * +12.00V : register 0x26 R1=56 and R2=10
    #
    # From nct6775 module source code :
    # 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x550, 0x551, 0x552
    #  in0   in1   in2   in3   in4   in5   in6    in7    in8    in9
    #                   +3.3V        +5V  +12V
    
    chip "nct6776-*"
        # Voltages ------------------------------------------------------------
        label in3 "+3.30V"
        set in3_min  3.3 * 0.95
        set in3_max  3.3 * 1.05
    
        label in5 "+5.00V"
        compute in5 @*3, @/3
        set in5_min  5.0 * 0.95
        set in5_max  5.0 * 1.05
    
        label in6 "+12.00V"
        compute in6 @*6.6,@/6.6
        set in6_min  12.0 * 0.95
        set in6_max  12.0 * 1.05
    
        # Need more documentation :
        ignore in0
        ignore in1
        ignore in2
        ignore in4
        ignore in7
        ignore in8
    
        # Temperatures --------------------------------------------------------
        label temp1 "M/B Temp"
        set temp1_max 60
        set temp1_max_hyst 50
    
        label temp2 "CPU Temp"
        set temp2_max 80
        set temp2_max_hyst 75
    
        # Fans ----------------------------------------------------------------
        # No value, controlled by ASPEED AST2300 ?
        ignore fan1
        ignore fan2
    
        # Disable unconnected sensors -----------------------------------------
        ignore intrusion0
        ignore intrusion1
        ignore cpu0_vid
        ignore beep_enable
        ignore temp3  # AUXTIN
        ignore temp7  # PECI Agent 0
        ignore temp8  # PCH_CHIP_TEMP
        ignore temp9  # PCH_CPU_TEMP
        ignore temp10 # PCH_MCH_TEMP
    
    This lets lm-sensors ignore the values it can't understand and gives me a much nicer format for output in sensors; please note this was captured with the case cover still off so real temps will probably be much higher. Thankfully the fan control in the ASRock UEFI is pretty great - BIOS 1.7 would only power up the CPU fan when temps hit 50degC, but this is fixed in BIOS 1.8 and now the fan runs at a perpetually low speed.

    Annoyingly, I can't yet find a way to get lm-sensors or pwmconfig to see the fan RPM sensors; I can see them in the BMC but they don't seem to be exposed to the OS, which is a shame. I was planning to get the fan speed linked to that of the hard drives so that nothing ended up going over 50degC or so; not that this is important (I'm still running a bunch of WD Greens and Samsung 2TB drives that I accidentally cooked to nearly 70degC - much like the Google study found, there's little to no correlation between drive temperatures and failure rates), it just would have been cool to do.

    Anyway, here's what I ended up with:

    Code:
    root@titus:~# sensors
    coretemp-isa-0000
    Adapter: ISA adapter
    Physical id 0:  +32.0°C  (high = +80.0°C, crit = +100.0°C)
    Core 0:         +32.0°C  (high = +80.0°C, crit = +100.0°C)
    Core 1:         +31.0°C  (high = +80.0°C, crit = +100.0°C)
    Core 2:         +31.0°C  (high = +80.0°C, crit = +100.0°C)
    Core 3:         +30.0°C  (high = +80.0°C, crit = +100.0°C)
    
    nct6776-isa-0290
    Adapter: ISA adapter
    +3.30V:       +3.38 V  (min =  +2.98 V, max =  +3.63 V)
    +5.00V:       +5.04 V  (min =  +4.75 V, max =  +5.26 V)
    +12.00V:     +12.30 V  (min = +11.40 V, max = +12.62 V)
    M/B Temp:     +32.0°C  (high = +60.0°C, hyst = +50.0°C)  sensor = thermistor
    CPU Temp:     +34.0°C  (high = +80.0°C, hyst = +75.0°C)  sensor = thermistor
    These temps are with case lid off and no hard drives installed so I expect CPU and motherboard temperature will be much higher once the box is fixed up.

    What about the temperatures of the hard drives? Let's install a promising-sounding package called hddtemp and set that to start at boot as a local daemon:

    Code:
    root@titus:~# grep -v '#' /etc/default/hddtemp
    RUN_DAEMON="true"
    DISKS_NOPROBE=""
    INTERFACE="127.0.0.1"
    PORT="7634"
    RUN_SYSLOG="0"
    OPTIONS=""
    Discs can be probed individually on the command line or by using other funky stuff wot I'll come to later.

    Code:
    root@titus:~# hddtemp /dev/sda
    /dev/sda: Crucial_CT240M500SSD1: 31°C
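    The daemon also listens on TCP port 7634, so you can poll every disc in one go; a quick sketch assuming netcat's installed (it answers with a pipe-delimited record per disc):

    Code:
    nc 127.0.0.1 7634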
    OK, after some more experimentation I've been able to get a reading of the fan speeds, but they're not visible to the usual sensor programs. As I suspected, the fans seem directly connected to the BMC and not exposed to the OS, but it's apparently possible to prod the IPMI interface directly with ipmitool. I found I had to add ipmi_devintf to /etc/modules to get this to work, but now when I run ipmitool I get the following:

    Code:
    root@titus:~# ipmitool sensor
    ATX+5VSB         | 5.070      | Volts      | ok    | 4.050     | 4.260     | 4.500     | 5.490     | 5.760     | 6.030
    +3VSB            | 3.440      | Volts      | ok    | 2.660     | 2.800     | 2.960     | 3.620     | 3.800     | 3.980
    Vcore            | 1.800      | Volts      | ok    | 1.440     | 1.530     | 1.620     | 1.980     | 2.070     | 2.160
    VCCM             | 1.350      | Volts      | ok    | 1.160     | 1.230     | 1.280     | 1.650     | 1.730     | 1.810
    +1.05V           | 1.050      | Volts      | ok    | 0.850     | 0.900     | 0.940     | 1.150     | 1.210     | 1.270
    VCCIO_OUT        | 1.000      | Volts      | ok    | 0.850     | 0.900     | 0.940     | 1.150     | 1.210     | 1.270
    BAT              | 3.060      | Volts      | ok    | 2.500     | 2.640     | 2.780     | 3.400     | 3.540     | 3.680
    +3V              | 3.360      | Volts      | ok    | 2.660     | 2.800     | 2.960     | 3.620     | 3.800     | 3.980
    +5V              | 5.040      | Volts      | ok    | 4.050     | 4.260     | 4.500     | 5.490     | 5.760     | 6.030
    CPU_FAN1         | 1000.000   | RPM        | ok    | na        | na        | 300.000   | na        | na        | na
    REAR_FAN1        | 500.000    | RPM        | ok    | na        | na        | 300.000   | na        | na        | na
    FRNT_FAN1        | 500.000    | RPM        | ok    | na        | na        | 300.000   | na        | na        | na
    MB Temperature   | 38.000     | degrees C  | ok    | na        | na        | na        | 80.000    | na        | na
    CPU Temperature  | 38.000     | degrees C  | ok    | na        | na        | na        | 91.000    | na        | na
    +12V             | 12.200     | Volts      | ok    | 9.600     | 10.200    | 10.700    | 13.100    | 13.800    | 14.500
    That'll have to do for the time being.

    Now we've got all this fancy schmancy hardware monitoring, what better way to display it than on a nice web page? I'm going to be installing apache anyway, so let's aptitude install phpsysinfo... for some reason the symlinks aren't created by default any more so I had to create a symlink from /var/www/html to the phpsysinfo install directory:

    Code:
    root@titus:~# ls -l /var/www/html/
    total 0
    lrwxrwxrwx 1 root root 22 Apr  2 05:19 phpsysinfo -> /usr/share/phpsysinfo/
    Now when I hit http://1.2.3.4/phpsysinfo I get greeted with a reasonable approximation of what my system is up to... but like all things it can be improved. I changed /etc/phpsysinfo/config.php with the following:

    Code:
    define('PSI_PLUGINS', 'MDStatus,SMART');
    define('PSI_LOAD_BAR', true);
    define('PSI_SENSOR_PROGRAM', 'LMSensors');
    define('PSI_HDD_TEMP', 'tcp');
    define('PSI_SHOW_MOUNT_OPTION', false);
    define('PSI_SHOW_INODES', true);
    define('PSI_SHOW_NETWORK_INFOS', true);
    This gives us the following respectively:
    • Two extra windows that show us the status of our softraid devices and SMART status of our individual discs
    • A bar showing current CPU load
    • Our hardware temperature sensors
    • Our hard drive temperatures from the hddtemp daemon
    • Disables showing any extra mount options (much cleaner display, and if you're ever foolish enough to mount something like a CIFS share with credentials in the options, this will hide them too)
    • Show the percentage of filesystem inodes in use; if this ever goes above 10% you have too many small files! :)
    • Some extra information about your NICs (things like IP and MAC addresses)
     
  11. stdPichu

    So that's hardware monitoring mostly taken care of; what other tweaks can we do? Well I'm glad you asked. I created a udev rule that tells the kernel to use the deadline IO scheduler when it detects an SSD, but to stick with a platter-friendly scheduler for regular hard drives:

    Code:
    root@titus:~# cat /etc/udev/rules.d/60-io-sched.rules
    # set deadline scheduler for non-rotating disks
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"
    Discs where rotational==1 will default to the cfq scheduler. You can verify this is working by checking sysfs (I've got an old 1TB drive plugged in for some testing at present):

    Code:
    root@titus:~# cat /sys/block/sda/queue/scheduler
    noop [deadline] cfq
    root@titus:~# cat /sys/block/sdc/queue/scheduler
    noop deadline [cfq]
    Whilst we're on discs, we need to change the rather conservative linux defaults for the minimum and maximum RAID rebuild speeds; generally you'll always want the RAID to rebuild as fast as possible, especially if you decide to use RAID5 or 6, which suffer very heavy penalties when they throw a disc. Add the following to /etc/sysctl.conf:

    Code:
    dev.raid.speed_limit_min = 50000
    dev.raid.speed_limit_max = 5000000
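    sysctl.conf only gets read at boot, so to apply those immediately:

    Code:
    sysctl -p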
    Those of you using RAID5 or 6 will probably also want to increase the size of the stripe cache; this is basically the amount of pending page writes to the discs that are allowed to be kept in memory, but if you're foolish it can a) result in your system running out of memory and b) result in data corruption if you lose power whilst data is still in-flight, so please exercise caution. The default size is 256 but it can go all the way up to 32,768; use the following formula to get a rough idea of how big your stripe cache can be:

    Memory chomped = system page size (almost always 4k on x86) x stripe cache size x number of discs

    So to set, say, the maximum of 32,768 stripes (probably as big as you'd ever want to consider) do the following:

    Code:
    echo 32768 > /sys/block/mdX/md/stripe_cache_size
    Using the above formula, that would have an eight-drive RAID5/6 array eating 4096*32768*8 = 1073741824 = 1GB RAM. Only use a value this large if you have at least twice that much physical memory and a UPS to ensure that stuff in that 1GB cache actually gets written to disc before the power goes out. If you do, it has a dramatic effect on performance, but most of you will be fine with a much smaller number that's bigger than 256 (say 1024). I'll be experimenting with different values for this once I rebuild Andronicus.

    I don't use this myself for this server though, since Titus is going to be RAID10; with the ever-increasing amount of time it takes to rebuild a RAID5 or RAID6 array I believe this is becoming an increasingly sensible option, plus "simple" RAIDs like 10 put less of a load on your CPU and still have excellent throughput. Don't believe any chancers who tell you RAID5 or RAID6 are better for data integrity because they have checksums; they're not. Data integrity requires checksums at the filesystem level, and checksums at the block level have no real idea what the filesystem above them should contain. This is something that only ZFS, btrfs and some other more experimental filesystems offer. More on that later when I get around to raising a glass to Jens Axboe.

    "But, stdPichu" I hear you all cry, "why so much bother in IO tweakery when you're inherently limited by the fact that your network connection maxes out at about 100MB/s, which is slower then even a single hard drive is capable of these days?". Well I'm glad you ask. Part of the reason I insist on a) dual NICs and b) dual Intel NICs is so I can use link aggregation to cheat myself into getting more than 100MB/s out of my NAS. Now I don't currently have a switch that's capable of Etherchannel or 802.33ad/LACP (but I have mein eyes on a nice Mikrotik) so I've set my NICs up using one of Linux's inbuilt load-balancing doohickeys called balance-alb. This doesn't need speshul switch hardware or configuration since it uses kernel magic to multiplex TX and RX packets across all interfaces - but it does need NICs where you can flip the MAC around as desired so that you can easily get over 200MB/s aggregate. However, cheap Realtek and Marvell NICs are famous for being utterly bobbins at this (as well as sucking in a load of other different ways) so it's easiest for me to plump my time and money on Intel. Setting up a bonded NIC interface is a pile o' peas in Debian once you have ifenslave installed; just take your /etc/network/interfaces file, remove the config the installer put in there for you and create your new interface - bond0, to be made out of eth0 and eth1 and any other network interfaces you might happen to have lying around.

    Code:
    root@titus:~# cat /etc/network/interfaces
    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).
    
    # The loopback network interface
    auto lo
    iface lo inet loopback
    
    # custom bonding setup
    auto bond0
    iface bond0 inet static
            address 192.168.1.2
            netmask 255.255.255.0
            network 192.168.1.0
            gateway 192.168.1.1
            slaves eth0 eth1
            bond_mode balance-alb
            bond_miimon 250
            bond_downdelay 250
            bond_updelay 250
            mtu 9000
            dns-nameservers 192.168.1.2 192.168.1.1
            dns-search goosnargh.local
            ethtool-wol g
    For those of you wondering what some of the other values are for - mtu 9000 enables 9k jumbo frames (fantastic if the rest of the devices on your network can handle it) and ethtool-wol g puts both adapters into a state where they can boot the computer from standby via a magic packet spat at their MAC. A quick restart of the network and you should be in business with your nice new bonded NIC setup.
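
    You can also eyeball the bond through procfs - the bonding driver reports the mode and per-slave link status here:

    Code:
    cat /proc/net/bonding/bond0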
     
  12. stdPichu

    So that's most of the nuts'n'bolts of the OS done and dusted. But what about firmware? Most firmware can be handled by the kernel but we have at least two bits of kit that require external firmware upgrades throughout their lifecycle - the LSI 9211-8i and the Crucial M500 SSDs. Thankfully it's still easy to get these set up from within linux.

    The LSI HBA is the easiest. Since it's a server-oriented product it of course comes with out-of-the-box linux support. This comes in the form of a linux command-line utility called sas2flash, which I placed in /usr/local/sbin and chmod'ed +x to give it execute permissions. Now you can copy your latest IT firmware onto your linux box and upgrade the firmware from there.

    Code:
    sas2flash -o -f /path/to/2118it.bin
    It doesn't require a reboot to take effect but it will cause your discs to drop whilst the controller reboots.

    The M500's are a little more complicated as they need to be booted into a special "firmware upgrade" environment in order for Crucial's utility to work (the same is true of the Windows updater too). But grub can boot anything, so why don't we get grub to boot it rather than relying on burning a CD or going to the bother of using unetbootin on a USB stick every time?

    Inside Crucial's bootable image, you'll find a file called BOOT2880.IMG and another called MEMDISK. Although they use syslinux to boot this, the image file is actually FreeDOS. Thankfully this isn't an issue for grub; extract the IMG and MEMDISK files and throw them somewhere into /boot. I renamed them to memdisk and m500latest.img BECAUSE I CANT STAND ALL CAPS.
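
    Getting the two files out is just a loop mount away; a quick sketch (the ISO name changes with each firmware release, and the paths inside may vary):

    Code:
    mount -o loop,ro m500-fw-update.iso /mnt
    cp /mnt/BOOT2880.IMG /boot/m500latest.img
    cp /mnt/MEMDISK /boot/memdisk
    umount /mnt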

    Yes, grub has scary configuration and is always terrifying, but thankfully here we can cheat because the Crucial loader uses syslinux to boot that FreeDOS thing. Inside the ISO, you can open up the file called syslinux.cfg or isolinux.cfg and you'll see entries like this:

    Code:
    PROMPT 1
    TIMEOUT 30
    
    DEFAULT default
    
    DISPLAY bootMsg.txt
    
    LABEL default
    	KERNEL memdisk
    	append initrd=boot2880.img floppy raw
    
    LABEL alternate
    	KERNEL memdisk
    	append initrd=boot2880.img floppy
    That's basically all the boot params we need. I'm now going to create a custom grub menu entry to do the same thing in /etc/grub.d/40_custom (obviously you'll need to change the values to fit your own setup):

    Code:
    # boot from /dev/sdX1 aka (hd0,0)
    menuentry "M500 Firmware" {
    	insmod ext2
    	insmod mdraid1x
    	linux16 (hd0,0)/memdisk floppy raw
    	initrd16 (hd0,0)/m500latest.img
    	}
    Now just run update-grub to create the new boot menu. Next time you reboot your computer, you can apply the firmware update by selecting it as a boot option. Further updates from Crucial can be applied by just overwriting the m500latest.img file with the one from the latest Crucial installer.

    I've not actually tested this one yet, so here's hoping I made a decent backup :)
     
  13. stdPichu

    A minor interlude whilst I get to grips with Samba4; as mentioned above, for the sort of server where there's the potential for lots of critical pending stuff to be held in RAM, a UPS is as close to essential as can be. I've got a small APC unit, only a 500VA job, but it's enough to give my critical systems enough time to shut down cleanly. Just make sure you've got one with a data port - some of the lower-end models don't come with any way of notifying computers that the power has gone out, so check before you buy.

    You can easily configure APC UPSes in Debian by installing apcupsd. After powering it on and plugging it into one of the USB ports, configure the files and start the daemon. Mine is a USB model so I've used the following in /etc/apcupsd/apcupsd.conf:

    Code:
    UPSCABLE usb
    UPSTYPE usb
    DEVICE
    I've also set the following settings:

    Code:
    BATTERYLEVEL 10
    MINUTES 5
    This shuts down the server when it has 10% or 5 minutes of battery remaining, whichever happens first. Now just set ISCONFIGURED=yes in /etc/default/apcupsd and start the daemon.

    You can then run apcaccess and see wot it can see:

    Code:
    root@titus:~# apcaccess
    APC      : 001,034,0869
    DATE     : 2014-05-01 21:15:40 +0100
    HOSTNAME : titus
    VERSION  : 3.14.10 (13 September 2011) debian
    UPSNAME  : titus
    CABLE    : USB Cable
    DRIVER   : USB UPS Driver
    UPSMODE  : Stand Alone
    STARTTIME: 2014-05-01 21:15:38 +0100
    MODEL    : Back-UPS ES 550G
    STATUS   : ONLINE
    LINEV    : 240.0 Volts
    LOADPCT  :   2.0 Percent Load Capacity
    BCHARGE  : 100.0 Percent
    TIMELEFT :  42.2 Minutes
    MBATTCHG : 10 Percent
    MINTIMEL : 5 Minutes
    MAXTIME  : 0 Seconds
    SENSE    : Medium
    LOTRANS  : 180.0 Volts
    HITRANS  : 266.0 Volts
    ALARMDEL : 30 seconds
    BATTV    : 13.6 Volts
    LASTXFER : No transfers since turnon
    NUMXFERS : 0
    TONBATT  : 0 seconds
    CUMONBATT: 0 seconds
    XOFFBATT : N/A
    STATFLAG : 0x07000008 Status Flag
    SERIALNO : 5B1405T11672
    BATTDATE : 2014-02-02
    NOMINV   : 230 Volts
    NOMBATTV :  12.0 Volts
    FIRMWARE : 870.O3 .I USB FW:O3
    END APC  : 2014-05-01 21:15:43 +0100
    Even better is that by default, this will also give you a network-enabled daemon that all of your other systems can probe to be notified of power failures; at least QNAP support this (called "Network UPS Slave") under UPS settings. I don't have any windows systems plugged into the UPS so I haven't yet tried getting windows hooked into the daemon.
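
    For reference, the network side is governed by a few more directives in apcupsd.conf; as far as I can tell these are the defaults, but check your version's man page to be sure:

    Code:
    NETSERVER on
    NISIP 0.0.0.0
    NISPORT 3551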

    Even considering the tiny battery in this thing, I should still be good for at least 20mins of runtime once all the hard drives are plugged in.
     
  14. stdPichu

    OK, onto heavyweight issues now - the Samba 4 Active Directory Domain Controller, using a BIND backend for DNS as opposed to Samba's inbuilt DNS server, which allows us greater flexibility (Samba's internal DNS server doesn't yet fully support being a forwarding caching server, along with a bunch of other options). Thankfully Debian have seemingly made this task very, very simple compared to how some documentation makes it seem (assuming you've already got a static IP set up, along with correct info in /etc/hosts and /etc/resolv.conf). This is the first time I've tried this so hopefully it will all work fine.

    First install bind9 and do basic configuration - you can make it a simple caching server just by adding your ISP's (or other) DNS servers in the forwarders section as seen below:

    Code:
    root@titus:~# cat /etc/bind/named.conf.options
    options {
            directory "/var/cache/bind";
    
            // If there is a firewall between you and nameservers you want
            // to talk to, you may need to fix the firewall to allow multiple
            // ports to talk.  See http://www.kb.cert.org/vuls/id/800113
    
            // If your ISP provided one or more IP addresses for stable
            // nameservers, you probably want to use them as forwarders.
            // Uncomment the following block, and insert the addresses replacing
            // the all-0's placeholder.
    
            forwarders {
                    // ISP or other forwarders ad infinitum
                    2.3.4.5;
                    6.7.8.9;
                    10.11.12.13;
            };
    
            //========================================================================
            // If BIND logs error messages about the root key being expired,
            // you will need to update your keys.  See https://www.isc.org/bind-keys
            //========================================================================
            dnssec-validation auto;
    
            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
    
    };
    Ensure Bind is running.
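
    On Debian of this vintage that's just:

    Code:
    service bind9 restart
    service bind9 status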

    Now to do the Samba equivalent of a dcpromo via the following command line (after first backing up/deleting the Debian-provided /etc/samba/smb.conf); we're going to be using the Bind DLZ option, which allows Samba to create and dynamically control the DNS zone(s) looked after by your domain. If you're already using another box for your DNS server (e.g. your router) feel free to use the internal DNS backend. The provisioning tool should auto-detect your network settings but you can use the --interactive option to specify your own values if you like.

    Debian automatically creates ext4 filesystems with xattr and acl support enabled these days, but if you're upgrading or otherwise unsure then please ensure your /var partition is mounted with support for these - either using an fstab mount option or using tune2fs to enable them permanently.

    Code:
    tune2fs -l /dev/mapper/vg_root-lv_var|grep -i option
    Code:
    root@titus:~# rm /etc/samba/smb.conf
    root@titus:~# samba-tool domain provision --use-rfc2307 --interactive --dns-backend=BIND9_DLZ
    Realm [GOOSNARGH.LOCAL]:
     Domain [GOOSNARGH]:
     Server Role (dc, member, standalone) [dc]: dc
     DNS backend (SAMBA_INTERNAL, BIND9_FLATFILE, BIND9_DLZ, NONE) [SAMBA_INTERNAL]: BIND9_DLZ
    Administrator password:
    Retype password:
    Looking up IPv4 addresses
    Looking up IPv6 addresses
    No IPv6 address will be assigned
    Setting up share.ldb
    Setting up secrets.ldb
    Setting up the registry
    Setting up the privileges database
    Setting up idmap db
    Setting up SAM db
    Setting up sam.ldb partitions and settings
    Setting up sam.ldb rootDSE
    Pre-loading the Samba 4 and AD schema
    Adding DomainDN: DC=goosnargh,DC=local
    Adding configuration container
    Setting up sam.ldb schema
    Setting up sam.ldb configuration data
    Setting up display specifiers
    Modifying display specifiers
    Adding users container
    Modifying users container
    Adding computers container
    Modifying computers container
    Setting up sam.ldb data
    Setting up well known security principals
    Setting up sam.ldb users and groups
    Setting up self join
    Adding DNS accounts
    Creating CN=MicrosoftDNS,CN=System,DC=goosnargh,DC=local
    Creating DomainDnsZones and ForestDnsZones partitions
    Populating DomainDnsZones and ForestDnsZones partitions
    See /var/lib/samba/private/named.conf for an example configuration include file for BIND
    and /var/lib/samba/private/named.txt for further documentation required for secure DNS updates
    Setting up sam.ldb rootDSE marking as synchronized
    Fixing provision GUIDs
    A Kerberos configuration suitable for Samba 4 has been generated at /var/lib/samba/private/krb5.conf
    Setting up fake yp server settings
    Once the above files are installed, your Samba4 server will be ready to use
    Server Role:           active directory domain controller
    Hostname:              titus
    NetBIOS Domain:        GOOSNARGH
    DNS Domain:            goosnargh.local
    DOMAIN SID:            S-1-5-21-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    This can all be further refined by editing /etc/samba/smb.conf but we'll come to that later; for the moment it will look like the following:

    Code:
    root@titus:~# cat /etc/samba/smb.conf
    # Global parameters
    [global]
            workgroup = GOOSNARGH
            realm = GOOSNARGH.LOCAL
            netbios name = TITUS
            server role = active directory domain controller
            server services = s3fs, rpc, nbt, wrepl, ldap, cldap, kdc, drepl, winbind, ntp_signd, kcc, dnsupdate
            idmap_ldb:use rfc2307 = yes
    
    [netlogon]
            path = /var/lib/samba/sysvol/goosnargh.local/scripts
            read only = No
    
    [sysvol]
            path = /var/lib/samba/sysvol
            read only = No
    Now modify the bind9 configuration with the files listed above, then restart bind and samba to ensure both daemons load properly with the new config. Once provisioning is done, you should have the following in Bind's config files:

    /etc/bind/named.conf should contain:
    Code:
    include "/var/lib/samba/private/named.conf";
    /etc/bind/named.conf.options should contain:
    Code:
            # samba dns keytab
            tkey-gssapi-keytab "/var/lib/samba/private/dns.keytab";
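    With those in place, bounce both daemons. On my Debian install the init scripts are called bind9 and samba-ad-dc, but the samba one may be named differently depending on your packaging, so treat this as a sketch:

    Code:
    root@titus:~# service bind9 restart
    root@titus:~# service samba-ad-dc restart
    If either fails to start, /var/log/daemon.log is usually the first place to look on Debian.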
    You can then test if Samba has correctly created the service records for your new AD DC as follows:

    Code:
    root@titus:~# host -t SRV _ldap._tcp.goosnargh.local.
    _ldap._tcp.goosnargh.local has SRV record 0 100 389 titus.goosnargh.local.
    
    root@titus:~# host -t SRV _kerberos._udp.goosnargh.local.
    _kerberos._udp.goosnargh.local has SRV record 0 100 88 titus.goosnargh.local.
    
    root@titus:~# host -t A titus.goosnargh.local
    titus.goosnargh.local has address 1.2.3.4
    Then check with smbclient that the netlogon and sysvol shares are visible:

    Code:
    root@titus:~# smbclient -L localhost -U%
    Domain=[GOOSNARGH] OS=[Unix] Server=[Samba 4.1.6-Debian]
    
            Sharename       Type      Comment
            ---------       ----      -------
            netlogon        Disk
            sysvol          Disk
            IPC$            IPC       IPC Service (Samba 4.1.6-Debian)
    To test authentication, try connecting with the inbuilt "administrator" user and the AD master password you created earlier:

    Code:
    root@titus:~# smbclient //localhost/netlogon -Uadministrator -c 'ls'
    Enter administrator's password:
    Domain=[GOOSNARGH] OS=[Unix] Server=[Samba 4.1.6-Debian]
      .                                   D        0  Tue Apr 22 23:38:26 2014
      ..                                  D        0  Tue Apr 22 23:38:30 2014
    
                    59041 blocks of size 131072. 50880 blocks available
    You should now be able to join your NT machines to the Samba domain with the administrator password you provided earlier.
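
    Before you start joining clients, it's worth a quick sanity check from the server side; samba-tool can show the domain functional level and list the accounts the provision created:

    Code:
    root@titus:~# samba-tool domain level show
    root@titus:~# samba-tool user list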

    You can optionally also configure your linux box to authenticate against AD (or more accurately LDAP and Kerberos) by installing krb5-user and creating a boring /etc/krb5.conf; alternatively, the samba provisioning tool will also have created one for you:

    Code:
    root@titus:~# cp /var/lib/samba/private/krb5.conf /etc/krb5.conf
    root@titus:~# cat /etc/krb5.conf
    [libdefaults]
            default_realm = GOOSNARGH.LOCAL
            dns_lookup_realm = false
            dns_lookup_kdc = true
    You can now snag a Kerberos ticket as follows, or with any other principal name that you might set up in AD:

    Code:
    root@titus:~# kinit administrator@GOOSNARGH.LOCAL
    Password for administrator@GOOSNARGH.LOCAL:
    Warning: Your password will expire in 41 days on Thu 12 Jun 2014 20:40:52 BST
    root@titus:~# klist
    Ticket cache: FILE:/tmp/krb5cc_0
    Default principal: administrator@GOOSNARGH.LOCAL
    
    Valid starting     Expires            Service principal
    01/05/14 23:40:10  02/05/14 09:40:10  krbtgt/GOOSNARGH.LOCAL@GOOSNARGH.LOCAL
            renew until 02/05/14 23:39:59
    Now install ntp and configure the NTP daemon so that your Windows clients can authenticate and grab time from it; just add lines similar to the following to /etc/ntp.conf:

    Code:
    ntpsigndsocket /var/lib/samba/ntp_signd/
    restrict default mssntp
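    One gotcha worth knowing about (per the Samba wiki): ntpd needs to be able to reach the ntp_signd socket, so you may have to loosen the permissions on that directory before restarting ntp. Something like the following should do it, assuming your ntpd runs as the ntp user:

    Code:
    root@titus:~# chgrp ntp /var/lib/samba/ntp_signd
    root@titus:~# chmod 750 /var/lib/samba/ntp_signd
    root@titus:~# service ntp restart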
    Now I've only just set this up... so it's going to take me a while to do some adequate testing on this, but it should at least serve as a good baseline for your own experiments.
     
  15. MikeBuzz

    MikeBuzz New Member

    Joined:
    Feb 20, 2014
    Messages:
    11
    Likes Received:
    0
    Hi, thanks for the great build thread

    I have just got all the parts I need to start my own build much like yours; the only difference is that I have gone for the ASRock E3C224D2I. Was there any reason you chose the 226 version?
     
  16. stdPichu

    stdPichu Kustomer

    Joined:
    Nov 13, 2012
    Messages:
    34
    Likes Received:
    0
    The only reason for the 226 vs. the 224 was that at the time I bought the hardware the 224 was out of stock and priced higher than the 226 (something like £180 vs. £165); other than the SATA ports and slightly different BIOS there shouldn't be any real difference. Looking now, the C224 model appears to be about £20 cheaper than the C226 one.

    Glad you've enjoyed the thread, it's been fun dusting off my somewhat rusty skills in getting samba tweaked (especially with the AD domain) and other frippery that's happened since I last built a file server... and I don't even have the RAID array built yet! Sadly the bank holiday weekend was filled with too much relaxation and too many daiquiris to get any real work done :)
     
  17. MikeBuzz

    MikeBuzz New Member

    Joined:
    Feb 20, 2014
    Messages:
    11
    Likes Received:
    0
    I have another question: what setting do you run your fans at? I have read that some people are having issues with the smart fan setting never turning the fans on. I am looking to replace the standard fans with some Noctua NF-S12A PWM ones.
     
  18. stdPichu

    stdPichu Kustomer

    Joined:
    Nov 13, 2012
    Messages:
    34
    Likes Received:
    0
    With the 1.70 BIOS, the default smart fan mode certainly didn't turn on the CPU fan the whole time - it would turn on when the CPU hit about 50 degrees and turn back off again at 40 or so. I wouldn't really call it an issue per se since nothing ever overheated but I can see how it might put extra strain on the bearings and could be annoying if you've got a non-silent CPU cooler.

    BIOS 1.80 seemed to rectify this however, and the CPU fan runs constantly now, as do the fans on the rear (which are hooked up via PWM to the mobo ports). I didn't have the case fans hooked up when I was running 1.70 so can't verify that either way.

    For the E3C224D2I, it looks like the fan fixes should be in the 1.60 BIOS. There's a corresponding v13 BMC firmware for both as well.

    As to what settings my fans are at - I think they're at whatever the 1.80 defaults are, either smart fan or formula fan setting. Currently the NAS is almost completely silent.
     
  19. MikeBuzz

    MikeBuzz New Member

    Joined:
    Feb 20, 2014
    Messages:
    11
    Likes Received:
    0
    Thanks for letting me know

    Got my E3-1230v3 and 16GB ECC RAM today, so I just need the time to set it all up now.
     
  20. Archaon

    Archaon Eats, Drinks, Sleeps Kustom

    Joined:
    Jul 5, 2004
    Messages:
    5,421
    Likes Received:
    0
    That seems like a lot of CPU in a little space. Personally I probably would've gone with one of the low-power E3 models or one of the new 4/8-core Avoton boards.