Category Archives: IT

Wicked, New Orientations

The Truth of Vertical vs. Horizontal Key Layout

Few things in life are definitively good or evil in their entirety. This is one of them.

I have no idea what these keys in the center of keyboards are named. What I do know is that recently some manner of vertical orientation was spawned, and it is an abomination.

Be very careful when you buy keyboards now or you may find yourself wildly scrolling back and forth uncontrollably in your applications, or typing over large swaths of text instead of inserting. Even jumping to locations you never intended!

This new vertical orientation is a most vile, confusing and even dangerous development. Be wary!

LVM Basics – A Quick Intro and Overview by Example

This article is published at Linux Tools of the Trade.

Storage technologies and methods can be a confusing subject when you’re first starting out, and depending on which route you choose, and how you organize them, these storage methods can continue to be confusing even when you know lots – unless you plan and organize well, at the beginning.

Take the time to think about what you want to have before losing yourself in the details of any given technology. And keep this in mind as you create.

Following is a very generalized overview of some practical LVM use, presented in simple, no-frills terms. LVM tends to scare people away, seeming complex at first. And it can be, but it doesn’t have to be. This is the easy way, to get you started with LVM if you’re interested. You can make it harder (and perhaps better) later.

What is LVM? Why use it?

GNU/Linux gives you many options. LVM, the Logical Volume Manager is one of them. LVM lets you combine disks and partitions and even arrays of disks together into single filesystems.

LVM is very flexible. None of your storage media or arrays need to match up – you can combine a high performance PCI-e RAID10 array with an external USB 3.0 4TB hard drive if you like. Not always the best idea performance-wise, but often performance isn’t the most important thing.

LVM performs very well, too. Is your RAID0 SATA array running out of space? Add 2 more drives in another RAID0 and combine their capacities together. Or just throw in one new drive. It’s a messy way to do it, but you can. And it would still be fast.

Or perhaps you’re more sober, and want to create a SAN for your network, starting off with a 4-disk RAID10 with smaller drives. But you’re worried the space might fill up, and you can’t have the files span multiple filesystems. If you use LVM, you can just put in a second array at a later date, and expand your logical volume seamlessly.

LVM Details

From a designer and administrator standpoint, you can think of LVM as having 3 main components:

  1. Physical volumes (devices or arrays).
  2. Physical volumes you bundle into made-up groups.
  3. Logical disks created and carved out of those groups you made.
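Each of those three layers has its own listing command, which is a handy way to keep the model straight (this assumes the lvm2 tools are installed):

```shell
# Inspect each LVM layer, bottom to top:
pvs   # physical volumes, and which volume group each belongs to
vgs   # volume groups, with total and free space
lvs   # logical volumes carved out of those groups
```

These are safe, read-only commands — good to run any time you’ve forgotten what you built.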

This is where your planning comes into play (or not if you’re bad).

I’ve gotten into the habit of always using LVM to create the volumes which I will then format into filesystems. This gives me the flexibility to add more capacity, or take some away, at any point in the future.

The Debian distribution has long supported creating and installing itself into LVM volumes. Most of the major distributions now seem to support this as well.

So, using LVM, it’s a simple matter to take your 6TB RAID array and make a nice little root drive of 10GB for Debian, and a 5TB home directory, that you could then share with another 10GB partition you might make later for, say, Fedora.

Of course, you could partition your RAID array with extended partitions to achieve much of the same result, but you would not have nearly the flexibility later.

The Nitty Gritty

Working with LVM requires that you have the LVM tools. In Debian you can get these installed for you automatically if you choose to install onto an LVM volume, or you can get them later with

apt-get install lvm2

Instead of “normal” devices for hard drive volumes, you’ll get logical devices that are created by the device mapper. In Debian, at least, they live under /dev/mapper, and you also get a prettier version of the device names you create under /dev/<volume_group> — and you get to choose the <volume_group>.

Creating Physical Volumes

The first thing you need to do is decide which devices you’d like to take over as LVM-controlled beasties. You can also specify individual partitions of disks if you like. The point being, you don’t have to even partition a disk first – you can specify the whole device if you like. Or partition it if you prefer. People will always argue over what’s best, and there is no clear winner. Which means, do what you like!

Let’s say you have 2 SATA3 drives, /dev/sdb and /dev/sdc. And you want to use them for LVM. The first thing to do is claim those physical volumes for LVM:

pvcreate /dev/sdb /dev/sdc

And lets suppose later you decide you also want to use your RAID 1 drive (/dev/md0) for LVM as well. You can designate any physical volume to be used with LVM, at any time.

pvcreate /dev/md0

Oh, I also have a USB 3.0 drive I might want to do something with:

pvcreate /dev/usb_drive_0

Do be careful though – consider any data on these destroyed.

Creating Volume Groups

Once you have some physical volumes in your system designated for LVM use, you can start grouping them together. Creating volume groups doesn’t give you anything you can format into a filesystem; it just groups together whichever physical volumes you want to use into a named group that you can reference as one.

Think of it as the first abstraction layer — which of these physical devices do I want to group together and use as a big pool of space, from which I can make my littler hard drives.

You might want to consider how fast the drives perform when grouping. For example, you probably don’t want to group your super fast RAID array in with your USB drive. But maybe you’re fine grouping your RAID array in with your other SATA drives.

Yes, yes indeed, we’re fine with grouping our RAID in with our SATA drives. It will be fast. And the USB will be slow. How about we name these groups to reflect the fast and slowness of each. Maybe I can use the slow for backups?

vgcreate fast /dev/md0 /dev/sdb /dev/sdc

That’s it. Easy to create a volume group. Just pick a name, like “fast” as we did here, and then list the volumes you want to use in it — ones that you created with pvcreate. And if you’re wondering, yes you can skip the pvcreate part above — but don’t — until you’re very sure that all this structure stuff is going to stick in your head.

Now, even though we only have one device in our USB drives, we may later add more nonetheless. So why not put it into an LVM group now – then later if we need to expand our backup area with more space, we can add a second USB drive, and just add it to this same group for instant gratification.

We’ll call this one slow:

vgcreate slow /dev/usb_drive_0

So we now have two spaces we can work with – the “fast” space, and the “slow” space. These are the volume groups. And we no longer have to care about which devices make them up. Well, until something goes wrong.

I suppose it’s worth noting that LVM provides no redundancy or backups, by default, although there are some great mechanisms built into it for doing so later, in some… interesting ways. So it’s really best to make sure your physical volumes have the level of redundancy and protection you want, before making those physical volumes into logical ones. I almost never make an LVM volume out of anything but a RAID array, unless I just don’t care that much about losing data, because I’m so awesome about backups (mm hmm).

Creating Logical Volumes

Now you can create logical volumes in your volume groups, and these logical volumes you can format just like hard drive partitions.

When you create your groups, LVM will allocate all the space possible on those devices, and make it available to you to create your logical volumes in. If you ever want to see how much space you have:

vgdisplay fast

You’ll get to see how much space you have, and also how much space has already been allocated to logical volumes.

Now, lets say I want to have a 100GB “disk” for my database files. Fast disk access is important for database work, so we’ll put that on the “fast” volume group:

lvcreate -n database -L100G fast

This will create a logical device of 100GB size called /dev/fast/database (at least in Debian). A more pedantic device name is also created called /dev/mapper/fast-database but I prefer using the first naming convention.

You can then use this newly-created device just like you would any hard drive. You can partition it if you like, or you can just create a filesystem on it:

mkfs.ext4 /dev/fast/database

Now, if you decide you just must partition these, they can be a little tricky to mount. Using a tool called partx you can extract that partition table out into devices you can work with, but we won’t go into that just now, here.

But if you skip the partitioning bit, these logical volumes mount just like normal filesystems, except really, they can be coming off of any number of drives in your volume group.

mount -t ext4 /dev/fast/database /var/mydatabase

Easy as pie! (when you don’t have to make the crust)

The same holds true for your slow USB drive:

lvcreate -n backups -L 2T slow
mkfs.ext4 /dev/slow/backups
mount -t ext4 /dev/slow/backups /mnt/backups

Here we created a “drive” of 2TB called “backups”, formatted it, and mounted it at /mnt/backups. This would be our USB drive.
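Worth noting: mounts made by hand like this won’t survive a reboot. To make them permanent, you’d add entries to /etc/fstab using the same device names (paths taken from the examples above):

```
/dev/fast/database   /var/mydatabase   ext4   defaults   0   2
/dev/slow/backups    /mnt/backups      ext4   defaults   0   2
```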

Adding More Drive Space to an LVM Logical Volume

Now that we’re demystified, and have been using our system just fine for weeks, we’ve ended up filling /dev/slow/backups with backups. Let’s say our USB drive was 3TB. We only allocated 2TB to our logical volume for backups, so really we have 1TB left free in the “slow” volume group. We could see this with

vgdisplay slow

So, if we wanted, we could just increase the size of our logical volume /dev/slow/backups to eat up that extra 1TB that exists in that volume group, and so give us the full capacity of that USB drive:

lvextend -L+1T /dev/slow/backups

Here we give a relative size in the -L parameter, saying we want to add 1TB to the already-allocated size of 2TB, making a total of 3TB. Of course, this doesn’t make the filesystem bigger, just the “disk” the filesystem is on. But ext4 is easy to resize:

resize2fs /dev/slow/backups

By not specifying a specific size, resize2fs will just make the filesystem as large as it can on that “drive”. And you don’t have to take the disk offline either — this is an online resize, and I’ve never had it fail. But of course you should do it offline. But I never will.
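Newer versions of lvm2 can also fold the grow-and-resize into a single step: lvextend’s -r (--resizefs) flag runs the filesystem resize for you after extending the volume. The equivalent of the two commands above:

```shell
# Grow the logical volume by 1TB and resize the ext4
# filesystem inside it, all in one command
lvextend -r -L+1T /dev/slow/backups
```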

You can also shrink filesystems and volumes, but I’m not going to go into any of that here because shrinking filesystems is unnatural.

But suppose even 3TB is not enough for your backups. Well, it’s easy to add a new device to a volume group, and make it available for logical volume to eat up in its gluttony.

You buy another USB drive! So now we need to add it to the “slow” volume group. You got a good deal on a 4TB drive this time. So you have your original 3TB one, and now a 4TB one you’re going to use to expand your storage. Assuming it’s assigned /dev/usb_drive_1:

First, create the physical volume, as usual:

pvcreate /dev/usb_drive_1

Then add it to your “slow” volume group:

vgextend slow /dev/usb_drive_1

After you do that, if you look with

vgdisplay slow

You’ll see that you now have a whopping 7TB volume group called “slow”.

Then you just do the same as above to extend your logical volume, or “drive”, for your backups:

lvextend -L+3T /dev/slow/backups
resize2fs /dev/slow/backups

Et voila! You now have a 6TB backup “drive”, because you didn’t use the full 4TB off the new one. You left 1TB free in the “slow” volume group, because you’re not always a complete hog, and you may like to use that 1TB for some other volume in the future, like:

lvcreate -n pron -L1T slow
mkfs.ext4 /dev/slow/pron
mount -t ext4 /dev/slow/pron /mnt/relax

Because, well, if not a hog, then perhaps a pig. Oink.  And that uses the last of the drives.


You can, of course, remove devices from LVM groups as well. You just need to make sure you have enough space in the volume group to do so. For example, if you have 2 USB drives, one 3TB and one 4TB, making a total of 7TB — if you’re using up 6TB in logical volumes, you won’t be able to take away either of those USB drives, unless you can shrink down your logical volumes first (after shrinking your filesystem first, unless you like agony).
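The commands for that evacuation, as a sketch (assuming the drive to retire is /dev/usb_drive_1 from the examples above, and the group has enough free space to absorb its data):

```shell
# Migrate all allocated extents off the drive onto
# the other physical volumes in the same group
pvmove /dev/usb_drive_1
# Drop the now-empty drive from the volume group
vgreduce slow /dev/usb_drive_1
# Wipe the LVM label so it's an ordinary disk again
pvremove /dev/usb_drive_1
```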

The point being, LVM is quite flexible.

One of my other favorite features is the ability to take a live snapshot. This is great for backing up whole disk images of a live-running system. You create an LVM “snapshot”, and then you can dump that image anywhere you like, and the filesystem will be in a consistent state, even with the system running. It’s wonderful for virtual machine images especially.

But I’m not going into that either here, just  yet.
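Still, for the curious, the basic shape is only a few commands — names and sizes made up here, with /dev/fast/database from the examples above:

```shell
# Create a 10GB copy-on-write snapshot of the database volume
lvcreate -s -n database-snap -L10G /dev/fast/database
# Dump a consistent image of it at leisure, system still running
dd if=/dev/fast/database-snap of=/mnt/backups/database.img bs=4M
# Throw the snapshot away when done
lvremove -f /dev/fast/database-snap
```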

Hopefully this might have helped you a little. I remember when I was first looking at LVM it was a confusing mish-mash of all these different options, and nothing spelled it out simply. Hopefully, this has managed to do so, if you’re looking to get started with it, for play or production.

And as always, check out the man pages. There are lots of other places out there too with more detailed and specific use cases and feature examples.

Honestly, using LVM has changed everything for me. It’s certainly worth looking at. Best to you!

Overclock Experience on AMD FX-8350 CPU on ASRock 990FX Extreme9 Mobo Using 2400 Speed Memory

A few months ago I decided to sacrifice my AMD FX-8150 – re-purposing it as a decent 8-core virtual server instead. In its place I purchased one of the new AMD 7850K Kaveri APU’s. My former FX-8150 workstation had an Nvidia 670 graphics card and the system consumed a lot of power, even when barely being used for anything. The thought of a 95w Kaveri sounded great.

And it was – with the new Kaveri 7850K chip as my CPU/GPU (APU) and the Nvidia card removed, the system rarely consumed more than 65w (including an LCD 24″ monitor)! When I played the occasional game on it, or the odd video encode, the power would spike up to 150w easily enough, since I had it overclocked a bit. I love this little system and still keep it. But I found that there are times when I really need the raw horsepower I gave up with the FX-8150.

So I decided to purchase the newer AMD FX-8350 instead. Of course, this chip isn’t all that new, really. But after looking at various “not-just-mainstream-talking-head” benchmarks, and seeing it compare reasonably well with the much more expensive Intel offerings, even their latest and greatest, I decided to go with it. My old FX-8150 was so solid. I was hoping the FX-8350 would be the same, and give me a little more performance as well.

The Kaveri APUs benefit greatly from very fast memory. The talking heads out there claim that the FX-series processors don’t benefit that much from faster memory, and many claim that the AMD memory controller can’t even handle faster memory speeds well, past 1866 MHz. I decided to purchase faster memory nevertheless, thinking I could always use it in the APU system, since I wasn’t that thrilled with the more bargain Team Group memory I purchased for it. So I bought the AMD Radeon Gamer series memory, 2 sticks of 8 gigabytes rated for 2400 MHz speed at a CAS latency of 11. Expensive, but I didn’t want to mess around this time, wondering.

I also bought the obligatory aftermarket CPU cooler: a Hyper 212 Evo. It’s a beast of a hunk of metal, but I kinda like that. And no matter what, I’m not putting water inside my computer. I’ll just keep the clock speeds down (and power consumption).

For the motherboard, I decided upon the ASRock 990FX Extreme9. I was going to go with the ASUS Crosshair V Formula-Z, but it was always out of stock at Newegg, and I’ve recently become more skeptical of ASUS’s quality. I only ever used ASRock boards one other time, for a router I was building, and the thing was a good price, and very solid. So why not? The Extreme9 even had the Intel NIC on it, and a 12x power phase, which is unheard of. So anyway, that’s the board I chose, and it was only $169 – while the FX-8350 I got for $179. 🙂 The 16G of 2400 memory was the most expensive of all at $199!

Anyway, to the point. That’s what I have, and why I got it. In this machine is also a Bluray SATA drive, 2 3T SATA hard drives, and 2 120G SSD’s. All of that, and one ASUS LCD monitor are plugged into a UPS to draw power. So I can see my power utilization. Not uber scientific accuracy of course, but close enough just to have a look-see. Oh and the big power draw (supposedly), I splurged on a new graphics card as well, an R9 290 OC – just to keep it in the family.

I shelled out the $100 to Microsoft as well to get a Windows 8.1 Pro OEM license. That always makes me happy.

This AMD FX-8350 machine does run very solid. Like a tank. Just like my FX-8150, I never can seem to bog it down in its responsiveness, no matter what I’m doing, including virtualization.

I don’t overclock that often, and don’t know a ton about it. However, I was surprised that I could easily get the AMD FX-8350 CPU up to 4.4 GHz and the memory up to the full 2400 MHz speed, all while just using the air cooling of the Hyper 212 Evo! It honestly shocked me.

Of course, that’s no big deal unless you are running the CPU at full throttle for a long period of time. And what better way to do that than to encode HD video using Handbrake – which maxes out every single core for hours on end. It was my test, both of thermals and voltages, as I fine-tuned things.

People get confused about CPU temperatures. There are 2 different kinds. There is the CPU temperature at the socket, and there is the CPU temperature of the CPU cores themselves, within the chip. Both temperatures have different manufacturer suggestions/limits.

Using the ASRock motherboard’s automatic overclocking setting to reach 4.4 GHz on the CPU and 2400 MHz memory speeds, with Handbrake running continuously, my CPU core temperature maxed out at 80°C. The thermal thresholds of the CPU cores reached AMD’s predefined limits, and the voltage automatically dropped at brief intervals to keep the temperatures below the supposed damage threshold.

So I thought, well, I should be able to lower the CPU voltage some, and the Northbridge voltage as well, and still be stable — and this should lower both my temperatures and power consumption. My thinking was, the motherboard manufacturer would want to pick voltages that were on the more greedy side to make sure the overclocks were more likely to work.

This proved to be a good move. I managed to lower the voltage on both the CPU and northbridge without sacrificing any performance, bringing the thermals down well below thresholds, and decreasing the power consumption by about 30 watts.

I’ll show you some screenshots I took while I was in the middle of running those Handbrake video encodes that kept the FX-8350 CPU cores pegged at full. The power draw you’ll see is reported from the UPS the system is plugged into. So here is a list of devices that are currently drawing power on that device:

  1. AMD FX-8350 CPU
  2. 2x8G AMD Radeon Gamer series memory @ 2400
  3. 6 120MM case fans
  4. Seasonic Gold something power supply 😉
  5. AMD R9 290 OC (MSI)
  6. Yeti microphone
  7. ASUS Bluray SATA drive (not actively spinning)
  8. 2 3T Seagate Barracuda hard drives
  9. 2 120G SSD drives (Samsung and OCZ V4)
  10. ASUS VN247 LCD monitor

With all of that, and the CPU pegged and overclocked to 4.4 GHz, the system was drawing 307 watts! Of course, if the graphics card were going like crazy, it would be significantly more. But it just amazes me how little that graphics card will draw, too, when it’s not being used except for dual-monitor 1080p (one monitor is plugged into that UPS while the other isn’t).

When the system is idle but awake, just doing its normal system-y things in the background, all those things draw 121 watts with the CPU at 4.4 GHz still. Absolutely nuts! That’s some amazingly good power-awareness work, in both the CPU and video card.

As you can see from those screenshots, there is the idle power draw and the fully loaded CPU power draw running maxed-out FX-8350 on all 8 cores. Also, the AMD Overdrive screenshot shows those cores all maxed out, along with the “thermal margin”. This “thermal margin” value is often confusing to people it seems. It represents the number of degrees you have left to heat up before you reach AMD’s predefined maximum safe temperature per core. By lowering voltages I was able to give myself a comfortable thermal margin while still maintaining a completely stable 4.4 GHz overclock that ran and ran and ran.

The “ASRock Extreme Tuning Utility” screenshot shows ASRock’s included software overclock utility that came with this 990FX Extreme9 motherboard. It’s not the greatest utility – but it’s ok for tweaking some things. The BIOS is the place to do it, and the boot-to-UEFI feature is great. I am incredibly pleased with this motherboard. The ASUS stuff has seemed so buggy lately. I am convinced that there is no way I could have gotten such a stable overclock with such low voltages were it not for this fine board (and perhaps the silicon die gods’ favor).

The last screenshot above is the CPUZ utility showing the memory speed and timings, in case someone doesn’t believe that an FX-8350 can run with 2400 speed memory. There it is! It’s using those AMD memory modules, though.  And if you look at the northbridge speeds in the ASRock utility screenshot, you’ll see that the bandwidth is there. I could probably even press it further. Haven’t tried yet, though. It most certainly increased my AIDA64 scores below. The AMD chips, even the FX ones, actually do seem to benefit from fast memory.

All in all, I’m extremely happy and surprised by this system. I’m also impressed with the memory bandwidth AMD has provided even on the FX series processors. I had an evaluation copy of the AIDA64 test suite, and included the benchmark results below.

What astonishes me is that there are cases where this FX-8350 CPU greatly outperforms even the i7-4770k from Intel. Of course, there are cases where the Intel i7-4770k CPUs outperform the AMD FX-8350 as well. The price difference between the two is huge, though, especially when you take into account motherboards with comparable features.

I used to run i7’s several years ago, but switched to the FX processors after experiencing how much better the AMD chips handled virtualization. I have no benchmarks, but using the systems I could certainly feel the difference. And virtualization is a lot of what I do. Playing games, I can never tell the difference. But if I’m playing a game on a system that’s running some load in a virtualized environment at the same time, the AMD system runs smooth, while the i7 system acts choppy. That’s why I switched.

But all silly Intel vs. AMD stuff aside, if I look at just this chip, and even the small overclocking up to 4.4 GHz, I can certainly notice a huge performance gain while transcoding video with Handbrake. I have also noticed that running the memory at 2400 MHz most definitely improves the responsiveness of the system, such that I can’t even tell when I’m running with all the cores maxed out.

Honestly, I was a little hesitant about going with the FX-8350 chips, since they are older than the newest releases from Intel. But right now, I have absolutely no regrets. They are still amazingly great performing workhorses and absolutely rock solid. Especially if you invest in the quality components.

AIDA64 Extreme benchmark test results:

Anyway, I hope you have found something useful in all this. It’s hard finding any more detailed information out there related to specific use cases and experiences.

I’m so pleased with this purchase and have absolutely no regrets about spending the money for the quality components. And no regrets about not spending twice even that much for an Intel-based system.

Besides the incredible solidity of this system, the thing I’m most impressed with is how well it utilizes power. Although the FX-8350 chip isn’t the most power-efficient chip, it’s not bad for an 8-core! And it seems like AMD has gone to some great lengths to only draw power when you really need it, whether it’s a CPU or a GPU. I swear that R9 290 isn’t drawing any power it seems. You do see it when you’re gaming though.

Oh, and I should mention, I overclocked this while leaving Cool’n’Quiet enabled in the UEFI, and also the C6 state on the CPU, which gives it the ability to save lots of power. This has not impacted the stability of the overclock at all. Then again, I’m hardly pushing this chip to anything close to what it’s capable of, either.

Anyway, just thought I’d share my happiness and enthusiasm in case you might be questioning similarly.

BTW – the hardware support for AES encryption on this chip is phenomenal. Encrypted disks and folders? No worries. 😉

Compiling Samba 4 on Debian Wheezy – Active Directory Domain Controllers Ho!

I’ve managed to avoid working with Microsoft’s Active Directory for many years, which is actually somewhat of a skill. But recently a client, unhappy with the support and the direction their MS “specialist” was taking them, asked me to see what I could do with their network.

Long ago I advised them to steer clear of Active Directory if they could, because it would only tie them in to more and more expensive MS “necessities” over time. This is the position they found themselves in, years later, having to shell out more and more money to MS and their MS-oriented “consultant” just to keep things running – and not running well, either.

It was important to this company that they remain able to manage user identity and authentication from a central place, as well as authorities and permissions. So I thought it might be a good time to at last examine Samba-4 and its claims to support Active Directory.

The Samba-4 guys can claim anything they like related to Active Directory and I would be none the wiser. I knew nothing of AD. But that soon changed as I delved into Samba-4. I must point out that the things I say here are my own impressions and conclusions based upon next to no research – so I could be quite wrong in some places.

It turns out that Active Directory is an unholy marriage of DNS, Kerberos, LDAP and CIFS. Unholy only in that it tries to obscure the individual technologies. On the MS side of things, they like to include DHCP, but it isn’t necessary at all.

Maybe I shouldn’t say that it tries to obscure the individual technologies. Maybe I should say it tries to unite them in holy simplicity for the good user.  Yes, that’s it.

The tricky key (and shackle) is DNS. I always wondered why Windows clients had to use the Active Directory server as their DNS server – it seemed so limiting (and error-prone). It turns out that Active Directory will “inject” funny yet specific DNS names into your domain that identify the AD server to clients. It’s not necessary to be designed that way of course, really – but it’s a good hook. Windows clients joining a “domain” expect these funny DNS entries, and it does no good just specifying the AD server to connect to, unless you have these DNS entries being injected there as well. (salutes and rifle fanfare, etc.)

As for Kerberos and LDAP – anyone who’s worked with them knows it can take some strenuous wrestling to get stuff seated and right for handling your user auth stuff. And in this I am actually impressed with Active Directory. MS has done a great job integrating these Free technologies into something standardized on a platform. Although there are many ways this can be accomplished, Microsoft’s dominance on client machines made a standardization possible. And I’m happy that the European courts saw fit to rule in a way that allowed these Free technologies to be free once again — and this is where Samba-4 comes in.

If you have worked with Samba in the past, you know how versatile it is for file serving, and how complicated it can get. I don’t think I’ve ever dealt with a longer man page with more options. Samba 4 is no different. However, in some ways, it’s much easier than Samba 3 if you’re using the standard Windows administration tools to administer the users and shares. From my understanding so far, you basically just put the shares you want into the smb.conf file with minimal definitions, and define the user authority stuff through the Windows tools connected as an Administrator to Samba 4. If you’re managing rights on share servers other than your Samba 4 DC, then you don’t even have to worry about defining them in the smb.conf file.

But of course you can if you want – there is a command line tool, samba-tool, that gives you access to the same stuff that tweaks this marriage of Kerberos, LDAP and DNS – without the need of Windows at all.

Anyway, enough of these background thoughts. The Samba team has done a great job. A really great job. And I’m going to donate some dollars to them, because they do need pizza, even though they say they don’t.

So, being mostly a Debian guy, I decided to try Samba-4 out in Debian Wheezy. The Wheezy repositories have an older version of Samba-4, of course. This is one of those rare instances where I will compile my own version of a package outside the normal Debian space, since Samba-4 is so new and only recently became stable, in the more unix-y sense of stability.

And it’s not that hard to compile and get Samba-4 running in Debian Wheezy. And it’s certainly worth the time if you want to replace an Active Directory Domain Controller with Samba-4 or to just play with it, to see what it’s all about. I took some notes while I was doing it, which I decided to share here, since other people have found my doing so helpful previously, on other systems.

Note: It looks like Debian Backports is updated with a newer version of Samba4 at last. This is a great way to go to avoid compiling and maintaining your own. I’ve tried it, and it works well. FYI
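If you’d rather go the backports route, it’s just a matter of adding the backports repository and installing from it (repository line as it stood for Wheezy):

```shell
# Add the wheezy-backports repository and install Samba 4 from it
echo "deb http://http.debian.net/debian wheezy-backports main" \
    >> /etc/apt/sources.list
apt-get update
apt-get -t wheezy-backports install samba
```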

Do Your Debian

I used a KVM virtual machine to create a Debian Wheezy installation that would run Samba-4. I think it’s probably a good idea not to use a production server at first. If you use a VM, you can always just trivially put it into production later.

During the install, I chose the most minimal installation package option with the addition of an SSH server.

Of course, this will probably work just as well with other distributions if you get your library dependencies right. Ubuntu may work with no modification, but I’m not sure.

Kerberos is very finicky about time. You will need an ntp server to keep your clock well synchronized.

apt-get install ntp

Also, generally I like to assign my servers static IP’s. And it also seems like the AD stuff does not like changing IP addresses once it’s been set up. Seriously. It’s probably an ingredient in the unholy glue.

edit /etc/network/interfaces

Change your “dhcp” flag to “static” and give yourself your proper address and routing info.

auto eth0
iface eth0 inet static
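A complete static stanza looks something like this — the addresses here are made up for illustration, so substitute your own:

```
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
```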

Unless you’re right on top of your DNS zone information, including PTR records, you should probably edit your /etc/hosts file too, to include the machine name you’re going to use:

edit /etc/hosts

I’m not really sure about the entry here, but it freakishly seemed to work for me. And I’m not sure why I did it. And it may not be necessary. I think it must not be.

127.0.0.1       localhost
127.0.1.1       samba    samba

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

As for DNS, you can use Bind9 just fine with Samba 4 — but Samba 4 also has its own built-in DNS server that does that filthy injection. If you want to use Bind9 as your backend DNS server, you can, but you will need to allow the Samba 4 server to dynamically update the zone for your domain with Kerberos. There are howtos on that. I chose to just let Samba 4 use its own built-in DNS server. Because I’m lazy. And I’m just playing for now. And I don’t like a “domain controller” being able to update my real DNS zone file.

This leads to an interesting, and by that I mean boring and unnecessary, discussion of how you should name your Active Directory “domain”. There are a few schools of thought on it, and even Microsoft has changed their tune over time on the subject. I have chosen to name my Samba 4 “domain” as a “subdomain” of my root domain – that way the Active Directory stuff doesn’t have to be authoritative for my whole domain, and I don’t have to make up a fake domain either.

And leave it to Microsoft to terribly confuse everyone by “making it easy”. By domain they do not mean a DNS domain. It’s a hybrid abomination of DNS and what is known in Kerberos as a “realm”.

So yes, well, I made Samba 4 be the DNS server, but it will also do sensible lookups to the real DNS information from my proper DNS server when it doesn’t know a name. That’s why I named it as a DNS “subdomain” (host) rather than the whole domain. For resolution:

edit /etc/resolv.conf

Now, in Ubuntu you’re going to have to do some special editing of configs to keep Network-Manager from overwriting your resolv.conf file after you make these changes.


nameserver 192.168.1.10
nameserver 192.168.1.1

The first should be your Samba 4 installation IP. The second should be your real DNS server.

It’s probably quickest and dirtiest to reboot after all this, if you like that sort of thing. BTW – make sure your /etc/hostname matches your DNS hostname. I don’t know if it’s necessary, but how can you stand it otherwise??

Debian Requirements to Compile Samba 4

I should mention: if you plan on having your Samba 4 server also be a file-sharing server, with the Active Directory stuff managing the users and permissions for you, you need to make sure that whatever filesystem you’re going to be serving out supports ACLs and extended attributes. In Debian this is a normal part of their ext4 mounts, and I think their ext3 ones as well. So you’re set!

But still, it might be good to put it in your /etc/fstab, just as a reminder. Do, of course, use your own partition’s UUID, and whatever mountpoint you want to share.

UUID=b99750a8-9c39-11e3-82f1-525400990c6c   /home ext4      user_xattr,acl  0       2

Many docs also want you to specify barrier=1 as a mount option, to make sure stuff doesn’t get corrupted in a power failure. This is enabled by default in ext4, but you may want to add it on ext3. And if you’re using LVM volumes, barriers are passed through and respected now. Ah, the wonders of the modern world.
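If you want to double-check your mount before sharing anything, here’s a quick sanity-check sketch in Python (the function name and test path are just mine, not anything from Samba) that tries to set and read back a user extended attribute on the filesystem in question:

```python
# Check whether a directory's filesystem accepts user extended attributes
# (Samba's AD file serving stores ACL data in xattrs on the share path).
import os
import tempfile

def supports_user_xattr(directory):
    try:
        fd, path = tempfile.mkstemp(dir=directory)
    except OSError:
        return False          # directory missing or not writable
    try:
        os.setxattr(path, "user.samba_test", b"1")
        return os.getxattr(path, "user.samba_test") == b"1"
    except OSError:
        return False          # the filesystem refused the xattr
    finally:
        os.close(fd)
        os.remove(path)

print(supports_user_xattr("/home"))
```

If it prints False on your share path, revisit your mount options before going any further.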

Now, what you really want to know: which Debian packages do I need to install when compiling Samba 4? Well, how about these?

apt-get install build-essential pkg-config libacl1 acl libacl1-dev libblkid-dev libblkid1 attr libattr1 libattr1-dev libgnutls-dev libreadline-dev python-dev python-dnspython gdb libpopt-dev libldap2-dev dnsutils libbsd-dev krb5-user docbook-xsl libcups2-dev libncurses-dev libpam0g-dev libdm0-dev libfam0 fam libfam-dev xsltproc libnss3-dev docbook-xsl-doc-html docbook-xsl-ns

If you don’t have other Kerberos servers, well, I just used this server as my Kerberos server, and it works just fine. The krb5 packages will ask for an initial realm, defaulting to your domain name in upper-case – I made that the FQDN in upper-case as well. Apparently the realm likes to be upper-case.

Maybe you’ll want to reboot again, after the acl stuff. Maybe not. Maybe you didn’t reboot a few minutes ago, so it will only be this one reboot. Or none. I don’t care.

Compile Samba 4

The version of Samba I grabbed was their latest at the time, listed below. They may have a newer version when you read this, so always check the Samba site for the version you want.

I like compiling in /usr/src — and I’m letting Samba 4 install to its default location, which I know is a horrific violation of Debian policy. But I’m naughty.

cd /usr/src
wget https://download.samba.org/pub/samba/stable/samba-4.1.4.tar.gz
tar -xzf samba-4.1.4.tar.gz
cd samba-4.1.4
./configure && make && make install

Oh, the places we’ll go.

After that completes successfully on the first try and love descends upon all humanity, you might want to put the install directory into your PATH environment variable so you can avoid over-stressing your poor little phalanges. Put this in your .bashrc:

export PATH=/usr/local/samba/bin:/usr/local/samba/sbin:$PATH

If you’re feeling particularly cavalier, trusting in the goodness of strangers, that is. And source it! (Or log out/in, open a new terminal, whatever.)

I also symlinked /usr/local/samba/etc to /etc/samba to cut down on typing when editing configs:

ln -s /usr/local/samba/etc /etc/samba

Then you’ll want to make the Samba 4 stuff work, right? The first thing is to provision the so-called domain. I’m leaving it open to do some Un*x-side integration later – that’s why the --use-rfc2307 switch.

samba-tool domain provision --use-rfc2307 --interactive

It will ask you some questions, and here’s where we get into the “domain” naming philosophy again. Just make it the same as your DNS decision above. In my example, the Realm I chose was SAMBA4.MYDOMAIN.COM

Do do the upper-case! Why? I don’t know!

And for the “Domain” I chose “MYDOMAIN” (without the .COM). It’s pretty much like your workgroup setting, is all I can figure.

If you do it this way, then all machines joining your Active Directory “domain” will get the right DNS information for your DNS zone — because the AD server will only consider itself authoritative for SAMBA4.MYDOMAIN.COM and “higher”, but not for all of MYDOMAIN.COM itself — and it will forward those DNS requests on to your proper DNS server when it doesn’t know about them.

So be sure to set your DNS forwarder here to your real DNS server.
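To make the naming decision concrete, here’s a toy Python sketch (purely illustrative – the zone name is the example one from above, and Samba runs nothing like this) of the decision the DC’s built-in DNS ends up making for each query:

```python
# Toy model of the DNS split: the AD DC answers authoritatively only for
# its own zone; every other name gets forwarded to the real DNS server.
AD_ZONE = "samba4.mydomain.com"

def who_answers(qname):
    name = qname.rstrip(".").lower()
    if name == AD_ZONE or name.endswith("." + AD_ZONE):
        return "samba built-in DNS (authoritative)"
    return "forwarded to the real DNS server"

print(who_answers("dc.samba4.mydomain.com"))  # samba built-in DNS (authoritative)
print(who_answers("www.mydomain.com"))        # forwarded to the real DNS server
```

That’s the whole trick: MYDOMAIN.COM stays with your proper DNS server, and only the subdomain belongs to the abomination.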

Cold, Cruel Kerberos

I’ve never known it to be so easy. I’m leaping with joy inside. Or maybe that’s lasagna.

cd /etc
cp krb5.conf krb5.conf.original
cp /usr/local/samba/share/setup/krb5.conf .

Then edit your new /etc/krb5.conf and change the REALM variable to the realm you chose: SAMBA4.MYDOMAIN.COM

I know! Can you believe it! It’s here where I feel a twinge of almost… non-sickness about MS. Ok it may even be stronger than that. A little.

Reboot again. Hahaha!

You Can Dance

Now, just start Samba 4 by typing in “samba”

It will give minimal info in /var/log/syslog – mine complained about CUPS not being there, but it wasn’t enough trauma for it to die, thankfully.

Now you’ll want to set up your administrator auth-y stuff, yes?

kinit administrator@SAMBA4.MYDOMAIN.COM
samba-tool user setexpiry administrator --noexpiry

Bad idea that no-expiry flag probably. But we’ve already established I’m naughty.

That’s about it! You can now fully administer it just like an Active Directory domain controller from Windows, using Microsoft’s Remote Server Administration Tools. Crazy, I know! That link is for the Windows 8.1 download, BTW.

Also, the Samba website has a good howto on stuff like this.

The thing is, when you join a Windows machine to the “domain”, you have to make sure that machine uses your Samba 4 server as its DNS server, just like you would with Microsoft’s Active Directory domain controllers. They need the filthy DNS injection.

Home Directories for Windows Users

If you want to have your Samba 4 server serve out home directories to your users, you can accomplish that pretty easily. It just requires a “[home]” section in your smb.conf file.

That’s not a “[homes]” section like in Samba 3 by the way — just a singular “[home]”. It’s special. Apparently.

That section only requires a path and a not-read-only:

[home]
        path = /home/
        read only = no

You don’t really need local accounts for your users. Samba 4 will create crazy high-numbered fictional users and groups to service your Windows throngs. Just make sure that mountpoint has the acl and user_xattr flags.

Oh, and your administrator account will need the “SeDiskOperatorPrivilege” I think:

net rpc rights grant 'MYDOMAIN\Domain Admins' SeDiskOperatorPrivilege -Uadministrator

This will make it so that, if you use the Windows remote administration tools in Windows, you can create users that can have a drive automatically mapped to their Windows machine when they log in, and Samba 4 will create their home directory automatically.

The setup in Windows is a little convoluted. I’m no Windows person. But here’s a step by step that I followed and it worked great.

It should also be noted that the default setup seems to allow normal workgroup functioning to continue working as well. So even if you have Windows machines that aren’t the insanely more expensive “Pro” version of Windows, you can still map to the shares like you could in a workgroup.

But then again, that raises the question: why bother with an Active Directory Domain Controller at all? Unless you want to spend a lot more money per seat on Windows.

Final Comments

I am impressed with Microsoft’s ability to impose a standardized way of implementing LDAP in conjunction with Kerberos. I am less impressed with their shameless violations of DNS to rope this in.

I haven’t tried it yet, but apparently you can pretty easily have your Linux boxes authenticate against Samba 4 as well. I think I may not be doing that. Well, maybe I will.

It is really nice and compelling that it’s all tied together. And it’s not so bad since Samba 4’s been able to bring it into the light. I’m undecided. It seems to work well.

Anyway, I hope this helped someone. I was very daunted by the whole Active Directory integration mess at first. But these Samba guys really have done a great job. I’ll be showing them some love. Of the monetary type! Well, I suppose unless…

This article is published at Linux Tools of the Trade.

How To Deal With Udev Children Overwhelming Linux Servers With Large Memory

This article is published on Linux Tools of the Trade.

A few months ago a client purchased a new server and asked me to set them up with Linux-based virtual machines to consolidate their disparate hardware. It was a big server, with 128 Gigs of memory. I chose to use Debian Wheezy and QEMU KVM for the virtualization. We’d be running a couple Windows Server instances, and a few Debian GNU/Linux instances on it.

Unfortunately, we ended up encountering some strange problems during heavy disk IO. The disk subsystem is a very fast SAS array on board the server, which is a Dell system. The VMs’ disks are created as logical volumes on the array using LVM.

Each night, in addition to the normal backups they do within the Windows servers, they also wanted a snapshot of the disk volumes sent off to a backup server. No problem with LVM, even with the VM’s running, when you use the LVM snapshot feature. This does tend to create a lot of disk IO, though.

What ended up happening was that occasionally, every week or two, the larger of the Windows server instances would grind slowly to a halt, and eventually lock up. The system logs on the real server would begin filling with timeouts from Udev, about it not being able to kill children. This would, in turn, affect the whole system – making a reboot of the whole server necessary. Very, very ugly, and very, very embarrassing.

I tried a couple off-the-cuff fixes that were shots in the dark, hoping for an easy fix. But the problem didn’t go away. So I had to dig in and research the problem.

It turns out that udevd decides how many worker children it will allow based upon the amount of memory in the server. In our case, 128G – which is quite a lot. This number of allowed children was a simple one-to-one ratio, based upon memory. However, with this much memory, that many children seemed to be overloading the threaded IO capacity of this monster server, causing blocking while live LVM snapshots were being copied.
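To see why that heuristic falls apart, here’s a rough sketch of a memory-scaled cap like the one udev computed – the constants are made up for illustration, not taken from the udev source:

```python
# Illustrative only: a worker cap that grows linearly with RAM.
# The real udev constants differ; the point is the unbounded linear growth.
def udev_children_max(mem_mb, floor=16):
    return floor + mem_mb // 8

print(udev_children_max(4 * 1024))    # a 4 GB box: 528 workers
print(udev_children_max(128 * 1024))  # a 128 GB box: 16400 workers
```

A cap sized sensibly for a desktop becomes tens of thousands of workers on a big server, which is how the IO subsystem ends up buried.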

What I ended up doing was manually specifying that the maximum number of allowed children for Udev would be 32 instead of the obscene number the inadequate calculation in the Udev code came up with. Since doing this, the server has run perfectly, without a hitch, for a good, long time.

So this is for anyone who may have run into a similar problem. I could find no information about this on the Internet at the time, but I did manage to find out how to affect the number of children Udev allows. You can change it while the system is running (which will un-happen once the server is rebooted), or you can put in a kernel boot parameter, until the Udev developers fix their code to provide a sane value for the maximum number of children allowed on systems with a large amount of memory.

At the command line, this is how. I used 32. You might like something different, of course.

udevadm control --children-max=32

And, as a permanent thang, the Linux kernel boot parameter is “udev.children-max=”.
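On Debian, the usual permanent home for a kernel boot parameter is /etc/default/grub – something like this, assuming the stock grub setup (your existing GRUB_CMDLINE line may contain other options you’ll want to keep):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet udev.children-max=32"
```

Then run update-grub and reboot, and the limit survives restarts.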

Hopefully this will save some of you some of my headache.