Category Archives: Indulgence

Fix Slow Network (NAT) after Debian Wheezy Kernel Update 3.2.0-4

NOTE: This issue was fixed with 3.2.60-1+deb7u3 update that came out in Debian’s security update stream.

I noticed a few weeks ago that after a Debian kernel update on my Debian-based router, network performance degraded terribly. Linux clients behind this Debian firewall did not seem to be affected nearly as much as the Windows clients — Windows machines could not upload at all to the Internet once this Debian update was in place on the router.

At first I thought it was Comcast, before I realized that it was mostly the Windows machines that had slow network performance. Sometimes download performance was affected as well – some sites just stalling, and Pandora was practically unlistenable.

After searching around a bit, I found an old bug where the network address translation Linux kernel code had been patched for handling the fragmentation of packets that exceeded MTU values, if I’m remembering right. Apparently this “fix” caused a number of problems with the 3.2.0-4 Debian GNU/Linux kernel when it was implemented along with some security updates.

I started playing around with it on my own, and managed to find a Debian bug where a couple of patches were available that revert the change. This is very, very good, because the network connection was pretty much unusable if you were using IP Masquerading or NAT as a firewall/router.

The bug is documented on the Debian bugsite, along with the kernel patches. But if you’d like a step-by-step, this is what I did to fix the problem on 2 different routers so far:

Prepare

You’ll need some disk space — probably around 10G free. Always back up — if following these steps results in an unbootable machine for you, don’t blame me. It very well could. Particularly if you don’t pay attention, or know things that I can’t even imagine you don’t know. Which is hard. You’ve been warned. It’s a kernel recompile! I’d say wait for Debian to release it in the channel, but it’s been weeks, and I’m sure some of you have been suffering as much as me.
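If you want to check whether the filesystem you plan to work in actually has that much room (I use /usr/src below), df will tell you quickly enough:

# df -h /usr/src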

Install Debian Packages

This is a kernel compile – we’ll be keeping all of Debian’s customizations, along with their current kernel, just with our 2 little extra patches applied. As such, you’ll need some source to compile, and the Debian scripts that automate the Debian Way. It’s a boatload of packages…

# apt-get install devscripts
# apt-get build-dep linux

I know, sweetie.

To The Kernel Source and Patch

I like to do my dirty work in /usr/src – and when doing it, I like to be root, not any of that sudo or fakeroot stuff. So if you’re playing it safe and wise, you’ll need to fakeroot these compiles. I leave it to you. But if you’re willing to be root, here’s the easy way:

# cd /usr/src
# mkdir linux-deb
# cd linux-deb
# apt-get source linux

NOTE! You might want to specify “linux=3.2.60-1+deb7u1” instead of just the plain “linux” there. That way you’re sure to get the right version – the version with the problem, the one this fix matches.
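In other words, something like this instead of the bare apt-get source linux above (apt-cache policy linux will show you which version your system currently sees, if you want to double-check):

# apt-get source linux=3.2.60-1+deb7u1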

As for the patches, I’ll link to the ones provided in the bug report that you can get with wget — I’ve also included them as full text below if you’d rather, in case the cut & paste for these long URIs doesn’t work right for you.

If you can get these two long lines pasted, you’ll get the two patch files output to your working directory. Saw this from Teodor Milkov in the bug – thanks Teo!

# wget --no-check-certificate \
    "https://bugs.debian.org/cgi-bin/bugreport.cgi?msg=50;filename=revert-net-ip-ipv6-handle-gso-skbs-in-forwarding-pat.patch;att=1;bug=754294" \
    -O revert-net-ip-ipv6-handle-gso-skbs-in-forwarding-pat.patch

# wget --no-check-certificate \
    "https://bugs.debian.org/cgi-bin/bugreport.cgi?msg=50;filename=revert-net-ipv4-ip_forward-fix-inverted-local_df-tes.patch;att=2;bug=754294" \
    -O revert-net-ipv4-ip_forward-fix-inverted-local_df-tes.patch

Compile Kernel with the Patches

Now you’ll just cd down into the top of your Debian kernel build tree, then apply these patches and compile. This command line is for the amd64 architecture. You may have a different one.

And replace that -j 8 with the number of CPU cores you have (or fewer).
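If you’re not sure how many cores that is, nproc (part of coreutils) will tell you:

# nproc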

# cd linux-3.2.60
# debian/bin/test-patches -f amd64 -j 8 ../revert-net-ipv4-ip_forward-fix-inverted-local_df-tes.patch ../revert-net-ip-ipv6-handle-gso-skbs-in-forwarding-pat.patch

Now go make some dinner. Do some yoga! Dig in the earth, or paint a room. That will take some time. The first error up top at the very beginning is normal.

Install the new Debian Kernel Package

Now you should have a nice new linux-image-3.2.0-4 deb package file, along with a debug-symbols package and your regular headers package. 😉 This new Debian package, version-wise, is the same as the one in the main stream, only with a ~test — so I believe we should get newer-versioned kernels automatically when they come out.
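If you can’t spot them right away, the freshly built debs should be sitting somewhere around that linux-deb directory we made earlier (exactly where can vary), so just hunt them down:

# find /usr/src/linux-deb -maxdepth 2 -name '*.deb'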

Install this deb with the normal:

# dpkg -i linux-image-3.2.0-4-amd64_3.2.60-1+deb7u1a~test_amd64.deb

It’ll do all your modules and initrd stuff for you, and call your grub menu rebuilder doohickey.

One of my routers failed the install, complaining that it couldn’t make a symlink to the initrd file from / to /boot — that’s because there was no initrd. I solved it by removing my current kernel-image package (ignore the scary warnings if you’re foolhardy) and then running the dpkg -i again on it, and this time the initrd was made just fine. The other router had no problem with it. Go figure.
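If you run into that same missing-initrd complaint, another way out (instead of removing the current kernel-image package first, like I did) would be to just build the initrd by hand and refresh grub afterward. This is only a rough sketch, assuming the standard initramfs-tools setup and that the version suffix matches the image you just installed:

# update-initramfs -c -k 3.2.0-4-amd64
# update-grub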

Hope this helps some of you if you’re having those terrible network performance problems after that last Debian kernel update. I wish they could get these fixed sooner.

Anyway, here are those patches if you need to cut and paste your own, instead of wgetting from those obnoxiously long URIs. Just save each one to its own file, and then be sure to pass those same file names to the test-patches step.

diff --git a/net/ipv4/ip_forward.c b/net/ipv4/ip_forward.c
index 7593f3a..e0d9f02 100644
--- a/net/ipv4/ip_forward.c
+++ b/net/ipv4/ip_forward.c
@@ -42,12 +42,12 @@
 static bool ip_may_fragment(const struct sk_buff *skb)
 {
     return unlikely((ip_hdr(skb)->frag_off & htons(IP_DF)) == 0) ||
-        skb->local_df;
+           !skb->local_df;
 }
 
 static bool ip_exceeds_mtu(const struct sk_buff *skb, unsigned int mtu)
 {
-    if (skb->len <= mtu)
+    if (skb->len <= mtu || skb->local_df)
         return false;
 
     if (skb_is_gso(skb) && skb_gso_network_seglen(skb) <= mtu)
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -2588,22 +2588,5 @@ static inline bool skb_is_recycleable(co
 
     return true;
 }
-
-/**
- * skb_gso_network_seglen - Return length of individual segments of a gso packet
- *
- * @skb: GSO skb
- *
- * skb_gso_network_seglen is used to determine the real size of the
- * individual segments, including Layer3 (IP, IPv6) and L4 headers (TCP/UDP).
- *
- * The MAC/L2 header is not accounted for.
- */
-static inline unsigned int skb_gso_network_seglen(const struct sk_buff *skb)
-{
-    unsigned int hdr_len = skb_transport_header(skb) -
-                   skb_network_header(skb);
-    return hdr_len + skb_gso_transport_seglen(skb);
-}
 #endif    /* __KERNEL__ */
 #endif    /* _LINUX_SKBUFF_H */
--- a/net/ipv4/ip_forward.c
+++ b/net/ipv4/ip_forward.c
@@ -39,68 +39,6 @@
 #include <net/route.h>
 #include <net/xfrm.h>
 
-static bool ip_may_fragment(const struct sk_buff *skb)
-{
-    return unlikely((ip_hdr(skb)->frag_off & htons(IP_DF)) == 0) ||
-           !skb->local_df;
-}
-
-static bool ip_exceeds_mtu(const struct sk_buff *skb, unsigned int mtu)
-{
-    if (skb->len <= mtu || skb->local_df)
-        return false;
-
-    if (skb_is_gso(skb) && skb_gso_network_seglen(skb) <= mtu)
-        return false;
-
-    return true;
-}
-
-static bool ip_gso_exceeds_dst_mtu(const struct sk_buff *skb)
-{
-    unsigned int mtu;
-
-    if (skb->local_df || !skb_is_gso(skb))
-        return false;
-
-    mtu = dst_mtu(skb_dst(skb));
-
-    /* if seglen > mtu, do software segmentation for IP fragmentation on
-     * output.  DF bit cannot be set since ip_forward would have sent
-     * icmp error.
-     */
-    return skb_gso_network_seglen(skb) > mtu;
-}
-
-/* called if GSO skb needs to be fragmented on forward */
-static int ip_forward_finish_gso(struct sk_buff *skb)
-{
-    struct sk_buff *segs;
-    int ret = 0;
-
-    segs = skb_gso_segment(skb, 0);
-    if (IS_ERR(segs)) {
-        kfree_skb(skb);
-        return -ENOMEM;
-    }
-
-    consume_skb(skb);
-
-    do {
-        struct sk_buff *nskb = segs->next;
-        int err;
-
-        segs->next = NULL;
-        err = dst_output(segs);
-
-        if (err && ret == 0)
-            ret = err;
-        segs = nskb;
-    } while (segs);
-
-    return ret;
-}
-
 static int ip_forward_finish(struct sk_buff *skb)
 {
     struct ip_options * opt    = &(IPCB(skb)->opt);
@@ -110,9 +48,6 @@ static int ip_forward_finish(struct sk_b
     if (unlikely(opt->optlen))
         ip_forward_options(skb);
 
-    if (ip_gso_exceeds_dst_mtu(skb))
-        return ip_forward_finish_gso(skb);
-
     return dst_output(skb);
 }
 
@@ -152,7 +87,8 @@ int ip_forward(struct sk_buff *skb)
     if (opt->is_strictroute && opt->nexthop != rt->rt_gateway)
         goto sr_failed;
 
-    if (!ip_may_fragment(skb) && ip_exceeds_mtu(skb, dst_mtu(&rt->dst))) {
+    if (unlikely(skb->len > dst_mtu(&rt->dst) && !skb_is_gso(skb) &&
+             (ip_hdr(skb)->frag_off & htons(IP_DF))) && !skb->local_df) {
         IP_INC_STATS(dev_net(rt->dst.dev), IPSTATS_MIB_FRAGFAILS);
         icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
               htonl(dst_mtu(&rt->dst)));
--- a/net/ipv6/ip6_output.c
+++ b/net/ipv6/ip6_output.c
@@ -381,17 +381,6 @@ static inline int ip6_forward_finish(str
     return dst_output(skb);
 }
 
-static bool ip6_pkt_too_big(const struct sk_buff *skb, unsigned int mtu)
-{
-    if (skb->len <= mtu || skb->local_df)
-        return false;
-
-    if (skb_is_gso(skb) && skb_gso_network_seglen(skb) <= mtu)
-        return false;
-
-    return true;
-}
-
 int ip6_forward(struct sk_buff *skb)
 {
     struct dst_entry *dst = skb_dst(skb);
@@ -515,7 +504,7 @@ int ip6_forward(struct sk_buff *skb)
     if (mtu < IPV6_MIN_MTU)
         mtu = IPV6_MIN_MTU;
 
-    if (ip6_pkt_too_big(skb, mtu)) {
+    if (skb->len > mtu && !skb_is_gso(skb)) {
         /* Again, force OUTPUT device used as source address */
         skb->dev = dst->dev;
         icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);

Impressions of My New Motorola Moto X (not really a review)

Sadly, late last week, my much-loved Nexus 4 phone died. After much testing, it turned out to be a failure of the flash memory on which the system lives, and so the device is fairly well dead.

I’m pretty well determined to keep as close to the stock Android experience as possible. LG is pretty good at that, and they manufactured the Nexus 4, and it was a great price. However, it concerns me when a manufacturer sells me a product that dies less than six months past its warranty. So I am skeptical of LG right now.

That left me considering the Google devices and the Motorola devices. The Nexus 5 looks wonderful, and the price is excellent. I’ve heard many good things about it. And looking through the Motorola lineup, the Moto X stood out as the best option, even above their new “budget” models.

Between the Nexus 5 and the Moto X I was hard-pressed to decide. The Nexus 5 certainly had much better system specifications on paper, but the Moto X was incredibly well-engineered, and creatively so as well.

In the end, the creativity and engineering of the Moto X won out for me, even above the base system specifications. It was also the more premium decision, price-wise, though not by a wide margin, considering a free bumper case was included and Google charged a ridiculous price for shipping last time.

When all was said and done, I ended up with a new phone from Motorola, the Moto X, 32 GB of flash memory, a bumper case, an NFC clip that acts as an unlock, and a real walnut wood case, for $475, including tax.

Most satisfactory, except for the fact that I had to buy it at all, because my Google/LG Nexus 4 failed and forced my hand. I think the main reason I chose the Moto X was because Motorola is the manufacturer, and every Motorola device I have ever owned has worked flawlessly, never dying, and survived everything I dished out to it. In my mind, Motorola has a reputation for reliability and durability, as well as engineering — and they are a company that takes pride in making a solid product. But I have to admit, having a real wood case was also a nice selling point.

Anyway, I ordered it, custom made, with walnut, gold metallic highlighting, orange bumpers, my name engraved on it (I never resell), and all sorts of little custom details about the software innards. It arrived before a week was out, and they were excellent about keeping me informed of the billing, build and shipping progress along the way. A completely satisfactory experience.

The Experience

This Moto X, first of all, is much more fluid than my Nexus 4. And the screen, even though it has a lower resolution, looks better. And best of all, this is the first of the smartphones I’ve owned that actually felt very natural and comfortable to hold in the hand.

Of course, being a Google Android device, it synced itself up all quick and nicely with my contacts once I connected my Google account. And the phone was great at pointing out things you should consider activating or doing as you started to break the phone in, customizing it even further toward your tastes.

I really was surprised at how fast and smooth this phone was. I was imagining that, despite what others had said, I would run into the occasional performance stutter, especially when all the apps were installing themselves as I was trying to do other things. But I never did. I don’t know what these Motorola engineers did, but they did something very, very right.

For a while now, I’ve slowly been getting myself used to dictating messages to my Android devices rather than typing them out. There is always the occasional annoying glitch in its interpretation that you must awkwardly go back and fix manually. Happily, one of the first things I noticed was that this Moto X is noticeably superior at voice recognition to my Nexus 4, and my Nexus 4 was damn good!

I think I remember reading somewhere that Motorola engineers added a small CPU whose sole purpose was to perform voice recognition. I suppose I should verify this before even mentioning it, but I’ll leave that for you to do, if you doubt my memory as much as I do. If they did, it certainly shows.

I remember thinking, when I first heard of it, how unsettling it would be to have a device that was going to be listening to you at all times. Particularly in an age when so many “true Americans” with “American values” have such a fetish for voyeurism and disdain for any privacy. But my Moto X is sitting right next to me, on the right. I know it hears my clicking keyboards, and maybe a fart. And of course, all the lies I tell myself when nobody is around. But it’s not looming there, like I imagined it might, with its own disturbing gravity of ears. Though perhaps it should. I don’t know.

But what I do know is that I love being able to yell out to it from across the room and have it answer or do something for me. Before, I thought, what a silly feature really. I can just hit the microphone button on the search bar and get the same thing. But there is something very different about it just being there, knowing you can just tell it something, at any time, or ask it something, as if it were actually something… in the room with you.

I think it’s impossible to describe. Just like how it feels in the hand. And just like how things move within the screens. And how it knows when you’re in the car driving, and will read out messages to you if you like, instead. These guys at Motorola thought of a lot of things, and they really did an amazing job bringing those things together into an actual working device.

I suppose it all boils down to: I like this phone. I like the Moto X so much that I’m even a little happy that my LG Nexus 4 died, just so that we could be together now. And I don’t have even the slightest hint of regret that I might be missing something, having chosen the Moto X over the Nexus 5. In fact, I’m happy that I did.

Oh, I should also mention the camera. I like taking pictures. From what I was reading earlier, neither the Nexus 5 nor the Moto X supposedly has the greatest camera. But I do like this camera better than the one I had on my Nexus 4. It takes beautiful pictures, to me. And the camera app is very fast. In another very clever design decision, Motorola engineers thought to make the camera start when you flick your wrist. I thought, how silly, really. But the thing is, it’s very useful! And it happens fast!

The thing I don’t like about the camera is that it seems very easy to blur the pictures. I think it must not have any image stabilization, or maybe I just haven’t found it to enable yet. So you have to be aware of your hand and body motion as you snap. This is a little bothersome, being so sensitive. Then again, for years with cameras, I had to worry about the same thing – always using the trick of holding your breath when you shoot to keep the lens from any distorting motions.

I would still say that is a minus of the camera. And really, that’s the only minus I’ve found – amongst so many pluses! The most peculiar and delightful thing about this phone is the pluses you never even thought would be there. The biggest being: the Moto X is just so damn comfortable to be around!

This device really is a truly wonderful dollop of engineering and design baked into a sweet package. It is understated, elegant, and intelligent, at all levels, and at any angle. I honestly don’t think I could be happier with a phone. I could just eat it!

My new Moto X with a walnut back! Picture was taken with my no-good Nexus 7 front-facing camera though, with lots of unsightly reflection from the plant’s artificial sun.

Creating a Samba 4 Domain Member File Server

Just finished writing up a piece on how to integrate a Samba 4 file server into an existing Active Directory Domain.

The odd part is creating a Samba 4 server that doesn’t want to be authoritative, but is instead subject to another server for auth and permissions.

There’s not a lot out there that I found for just a simple file server. There is all kinds of stuff on integrating a Linux system to use AD/DC for centralized auth — like user logins to Linux boxes — but not much on just being a file server that doesn’t need all that extra rigmarole.
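For a taste of what’s involved, the join itself boils down to something like this. It’s only a rough outline; the package list, daemon names and the account used here are assumptions, and the real details (smb.conf settings, idmap, Kerberos, shares) are in the write-up:

# apt-get install samba winbind krb5-user
# net ads join -U Administrator
# service smbd restart && service winbind restart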

Hopefully it will help someone out… 😉

 

Wicked, New Orientations

The Truth of Vertical vs. Horizontal Key Layout

Few things in life are definitively good or evil in their entirety. This is one of them.

I have no idea what these keys in the center of keyboards are named. What I do know is that recently some manner of vertical orientation was spawned, and it is an abomination.

Be very careful when you buy keyboards now or you may find yourself wildly scrolling back and forth uncontrollably in your applications, or typing over large swaths of text instead of inserting. Even jumping to locations you never intended!

This new vertical orientation is a most vile, confusing and even dangerous development. Be wary!

Overclock Experience on AMD FX-8350 CPU on ASRock 990FX Extreme9 Mobo Using 2400 Speed Memory

A few months ago I decided to sacrifice my AMD FX-8150 – re-purposing it as a decent 8-core virtual server instead. In its place I purchased one of the new AMD 7850K Kaveri APUs. My former FX-8150 workstation had an Nvidia 670 graphics card and the system consumed a lot of power, even when barely being used for anything. The thought of a 95W Kaveri sounded great.

And it was – with the new Kaveri 7850K chip as my CPU/GPU (APU) and the Nvidia card removed, the system rarely consumed more than 65w (including an LCD 24″ monitor)! When I played the occasional game on it, or the odd video encode, the power would spike up to 150w easily enough, since I had it overclocked a bit. I love this little system and still keep it. But I found that there are times when I really need the raw horsepower I gave up with the FX-8150.

So I decided to purchase the newer AMD FX-8350 instead. Of course, this chip isn’t all that new, really. But after looking at various “not-just-mainstream-talking-head” benchmarks, and seeing it compare reasonably well with the much more expensive Intel offerings, even their latest and greatest, I decided to go with it. My old FX-8150 was so solid. I was hoping the FX-8350 would be the same, and give me a little more performance as well.

The Kaveri APUs benefit greatly from very fast memory. The talking heads out there claim that the FX-series processors don’t benefit that much from faster memory, and many claim that the AMD memory controller can’t even handle memory speeds past 1866 MHz well. I decided to purchase faster memory nevertheless, thinking I could always use it in the APU system, since I wasn’t that thrilled with the more bargain-grade Team Group memory I purchased for it. So I bought the AMD Radeon Gamer series memory, 2 sticks of 8 gigabytes rated for 2400 MHz speed at a CAS latency of 11. Expensive, but I didn’t want to mess around this time, wondering.

I also bought the obligatory aftermarket CPU cooler: a Hyper 212 Evo. It’s a beast of a hunk of metal, but I kinda like that. And no matter what, I’m not putting water inside my computer. I’ll just keep the clock speeds down (and power consumption).

For the motherboard, I decided upon the ASRock 990FX Extreme9. I was going to go with the ASUS Crosshair V Formula-Z, but it was always out of stock at Newegg, and I’ve recently become more skeptical of ASUS’s quality. I only ever used ASRock boards one other time, for a router I was building, and the thing was a good price, and very solid. So why not? The Extreme9 even had the Intel NIC on it, and a 12x power phase, which is unheard of. So anyway, that’s the board I chose, and it was only $169 – while the FX-8350 I got for $179. 🙂 The 16G of 2400 memory was the most expensive of all at $199!

Anyway, to the point. That’s what I have, and why I got it. In this machine is also a Blu-ray SATA drive, 2 3TB SATA hard drives, and 2 120GB SSDs. All of that, and one ASUS LCD monitor, are plugged into a UPS to draw power, so I can see my power utilization. Not uber-scientific accuracy of course, but close enough just to have a look-see. Oh, and the big power draw (supposedly): I splurged on a new graphics card as well, an R9 290 OC – just to keep it in the family.

I shelled out the $100 to Microsoft as well to get a Windows 8.1 Pro OEM license. That always makes me happy.

This AMD FX-8350 machine does run very solid. Like a tank. Just like my FX-8150, I never can seem to bog it down in its responsiveness, no matter what I’m doing, including virtualization.

I don’t overclock that often, and don’t know a ton about it. However, I was surprised that I could easily get the AMD FX-8350 CPU up to 4.4 GHz and the memory up to the full 2400 MHz speed, all while just using the air cooling of the Hyper 212 Evo! It honestly shocked me.

Of course, that’s no big deal unless you are running the CPU at full throttle for a long period of time. And what better way to do that than to encode HD video using Handbrake – which maxes out every single core for hours on end. It was my test, both of thermals and voltages, as I fine-tuned things.
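For the curious, the sort of encode I mean is just a long x264 job driven from HandBrake’s command line; something like this (the file names and quality setting here are made up, not the ones I actually ran):

HandBrakeCLI -i some-bluray-rip.mkv -o encoded.mp4 -e x264 -q 20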

People get confused about CPU temperatures. There are 2 different kinds: the CPU temperature at the socket, and the temperature of the CPU cores themselves, within the chip. Each has its own manufacturer-suggested limits.

Using the ASRock motherboard’s automatic overclocking setting to reach 4.4 GHz on the CPU and 2400 MHz memory speeds, with Handbrake running continuously, my CPU core temperature maxed out at 80°C. The cores hit AMD’s predefined thermal thresholds, and the voltage automatically dropped at brief intervals to keep the temperatures below the supposed damage threshold.

So I thought, well, I should be able to lower the CPU voltage some, and the Northbridge voltage as well, and still be stable — and this should lower both my temperatures and power consumption. My thinking was, the motherboard manufacturer would want to pick voltages that were on the more greedy side to make sure the overclocks were more likely to work.

This proved to be a good move. I managed to lower the voltage on both the CPU and northbridge without sacrificing any performance, bringing the thermals down well below thresholds, and decreasing the power consumption by about 30 watts.

I’ll show you some screenshots I took while I was in the middle of running those Handbrake video encodes that kept the FX-8350 CPU cores pegged at full. The power draw you’ll see is reported from the UPS the system is plugged into. So here is a list of the devices currently drawing power from that UPS:

  1. AMD FX-8350 CPU
  2. 2x8G AMD Radeon Gamer series memory @ 2400
  3. 6 120mm case fans
  4. Seasonic Gold something power supply 😉
  5. AMD R9 290 OC (MSI)
  6. Yeti microphone
  7. ASUS Bluray SATA drive (not actively spinning)
  8. 2 3T Seagate Barracuda hard drives
  9. 2 120G SSD drives (Samsung and OCZ V4)
  10. ASUS VN247 LCD monitor

With all of that, the CPU pegged and overclocked to 4.4 GHz, the system was drawing 307 watts! Of course, if the graphics card were going like crazy, it would be significantly more. But it just amazes me how little that graphics card draws, too, when it’s not being used for anything but dual-monitor 1080p (one monitor is plugged into that UPS while the other isn’t).

When the system is idle but awake, just doing its normal system-y things in the background, all those things draw 121 watts with the CPU at 4.4 GHz still. Absolutely nuts! That’s some amazingly good power-awareness work, in both the CPU and video card.

As you can see from those screenshots, there is the idle power draw and the fully loaded power draw with the FX-8350 maxed out on all 8 cores. Also, the AMD Overdrive screenshot shows those cores all maxed out, along with the “thermal margin”. This “thermal margin” value often confuses people, it seems. It represents the number of degrees you have left to heat up before you reach AMD’s predefined maximum safe temperature per core. By lowering voltages I was able to give myself a comfortable thermal margin while still maintaining a completely stable 4.4 GHz overclock that ran and ran and ran.

The “ASRock Extreme Tuning Utility” screenshot shows ASRock’s included software overclock utility that came with this 990FX Extreme9 motherboard. It’s not the greatest utility – but it’s OK for tweaking some things. The BIOS is the place to do it, and the boot-to-UEFI feature is great. I am incredibly pleased with this motherboard. The ASUS stuff has seemed so buggy lately. I am convinced that there is no way I could have gotten such a stable overclock with such low voltages were it not for this fine board (and perhaps the silicon die gods’ favor).

The last screenshot above is the CPUZ utility showing the memory speed and timings, in case someone doesn’t believe that an FX-8350 can run with 2400 speed memory. There it is! It’s using those AMD memory modules, though. And if you look at the northbridge speeds in the ASRock utility screenshot, you’ll see that the bandwidth is there. I could probably even press it further. Haven’t tried yet, though. It most certainly increased my AIDA64 scores below. The AMD chips, even the FX ones, actually do seem to benefit from fast memory.

All in all, I’m extremely happy and surprised by this system. I’m also impressed with the memory bandwidth AMD has provided even on the FX series processors. I had an evaluation copy of the AIDA64 test suite, and included the benchmark results below.

What astonishes me is that there are cases where this FX-8350 CPU greatly outperforms even the i7-4770k from Intel. Of course, there are cases where the Intel i7-4770k CPUs outperform the AMD FX-8350 as well. The price difference between the two is huge, though, especially when you take into account motherboards with comparable features.

I used to run i7’s several years ago, but switched to the FX processors after experiencing how much better the AMD chips handled virtualization. I have no benchmarks, but using the systems I could certainly feel the difference. And virtualization is a lot of what I do. Playing games, I can never tell the difference. But if I’m playing a game on a system that’s running some load in a virtualized environment at the same time, the AMD system runs smooth, while the i7 system acts choppy. That’s why I switched.

But all silly Intel vs. AMD stuff aside, if I look at just this chip, and even the small overclock up to 4.4 GHz, I can certainly notice a huge performance gain while transcoding video with Handbrake. I have also noticed that running the memory at 2400 MHz most definitely improves the responsiveness of the system, such that I can’t even tell when I’m running with all the cores maxed out.

Honestly, I was a little hesitant about going with the FX-8350 chips, since they are older than the newest releases from Intel. But right now, I have absolutely no regrets. They are still amazingly great performing workhorses and absolutely rock solid. Especially if you invest in the quality components.

AIDA64 Extreme benchmark test results:

Anyway, I hope you have found something useful in all this. It’s hard finding any more detailed information out there related to specific use cases and experiences.

I’m so pleased with this purchase and have absolutely no regrets about spending the money for the quality components. And no regrets about not spending twice even that much for an Intel-based system.

Besides the incredible solidity of this system, the thing I’m most impressed with is how well it utilizes power. Although the FX-8350 chip isn’t the most power-efficient chip, it’s not bad for an 8-core! And it seems like AMD has gone to some great lengths to only draw power when you really need it, whether it’s a CPU or a GPU. I swear that R9 290 barely draws any power at idle. You do see it when you’re gaming, though.

Oh, and I should mention, I overclocked this while leaving Cool’n’Quiet enabled in the UEFI, and also the C6 state on the CPU, which gives it the ability to save lots of power. This has not impacted the stability of the overclock at all. Then again, I’m hardly pushing this chip to anything close to what it’s capable of, either.

Anyway, just thought I’d share my happiness and enthusiasm in case you might be questioning similarly.

BTW – the hardware support for AES encryption on this chip is phenomenal. Encrypted disks and folders? No worries. 😉