Technical trends

Over the years, we've come across a number of interesting hardware and software components. Some of them we've loved, some of them are worse than watching Initial D sober. As the list of do's and don'ts grew, we decided to start listing them on a separate page instead of filling up the header of the server list page.

Due to the international nature of these technical details, this information is provided only in English. Feel free to contact us with any inquiries you may have.

Right here, right now

This part is for recent stuff. Dates are YYYY/MM/DD.

2023/07/20
Not a great deal to update, just confirmations. Long term thumbs up for reliability still go to Asus motherboards, Noctua coolers, Samsung SSD's and AMD Ryzen CPU's. I'll add Seasonic to the list, as their better PSU models are extremely boring in the good sense. We use their Prime and G(X) series models. As usual, we're listing stuff here that gets high points on quality and reliability, so not necessarily the shiniest items around.

Alas, as one can guess, Huawei is probably something we can't work with for much longer due to international issues which we are not getting into here. It's a pity; their quality was fine.

2018/08/08
A few gathered key points of 2018. Thumbs up for better than average lifespans and quality in general go to familiar names: Asus motherboards, Samsung SSD's (which CAN be burned out if you write five times the promised maximum), Noctua coolers and, as a happy new surprise, AMD Threadripper CPU's. Intel has pretty much practiced the fine art of absolutely nothing for years, leaving AMD a chance to finally create true competition. We've found the Threadrippers to work wonderfully in any virtualization environment we've thrown at them, including our more commonly used Xen and Hyper-V.

On the networking side, I'm so glad the EU isn't in on Trump's trade war, because Huawei's CE6800 series datacenter switches are nothing short of amazing in their price range. With our ISP change, we've placed about a dozen of them in use and noticed how much easier maintenance, management and upgrades have become. With the friendlier prices, we've also added a great deal of additional redundancy all across the board.

So, what the heck are we still doing running a few bare metal servers, or operating an IRC network in this day and age? For one, it works. Two, if it's not broken, don't fix it. Three, our thing is tweaking and playing around with sysadmin duties. So, we'll keep playing.

2016/08/08
After a number of tests with various SSD caching technologies, we've come to have an opinion on them in general: AVOID THEM! Naturally there are good options if you put serious money down, but we really can't go that way. In case you're curious, besides the usual major storage vendors, QNAP can provide good performance with 10GbE and SSD caching, so for NAS solutions I can give a general recommendation their way. When everything is weighed in (trouble vs. cost vs. complexity vs. gains), we came to the conclusion that all-flash is the way to go for all primary data. We've started testing a radically new and as yet very beta virtualization solution, which we'll make available to our users probably early next year.

There will still be places for spinning disks like certain mass archives, backups and such, but those places don't need extra speed. Keywords will probably be WD Red Pro, RAID-10 and iSCSI for those. We've still got the drawing board on the table.
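
While the drawing board is still out, here's a minimal sketch of the RAID-10 arithmetic involved (plain Python; the drive counts and sizes are hypothetical examples, not an actual purchase plan): mirrored pairs get striped together, so usable space is half the raw total and the array tolerates one failure per mirror pair.

def raid10_usable_tb(drive_count: int, drive_size_tb: float) -> float:
    # RAID-10 stripes mirrored pairs, so usable capacity is half the raw total.
    if drive_count < 4 or drive_count % 2:
        raise ValueError("RAID-10 needs an even number of drives, at least 4")
    return drive_count * drive_size_tb / 2

if __name__ == "__main__":
    # Hypothetical drive counts and sizes, purely for illustration.
    for drives, size_tb in [(4, 6.0), (8, 6.0), (8, 8.0)]:
        usable = raid10_usable_tb(drives, size_tb)
        print(f"{drives} x {size_tb:.0f} TB drives -> {usable:.0f} TB usable "
              f"({drives * size_tb:.0f} TB raw), one failure per mirror pair tolerated")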

2016/06/27
Been a while since the last update, but not much has changed; not enough to write much. For the past half a year, we've been acting on the "better late than never" decision to move to Xeon CPU's, at least for about half of our hardware base. The only meaningful reason is support for ECC memory. We're still running X99 series motherboards and I can't think of any good reasons to change that. Because we need our hardware to be easily adjustable to any use, we're not going to try any sort of purpose-dedicated hardware (read: brand servers).

We've been taking a serious look in the mirror regarding SSD's. Right now we use them for caching, booting and such, but at some point they're going to replace most of the primary data storage too. This year, next year, not sure yet. Some massive archives will still be on spinning drives for years to come, but that's not too many systems. Some internal calculations suggest that over 90% of our hosted virtual machines could run with inexpensive SSD solutions.
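
For the curious, the calculation is nothing fancier than the sketch below (plain Python; the per-VM disk footprints and the 1 TB SSD budget are made-up examples, purely to show the idea): count how many hosted VMs would fit on an affordable SSD volume.

def ssd_friendly_share(vm_disk_gb, budget_gb):
    # Fraction of VMs whose disk footprint fits within the given SSD budget.
    fitting = sum(1 for size in vm_disk_gb if size <= budget_gb)
    return fitting / len(vm_disk_gb)

if __name__ == "__main__":
    # Made-up sample of hosted VM disk sizes in GB.
    vms = [20, 40, 40, 60, 80, 100, 120, 200, 500, 2000]
    share = ssd_friendly_share(vms, budget_gb=1000)
    print(f"{share:.0%} of these sample VMs would fit on a 1 TB SSD volume")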

Our current hardware recommendation is Asus X99 motherboards; we've used a LOT of X99-Deluxe and X99-WS/IPMI boards. Both can be recommended warmly. On the storage front, WD Red / Red Pro / RE series get our thumbs up. We have insufficient data to heavily recommend any PSU models or other hardware at this point. We are happy to confirm that anything Seagate produces is still absolute garbage.

2014/09/30
Haswell-E is here. Well, it's been here a few weeks but I never got around to updating this page. We're building two (for now) virtualization nodes from them and once the software looks stable enough, we'll start moving virtual servers to these new nodes, bit by bit. The nodes will get their names from the game Trinity Universe because fluffy ears.

The hardware? Yeah, looks like it's working. It'll be fun to test the new M.2 PCIe SSD's at some point, but I highly doubt we'll need those for virtualization nodes. The boot volumes just don't do much work.

2014/03/25
Seriously, nothing much has happened for two years. The X79 based systems are still hot; CPU's have been upgraded to i7-4930K's, but this base architecture is both rock solid and still very much up to date. We switched from Asus P9X79 Deluxe to P9X79 WS, but that's mostly cosmetic. They're both amazingly stable, solid motherboards, second to none in their class.

On the storage side, we're sorry to see Samsung leave the market, but so far we've been happy with WD drives. Seagate is permanently blacklisted due to catastrophic problems and failed warranty procedures. They're not bad, they're way worse.

Additionally, we found a few old IBM blades lying out of use and have put them to work running some minor operations. This has freed some newer servers for inhumane experimental operations. No, you can't ask. Okay, you can ask, but we won't answer. At least not truthfully.

And speaking of uncomfortable truths, that old Albel is still running. Seriously, the machine is from the 90's. A modern toaster could run that workload.

2012/05/15
That was fast. We managed to upgrade every i7-9x0 based system to i7-3820/3930K in under half a year, including the testing phase. Last week, we sold off our last i7-950 motherboard. Next, we're doing some major upgrades on several backup systems.

2011/12/23
Christmas is finally here! That is to say, our first three Sandy Bridge E based systems have arrived, starting with i7-3930K. Epic testing starts now!

2011/11/25
This year has been a disappointment hardware-wise. We began selling off older servers, mostly i7-860 and i7-920 based, in the spring. We figured that either AMD or Intel would bring something good to the market this year. We figured wrong. Bulldozer was a performance joke and Sandy Bridge E brought a rather minor performance gain. On the other hand, the previously tested Sandy Bridge i7-2600 based setups failed due to lack of a suitable motherboard.

However, we have a major lack of hardware on our hands, so we ended up purchasing a few i7-3930K based setups, and expect to receive them in early December. We'll be testing new virtualization setups and of course running the usual stability tests before putting them into production use. Let's hope Ivy Bridge comes to the rescue next year.

2011/01/25
The second major series of Core i7's has started its testing runs with a few i7-2600 setups. We're hoping the P8P67 EVO from Asus will work as well as their P6T Deluxe series did. Also, we've stopped buying 2 GB DIMMs now that 4 GB models are priced more reasonably.

A second phase of virtualization testing started in 2010/06. The tests went on for a few months and moved into production during 2010/09 through 2010/10. After a large amount of testing, we again found Xen to be our best choice. Naturally the version was upgraded and the management tools were tweaked as well.

Historical stuff

Where it began

The information (and memory) is already a bit sketchy, but our first server was Youzen (youzen.bishounen.st at the time, reverse name pc3.vekoduck.com), originally a Pentium 133 MHz box running behind the desk at Kyuu's home on an unsupported test/development ADSL connection. The first OS used was Red Hat Linux version 6.2. That's something that none of us miss. The lowest end server on the B2 network was the first incarnation of Tamahome, which was an HP Vectra 486dx2-66 with 500+400 MB hard disks. It was actually upgraded step by step, eventually ending up as a Core2Quad Q6600, until we realized a virtual server was really all that was needed.

Testing out alternatives

The first non-x86 production server was Lafiel (PowerMac 9500/200), put to use in 2002 and upgraded to a PowerMac G3 in 2005. In 2007 it was replaced by a G4 system named Rin. We also tinkered a while with Itanium2 hardware in the form of HP's 1600 and 2600 series Integrity rack servers. They were rock solid, fast, epic and in general quite cool, but didn't leave much room for affordable upgrades or tweaks. The same comments go for a Sun Netra we played around with for a while. Thus we went back to boring x86 and amd64 hardware. We've had a few Alix (and later APU/APU2) boxes doing some low power dirty work, and odds are we'll test their newer models too. As for Raspberry Pi, we just haven't found any fun use for them. Still, we're keeping half a dozen around for philosophical reasons.

We've also tried out some brand name hardware now and then. In addition to the Apples and HP Integrity pieces mentioned above, there's been an ICL TeamServer, some HP NetServers and a number of HP Vectra and IBM workstations. With some exceptions, we mostly prefer the adventurous, renegade lifestyle that clone hardware brings. Some of us actually like screwing in motherboards on a Friday evening.

More bits, more cores, more work

Since around Q3/2004, 64 bit architectures, starting with AMD64, were accepted as stable operating platforms, and within another year Linux distros caught on properly. The first high reliability production server to switch to 64 bit hardware (AMD64) was Zelgadis, upgraded from P4 to AMD64 on 2005/02/22. The first production server to switch to a 64 bit OS was Tamahome, which switched over to Debian-Pure64 on 2005/04/15. During the same year, as more standardized Debian options became available, the upgrades became faster. Unfortunately, before this happened, we had tested many not-so-supported 64 bit Linux options and it took quite a while to reinstall those from scratch.

Dual core systems came around in mid-2005, entering production on August 5th 2005 with Tsukasa (later Haseo, which was retired as unneeded in 2009/09). We started with Athlon64 X2 models, but very soon we found out that while AMD was faster at the time, the chipsets and motherboards just plain sucked. Eventually we started switching to Pentium D, then around Q4/2006 to Core2 Duo, in Q1/2007 to Core2 Quad and in Q2/2009 to Core i7. Too bad for AMD - decent CPU's get let down when nobody can make a reliable motherboard (and yes, we've tested dozens of them over the years).

Virtualization tests started in Q2/2007 and moved into minimal use soon after. First we started with Xen and Linux-vserver. The latter dropped off, KVM wasn't that interesting and VMware can go jump in a lake. Later on Hyper-V came along too, and it wasn't all bad either.

On the OS front, we've tested and haven't missed RHEL, Fedora or Gentoo. We've also played a bit with Ubuntu, but it's just too easy to get Ubuntu somehow messed up. Thus, we've ended up preferring Debian for most uses. Some BSD testing has been tried, but there be dragons.

The coming of i7 and internal standards

Core i7 quickly became the dominant architecture, starting with Youzen in 2009/05 and Lelouch (up to version 3) in 2009/06. The first S1156 i7 went into Zelgadis in 2009/11, though we went back to S1366 soon after. Also during 2009/11, we retired the last Q6600 CPU's. In early 2010 we decided to focus purchases on a combination we've found to be foolproof: Asus P6T Deluxe V2, i7-9xx CPU, Kingston KVR memory and Nexus RX PSU's. By 2010/11, all servers had switched over to this setup. Most servers are also accompanied by a varying number of Samsung F3 SATA hard disks.

Eventually, with the X99 chipset, i7's were replaced with Xeons and ECC memory. In the generation after that, Intel stopped supporting this combination, but fortunately AMD came to the rescue with the Threadripper.

The good experiences with the P6T based setups have emphasized the usefulness of standards. Instead of buying whatever hardware seems fun at the time, we buy certain models of motherboards, certain CPU series, HDD's and so on. This is most useful when dealing with hardware faults, as a replacement is always nearby.

These Asus P6T series motherboards were a testament to quality. They still turn up from time to time here in late 2018, still running just fine. Since then, we've mostly preferred Asus motherboards when possible. It has been nice to see, though, that just about all motherboard manufacturers have started putting effort into quality.

Physical bare metal server numbers started to decrease noticeably in 2009. Our historical notes mention dropping two physical servers between Q4/2009 and Q1/2010. Of course, having far more stuff to host means more hardware too; it's just that most of it now runs one virtualization setup or another. Two production virtualization setups, one development setup and some offsite capacity mean one or two servers. Since 2016, most of our setups have been clustered, meaning even more hardware.


Recommendations

This area is all about stuff we use, stuff we like and stuff we don't like.

Stuff we like

Hardware

Nobody really makes the perfect computer case. However, Compucase has made some real effort with their 6919 series, which unfortunately isn't available anymore. With the 6919, putting pieces in and taking them out is quite effortless and simple. Anything can be changed without hurting your fingers or brains. No wonder we've stashed a dozen or so of these cases in a well guarded location. For rack use, we're going for the horrendously ugly UNC-410F-B. It's big, a bit expensive and looks like the 80's, but it's roomy, well cooled and easy to maintain.

A case needs a good PSU. We've found extremely fine products in Corsair's HX series. They're quiet, reliable and very powerful, with amazing efficiency ratings; the only downside is that they're a bit on the expensive side. We had a few Super Flower models, but their availability just sucks nowadays. Right now we're using FSP Aurum models alongside the Corsairs.

A good way to ease maintenance is with hot swap trays. We've used some from Icydock and from Chieftec. Nothing odd about either one; they do the job.

Motherboards are a pain. In the stone ages, Intel was the only way to go, but for years we've gone with Asus models from the higher end (i.e. stuff ending with "Pro", "Deluxe" or "WS"). They've lasted very long, with very little trouble.

Hard drives are also a pain. Samsung's F3's were our tool of choice for years, but since they went off the market, WD is our brand of choice, with Red and RE models.

For memory, we'll buy Kingston. Haven't been able to break a single one yet. Considering that most other brands tend to break after a bit of component swapping and long term use, that's saying a lot.

Software

On the OS front, we're putting several thumbs up for Debian. It's not pretty, but it works and is stable. Once you learn it, maintenance is a breeze.

Stuff we don't like

Hardware

Compucase may make decent cases but their PSU's (branded as HEC) aren't much to talk about. They just don't win in the efficiency/power output competition. They also tend to break too soon. Antec PSU's are to be avoided at all costs. Many Nexus PSU's have also proven to be less than ideal in reliability, though not the worst of the lot.

Motherboard makers need to get a grip. MSI boards tend to break all the time, so we forgot about them quickly. Actually, Asus is the only one we can trust, and even their cheapo models don't really inspire trust in us.

Purchasing new Seagate drives is blacklisted; their quality and warranty are both hideously horrible. We'll use what we have and never go there again.

Software

Let's never use Ubuntu again, okay? I don't know what curse we're under, but every Ubuntu box has become an administrative nightmare. Reliability issues, odd breakages, random "let's remove this software without warning" decisions and such. Pretty odd, considering Debian works fine and they're relatives.


Valid HTML

Our website is valid XHTML and CSS, created completely with Nano. The B2 logo was created using CorelDraw X3 and retouched with Gimp.