# New FA Server



## Dragoneer (Nov 12, 2006)

If you could propose a new FA server and build it under $3,000 what would you guys recommend? I'd like to have at LEAST four cores (be it two dual or a single quadcore with second slot for upgrade later). I've been discussing things with Gushi and we're having some processing issues which are affecting site performance as well as some other problems.

While the current hardware is more than capable, it's still not enough sometimes, and our Xeon CPUs are a lil' bit older and the motherboard the server runs on has a 4GB RAM limit (which we're already at).

I think it would benefit FA's future that, along with Ferrox, we meet our processing needs for the future head on.

http://supermicro.com/products/chassis/3U/933/SC933T-R760.cfm

This is a server case I'm looking at now, and I'm going to more-than-likely rig it out with 8x SATA drives in RAID 10 or RAID 50. Hard drive space + security + holy shit my underpants just burst into flames performance!


----------



## CyberFoxx (Nov 13, 2006)

Hmm, multi-core setup eh? There's a number of ways you can set it up then. If I remember correctly, you guys are running 64-bit BSD, right? And I think BSD supports native POSIX threads, so you can just set up normal SMP and let the POSIX threading library take care of process delegation.
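Just to illustrate the delegation idea (this is Python purely as a sketch, not what FA runs, and `checksum` is a made-up workload): start one worker per core and let the OS scheduler decide which core runs what.

```python
from multiprocessing import Pool, cpu_count

def checksum(n):
    # Made-up CPU-bound workload standing in for real server work.
    total = 0
    for i in range(n):
        total = (total + i * i) % 1000003
    return total

if __name__ == "__main__":
    # One worker per core; the OS scheduler places them across cores,
    # the same idea as letting SMP handle process delegation.
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(checksum, [50000] * 8)
    print(len(results))
```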

Then there's the "Wow, that's overkill, but a good safe idea" way: VMware Server. Have a session of VMware running on each core, with each session handling something separate, like the database on one session, httpd on another, etc.

Mind you, I have yet to mess with a multi-core system myself, but I have read about what people have done with them. Hell, people have been using VMware Server to partition multi-CPU systems for years now, and it's even easier now that VMware Server is free.

Anyway, just thought I'd toss out an idea. I myself have been thinking of playing with VMware Server on my own little server box, have the main firewall in hardware, and everything else "Virtualized."


----------



## Kougar (Nov 13, 2006)

Whoever had the RAID 10 idea should get a cookie! To borrow from a wiki: _"RAID 10 is often the primary choice for high-load databases, because the lack of parity to calculate gives it faster write speeds."_ The 3ware RAID cards you've mentioned do support RAID 10, and while I'd suggest the PCIe 4x model over the PCI-X model, that decision is still dependent more on the server motherboard.
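To put a number on the write-speed claim, here is the classic textbook write-penalty arithmetic as a Python sketch (illustrative figures, not benchmarks of any specific card or drive):

```python
# Classic per-level write penalties: how many physical disk operations
# one random logical write costs. Textbook figures, not measurements.
WRITE_PENALTY = {
    "RAID 0": 1,   # one write, no redundancy
    "RAID 1": 2,   # write both mirrors
    "RAID 10": 2,  # write both mirrors of the affected stripe
    "RAID 5": 4,   # read data, read parity, write data, write parity
}

def effective_write_iops(raw_iops_per_disk, disks, level):
    # Aggregate random-write IOPS the array can sustain.
    return raw_iops_per_disk * disks / WRITE_PENALTY[level]

# An 8-disk array of ~100-IOPS drives (hypothetical numbers):
print(effective_write_iops(100, 8, "RAID 10"))  # 400.0
print(effective_write_iops(100, 8, "RAID 5"))   # 200.0
```

Same spindles, twice the random-write throughput, which is why RAID 10 gets recommended for write-heavy databases.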

The old Netburst Xeons are pretty far behind in performance. While AMD is by far better than those, Woodcrest Xeons are at the top of the performance pile. Or even a single quad-core Clovertown Xeon.

I know of a few high-end ATX workstation boards that take desktop CPUs, and then there are the server boards that take Xeons. The issue with Xeons is they will need FB-DIMMs, which cost more but let you go nuts adding memory.

I do not know which type of RAM FA currently uses as I don't know the specific model of Xeons, so it needs to be asked. About what size of memory would y'all expect to be the upper desired limit for the foreseeable future, 8GB? Anything higher than this would require FB-DIMMs. Buying 4 new 2GB modules of DDR2 RAM can get you 8GB, but currently DDR2 cannot go higher in module capacity or number of banks. From a rough look around I found these; 2GB DDR2 RAM is harder to find than 2GB FB-DIMMs, even if still $100 cheaper per 2x2GB kit.

Kingston 4GB(2 x 2GB) 240-Pin DDR2 FB-DIMM ECC Fully Buffered DDR2 667

Kingston 4GB(2 x 2GB) 240-Pin DDR2 SDRAM ECC Registered DDR2 667

Generally, how much scalability does FA need, and what would be optimum right now? From what you are saying, 4 physical cores with 8GB of memory using a RAID 10 disk array would be it? How much disk space does FA currently use? Mainly: how far along will FA be before it requires more than 8GB of RAM? That will decide whether to stick with basic DDR2 or to use DDR2 FB-DIMMs. Going with basic DDR2 server RAM now and finding out FA needs more than 8GB later would require a complete rebuild from scratch.


----------



## Dragoneer (Nov 13, 2006)

RAID 10 was my idea. =)

Well, I'd like to get an option that nets us the single best performance up front WHILE giving us the best option for upgradability down the line. Right now, I'd say a quad-core Xeon would suit us best, or whatever AMD throws into the fray when they do their Opteron refresh (or whatever they decide to call it).

My main concern is ensuring that we have an upgrade path down the line so we aren't, as we are now, forced to upgrade everything to move forward.

From my discussions with Gushi today, we can stick with 4GB of RAM and be fine, so long as we have the option to increase it more later. The CPUs and file system are the current major bottlenecks, because as it stands the drives thrash non-stop. The CPUs are old Nocona cores which are no longer supported and, frankly, not the best performers.


----------



## Dragoneer (Nov 13, 2006)

Also, storage is about 120GB at last count, but rising really fast.


----------



## N3X15 (Nov 13, 2006)

Dragoneer said:
> If you could propose a new FA server and build it under $3,000 what would you guys recommend? I'd like to have at LEAST four cores (be it two dual or a single quadcore with second slot for upgrade later). I've been discussing things with Gushi and we're having some processing issues which are affecting site performance as well as some other problems.
> 
> While the current hardware is more than capable, it's still not enough sometimes, and our Xeon CPUs are a lil' bit older and the motherboard the server runs on has a 4GB RAM limit (which we're already at).
> 
> ...



I am using two SuperMicro 1U rackmounts.

Be SURE AND CHECK that your system's RAID card will work with Linux. The built-in Marvell HW RAID setup on my primary web server was completely unusable by Linux when I first got it. Experimental Linux drivers were only released a few weeks ago, so for a year I was not able to implement RAID.


----------



## Dragoneer (Nov 13, 2006)

N3X15 said:
> I am using two SuperMicro 1U rackmounts.
> 
> Be SURE AND CHECK that your system's RAID card will work with Linux. The built-in Marvell HW RAID setup on my primary web server was completely unusable by Linux when I first got it. Experimental Linux drivers were only released a few weeks ago, so for a year I was not able to implement RAID.


Yeah, we're gonna continue using a 3ware, FreeBSD-compatible card.


----------



## Kougar (Nov 13, 2006)

Well, as long as 8GB of RAM will not become a bottleneck anytime soon or in the near future then here's what I'm looking at. To clarify: I don't see single 4gb modules of RAM happening before DDR3 memory becomes the norm by the end of next year. Intel's flagship chipset will be introduced this Spring, and the top dog "X38" model will use 1066/1333mhz DDR3. By the fall, Intel will slide it into the upper-mainstream segment with a new chipset refresh. And I'm rambling. Keeping the hardware within the desktop realm would definitely be cheaper, so as long as 8gb max is not a problem there are no other issues I see doing this.



The best desktop processor you can buy: Intel Quad-Core QX6700. AMD's "K8L" will debut just before the exact middle of next year. MSRP is $999, so once Intel is able to deliver inventory prices should go down. So far no one else appears to have it in stock, but most offer lower prices.

Looking around, the ASUS P5W64 WS and the same board but with two PCI-X slots ASUS P5WDG2-WS are good workstation/server boards. Ignoring their enthusiast qualities, they will run 8GB max of ECC DDR2 RAM, up to 800mhz in speed, with 4 slots to use.

Nocona Xeons use regular DDR RAM, so I can assume FA is using DDR memory? If so, two of these would be in order: Kingston 4GB (2 x 2GB) 240-Pin DDR2 SDRAM ECC Registered DDR2 667

You need at least four drives to use RAID 10, and I am not counting a hotswap drive for automatic array rebuilding. Seagate Barracuda (Perpendicular Recording) 320GB 7200RPM 16MB Cache SATA 3.0Gb/s: I know you said 160GB, so I looked at the better-priced 250GB segment, but if you factor in the $6 shipping Newegg wants for those, these 320GB drives end up being a better deal as their shipping is free. I would avoid the 7200.9 drives in favor of the 7200.10s, due to cache and performance differences.

I know I sort of blew away your "keep it under $3,000" limit, but too much of this is high-end hardware or is a requirement that can't be skimped on. Shopping around will help a bit with the prices, but this is to ballpark a figure for what I'm seeing. The specific memory I linked to appears to be the absolute cheapest when using single 2GB modules, and I have spent a good while looking around. Even assuming there were 8 banks to use, 8 x 1GB modules wouldn't cost any less. The same goes for buying two dual-core E6700 processors instead of a single quad QX6700, assuming the QX6700 falls around a $1,100 price anyway.


CPU          $1,300
RAM          $1,180
Case/PSU     $745
HDDs         $380
Mainboard    $360
RAID Card    $500

Total        $4,465

Gets

Intel Quad-Core QX6700 2.67ghz Processor
ASUS P5WDG2-WS Motherboard
Kingston 8GB (4x2GB) DDR2 ECC Registered DDR2 667 Memory
3ware 9590SE-8ML PCIe x4 RAID Controller Card
4x320gb Seagate 16mb cache hard drives (640gb Available)
SuperMicro SC933T-R760 Chassis (With redundant 760watt PSU)

Cutting the RAM by half "saves" $590, and careful shopping should save $200 on the CPU. 

Furthermore, these ASUS motherboards offer RAID 10 *onboard*. The drawback is they offload the work onto the CPU... but for a Quad-Core CPU will this matter? I need to do more research, but I would hazard to say in FA's case that it will not, so then you can save $500 on not buying a RAID card. Unless the server needs PCI-X slots, choosing the ASUS P5W64-WS option will save $60. 

All of these "save" options combined would bring the total cost down to *$3,115*.


----------



## Dragoneer (Nov 13, 2006)

I'd rather stick with a RAID controller due to it having dedicated processing and better tech built in. Onboard solutions are good, but I'd rather have something that was built from the ground up and is a solid, dedicated performer with *NO* questions asked regarding reliability.

Onboard isn't bad, but it's not a powerhouse solution.

The reason I said 160GB drives is because we could get six of them and easily do a nice ultra-fast RAID. Hell, even six 250's would work, so long as they're Sata 3/Perp.


----------



## blueroo (Nov 13, 2006)

Don't build it yourself, that's a waste of time and money. 

http://www.siliconmechanics.com/i6091/dual-xeon-server.php

* 2u chassis with sliding rail kit
* 2x Intel Xeon 5030 Dual Core 2.66ghz, 2MB cache (Here's your four cores)
* 2GB (2x1GB) of 533mhz fully buffered DIMM. The chassis has 16 slots, so there's plenty of room for memory upgrades in the future.
* Dual Intel 82563EB 10/100/1000 Mbps NICs
* 6 x SATA Ports via Intel ESB2 SATA Controller (Hot Swappable)
* Six Western Digital 160GB, 7.2k RPM, 16MB cache drives
* Floppy drive and 8x DVD/24x CDROM drive
* 3 year advanced component exchange warranty (If it breaks, send the part back to them and they'll send you a new one)

And they'll support (and pre-install if you like) various flavours of unix, including Linuxes and BSDs.

Total: $2493

Replace the 160GB drives with 250GB drives and the price is $2631.

I'm not sure if the SATA controller listed above offers raid but if it doesn't, a 3ware 8-port SATA RAID Controller will run you another $711. A 4-port will run you $507. Both are within the realm of reason.

Silicon Mechanics are good guys. I've known them for a couple years now and I buy servers from them constantly. If you don't like the rig above, browse through their site. $3k will buy you a good server, or a couple low-key 1U servers. Building a server yourself from mishmashed hardware is a pain and will always be so. The rig above (or any rig they sell you) will just work, and it'll work well. They burn in all the hardware before it goes out, and they'll help you resolve problems if something goes wrong. You can't buy a warranty for your server when you build it yourself!

-----

If I were you, I would buy the server above with the 3ware 4-port and four 250GB disks ($2842). Make that your MySQL server. Rebuild the current server with two webservers, one for Apache/PHP and one for images, js, and css. Add a second Gigabit card to the old server dedicated to MySQL traffic. That's the biggest bang for your buck, and gives you the greatest growth potential.


----------



## Kougar (Nov 13, 2006)

I'm just giving alternatives to choose amongst.  I would agree with you about the hardware RAID controller for all of those reasons, which is why I put it in the $4.4k total. 

I don't think Seagate has any 7200.10 series 160GB drives... Only one of the 7200.10 model 250GB drives has the 16MB cache buffer; the rest are 8MB. One reason I kept those 320GB perpendicular-recording drives is that the higher the GB density of the platters, the higher the drive performance will be. Additionally the .10 models run marginally cooler and appear to have a better track record on performance than their .9 counterparts; there are plenty of major reviews on these floating around! Either way, with Black Friday approaching there should be some great hard drive deals to be had...

I didn't look at any of the cases as you already found one that looks pretty good. I'm not quite so sure about cooling a quad-core chip though... Intel goes with the lowest-end cooler they can get away with bundling for each CPU model. Since those boards are LGA775, I think something like a Zalman 7700Cu would do pretty well while still barely fitting inside that case, though granted aftermarket cooling isn't a priority. There are some decent coolers for Clovertown chips, such as this one, but there's no way it'll mount on an LGA775 socket that I know of.

Edit: Just saw your post, Blueroo! The problem is I'd take issue with some of the parts, such as the PSU. Xeon 50xx CPUs are worth avoiding as they are still Netburst cores; AMD's chips will top these while using less power. Only Xeons "51xx" and higher are going to be Woodcrest (Core 2 Duo) cores.

You have a good idea, but once you upgrade to Woodcrest 5150s, upgrade to a redundant 700-watt PSU, and throw in 8GB of RAM, their quoted price becomes $5,500!! And that is before buying all the hard drives. It's cheaper to build your own with the above hardware, and you get a lot more for doing so.


----------



## yak (Nov 13, 2006)

What's up with Intel Xeons? Why don't AMD solutions suit ya? The Opteron 2000 family looks pretty cool, and they all have an on-die memory controller. I could be wrong though, but they are also supposed to be (way) cheaper.
The RAID controller card has to be separate.
The network card, if it also comes separate, has to have most of the routine network functions implemented in hardware, rather than offloading them to the CPU like cheap desktop Realteks do.

I don't know if there are any plans for the /current/ server yet, but I'd really like to see it used for a web server. Leave the new one for the database.


----------



## Rhainor (Nov 13, 2006)

Two words:  Beowulf Cluster.

Probably not feasible, but I just couldn't resist throwing that out there.


----------



## N3X15 (Nov 13, 2006)

Rhainor said:
> Two words:  Beowulf Cluster.
> 
> Probably not feasible, but I just couldn't resist throwing that out there.



That's for processor intensive stuff, like calculating Pi.  Also, we'd have to rewrite the various server systems, like the SQL database server, in order to take advantage of it, not to mention the hundreds of servers we'd have to buy.

However, I WOULD think a load balancing system would be nice, using the Linux virtual server system.


----------



## blueroo (Nov 13, 2006)

Kougar said:
> Edit: Just saw your post, Blueroo! The problem is I'd take issue with some of the parts, such as the PSU. Xeon 50xx CPUs are worth avoiding as they are still Netburst cores; AMD's chips will top these while using less power. Only Xeons "51xx" and higher are going to be Woodcrest (Core 2 Duo) cores.
> 
> You have a good idea, but once you upgrade to Woodcrest 5150s, upgrade to a redundant 700-watt PSU, and throw in 8GB of RAM, their quoted price becomes $5,500!! And that is before buying all the hard drives. It's cheaper to build your own with the above hardware, and you get a lot more for doing so.



Why upgrade? I'll bet real money that FA doesn't need the extra CPU, the dual psu, or a full 8gb of ram for the database. It's not cheaper to build your own when your PSU or a drive fails on you next year and you're paying out of pocket instead of getting a warranty replacement.

And if you want to get technical, it's far cheaper to fix bad code and architecture than to throw hardware at the problem. Bad code always finds a way to waste the hardware you give it.


----------



## blueroo (Nov 13, 2006)

N3X15 said:
> Rhainor said:
> 
> 
> 
> ...



LVS is way more trouble than it is worth. Better to use an Apache reverse proxy with mod_proxy_balancer in 2.2. You get the benefit of balancing *and* protecting the application/web servers from slow clients.
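A minimal 2.2-era sketch of that setup (the backend hostnames and ports here are hypothetical; the relevant modules are mod_proxy and mod_proxy_balancer):

```apache
# Reverse proxy in front of two app servers; slow clients are buffered
# here instead of tying up the backends. Backend names are made up.
<Proxy balancer://appcluster>
    BalancerMember http://app1.internal:8080
    BalancerMember http://app2.internal:8080
</Proxy>
ProxyPass / balancer://appcluster/
ProxyPassReverse / balancer://appcluster/
```

The proxy absorbs each slow client connection and only talks to a backend once a full request is in hand, so the app servers spend their worker slots on actual work.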


----------



## blueroo (Nov 13, 2006)

yak said:
> What's up with Intel Xeons? Why don't AMD solutions suit ya? The Opteron 2000 family looks pretty cool, and they all have an on-die memory controller. I could be wrong though, but they are also supposed to be (way) cheaper.
> The RAID controller card has to be separate.
> The network card, if it also comes separate, has to have most of the routine network functions implemented in hardware, rather than offloading them to the CPU like cheap desktop Realteks do.
> 
> I don't know if there are any plans for the /current/ server yet, but I'd really like to see it used for a web server. Leave the new one for the database.



http://www.siliconmechanics.com/i7288/opteron-server.php
2x Opteron 2210 1.8Ghz Dual Core, 2x1MB cache

Or the following, which I like:

http://www.siliconmechanics.com/i3949/2u-opteron-server.php
Dual Opteron 246 2Ghz, 1MB cache
2GB PC3200 Registered ECC
3ware 9550SX-LP 8 port SATA RAID Controller, 128MB cache
6x 160GB Western Digital 7.2k RPM (Hot swappable)
$2743


----------



## Rhainor (Nov 13, 2006)

N3X15 said:
> That's for processor intensive stuff, like calculating Pi.  Also, we'd have to rewrite the various server systems, like the SQL database server, in order to take advantage of it, not to mention the hundreds of servers we'd have to buy.



Hence my "probably not feasible" statement.

...Although it doesn't have to be hundreds of nodes to be a Beowulf Cluster, and the nodes don't have to be server machines themselves.  According to the Wikipedia article I linked (keeping in mind it's _Wikipedia_), a Beowulf Cluster usually consists of a primary "server" unit and a number of "dumb" node units.  It even goes so far as to say "the dumber the better" for the nodes; meaning they should be nothing more than a processor, some RAM, and a NIC.


----------



## blueroo (Nov 13, 2006)

Rhainor said:
> N3X15 said:
> 
> 
> 
> ...



Those kinds of clusters do not lend themselves well to database and webserving duties.


----------



## CyberFoxx (Nov 13, 2006)

An OpenMosix cluster would work a lot better than a Beowulf, but still, it's a tad bit overkill.


----------



## Kougar (Nov 13, 2006)

blueroo said:
> Why upgrade? I'll bet real money that FA doesn't need the extra CPU, the dual psu, or a full 8gb of ram for the database. It's not cheaper to build your own when your PSU or a drive fails on you next year and you're paying out of pocket instead of getting a warranty replacement.



Why upgrade? Because there is no point in paying for new Dempsey core Xeons; it'd be better for FA to just stick with their current Nocona core Xeons than fork over for more Netburst-based designs, or switch to an Opteron server instead. I am not saying Woodcrest simply performs better, I am saying Woodcrest will perform up to half-again better than Dempsey ever could. AMD's Opteron does a pretty good job trouncing Dempsey while using less power and generating less heat at the same time. Woodcrest does the exact same, but with the Opteron sitting in Dempsey's shoes. http://www.tomshardware.com/2006/10/26/intel_woodcrest_and_amd_opteron_battle_head_to_head/index.html

Using desktop variants, an E6700 2.67GHz processor will outperform a 3GHz-clocked FX62, no matter what application you choose to try. Even a 2.4GHz model can almost claim a complete shutout. I would advocate that if you build something, then build it right to begin with so it will pay back over the longer term. I have nothing against AMD Opterons (in fact I like AMD), but I go with the superior chip, and Intel's current offering is superior in every single way: pure performance, price/performance, and price/watt ratios. There are literally over a hundred reviews on these "Core" architecture processors floating around the ether.

Second, and more importantly, ignoring the CPU issue for the moment: a simple 500-watt PSU is not enough to run a server off of. When it overloads or gives up the ghost, the hosted server is going down and will be lucky not to have any damaged hardware because of it. *Also, Dragoneer stated 4GB of RAM is the minimum they can get away with, and having more for expansion room would be needed.* I don't consider these optional upgrades, and they constitute the bulk of any server's cost that you can build from Silicon Mechanics. You can get much more for much less going à la carte. As far as warranty goes, if you build with the right components you won't be killing them. I would be surprised if a 500-watt PSU did last a year running a 4-core, 6-disk server, such as in your example. While building it yourself does have the disadvantage of not having direct support from the manufacturer, every single part used has its own warranty and its own technical/RMA support. This sounds better to me than having to send back the entire server to have work done on it, again citing Silicon Mechanics' warranty policy.



			
blueroo said:
> And if you want to get technical, it's far cheaper to fix bad code and architecture than to throw hardware at the problem. Bad code always finds a way to waste the hardware you give it.



While this is true, it is not the point of this thread. Secondly, even perfect code is going to run poorly if the hardware is unable to support it. While I do not know FA's current full specs, I have the feeling that at the very least this is contributing to part of the current problem.

Yak, I tried to roll my answer to your question up in my lengthy post. To illustrate, a 2GHz Woodcrest offers better performance than a 2.4GHz Opteron. While offering more performance, a 2GHz Xeon runs ~$345, while a 2.4GHz Opteron runs $450. See my point? Opteron 2000 models only use Socket F motherboards, which, while I was about to say were fairly expensive, seem to have come down to reasonable levels now. To compare a "Core" to a "K8" chip, the K8 will need a 400MHz clock advantage to even win just a few of the benchmarks. A 2.67GHz "Core" chip will flat out win every single test against any "K8" up to a 3GHz model; at 3.2GHz AMD will claim two or three wins back. Since Dragoneer said a quad-core is needed, the 2.67GHz QX6700 seems to be the best option on the market... or perhaps a quad-core Xeon, but those are not quite out yet. (Which, I just poked around and found out to be otherwise.)

Forget the QX6700 in my earlier post; I would replace it with a quad-core Xeon X3220 or Xeon X3210 (2.4GHz $851 and 2.13GHz $690, respectively). General availability is not there yet, but these would make more sense anyway, and as they are Socket 775 chips they will still work in either of those ASUS workstation boards.


----------



## blueroo (Nov 13, 2006)

4GB, ok. That's a reasonable requirement.

The 500 watt PSU will power the server shown for years without any problems. I'm not sure why you are convinced that they will "burn out". They have plenty of juice to power all the components listed and more. If you're not convinced, call them and they'll do the math for you.

To be honest, I stopped caring many years ago about which processor was ever so slightly faster than the other. I don't care to sit around for days trying to figure out whether the processor, board, ram, and other components I'm buying are all compatible with each other either. Somebody already does that for me (Silicon Mechanics) and they sell a polished, guaranteed to work solution with a warranty. Why would I sit around and mentally masturbate about the very latest details of a hardware purchase when my goal is to buy a reliable server with sufficient horsepower on a budget of $3k?

Putting MySQL or Apache/PHP or anything else on the very latest highest grade Intel processor versus putting it on something SiMech (or any server provider) builds is going to yield very little difference in actual real world performance. You won't be able to tell the difference. Queries won't load significantly faster. The site won't be appreciably quicker. There are much bigger gains to be made with architecture and software anyway.


----------



## blueroo (Nov 13, 2006)

BTW, I'd like to point out that the Advanced Component Exchange Warranty includes not only shipping the whole machine for repair, but also:

_Advanced Component Exchange – At the discretion of a Silicon Mechanics support representative, failed components will be advance exchanged to expedite the repair of customers' product. A package containing the replacement component and a prepaid shipping label will be sent to the customer. Customer or customers' agent will return the component to Silicon Mechanics within 10 business days of receipt. Components not returned to Silicon Mechanics within 10 business days of receipt are considered billable goods. Silicon Mechanics reserves the right to substitute equivalent components in the advanced-exchange process._

They will send you a new component to replace your failed component, be it a disk or RAM or what have you, and you send the failed component back within 10 days. If you can get a mishmash of internet vendors for various server component parts to give you that deal, I'll eat a button.


----------



## Dragoneer (Nov 13, 2006)

FA currently has a 620Watt Enermax Liberty PSU powering the system -- the same one I have in my dual 7900 GTX game machine (w/ 4 HDs). It's more than enough for what we need right now, but amplifying the hardware requirement... that's another story.

And it's not like we're upgrading next month (although I'd like to be able to).

We need better hardware long term to "future proof" FA, and we need more capable data handling. The code *DOES* need improvements, and a lot of the code that has been written for Ferrox is getting scrapped and recoded. Still, having upwards of 8GB of RAM *could* be exceptionally useful in terms of memcache, perhaps even doing a RAM drive for the most commonly accessed files, etc.

Who knows.


----------



## blueroo (Nov 13, 2006)

Implementing memcached is a fantastic idea.
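For reference, the usual memcached pattern is cache-aside: check the cache first, fall back to the database on a miss, then populate the cache. A Python sketch with a plain dict standing in for the real memcached client (the key scheme and query function here are made up):

```python
cache = {}  # stand-in for a real memcached client

def fetch_submission(sub_id, db_query):
    # Cache-aside: try the cache, fall back to the DB, remember the result.
    key = "submission:%d" % sub_id
    if key in cache:            # cache hit: skip the database entirely
        return cache[key]
    row = db_query(sub_id)      # cache miss: hit the database...
    cache[key] = row            # ...and cache the result for next time
    return row

# Usage with a fake "database" that counts how often it gets queried:
calls = []
def fake_db(sub_id):
    calls.append(sub_id)
    return {"id": sub_id, "title": "example"}

fetch_submission(7, fake_db)
fetch_submission(7, fake_db)  # second call is served from the cache
print(len(calls))  # -> 1, the database was only queried once
```

A real deployment would also expire or invalidate entries when the underlying row changes, which is where most of the design work actually goes.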


----------



## Kougar (Nov 13, 2006)

I didn't say "burn out" as you quoted, I said "last". I am looking at Silicon Mechanics' own specifications on the power requirements, as they do not list the PSU specs directly.

According to them a pair of Xeon 5160's, 16gb of RAM, and 8 disks with a RAID card consumes 561watts, and they are using the 700watt PSU for that test system. Now realize switching to the Dempsey core Xeons is going to push that wattage figure about *140-150 watts higher*!! Cutting back on the mem and disks to match FA's config will not drop that *711 watt* figure down nearly enough. The numbers are even shown in that THG link I gave previously. While granted server grade PSUs are much better rated than their consumer counterparts, it is still not a good idea to be near to the PSU's maximum rating. A 560watt unit with a 95% load is not going to last as long as a 700watt unit with a 76% load.
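The headroom arithmetic behind those numbers, using only the wattage figures quoted in this thread (a back-of-the-envelope sketch, not measurements of FA's hardware):

```python
woodcrest_draw = 561   # quoted draw: 2x Xeon 5160, 16GB RAM, 8 disks + RAID card
dempsey_penalty = 150  # claimed extra draw of Dempsey cores over Woodcrest

dempsey_draw = woodcrest_draw + dempsey_penalty  # the 711-watt figure

def load_pct(draw_watts, rating_watts):
    # Percent of the PSU's rating that a given draw represents.
    return 100.0 * draw_watts / rating_watts

print(dempsey_draw)                          # -> 711
print(round(load_pct(dempsey_draw, 760)))    # -> 94 (% of a 760W PSU)
print(round(load_pct(woodcrest_draw, 700)))  # -> 80 (% of a 700W PSU)
```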

I guess I'll agree with ya about their warranty service, it doesn't get better than that without paying more to get it such as their on-site service.


----------



## blueroo (Nov 13, 2006)

Kougar said:
> I didn't say "burn out" as you quoted, I said "last". I am looking at Silicon Mechanic's own specifications on the power requirements as they do not list the PSU specs directly.
> 
> According to them a pair of Xeon 5160's, 16gb of RAM, and 8 disks with a RAID card consumes 561watts, and they are using the 700watt PSU for that test system. Now realize switching to the Dempsey core Xeons is going to push that wattage figure about *140-150 watts higher*!! Cutting back on the mem and disks to match FA's config will not drop that *711 watt* figure down nearly enough. The numbers are even shown in that THG link I gave previously. While granted server grade PSUs are much better rated than their consumer counterparts, it is still not a good idea to be near to the PSU's maximum rating. A 560watt unit with a 95% load is not going to last as long as a 700watt unit with a 76% load.



Correct me if I'm wrong, but... From all the sources I've found, Dempsey with 667Mhz FSB (as in the quote above) eats 95watts per cpu. Given that Woodcrest runs at an estimated 80 watts per cpu, I'm having a hard time finding an extra 140-150 watts by running Dempsey instead of Woodcrest.

Very few servers ever run at 100% power load. Any server that does run its disks, cpu, and fans at full capacity 24x7 is both destined for an early failure and poorly architected. In this case, 560watts is enough to power the configuration given above under max load, and is more than enough for the average and peak loads the server should see.


----------



## Kougar (Nov 14, 2006)

I already provided the link, but here it is again directly. The exact same Intel 2P server was used for both Woodcrest and Dempsey tests, so the only difference was the processors used. This one is interesting, as you can compare a slower-clocked Dempsey to a faster dual Nocona: here. There are plenty of graphs out there documenting how bad the leakage on Netburst chips gets the higher the clock speed goes... and if you stick with lower-clocked Dempsey models (for either price or power consumption reasons), then depending on what FA currently uses it might even be a *backwards* step in performance to choose Dempsey. That is half of my point: it's a waste of money to upgrade to a new system that performs no better or only marginally better than what one currently has, especially when dealing with systems that tally $3,000 and higher.

To summarize what FA is looking for in a server, according to Dragoneer/Yak:

4 physical processing cores
4GB bare minimum, with 8GB of RAM being preferred
6 x 160GB SATA II hard drives
Discrete RAID 10 controller card
Discrete Gbit NIC card
Server case + server-grade PSU

Here's what I'd suggest:

Intel Quad-Core QX6700 2.67ghz Processor
ASUS P5WDG2-WS Motherboard
4 x WINTEC 2GB DDR2 SDRAM ECC Registered DDR2 533 (8gb total, all slots used)
3ware 9590SE-8ML PCI Express x4 SATA II (RAID 10, 8 SATA II ports)
6 x 250GB Seagate ST3250620AS 16MB Cache Perpendicular Recording (RAID 10 = 750GB available for use)
SuperMicro SC933T-R760 3U Rack (With redundant 760watt PSU)
Everything in Stock - Total: $4,120 
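The usable-capacity figure in that list is just mirrored-stripe arithmetic: RAID 10 gives you half the raw total. As a quick check:

```python
def raid10_usable_gb(drives, size_gb):
    # Each stripe is mirrored, so usable space is half the raw capacity.
    assert drives % 2 == 0, "RAID 10 needs an even number of drives"
    return drives * size_gb // 2

print(raid10_usable_gb(6, 250))  # -> 750, matching the spec list
print(raid10_usable_gb(4, 320))  # -> 640, the earlier 4-drive config
```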

The only issues I see are that it is over $3,000 and that, while it offers two Gbit ethernet ports, they are both run via two different Marvell chips (which Yak is against); however, a discrete NIC would easily fix that. As far as price goes, building this exact rig as close a match as possible with a Rackform iServ R272 would *cost over $5,500*. Whether the warranty and support (and protection from unforeseen issues that can crop up when building à la carte) that that price includes would be needed or not is up to the FA admins, not me, to decide. Only buying 2 x 2GB sticks of RAM in place of the 8GB would drop $500 off that price.

As far as performance goes, this should at the very least match an AMD "4x4" server using a pair of FX-74 chips. There are at least a few Apache and database benchmarks on the net between the dual-core E6700 and FX-62 processors to confirm this, but here are two I googled up: Apache and Web Server/Apache Benchmark. To compare with a Dempsey, look at the Pentium D 9xx chips in the benchmarks.


----------



## uncia2000 (Nov 14, 2006)

Kougar said:
			
		

> 4gb bare minimum with 8gb of RAM being preferred
> ...
> Only buying 2-2gb chips of RAM in place of the 8gb would drop $500 off that price.



aside: 8GB of RAM is pretty much essential, I'd've thought.


----------



## Kougar (Nov 14, 2006)

I'll go ahead and ask: could we please have a _complete_ rundown on FA's current server hardware, to give a better frame of reference to work from and a more exact idea of how much difference specific upgrades would bring to the table? And the more input from other admins and coders as to what specs this server needs, the better IMHO, which includes the OS and hosting programs/software that is planned to run on this server. Otherwise my previous post is going to be as approximate as I can give... Grazie 

Edit: Uncia, I would fully agree with you there, hence why I included it in my total!


----------



## blueroo (Nov 14, 2006)

uncia2000 said:
			
		

> Kougar said:
> 
> 
> 
> ...



I highly doubt it. I've run far bigger, and far busier databases on less RAM. If the present code is so bad that it drives MySQL to *need* that much ram, it's far far far cheaper to find and fix the problem than spend hundreds of dollars on an extra 4GB. *Especially* when your mainboard only has 4 slots and requires absurdly expensive 2GB sticks. Even if/when FA starts using memcache, you'll be hard pressed to find 4GB of data worth caching if you have a design worth having. Hand-to-god, it's true.


----------



## blueroo (Nov 14, 2006)

I have to be honest here. That quad core you're quoting is overkill. Especially considering the fact that from what Yak showed me, FA is not limited by CPU. Building an entire rig around the fastest core you can get, when you don't even *need* more CPU seems.... silly. Most of those components are needlessly expensive, just so they can support that processor.

Throw in the fact that Yak, and assuredly others as well, wants to see the new server be dedicated to the database, it's hard to justify spending so much money on a resource which just isn't needed.

If I'm wrong about FA's usage, and it's possible, then it could justify getting the fastest core you can, but I really really doubt it.


----------



## WelcomeTheCollapse (Nov 14, 2006)

blueroo said:
			
		

> I have to be honest here. That quad core you're quoting is overkill. Especially considering the fact that from what Yak showed me, FA is not limited by CPU. Building an entire rig around the fastest core you can get, when you don't even *need* more CPU seems.... silly. Most of those components are needlessly expensive, just so they can support that processor.
> 
> Throw in the fact that Yak, and assuredly others as well, wants to see the new server be dedicated to the database, it's hard to justify spending so much money on a resource which just isn't needed.
> 
> If I'm wrong about FA's usage, and it's possible, then it could justify getting the fastest core you can, but I really really doubt it.



There's probably more processor-intensive stuff coming up in Ferrox.


----------



## Kougar (Nov 15, 2006)

I don't pretend to be a hosting guru, just a hardware one.  I don't know the full details behind FA's current server, nor the usual loads it receives, so I can only base the build off of what the admins/coders have said. What little I can discern from the server load Alexa gives for FA, which I assume is understated to begin with, would lead me to assume 4gb IS the bare minimum for an efficient server; FA is just too busy. FA has even exceeded VCL's server load, let alone Furnation's, if Alexa is any indication to go by. And WelcomeTheCollapse has a point, the Ferrox upgrade is going to do more than just looks. 

Something like Google Analytics would be great to include to avoid the inaccurate Alexa stats, since all it requires is a minor addition to the server code and nothing else. But that is a different topic entirely, I think...


----------



## Dragoneer (Nov 15, 2006)

Kougar said:
			
		

> I'll go ahead and ask, could we please have a _complete_ rundown on FA's current server hardware to give a better frame of reference to work from?


2x Intel Xeon 3.0Ghz CPUs (Nocona core)
1x Asus NCCH-DR Motherboard
4x 1GB Kingston ValueRAM ECC-reg, non-buffer server-line memory 
1x 3Ware RAID controller card, 2 port SATA 1.0 spec w/64MB onboard mem
2x 200GB Western Digital Barracuda HDs
1x 620 Watt Enermax Liberty SLI PSU
1x Shitty server case that does the job


----------



## Kougar (Nov 15, 2006)

Thanks Dragoneer! It looks like that system can be roughly equated to a Pentium D 830, which can then be directly compared to an E6700 or anything else to give a good idea of the performance side of things. CPU Performance Charts. Those charts don't have any real database-oriented tests, I'm afraid, but they do have the QX6700 and most Intel/AMD processors in them. If a QX6700 is overkill then simply using an E6700 in its place is always an option without affecting any of the other system hardware. However, keep in mind that the next "higher" processor is the X6800, which carries the same MSRP as a QX6700. 

Am I correct in that FA is running LAMP (Linux, Apache, MySQL, PHP)? And just to double-check, the Apache used is version 2.0, right? If FA were to migrate to a quad-core or 4-core system then Apache 2.0 would be required if running under Linux or Unix, otherwise the hardware would be useless.  I've been conversing with a person I'd consider an expert in hosting (as best someone like me, who knows nothing in that field, can determine anyway  ), and according to him it sounds like PHP may be becoming the bottleneck, as it is inherently single-threaded, especially when dynamic content is being served through it. I was going to ask about WAS, WebSphere Application Server, but his opinion is that J2EE is better, since Apache has an open-source J2EE project going. And to tack this on as a sidenote to confirm: are "images" being stored inside the SQL database or rolled into flat files? My apologies in advance if I offend anyone (especially the coders!) but I feel it's better to check things instead of just assuming. If I'm asking a pretty blatantly obvious question regarding the hosting/code, then chances are I don't even know it's obvious. 

(Edited a few dozen times as I continue to converse  )


----------



## Kougar (Nov 19, 2006)

A few not-so-important questions: are those the 1mb or the 2mb L2 cache Noconas? And which FSB grade are they? While the difference between the cache sizes isn't much, the FSB does have an impact. I'm asking mostly out of curiosity, as it affects which Opterons I'd equate them to. Here are some database-oriented benchmarks on both cache versions of Nocona against some older-model Opterons: 1 

And for whoever is interested, here are some "applicable" CPU tests in various configurations, some more "applicable" than others. Woodcrest: 1  Clovertown: 1, 2, 3  Kentsfield: 1


--Edited--

Since price is the major issue, second only to performance, and ignoring the lower performance entailed, I looked at switching to an outdated Opteron server that supports the same DDR RAM the current server uses, but I don't believe the savings are there. A single Wintec 2GB ECC DDR2-533mhz CL4 module is $250, $50 more than a 2x1gb kit of DDR, but purchasing two Socket 940 dual-core Opterons would cost $386 each for just the 2.2ghz models, and the chip/socket has already been outdated as it is. While one of these would definitely outperform the current 3ghz Noconas, it won't be by even half the lead a "Core"-based chip would offer over the Opterons.

At first I wasn't keen on suggesting 6 drives... but disk space already seems to be an issue, and they're needed because RAID 10 halves the available storage. Switching to 6 x 320gb drives would run $90 more, but give 960gb in RAID 10. If 750gb of space has the potential to be filled within two years, then I strongly suggest getting the 320gb drives for that extra $90. That RAID card only offers 8 ports total, so only 2 ports are left free, and RAID 10 will halve the drives' combined capacity. Switching to an Intel server board would require spending more on FB-DIMMs, but it would allow more than 8gb of RAM to be utilized in the process, so if 8gb will ever become a limitation, then this build should be skipped in favor of a full Intel server board. 
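
The RAID 10 capacity math above can be sanity-checked with a quick sketch (the sizes are the decimal-gigabyte figures quoted in this thread; the function name is mine):

```python
def raid10_usable_gb(drives: int, size_gb: int) -> int:
    """RAID 10 stripes across mirrored pairs, so usable space is half the raw total."""
    if drives % 2 != 0:
        raise ValueError("RAID 10 needs an even number of drives")
    return drives // 2 * size_gb

print(raid10_usable_gb(6, 250))  # 750 -- the 250gb build above
print(raid10_usable_gb(6, 320))  # 960 -- the 320gb alternative
```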

Intel Quad-Core QX6700 2.67ghz Processor
ASUS P5WDG2-WS Motherboard
4 x WINTEC 2GB DDR2 SDRAM ECC Registered DDR2 533 (8gb total, all slots used)
3ware 9590SE-8ML PCI Express x4 SATA II (RAID 10, 8 SATA II ports)
6 x 250gb Seagate ST3250620AS 16mb Cache, Perpendicular Recording (RAID 10 = 750gb available for use)
SuperMicro SC933T-R760 3U Rack (With redundant 760watt PSU)
Total: $4,120

I went ahead and moved the build idea down here in case anyone sees it and has some better ideas to offer; I'm sure someone out there does.


----------



## yak (Nov 19, 2006)

Kougar said:
			
		

> Am I correct in that FA is running LAMP (Linux, Apache, MySQL, Php)? And just to double-check, the Apache used is version 2.0 right? If FA was to migrate to a quad-core or 4-core system then Apache 2.0 would be required if running under linux or unix otherwise the hardware would be useless.


FA's running FreeBSD 6.something, and Apache 1.4.something, MySQL 4.something and Php 4.something.
For static content we are going to make use of nginx, which will initially be installed on the same machine as the rest of the site, and later probably moved to its own hardware. 
For dynamic content, I'm looking towards nginx again, using FastCGI.

Currently both static and dynamic content are served by Apache.




			
Kougar said:

> And to tack this on as a sidenote to confirm, are "images" being stored inside the SQL database or rolled into flat files?


Storing the images in the database would be adding a redundant wrapper to otherwise static flat files. I can argue for eternity with people who think that storing data in blobs inside the database is actually a good thing. No it's not ;D



			
Kougar said:

> My apologies in advance if I offend anyone (Especially the coders!) ...


*smiles* With what? I honestly don't see anything offending.


----------



## Kougar (Nov 19, 2006)

Thanks for the reply! I'm trying to google it up, but I didn't even know an Apache 1.4 existed till now... I mentioned Apache 2.0 because it is reportedly the only version that's multi-threaded under non-Windows OSes. Apache 1.3 is multi-threaded only under Windows... and the one article I found regarding a 1.4 version stated it was directly based on 1.3 code.

I'm going to be showing my software naivety asking this, but which of those apps demands the most CPU/system resources? Since you work with the code, are the most demanding app(s) already multithreaded (or multithreaded enough) so they would take advantage of four cores? That's pretty much the crux of my posts right there, since obviously there's no point in building a powerhouse of a system if the software won't make use of it, and no one's confirmed that it could. 

And I can agree with you about storing images inside the database, I'd come across a pretty 'interesting' article on the side-effects doing that can cause!


----------



## blueroo (Nov 20, 2006)

I am cursed to forever point this out to folks. You do not need threading to take advantage of multiple cores when you have a multi-tasking operating system. Never have, never will. Threads are simply a way to split one process into multiple simultaneous tasks.


----------



## Kougar (Nov 20, 2006)

Well, I thought I'd replied to that the first time you mentioned it, but since I can't find it at the moment, maybe I didn't... A "multi-tasking operating system," as you put it, will still be using only a single core if the program was built to run in a single thread, no matter whether the system has a dual-core, a quad-core, or two quad-core CPUs inside it. If that single core isn't powerful enough to efficiently run that one thread, then single-threading becomes the bottleneck, and none of the idle cores can do anything except miscellaneous unrelated tasks. You can't magically slice and dice a single thread into many to split it across all available cores; that is not how single-threading works. That is, in fact, the definition of a multithreaded application: one that a "multi-tasking operating system" can split among the cores, either as a preset number of "n" threads or with "n" threads = the number of available cores.


----------



## blueroo (Nov 20, 2006)

Kougar said:
			
		

> Well, I thought I'd replied to that but since I can't find it at the moment maybe I didn't... A multi-tasking operating system will still be using only a single core if the program was built to run under a single thread. If that single core isn't powerful enough then single-threading becomes the bottleneck, and none of the idle cores can do anything except miscellaneous unrelated tasks. You can't magically slice and dice up single thread into many to split it across all available cores, that is not how single-thread apps work.



You're absolutely right. And yet applications being single-threaded is rarely if ever a bottleneck on real world servers. In the case of a web server, the cores are always filled up with multiple web server processes running simultaneously, kernel operations, queue managers, disk IO operations, shoving memory around, handling network operations. There's always enough work to be done. In FA's case, the cores have mysql server joy as well.
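
Blueroo's point can be illustrated with a short sketch (illustrative only; the worker count and toy workload are made up): a pool of single-threaded worker processes, like Apache's prefork children, lets the OS scheduler spread CPU-bound work across all cores without any threading inside the application itself.

```python
import multiprocessing as mp

def busy_worker(n: int) -> int:
    # A CPU-bound, single-threaded task (sum of squares); no threads anywhere.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Four independent OS processes, analogous to four prefork httpd children.
    # The kernel is free to schedule each one on a different core.
    with mp.Pool(processes=4) as pool:
        results = pool.map(busy_worker, [100_000] * 4)
    print(len(results))  # one result per worker process
```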


----------



## Kougar (Nov 22, 2006)

Well, that's basically what I've been trying to confirm, is if four cores would ever be utilized. Sorry I misunderstood ya Blueroo. 

For some fun, I played around until I found this setup. Honestly it makes more sense, doesn't cost any more than the last build, and offers much-expanded capacity for upgrades all around. 16 physical DDR2 slots, anyone?  The board accepts a maximum of 64gb of FB-DIMM RAM.

SuperMicro X7DBE+-O Dual Socket 771 Intel 5000P Enhanced Extended ATX Server Motherboard (Integrated VGA)
(2) Intel Xeon 5130 Woodcrest 2.0GHz Processors
(8gb) of WINTEC 2GB FB-DIMM ECC DDR2-533 modules
3ware 9590SE-8ML PCI Express x4 SATA II (RAID 10, 8 SATA II ports)
6 x 320gb Seagate ST3320620AS 16mb Cache Perp Recording (RAID 10=960gb available for use, not counting capacity "loss")
SuperMicro SC933T-R760 3U Rack (With redundant 760watt PSU)
Total: $4,126

Personally I'd say screw my last build and try this one instead... I also forgot to account for the natural "loss" hard drives have, which gets to be a large amount when dealing with so many large drives! I estimate, all said and done, it would offer 894gb in RAID 10. And for comparison purposes, a 2ghz Woodcrest offers more performance per clock than any AMD chip. And since I love to edit my posts a few dozen times... I want to point out that due to how FB-DIMMs work, it is inherently slower to have 8gb of RAM comprised of 8 modules; the same 8gb comprised of just 4 modules would yield a tangible performance difference.
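
The ~894gb estimate comes from the decimal-vs-binary gigabyte mismatch: drive makers quote 10^9-byte gigabytes, while the OS reports capacity in 2^30-byte units. A quick sketch of that conversion (the function name is mine):

```python
def decimal_gb_to_binary(gb: float) -> float:
    """Convert vendor-quoted decimal gigabytes (10**9 bytes) to binary units (2**30 bytes)."""
    return gb * 10**9 / 2**30

print(round(decimal_gb_to_binary(960)))  # ~894, the figure estimated above
```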


----------



## CyberFoxx (Nov 22, 2006)

Hmm, just thought of something today: what filesystems are you guys using on the HDs: JFS, XFS, ReiserFS, Ext3, etc.? And what options: noatime, notail... What FS you use and what options you use can add or subtract a lot of unneeded overhead. Then again, I'm basing everything on the filesystems that Linux supports, and what I've used. Not sure what all BSD supports, and what mount options as well. I do know that adding noatime to the mount options can give a huge speed boost to almost any filesystem. (Normal operation shouldn't care about access times, just modified times.) And the notail mount option can add a speed boost to ReiserFS, if you aren't concerned with saving all the space you can. If it's Ext3, make sure they have the dir_index filesystem option turned on; that can give a decent speed boost to large directories. (Makes it almost as fast as ReiserFS, without the overhead of ReiserFS. ^_^)

Just thought I'd give out a couple ideas/tips. Sure, they're elementary tips that almost every *NIX user should know, but every tiny bit of speed does help in the long run. ^_^


----------



## blueroo (Nov 22, 2006)

CyberFoxx said:
			
		

> Hmm, just thought of something today: what filesystems are you guys using on the HDs: JFS, XFS, ReiserFS, Ext3, etc.? And what options: noatime, notail... What FS you use and what options you use can add or subtract a lot of unneeded overhead. Then again, I'm basing everything on the filesystems that Linux supports, and what I've used. Not sure what all BSD supports, and what mount options as well. I do know that adding noatime to the mount options can give a huge speed boost to almost any filesystem. (Normal operation shouldn't care about access times, just modified times.) And the notail mount option can add a speed boost to ReiserFS, if you aren't concerned with saving all the space you can. If it's Ext3, make sure they have the dir_index filesystem option turned on; that can give a decent speed boost to large directories. (Makes it almost as fast as ReiserFS, without the overhead of ReiserFS. ^_^)
> 
> Just thought I'd give out a couple ideas/tips. Sure, they're elementary tips that almost every *NIX user should know, but every tiny bit of speed does help in the long run. ^_^



It's FreeBSD, so hopefully UFS with SoftUpdates, directory hashing, and optimization for minimal time spent allocating blocks.


----------

