So here's the deal.
My backup needs have gotten stupid. Currently I have a Mac Mini server with a 1TB drive backing up two MBAirs (and the Mini). Everything vital is up on Dropbox, of course, but when you work with media, that's tough. The Mini is also steward of a 1TB drive including my music, which is also backed up on Google Music (I think - haven't touched it in ages but haven't turned it off, either). All told, the little Time Machine is backing up ~450GB, which puts 1TB at "just barely enough." However, it's a Seagate and also a dick. It has been unruly for some time now, and has gotten to the point where it dismounts before it can assemble a backup. Fuck you, 4-year-old Seagate.
The real stupidity, though, is the workhorse. Between project files, photo catalogs, sample libraries, virtual instrument patches and other miscellaneous media, an image of the Big Rig comes in at 3.5TB. And it's backing up to a 4TB Mybook.
Obviously, this is an untenable situation.
A year ago I was looking at a big stupid SATA raid enclosure. With the drives, 9TB would have cost me about $2500. But, just to see wazzup, I poked around and discovered that Western Digital is currently rawkin' some drives in a nifty little NAS. Not a well-reviewed one but a little NAS nonetheless.
Problem being: (3.5TB plus 0.5TB of headroom) x 3 (safety factor) = 12TB. That fucking thing would be effectively full now. Without any expansion. Without more stupid sound libraries added. Without more jobs (which are backed up offline regularly - I've got 2TB of archived media from the past year that isn't included in this). So suddenly we're out of the land of cute.
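For the record, the sizing arithmetic above, spelled out (all figures are the post's own; the x3 is the safety factor):

```python
# Backup-capacity sizing using the figures from the post.
current_image_tb = 3.5   # the Big Rig's image today
headroom_tb = 0.5        # near-term growth allowance
safety_factor = 3        # keep ~3x the data size for backup history

required_tb = (current_image_tb + headroom_tb) * safety_factor
print(required_tb)  # 12.0 -- which is why a 4-bay box is effectively full already
```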
And once out of the land of cute, I have no mileposts.
The last time I messed around with RAID I rolled my own. Used a little "linux-on-a-card" IDE thing that, in combo with 4 IDE drives, gave me 900GB of RAID5 in ZFS for just under a grand. And then the "linux on a card" thing konked out, took the BIOS of the ghetto Dell I was running it all on down with it, and corrupted the array while it was at it. I had to buy a new motherboard and install Knoppix on it just to rebuild the goddamn array. Which was missing most of the photos I took of the bicentennial orchid show in Bangkok, but I digress. Anyway, it was an unpleasantly technical experience.
So shit like this scares me a titch.
So I appeal to the greater Hubski mind. I know we got lotsa computer kats on here, of which I do not consider myself one. As far as my regular circle of friends is concerned, I'm out there On Beyond Zebra with this shit. I do not want to be anyone's sysop but myself and I don't want to spend too much time learning. But as near as I can tell, the quickest, bestest way to give me a Time Machine backup solution sized for my needs plus expansion involves one of these.
Talk me out of it. Talk me into it. Talk me into something else. Tell me horror stories. Tell me bedtime stories. Just know that I've been vapor-locked on this goddamn decision since Monday which means there are three computers in this house not backing up.
My 2 cents: I do some work managing a network for a house of 60 people. I've seen two units of a smaller WD backup solution die on the job... mysteriously. Unfortunately, if you check the back panel of both your link and mine, you'll see that there is no monitor / serial out. It's getting stuck on boot? Tough luck figuring out why. Now, yours is definitely easier to pull the drives out of, so it's a slight improvement, but unless you have another machine handy to debug with, any glitches in the system are going to be hard to come back from. I've sworn off WD. On one hand, their devices cost barely more than that of the hard drives. On the other, their devices are pretty much just a case, a tiny motherboard, and ethernet / USB IO ports. If that's enough to toss WD out the door, but you're still looking for a cheap solution, I'd personally recommend building your own el-cheapo server. It'll take time, and if you want good performance, you'll want a RAID card. But at least if things go sour, it'll be recoverable. Technically the drives from those WDs still work, but now they are sitting in an old desktop running CentOS and XFS. Hope that's not entirely useless ramblings^^
No, that's useful. I'm hesitant to roll my own because the last time I rolled my own I was up a creek without a paddle. I've learned that all-in-one solutions are really good for when you don't want a geometrically-expanding list of things that might have gone wrong... been there, done that. With a completely packaged solution, at least there's tech support of some sort (and often a community) that can provide advice on the whole system, rather than shotgunning piecemeal answers. My experience with WD is that their drives are great, but the shit they put in front of them isn't exactly Mac friendly. That's one thing that soured me on the EX4. The Synology has 6 USB ports, 4 network ports and 2 eSATA ports. Whether or not you can dial into them when the host goes tits up I don't know; far as I'm concerned, if the host goes tits up it's time to go shopping anyway. WTF is up with ZFS, by the way? Seems like all the little NAS boxes want to run it, but OS X, Win and most flavors of Linux don't speak it at all. Thus my adventures in Knoppix, which were the opposite of fun and empowering.
The GPL is incompatible with Sun's CDDL license, so ZFS has to live outside the Linux kernel proper. There's a FUSE implementation, and a recent loadable kernel module I haven't played with, but whether either is installed out of the box depends on your distro. The kernel module is almost surely the better choice now that it exists.
There are a couple of OS X implementations as well, OpenZFS and ZEVO.
I have no idea about the state of ZFS on Windows.
That was my plan, along with using a distributed filesystem like glusterfs to replicate data across the network as backup. Unfortunately "time to go shopping" came ~1 month in on the second device (it was in a sub-optimal power situation...). A few hours of work later, I had one machine with 7.5TB of space, now going on 4 months of uptime. ZFS has historically lived on Solaris and the BSDs. Linux is starting to support it via an out-of-tree kernel module, but I'm doubtful of the stability of a core module that hasn't undergone 10+ years of testing in the wild. It's got neat stuff to ensure data reliability on the drives, and most people mount their storage over SMB (Windows shares) / AFP anyway, both of which sidestep the question of ZFS support on OS X / Windows. Not sure about eSATA / USB...
By the way, now that I'm back in town, I thought I would share...THE NETCLOSET: http://imgur.com/a/XihhA It's since gone through a number of changes, and is no longer quite so much a fire hazard. Sadly, the water pipes still remain.
Have you looked into an LTO-5 system instead of a multi-drive backup? Store the tapes properly and they will work for several decades untouched. It's a bit expensive, but it should outlive the additional drives you'd otherwise have to buy in 4-5 years. I do a lot of filmmaking and video editing, and LTO seems to be the standard for data archival in most post houses. I don't personally own one, but that's only because I'm broke as fuck. Edit: Also, avoid Seagate in the future; their drives are known to fail faster than other brands.
"Avoid Seagate" is good advice. There was that whole Backblaze analysis that got a bunch of people worked up about hard drive reliability: http://blog.backblaze.com/2014/01/21/what-hard-drive-should-i-buy/ If you're willing to pay for reliability, Hitachi seems like the way to go. If they fail, they do it early in their life span, which seems like a huge bonus to me. Replacing data that is only a few months old is way easier than replacing it a few years down the line.
I had the first Barracuda. THE ONE. 9600 RPM, SCSI-1, 1GB, $2k in 1992. Fucker sounded like a Learjet powering up. It was without a doubt the hottest shit drive you could buy for any price. That drive pushed me through a good 15 years of Seagate nostalgia. I don't think I wised up until a couple years ago.
Probably 7200 (first to go that fast, I believe). The Seagate division in Minneapolis developed a 5.25" drive called the Elite that spun at 10,000 - that sucker got really hot, but it was a nice piece of work. I worked for Seagate from '87 to '98, mostly on the Wren series (developed in OKC). But it was a division of Control Data when I started; Seagate bought us later.
Seagate had a crap rep when they first started, because all they made at first were stepper-motor HDDs. They bought Control Data's disc division (branded as Imprimis) to up their game (voice-coil actuators replacing stepper motors), and they phased out the steppers quick. That integration of Control Data was the beginning of Seagate's first swing toward a good reputation. edit - I was wrong about the Elite - it was earlier, and the first to spin faster than 3600; it spun at 5400. 3600 was the standard for a long time before that. I think it may have been a later 'Cuda that spun at 10K.
I haven't - in part because I'm not doing that much deep storage. It'll get there, I'm sure. That's more of a "planned backup" solution as opposed to an "incremental backup" solution - in an ideal universe I'd be doing both, but I'm still busily feeling gobsmacked by dropping two large on a bunch of spinning magnetism.
You should be able to escape formatting symbols like + with a backslash (i.e. \+). It does look like there is a bug when editing, though, as it doesn't show the backslash you use to escape.
So +'s should be fine now by themselves in urls, such as this one. Anything outside of that and you'll have to use \'s to escape them.
While they're at it, it hates ~'s as well, if they're used to indicate approximation and split across ~2 lines. Edit: Ok, escaping with backslash works, but the backslash disappears when subsequently edited, as forwardslash noted.
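The workaround being described - backslash-escaping characters the markup engine treats as formatting - can be sketched as a tiny helper. `escape_markup` is hypothetical, not part of the site; it just illustrates the rule:

```python
# Backslash-escape characters the (hypothetical) markup engine treats
# as formatting, so "+" and "~" render literally.
def escape_markup(text, specials="+~"):
    return "".join("\\" + ch if ch in specials else ch for ch in text)

print(escape_markup("split across ~2 lines"))  # split across \~2 lines
```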
How do you feel about rolling your own, for God, country and savings? Zonk mentioned the Proliant Microserver. It's a tiny, quiet little thing with 4 x SATA slots for your drives and some extra external ports for expansion. They also run regular cashback offers on purchase - the current one in the UK is £100 back off the £160 purchase price of the machine. I'm betting there will be similar deals in the States. The box is stupid cheap either way. 4 x Seagate 3TB drives gives you 12TB raw for around $400; 4 x Seagate 4TB drives gives you 16TB raw for around $640. So that's 12 or 16 raw TB for around $500 or $750. On my Microserver I installed FreeBSD to a USB stick and boot from that. Then I configured ZFS from the command line and created my pool. If you don't want to fiddle around with FreeBSD, there is FreeNAS, which does all the goodness in the background and gives you a nice GUI for configuration. The benefit of a little multi-purpose box, though, is that you can have it do other things in addition to providing NAS. Mine grabs torrents, handles networked security cameras in the house, lets me get back into the LAN when I'm travelling, etc. Also it's extremely low power, which saves a few pennies when you're running it 24/7 for the year. One thing I'll get around to doing eventually is trickling an entire mirror of the data to Amazon S3 storage for a proper offsite backup.
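One caveat worth making explicit: those raw figures shrink once you reserve parity. A quick sketch, assuming a single-parity layout (RAID-Z1 / RAID5: one drive's worth of space goes to parity); the drive prices are the ones quoted above:

```python
# Raw vs usable space for the 4-bay Microserver builds above.
def usable_tb(drives, size_tb, parity=1):
    """Usable capacity once `parity` drives' worth of space goes to parity."""
    return (drives - parity) * size_tb

for size_tb, drive_cost in [(3, 400), (4, 640)]:
    raw = 4 * size_tb
    usable = usable_tb(4, size_tb)
    print(f"4 x {size_tb}TB: {raw}TB raw, {usable}TB usable, ~${drive_cost} in drives")
```

So the "12 or 16 raw TB" works out to 9 or 12 usable TB with one drive of parity, before filesystem overhead.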
That's what I did last time. It was great until it stopped working abruptly and painfully. Then I was left to fuck around on my own. Don't get me wrong - I was inches away from a DL380 a few years back. I'm not opposed to the idea, I'm just not sure what I benefit from going that way here. But work it out with me. So the box in question is probably this one, yes? That's $339 raw, 4 drives. Whatever I want on it, I put on it and make it happy. For another $10, I get this, which has hot-swappable cradles, the possibility for redundant PS, and a design ethic revolving around "people who don't want to deal with configuring a NAS." It actually works out cheaper - because WD wants you to buy drives, a fully loaded 16TB EX4 comes in at a "raw chassis" cost of $280. Or, I bite the bullet and spend another $400. The equivalent of two Proliant Microservers. But what I get with my hard-earned change is this - now we're at 5 bays, not 4 (which, if you're going to do RAID5, is your multiplier of choice). I've got dual power supplies. I've got dual fans. I've got four aggregatable ports. I've got expansion chassis that will push this bitch out to 15 drives if I feel like it. I have the ability to cache on an SSD. I've got RAM I can upgrade. And I've got a web configuration that, while kinda scary, doesn't scare me like "FreeBSD" and "ZFS." Again, I built one of those in 2003. It lasted three years and then failed dramatically, leaving me shit out of luck. Due to life events, cash flow and cantankerousness it took me nine months to get my data back. Color me stung. I would appreciate your opinion on this. From my perspective, it's worth the $400 hit to have the expandability, to have the support of a non open-source organization behind me, to have a single-point vendor, to have an optimized configuration, to have the ability to build a 16TB array in one chassis rather than a 12TB, to not have to puzzle out all this stuff. My perspective can be changed, however.
That's why I posted this. What's your opinion? By the bye, fun power fact: Our old apartment is 3 doors down. It's a 2 bedroom vs. a 3 bedroom, though. Nothing else has changed, other than the fact that the refrigerator in this apartment isn't fully enclosed in wood, which means it doesn't run all the time. My power bill went down $80 a month.
Well, true 'nuff, the argument about how much one's time is worth comes into play. At the time I set up The Box, I had spare time on my hands and I used it as an excuse to learn FreeBSD. If I had to do it again? Maybe I'd spend the money on (or install) something more fire-and-forget. A thought: you can get 5 drives in a Microserver by stripping out the CD bay. Again, time. Another thought on RAID5: from a discussion about RAID and large disks, I can quote my IT guru here:
"As always, this is a standard risk calculation. If the data matters, you don't use RAID5. And with big SATA drives you are often best off using RAID6 (even with 4 disks) as it has the highest recovery coverage." I can ask him to explain further why this is the case, but I'm assuming double error parity is better than single for recovery. As with everything, it comes down to your usage. For me, it's long term backup of large files (video data) that I don't need online access to, since the projects go onto faster local storage when in use and then move back to the slower, larger NAS when packed away. Do you need bandwidth critical storage? Or if it's primarily Time Machine backups, your first sync is the one that takes the time and then it's small, incremental updates. In that case, maybe it's slower, cheaper drives and a slower interface. Like you I have nearly no idea what I'm doing. However, it's educational doing it. ZFS though? Proper space technology. RAID-Z (1, 2 or 3 vs RAID5, RAID6 or "RAID7"), self-healing data, automated snapshot and rollback of data, on the fly storage compression. If I had to do it again (or I upgrade The Box in the future), I'd go with FreeNAS 9 or Nexenta Community to avoid having to muck about in BSD. Open source, yes, but still reasonably turnkey, and they both appear to provide the latest ZFS implementations which older FreeBSD versions do not.
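The guru's advice can be made concrete with a sketch. Same four drives (the 4TB size is purely for illustration), two layouts - the trade is usable space against how many simultaneous drive failures the array survives. The rebuild-risk argument is that rebuilding after a first failure reads every surviving sector, so with big SATA drives a second failure or unrecoverable read error mid-rebuild is a realistic event, and RAID6's second parity covers exactly that window:

```python
# RAID5 vs RAID6 on the same 4 drives: capacity vs failure tolerance.
def array_layout(drives, size_tb, parity):
    """Capacity model: `parity` drives' worth of space buys tolerance
    of that many simultaneous drive failures."""
    return {"usable_tb": (drives - parity) * size_tb,
            "survives_failures": parity}

raid5 = array_layout(4, 4, parity=1)  # 12TB usable, survives 1 failure
raid6 = array_layout(4, 4, parity=2)  # 8TB usable, survives 2 failures
```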
I basically have no idea what I'm doing either (at least that's how I feel, even after my CS bachelor's). But I had an urgent need for redundancy (a NAS does not necessarily provide backup functions, but rather redundancy - that's a difference, at least if you rely on RAID) with lots of bytes in movies and music, and now my growing photo library after getting my DSLR. So at first I wanted to get an HP Proliant server and use that as a NAS, but it was limited in slots / bays and it just had too many disadvantages, which is why I decided to go for a QNAP TS-412 with (at first) 3x 3TB WD Red (the NAS drives).

The configuration was as easy as you could imagine. I connected it, plugged in the 3 drives, started it up, decided what RAID I wanted (I went for RAID5, because that saves me from hassle when I add a 4th HDD and gives up very little space to parity; speed was not an issue, since it was mainly for storage) and it was basically set up. Of course, a little bit of user management and privileges through the interface, but that was pretty easy to handle, too. And as it's basically only me that uses the NAS, I could just admin the shit out of everything and give myself all privileges everywhere, which (I guess) saved me a bit of hassle toying around with the authorizations. After that I created the folder, added it as a network drive to my PC and it was basically already functioning as I wanted.

On top of that I wanted my torrent client to run from the NAS, and even though it's only an ARM processor, it works just fine. QNAP offers some pre-packaged setups, which are super easy to install, so I got Transmission set up and running super easily, too. Added a GUI for Windows and my Android and voilà: I had everything I wanted. I can control my downloads from everywhere in the world (which is handy when I'm on a business trip) and I feel secure with my data being protected from HDD failure (see, redundancy, not backup).
On top of that I added the movie folder as a resource for my RasPi with XBMC and I have my whole library easily accessible and fancy looking. Everything on the QNAP is easy to handle, and the latest firmware also added a huuuuuge overhaul of the interface, making it even easier. I may not use it the most efficient way, maybe not the most secure way, but it works just as fine as I wanted it.

The next quest is upcoming this weekend (which is funny, because I just placed the order yesterday and now I see this topic), because I ordered a 4th WD Red 3TB to add to the last unused bay of the NAS. If everything goes well and as user-friendly as I hope, I just put it in the bay, the RAID is rebuilt, and after a little bit I can use it with 9TB instead of 6TB. I don't know if that's how it will work out, but I remember that there's an easy option with a configuration wizard which simply has the function "Add another drive to the RAID array". And the community in the forums is pretty active, too. So I'm not too worried about that, even without being a super nerd who handles connecting to the NAS via PuTTY like it's nothing.

All in all I'm more than happy with my choice, even if it was the slightly more expensive way (compared to the HP server). Easily being able to add another 3TB is golden, the management is easy, I love my media center with XBMC (a NAS + XBMC is just sooooo good) and having torrents running 24/7, accessible from everywhere all the time, is just super handy and nice (without having the desktop running). I even have that QNAP standing 2 meters from my bed and it's so silent that I don't mind. So yeah, I couldn't really help you with your specific problem set, but I hope I gave you a little insight into how I handled things about a year ago, when I was in almost exactly the same situation. Edit: Now that I think about it, we're not in exactly the same situation lol. Well, kinda, but I had 2TB of stuff and I solely rely on redundancy without backups.
But I have no suggestion for how to handle 12TB of data right now, haha. Maybe decide on a solution that is almost full right now but easily vertically scalable, so that you can add resources one after another whenever you need them (like I did with my RAID5, at least I hope so). The problem I see there, though, is that I feel like after a 4-bay NAS you hit the end of what's considered "home server" and get into business spheres, which is not price-friendly at all anymore. I have no proper solution, but I'll keep track of this topic - let us know what you end up with :P
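For what it's worth, the expansion numbers in the posts above check out under standard RAID5 accounting (a sketch of the arithmetic only, not QNAP's actual rebuild logic):

```python
# RAID5 usable space before and after adding the 4th 3TB WD Red.
def raid5_usable_tb(drives, size_tb):
    # RAID5 always gives up one drive's worth of space to (distributed) parity
    return (drives - 1) * size_tb

before = raid5_usable_tb(3, 3)  # 6TB, as described
after = raid5_usable_tb(4, 3)   # 9TB after the online expansion
print(before, "->", after)
```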
So riddle me this, Batman - why QNAP over Synology? 'cuz that's a decision that matters to me. Seems'a'me that in 5-drive land, QNAP runs me 870 bucks. For my additional $100, I get a bunch of media center features - an HDMI port, for example, and a media-centric approach. Which is kind of cool, but I have to remind myself that I have a near-useless Roku that speaks XBMC just fine. I also note that if I look up "QNAP Time Machine" I get a bunch more people bitching than I do when I google "Synology Time Machine." Both of them support it natively, both of them are not without their problems, but it appears that Synology pushed an update five months ago that made things better. That's probably what it's down to - a 5-bay Synology or a 5-bay QNAP. $750 vs $870. Thoughts?
Actually, in this regard, I blindly trusted my fellow students. As I mentioned, I studied CS until this summer, so I was surrounded by a couple of people who were really into networking, NAS and this kind of stuff. And a couple of guys just recommended QNAP to me because they had done a lot with it at work or privately and they loved it. I knew those people know a lot more than me about the topic, so I just trusted them blindly. Regarding the "more results": I don't know how to interpret that, but it doesn't necessarily correlate with quality. If QNAP has three times more users than Synology, the vocal minority might appear larger overall, even while being a smaller percentage of users. That said, I never compared, looked for user feedback or have any other info about Synology. I can only speak for my experience with my QNAP. No critical or questioning buying process. Sorry bud.
I won't argue reliability one over the other. I was curious if you had any insight. As far as what I can find, there are about six times as many reviews on Amazon for the Synology box as there are for the QNAP, so there's that. On the other hand, the QNAP reviews are a little more positive. Having messed around with the QNAP's UI, I'd need a pretty compelling reason to go that way; the Synology UI is a lot more intuitive to me. Don't interpret this as me not sincerely appreciating your input - I've enjoyed this decision not at all and every datapoint I can scrounge helps. Personal experience helps loads. Thanks a whole bunch.