I'm baffled that such a basic feature took so long to implement.
well it was built in the mid-late 2000s by Sun, who by that point had been completely chased out of the low end of the market by Linux and x86, like all the others of their ilk. Since they were aiming everything at the cost-insensitive parts of the market by then (either people locked into Solaris or the high-end enterprise crowd), ZFS had a baked-in design assumption that you're a mid-to-large organization with a hardware budget, and that if you need to expand you just buy a new machine with however much storage you now want and migrate.
I'm sure what you're saying is correct and all; however, hobbyist-oriented projects such as FreeNAS have been popular for over a decade now. You'd imagine someone would have stepped up and implemented it themselves ages ago, but I guess not.
IIRC the devs didn't want it for a long time because they inherited that same attitude, and the people who were paying for most of them either had hardware budgets or were selling to people who did, and thus didn't care
this is also why both ZFS and Btrfs have kinda neglected parity RAID (RAIDZ/5/6): enterprises with stacks of cash will just use mirrors because they're faster.
hobbyists represent a $0 market segment
> You'd imagine that someone would step up, and implement it themselves ages ago
Chapter Nov 2023, in which OP learns the truth about Open Source.
Didn't they open source Solaris for a while?
It's so sad what happened to Sun. A bunch of extremely competent people invented amazing technology only to get eaten by Oracle and milked for profit.
Sun was a staggering zombie for many years before Oracle bought them. Oracle didn't kill them, not one bit. It just fed on the corpse.
I thought that the ultra-competent people who wrote stuff like ZFS and dtrace only started leaving after the acquisition?
they still had some smart people in the late 2000s, but as a corporation they were already finished. Nobody wanted SPARC by that point and other than a handful of things like ZFS, Solaris wasn't a very compelling OS. (and remember at that point bit-rot wasn't a terribly salient issue, people were still using traditional RAID) What do you do at that point, sell commodity x86 servers? They couldn't compete in that market, their costs (and thus prices) were way too high.
They bought MySQL and got nothing. The OG MySQL devs were big suntards though.
Oracle bought Sun solely to get MySQL's secret sauce. Lolno.
The MySQL devs then bailed on Oracle and created MariaDB, bringing the secret sauce that makes it assrape garbage like Oracle or Postgres.
The free version of MySQL/MariaDB is a gun without bullets.
t. spoke to the OG devs over a cup of coffee
>assrape Postgres
Yeah no, nothing can beat Postgres.
Great, but the pool won't be reshaped / the data won't be redistributed to the new disks. Same with striped mirrors when you add disks. So it's still not as advanced as mdraid.
Existing data is redistributed across the new disks, just not in the new layout, so it can only be read at the old stripe width. New data does use the new layout, though, and if you delete your old data, move it off and back, or restore it from a snapshot, it gets fully rewritten in the new layout.
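A hedged sketch of what the expand-then-rewrite dance looks like in practice, assuming OpenZFS 2.3+; the pool, vdev, dataset, and device names here are made up for illustration:

```shell
# OpenZFS 2.3+ raidz expansion: attach one new disk to an existing
# raidz vdev (pool/vdev/device names are examples).
zpool attach tank raidz1-0 /dev/sde
zpool status tank          # shows expansion progress

# Reflowed blocks keep their old data:parity ratio; rewriting them
# adopts the new, wider layout. One way is a local send/receive:
zfs snapshot tank/data@pre-rewrite
zfs send tank/data@pre-rewrite | zfs receive tank/data_rewritten
```

The send/receive rewrite obviously needs enough free space for a second copy of the dataset while it runs.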
Wake me when it's going to be included in the linux kernel
You want to sleep forever?
yes
To sleep, perchance to dream, ay there's the rub? I hope not 🙁
Superseded by Bcachefs
Interesting, thanks for the tip
Isn't bcachefs still experimental?
Reminder that ZFS on hard disks serves no purpose to the home user.
Nope, i have an HDD mirror: enough storage for my needs, cheap, builtin bit rot correction.
>builtin bit rot correction.
ZFS was made when HDDs did not have this on the hardware level
They can't have it at the hardware level; at most they can do single-bit correction per block at the cost of ~5% of space. Most modern HDDs don't protect against bit rot; they have improved shielding, but that's about it.
In fact they're somewhat more vulnerable due to higher data density.
Anyway, i only have a few TB i really care about, so that setup is good enough for me.
HDDs have the same proportionate risk of URE now as they did then. Checksums are as much about protecting the user from drive firmware fuckups as media failures.
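A toy sketch of why end-to-end checksums catch this: keep a checksum next to each block, verify it on every read, and heal a bad mirror side from the good one. Everything here (function names, the block scheme) is illustrative, not ZFS's actual on-disk format.

```python
import hashlib

def store(block: bytes):
    # Keep a checksum alongside each block, roughly like ZFS does
    # in its block pointers.
    return {"data": bytearray(block), "sum": hashlib.sha256(block).hexdigest()}

def read(copy_a, copy_b):
    # On read, verify the checksum; if one mirror side is bad, heal it
    # from the good side -- a toy version of ZFS self-healing mirrors.
    for good, bad in ((copy_a, copy_b), (copy_b, copy_a)):
        if hashlib.sha256(bytes(good["data"])).hexdigest() == good["sum"]:
            if hashlib.sha256(bytes(bad["data"])).hexdigest() != bad["sum"]:
                bad["data"][:] = good["data"]  # repair the corrupt side
            return bytes(good["data"])
    raise IOError("both copies corrupt: unrecoverable")

a = store(b"important data")
b = store(b"important data")
b["data"][0] ^= 0x01        # silent bit flip: drive returns wrong data, no error
assert read(a, b) == b"important data"          # corruption detected, good copy served
assert bytes(b["data"]) == b"important data"    # bad copy healed
```

The point is that the drive never reported an error; only the filesystem-level checksum noticed the flip, whether it came from the media, the firmware, or a dodgy cable.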
>bit rot
no such thing
You're retarded. While not as prevalent as people like to pretend, your data CAN change/rot in general due to faulty hardware, but there are ways to mitigate it.
now wait 1 or 2 years for all the data-eating bugs to get shaken out
Oh god, finally. I don't need to wait for a sale to buy my remaining 2 drives before setting up RAIDZ on my home server.
probably some hidden downside
never gonna use it
i like zfs as is
Isn't that old news?
draid expansion when?
We also got block cloning a while ago.
I just read on RAID-Z. sounds kinda spooky. too many moving parts for it to be robust.
ZFS is very robust. The filesystem itself is complex, but it eliminates the need for other shit that can break.
>ZFS is very robust.
I just got finished dealing with a dodgy cable in a new server. It was generating a large number of checksum errors on one of my mirrored drives as i was copying data over from the old server.
swapped out the cable, scrubbed, and everything is fine. god knows how long it would have taken me to notice otherwise as i don't access much of the data that often.
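For reference, the usual sequence after a fix like that cable swap looks something like this; the pool name `tank` is just an example:

```shell
# Per-device error counters are the READ / WRITE / CKSUM columns here:
zpool status -v tank
# After swapping the cable, re-verify every block against its checksum:
zpool scrub tank
# Once the scrub comes back clean, reset the error counters:
zpool clear tank
```

If the counters start climbing again after a clear, the problem is still there (cable, controller, or the drive itself).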
>Z
4000 euro fine in Germany for that one
What?
wake me up when it finally has defragmentation support
Is this suitable for desktop use? I'm considering using this with RAIDZ-1 instead of LVM
Why are you using RAID instead of just backing up your PC? It's much cheaper. That, and unless you never eat, sleep, relax, or do literally anything other than compute 24/7/365, you don't need the uptime that RAID provides.
NTA, but RAID and snapshots are cheaper and less work than having a NAS. I still keep offline and off-site backups. The risk of serious controller or OS fuckups is just not relevant to me as a home user. I haven't had total loss from either since journaling filesystems were a thing.
also a different anon, but I've had SSDs fuck up before and even though I have backups of all my data, reinstalling my system would still be a massive pain in the ass. Also one of the times I had that happen was on the machine I was using as my router, it was running pfsense at the time. I updated it and rebooted it. The SSD barfed up an I/O error when it was reading some shared library, init died, and the machine panicked. Now I have no router until I can fix that. Oops. Ever since then I've always mirrored my boot drives so if that ever happens again I can just say "whatever, mount it degraded" and keep going until Amazon can ship me a new drive.
this, while sure i'm not some big corp who stands to lose tons of money if my machine is out of service for a day, it still sucks when it's not expensive or difficult to avoid by just having a raid1
like compare a raid1 with weekly backup to just weekly backup, if the active volume dies, you not only can't use the machine until you restore the backup, you also lose up to a week of data, and since i actually do work on my computer, that means even more work than just restoring the backup
for what, to save on not having to buy two disks for important data? it's not a great expense
Why not do both?
I take this view; I've got a large movie collection and other assorted documents/data. Some of it is 20 years old already. I plan to keep all of it, plus what I add in the future, till I'm dead. ZFS is just another layer of protection, just like ECC RAM. Do I have to use them? No. But when I'm 80 and want to read some e-book on my server from 2010 or watch a movie on my server from 2020, I won't have to deal with any errors. Just enjoy. As it should be. "Do it once, do it right, enjoy it from then on till you die." (Aside: my movie rips won't ever have to be redone because frankly a 65" TV is as large as I'll ever go. My living room isn't going to grow, so 65" is it.)
Do you watch in 1080p or 4K?
ZFS anons, how much data do you store?