

A while back I wrote a post about data deduplication in Server 2012… generally a very good feature, but as that specific post talked about, there was a collection of iso and compressed data where not only did dedup not save me anything, it actually used up more space in the dedup folder than the original data size (which I found a little odd).
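
For anyone wanting to sanity-check this on their own volume, the dedup cmdlets will report what dedup thinks it is saving. A quick sketch (E: is just an example drive letter):

# overall savings reported for the volume
Get-DedupVolume -Volume E: | Format-List Volume, SavedSpace, SavingsRate
# per-volume status, including how many files have actually been optimised
Get-DedupStatus -Volume E: | Format-List InPolicyFilesCount, OptimizedFilesCount, SavedSpace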

Today I got around to doing something about this, and found that disabling data deduplication (via GUI or powershell) only stops further deduplication from occurring – data that has already been deduplicated will remain deduplicated. In order to "move" (re-hydrate?) the data back to the original files and out of the deduplication store, use the powershell command start-dedupjob -Volume <drive> -Type Unoptimization. You can check the status of where this is at by using get-dedupjob, or, I like using TreeSize, which shows the size on disk of specific files. At this stage I noticed the original files getting bigger, but the dedup store (and the chunks within it) had not decreased at all… "Maybe there's another command for this?" I thought. There were two additional job types available, "GarbageCollection" and "Scrubbing".
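
For reference, the whole re-hydrate sequence looks something like the below (a sketch only; E: is just an example volume):

# disabling dedup stops new optimisation, but leaves existing data deduplicated
Disable-DedupVolume -Volume E:
# re-hydrate the deduplicated files back out of the chunk store
Start-DedupJob -Volume E: -Type Unoptimization
# check on the job's progress
Get-DedupJob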

Unfortunately, neither the powershell help nor the technet documentation actually state what either of these do! After a bit of searching, I found this page, which specifies that GarbageCollection will find and remove unreferenced chunks, and Scrubbing will perform an integrity check… So I ran start-dedupjob -Volume <drive> -Type GarbageCollection – only to find that this command can only be run when dedup is enabled (which, of course, I had just disabled)! In order to get around this, I re-enabled dedup, but excluded all folders on the drive; I also removed all the schedules/background optimisation settings…
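
Something like the below is what got me there in the end (a sketch; E: and the folder list are examples, and the schedule name is one of the defaults, so check get-dedupschedule on your own box first):

# GarbageCollection only runs while dedup is enabled, so turn it back on
Enable-DedupVolume -Volume E:
# exclude every top-level folder so nothing new gets optimised (example folder list)
Set-DedupVolume -Volume E: -ExcludeFolder "E:\data","E:\backups"
# stop the default schedule from quietly re-optimising in the background
Set-DedupSchedule -Name "BackgroundOptimization" -Enabled $false
# now reclaim the unreferenced chunks from the chunk store
Start-DedupJob -Volume E: -Type GarbageCollection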

Just to be clear, 2012 deduplication is still a good technology – and I use it elsewhere with great results – it's just that every now and again you will run into a dataset which it does not agree with, and disabling it completely just isn't intuitive… (And yes, all of this probably could have been avoided by running the dedup estimator tool first – but then I wouldn't have learnt stuff, so there's no fun in that!) Hence why I thought I would write the above… hope it helps someone.
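
If you do want to run the estimator before enabling dedup, it ships with the deduplication role; point it at a volume or folder (the path below is just an example):

# estimate the savings dedup would get on a given path
& "$env:SystemRoot\System32\DDPEval.exe" E:\isos
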
As some of you may have seen from a previous post, I purchased a Thecus N7700 pro NAS a few years back… and I was most unimpressed by this NAS. The performance was poor, the AD integration simply doesn't work, the iscsi is flaky and the support is non-existent… So, when this NAS reached capacity, it was time for a new one. I tossed up between QNAP and Synology, both of which seem to have generally positive reviews around the web, and friends/clients that have them speak of them in a generally positive light. After some consideration, the reasons I ended up going with Synology over QNAP were:

1) Price – Synology gear was/is a reasonable amount cheaper than the equivalent QNAP gear.
2) Expandability – the Synology NASes I was looking at can both be expanded by adding up to 2 additional 5-bay enclosures.

I ended up getting a Synology 2413+, after changing my mind from a Synology 1813+… I figured the bit extra was worth the extra bays.
