I do agree with you guys about not using BTRFS when you let it manage the volumes itself, but I and plenty of other people have found it stable and reliable as a plain filesystem, e.g. when you run it on top of LVM/mdadm, like Synology does.
I think you're confusing a few issues here, and showing a general lack of familiarity with the ways SW RAID can be done...
BTRFS, much like ZFS, can cross layer boundaries, and yes, that's exactly what bit the BTRFS team with the parity (RAID5/6) issue noted earlier - the filesystem can build a RAID straight from the raw disks, just like LVM can build a RAID on its own and not mess with mdadm at all...
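Just to make that concrete, with a couple of hypothetical spare disks (sdb/sdc here, sizes and names purely for illustration), both btrfs and LVM can each assemble a mirror on their own, with no mdadm anywhere in the picture:

# btrfs building the mirror itself, straight from the raw disks
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

# LVM building a mirror itself, no mdadm involved
pvcreate /dev/sdb /dev/sdc
vgcreate vg0 /dev/sdb /dev/sdc
lvcreate --type raid1 -m 1 -L 100G -n lv0 vg0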
Nothing wrong with running BTRFS over an MD device managed by mdadm, and nothing wrong with running it over LVM - in fact, one can bind multiple disks into an md device, layer the logical volume manager on top of it, and then format the resulting volume with btrfs (roughly the commands sketched after the list below)...
If I recall, that's exactly what Synology does...
BTRFS - data
LVM - volume groups/storage pools
MDADM - physical disks...
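For example, with four hypothetical disks (sdb through sde - names and sizes are just placeholders), that stack would look roughly like this: RAID6 at the md layer, a volume group on top of it, and btrfs on a logical volume carved out of the pool:

mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
pvcreate /dev/md0
vgcreate vg_pool /dev/md0
lvcreate -L 500G -n lv_data vg_pool
mkfs.btrfs /dev/vg_pool/lv_data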
That works... no problem there, and it gives some flexibility, since one can have multiple MD groups under LVM, and each LVM-managed volume can carry a filesystem optimized for the task at hand - for example, XFS on one volume, EXT4 on another, and BTRFS on another...
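Continuing with the same hypothetical vg_pool from above, mixing filesystems is just a matter of carving out more logical volumes and formatting each one differently:

lvcreate -L 200G -n lv_xfs vg_pool && mkfs.xfs /dev/vg_pool/lv_xfs
lvcreate -L 200G -n lv_ext4 vg_pool && mkfs.ext4 /dev/vg_pool/lv_ext4
lvcreate -L 200G -n lv_btrfs vg_pool && mkfs.btrfs /dev/vg_pool/lv_btrfs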
Like I mentioned earlier, I have nothing against btrfs in general, other than it's not as proven as some of the other choices out there - and yes, it has some serious issues that the btrfs team knows need to be addressed to meet the requirements they set forth.
A plus for btrfs is that it's not license-encumbered the way ZFS is, so it's more GPL-friendly, while giving similar ZFS-like benefits that ext4 can't.