Oh no! You did something kinda bad...

Please read: http://nex7.blogspot.com/2013/03/readme1st.html

Two issues to note:

Under RAIDZ1/2/3, zpool list reports the raw size of the disks comprising the pool, parity included. zfs list shows usable space. That's where the discrepancy comes from.
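You can see the two views side by side (pool name "tank" is a placeholder here):

```shell
# Raw capacity across all member disks, parity included:
zpool list tank

# Usable capacity after parity overhead is subtracted:
zfs list tank
```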

Also, what you ended up doing was... expanding a 4-disk RAID 1+0 (ZFS mirrors) by adding a RAIDZ1 group of 3 disks.

So your mirror vdevs are striped together. Fine. But by adding a RAIDZ1 group, you now have a stripe across two mirror vdevs and one RAIDZ1 vdev.
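A rough sketch of what likely happened and the resulting layout (pool and disk names are placeholders; zpool add normally refuses a mismatched replication level unless forced):

```shell
# The command that probably created this situation;
# -f overrides the "mismatched replication level" warning:
zpool add -f tank raidz1 disk5 disk6 disk7

# zpool status tank would now show, roughly:
#   tank
#     mirror-0   disk1  disk2
#     mirror-1   disk3  disk4
#     raidz1-2   disk5  disk6  disk7
```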

Losing both disks in either mirror pair, or any two disks in the RAIDZ1 group, would result in total pool failure.

You should have added an even number of disks by creating more mirror pairs. Right now you have no option to revert the change, because your data is already striped across the disk groups. This is probably a backup, rebuild, and restore situation.
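For reference, the shape of the add you wanted, keeping the pool all-mirrors (names are placeholders):

```shell
# Add another mirror pair as a new top-level vdev,
# preserving the existing striped-mirror layout:
zpool add tank mirror disk5 disk6
```

Without -f, zpool would have warned you before letting the RAIDZ1 vdev in.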