I'm afraid I still don't get how A/P is any better than A/A, post SP failure. Let's look at two hypothetical arrays using round numbers, one A/A and one A/P. Each houses 1 TB of net storage divided into 10 LUNs. Each array has two SPs that can handle 1,000 IOPS each, and let's say the average and peak workload is 1,500 IOPS.
The A/A array delivers about 750 IOPS on each SP, but if one SP fails, the remaining SP can only provide 67% of the needed I/O. The A/P array, with its LUN ownership balanced across the two SPs, also delivers about 750 IOPS on each SP, and if one SP fails, the remaining SP can likewise only provide 67% of the needed I/O. How is A/P any more fault tolerant?
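Spelled out as a quick Python sketch (nothing here beyond the round numbers already stated above):

    # Round numbers from the hypothetical above
    SP_CAPACITY = 1000   # IOPS each storage processor can sustain
    WORKLOAD = 1500      # average and peak IOPS demanded of the array

    # Normal state: the 10 LUNs are balanced across both SPs in either design
    per_sp_load = WORKLOAD / 2                # 750 IOPS on each SP

    # Post SP failure: the surviving SP carries the whole workload alone
    delivered = min(WORKLOAD, SP_CAPACITY)    # capped at 1,000 IOPS
    fraction = delivered / WORKLOAD           # 1000/1500, ~67% either way

    print(f"Per-SP load before failure: {per_sp_load:.0f} IOPS")
    print(f"After failure: {delivered} of {WORKLOAD} IOPS ({fraction:.0%})")

Either way the surviving SP tops out at 1,000 of the 1,500 IOPS needed.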
->I agree with Chris. In your example it really comes down to a design flaw that should never be allowed: simply put, A/A is better in every way, but only if you don't oversubscribe it. Using the numbers from your example:
1. A/A array: each SP can do 1,000 IOPS and both serve I/O, so the total available across the two SPs is 2,000 IOPS.
2. A/P array: each SP can do 1,000 IOPS but only one serves I/O at a time, so the total available is 1,000 IOPS.
So as long as you don't expect more than 2,000 IOPS from number 1, you are good. Number 2 forces an upper limit of 1,000 IOPS before you see performance issues in the normal state. I have seen, just as I am sure Chris has in his work, so many A/A arrays oversubscribed: people keep packing on applications and I/O, assuming they are OK, until a failure hits. Assuming that if you do 1,500 IOPS and lose one SP, the remaining 1,000 IOPS will be better than nothing is not always true... One of the largest problems with large arrays is that the whole world lives on the array, which is really a poor design choice. Storage admins should plan as if an A/A array can only deliver 50% of its total IOPS/performance, which is exactly the limit an A/P array enforces by design.
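To make that headroom point concrete, here is a rough Python sketch (the 1,000 IOPS per-SP figure is just the round number from the example, and the array-level active/passive model is the one described above):

    SP_CAPACITY = 1000   # IOPS per SP, round number from the example

    aa_ceiling = 2 * SP_CAPACITY   # A/A: both SPs serve I/O in normal state
    ap_ceiling = 1 * SP_CAPACITY   # A/P: only the active SP serves I/O

    survivor = SP_CAPACITY         # one SP is all that's left after a failure

    # An oversubscribed A/A array degrades on failover; an A/P array was
    # never allowed to exceed one SP's worth, so failover costs it nothing.
    for label, load in [("A/A oversubscribed", 1500),
                        ("A/A kept at 50% of aggregate", aa_ceiling // 2),
                        ("A/P at its enforced cap", ap_ceiling)]:
        served = min(load, survivor)
        print(f"{label}: {served}/{load} IOPS after failover ({served / load:.0%})")

The only configuration that degrades on failover is the oversubscribed A/A one; the other two keep 100% of their normal-state performance.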
Thanks,
j