I get a lot of questions about Percona XtraDB Cluster 5.6 (PXC 5.6), specifically about whether such and such MySQL 5.6 Community Edition feature is in PXC 5.6. The short answer is: yes, all features in community MySQL 5.6 are in Percona Server 5.6 and, in turn, are in PXC 5.6. Whether or not the new feature is useful in 5.6 really depends on how useful it is in general with Galera.

I thought it would be useful to highlight a few features and try to show them working:

Innodb Fulltext Indexes

Yes, FTS works in Innodb in 5.6, so why wouldn’t it work in PXC 5.6? To test this I used the Sakila database, which contains a single table with a FULLTEXT index. In the sakila-schema.sql file, it is still designated a MyISAM table:

CREATE TABLE film_text (
  film_id SMALLINT NOT NULL,
  title VARCHAR(255) NOT NULL,
  description TEXT,
  PRIMARY KEY (film_id),
  FULLTEXT KEY idx_title_description (title,description)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

I edited that file to change MyISAM to Innodb, then loaded the schema and data into my 3-node cluster:

[root@node1 sakila-db]# mysql < sakila-schema.sql
[root@node1 sakila-db]# mysql < sakila-data.sql

and it works seamlessly:

node1 mysql> select title, description, match(title, description) against ('action saga' in natural language mode) as score from sakila.film_text order by score desc limit 5;
+-----------------+-----------------------------------------------------------------------------------------------------------+--------------------+
| title           | description                                                                                               | score              |
+-----------------+-----------------------------------------------------------------------------------------------------------+--------------------+
| FACTORY DRAGON  | A Action-Packed Saga of a Teacher And a Frisbee who must Escape a Lumberjack in The Sahara Desert         | 3.0801234245300293 |
| HIGHBALL POTTER | A Action-Packed Saga of a Husband And a Dog who must Redeem a Database Administrator in The Sahara Desert | 3.0801234245300293 |
| MATRIX SNOWMAN  | A Action-Packed Saga of a Womanizer And a Woman who must Overcome a Student in California                 | 3.0801234245300293 |
| REEF SALUTE     | A Action-Packed Saga of a Teacher And a Lumberjack who must Battle a Dentist in A Baloon                  | 3.0801234245300293 |
| SHANE DARKNESS  | A Action-Packed Saga of a Moose And a Lumberjack who must Find a Woman in Berlin                          | 3.0801234245300293 |
+-----------------+-----------------------------------------------------------------------------------------------------------+--------------------+
5 rows in set (0.00 sec)

Sure enough, I can run this query on any node and it works fine:

node3 mysql> select title, description, match(title, description) against ('action saga' in natural language mode) as score from sakila.film_text order by score desc limit 5;
+-----------------+-----------------------------------------------------------------------------------------------------------+--------------------+
| title           | description                                                                                               | score              |
+-----------------+-----------------------------------------------------------------------------------------------------------+--------------------+
| FACTORY DRAGON  | A Action-Packed Saga of a Teacher And a Frisbee who must Escape a Lumberjack in The Sahara Desert         | 3.0801234245300293 |
| HIGHBALL POTTER | A Action-Packed Saga of a Husband And a Dog who must Redeem a Database Administrator in The Sahara Desert | 3.0801234245300293 |
| MATRIX SNOWMAN  | A Action-Packed Saga of a Womanizer And a Woman who must Overcome a Student in California                 | 3.0801234245300293 |
| REEF SALUTE     | A Action-Packed Saga of a Teacher And a Lumberjack who must Battle a Dentist in A Baloon                  | 3.0801234245300293 |
| SHANE DARKNESS  | A Action-Packed Saga of a Moose And a Lumberjack who must Find a Woman in Berlin                          | 3.0801234245300293 |
+-----------------+-----------------------------------------------------------------------------------------------------------+--------------------+
5 rows in set (0.05 sec)

node3 mysql> show create table sakila.film_text\G
*************************** 1. row ***************************
       Table: film_text
Create Table: CREATE TABLE `film_text` (
  `film_id` smallint(6) NOT NULL,
  `title` varchar(255) NOT NULL,
  `description` text,
  PRIMARY KEY (`film_id`),
  FULLTEXT KEY `idx_title_description` (`title`,`description`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

There might be a few caveats and differences between how FTS works in Innodb and MyISAM (for example, InnoDB FTS has its own innodb_ft_min_token_size variable, which defaults to 3, where MyISAM’s ft_min_word_len defaults to 4), but it is there.

Minimal replication images

Galera relies heavily on RBR events, but until 5.6 those were entire row copies, even if you only changed a single column in the table. In 5.6 you can change this to send only the updated data using the variable binlog_row_image=minimal.

Using a simple sysbench update test for 1 minute, I can determine the baseline size of the replicated data:

node3 mysql> show global status like 'wsrep_received%';
+----------------------+-----------+
| Variable_name        | Value     |
+----------------------+-----------+
| wsrep_received       | 703       |
| wsrep_received_bytes | 151875070 |
+----------------------+-----------+
2 rows in set (0.04 sec)

... test runs for 1 minute ...

node3 mysql> show global status like 'wsrep_received%';
+----------------------+-----------+
| Variable_name        | Value     |
+----------------------+-----------+
| wsrep_received       | 38909     |
| wsrep_received_bytes | 167749809 |
+----------------------+-----------+
2 rows in set (0.17 sec)

This results in 62.3 MB of data replicated in this test.

If I set binlog_row_image=minimal on all nodes and do a rolling restart, I can see how this changes:

node3 mysql> show global status like 'wsrep_received%';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| wsrep_received       | 3     |
| wsrep_received_bytes | 236   |
+----------------------+-------+
2 rows in set (0.07 sec)

... test runs for 1 minute ...

node3 mysql> show global status like 'wsrep_received%';
+----------------------+----------+
| Variable_name        | Value    |
+----------------------+----------+
| wsrep_received       | 34005    |
| wsrep_received_bytes | 14122952 |
+----------------------+----------+
2 rows in set (0.13 sec)

This yields a mere 13.4 MB, which is 80% smaller; quite a savings! This benefit, of course, depends entirely on your workload.
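For reference, the change itself is a one-line my.cnf setting on each node (a sketch; section placement depends on your existing config). In 5.6 it is also a dynamic variable, so you can try it with SET GLOBAL before committing it to the config:

```ini
# my.cnf on each PXC node (applied with a rolling restart, as described above)
[mysqld]
binlog_row_image = minimal
```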

Durable Memcache Cluster

It turns out this feature does not work properly with Galera; see below for an explanation:

5.6 introduces a Memcached interface for Innodb. This means any standard memcache client can talk to our PXC nodes using the memcache protocol, and the data is:

Replicated to all nodes

Durable across the cluster

Highly available

Easy to hash memcache clients across all servers for better cache coherency
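The memcached text protocol involved here is simple enough to script against directly. As an illustration (these helpers are hypothetical, not part of the plugin or any client library), this is how the set/get commands used later in this post are framed on the wire:

```python
def memcache_set(key, value, flags=0, exptime=0):
    """Frame a memcached text-protocol 'set' command.

    Wire format: set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
    """
    data = value.encode("utf-8")
    header = "set %s %d %d %d" % (key, flags, exptime, len(data))
    return header.encode("utf-8") + b"\r\n" + data + b"\r\n"


def memcache_get(key):
    """Frame a memcached text-protocol 'get' command."""
    return ("get %s" % key).encode("utf-8") + b"\r\n"
```

The `VALUE AA 8 12` line in the telnet sessions follows the matching response format: `VALUE <key> <flags> <bytes>`, where 12 is the byte length of "HELLO, HELLO".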

To set this up, we simply need to load the innodb_memcache schema from the provided example and restart the daemon to get a listening memcached port:

[root@node1 ~]# mysql < /usr/share/mysql/innodb_memcached_config.sql
[root@node1 ~]# service mysql restart
Shutting down MySQL (Percona XtraDB Cluster)...... SUCCESS!
Starting MySQL (Percona XtraDB Cluster)...... SUCCESS!
[root@node1 ~]# lsof +p`pidof mysqld` | grep LISTEN
mysqld  31961 mysql   11u  IPv4 140592  0t0  TCP *:tram (LISTEN)
mysqld  31961 mysql   55u  IPv4 140639  0t0  TCP *:memcache (LISTEN)
mysqld  31961 mysql   56u  IPv6 140640  0t0  TCP *:memcache (LISTEN)
mysqld  31961 mysql   59u  IPv6 140647  0t0  TCP *:mysql (LISTEN)

This all appears to work and I can fetch the sample AA row from all the nodes with the memcached interface:

node1 mysql> select * from demo_test;
+-----+--------------+------+------+------+
| c1  | c2           | c3   | c4   | c5   |
+-----+--------------+------+------+------+
| AA  | HELLO, HELLO |    8 |    0 |    0 |
+-----+--------------+------+------+------+

[root@node3 ~]# telnet 127.0.0.1 11211
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
get AA
VALUE AA 8 12
HELLO, HELLO
END

However, if I try to update a row, it does not seem to replicate (even if I set innodb_api_enable_binlog):

[root@node3 ~]# telnet 127.0.0.1 11211
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
set DD 0 0 0

STORED
^]
telnet> quit
Connection closed.

node3 mysql> select * from demo_test;
+----+--------------+------+------+------+
| c1 | c2           | c3   | c4   | c5   |
+----+--------------+------+------+------+
| AA | HELLO, HELLO |    8 |    0 |    0 |
| DD |              |    0 |    1 |    0 |
+----+--------------+------+------+------+
2 rows in set (0.00 sec)

node1 mysql> select * from demo_test;
+-----+--------------+------+------+------+
| c1  | c2           | c3   | c4   | c5   |
+-----+--------------+------+------+------+
| AA  | HELLO, HELLO |    8 |    0 |    0 |
+-----+--------------+------+------+------+

So unfortunately the memcached plugin must use some backdoor to Innodb that Galera is unaware of. I’ve filed a bug on the issue, but it’s not clear if there will be an easy solution or if a whole lot of code will be necessary to make this work properly.

In the short-term, however, you can at least read data from all nodes with the memcached plugin as long as data is only written using the standard SQL interface.

Async replication GTID Integration

Async GTIDs were introduced in 5.6 in order to make CHANGE MASTER easier. You have always been able to use async replication from any cluster node, but now with this new GTID support, it is much easier to failover to another node in the cluster as a new master.

We take one node out of our cluster to be a slave and enable GTID binary logging on the other two nodes by adding these settings:

server-id = ##
log-bin = cluster_log
log-slave-updates
gtid_mode = ON
enforce-gtid-consistency

If I generate some writes on the cluster, I can see GTIDs are working:

node1 mysql> show master status\G
*************************** 1. row ***************************
             File: cluster_log.000001
         Position: 573556
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set: e941e026-ac70-ee1c-6dc9-40f8d3b5db3f:1-1505
1 row in set (0.00 sec)

node2 mysql> show master status\G
*************************** 1. row ***************************
             File: cluster_log.000001
         Position: 560011
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set: e941e026-ac70-ee1c-6dc9-40f8d3b5db3f:1-1505
1 row in set (0.00 sec)

Notice that we’re at GTID 1505 on both nodes, even though the binary log position happens to be different.
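Since the executed GTID set, not the file/position pair, is now the source of truth, comparing nodes becomes a mechanical string comparison. As an illustration (a hypothetical helper, not a MySQL API), a simple set like the one above is just `source_uuid:first-last`:

```python
def parse_gtid_set(gtid_set):
    """Parse a simple 'uuid:first-last' GTID set into (uuid, first, last).

    Handles only a single contiguous interval, the common case for a
    healthy cluster where all writes share one source UUID.
    """
    uuid, _, interval = gtid_set.partition(":")
    first, _, last = interval.partition("-")
    last = last or first  # a single-transaction set has no '-'
    return uuid, int(first), int(last)


# Both nodes above report the same executed set, so they are equally
# up to date regardless of their differing binlog positions.
uuid, first, last = parse_gtid_set(
    "e941e026-ac70-ee1c-6dc9-40f8d3b5db3f:1-1505")
```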

I set up my slave to replicate from node1 (.70.2):

node3 mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.70.2
                  Master_User: slave
...
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
...
           Retrieved_Gtid_Set: e941e026-ac70-ee1c-6dc9-40f8d3b5db3f:1-1506
            Executed_Gtid_Set: e941e026-ac70-ee1c-6dc9-40f8d3b5db3f:1-1506
                Auto_Position: 1
1 row in set (0.00 sec)

And it’s all caught up. If I put some load on the cluster, I can easily switch to node2 as my master without needing to stop writes:

node3 mysql> stop slave;
Query OK, 0 rows affected (0.09 sec)

node3 mysql> change master to master_host='192.168.70.3', master_auto_position=1;
Query OK, 0 rows affected (0.02 sec)

node3 mysql> start slave;
Query OK, 0 rows affected (0.02 sec)

node3 mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.70.3
                  Master_User: slave
...
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
...
            Executed_Gtid_Set: e941e026-ac70-ee1c-6dc9-40f8d3b5db3f:1-3712
                Auto_Position: 1

So this seems to work pretty well. It turns out there is a bit of a bug, but it’s actually in Xtrabackup: currently the binary logs are not copied during Xtrabackup SST, and this can cause GTID inconsistencies between nodes in the cluster. I would expect this to get fixed relatively quickly.

Conclusion

MySQL 5.6 introduces a lot of interesting new features that are even more compelling in the PXC/Galera world. If you want to experiment for yourself, I pushed the Vagrant environment I used to Github at: https://github.com/jayjanssen/pxc_56_features