# Tensor Network Training of Quantum Circuits

Here we’ll run through constructing a tensor network of an ansatz quantum circuit, then training certain ‘parametrizable’ tensors representing quantum gates in that tensor network to replicate the behaviour of a target unitary.

[1]:

```python
import quimb as qu
import quimb.tensor as qtn
```

## The Ansatz Circuit

First we set up the ansatz circuit and extract the tensor network. Key here is that when we supply `parametrize=True` to the `'U3'` gate call, it injects a `PTensor` into the network, which lazily represents its data array with a function and a set of parameters. Later, when the optimizer sees this it knows to optimize the parameters rather than the array itself.

[2]:

```python
def single_qubit_layer(circ, gate_round=None):
    """Apply a parametrizable layer of single qubit ``U3`` gates."""
    for i in range(circ.N):
        # initialize with random parameters
        params = qu.randn(3, dist='uniform')
        circ.apply_gate(
            'U3', *params, i,
            gate_round=gate_round, parametrize=True)


def two_qubit_layer(circ, gate2='CZ', reverse=False, gate_round=None):
    """Apply a layer of constant entangling gates."""
    regs = range(0, circ.N - 1)
    if reverse:
        regs = reversed(regs)
    for i in regs:
        circ.apply_gate(gate2, i, i + 1, gate_round=gate_round)


def ansatz_circuit(n, depth, gate2='CZ', **kwargs):
    """Construct a circuit of single qubit and entangling layers."""
    circ = qtn.Circuit(n, **kwargs)
    for r in range(depth):
        # single qubit gate layer
        single_qubit_layer(circ, gate_round=r)
        # alternate between forward and backward entangling layers
        two_qubit_layer(
            circ, gate2=gate2, gate_round=r, reverse=r % 2 == 0)
    # add a final single qubit layer
    single_qubit_layer(circ, gate_round=r + 1)
    return circ
```

The form of the `'U3'` gate (which generalizes all possible single qubit gates) can be seen here - `U_gate()`. Now we are ready to instantiate a circuit:

[3]:

```python
n = 6
depth = 9
gate2 = 'CZ'
circ = ansatz_circuit(n, depth, gate2=gate2)
circ
```

[3]:

    <Circuit(n=6, n_gates=105, gate_opts={'contract': 'auto-split-gate', 'propagate_tags': 'register'})>

We can extract just the unitary part of the circuit as a tensor network like so:

[4]:

```python
V = circ.uni
```

You can see it already has various tags identifying its structure (indeed enough to uniquely identify each gate):

[5]:

```python
V.graph(color=['U3', gate2], show_inds=True)
```

[6]:

```python
V.graph(color=[f'ROUND_{i}' for i in range(depth)], show_inds=True)
```

[7]:

```python
V.graph(color=[f'I{i}' for i in range(n)], show_inds=True)
```
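As a quick sanity check of the structure above (a sketch, not part of the original notebook), we can verify the gate count by hand: the ansatz has `depth + 1` single qubit layers of `n` gates each, plus `depth` entangling layers of `n - 1` gates each. We can also write out the `U3` matrix explicitly with numpy, assuming the standard OpenQASM convention, and confirm it is unitary for any parameters:

```python
import numpy as np


def u3_matrix(theta, phi, lam):
    """U3(theta, phi, lam) single qubit gate, assuming the
    standard OpenQASM parametrization."""
    return np.array([
        [np.cos(theta / 2),
         -np.exp(1j * lam) * np.sin(theta / 2)],
        [np.exp(1j * phi) * np.sin(theta / 2),
         np.exp(1j * (phi + lam)) * np.cos(theta / 2)],
    ])


# any choice of the three parameters yields a unitary matrix
U3 = u3_matrix(0.3, 1.1, -0.7)
assert np.allclose(U3 @ U3.conj().T, np.eye(2))

# gate count: (depth + 1) single qubit layers of n gates each,
# plus depth entangling layers of (n - 1) gates each
n, depth = 6, 9
assert n * (depth + 1) + depth * (n - 1) == 105  # matches n_gates above
```

Since every single qubit unitary can be written in this three parameter form (up to a global phase), optimizing the `U3` parameters in each layer explores the full space of single qubit rotations.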

## The Target Unitary

Next we need a target unitary to try and digitally replicate. Here we'll take an Ising Hamiltonian and a short time evolution. Once we have the dense (matrix) form of the target unitary $U$ we need to convert it to a tensor which we can put in a tensor network:

[8]:

```python
# the hamiltonian
H = qu.ham_ising(n, jz=1.0, bx=0.7, cyclic=False)

# the propagator for the hamiltonian
t = 2
U_dense = qu.expm(-1j * t * H)

# 'tensorized' version of the unitary propagator
U = qtn.Tensor(
    data=U_dense.reshape([2] * (2 * n)),
    inds=[f'k{i}' for i in range(n)] + [f'b{i}' for i in range(n)],
    tags={'U_TARGET'},
)

U.graph(color=['U3', gate2, 'U_TARGET'])
```

The core object describing how similar two unitaries are is the overlap \(\mathrm{Tr}(V^{\dagger}U)\), which we can naturally visualize as a tensor network:

[9]:

```python
(V.H & U).graph(color=['U3', gate2, 'U_TARGET'])
```

For our loss function we'll normalize this overlap and subtract it from one (since the optimizer minimizes):

[10]:

```python
def loss(V, U):
    return 1 - abs((V.H & U).contract(all, optimize='auto-hq')) / 2**n

# check our current unitary 'infidelity':
loss(V, U)
```

[10]:

    0.9916803129508406

So, as expected, the two unitaries are currently not similar at all.
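To build intuition for this loss, here is an illustrative dense-matrix sketch (plain numpy, independent of the tensor network machinery above): the normalized overlap \(|\mathrm{Tr}(V^{\dagger}U)| / 2^n\) equals 1 exactly when $V$ matches $U$ up to a global phase, giving zero loss, while for unrelated unitaries it is close to zero and the loss is close to one:

```python
import numpy as np


def dense_loss(V, U):
    """Infidelity 1 - |Tr(V^dag U)| / d for d x d unitaries."""
    d = U.shape[0]
    return 1 - abs(np.trace(V.conj().T @ U)) / d


# build a random unitary target via QR decomposition of a complex
# gaussian matrix (an illustrative stand-in for the propagator)
rng = np.random.default_rng(42)
d = 2 ** 6
Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

# a perfect match, even with a global phase, gives zero loss
assert np.isclose(dense_loss(Q * np.exp(0.4j), Q), 0.0)

# an unrelated unitary (here the identity) gives loss close to one
assert dense_loss(np.eye(d), Q) > 0.9
```

The tensor network version computes the same quantity, but by contracting $V^{\dagger}$ and $U$ index-by-index rather than ever forming the dense $2^n \times 2^n$ matrices, which is what keeps the gradient computation cheap during training.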