Now that I have a Quantum True Random Number Generator, it is time to start applying quantum computing to machine learning: quantum machine learning. The easiest place to start, of course, is with random number generation.

My proof-of-concept borrows a very simple neural network from SoloLearn. This is actually the first neural network that I translated from Python to C. I did this to increase my understanding of both programming languages, as well as of neural networks.

In the code below, I removed the pseudo-random number generator. In the original code, there is an incredibly weak seed (the number one) and then two weights are assigned. Those two weights are now directly assigned by a 14-qubit quantum computer, the largest I currently have access to.

If you read through the code, all of the early code generates truly random numbers, and all of the later code is copied-and-pasted from SoloLearn. The only overlap is where the weights are assigned.

# This was the first neural network that I translated from Python to C to increase my understanding of both programming languages plus neural networks. The original code is from https://www.sololearn.com/learn/744/?ref=app. I have modified the code to seed the pseudo-random number generator from a quantum computer.

from qiskit import Aer, ClassicalRegister, execute, IBMQ, QuantumCircuit, QuantumRegister
from qiskit.tools.monitor import job_monitor

def quniform(min, max):
    range = max - min
    qaddend = range * qmeasure('sim')
    qsum = qaddend + min
    return qsum

Use this function to generate random numbers on a quantum simulator. It’s faster than waiting in a queue for real hardware.
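The scaling is the same trick a classical uniform generator uses: stretch a sample from [0, 1) across the requested range, then shift it. Here is a minimal sketch with a stubbed-out measurement (fake_measure is a stand-in for qmeasure, which needs a Qiskit backend):

```python
# Sketch of the quniform scaling with a stand-in for the quantum measurement.
def fake_measure():
    return 0.625  # a hypothetical value in [0, 1) from a quantum measurement

def quniform_sketch(min_val, max_val):
    # stretch the unit-interval sample across the range, then shift it
    return (max_val - min_val) * fake_measure() + min_val

print(quniform_sketch(-1, 1))  # 0.625 stretched and shifted into [-1, 1): 0.25
```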

def qquniform(min, max):
    range = max - min
    qaddend = range * qmeasure('real')
    qsum = qaddend + min
    return qsum

Use this function to generate truly random numbers on an actual quantum computer.

def qmeasure(hardware):
    if (hardware == 'real'):
        qubits = 14
        #from qiskit.providers.ibmq import least_busy
        #backend = least_busy(IBMQ.backends())
        IBMQ.load_account() # load saved IBM Quantum Experience credentials
        provider = IBMQ.get_provider(hub='ibm-q')
        backend = provider.get_backend('ibmq_16_melbourne')
    else:
        qubits = 32
        backend = Aer.get_backend('qasm_simulator')

If you want to use the least busy quantum computer (the commented-out code above), you'll have to set the qubits variable dynamically from the backend you get back. You'll most likely have only 5 qubits available, although sometimes you'll get more.

    q = QuantumRegister(qubits) # initialize all available quantum registers (qubits)
    c = ClassicalRegister(qubits) # initialize classical registers to measure the qubits
    qc = QuantumCircuit(q, c) # initialize the circuit

    i = 0
    while i < qubits:
        qc.h(q[i]) # put all qubits into superposition states so that each will measure as a 0 or 1 completely at random
        i = i + 1

    qc.measure(q, c) # collapse the superpositions and get random zeroes and ones
    job = execute(qc, backend=backend, shots=1)
    job_monitor(job)
    result = job.result()
    mraw = result.get_counts(qc)
    m = str(mraw)

    subtotal = 0
    for i in range(qubits):
        subtotal = subtotal + (int(m[i+2]) * 2**(i)) # convert each binary digit to its decimal value, but read left-to-right for simplicity
    multiplier = subtotal / (2**qubits) # convert the measurement to a value between 0 and 1
    return multiplier
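The conversion at the end can be tried on its own. A single-shot counts dictionary prints as {'bits': 1}, so the binary digits start at string index 2. Here is the same arithmetic with a hypothetical three-qubit result in place of a real backend:

```python
# Stand-alone version of qmeasure's bit-to-float conversion, using a
# hypothetical single-shot result instead of a real quantum backend.
mraw = {'101': 1}   # pretend get_counts() returned this for qubits = 3
qubits = 3
m = str(mraw)       # "{'101': 1}" -- the bits sit at indices 2 through 4

subtotal = 0
for i in range(qubits):
    subtotal = subtotal + (int(m[i + 2]) * 2**(i))  # left-to-right: 1*1 + 0*2 + 1*4

print(subtotal / (2**qubits))  # 5/8 = 0.625
```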

from numpy import exp, array, random, dot

class neural_network:
    def __init__(self):
        self.weights = []
        self.weights.append([qquniform(-1, 1)])
        self.weights.append([qquniform(-1, 1)])
        self.weights = array(self.weights) # convert to a NumPy array so that += in train() adds the adjustment instead of extending a Python list
        print("self.weights ", self.weights)

Here is where you can select quniform(min, max) to use a simulator or qquniform(min, max) to use real hardware. With only two weights, I didn't bother to use a loop to assign them. For larger neural networks, I obviously would.
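For a bigger layer, the same idea in a loop might look like this. Python's random.uniform stands in for qquniform so the sketch runs without a backend; swap the quantum version in as needed:

```python
from random import uniform  # stand-in for qquniform(-1, 1)

def init_weights(n_inputs):
    # build one column of weights per input, like the two-weight case above
    weights = []
    for _ in range(n_inputs):
        weights.append([uniform(-1, 1)])
    return weights

w = init_weights(8)  # eight weights, each in [-1, 1]
```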

    def train(self, inputs, outputs, num):
        for iteration in range(num):
            output = self.think(inputs)
            error = outputs - output
            adjustment = 0.01*dot(inputs.T, error)
            self.weights += adjustment

Trains the neural network on the training data.
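One pass of that loop is plain matrix arithmetic. Starting from zero weights on the training data below, the first adjustment works out by hand:

```python
import numpy as np

# One update from train(), assuming the weights start at zero.
inputs = np.array([[2, 3], [1, 1], [5, 2], [12, 3]])
outputs = np.array([[10, 4, 14, 30]]).T
weights = np.zeros((2, 1))

output = inputs.dot(weights)             # all zeros on the first pass
error = outputs - output                 # so the error is just the targets
adjustment = 0.01 * inputs.T.dot(error)  # [[4.54], [1.52]]
weights += adjustment
print(weights)
```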

    def think(self, inputs):
        return (dot(inputs, self.weights))

Multiplies the inputs by the weights. This function is actually one of the main reasons that I translated this code to C. If you are new to Python and to neural networks, what does the dot function really do? You can do a lot without knowing, but I personally need to know.
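If dot is a black box, writing it out by hand makes it concrete. For these shapes it is just multiply-and-sum: each input is multiplied by its weight and the products are added. The [[2], [2]] weight column here is the ideal answer for this training data, used only to illustrate:

```python
from numpy import array, dot

inputs = array([15, 2])
weights = array([[2], [2]])  # the ideal weights for this training data

# What dot computes here: multiply pairwise, then sum.
by_hand = inputs[0] * weights[0][0] + inputs[1] * weights[1][0]

print(by_hand, dot(inputs, weights))  # 34 and [34]
```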

neural_network = neural_network()

# training data
inputs = array([[2, 3], [1, 1], [5, 2], [12, 3]])
outputs = array([[10, 4, 14, 30]]).T

Double both numbers then add them together, or add both numbers and double their sum.
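That rule is easy to check against each row of the training data:

```python
# Check the stated rule -- double both numbers and add them (2a + 2b),
# equivalently add both numbers and double the sum -- against every pair.
inputs = [[2, 3], [1, 1], [5, 2], [12, 3]]
outputs = [10, 4, 14, 30]

for (a, b), y in zip(inputs, outputs):
    assert 2 * a + 2 * b == y    # same as 2 * (a + b)

print("rule holds for all four pairs")
```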

neural_network.train(inputs, outputs, 10000)

The number of iterations is overkill, but I left it as-is for this proof-of-concept experiment.

print(neural_network.think(array([15, 2])))

Test it! Based on the training data, the result should be 34.
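A fully classical version of the whole experiment confirms this. The fixed starting weights below are arbitrary stand-ins for the two qquniform(-1, 1) draws; because the targets are exactly 2a + 2b, 10,000 iterations drive the weights to [2, 2] and the prediction to 34:

```python
import numpy as np

# The same network and training loop, with fixed stand-in starting weights
# instead of quantum-measured ones.
inputs = np.array([[2, 3], [1, 1], [5, 2], [12, 3]])
outputs = np.array([[10, 4, 14, 30]]).T
weights = np.array([[0.5], [-0.25]])  # stand-ins for qquniform(-1, 1)

for _ in range(10000):
    error = outputs - inputs.dot(weights)
    weights += 0.01 * inputs.T.dot(error)

print(np.array([15, 2]).dot(weights))  # converges to ~[34.]
```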

The next project will be more challenging. Deep neural networks are generally created with Python libraries such as TensorFlow and Keras, but those libraries internalize the algorithms, making quantum integration challenging, if not impossible.

For the record, I am basing that prediction solely on one Keras-based project that I worked on. I created a deep neural network with only about a dozen lines of Python, so “Quantum Keras” is an unlikely title for any future blog article.

I am more likely to spend some time looking at what else may be quantum computed within a simple neural network. From there, I will start adding layers manually (without the aforementioned libraries) and look for other integration opportunities. Quantum computing is purported to be ideally suited for unsupervised machine learning, so I'm looking forward to baby-stepping my way along that path.