How To Write Smart Contracts for Blockchain Using Python — Part One

A step-by-step guide to getting started

Photo by Hitesh Choudhary on Unsplash

In 2019, smart contracts are the new paradigm shift in computer programming. This guide is intended as an introductory path to creating computer programs that are deployed and run on a decentralized blockchain.

A bit of history… Back in the early days of computing, if you wanted to write a piece of code to perform, let’s say, a simple sum operation (taking the MOS Technology 6502 8-bit CPU, released in 1975, as an example), you would end up with something like this:

18 A9 01 69 02 8D F6 31

The hexadecimal numbers above are machine language: the raw bytes the CPU understands and executes to perform an action.

The CPU had an “instruction set”, meaning that each number was a command that resulted in an operation by the processor: addition, subtraction, division, multiplication, load, store, jump, etc.

A programmer needed to know the operation codes by heart and memorize which number was equivalent to which command. Not very productive.

Soon, it was clear that a more human approach was required. It was the beginning of a movement towards the creation of higher-level languages, which looked more like spoken language.

So, at first came what became known as mnemonics:

CLC

LDA #$01

ADC #$02

STA $31F6

For each computer operation code, there was now an associated word or symbol that facilitated understanding. So, CLC (clear the carry flag) was equivalent to 18. LDA (load the accumulator with an immediate value) was A9. ADC (add with carry) was 69. And STA (store the accumulator at an absolute address) was 8D, followed by the target address with its low byte first: F6 31 for $31F6.

This approach to programming was known as assembly language, and it was a first step towards making programming easier, relieving programmers of tedious tasks such as remembering numeric codes.

The program above clears the carry flag, loads the value 01 into the accumulator, adds 02 to it, and then stores the result at memory address $31F6. Far easier for humans to understand.
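For contrast, here is the same computation sketched in Python, the language this series is about. The `memory` dictionary is only an illustration standing in for the CPU’s address space, not real hardware:

```python
# The same program in a high-level language: no opcodes, no registers.
# 'memory' is a plain dictionary used here purely to illustrate
# storing a value at an address.
memory = {}

accumulator = 0x01             # LDA #$01
accumulator += 0x02            # ADC #$02 (carry conceptually cleared)
memory[0x31F6] = accumulator   # STA $31F6

print(hex(memory[0x31F6]))     # → 0x3
```

Three lines of intent instead of eight bytes of opcodes: that gap is exactly what high-level languages were invented to close.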

As the years passed, new tools were created to make programming more productive, and development environments evolved considerably. The term high-level language appeared.

This means that the higher the level of a programming language, the more it resembles spoken, human language. Conversely, low-level languages are the ones closest to the computer’s instruction set itself.

In parallel with this evolution of computer languages, there were some paradigm shifts along the way.

The very first computer programs were loaded directly into memory at a given address, and the computer then had to be told where the program would start its execution. This was raw machine language, like the code shown at the beginning of this article.

With the advent of mnemonics came the assembler: a piece of software responsible for decoding the human-readable mnemonics, converting them into machine language, placing the result at the correct memory address, and telling the CPU where to start execution. Way better!
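To make the idea concrete, here is a toy assembler sketch in Python that handles only the four instructions from the example above. The opcode table is a hand-picked subset of the 6502 instruction set; note that on a real 6502 the absolute store uses opcode 8D, with the address emitted low byte first:

```python
# Minimal, illustrative assembler for a four-instruction 6502 subset.
# It only understands the exact addressing modes used in the example.
OPCODES = {
    "CLC": 0x18,    # clear carry flag, no operand
    "LDA #": 0xA9,  # load accumulator, immediate (1 operand byte)
    "ADC #": 0x69,  # add with carry, immediate (1 operand byte)
    "STA": 0x8D,    # store accumulator, absolute (2 operand bytes)
}

def assemble(lines):
    """Translate mnemonic lines into a list of machine-code bytes."""
    code = []
    for line in lines:
        mnemonic, _, operand = line.partition(" ")
        operand = operand.strip()
        if operand.startswith("#$"):        # immediate: LDA #$01
            code += [OPCODES[mnemonic + " #"], int(operand[2:], 16)]
        elif operand.startswith("$"):       # absolute: STA $31F6
            addr = int(operand[1:], 16)
            # The 6502 stores addresses low byte first (little-endian)
            code += [OPCODES[mnemonic], addr & 0xFF, addr >> 8]
        else:                               # implied: CLC
            code.append(OPCODES[mnemonic])
    return code

program = ["CLC", "LDA #$01", "ADC #$02", "STA $31F6"]
print([f"{b:02X}" for b in assemble(program)])
# → ['18', 'A9', '01', '69', '02', '8D', 'F6', '31']
```

A real assembler also resolves labels, supports every addressing mode, and reports errors, but the core job is exactly this: a table lookup from words humans can read to bytes the CPU can run.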

Although that helped a lot with writing and debugging software, it was still laborious. We needed an easier way to program.