Symbol in computer programming

If you are familiar with other programming languages, you may know that many of them have symbols too. In fact, even when the datatype shares the same name, there are significant differences between the implementations.

Now let’s talk about symbols in programming in general. Wikipedia defines a symbol as follows:

A symbol in computer programming is a primitive data type whose instances have a unique human-readable form.

In JavaScript, symbol is a primitive datatype, and although the language does not force instances to be human-readable, you can give a symbol a description property for debugging.
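A short sketch of this (the description text here is our own illustrative choice):

```javascript
// A symbol is created with the Symbol() factory; the optional
// argument is a description used only for debugging.
const sym = Symbol("user id");

console.log(typeof sym);       // "symbol"
console.log(sym.description);  // "user id"

// Every call produces a brand-new, unique symbol,
// even with an identical description.
console.log(Symbol("user id") === Symbol("user id")); // false
```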

Given that, we should expect some differences between JS symbols and symbols in other languages. Let’s take a look at Ruby symbols. In Ruby, Symbol objects are typically used to represent names and strings. They are generated using the colon literal syntax, or by type conversion using the to_sym method.
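A minimal sketch of both creation styles (the symbol :created is our own illustrative name):

```ruby
# A symbol literal uses the colon syntax; note that we never
# assign it to a variable in order to "create" it.
p :created           # => :created

# Type conversion from a string with to_sym yields a symbol too.
p "created".to_sym   # => :created

# Both expressions refer to the very same object.
p :created.equal?("created".to_sym)  # => true
```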

As you may have noticed, we never assign the symbol we create to a variable. Whenever a symbol is used (generated) in a Ruby program, it refers to the same object for the entire execution of the program, regardless of the context in which it was created.

In JavaScript, we can replicate this behavior by creating a symbol in the global symbol registry.
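This uses Symbol.for, which consults the registry instead of always creating a fresh symbol (the "created" key below is our own example):

```javascript
// Symbol.for looks up (or creates) a symbol in the global
// symbol registry under the given string key.
const first = Symbol.for("created");
const second = Symbol.for("created");

// The registry returns the same symbol for the same key,
// mirroring Ruby's behavior.
console.log(first === second); // true

// Symbol.keyFor returns the registry key of a registered symbol.
console.log(Symbol.keyFor(first)); // "created"
```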

A major difference between symbols in the two languages is that in Ruby, symbols can be used in place of strings, and in many cases they are converted to strings automatically. Many methods available on string objects are also available on symbols, and, as we saw, a string can be converted to a symbol using the to_sym method.
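A few examples of this string-like behavior (again using our illustrative :created symbol):

```ruby
# Symbols respond to many of the usual string methods.
p :created.length   # => 7
p :created.upcase   # => :CREATED

# In string interpolation, a symbol is converted automatically.
puts "status: #{:created}"   # prints "status: created"
```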

We have already seen the motivation for adding symbols to JavaScript; now let’s see what their purpose is in Ruby. In Ruby, we can think of symbols as immutable strings, and that alone gives them many advantages. They can be used as object property identifiers, and usually are.
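The most common case is hash keys, where symbols are the idiomatic choice (the user hash below is our own example):

```ruby
# Symbols are the idiomatic choice for hash keys in Ruby.
user = { name: "Ada", role: :admin }   # shorthand for { :name => "Ada", :role => :admin }

p user[:name]   # => "Ada"
p user[:role]   # => :admin
```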

Symbols also have a performance advantage over strings: every time a string literal is evaluated, a new object is created in memory, while a given symbol is always the same single object.

Now imagine we use a string as a property identifier and create 100 instances of that object: Ruby will also have to create 100 distinct string objects. Using symbols avoids that.
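We can observe this difference directly by comparing object identities (the :status name is illustrative):

```ruby
# Two identical string literals are two distinct objects in memory...
p "status".object_id == "status".object_id   # false (unless string literals are frozen)

# ...while the symbol :status is always one and the same object.
p :status.object_id == :status.object_id     # true
```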

Another use case for symbols is signaling status. For example, it is a common convention for methods to return a symbol indicating the outcome, such as :ok or :error, alongside the result.
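A sketch of that convention (safe_divide is our own example method, not a library API):

```ruby
# Return a status symbol together with the result,
# so callers can pattern on the outcome.
def safe_divide(a, b)
  return [:error, "division by zero"] if b.zero?
  [:ok, a / b]
end

status, result = safe_divide(10, 2)
p status   # => :ok
p result   # => 5
```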

In Rails (a popular Ruby web framework), almost all HTTP status codes can be referred to by symbol. You can send status: :ok, :internal_server_error, or :not_found, and the framework will translate them into the correct status code and message.
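A toy sketch of the idea (this lookup table is hypothetical and covers only a tiny subset; it is not Rails’ actual implementation):

```ruby
# Hypothetical subset of the symbol-to-code mapping a framework
# like Rails maintains internally; not the real Rails source.
STATUS_CODES = {
  ok: 200,
  not_found: 404,
  internal_server_error: 500
}.freeze

def status_code(status)
  STATUS_CODES.fetch(status)
end

p status_code(:not_found)   # => 404
```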

To conclude, symbols are not the same and do not share the same purpose across programming languages. As someone who was already familiar with Ruby symbols, I initially found JavaScript symbols and the motivation behind them a bit confusing.

Note: In some programming languages (Erlang, Elixir), a symbol is called an atom.