Openness and transparency are the defining features of Cindicator. To evaluate the full picture of the performance of our analytical products, it’s important to gather and analyse as much information as possible. That way we can review the progress from every point of view.

We’ve now gathered an adequate data set and can share with you our analysis of how and why the indicators worked in the second quarter of 2018. In this post we’re sharing our observations and insights as well as our future plans.

Indicators by the numbers

As usual, let’s start with some quantitative metrics for Q2 2018.

Here is what we have:

Our users received 889 indicators, which is significantly more than in the previous quarter and is more than double the number we sent in Q4 2017;

The number of indicators has increased proportionally for every tier;

The indicators cover an investment universe that includes over 100 different assets;

The accuracy of the crypto indicators stayed at the level of 61.67%, while for the fiat indicators it was a bit higher — 69.31% (mostly due to the smaller sample size);

The weighted average accuracy during the quarter was 62.54%.
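The weighted average above is simply each group’s accuracy weighted by how many indicators it contributed. A minimal sketch, assuming a hypothetical split of the 889 indicators between crypto and fiat (the actual per-group counts aren’t published in this post; the counts below are chosen only so the totals line up with the figures above):

```python
def weighted_accuracy(groups):
    """groups: list of (count, accuracy_percent) tuples."""
    total = sum(count for count, _ in groups)
    return sum(count * acc for count, acc in groups) / total

groups = [
    (788, 61.67),  # crypto indicators (hypothetical count)
    (101, 69.31),  # fiat indicators (hypothetical count)
]
print(round(weighted_accuracy(groups), 2))  # → 62.54
```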

We welcome any additional research into our indicators, so as usual we’re sharing the full list of all indicators that we’ve issued, including those for Q2 2018. The spreadsheet is available here.

Compared with the previous quarter, accuracy has improved significantly, primarily due to the implementation of the neural network at the end of the last quarter. The actual accuracy is comparable to the 66% accuracy demonstrated by earlier backtests. Often the results of backtests are drastically different from forward tests. That’s because every new stage in the market cycle is unlike the previous one, and the model needs to adapt to the new conditions.

Of course, the Hybrid Intelligence system still has great potential for growth and continuous improvements. The quality of our analytics is the key priority for Cindicator.

Today we want to share with you insights from Sasha Sasev, a senior member of the analytics team, and Alex Osipenko, ML team lead.

Market review

We can see that on average the weekly accuracy was above 60%. Despite fluctuations, it typically remained above 50%.

It’s clear that the Bitcoin price fluctuations have the greatest influence on the rest of the crypto market. This phenomenon is well captured by the indicators. During both bullish and bearish trends the accuracy of indicators was about 70% on average.

The periods of lower accuracy occurred during sudden trend reversals in the price of Bitcoin. The chart below summarises average weekly accuracy during different market conditions.

What happened during those weeks? There was some major news that could have affected the fundamental views of market participants.

Week 14 (2–8 April)

Major news: Soros & Rockefellers invest in crypto; Coinbase might get an SEC license; Japan plans to legalise ICOs;

Market impact: Bitcoin downward trend reversed;

Accuracy: 53%.

Weeks 15–17 (9–29 April)

Major news: $1.6 billion Chinese fund launches in support of blockchain startups; the UK, France, Germany, Norway, Spain and the Netherlands sign a declaration on creating a single digital market and partnership in the blockchain industry; Samsung plans to use blockchain technology for supply chain management; Goldman Sachs hires crypto trader Justin Schmidt to lead digital assets; NASDAQ CEO announces that the exchange “would consider becoming a crypto exchange”; Malta’s cabinet approves cryptocurrency bill; Cboe breaks record for Bitcoin futures volume; Korean Blockchain Association reveals self-regulatory rules for 14 member exchanges;

Market impact: there was plenty of positive news during these weeks. Bitcoin’s upward trend continued as it climbed from USD 6,600 to USD 10,000;

Accuracy: 62%, 71%, 60%.

Week 18 (30 April–6 May)

Major news: Ethereum is under regulatory scrutiny; 16,000 Bitcoins moved from Mt. Gox’s wallets; MyEtherWallet (MEW) is hit by a DNS hack; South Korean prosecutors raid UPbit, the country’s largest cryptocurrency exchange;

Market impact: Bitcoin upward trend reversed;

Accuracy: 51%.

Weeks 19–21 (7–27 May)

Major news: Japan’s FSA announces preparations of more regulatory restrictions aimed at preventing large-scale attacks on cryptocurrency exchanges; the U.S. Department of Justice opens a criminal investigation into possible crypto market manipulation; Verge hacked for the second time in two months; Microsoft’s Bing joins Facebook, Twitter in crypto ad ban;

Market impact: the Bitcoin downward trend continued and the price fell from USD 10,000 to USD 7,000;

Accuracy: 56%, 72.5%, 76%.

Week 22 (28 May–3 June)

Major news: National Assembly pushes the South Korean government to allow ICOs; Bloomberg and Galaxy Digital Capital Management launch the Cryptocurrency Benchmark Index; President of China Xi Jinping openly praises blockchain technology during a speech on 28 May; Binance plans to create a $1 billion cryptocurrency-based fund;

Market impact: BTC bounced from USD 7,000 to USD 7,800, total market capitalisation bounced from $300 billion to $350 billion;

Accuracy: 49%.

Weeks 23–24 (4–17 June)

Major news: Korean crypto exchange Coinrail loses over $40 million in tokens following a hack; Wells Fargo bans credit card cryptocurrency purchases; SEC official declares Ethereum is not a security token;

Market impact: due to the Coinrail hack, Bitcoin continued its downward trend;

Accuracy: 70.5%, 75%.

Week 25 (18–24 June)

Major news: Bithumb suffers a $31 million hack on 20 June;

Market impact: BTC fell from USD 6,800 to USD 5,900 over two days following the hack, and total market capitalisation fell from $290 billion to $240 billion;

Accuracy: 54%.

Week 26 (25 June–1 July)

Major news: Facebook updates its policy to allow cryptocurrencies to once again be advertised; SEC receives a Cboe Global Markets application for a Bitcoin ETF license;

Market impact: Bitcoin downward trend reversed;

Accuracy: 46%.

Machine learning perspective

For Cindicator’s ML team, Q2 2018 was a very fruitful period: we were working on expanding the neural network and experimenting with different architectures. In total, we tested over 30 new models.

How the ML pipeline works

One of the levels in our pipeline is a pool of several dozen models (and it continues to grow). For each model, the inputs are user forecasts for an event. The output is the final prediction for that event.

These predictions then go to the neural network, which acts as the final layer. The neural network’s indicators are sent to our token holders.
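The two-stage structure described above can be sketched as follows. This is purely illustrative: the actual Cindicator models and neural network are proprietary, so the pool here uses toy stand-ins (mean and median of user forecasts) and the final layer is a simple weighted blend rather than a trained network:

```python
from statistics import mean, median

def model_pool(user_forecasts):
    """Stage 1: each model in the pool maps user forecasts to one prediction."""
    return {
        "mean_model": mean(user_forecasts),
        "median_model": median(user_forecasts),
    }

def final_layer(model_predictions, weights):
    """Stage 2: combine pool outputs into the indicator sent to token holders.
    In production this role is played by a neural network."""
    return sum(weights[name] * pred for name, pred in model_predictions.items())

forecasts = [0.62, 0.55, 0.71, 0.58]  # user probability forecasts for one event
pool_out = model_pool(forecasts)
indicator = final_layer(pool_out, {"mean_model": 0.5, "median_model": 0.5})
```

The key design point is the separation of concerns: individual models can be added to or removed from the pool without retraining everything, and only the final layer needs to learn how much to trust each model.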

The goal of the ML team is to train the models to react to market reversals, adapting to them as fast as possible.

Sharp and unpredictable changes in the market lead to incorrect user forecasts and indicators. At this point, such changes still affect the accuracy of the neural network, but it receives new data every day and keeps learning.

The ML team is focused on optimising the models for these trend reversal periods. In addition to standard research methods, we’ve also used some new approaches. For example, we’ve hosted internal hackathons to find the best ensemble model that could beat the accuracy of the current neural network.

We planned and tested several models that showed superior accuracy during Q2 2018 backtests and became candidates to replace the neural network in the current architecture. These models were forward tested in parallel with the production neural network, but they have not yet shown a statistically significant increase in accuracy.
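One common way to check whether a challenger model’s accuracy gain is statistically significant is a two-proportion z-test on correct/incorrect counts from the same forward-test period. The post doesn’t specify which test was used, so this is a generic sketch with made-up numbers:

```python
from math import sqrt, erf

def two_proportion_z(correct_a, n_a, correct_b, n_b):
    """Two-sided z-test comparing two accuracy proportions."""
    p_a, p_b = correct_a / n_a, correct_b / n_b
    pooled = (correct_a + correct_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # normal CDF via the error function
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: production model 556/889 correct (~62.5%),
# challenger 578/889 correct (~65.0%)
z, p = two_proportion_z(556, 889, 578, 889)
# p > 0.05 here, so a ~2.5-point gain over one quarter is not yet significant
```

This illustrates why a challenger can look better in a forward test yet still not justify replacing the production model: at a few hundred indicators per quarter, small accuracy gains sit well inside the noise.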

Our directions for further development:

Testing models and services for making forecasts based exclusively on market data: high and low forecasts, volatility predictions, etc. These models would help us to correct the cognitive biases of analysts and capture market changes faster;

Developing our own natural language processing (NLP) platform using proprietary language models for analysing social media sentiment. This would help us to react much faster to important news that leads to sharp increases in volatility — we’ll tell you more about this soon;

Developing and testing different neural network architectures that could take different kinds of input: not only analyst forecasts but also real-time market data and social media sentiment. This would also help to monitor trend changes.

This is just a brief summary of what the ML team is working on — we’ll share more details in our next post.

Conclusions

The accuracy of indicators is not the only KPI, yet it is crucial because it’s directly linked to the value our token holders derive. That’s why everyone at Cindicator is working to increase this important metric. Our internal analysts and quant researchers are constantly analysing the market to identify additional data sources and create new hypotheses. The marketing team is working to grow and retain our decentralised analysts. The ML team is refining models and experimenting with new ones. The trading team, as well as our token holders, is learning to extract the maximum value from the unique data generated by Hybrid Intelligence.

Our team is about 65 people strong, yet we’re supported by over 113,000 analysts and 6,000 traders. You can join us too, by contributing in our Discord community, working together as part of the Cindicator Avantgarde, or participating in one of the many challenges that we host on Twitter and Facebook.