AI accelerator hardware such as Intel's Nervana Neural Network Processor and Google's Tensor Processing Units promises to speed up AI model training, but earlier stages of the training pipeline, such as data preprocessing, do not benefit, because of the way these chips are designed.

Google Brain, the artificial intelligence research wing of Google, has proposed a new way to speed up AI training. In a recently released paper, the researchers introduce a technique called data echoing. The technique reduces the computation used by earlier pipeline stages by reusing intermediate outputs from those stages.

According to the researchers, the best-performing data echoing algorithm can match the baseline's predictive performance while using less upstream processing, and in some cases can compensate for an input pipeline that is four times slower.

Beyond the operations that run on the accelerator, training a neural network involves many other steps, so the accelerator alone cannot be relied on for a speedup in all cases, the researchers observe. The training program must read and decode the training data, and it must also shuffle the data and group it into batches when required. These steps exercise multiple parts of the system, such as the CPU and the hard disk.
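The upstream steps described above can be sketched as a chain of generator stages. This is a minimal illustration, not the paper's implementation; the function names and the in-memory stand-ins for disk reads are assumptions for the example.

```python
import random

def read_records(paths):
    # Read raw records from storage (stubbed here with in-memory strings).
    for p in paths:
        yield f"raw:{p}"

def decode(records):
    # Decode raw records into training examples.
    for r in records:
        yield r.replace("raw:", "decoded:")

def shuffle(examples, seed=0):
    # Simplified shuffle: buffers everything at once instead of streaming.
    buf = list(examples)
    random.Random(seed).shuffle(buf)
    yield from buf

def batch(examples, batch_size):
    # Group consecutive examples into fixed-size batches.
    buf = []
    for ex in examples:
        buf.append(ex)
        if len(buf) == batch_size:
            yield buf
            buf = []
    if buf:
        yield buf

# Compose the pipeline: read -> decode -> shuffle -> batch.
stream = batch(shuffle(decode(read_records(["a", "b", "c", "d"]))), batch_size=2)
batches = list(stream)  # two batches of two decoded examples each
```

Every stage before the accelerator runs on the CPU and disk, which is why a fast accelerator can sit idle waiting for this chain to produce the next batch.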

Training a machine learning model is a multi-stage process, and those stages can become complicated. Data echoing adds another stage to the pipeline, one that repeats the output of the previous stage so that downstream stages can reuse it instead of waiting for fresh data.
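The echoing stage itself amounts to replaying each upstream item a fixed number of times. Here is a minimal sketch under that reading; the function name and the echo factor are illustrative, not taken from the paper's code.

```python
def data_echoing(upstream, echo_factor):
    """Repeat each item from the upstream pipeline stage `echo_factor`
    times before passing it downstream, so later stages can reuse the
    output of expensive upstream work (reading, decoding, etc.)."""
    for item in upstream:
        for _ in range(echo_factor):
            yield item

# Hypothetical upstream stage producing two preprocessed examples.
upstream = iter([{"x": 1}, {"x": 2}])
echoed = list(data_echoing(upstream, echo_factor=2))
# → [{'x': 1}, {'x': 1}, {'x': 2}, {'x': 2}]
```

With an echo factor of 2, downstream stages see twice as many items per unit of upstream work, which is how a slow input pipeline can be partially hidden.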

A number of experiments were carried out to test the data echoing technique, covering two language modeling tasks, two image tasks, and one object detection task. The training time for each run was recorded, and the researchers checked whether data echoing reduced the number of training steps required.

The outcome of the experiments showed that data echoing reduced the number of training steps and shortened training time, indicating that training requires fewer fresh examples with the new technique. Moreover, the results suggested that the earlier in the pipeline the echoed data is inserted, the less time training takes.

The researchers concluded that data echoing is an effective technique for reducing the time needed to train a model. The time saved by requiring fewer fresh examples helps speed up the entire process. It offers an alternative to optimizing the training pipeline, and to adding more machines for upstream data processing, which is often not possible or desirable.