A technique for bringing structure and meaning to huge volumes of varied data is being used to improve the training of neural networks.
The technique, dubbed Neural Structured Learning (NSL), attempts to leverage what developers call “structured signals.” In model training, those signals represent the connections or similarities among labeled and unlabeled data samples. Capturing those signals during neural network training is said to boost model accuracy, especially when labeled data is scarce.
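The core idea can be sketched in a few lines: the training loss combines an ordinary supervised term on the labeled samples with a neighbor term that pulls the representations of connected samples toward each other. The sketch below is illustrative only, assuming NumPy and a toy graph; the function names (`graph_regularized_loss`, `alpha`, etc.) are hypothetical and do not reflect NSL's actual API.

```python
import numpy as np

def supervised_loss(predictions, labels):
    """Ordinary supervised term: mean squared error on labeled samples."""
    return float(np.mean((predictions - labels) ** 2))

def neighbor_loss(embeddings, edges):
    """Structured-signal term: mean squared distance between the
    embeddings of samples connected in the graph."""
    dists = [np.sum((embeddings[i] - embeddings[j]) ** 2) for i, j in edges]
    return float(np.mean(dists))

def graph_regularized_loss(predictions, labels, embeddings, edges, alpha=0.1):
    """Total loss: supervised term plus alpha-weighted neighbor term."""
    return supervised_loss(predictions, labels) + alpha * neighbor_loss(embeddings, edges)

# Toy example: three samples; samples 0 and 1 are neighbors in the graph,
# and only the first two samples carry labels.
embeddings = np.array([[0.0, 1.0], [0.2, 0.8], [5.0, 5.0]])
predictions = np.array([0.9, 0.1])
labels = np.array([1.0, 0.0])
edges = [(0, 1)]

loss = graph_regularized_loss(predictions, labels, embeddings, edges, alpha=0.5)
print(round(loss, 4))  # → 0.05
```

Because the neighbor term is computed over all graph edges, unlabeled samples still influence training whenever they are connected to labeled ones, which is why the approach helps most when labels are scarce.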
NSL developers at Google (NASDAQ: GOOGL) reported this week their framework can be used to build more accurate models for machine vision, language translation and predictive analytics.