TensorFlow DynamicEmbedding

DynamicEmbedding, an extension to Google's TensorFlow machine learning framework, lets AI models grow "incessantly" without being constantly retuned by engineers. Google reports that models built on it have delivered significant accuracy gains over the course of two years of continuous growth.

Currently, DynamicEmbedding models are suggesting keywords to advertisers in Google Smart Campaigns, annotating images informed by an "enormous" number of search queries (with Inception), and translating sentences into ad descriptions across languages (with Neural Machine Translation). Google says that many of its engineering teams have migrated their algorithms to DynamicEmbedding so that they can train and retrain them without much data preprocessing.

DynamicEmbedding could be useful in scenarios where focusing only on the most frequently occurring data would discard too much valuable information. Building DynamicEmbedding into TensorFlow required adding a new set of operations to the Python API that take symbolic strings as input and "intercept" upstream and downstream signals when running a model. These operations interface with a server called the DynamicEmbedding Service (DES) to process the content part of a model. The DES in turn talks to a DynamicEmbedding Master module, which divides the work and distributes it among agents called DynamicEmbedding Workers. Workers are principally responsible for allocating memory and computation and communicating with external cloud storage, as well as ensuring that all DynamicEmbedding models remain backward compatible.
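The article does not include code, but the core idea of the string-keyed operations can be illustrated with a minimal sketch: an embedding layer that accepts raw strings and allocates new rows on demand instead of relying on a fixed, pre-built vocabulary. The GrowingEmbedding class below is hypothetical and is not the actual DynamicEmbedding API; a real deployment would shard the table across DynamicEmbedding Workers and back it with external cloud storage.

```python
# A minimal sketch (not the actual DynamicEmbedding API) of an embedding layer
# keyed on raw strings that grows its table as unseen keys arrive.
import tensorflow as tf


class GrowingEmbedding(tf.Module):
    """Maps arbitrary string keys to trainable vectors, adding rows on demand."""

    def __init__(self, dim, initial_capacity=1024):
        super().__init__()
        self.dim = dim
        self.key_to_row = {}  # string key -> row index
        # Over-allocated embedding matrix; rows are handed out as keys appear.
        self.table = tf.Variable(
            tf.random.normal([initial_capacity, dim], stddev=0.05))

    def _row_for(self, key):
        # Allocate a new row the first time a key is seen, so the model can
        # "grow incessantly" without a re-indexing or retraining-from-scratch step.
        if key not in self.key_to_row:
            if len(self.key_to_row) >= int(self.table.shape[0]):
                # Double capacity by copying the old rows into a larger Variable.
                extra = tf.random.normal(
                    [int(self.table.shape[0]), self.dim], stddev=0.05)
                self.table = tf.Variable(tf.concat([self.table, extra], axis=0))
            self.key_to_row[key] = len(self.key_to_row)
        return self.key_to_row[key]

    def lookup(self, keys):
        # keys: a Python list of strings (the "symbolic string" inputs
        # described above).
        rows = [self._row_for(k) for k in keys]
        return tf.gather(self.table, rows)


# Usage: previously unseen query strings get embeddings immediately,
# with no offline vocabulary rebuild.
emb = GrowingEmbedding(dim=8)
vecs = emb.lookup(["coffee shop near me", "best hiking boots"])
print(vecs.shape)  # (2, 8)
```

In the system described by the article, the lookup and growth steps would not happen in-process as they do here; the intercepting operations would forward the string keys to the DES, which delegates storage and computation to the Workers.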
