Distributed TensorFlow: the difference between in-graph replication and between-graph replication
First, for some historical context: "in-graph replication" was the first approach tried in TensorFlow. In in-graph replication, a single client builds one graph containing a replica of the model on every device; in between-graph replication, each worker task runs its own client process that builds a similar graph, with the shared variables placed on parameter-server tasks. In-graph replication did not achieve the performance that many users required, so the more complicated "between-graph" approach became the recommended way to perform distributed training. Higher-level libraries such as tf.learn use the "between-graph" approach for distributed training.
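The structural difference between the two approaches can be sketched without running TensorFlow at all, just by looking at which process pins ops to which devices. The following is a minimal illustration only; the cluster layout, hostnames, and the `device_for` helper are hypothetical, not part of any TensorFlow API:

```python
# Hypothetical cluster description, shared by every process in the job.
# In between-graph replication, each worker runs the same script with
# this spec plus its own (job_name, task_index) and builds its OWN
# client graph; "ps" (parameter-server) tasks hold the shared variables.
CLUSTER = {
    "ps":     ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222",
               "worker1.example.com:2222"],
}

def device_for(job_name: str, task_index: int) -> str:
    """Build a TensorFlow-style device string for one task (illustrative helper)."""
    return "/job:%s/task:%d" % (job_name, task_index)

# In-graph replication: ONE client enumerates every worker device and
# places a replica of the compute subgraph on each of them.
in_graph_devices = [device_for("worker", i) for i in range(len(CLUSTER["worker"]))]

# Between-graph replication: EACH worker process only pins ops to its
# own device; here is worker 0's view of the job.
local_device = device_for("worker", 0)

print(in_graph_devices)  # ['/job:worker/task:0', '/job:worker/task:1']
print(local_device)      # /job:worker/task:0
```

The practical consequence is that in-graph replication concentrates graph construction (and its scaling bottlenecks) in a single client, while between-graph replication distributes that work across the worker processes themselves.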