
Paper Accepted at NeurIPS 2024: On Divergence Measures for Training GFlowNets


I’m excited to announce that our paper, “On Divergence Measures for Training GFlowNets,” authored by Tiago da Silva, Eliezer de Souza da Silva, and Diego Mesquita, has been accepted at NeurIPS 2024! 🎉

This paper presents an in-depth study of divergence-based learning objectives for Generative Flow Networks (GFlowNets), which are widely used for amortized inference over compositional objects in fields such as causal discovery, NLP, and drug discovery. We explored alternatives to the standard flow-matching training objectives, proposing the use of forward and reverse Kullback-Leibler (KL), Rényi-\( \alpha \), and Tsallis-\( \alpha \) divergences.
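For reference, with \( p \) the target distribution and \( q \) the distribution induced by the GFlowNet sampler, these divergences can be written as follows (standard definitions; the paper's parameterization may differ slightly, and the reverse versions swap the roles of \( p \) and \( q \)):

$$
D_{\mathrm{KL}}(p \,\|\, q) = \mathbb{E}_{x \sim p}\!\left[\log \frac{p(x)}{q(x)}\right], \qquad
D_{\alpha}^{\mathrm{R}}(p \,\|\, q) = \frac{1}{\alpha - 1} \log \sum_{x} p(x)^{\alpha} q(x)^{1-\alpha},
$$

$$
D_{\alpha}^{\mathrm{T}}(p \,\|\, q) = \frac{1}{\alpha - 1} \left( \sum_{x} p(x)^{\alpha} q(x)^{1-\alpha} - 1 \right).
$$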

We demonstrated that, by designing statistically efficient estimators and applying variance reduction techniques such as control variates, divergence-based training often converges faster and yields better approximations than the standard objectives.
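As a rough illustration of the control-variate idea (a generic sketch, not the paper's estimator), the snippet below combines noisy per-sample estimates \( g \) with a correlated quantity \( h \) whose expectation is known to be zero; subtracting an estimated optimal multiple of \( h \) leaves the mean unchanged while shrinking its variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def control_variate_estimate(g, h):
    """Variance-reduced mean of g using a control variate h with E[h] = 0.

    The variance-minimizing coefficient is c* = Cov(g, h) / Var(h); subtracting
    c* * h keeps the expectation of the estimator but reduces its variance.
    """
    c = np.cov(g, h)[0, 1] / np.var(h, ddof=1)
    return np.mean(g - c * h)

# Toy example (hypothetical, for illustration only): estimate E[x**2 + x] for
# x ~ N(0, 1), using h(x) = x (known zero mean) as the control variate.
x = rng.normal(size=1000)
g = x**2 + x          # naive per-sample estimates, noisy because of the x term
h = x                 # correlated with g, with known expectation 0

print(np.mean(g), control_variate_estimate(g, h))
# Both approximate the true value 1; the second estimate has lower variance.
```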

The key contributions of the paper include:

  1. A comprehensive empirical evaluation of various divergence measures applied to GFlowNet training.
  2. Development of control variates for reducing the variance of stochastic gradient estimators.
  3. A theoretical bridge between GFlowNets and generalized variational inference in topological spaces.

Stay tuned for more updates on this exciting research! 🚀