
Generative Flow Networks

2024


On Divergence Measures for Training GFlowNets

A novel approach to training Generative Flow Networks (GFlowNets) by minimizing divergence measures such as the Rényi-$\alpha$, Tsallis-$\alpha$, and Kullback-Leibler (KL) divergences. Stochastic gradient estimators equipped with variance reduction techniques lead to faster and more stable training.
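For reference, a minimal sketch of this divergence family in notation assumed here (not necessarily the paper's): writing $p_\top$ for the terminal distribution induced by the forward policy and $\pi(x) \propto R(x)$ for the reward-induced target, the Rényi-$\alpha$ divergence is

$$
D_\alpha(\pi \,\|\, p_\top) = \frac{1}{\alpha - 1} \log \sum_x \pi(x)^{\alpha} \, p_\top(x)^{1-\alpha},
$$

which recovers the KL divergence $D_{\mathrm{KL}}(\pi \,\|\, p_\top) = \sum_x \pi(x) \log \frac{\pi(x)}{p_\top(x)}$ in the limit $\alpha \to 1$.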

Analyzing GFlowNets: Stability, Expressiveness, and Assessment

We show how balance violations impact the learned distribution, motivating a weighted balance loss to improve training. For graph distributions, there are scenarios where balance is unattainable, and richer embeddings of children states are needed to enhance expressiveness. To measure distributional correctness in GFlowNets, we introduce a novel, provably correct assessment metric.
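As background for the balance conditions referenced above (standard GFlowNet notation, not taken from the paper): with state flow $F$, forward policy $P_F$, and backward policy $P_B$, detailed balance requires

$$
F(s)\, P_F(s' \mid s) = F(s')\, P_B(s \mid s')
$$

for every edge $s \to s'$ of the generative DAG; balance violations are deviations from this equality.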

2023


Human-in-the-Loop Causal Discovery under Latent Confounding using Ancestral GFlowNets

We introduce a causal discovery method that estimates uncertainty and refines its results with expert feedback. Using generative flow networks, we sample ancestral graphs, which capture latent confounding, in proportion to our beliefs, and iteratively reduce uncertainty through expert input in a human-in-the-loop fashion.