Recent
Paper Accepted at NeurIPS 2024: On Divergence Measures for Training GFlowNets
On Divergence Measures for Training GFlowNets
A novel approach to training Generative Flow Networks (GFlowNets) by minimizing divergence measures such as the Rényi-$\alpha$, Tsallis-$\alpha$, and Kullback-Leibler (KL) divergences. Stochastic gradient estimators built with variance reduction techniques lead to faster and more stable training.
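As a rough illustration of the divergence-minimization idea (not the paper's exact estimator), the sketch below builds a REINFORCE-style surrogate for a KL objective, using a batch-mean baseline as a control variate for variance reduction; the names `log_pf`, `log_pb`, `log_reward`, and `log_z` are placeholders for quantities a GFlowNet trainer would already track.

```python
import torch

def kl_surrogate_loss(log_pf, log_pb, log_reward, log_z):
    """Score-function surrogate for the gradient of D_KL(P_F || P*),
    with a batch-mean baseline as control variate.

    log_pf:     (B,) log-probability of each sampled trajectory under P_F
    log_pb:     (B,) log-probability of the backward policy P_B(tau | x)
    log_reward: (B,) log R(x) at the terminal states
    log_z:      scalar estimate of the log-partition function (placeholder)
    """
    # Log-ratio between the sampling distribution and the target:
    # log P_F(tau) - log( R(x) P_B(tau | x) / Z )
    score = log_pf + log_z - log_reward - log_pb
    # Subtracting the batch mean acts as a control variate: the gradient
    # estimator stays unbiased while its variance drops.
    advantage = (score - score.mean()).detach()
    # The gradient of this surrogate w.r.t. the policy parameters matches
    # the score-function (REINFORCE) estimator of the KL gradient.
    return (advantage * log_pf).mean()
```

Detaching the advantage keeps the surrogate's gradient equal to the score-function estimator, so the baseline reduces variance without introducing bias.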
Analyzing GFlowNets: Stability, Expressiveness, and Assessment
We show how balance violations impact the learned distribution, motivating a weighted balance loss that improves training. For graph distributions, there are scenarios where balance is unattainable, and richer embeddings of children's states are needed to enhance expressiveness. To measure distributional correctness in GFlowNets, we introduce a novel, provably correct assessment metric.
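A minimal sketch of what a weighted balance loss can look like, assuming the detailed-balance parameterization with state flows $F$ and forward/backward policies; the user-supplied per-transition weight shown here is an illustration, not necessarily the weighting proposed in the paper.

```python
import torch

def weighted_db_loss(log_f_s, log_pf, log_f_next, log_pb, weights):
    """Weighted detailed-balance loss over a batch of transitions s -> s'.

    log_f_s:    (B,) log state flow F(s)
    log_pf:     (B,) log P_F(s' | s)
    log_f_next: (B,) log state flow F(s')
    log_pb:     (B,) log P_B(s | s')
    weights:    (B,) non-negative per-transition weights
    """
    # Log-space violation of the balance condition F(s) P_F(s'|s) = F(s') P_B(s|s')
    violation = log_f_s + log_pf - log_f_next - log_pb
    # Weighting lets balance errors on some transitions count more than others.
    return (weights * violation.pow(2)).mean()
```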
Human-in-the-Loop Causal Discovery under Latent Confounding using Ancestral GFlowNets
We introduce a human-in-the-loop causal discovery method that estimates uncertainty and refines its results with expert feedback. Using generative flow networks, we sample belief-based ancestral graphs that capture latent confounding, and iteratively reduce uncertainty through human input.
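The interaction loop can be pictured roughly as follows: sample relation matrices from the trained GFlowNet, query the expert about the most uncertain entry, and update the belief accordingly. This is a simplified sketch with hypothetical function names; the paper's actual belief update may differ.

```python
import numpy as np

def most_uncertain_relation(samples):
    """samples: (N, d, d) binary matrices of ancestral relations drawn from
    the GFlowNet sampler. Returns the (i, j) entry with the highest marginal
    entropy, i.e. the relation the current belief is least sure about."""
    p = samples.mean(axis=0)
    entropy = -(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12))
    return np.unravel_index(np.argmax(entropy), entropy.shape)

def apply_expert_answer(samples, pair, answer):
    """Rejection-style update: keep only the samples that agree with the
    expert's answer for the queried relation."""
    i, j = pair
    return samples[samples[:, i, j] == int(answer)]
```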
Prior Specification for Bayesian Matrix Factorization via Prior Predictive Matching
A method for prior specification that optimizes hyperparameters via the prior predictive distribution, matching virtual statistics generated by the prior to target values. We apply it to Bayesian matrix factorization models, obtaining a closed-form expression for the rank of the latent variables and analytically determining the matching hyperparameters, and we extend the approach to general models through stochastic optimization.
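For the stochastic-optimization variant, a minimal sketch (with made-up dimensions, statistics, and target values) is to simulate the prior predictive of a Gaussian matrix factorization model by Monte Carlo and push its virtual statistics toward the targets by gradient descent on the hyperparameters:

```python
import torch

def prior_predictive_stats(log_su, log_sv, log_se, n=20, m=15, k=5, draws=128):
    """Monte Carlo estimate of prior predictive moments for Y = U V^T + E,
    with U_ik ~ N(0, su^2), V_jk ~ N(0, sv^2), E_ij ~ N(0, se^2).
    All names and targets are illustrative, not the paper's setup."""
    su, sv, se = log_su.exp(), log_sv.exp(), log_se.exp()
    U = su * torch.randn(draws, n, k)
    V = sv * torch.randn(draws, m, k)
    Y = U @ V.transpose(1, 2) + se * torch.randn(draws, n, m)
    # "Virtual statistics" of the prior predictive: element variance and 4th moment.
    return Y.var(), Y.pow(4).mean()

# Stochastic optimization: move the hyperparameters so the virtual statistics
# match target values elicited from the modeller (here: variance 4.0, 4th moment 60.0).
log_su = torch.zeros((), requires_grad=True)
log_sv = torch.zeros((), requires_grad=True)
log_se = torch.zeros((), requires_grad=True)
opt = torch.optim.Adam([log_su, log_sv, log_se], lr=0.05)
for _ in range(200):
    var, m4 = prior_predictive_stats(log_su, log_sv, log_se)
    loss = (var - 4.0).pow(2) + (m4 - 60.0).pow(2)
    opt.zero_grad()
    loss.backward()
    opt.step()
```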