In this talk, I will present my current research on using Graph Neural Networks (GNNs) to learn vector representations of SAT/SMT formulae. I will discuss the limitations of GNNs for this task and explore a possible connection between GNNs and approximation algorithms based on semidefinite programming (SDP). This connection offers a promising avenue for future research and suggests new ways to interpret what neural networks compute.
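To make the idea of "vector representations for SAT formulae" concrete, here is a minimal, illustrative sketch, not the speaker's actual model: it encodes a CNF formula as a literal-clause bipartite graph (in the spirit of NeuroSAT-style encodings) and runs a few rounds of untrained mean-aggregation message passing, pooling the result into a single embedding vector. The function name, dimensions, and random weights are all assumptions for illustration only.

```python
# Illustrative sketch only: a bipartite literal-clause graph with a few rounds of
# untrained message passing, pooled into one vector per formula.
import numpy as np

def embed_cnf(clauses, num_vars, dim=16, rounds=3, seed=0):
    """clauses: list of clauses, each a list of signed ints (DIMACS-style, e.g. [1, -2])."""
    rng = np.random.default_rng(seed)
    n_lit, n_cls = 2 * num_vars, len(clauses)

    # Adjacency between literals and clauses: A[l, c] = 1 if literal l occurs in clause c.
    A = np.zeros((n_lit, n_cls))
    for c, clause in enumerate(clauses):
        for lit in clause:
            idx = 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)
            A[idx, c] = 1.0

    # Initial node states and random (untrained) message/update weights.
    L = np.ones((n_lit, dim))
    C = np.ones((n_cls, dim))
    W_lc = rng.standard_normal((dim, dim)) / np.sqrt(dim)
    W_cl = rng.standard_normal((dim, dim)) / np.sqrt(dim)

    deg_c = np.maximum(A.sum(axis=0, keepdims=True).T, 1.0)  # clause degrees
    deg_l = np.maximum(A.sum(axis=1, keepdims=True), 1.0)    # literal degrees

    for _ in range(rounds):
        C = np.tanh((A.T @ L) / deg_c @ W_lc)  # clauses aggregate their literals
        L = np.tanh((A @ C) / deg_l @ W_cl)    # literals aggregate their clauses

    # Mean-pool literal states into one vector representing the whole formula.
    return L.mean(axis=0)

# Example: (x1 or not x2) and (x2 or x3)
print(embed_cnf([[1, -2], [2, 3]], num_vars=3))
```

In a trained model the weights would be learned end to end, for example against a satisfiability or embedding-similarity objective; the sketch only shows how the graph structure of a formula can be turned into a fixed-size vector.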