I was trying out sampling in the MMM music generation model today and ran into the problem described in an issue I filed. I have no experience writing C-style code in Python with ctypes, so I figured: why not ask the magic conch shell, ChatGPT?
I’ve asked it several “how to write …” questions since its release, but this was the first time I actually asked it to help me understand a snippet of code so I could proceed to debugging.
It did pretty well on my first question, as you’d expect from the state-of-the-art LLM.
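For context, ctypes is Python’s standard foreign-function interface for calling C functions in shared libraries. A minimal sketch of how it works (using the C math library as an example, unrelated to the actual MMM code):

```python
import ctypes
import ctypes.util

# Locate and load the C math library (platform-dependent filename).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature: double sqrt(double).
# Without this, ctypes assumes int arguments and return values.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # 3.0
```

The explicit `argtypes`/`restype` declarations are exactly the kind of detail that is easy to get wrong when you’ve never used ctypes before, which is what I was hoping ChatGPT could help me untangle.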


What struck me was the context awareness it showed in my second question. It’s well known from all the demos that it can do this, but seeing it work on a real example of your own is a different story. I asked an entirely arbitrary follow-up question, and it knew exactly what I was referring to.


It became mostly clear to me where the bug was, but just to make sure:




Finally, I went ahead and asked it how to fix the bug:

