Thank you for your effort in constructing this dataset.
I'm not quite sure whether this would be valuable, but this project seems very close to an automatic math curriculum.
I'm thinking some sort of ordered, pairwise, semantic-retrieval-based meta-prompting strategy might work. I haven't thought too deeply about it, but maybe you could even use the values you've already calculated to order the retrievals.
I'm not sure which end of the curriculum to start from, though. You could take the highest-ranking original label and try to find previously valuable related information, which might work if you have a smart enough embedding model.
Or instead, set some minimum threshold of Qwen value and start from there: for each datapoint of increasing Qwen value, find the k most similar datapoints, possibly factoring the Qwen rankings into the embeddings or using them as a weighting.
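A minimal sketch of that retrieval step, assuming you have an `embeddings` matrix and a `qwen_values` array per datapoint (both names are hypothetical, as is the simple linear blend used as the weighting):

```python
import numpy as np

def rank_neighbors(query_idx, embeddings, qwen_values, alpha=0.5):
    """Rank all other datapoints for retrieval: cosine similarity to the
    query, blended with each datapoint's normalized Qwen value.
    `alpha` (hypothetical knob) controls how much the Qwen values weigh in."""
    q = embeddings[query_idx]
    sims = embeddings @ q / (
        np.linalg.norm(embeddings, axis=1) * np.linalg.norm(q) + 1e-12
    )
    # Normalize Qwen values to [0, 1] so they are comparable to cosine sims.
    vals = (qwen_values - qwen_values.min()) / (np.ptp(qwen_values) + 1e-12)
    score = (1 - alpha) * sims + alpha * vals
    score[query_idx] = -np.inf  # never retrieve the query itself
    return np.argsort(score)[::-1]  # indices, best candidate first
```

With `alpha=0` this reduces to plain semantic retrieval; with `alpha>0` high-Qwen-value datapoints get pulled forward in the ranking.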
Then use Qwen to compare, one by one, a dynamic number of the most similar datapoints against the query datapoint, creating weighted edges that represent how necessary the query is for understanding each of those datapoints. I think a dynamic number would be better for getting a near-constant number of links per datapoint.
So, keep searching through the semantically retrieved similar points until k links have been found with weights above some threshold. That threshold might need to be adjusted based on the average/range of the similarity scores of the retrieved datapoints.
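The dynamic link search could look roughly like this. Here `compare(query, candidate)` is a hypothetical stand-in for the pairwise Qwen call that scores how necessary the query is to the candidate, and `max_probe` is an assumed cap so a sparse neighborhood doesn't loop forever:

```python
def find_links(query_idx, ranked_neighbors, compare, k=5, threshold=0.5, max_probe=50):
    """Walk the retrieval ranking and score each candidate with the pairwise
    comparator, collecting weighted edges until k edges clear the threshold
    (or until max_probe candidates have been checked)."""
    edges = []
    for cand in ranked_neighbors[:max_probe]:
        weight = compare(query_idx, cand)  # e.g., a Qwen 1-vs-1 judgment
        if weight >= threshold:
            edges.append((query_idx, cand, weight))
            if len(edges) == k:
                break  # near-constant out-degree per datapoint
    return edges
```

Adjusting `threshold` from the similarity statistics of the retrieved batch, as suggested above, would slot in just before the loop.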
While not a complete representation of all the edges, I really think this would be incredibly valuable as a means for step-by-step reward modeling.