Comments (9)

andreaskipf commented on June 28, 2024

Thanks for reaching out. Yes, wiki contains duplicates. @alexandervanrenen can you provide some information about the dataset?

I don't understand what you mean by record length. We extend each 64-bit unsigned integer key with a 64-bit value (payload). The payloads don't affect the mapping of keys to positions.

However, it is true that with larger payloads the last-mile search (e.g., binary search) would become more expensive due to extra cache misses.
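
A minimal sketch of the setup described above (hypothetical C++ types, not actual SOSD code), assuming fixed-size records of a 64-bit key plus a 64-bit payload and a model that predicts a position with a known maximum error:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical fixed-size record: a 64-bit unsigned key extended with a 64-bit payload.
// The payload does not influence the key-to-position mapping, only the record size.
struct Record {
    uint64_t key;
    uint64_t payload;
};

// "Last-mile" search: the learned model predicts an approximate position with a
// known error bound, and a binary search over that small window finds the exact
// record. Larger payloads mean more bytes per record, hence more cache misses here.
size_t last_mile_search(const std::vector<Record>& data, uint64_t key,
                        size_t predicted_pos, size_t max_error) {
    size_t lo = predicted_pos > max_error ? predicted_pos - max_error : 0;
    size_t hi = std::min(predicted_pos + max_error + 1, data.size());
    auto it = std::lower_bound(data.begin() + lo, data.begin() + hi, key,
                               [](const Record& r, uint64_t k) { return r.key < k; });
    return static_cast<size_t>(it - data.begin());
}
```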

alexandervanrenen commented on June 28, 2024

Sure! The wiki dataset is based on Wikipedia. We gathered the timestamps of all edits to pages. I think they have second or millisecond granularity, which explains the collisions. You can find the data and descriptions here. If I remember correctly, we only used a subset of the English Wikipedia, as that already contained more than enough timestamps.

chaohcc commented on June 28, 2024

> Thanks for reaching out. Yes, wiki contains duplicates. @alexandervanrenen can you provide some information about the dataset?
>
> I don't understand what you mean by record length. We extend each 64-bit unsigned integer key with a 64-bit value (payload). The payloads don't affect the mapping of keys to positions.
>
> However, it is true that with larger payloads the last-mile search (e.g., binary search) would become more expensive due to extra cache misses.

Thank you for your help!
I mean that each key corresponds to an index in the array. If the record length is 1, then with the keys sorted the positions are 0, 1, 2, 3, ...; but if the record length is not 1, say 20, then the positions are 0, 20, 40, 60, .... Even though the primary keys are the same, the mapping between keys and positions changes, and the performance of the learned index is quite different.
Best wishes!

chaohcc commented on June 28, 2024

> Sure! The wiki dataset is based on Wikipedia. We gathered the timestamps of all edits to pages. I think they have second or millisecond granularity, which explains the collisions. You can find the data and descriptions here. If I remember correctly, we only used a subset of the English Wikipedia, as that already contained more than enough timestamps.

Thank you for your help, I found the description.

andreaskipf commented on June 28, 2024

> I mean that each key corresponds to an index in the array. If the record length is 1, then with the keys sorted the positions are 0, 1, 2, 3, ...; but if the record length is not 1, say 20, then the positions are 0, 20, 40, 60, .... Even though the primary keys are the same, the mapping between keys and positions changes, and the performance of the learned index is quite different.
> Best wishes!

Hi @chaohcc. No, the record length doesn't affect the positions since each record is fixed size. You can just multiply the model's prediction by the record size to get the byte offset of the record. The position of the 5th record will always be 4 (0-indexed). Its byte offset will be 4 * record_size. So the model only needs to know that the 5th record is stored at position 4.

With variable-sized records that's a different story. But even then you probably wouldn't train on the raw byte offsets but instead use another structure for the indirection, like store an array of pointers and index into that with the model.
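
As a rough illustration of the fixed-size case (hypothetical names, not code from this repository): the model is trained on dense positions, and the byte offset is derived only when the record is actually read.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical fixed-size record: 64-bit key plus 64-bit payload (16 bytes total).
struct Record {
    uint64_t key;
    uint64_t payload;
};

// The model predicts a record position (0, 1, 2, ...). Because every record has the
// same size, the byte offset is simply position * sizeof(Record); the model never
// needs to see byte offsets during training.
size_t byte_offset(size_t predicted_position) {
    return predicted_position * sizeof(Record);  // 5th record: 4 * 16 = 64 bytes
}
```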

I hope that helps.

chaohcc commented on June 28, 2024

> > I mean that each key corresponds to an index in the array. If the record length is 1, then with the keys sorted the positions are 0, 1, 2, 3, ...; but if the record length is not 1, say 20, then the positions are 0, 20, 40, 60, .... Even though the primary keys are the same, the mapping between keys and positions changes, and the performance of the learned index is quite different.
> > Best wishes!
>
> Hi @chaohcc. No, the record length doesn't affect the positions since each record is fixed size. You can just multiply the model's prediction by the record size to get the byte offset of the record. The position of the 5th record will always be 4 (0-indexed). Its byte offset will be 4 * record_size. So the model only needs to know that the 5th record is stored at position 4.
>
> With variable-sized records that's a different story. But even then you probably wouldn't train on the raw byte offsets but instead use another structure for the indirection, like store an array of pointers and index into that with the model.
>
> I hope that helps.

Thank you, that's very helpful. I also tried variable-sized records and built a mapping table with just the keys in an array instead of the raw byte offsets.
Best wishes!
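
A rough sketch of the indirection approach for variable-sized records mentioned above (hypothetical types and names): the model still predicts dense positions into a sorted slot array, and each slot points to its variable-sized record rather than storing a raw byte offset.

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical variable-sized record.
struct VarRecord {
    uint64_t key;
    std::string payload;  // arbitrary length
};

// Mapping table: one fixed-size slot per key, sorted by key. The learned model is
// trained on the slot positions (0, 1, 2, ...), not on raw byte offsets.
struct Slot {
    uint64_t key;
    const VarRecord* record;  // pointer (or offset) into the variable-sized storage
};

// Lookup: the model predicts a position into the slot array; the pointer in that
// slot leads to the variable-sized record. A real implementation would still do a
// bounded last-mile search around the prediction; here we assume it is exact.
const VarRecord* lookup(const std::vector<Slot>& slots, size_t predicted_position,
                        uint64_t key) {
    const Slot& slot = slots[predicted_position];
    return slot.key == key ? slot.record : nullptr;
}
```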

chaohcc commented on June 28, 2024

> Sure! The wiki dataset is based on Wikipedia. We gathered the timestamps of all edits to pages. I think they have second or millisecond granularity, which explains the collisions. You can find the data and descriptions here. If I remember correctly, we only used a subset of the English Wikipedia, as that already contained more than enough timestamps.

Hi! I have another question about the dataset: I'd like more information about 'books_200M'. Is it a timestamp or some other ID? Thank you for your help!

Best wishes!

alexandervanrenen commented on June 28, 2024

@RyanMarcus If I remember correctly, the book dataset has the popularity of each book, i.e., the number of times it was accessed or bought on Amazon. No timestamps.

chaohcc commented on June 28, 2024

> @RyanMarcus If I remember correctly, the book dataset has the popularity of each book, i.e., the number of times it was accessed or bought on Amazon. No timestamps.

Thank you so much!
