
Comments (13)

altavir commented on May 27, 2024

@elizarov just pointed me in the direction of Memory, which seems to do exactly what I need. Now I need to understand whether there are ways to easily allocate it.

from kotlinx-io.

elizarov commented on May 27, 2024

It is not supposed to be allocated easily. It is a resource that is to be carefully managed. Right now we are thinking that a scoped primitive that gives you memory for a while should be OK. Tell us more about your use case, though.


altavir commented on May 27, 2024

I use manual placement of objects in a JVM ByteBuffer to avoid boxing. Currently, it is solved by specialized readers and writers like these, which emulate value types. Current tests show that this almost completely eliminates boxing overhead on non-primitive buffers (I tested it for complex numbers).
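For illustration, the placement idea can be sketched like this (a minimal sketch with names of my own invention, not the actual kmath reader/writer API):

```kotlin
import java.nio.ByteBuffer

// A complex number is written as two raw doubles at a fixed offset instead of
// as an object reference, so the buffer never holds boxed values.
data class Complex(val re: Double, val im: Double)

fun ByteBuffer.putComplex(offset: Int, value: Complex) {
    putDouble(offset, value.re)
    putDouble(offset + 8, value.im)
}

fun ByteBuffer.getComplex(offset: Int): Complex =
    Complex(getDouble(offset), getDouble(offset + 8))
```

Each element occupies a fixed 16-byte slot, so element `i` of a "complex buffer" lives at offset `16 * i`.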

Currently I use a JVM ByteBuffer, but obviously I can't move it to multiplatform, since IOBuffer works quite differently. Memory seems to do the trick (read and write primitives, create non-copying view slices, etc.). And it seems to be backed by ByteBuffer, but of course I will need some way to allocate it and keep it allocated while the Buffer that holds it is alive.


altavir commented on May 27, 2024

In the future, I will probably want to connect to something like Apache Arrow for cross-language data transport and use its memory model, but that is currently out of scope.


cy6erGn0m commented on May 27, 2024

The problem is that different platforms have different memory management, so it is unclear how we can define an MPP common allocator for Memory such that it is fully functional and relatively safe. This is why the only planned function is something like this (IoBuffer will always have Memory inside):

inline fun <R> withBuffer(size: Int, block: IoBuffer.() -> R): R
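For illustration, a JVM-only sketch of such a scoped primitive might look like this (ScopedBuffer is my stand-in for IoBuffer, not the real kotlinx-io type):

```kotlin
import java.nio.ByteBuffer

// Sketch only: a scoped allocation helper in the spirit of the proposed
// withBuffer. The memory is only reachable inside the block.
class ScopedBuffer(val buffer: ByteBuffer)

inline fun <R> withBuffer(size: Int, block: ScopedBuffer.() -> R): R {
    val scoped = ScopedBuffer(ByteBuffer.allocate(size))
    try {
        return scoped.block()
    } finally {
        // On the JVM the GC reclaims the backing buffer; on Native an
        // explicit free (or return to a pool) would go here.
    }
}
```

The scope guarantees a well-defined point at which the platform-specific release can happen, which is exactly why allocation is not exposed as a free-standing function.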


altavir commented on May 27, 2024

I think that for most cases a simple wrapper on top of ByteArray will do. Why not make an interface like RandomAccessBuffer and have Memory implement it? Operations like primitive get/set could be added on top of it as extensions instead of being extensions of Memory (Memory could have its own set of extensions overriding those of the interface). Then we can add other implementations, like one wrapping a ByteArray, or even one backed by Arrow storage.


cy6erGn0m commented on May 27, 2024

Those extensions couldn't be on top of RandomAccessBuffer because they can't be implemented efficiently: all primitive get/set operations would be significantly slower (compared to ByteBuffer.getShort/Int/Long...). The idea is that on the JVM, Memory is an inline class that is represented as a ByteBuffer at runtime, and all functions are inline, so any code written against Memory compiles to the corresponding bytecode working with ByteBuffer, and all HotSpot optimizations stay enabled. Any kind of wrapping or hand-made primitive reading implementation will reduce performance.
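As a rough illustration of that design (a sketch, not the actual kotlinx-io source):

```kotlin
import java.nio.ByteBuffer

// A value (inline) class erased to its underlying ByteBuffer at runtime.
// Because the accessors delegate straight to ByteBuffer, the compiled code
// is plain getInt/putInt calls and HotSpot intrinsics still apply.
@JvmInline
value class Memory(val buffer: ByteBuffer) {
    fun loadIntAt(offset: Int): Int = buffer.getInt(offset)
    fun storeIntAt(offset: Int, value: Int) {
        buffer.putInt(offset, value)
    }
}
```

No wrapper object is allocated for a `Memory` value in most call sites, which is the point being made: routing the same operations through an interface would force a virtual call per primitive access.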


altavir commented on May 27, 2024

Indeed, but we still need some kind of multiplatform implementation for this. There are several ways to solve that. One is to make the primitive read/write operations members instead of extensions. That would allow "slow" access for "slow" memory (ByteArray) and optimized access for ByteBuffer. It would probably work, but it is not very Kotlin-ish. Another way (the one I usually use in Kotlin) is to separate storage and access: you have a storage class like Memory with minimal functionality, and then an accessor class like MemoryReader or MemoryWriter that takes the actual Memory as a parameter and is created via a factory function like Memory.read(). This factory function could find out (at runtime) which exact Memory implementation is used and then use optimized access methods if they exist. It would bring only minimal runtime overhead and looks quite simple from the user's side. We could also automatically free the memory once it is initialized and no accessor holds it at the moment. I can write a prototype later if you are interested.
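A minimal sketch of that storage/accessor split, with all names hypothetical:

```kotlin
// Memory is a minimal storage interface; a factory inspects the concrete
// implementation at runtime and returns an accessor with a matching
// (possibly optimized) code path.
interface Memory {
    val size: Int
}

class ByteArrayMemory(val array: ByteArray) : Memory {
    override val size: Int get() = array.size
}

interface MemoryReader {
    fun readInt(offset: Int): Int
}

fun Memory.read(): MemoryReader = when (this) {
    // "Slow" path: a big-endian int assembled byte by byte from the array.
    is ByteArrayMemory -> object : MemoryReader {
        override fun readInt(offset: Int): Int =
            ((array[offset].toInt() and 0xFF) shl 24) or
            ((array[offset + 1].toInt() and 0xFF) shl 16) or
            ((array[offset + 2].toInt() and 0xFF) shl 8) or
            (array[offset + 3].toInt() and 0xFF)
    }
    // A ByteBuffer-backed implementation would return a reader delegating
    // straight to ByteBuffer.getInt here.
    else -> error("No reader registered for ${this::class.simpleName}")
}
```

The dispatch on the concrete type happens once per accessor, not once per read, which is why the per-element overhead stays at a single virtual call.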


altavir commented on May 27, 2024

Here is the prototype: https://github.com/mipt-npm/kmath/tree/dev/kmath-memory/src/commonMain/kotlin/scientifik/memory
It ended up very similar to the current IO implementation (I've stolen most of the JS part). The difference is that Memory is an interface and can have multiple implementations on the same platform. That could allow better flexibility in the future; for example, it is possible that we will need some kind of special representation for shared memory, when it becomes available.

Another feature (not really used yet) is a release mechanism. The idea is that Memory is initialized when the first reader or writer is taken from it (it is possible to make initialization lazy), and then released when all readers and writers are released. This way one can control the memory release process on Native or in other cases.
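Sketched in isolation, such a reference-counted release could look like this (hypothetical names, not the prototype's actual code):

```kotlin
import java.util.concurrent.atomic.AtomicInteger

// Memory counts its live accessors and is released when the last
// reader/writer handle is closed.
class CountedMemory(val size: Int) {
    private val holders = AtomicInteger(0)
    var released = false
        private set

    fun acquire(): AutoCloseable {
        check(!released) { "Memory already released" }
        holders.incrementAndGet()
        return AutoCloseable {
            if (holders.decrementAndGet() == 0) released = true
        }
    }
}
```

On the JVM the release is mostly a no-op thanks to the GC; on Native it would be where the actual deallocation hook runs.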

I did not implement array reads for now, since I am not sure I understand the use case for them. They could be done via MemorySpec. A MemorySpec could be optimized for a specific memory type: it could check the concrete memory type via MemoryReader::memory and use optimized access operations if the type matches. Also, a user could supply a MemorySpec optimized for a specific memory type.
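A self-contained sketch of what a MemorySpec and an array read built on it might look like (hypothetical API; ByteBuffer is used here in place of MemoryReader/MemoryWriter):

```kotlin
import java.nio.ByteBuffer

// A MemorySpec is a strategy object that knows the size and layout of one
// value type, so array reads reduce to a loop over fixed-size strides.
interface MemorySpec<T> {
    val objectSize: Int
    fun read(buffer: ByteBuffer, offset: Int): T
    fun write(buffer: ByteBuffer, offset: Int, value: T)
}

data class Point(val x: Int, val y: Int)

object PointSpec : MemorySpec<Point> {
    override val objectSize = 8
    override fun read(buffer: ByteBuffer, offset: Int) =
        Point(buffer.getInt(offset), buffer.getInt(offset + 4))
    override fun write(buffer: ByteBuffer, offset: Int, value: Point) {
        buffer.putInt(offset, value.x)
        buffer.putInt(offset + 4, value.y)
    }
}

// An array read is then just objectSize-strided element reads.
fun <T> ByteBuffer.readArray(spec: MemorySpec<T>, offset: Int, count: Int): List<T> =
    List(count) { spec.read(this, offset + it * spec.objectSize) }
```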


Dominaezzz commented on May 27, 2024

IMO, something like that can and should be implemented on top of the current Memory class, but not necessarily in kotlinx.io.
It introduces unnecessary runtime indirection for what is supposed to be a thin abstraction over a platform-specific raw memory implementation. If in the future there is another implementation (like Project Panama), it can be added as another actual module for the same target, similar to how ktor-client-* has multiple implementations for the same core client.
Although it would be nice if one of the actuals could be your prototype, which would make everyone happy. I'm not sure expect/actual would ever allow this use case.

I'm not sure if this is currently possible, as I haven't gotten to this stage in my project yet, but the MemorySpec bit might be achievable with kotlinx.serialization.


altavir commented on May 27, 2024

I can agree that this is not fundamentally an IO problem. But it seems to me that one Memory per platform does not cover all possible use cases: it is possible to have different memory variants on the same platform. Split actuals are not always a good solution, because you have to actually pull in a different module and recompile everything to make the change.
Of course, I can build everything on top of the existing Memory implementation and then add my own interface on top of it, but the problem of being unable to allocate memory in common code still exists.
I do not see any memory indirection here. Maybe you are talking about virtual calls? Well, the API adds a single additional virtual call, and I do not see how that could affect anything.

A compiler plugin to determine the MemorySpec could be done in the same way as in kotlinx.serialization; I mentioned it before. Maybe even the current plugin could be tricked into doing it, but I am not sure. For mathematical tasks it is probably not needed (we work with a limited number of simple objects, and it is quite easy to implement a specification for each of them), but if Kotlin tries to implement a value-type surrogate through that, it is possible.


Dominaezzz commented on May 27, 2024

Will the new Memory class have methods to set/get in native byte order, as opposed to the current big-endian-only getters and setters?
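On the JVM, the ByteBuffer backing Memory defaults to big-endian, and switching the order is a single call; a short sketch of why the distinction matters (this is plain ByteBuffer, not the Memory API):

```kotlin
import java.nio.ByteBuffer
import java.nio.ByteOrder

// The same four bytes read back as different ints depending on the
// buffer's configured byte order.
fun orderDemo(): Pair<Int, Int> {
    val buffer = ByteBuffer.allocate(4)
    buffer.putInt(0, 0x01020304)        // stored big-endian: 01 02 03 04
    val big = buffer.getInt(0)          // read back as 0x01020304
    buffer.order(ByteOrder.LITTLE_ENDIAN)
    val little = buffer.getInt(0)       // same bytes reread as 0x04030201
    return big to little
}
```

A multiplatform Memory would presumably need either explicit little-endian accessor variants or an order parameter to expose this.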


fzhinkin commented on May 27, 2024

We're rebooting the kotlinx-io development (see #131), all issues related to the previous versions will be closed. Consider reopening it if the issue remains (or the feature is still missing) in a new version.

