reliance-edge's People

Contributors

danielrlewis, datalight-devops, datalightsupport, gjjjiang, jcdubois, jeremysherrill, quizic

reliance-edge's Issues

API to manage user-controlled metadata as extended attributes.

Hello guys,

For our purposes, we needed a way to add user-controlled metadata to files stored inside the Reliance Edge filesystem (POSIX version).

This metadata needed to be associated with the file but kept distinct from the file's data payload. Moreover, it was desirable for the metadata to be integrity-protected.

So instead of devising a solution strictly on top of the existing Reliance API, I started to add a very crude/simple "user-controlled metadata API", loosely inspired by the Linux/POSIX extended attribute API.

Here, the feature is rather limited:

  • the maximum number of attributes is determined at compile time,
  • the size of an attribute is fixed at 32 bits,
  • the attributes are referenced by index rather than by a key.

The drawback is that the API is not really Linux-compliant. Then again, there seems to be no POSIX version of this API, so maybe it doesn't really matter to have a Reliance-specific API.

My current prototype is available on a branch of my GitHub repository (https://github.com/jcdubois/reliance-edge/tree/attr).

The feature is optional (it is not included if REDCONF_ATTRIBUTES_MAX is 0) and it allows the user to store 32-bit user-controlled values inside directory/file inodes. Because the values are stored in the inode, they are protected by the inode CRC.
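
To make the discussion concrete, here is a usage sketch. The attribute function name and signature below (red_setattr) are assumptions for illustration only; the actual API on the "attr" branch may differ. red_open()/red_close() are the existing Reliance Edge POSIX-like calls.

    /* Hypothetical sketch: store a 32-bit tag in a file's inode attributes.
     * red_setattr() is an assumed name, not existing Reliance Edge API. */
    #include <redposix.h>
    #include <stdint.h>

    static int32_t StoreFileTag(const char *pszPath, uint32_t ulAttrIndex, uint32_t ulValue)
    {
        int32_t iFd = red_open(pszPath, RED_O_RDWR);
        int32_t iRet = -1;

        if(iFd >= 0)
        {
            /* Assumed setter: writes a 32-bit attribute into the file's inode,
               where it would be covered by the inode CRC. */
            iRet = red_setattr(iFd, ulAttrIndex, ulValue);
            (void)red_close(iFd);
        }

        return iRet;
    }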

I would like to know whether such a feature is of interest to you and whether you would consider merging it (after careful review and any required fixes/changes) into the Reliance Edge mainline.

Thanks.

JC

Note: on my branch I have added a crc_file_wrapper API as an example of what can be achieved with this feature (using one attribute to store the CRC of the file data). This API is not necessarily a candidate for inclusion (unless you find it useful). Extended attributes can be used to store many other file-related things, such as owner ID, file type, encoding, an identifier, and any number of other tags.

Note: I ported fsstress to use the crc_file_wrapper API (rather than the red API) to test it a bit, and it worked as expected (with some performance hit when writing files).

redstat.h is incompatible with POSIX 2008 OSes.

If an OS is POSIX 2008 compliant (for example, Linux), its stat.h header will contain the following define:

# define st_atime st_atim.tv_sec

The same applies to st_mtime and st_ctime.

This define exists to provide backward compatibility for source code that still uses st_atime (or st_mtime or st_ctime).

As a result, any file including (directly or indirectly) both the POSIX-compliant stat.h and redstat.h will fail to compile.

The reported error is not very clear about the cause, but it all comes from the above define, since st_atime is expanded to st_atim.tv_sec.

../../../include/redstat.h:70:17: error: expected ‘:’, ‘,’, ‘;’, ‘}’ or ‘__attribute__’ before ‘.’ token
uint32_t st_atime; /**< Time of last access (seconds since 01-01-1970). */
^
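
For discussion, here is one possible workaround sketch. This is an assumption on my part, not a verified fix, and it hides the POSIX compatibility macros for the rest of the translation unit (it also assumes redposix.h pulls in redstat.h):

    #include <sys/stat.h>   /* POSIX header that defines the st_atime macro. */

    /* Workaround sketch: undefine the POSIX compatibility macros before the
     * Reliance Edge headers are included, so that the REDSTAT field names are
     * not rewritten by the preprocessor. */
    #ifdef st_atime
      #undef st_atime
      #undef st_mtime
      #undef st_ctime
    #endif

    #include <redposix.h>   /* Pulls in redstat.h with its st_atime field. */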

Erroneous use of "current volume" globals

Note: Datalight has published a technical bulletin which discusses the consequences of this bug, which projects are likely to be affected, and what has been done to correct it.

Reliance Edge is using a global variable for the "current" (i.e., currently accessed) volume in several places where doing so is inappropriate and problematic.

In core/driver/blockio.c, every function is using gpRedVolConf (the global variable for the per-volume configuration parameters for the current volume) to access the number of times that an I/O operation should be retried:

for(bRetryIdx = 0U; bRetryIdx <= gpRedVolConf->bBlockIoRetries; bRetryIdx++)
{
    /* ... */
}

Also in core/driver/blockio.c, RedIoRead() and RedIoWrite() use gpRedVolConf for the sector offset:

uint64_t ullSectorStart = ((uint64_t)ulBlockStart << bSectorShift) + gpRedVolConf->ullSectorOffset;

All of the RedIo functions have a bVolNum (volume number) parameter; so, semantically speaking, they should use the parameters for the specified volume rather than the current volume. The issue is more than semantic with RedIoWrite(). The block buffers in Reliance Edge (core/driver/buffer.c) are shared by all mounted Reliance Edge volumes; and when doing LRU replacement of a dirty block buffer, that module will invoke RedIoWrite() (from BufferWrite()) on a buffer that is not necessarily from the current volume. As a result, the bBlockIoRetries and ullSectorOffset values used may be for the wrong volume. If partitioning is in use, using the wrong ullSectorOffset will generally result in an I/O error, since (as of v2.2.1) the RedOsBDevWrite() implementations use the VOLUME_SECTOR_RANGE_IS_VALID() macro to validate the sector range, which will detect the discrepancy.
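
As a sketch of the indexed lookup implied above (assuming the per-volume configuration array is named gaRedVolConf, matching the gpRedVolConf pointer; the actual fix may be structured differently), the retry loop would use the caller-supplied volume number instead of the current-volume global:

    /* Sketch only: look up the configuration for the volume passed to the
     * RedIo function rather than for the "current" volume. */
    const VOLCONF *pVolConf = &gaRedVolConf[bVolNum];
    uint8_t        bRetryIdx;

    for(bRetryIdx = 0U; bRetryIdx <= pVolConf->bBlockIoRetries; bRetryIdx++)
    {
        /* ... attempt the I/O, as in the original loop ... */
    }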

In core/driver/buffer.c, the BufferFinalize() function is using gpRedVolume (the global variable for volume data for the current volume) to get at the current sequence number:

uint64_t ullSeqNum = gpRedVolume->ullSequence;

BufferFinalize() is called when doing LRU replacement of a dirty block buffer; as mentioned above, the block being written is not necessarily from the current volume. If the block buffer being written is not for the current volume, the incorrect sequence number is put into the metadata block header. This could result in metadata corruption. The sequence number from the current volume might be higher than the sequence number for the volume whose metadata block is being written. When that metadata block is read back in, Reliance Edge detects (in BufferIsValid()) that the sequence number in the metadata block header is too high for the volume, resulting in a critical error that sets the volume read-only.

BufferFinalize() also calls RedVolSeqNumIncrement(), which always increments the sequence number from the current volume: it should be incrementing the sequence number for the volume whose metadata block is being written.
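
A per-volume form of the sequence number access would look something like the following sketch (assuming the per-volume data array is named gaRedVolume, matching the gpRedVolume pointer; the actual fix may be structured differently):

    /* Sketch only: take the sequence number from the volume that owns the
     * buffer being finalized, not from the current volume. */
    uint64_t ullSeqNum = gaRedVolume[bVolNum].ullSequence;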

For further details, refer to the technical bulletin.

Failing to compile Linux host tools.

Compilation fails with the following:

gcc -Wall -Werror -I ../../../projects/linux/host -I ../../../include -I ../../../core/include -I ../../../os/linux/include -DD_DEBUG=0 -D_XOPEN_SOURCE=500 -x c -c ../../../tools/getopt.c -o ../../../tools/getopt.to
../../../tools/getopt.c:102:19: error: ‘illoptchar’ defined but not used [-Werror=unused-const-variable=]
static const char illoptchar[] = "illegal option -- %c\n"; /* From P1003.2 */
^~~~~~~~~~
cc1: all warnings being treated as errors
../../../os/linux/build/host.mk:23: recipe for target '../../../tools/getopt.to' failed
make: *** [../../../tools/getopt.to] Error 1

Please find the proposed patch in the attached file.

JC

Linux-Allow-linux-to-compile.txt
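
The attached patch is not reproduced here. For reference, one generic way to silence this class of warning, assuming the string really is unused in the Linux build, is to mark the variable as possibly unused (GCC/Clang attribute shown; this is an illustration, not the content of the attached patch):

    /* Illustration only: tell GCC/Clang the constant may legitimately be
     * unused, so -Werror=unused-const-variable no longer fires. */
    static const char illoptchar[] __attribute__((unused)) =
        "illegal option -- %c\n"; /* From P1003.2 */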

Implement readahead hint for low-level driver functions

If an application reads data from files in 4 KB chunks, then with the current design all I/O toward the persistent storage is synchronous with the call and is started over for the next 4 KB chunk.
Now, assuming we have a relatively intelligent and autonomous persistent storage peripheral, such as an eMMC controller able to do DMA transfers in the background, we cannot really leverage this ability with the present usage scheme, because Reliance Edge uses the persistent storage synchronously.
For example, if the application has some processing to do on the present 4 KB chunk, it would be beneficial (performance-wise) to trigger the retrieval of the next 4 KB chunk in the background (by the intelligent persistent storage peripheral) so that it is available (or almost available) when the application is done with the present chunk.
I have considered adding this behavior only to the low-level driver functions, without any particular support/hint from Reliance Edge, but I think implementing it blindly on all read calls could be counterproductive, as these APIs are also used to retrieve inode information and other filesystem metadata (and those are mostly short single-block reads).
So I was wondering whether it would make sense to add either some parameters to the existing read functions, or even some new functions, to give a hint to the lower driver API that it would be beneficial to initiate readahead if supported by the hardware.
Do you think such a feature could be beneficial to Reliance Edge, and would you be interested in adding it? If so, assuming I can make a prototype for it, could you give some guidance on how you would prefer it to be implemented?
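
For discussion, here is a hypothetical sketch of such a hint hook. Everything in it is an assumption: RedOsBDevReadAhead is not part of the existing Reliance Edge block device service API, and its signature simply mirrors the parameters of the existing RedOsBDevRead() function.

    /* Hypothetical hint-only hook (assumed name and signature).  The core
     * could call this after a sequential data read; a driver without
     * background-transfer capability simply ignores the hint. */
    REDSTATUS RedOsBDevReadAhead(
        uint8_t  bVolNum,        /* Volume whose block device is addressed. */
        uint64_t ullSectorStart, /* First sector expected to be read next. */
        uint32_t ulSectorCount)  /* Number of sectors in the expected read. */
    {
        /* A driver for an autonomous controller (e.g. eMMC with DMA) could
           start the transfer into an internal cache here and return
           immediately, so the data is (almost) ready for the next read. */
        (void)bVolNum;
        (void)ullSectorStart;
        (void)ulSectorCount;

        return 0;
    }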

FreeRTOS F_DRIVER bug in block driver read/write

The FreeRTOS F_DRIVER block device read and write functions use the wrong variable(s) for the sector index.

From os/freertos/services/osbdev.c, line 478 (DiskRead()):

    for(ulSectorIdx = 0U; ulSectorIdx < ulSectorCount; ulSectorIdx++)
    {
        iErr = pDriver->readsector(pDriver, &pbBuffer[ulSectorIdx * ulSectorSize],
                                   CAST_ULONG(ullSectorStart + ulSectorCount));
    /*...*/
    }

Line 534 contains similar logic for the DiskWrite() function.

The F_DRIVER readsector and writesector methods can only handle one sector at a time, so they are called from within a for loop so that multiple sectors can be transferred by the RedOsBDevRead and RedOsBDevWrite functions. The loop iterator is ulSectorIdx, which is correctly used to find the right offset within the given buffer. However, the loop iterator is ignored when determining the sector number passed to readsector and writesector. Instead, (ullSectorStart + ulSectorCount) is passed as the sector number.

The values of ullSectorStart and ulSectorCount are not modified within the loop. For multi-sector transfers, this means that the same sector will be read or written repeatedly instead of the expected number of sectors being transferred to or from the disk. Furthermore, the sector being accessed is always one beyond the end of the requested transfer, which affects single-sector transfers as well. For example, on single-sector transfers (ulSectorCount = 1), the desired sector is at ullSectorStart, but ullSectorStart + 1 is used as the sector number.
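
The corrected indexing would look something like the following sketch (variable and method names are taken from the snippet above; the actual shipped fix may differ slightly): the loop iterator is added to the starting sector so that each iteration transfers the next sector.

    for(ulSectorIdx = 0U; ulSectorIdx < ulSectorCount; ulSectorIdx++)
    {
        iErr = pDriver->readsector(pDriver, &pbBuffer[ulSectorIdx * ulSectorSize],
                                   CAST_ULONG(ullSectorStart + ulSectorIdx));
        /*...*/
    }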

The FreeRTOS BDEV_F_DRIVER implementations of DiskRead and DiskWrite (os/freertos/services/osbdev.c) are affected by this bug. Other block device service implementations are not affected.

This issue report is for documentation purposes only; the bug has already been fixed.

Low-level driver for STM32 SDIO

Studying the low-level driver for STM32, I'm a little confused. Take a look at this code:

    if(IS_ALIGNED_PTR(pBuffer, sizeof(uint32_t)))
    {
        bSdError = BSP_SD_WriteBlocks_DMA(CAST_AWAY_CONST(void, pBuffer), ullSectorStart, ulSectorCount);

        if(bSdError != MSD_OK)
        {
            redStat = -RED_EIO;
        }
      #if SD_STATUS_TIMEOUT > 0U
        else
        {
            redStat = CheckStatus();
        }
      #endif
    }
    else
    {
        uint32_t ulSectorIdx;

        for(ulSectorIdx = 0U; ulSectorIdx < ulSectorCount; ulSectorIdx++)
        {
            const uint8_t *pbBuffer = pBuffer;

            RedMemCpy(gaulAlignedBuffer, &pbBuffer[ulSectorIdx * ulSectorSize], ulSectorSize);

            bSdError = BSP_SD_WriteBlocks_DMA(gaulAlignedBuffer, (ullSectorStart + ulSectorIdx), 1U);

            if(bSdError != MSD_OK)
            {
                redStat = -RED_EIO;
            }
          #if SD_STATUS_TIMEOUT > 0U
            else
            {
                redStat = CheckStatus();
            }
          #endif

            if(redStat != 0)
            {
                break;
            }
        }
    }

    return redStat;

This function (HAL_SD_WriteBlocks_DMA) takes a logical block address, but the Reliance Edge high-level functions pass a block number. Can you explain exactly how the blocks are addressed?

Persistent write buffer for unaligned writes

When the system has heavy read traffic and light write traffic in small chunks (smaller than the block size), I see the following filesystem behavior: between consecutive writes, the buffers used for unaligned writes are constantly evicted (flushed to NVRAM) by the LRU algorithm.
This leads to heavy I/O traffic for a single write request and increases write amplification.
Suggested solution: mark the buffer used for unaligned writes (this requires one buffer per file) as "persistent",
so that it is not flushed to NVRAM by the LRU algorithm in RedBufferGet().
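
As an illustration of the suggestion only (BFLAG_PERSISTENT and gabBufFlags are hypothetical names, not existing Reliance Edge code), the LRU victim selection in RedBufferGet() would skip pinned buffers:

    /* Sketch only: never pick a pinned unaligned-write buffer as the LRU
     * eviction victim. */
    uint8_t bIdx;

    for(bIdx = 0U; bIdx < REDCONF_BUFFER_COUNT; bIdx++)
    {
        if((gabBufFlags[bIdx] & BFLAG_PERSISTENT) != 0U)
        {
            continue; /* Pinned: must not be flushed by LRU replacement. */
        }

        /* ... existing LRU victim selection ... */
    }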
