fastdbf's Introduction

FastDBF

A free and open-source .NET library for reading/writing DBF files. Fast and easy to use. Supports writing to forward-only streams, which makes it easy to write DBF files in a web-server environment.

enjoy,

Ahmed Lacevic

fastdbf's People

Contributors

alacevic, emirpasic, jesuswasrasta, lazyb0y, mprtenjak, slightlymadgargoyle


fastdbf's Issues

Byte 0 of mData (IsDeleted) is incorrectly initialised in the DbfRecord constructor

mData is initialized in the DbfRecord constructor, leaving mData[0] == 0. Valid values would be either '*' or ' ', as described in the setter of IsDeleted.

My quick fix was to set IsDeleted to false after the initialization of mData:

        public DbfRecord(DbfHeader oHeader)
        {
            mHeader = oHeader;
            mHeader.Locked = true;

            //create a buffer to hold all record data. We will reuse this buffer to write all data to the file.
            mData = new byte[mHeader.RecordLength];
            IsDeleted = false; // Make sure mData[0] correctly represents 'not deleted'
            mEmptyRecord = mHeader.EmptyDataRecord;
            encoding = oHeader.encoding;

            for (int i = 0; i < oHeader.mFields.Count; i++)
                mColNameToConIdx[oHeader.mFields[i].Name] = i;
        }
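For context, here is a minimal, self-contained sketch of the deleted-flag convention the fix relies on. This is a simplified stand-in, not FastDBF's actual DbfRecord class: byte 0 of the record buffer holds ' ' (0x20) for a live record and '*' (0x2A) for a deleted one, so a freshly zeroed buffer is in neither state.

```csharp
using System;

// Simplified stand-in for DbfRecord's deleted-flag handling.
class RecordSketch
{
    private readonly byte[] mData;

    public RecordSketch(int recordLength)
    {
        mData = new byte[recordLength]; // mData[0] starts as 0, an invalid flag
        IsDeleted = false;              // the proposed fix: force a valid ' ' flag
    }

    public bool IsDeleted
    {
        get { return mData[0] == (byte)'*'; }
        set { mData[0] = value ? (byte)'*' : (byte)' '; }
    }

    public byte FlagByte { get { return mData[0]; } }
}
```

Without the `IsDeleted = false;` line, mData[0] would remain 0, which some DBF readers interpret as neither deleted nor live.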

multi query DBF and display to datagridview in vb.net

Dear All master,
I want to run multiple queries and display the records in a DataGridView. Is the library fast enough for three hundred thousand records?
For information, I use VS2010 with the VB.NET programming language.

Thanks
kana88

Add support for the F field type

Please add the following after case "M": return DbfColumnType.Memo;

        case "F": return DbfColumnType.Number;
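A self-contained sketch of what the requested mapping would look like. The enum and method here are stand-ins for FastDBF's DbfColumnType and DbfColumn.GetDbaseType, and this version takes a char where the issue's snippet switches on strings; the point is only that 'F' (dBASE Float) maps onto the same type as 'N' (Number):

```csharp
using System;

enum DbfColumnTypeSketch { Character, Number, Memo, Date, Boolean }

static class TypeMap
{
    // Maps a dBASE field-type character to a column type.
    // 'F' (Float) is treated as Number, as requested in the issue.
    public static DbfColumnTypeSketch GetDbaseType(char c)
    {
        switch (char.ToUpper(c))
        {
            case 'C': return DbfColumnTypeSketch.Character;
            case 'N': return DbfColumnTypeSketch.Number;
            case 'M': return DbfColumnTypeSketch.Memo;
            case 'D': return DbfColumnTypeSketch.Date;
            case 'L': return DbfColumnTypeSketch.Boolean;
            case 'F': return DbfColumnTypeSketch.Number; // the requested addition
            default: throw new NotSupportedException("Unsupported field type: " + c);
        }
    }
}
```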

UTF-8 Encoding

Hi,

I'm creating a DBF file with FastDBF. The file contains some data with accents (like áéíóúñ). I tried creating the file with Encoding.UTF8, Encoding.Ansi, Encoding.Unicode, and Encoding.ASCII, but those characters are not displayed correctly when I open the DBF file.

Is there something else I should do to get this to work?

Thanks
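Most DBF viewers treat the file as a single-byte code page, so accented characters usually need a code-page encoding (e.g. Latin-1, or a DOS code page such as 437/850) rather than UTF-8 or UTF-16. A small self-contained sketch of the difference; whether it fixes this particular issue depends on which code page the consuming application expects:

```csharp
using System;
using System.Text;

static class EncodingDemo
{
    // Returns how many bytes the text occupies under the given encoding.
    static int ByteCount(Encoding enc, string text) => enc.GetBytes(text).Length;

    static void Main()
    {
        string text = "áéíóúñ";

        // UTF-8 needs two bytes for each of these accented characters,
        // which a single-byte DBF viewer shows as two garbage characters each.
        Console.WriteLine(ByteCount(Encoding.UTF8, text)); // 12

        // Latin-1 (ISO-8859-1) stores each of them in one byte,
        // which is what many single-byte DBF viewers expect.
        Console.WriteLine(ByteCount(Encoding.GetEncoding(28591), text)); // 6
    }
}
```

If the consumer is FoxPro or dBASE, a DOS code page such as 437 or 850 may be needed instead; note that on .NET Core those code pages require registering CodePagesEncodingProvider first, whereas ISO-8859-1 (28591) is built in.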

Index Files

Hello,
FastDBF solved a lot of my problems.
Really well-written code.
I wish FastDBF had a way to create index files.
David

Use of Clipper and FoxPro extended "FieldLength" functionality does not always work

"public void Read(BinaryReader reader)" allows for a non-standard 16-bit nFieldLength, as documented below.

I have run into some DBFs where the high byte is populated with a non-zero value, resulting in column lengths that are longer than the actual record.

In my case I'm working around this by attempting to use the extended FieldLength format, checking for length errors, and re-processing without the extended FieldLength format if an error is identified.

        /// <summary>
        /// Read header data; make sure the stream is positioned at the start of the file before reading the header, otherwise you will get an exception.
        /// When this function is done the position will be the first record.
        /// </summary>
        /// <param name="reader"></param>
        public void Read(BinaryReader reader)
        {
            var readerPos = reader.BaseStream.Position;

            // Attempt to use extended FieldLength format
            Read(reader, true);

            // Calculate the expected record length from the field definitions.
            var calculatedDataLength = 1;
            for (var i = 0; i < mFields.Count; i++)
            {
                calculatedDataLength += mFields[i].Length;
            }

            // If the header's record length does not match the calculated one, re-process the file without the extended FieldLength format
            if (RecordLength != calculatedDataLength)
            {
                reader.BaseStream.Position = readerPos;
                Read(reader, false);
            }
        }



        /// <summary>
        /// Read header data; make sure the stream is positioned at the start of the file before reading the header, otherwise you will get an exception.
        /// When this function is done the position will be the first record.
        /// </summary>
        /// <param name="reader"></param>
        internal void Read(BinaryReader reader, bool allowExtendedFieldLength)
        {
            // file type byte.
            int nFileType = reader.ReadByte();

            if (nFileType != 0x03)
                throw new NotSupportedException("Unsupported DBF reader Type " + nFileType);

            // parse the update date information.
            int year = (int)reader.ReadByte();
            int month = (int)reader.ReadByte();
            int day = (int)reader.ReadByte();
            mUpdateDate = new DateTime(year + 1900, month, day);

            // read the number of records.
            mNumRecords = reader.ReadUInt32();

            // read the length of the header structure.
            mHeaderLength = reader.ReadUInt16();

            // read the length of a record
            mRecordLength = reader.ReadInt16();

            // skip the reserved bytes in the header.
            reader.ReadBytes(20);

            // calculate the number of Fields in the header
            int nNumFields = (mHeaderLength - FileDescriptorSize) / ColumnDescriptorSize;

            //offset from start of record, start at 1 because that's the delete flag.
            int nDataOffset = 1;

            // read all of the header records
            mFields = new List<DbfColumn>(nNumFields);
            for (int i = 0; i < nNumFields; i++)
            {

                // read the field name
                char[] buffer = reader.ReadChars(11);
                string sFieldName = new string(buffer);
                int nullPoint = sFieldName.IndexOf((char)0);
                if (nullPoint != -1)
                    sFieldName = sFieldName.Substring(0, nullPoint);


                //read the field type
                char cDbaseType = (char)reader.ReadByte();

                // read the field data address, offset from the start of the record.
                int nFieldDataAddress = reader.ReadInt32();

                //read the field length in bytes
                //if field type is char, then read FieldLength and Decimal count as one number to allow char fields to be
                //longer than 256 bytes (ASCII char). This is the way Clipper and FoxPro do it, and there is really no downside
                //since for char fields decimal count should be zero for other versions that do not support this extended functionality.
                //-----------------------------------------------------------------------------------------------------------------------
                int nFieldLength = 0;
                int nDecimals = 0;
                if (cDbaseType == 'C' || cDbaseType == 'c')
                {
                    if (allowExtendedFieldLength)
                    {
                        //treat decimal count as high byte
                        nFieldLength = (int)reader.ReadInt16();
                    }
                    else
                    {
                        //read field length as a single byte and skip the decimal-count byte
                        nFieldLength = (int)reader.ReadByte();
                        reader.ReadByte();
                    }
                }
                else
                {
                    //read field length as an unsigned byte.
                    nFieldLength = (int)reader.ReadByte();

                    //read decimal count as one byte
                    nDecimals = (int)reader.ReadByte();

                }


                //read the reserved bytes.
                reader.ReadBytes(14);

                //Create and add field to collection
                mFields.Add(new DbfColumn(sFieldName, DbfColumn.GetDbaseType(cDbaseType), nFieldLength, nDecimals, nDataOffset));

                // add up address information, you can not trust the address recorded in the DBF file...
                nDataOffset += nFieldLength;

            }

            // Last byte is a marker for the end of the field definitions.
            reader.ReadBytes(1);


            //read any extra header bytes...move to first record
            //equivalent to reader.BaseStream.Seek(mHeaderLength, SeekOrigin.Begin) except that we are not using the seek function since
            //we need to support streams that can not seek like web connections.
            int nExtraReadBytes = mHeaderLength - (FileDescriptorSize + (ColumnDescriptorSize * mFields.Count));
            if (nExtraReadBytes > 0)
                reader.ReadBytes(nExtraReadBytes);



            //if the stream is not forward-only, calculate number of records using file size, 
            //sometimes the header does not contain the correct record count
            //if we are reading the file from the web, we have to use ReadNext() functions anyway so
            //Number of records is not so important and we can trust the DBF to have it stored correctly.
            if (reader.BaseStream.CanSeek && mNumRecords == 0)
            {
                //notice here that we subtract file end byte which is supposed to be 0x1A,
                //but some DBF files are incorrectly written without this byte, so we round off to nearest integer.
                //that gives a correct result with or without ending byte.
                if (mRecordLength > 0)
                    mNumRecords = (uint)Math.Round(((double)(reader.BaseStream.Length - mHeaderLength - 1) / mRecordLength));

            }


            //lock header since it was read from a file. we don't want it modified because that would corrupt the file.
            //user can override this lock if really necessary by calling UnLock() method.
            mLocked = true;

            //clear dirty bit
            mIsDirty = false;
        }

License?

What license is this released under? Could you include a LICENSE file with the code?

Problem with column-name 11 chars long

If a column name is 11 characters long, some garbage characters are added to the name.
I have a column 'STOREMETHOD,C,64' which renders as 'STOREMETHODC|,C,64' when a new file is created.
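The field-name slot in a DBF column descriptor is 11 bytes, and dBASE field names are at most 10 characters with the remainder zero-filled; an 11-character name leaves no null terminator, which is likely why garbage follows the name. A hedged, self-contained sketch of one way a writer could build that slot safely (this is an illustration, not FastDBF's actual header-writing code):

```csharp
using System;
using System.Text;

static class FieldName
{
    // Builds the 11-byte field-name slot of a DBF column descriptor.
    // Names are truncated to 10 characters so at least one trailing
    // zero byte always remains, letting readers that scan for '\0'
    // stop at the right place.
    public static byte[] ToDescriptorBytes(string name)
    {
        if (name.Length > 10)
            name = name.Substring(0, 10); // leave room for the terminator

        byte[] slot = new byte[11];       // all zeros by default
        byte[] ascii = Encoding.ASCII.GetBytes(name);
        Array.Copy(ascii, slot, ascii.Length);
        return slot;
    }
}
```

With this scheme, the 11-character name 'STOREMETHOD' is stored as 'STOREMETHO' followed by a zero byte, instead of filling all 11 bytes and running into whatever follows in the descriptor.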

Cannot read files more than 2 GB, getting System.IO.IOException

The problem is that the index parameter of the method DbfFile.Read(Int32 index, DbfRecord oFillRecord) is an int. So when the line
long nSeekToPosition = _header.HeaderLength + (index * _header.RecordLength);
is executed, the multiplication is performed in 32-bit arithmetic and overflows. This results in System.IO.IOException: An attempt was made to move the file pointer before the beginning of the file.
It would be great to change the index type to long.
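The overflow happens because both operands are 32-bit, so the product wraps before it is ever widened to long. Casting one operand to long (or changing the index parameter to long, as suggested) makes the multiplication 64-bit. A self-contained demonstration with illustrative lengths:

```csharp
using System;

static class SeekOverflow
{
    static void Main()
    {
        int headerLength = 1000;
        int recordLength = 1200;
        int index = 2000000; // 2 million records of 1200 bytes each, ~2.4 GB

        // Broken: int * int wraps around before being assigned to the long.
        long broken = headerLength + (index * recordLength);

        // Fixed: the cast forces a 64-bit multiplication.
        long fixedPos = headerLength + ((long)index * recordLength);

        Console.WriteLine(broken < 0); // True: the seek position wrapped negative
        Console.WriteLine(fixedPos);   // 2400001000
    }
}
```

A negative seek position is exactly what produces the "move the file pointer before the beginning of the file" IOException in the report.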
