
logdna-cos's Issues

The Logpush job feature has changed on CIS

As mentioned in the IBM Cloud docs, the Logpush job currently pushes a new log package to the COS bucket every 30 seconds or every 100,000 logs, whichever comes first. For a solution with high traffic, more than one file might be pushed per 30-second period or per 100,000 logs. So, to handle a higher number of files in the COS bucket, logdna-cos should be able to handle each package separately (in a serverless model).
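A minimal sketch of that per-package model, assuming an IBM Cloud Functions action bound to a COS object-write trigger; the event field names (`params.bucket`, `params.key`) and the `processPackage` helper are illustrative, not part of the current code:

```javascript
// Placeholder for the real work: download the package, parse it,
// and forward the lines to LogDNA.
async function processPackage(bucket, key) {
  console.log(`processing ${bucket}/${key}`);
}

// One invocation per new object: several packages arriving in the same
// 30-second window are handled in parallel, one action instance each.
async function main(params) {
  await processPackage(params.bucket, params.key);
  return { bucket: params.bucket, key: params.key, status: 'processed' };
}

exports.main = main;
```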

Package all logs before making the HTTP requests to LogDNA

The source code needs to complete the whole packaging process (by default, 20,000 logs per package) before it starts sending each package to LogDNA.

The AS-IS scenario is different: it adds 20,000 logs to a single package and immediately sends it to LogDNA, repeating this for every 20,000 logs instead of packaging everything first. When something goes wrong during the "add logs and then send" process, such as an exception that was not mapped previously, the function exits and redoes everything on the next run, which duplicates logs in LogDNA (even though the IBM Log Analysis with LogDNA service only retains logs for 7, 14, or 30 days).
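A minimal sketch of the proposed order of operations, with `sendToLogDNA` standing in for the real Ingest API call and the 20,000-line default hard-coded for illustration:

```javascript
const LINES_PER_PACKAGE = 20000; // mirrors the default package size

// Placeholder for the actual Ingest API request.
async function sendToLogDNA(pkg) {
  console.log(`sending package of ${pkg.length} lines`);
}

// Split every log line into packages up front.
function buildPackages(lines) {
  const packages = [];
  for (let i = 0; i < lines.length; i += LINES_PER_PACKAGE) {
    packages.push(lines.slice(i, i + LINES_PER_PACKAGE));
  }
  return packages;
}

async function forwardAll(lines) {
  // Packaging is finished before the first request goes out, so an
  // exception while sending never leaves the file half-packaged.
  const packages = buildPackages(lines);
  for (const pkg of packages) {
    await sendToLogDNA(pkg); // only network I/O can fail past this point
  }
}
```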

LogDNA Ingest API limits requests to a maximum of 10 MB

If you send a payload larger than 10 MB in a single request, LogDNA returns an error. The size of each log line depends on the number of fields included and the request result from IBM Cloud Internet Services (there is an example in README.md with all fields available on CIS).
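One way to respect that limit is to batch by serialized size rather than by line count, since line size varies with the CIS fields present. The sketch below is an illustration; `MAX_BYTES` is an assumed value chosen to leave headroom under the documented 10 MB cap:

```javascript
const MAX_BYTES = 9 * 1024 * 1024; // stay safely under the 10 MB limit

// Group lines into batches whose serialized size stays below MAX_BYTES.
// Note: a single line larger than MAX_BYTES would still form an oversized
// batch and would need separate handling.
function splitBySize(lines) {
  const batches = [];
  let current = [];
  let size = 0;
  for (const line of lines) {
    const lineSize = Buffer.byteLength(JSON.stringify(line));
    if (size + lineSize > MAX_BYTES && current.length > 0) {
      batches.push(current); // flush before the batch would overflow
      current = [];
      size = 0;
    }
    current.push(line);
    size += lineSize;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```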

Replace the buffer-split lib with a local function

As part of the improvement process: buffer-split is a small package the function uses to split the log lines in a Buffer by a delimiter. It should be replaced by local code so the function can run on IBM Cloud Functions without the additional step of zipping the node_modules folder.
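A local replacement could be as small as the following sketch, which uses only core Buffer methods; `splitBuffer` is a hypothetical name, not the function's current API:

```javascript
// Split a Buffer on a delimiter, returning an array of Buffers.
function splitBuffer(buf, delimiter) {
  const delim = Buffer.from(delimiter);
  const parts = [];
  let start = 0;
  let index;
  while ((index = buf.indexOf(delim, start)) !== -1) {
    parts.push(buf.subarray(start, index));
    start = index + delim.length;
  }
  parts.push(buf.subarray(start)); // remainder after the last delimiter
  return parts;
}

// e.g. split a log package into one Buffer per line:
// const lines = splitBuffer(contents, '\n');
```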

Copy the object instead of deploying it from the local environment

The current process is: download the object, parse the JSON, send it to LogDNA, upload the object, and delete the original. IBM Cloud users can use the Private Endpoint to avoid any Public Bandwidth cost, but for those who are not using IBM Cloud, the round trip can generate an extra charge.

So, to improve the process, the function should copy the object within COS instead of uploading the local copy back.
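With the S3-compatible ibm-cos-sdk, a server-side copy could look like the sketch below: COS moves the data between buckets itself, so the bytes never transit the function's network connection. The bucket names and the `archivePackage` helper are illustrative:

```javascript
const COS = require('ibm-cos-sdk');

// Endpoint and credentials for your COS instance go here.
const cos = new COS.S3({ /* endpoint, apiKeyId, serviceInstanceId */ });

async function archivePackage(srcBucket, key, destBucket) {
  await cos.copyObject({
    Bucket: destBucket,
    CopySource: `${srcBucket}/${key}`, // source given as "bucket/key"
    Key: key,
  }).promise();

  // Remove the original only after the copy succeeds.
  await cos.deleteObject({ Bucket: srcBucket, Key: key }).promise();
}
```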
