lightship's Issues

Killing a beacon twice ends up killing a different beacon

I've just run into a situation in which, for some reason (I can add the specific use case if needed), I ended up calling beacon.die() twice on the same beacon.

I was expecting it to either be an error or a no-op, but instead it killed the beacon in question as well as another, totally unrelated beacon that was also active.

The culprit is most likely this line. I believe we should check whether indexOf returns -1 instead of blindly splicing the array.
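For illustration, a minimal sketch of the guarded removal (the variable and function names are hypothetical, not taken from the lightship source):

const beacons = [];

const die = (beacon) => {
  const index = beacons.indexOf(beacon);

  // indexOf returns -1 when the beacon has already been removed; splice(-1, 1)
  // would then delete the last element of the array, i.e. an unrelated beacon.
  if (index === -1) {
    return;
  }

  beacons.splice(index, 1);
};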

Port is random and I can't force it with the config

Hello,

I'm trying to use lightship in my project, but I have a problem with the lightship HTTP port: it seems to be set randomly at service startup. I tried to force it with the LIGHTSHIP_PORT environment variable and with the port configuration option, but in all cases the port that is used is random:

const { createLightship } = require('lightship');

const lightship = createLightship({
    port : 9000,
});

lightship.queueBlockingTask(new Promise((resolve) => {
    setTimeout(() => {
        // Lightship service status will be `SERVER_IS_NOT_READY` until all promises
        // submitted to `queueBlockingTask` are resolved.
        resolve();
    }, 1000);
}));

module.exports = lightship;
{"context":{"package":"lightship","namespace":"factories/createLightship","logLevel":40},"message":"shutdown handlers are not used in the local mode","sequence":"0","time":1643961907523,"version":"2.0.0"}
{"context":{"package":"lightship","namespace":"factories/createLightship","logLevel":30},"message":"Lightship HTTP service is running on port 50848","sequence":"1","time":1643961908137,"version":"2.0.0"}
listening on port 8085
{"context":{"package":"lightship","namespace":"factories/createLightship","logLevel":30},"message":"signaling that the server is ready","sequence":"2","time":1643961908140,"version":"2.0.0"}
{"context":{"package":"lightship","namespace":"factories/createLightship","logLevel":20},"message":"service will not become immediately ready because there are blocking tasks","sequence":"3","time":1643961908140,"version":"2.0.0"}
{"context":{"package":"lightship","namespace":"factories/createLightship","logLevel":30},"message":"service became available for the first time","sequence":"4","time":1643961908526,"version":"2.0.0"}

lightship with express + http-terminator.

I have an express app that is going to be deployed into a k8s stack and was investigating the use of lightship for readiness checks.

The lightship readme includes the recommendation "Add a delay before stop handling incoming requests"…

Instead of this, would it be reasonable to use http-terminator inside a lightship shutdown handler? Or is that redundant?
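For reference, a minimal sketch of that combination, assuming the documented createHttpTerminator and registerShutdownHandler APIs (the port and delay values are arbitrary):

const express = require('express');
const { createHttpTerminator } = require('http-terminator');
const { createLightship } = require('lightship');

const app = express();
const lightship = createLightship();

const server = app.listen(8080, () => {
  lightship.signalReady();
});

const httpTerminator = createHttpTerminator({ server });

lightship.registerShutdownHandler(async () => {
  // Keep serving requests for a grace period while routing rules propagate...
  await new Promise((resolve) => setTimeout(resolve, 5000));

  // ...then stop accepting new connections and drain in-flight requests.
  await httpTerminator.terminate();
});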

Express dependency is not stable

Hi!
Currently the project depends on an alpha version of expressjs ("express": "^5.0.0-alpha.6").
Is it really necessary? It is hard to install into an existing app that uses a stable version without messing up the dependency tree and adding a lot of extra dependencies.

Change default path

Is it possible to change the default path from '/' to '/something/'?
For example:
/live -> /something/live
/health -> /something/health
/ready -> /something/ready

MODULE_NOT_FOUND

Looks like the version [email protected] introduces a change, and the files published to the npm registry don't contain the JavaScript file declared as the main entry point.

Wrong node version requirement

Version 6.0.0 states support for "Node >=10", but the code requires ">10" (85461c1), which is probably not intentional.

Currently 6.0.1 (latest) ends up with errors when trying to install with Node 10.x:

error [email protected]: The engine "node" is incompatible with this module. Expected version ">10". Got "10.18.1"

SyntaxError: Unexpected token on Node 6

Hi Gajus, thanks for fixing the Node 6 issue in roarr, but now we're seeing a similar one in lightship itself:

/usr/src/app/node_modules/lightship/dist/factories/createLightship.js:29
  const configuration = { ...defaultConfiguration,
                          ^^^
SyntaxError: Unexpected token ...
    at Object.exports.runInThisContext (vm.js:76:16)
    at Module._compile (module.js:542:28)
    at Object.Module._extensions..js (module.js:579:10)
    at Module.load (module.js:487:32)
    at tryModuleLoad (module.js:446:12)
    at Function.Module._load (module.js:438:3)
    at Module.require (module.js:497:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (/usr/src/app/node_modules/lightship/dist/index.js:13:47)
    at Module._compile (module.js:570:32)
    at Object.Module._extensions..js (module.js:579:10)
    at Module.load (module.js:487:32)
    at tryModuleLoad (module.js:446:12)
    at Function.Module._load (module.js:438:3)
    at Module.require (module.js:497:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (/usr/src/app/server.js:15:45)
    at Module._compile (module.js:570:32)
    at Object.Module._extensions..js (module.js:579:10)
    at Module.load (module.js:487:32)
    at tryModuleLoad (module.js:446:12)
    at Function.Module._load (module.js:438:3)
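For context, the object spread that Node 6 rejects could be replaced with Object.assign; a minimal sketch, assuming the spread merges defaultConfiguration with user-supplied options:

// Hypothetical stand-ins for the objects being merged:
const defaultConfiguration = {};
const userConfiguration = {};

// Equivalent merge without object spread syntax (supported on Node 6):
const configuration = Object.assign({}, defaultConfiguration, userConfiguration);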

Suggestion: add support for a startup probe

This is partially a suggestion and a question.

Kubernetes allows defining a startup probe in addition to liveness and readiness probes.

startupProbe: Indicates whether the application within the Container is started. All other probes are disabled if a startup probe is provided, until it succeeds. If the startup probe fails, the kubelet kills the Container, and the Container is subjected to its restart policy. If a Container does not provide a startup probe, the default state is Success.

In examples, the startup probe is often pointed at the /ready endpoint. I am wondering whether it would make sense to add another endpoint to register a startup probe against. Such a probe could, for instance, be used to manage long-running startup work or to validate credentials before polling for readiness. Lightship could perhaps help here by providing an API for it.

Generally, I am also curious to hear thoughts on this.

Feature request: custom health check method

Currently /live provides information about the current shutdown state of the server:
"SERVER_IS_NOT_SHUTTING_DOWN" or "SERVER_IS_SHUTTING_DOWN".

While this is ok for simple use-cases, it does not truly reflect the liveness state of every application. Imagine your application loses the connection to its database (or something similar). In this case, you don't want /live to respond with 200 and "SERVER_IS_NOT_SHUTTING_DOWN". Instead, you want the liveness probe to fail and the application to be restarted.

godaddy/terminus lets you define a custom health check method:

function healthCheck ({ state }) {
  // `state.isShuttingDown` (boolean) shows whether the server is shutting down or not
  return Promise.resolve(
    // optionally include a resolve value to be included as
    // info in the health check response
  )
}

and defaults to returning the state if no custom method was defined.
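For comparison, a minimal sketch of how such a health check plugs into terminus (pingDatabase is a hypothetical placeholder for an application-specific check):

const http = require('http');
const { createTerminus } = require('@godaddy/terminus');

// Hypothetical placeholder: reject to indicate the dependency is unhealthy.
const pingDatabase = async () => {};

const server = http.createServer((request, response) => {
  response.end('ok');
});

createTerminus(server, {
  healthChecks: {
    // The probe fails when the returned promise rejects, e.g. when the
    // database connection has been lost.
    '/live': async ({ state }) => {
      if (state.isShuttingDown) {
        throw new Error('server is shutting down');
      }

      return pingDatabase();
    },
  },
});

server.listen(9000);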

@gajus Is this a feature that could be implemented or would it be out-of-scope for this project?

Lightship with inversify-express-utils

Hi Gajus! Thanks for creating this lib!

I was trying to make it work in my current stack, but none of the /health, /live and /ready routes work.
Today, I use https://github.com/inversify/inversify-express-utils to handle my express application. I was following the README.md, but I couldn't make it work :(

Below is my index.ts:

import 'reflect-metadata';
import * as mongoose from 'mongoose';
import * as sourceMapSupport from 'source-map-support';
import { createLightship } from 'lightship';
import { delay } from 'bluebird';
import { InversifyExpressServer } from 'inversify-express-utils';
import Connection from './shared/class/Connection';
import env from './config/env';
import injectionContainer from './config/inversify.config';
import middleware from './shared/middlewares';
import REFERENCES from './config/inversify.references';

mongoose.set('useNewUrlParser', true);
mongoose.set('useCreateIndex', true);
mongoose.set('useUnifiedTopology', true);

sourceMapSupport.install();
process.on('unhandledRejection', console.log);

const server = new InversifyExpressServer(injectionContainer);
const mongoConn = injectionContainer.get<Connection>(REFERENCES.Connection);

server.setConfig(app => {
  middleware.initMiddlewares(app);
  middleware.initCustomRoutes(app);
});

server.setErrorConfig(app => {
  middleware.initExceptionMiddlewares(app);
});

const bootstrapedServer = server.build();

bootstrapedServer.listen(env.server_port, async () => {
  await mongoConn.connect();
  lightship.signalReady();
  console.log(`Opening the gates in ${env.server_port}`);
});

const lightship = createLightship();
lightship.registerShutdownHandler(async () => {
  await delay(30 * 1000);
  await mongoConn.disconnect();
});

I couldn't find how to pass a reference to my express app to lightship. Is this possible?

Thanks!

Clarify best practices in readme

Hi, and thanks for creating this great library!

The readme has this under the "best practices" section:

It is important that you do not cease to handle new incoming requests immediately after receiving the shutdown signal. This is because there is a high probability of the SIGTERM signal being sent well before the iptables rules are updated on all nodes. The result is that the pod may still receive client requests after it has received the termination signal. If the app stops accepting connections immediately, it causes clients to receive "connection refused" types of errors.

I find this a little disconnected from the rest of the readme: is this at all related to the options you set on the lightship instance, e.g. shutdownDelay or similar? Should you manually set a timer inside a registerShutdownHandler callback before calling .close() on an HTTP server? I think some more context would be very beneficial to users :)
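For what it's worth, a minimal sketch of the pattern that paragraph seems to imply: delay inside the shutdown handler, then close the HTTP server (the port and delay values are arbitrary, and a plain Node.js HTTP server is assumed):

const http = require('http');
const { createLightship } = require('lightship');

const server = http.createServer((request, response) => {
  response.end('ok');
});
server.listen(8080);

const lightship = createLightship();
lightship.signalReady();

lightship.registerShutdownHandler(async () => {
  // Keep accepting connections for a grace period so that requests routed
  // by not-yet-updated iptables rules are not refused...
  await new Promise((resolve) => setTimeout(resolve, 10000));

  // ...then stop accepting new connections; close() invokes its callback
  // once the existing connections have ended.
  await new Promise((resolve) => server.close(resolve));
});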

Option to pass `Roarr` instance to lightship?

Hey,

I am using Roarr in my application and enhance it with a couple of simple middlewares. It would be super-helpful if I could pass this Roarr instance to lightship.

Do you think that is doable?

Btw: amazing work! Lightship is great software that makes running my applications in Kubernetes a bliss 👌

lightship module has been broken since yesterday

I was getting the MODULE_NOT_FOUND issue reported previously, and on the latest version I'm getting this now:

import { createLightship } from 'lightship'
         ^^^^^^^^^^^^^^^
SyntaxError: Named export 'createLightship' not found. The requested module 'lightship' is a CommonJS module, which may not support all module.exports as named exports.
CommonJS modules can always be imported via the default export, for example using:

import pkg from 'lightship';
const { createLightship } = pkg;

    at ModuleJob._instantiate (node:internal/modules/esm/module_job:105:21)
    at async ModuleJob.run (node:internal/modules/esm/module_job:151:5)
    at async Loader.import (node:internal/modules/esm/loader:166:24)
    at async Object.loadESM (node:internal/process/esm_loader:68:5)

Looking at the changelog, I think 6.6.2 is when it entered this broken state; reverting to 6.6.1 is a fine workaround for now.

6.6.2+ also added about 1000 lines of new dependencies in lockfiles across our services.

Prefix or custom urls for endpoints

Hello

I have a requirement to put all health check endpoints behind an /internal prefix.

What are your opinions on allowing users to either customise the endpoints or supply a prefix?

I am happy to create a PR with proposed changes, but I thought you might have had specific reasons for not allowing custom endpoints. If you do, I'd be curious to hear them.

Thanks!

Expose state logging

First of all, thanks for sharing this excellent package for application lifecycle management and serving endpoints for Kubernetes health probes.

We at Grandvision are investigating whether we can adopt Lightship. What is holding us back is that there is no way to inject our Winston-based logging infrastructure, and the information logged by lightship is very useful for debugging and analysis. Could you consider either supporting other loggers or introducing callbacks or events to inform consumers of the package about state changes?

The automated release is failing 🚨

🚨 The automated release from the main branch failed. 🚨

I recommend you give this issue a high priority, so other packages depending on you can benefit from your bug fixes and new features again.

You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can fix this πŸ’ͺ.

Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.

Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the main branch. You can also manually restart the failed CI job that runs semantic-release.

If you are not sure how to resolve this, here are some links that can help you:

If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.


Invalid npm token.

The npm token configured in the NPM_TOKEN environment variable must be a valid token that allows publishing to the registry https://registry.npmjs.org/.

If you are using Two Factor Authentication for your account, set its level to "Authorization only" in your account settings. semantic-release cannot publish with the default "Authorization and writes" level.

Please make sure to set the NPM_TOKEN environment variable in your CI with the exact value of the npm token.


Good luck with your project ✨

Your semantic-release bot πŸ“¦πŸš€

Use correct listen() function from Fastify

When starting a Lightship instance, the following error gets logged from Fastify:

[FSTDEP011] FastifyDeprecation: Variadic listen method is deprecated. Please use ".listen(optionsObject)" instead. The variadic signature will be removed in `fastify@5`.

Lightship should use the object version of listen, as mentioned in the log message. It is also mentioned in Fastify's migrate-to-v4 guide, and the old variants will be removed completely in v5.
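For reference, a minimal sketch of the two signatures, based on the deprecation message above (the port and host values are arbitrary):

const fastify = require('fastify');

const app = fastify();

const start = async () => {
  // Deprecated variadic signature that triggers FSTDEP011:
  // await app.listen(9000, '0.0.0.0');

  // Object signature expected by Fastify v4 (the variadic form is removed in v5):
  await app.listen({ port: 9000, host: '0.0.0.0' });
};

start();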

CommonJS issue

As I understand it, the lightship library is distributed as both an ES module and a CommonJS module. However, it depends on serialize-error ^11.0.0, which since v9 has a breaking change and is a pure ES module: https://github.com/sindresorhus/serialize-error/releases/tag/v9.0.0

So it looks like lightship no longer supports CommonJS?

UPDATE: In my project I get this error:

Error [ERR_REQUIRE_ESM]: require() of ES Module ...\node_modules\lightship\dist\cjs\index.js from ...\SignalingAdapter.js not supported.
index.js is treated as an ES module file as it is a .js file whose nearest parent package.json contains "type": "module" which declares all .js files in that package scope as ES modules.
Instead either rename index.js to end in .cjs, change the requiring code to use dynamic import() which is available in all CommonJS modules, or change "type": "module" to "type": "commonjs" in ...\node_modules\lightship\package.json to treat all .js files as CommonJS (using .mjs for all ES modules instead).

I can run the newest version of lightship only with some hacks. In my project's package.json I added the following (npm then overrides the version of the lib):

  "overrides": {
    "serialize-error": "^8.1.0"
  }

Additionally, I had to manually edit the installed ...\node_modules\lightship\package.json to remove the line:

  "type": "module",

But this workaround is not acceptable in a production environment :)

UPDATE 2: It seems my issue is a duplicate of #67, but I think I have provided valuable information about where the root cause is.

signalNotReady() not having any effect in shutdownHandler

Hey there, I'm having a hard time understanding why, when I call lightship.signalNotReady() from within a shutdown handler, I get the log "server is already shutting down". That might be correct, but the problem is that requests to the /ready endpoint are still returning 200 OK, meaning my container is still accepting requests when it should be shutting down.

Here's some sample code:

lightship.registerShutdownHandler(async () => {
  logger.info(
    "in lightship shutdown handler.."
  );
  lightship.signalNotReady();
  // this is when I see the log 'server already shutting down', yet the server is still 'ready' when it shouldn't be..
});

If this is normal behaviour, then my question becomes: how do I tell lightship to signal not-ready after I receive a SIGTERM, using lightship's shutdown handler callback?

Problem importing lightship

In my TypeScript ESM project, I get the following error when trying to use import { createLightship } from 'lightship';:

Cannot find module 'lightship' or its corresponding type declarations.ts(2307)

One way to fix this would be to edit my tsconfig.json file:

"moduleResolution": "nodenext",   // or "node16"       

The problem with this is that now a lot of other dependencies in my project will start to break, particularly the ones that use default exports (see microsoft/TypeScript#50175).

One way to solve the problem would be to add "main": "./dist/index.js" to Lightship's package.json. This way, the exports section would still take precedence, but main would make sure that environments with "moduleResolution": "node" would still work.

Would it be possible to add the main prop to package.json?

Cannot test lightship endpoints when running in local mode

When lightship runs in local mode it uses a random port, but it does not provide any way of identifying what that port is, so it's not possible to know which port to check if you want to confirm its behaviour during development, other than adding extra code to disable the Kubernetes detection.

Things that should probably be added:

  1. Include in the log output the address the server is listening on
  2. Include in the return object the string returned by app.listen

A bit of a tangent: personally, I'd like to put my readiness/liveness probes on the same port as my Prometheus metrics, and then output something like the Next.js dev server's startup banner with the URLs to these.

So I would be equally happy with the ability to provide createLightship with the Fastify app to use instead of having it run createFastify and app.listen internally.

Is lightship even maintained?

This project does not seem to have any activity whatsoever, and bug fixes and active PRs seem to be ignored. Is this project going to be maintained?

Sentry as a peer dependency

Hi @gajus,

We have been using lightship for a while and it's been very helpful for us!
I was just wondering whether @sentry/node could be made a peer dependency and also upgraded to the latest version?

We have to set resolutions to prevent crashes due to breaking changes in newer versions of Sentry:
"resolutions": { "@sentry/node": "^7.12.1", }

Happy to open a PR if you'd like.

502 Bad gateway

I've got a WFE running and I'm implementing this in Kubernetes with the standard probe configuration; however, as soon as lightship is added I receive a 502 when attempting to access my site.
The container logs say the server is ready and started on port 80.

Am I missing something?

Thanks!

Does not generate valid CJS

The current version does not generate a valid CommonJS distribution. This is because the tsconfig file for the CJS build does not override the default module value:

// tsconfig.build-cjs.json
{
  "compilerOptions": {
    "noEmit": false,
    "outDir": "dist/cjs"
  },
  "extends": "./tsconfig.json",
  "include": [
    "src"
  ]
}

This can be fixed by setting compilerOptions.module to CommonJS.
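For illustration, the corrected config with that single addition would look like this:

// tsconfig.build-cjs.json
{
  "compilerOptions": {
    "module": "CommonJS",
    "noEmit": false,
    "outDir": "dist/cjs"
  },
  "extends": "./tsconfig.json",
  "include": [
    "src"
  ]
}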

How to correctly handle errors?

Hi guys,

I'm a little bit confused about how we should handle a critical error that the service can't recover from.

Until now, I was using signalNotReady when I caught a critical exception, thinking that Kubernetes would kill my pod by sending SIGTERM and performing the graceful shutdown. But it seems it doesn't work like this: the container is not ready, but Kubernetes is not killing it.

I have changed lightship.signalNotReady() to lightship.shutdown() and now the container gets killed by itself, but looking at the lightship code I see that when we use shutdown() it sets signalReady to true again, so this will make Kubernetes route requests to this pod while it is shutting down... am I right?

I'm confused about how I should handle a critical error so that the pod restarts gracefully and no more requests are routed to this container. Any advice?

Thanks a lot!

Typescript declarations

Hi there πŸ‘‹

It would be awesome if you could expose TypeScript declarations.
I saw that the codebase is already typed using Flow.

If you'd be interested in converting the codebase to TS, I could look into opening a PR.

If not, maybe it's possible to generate d.ts files from the Flow typings? 🤔

Should the business server handle graceful shutdown?

The graceful shutdown is applied to the newly created express app (server):

const server = app.listen(modeIsLocal ? undefined : configuration.port, () => {
  log.info('Lightship HTTP service is running on port %s', server.address().port);
});

const httpTerminator = createHttpTerminator({
  server,
});

What about the business one? Should we handle it manually (with createHttpTerminator)?

queueBlockingTask has no effect on the /ready endpoint

A small example:

const lightship: Lightship = createLightship(configuration);
// GET /ready --> 500
lightship.signalReady();
// GET /ready --> 200
lightship.queueBlockingTask(waitForever());
// actual: GET /ready --> 200
// expected: GET /ready --> 500

http-terminator: Please verify that the package.json has a valid "main" entry.

Node version: v14.17.0
Lightship version: 6.7.2
OS: macOS 11.5.2

import { createLightship } from 'lightship'

const lightship = createLightship()

Running my application throws this error:

[ERROR] 19:03:22 Error: Cannot find module '/Users/project/node_modules/http-terminator/dist/src/index.js'. 
Please verify that the package.json has a valid "main" entry

Maybe the error comes from the last update in http-terminator 3.0.1: https://github.com/gajus/http-terminator/releases/tag/v3.0.1

Get Reference for Express

Hi,
Is there any way to get a reference to the express app from lightship.server?
I would like to add an endpoint to expose some metrics using the same express app.

Version 7.0.1 incompatible with ubi8/nodejs12

base image: https://catalog.redhat.com/software/containers/ubi8/nodejs-12/5d3fff015a13461f5fb8635a
lightship version: 7.0.1

error message:

/opt/app-root/src/lightshipServer.js:18
lightship.registerShutdownHandler(() => {
          ^

TypeError: lightship.registerShutdownHandler is not a function
    at Object.<anonymous> (/opt/app-root/src/lightshipServer.js:18:11)
    at Module._compile (internal/modules/cjs/loader.js:999:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)
    at Module.load (internal/modules/cjs/loader.js:863:32)
    at Function.Module._load (internal/modules/cjs/loader.js:708:14)
    at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:60:12)
    at internal/main/run_main_module.js:17:47

Always "process did not exit on its own; invetigate what is keeping the event loop active"

Hello,

this code:

const http = require("http");
const { createLightship } = require("lightship");

const server = http.createServer();
server.listen(3000);

const lightship = createLightship({ detectKubernetes: false });

lightship.registerShutdownHandler(() => {
  console.log("Closing HTTP server...");
  server.close();
});

lightship.signalReady();

On SIGINT (Ctrl+C), it always produces:

{"context":{"package":"lightship","namespace":"factories/createLightship","logLevel":40},"message":"process did not exit on its own; invetigate what is keeping the event loop active","sequence":3,"time":1556275546783,"version":"1.0.0"}

after 8.0.0 not possible to use in CJS project

After 8.0.0 it is not possible to use lightship in a CJS project, because the package has no export definition for CJS require. We are not able to move beyond version 7.

Issue: Error [ERR_PACKAGE_PATH_NOT_EXPORTED]: No "exports" main defined in /usr/app/bin/node_modules/lightship/package.json

Please add a CJS build too and redefine the exports similarly to:

"exports": {
    "require": "./cjs/index.js",
    "import": "./esm/index.js"
  },
