
artillery-core's People

Contributors

brianjmiller, dt-atkinson, ebaioni, erikerikson, gboysko, gwsii, hassy, invictusmb, jordan-brough, kengoldfarb, kjgorman, ksplache, lordjabez, lukebond, marcbachmann, markgandy, menzow, n3ps, outsideris, pavelkucera, ragecryx, samueltallent, steveschnepp, tatey, tejohnso, tomgco, tresor616, zcei


artillery-core's Issues

Artillery is connecting twice with socket.io

When I run a scenario with socket.io, my 'connection' handler and authentication middleware are triggered twice (instead of once).

You can test it by using this config:

phases:
    - duration: 1
      arrivalRate: 1

Add a console.log() in your socket.on('connection', () => console.log('connection')) handler. It will be triggered twice instead of once.
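
For reference, a minimal server sketch for observing this (assuming a plain socket.io server on the default namespace; the port is arbitrary):

// minimal-server.js - count incoming socket.io connections (illustrative sketch)
const io = require('socket.io')(3000);
let connections = 0;

io.on('connection', (socket) => {
  connections += 1;
  console.log(`connection #${connections} from ${socket.id}`);
});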

Send binary packet through WS?

I want to integrate artillery-core with a WS application that uses Protobuf for encoding the messages. Is it possible to send binary data through the WS using artillery?

[Feature Request] Ability to use variables in `think` action

My use case for artillery involves modelling realistic usage of my API, and that means randomizing the delays between calls to approximate different user behavior. In order to support this, I'd like artillery to be able to vary the amount of time waiting in a think action based on either a random variable or a CSV file.

I already have the code written to support this feature, and would be happy to submit a PR if others find this feature useful.

(The jitter feature proposed by another user is nifty, but is not as flexible as this change would be).
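
A sketch of what the proposed syntax might look like (illustrative only; thinkTime, the CSV file, and the endpoint are hypothetical names, and variable-driven think is exactly the feature being requested here):

config:
  target: "http://localhost:3000"
  payload:
    path: "think-times.csv"    # hypothetical CSV with one think duration per row
    fields:
      - "thinkTime"
  phases:
    - duration: 60
      arrivalRate: 5

scenarios:
  - flow:
      - get:
          url: "/api/resource"
      - think: "{{ thinkTime }}"   # proposed: resolve the variable before waiting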

beforeRequest's requestParams can't be customized.

- post:
    url: '/login'
    beforeRequest: 'setDeviceAsJson'
    json:
      grant_type: 'password'
      email: '{{ username }}'
      password: '{{ password }}'
      device: '{{ device }}'

Here, device is JSON, but the request stringifies it, so I want to customize device in the beforeRequest hook.

According to the documentation, the requestParams object is meant for this:

requestParams is an object given to the Request library. Use this parameter to customize what is sent in the request (headers, body, cookies etc)

// process.js
module.exports = {
  setDeviceAsJson: (requestParams, context, ee, next) => {
    requestParams.json.device = JSON.parse(context.vars.device);
    return next();
  },
};

However, the request's json wasn't changed, even though requestParams was changed.

Here, requestParams is overwritten after the beforeRequest hook runs. This change was introduced in this commit, but I can't understand the intention behind it.

requestParams looks useless in the hook.

Not able to send Form Data with request

I'm not able to use artillery.io to send form data with a POST request for my test. I have used the following with no luck:
json: "firstName': 'Jerry', 'lastName':'Winston', 'zipCode': '12345', 'expertiseLevel': 'BEGINNER'}"
headers:
  authorization: "Bearer {{access_token}}"
  content-type: "multipart/form-data, boundary=------WebKitFormBoundaryKPzoKfSfBvBzfvZa"
  accept: "application/json"

Also I have tried the following with no luck:
formData: "firstName': 'Jerry', 'lastName':'Winston', 'zipCode': '12345', 'expertiseLevel': 'BEGINNER'}"
headers:
  authorization: "Bearer {{access_token}}"
  content-type: "multipart/form-data, boundary=------WebKitFormBoundaryKPzoKfSfBvBzfvZa"

Please advise what I can do.
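
For reference, the request library itself takes multipart data as a key/value mapping; a YAML equivalent of that shape would look roughly like this (a sketch only — whether artillery-core actually forwards formData to request this way is exactly what's in question here, and the endpoint is a placeholder):

post:
  url: "/users"                          # hypothetical endpoint
  headers:
    authorization: "Bearer {{ access_token }}"
    accept: "application/json"
    # no manual multipart content-type: the library sets the boundary itself
  formData:
    firstName: "Jerry"
    lastName: "Winston"
    zipCode: "12345"
    expertiseLevel: "BEGINNER"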

Remember cookies for following requests in scenario spec

Consider a scenario where a user needs to log in (e.g. a POST request) and then go to a home page (e.g. a GET request), where the home page requires the cookie set during the login process in order to load correctly.
e.g.

 {
  "config": {
    "target": "http://localhost:8080",
    "phases": [{
      "duration": 10,
      "arrivalRate": 10
    }],
   "jar": true // or false for all the scenarios by default which can be overridden at request level under scenarios
  },
  "scenarios": [{
    "flow": [{
      "post": {
        "url": "/login",
        "headers": {
          "Content-Type": "application/x-www-form-urlencoded"
        },
        "jar": true, // to start persisting cookies 
        "body": "username=awsomeuser&password=password"
      }
    },{
      "think": 1
    }, {
      "get": {
        "url": "/home",
        "headers":{
          "Content-Type": "application/x-www-form-urlencoded"
        },
        "jar": false // to stop persisting cookies
      }
    }]
  }]
} 

Perform http & socket.io tasks in 1 flow

Web applications designed with both HTTP and socket.io often have a sequence of events (i.e. a flow) that requires HTTP posts and gets as well as socket.io calls. Currently, you can have only one engine per flow, so you cannot mix HTTP and socket.io within a flow. Example:

  "scenarios": [
    {
      "flow": [
        {"post": { "url": "/somepost", "capture": { "json": "$.data", "as": "mydata" } }},
        {"emit": { "channel": "echo", "data": "{{mydata}}", "response": { "channel": "echoed" } }}
        {"think": 1},
      ]
    }
  ]

Double send in socket.io engine

When using Artillery the other day I ran into something that doesn't fit my mental model. My scenario looked like this:

config:
  target: "ws://localhost:3004"
  phases:
    - duration: 1
      arrivalRate: 1
scenarios:
  - name: Previous Winner
    engine: socketio
    flow:
      - emit:
          channel: "rewards.requests"
          data:
            messageType: rewards.previous_winner
            correlationId: "71efe5a9-0d92-493e-abe2-8fae5a88088b"
          response:
            channel: "rewards.response"
            match:
              - json: "$.status"
                value: "success"

Given this scenario, if I run Artillery I would expect to see a single request hit the server and then the server issue a response that is checked. However, what I see is that two requests are sent. If I remove the response matcher section, then only a single request is sent.

I've tracked this down to the following line in engine_socketio.js:

       // No acknowledge data is expected, so emit without a listener
        socketio.emit(outgoing.channel, outgoing.data);
        markEndTime(ee, context, startedAt);
        return callback(null, context);

This change seems to have landed as part of #181, and I presume acknowledgements refers to this part of the socket.io documentation. Given that in my situation I'm not looking for acknowledgements, could this line possibly be a bug? If I remove it and rerun my scenario, it works as I would expect.

socketio response won't match json

If my test json looks like this:

{"emit": { "channel": "echo", "data": "hello", "response": { "channel": "echo", "data": "{\"msg\":\"hello\"}"} }}

processResponse in engine_socketio.js seems to fail matching a string with an object even though the debug output looks like it would be a match.

Maybe I'm doing something wrong?

Match json fails for numeric value

match:
  json: "$.numberId"
  value: "{{ numberId }}"

The JSONPath here returns a numeric value.
The variable value is always returned as a string (using string replace).
The match exact comparison fails comparing the JSONPath number and the expected string.

If match also allowed transform (like capture), the json number could be converted to a string to pass the match comparison.

Or renderVariables could have a special case to return the exact variable value (if replacing only one variable).
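
To illustrate, the proposed match transform might look something like this (hypothetical syntax only; the transform key on match does not exist today, and the expression semantics are assumed to mirror capture's transform):

match:
  json: "$.numberId"
  transform: "String(this)"   # hypothetical: coerce the extracted number to a string
  value: "{{ numberId }}"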

form not sending data

I am trying to send this data over

      - post:
          url: "/logout"
          form:
            token: {{ refresh_token }}
            token_type_hint: refresh_token
          headers:
            Authorization: Basic XXXXXXXXXXXX
            Accept: application/json
            # Content-Type: application/x-www-form-urlencoded
            Origin: "{{ origin }}"

but with HTTP debug

2022-03-09T15:54:29.215Z http request: {
  "url": "https://api.pnb.devhaus.net/logout",
  "method": "POST",
  "headers": {
    "user-agent": "Artillery (https://artillery.io)",
    "authorization": "Basic XXXXXXXX"
    "accept": "application/json",
    "origin": "https://localhost:3000"
  }
}

I don't see the form body in the request. The other requests have a "json" body; this one happens to use form encoding only.

Artillery statsD plugin not working

Hi,

I am not seeing any artillery metrics flowing through my StatsD server. I did a few weeks ago, but then I upgraded artillery, and now I don't.

I see this when doing an npm install -g artillery

[email protected] UNMET PEER DEPENDENCY bower@*

Any idea? How can I go about debugging these curated plugins? It would be nice if the plugins printed useful error messages.

supporting multipart form data

Aren't you passing formData with a read stream?
Like this:

           formData: {
                foo: {
                    value: fs.createReadStream(filePath),
                    options: {
                        filename: fileName,
                        contentType: 'application/octet-stream'
                    }
                }
            }

This is how formData works.
I tried this:

config:
  target: 'http://localhost:8000'
  files:
    - "@../file.json"
  phases:
    - duration: 10
      arrivalRate: 1
scenarios:
  - flow:
    - post:
        url: "/bla"
        formData:
          foo:  "@../file.json"

Is my spec right?
Do I need to add something to it?
My server gives 400 bad request as it is unable to find foo.

This is a perfectly working API and is tested functionally. There is some issue with the way formData is sent via request.

Any help is much appreciated.

npm install fails for bcrypt

[email protected] seems to work; any reason for the strict requirement for [email protected]?

npm ERR! Darwin 15.4.0
npm ERR! argv "/usr/local/bin/iojs" "/usr/local/bin/npm" "i"
npm ERR! node v6.2.2
npm ERR! npm  v3.9.5
npm ERR! code ELIFECYCLE

npm ERR! [email protected] install: `node-gyp rebuild`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] install script 'node-gyp rebuild'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the bcrypt package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     node-gyp rebuild
npm ERR! You can get information on how to open an issue for this project with:
npm ERR!     npm bugs bcrypt
npm ERR! Or if that isn't available, you can get their info via:
npm ERR!     npm owner ls bcrypt
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR!     /Users/sz/src/shoreditch-ops/artillery-core/npm-debug.log

Get contexts for postmortem

It would be great if there was an event emitted when a context is ready, with its _uid and data. That way the information could be stashed in a database, a file, or somewhere else for postmortem efforts.

E.g. I know I have a bad account in a list of accounts, but the only way to find it right now is to look at each one manually until I find the wrong one. With the contexts I could look at the context request, align it to the _uid in the errors, and I'd have something to look at.

Feature Proposal: File Upload

Hej guys,

I'd love to use artillery for testing file uploads as well. I do know there's an open issue (artilleryio/artillery#106).

But as this is a thing that needs to be implemented here, and I couldn't find an already discussed proposal,
I'd love to get some feedback, as I want to start working on this 🙃


File Upload

Rationale

Uploading files is a frequent HTTP use-case, but currently not supported very well within artillery-core.

To enable users to follow a unified approach, this proposal adds a cURL-style way of indicating file uploads.

In the background, each file will be passed to request using a ReadableStream, to allow tests with potentially huge file sizes.

Changes

Indicate file payload by prepending the path with @, e.g. @path/to/file.md.

Input types:

  • single file
  • list of files
  • list of objects with key and path values (form multipart)

File resolution:

If the @ sign (file indicator) is immediately followed by a /, file resolution happens from root of disk.

Otherwise, resolution will be attempted in the following order (see the sketch after this list):

  1. process.cwd + your path
  2. __dirname + your path
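
A rough sketch of that resolution order (illustrative only; resolveUploadPath is a hypothetical helper, not the eventual implementation):

// resolveUploadPath.js - sketch of the proposed @-path resolution
const fs = require('fs');
const path = require('path');

function resolveUploadPath(spec) {
  const p = spec.slice(1);               // strip the leading "@"
  if (p.startsWith('/')) {
    return p;                            // absolute: resolve from the root of the disk
  }
  const candidates = [
    path.join(process.cwd(), p),         // 1. process.cwd() + your path
    path.join(__dirname, p)              // 2. __dirname + your path
  ];
  return candidates.find((candidate) => fs.existsSync(candidate));
}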

Backwards compatibility: ⚠️

People who previously started their custom body payload with an at-sign (@) will need to adjust their descriptors.

I can't really figure out why you would do that, but as it wasn't disallowed previously I'd rather go with a semver-valid major bump / breaking change.

Examples of Input

Single File

post:
  url: "/test"
  body: "@selfie.jpg"

Multiple Files

post:
  url: "/test"
  body:
    - "@selfie.jpg"
    - "@selfie2.png"

Multiple Files - Multipart Form

post:
  url: "/test"
  body:
    - key: foo
      path: "@selfie.jpg"
    - key: bar
      path: "@selfie2.png"

Known vulnerability in npm dependency ws

Module: ws

Issue: DoS due to excessively large websocket message

Overview: ws is a "simple to use, blazing fast and thoroughly tested websocket client, server and console for node.js, up-to-date against RFC-6455" By sending an overly long websocket payload to a ws server, it is possible to crash the node process.

Remediation: Update to version 1.1.1 of ws, or if that is not possible, set the maxpayload option for the ws server - make sure the value is less than 256MB.

References: nodejs/node#7388

Known vulnerability in npm dependency tough-cookie

Module: tough-cookie
Published: July 22nd, 2016
Reported by: David Kirchner
CVE-2016-1000232
Vulnerable: >=0.9.7 <=2.2.2
Patched: >=2.3.0
CVSS: 7.5 High
Overview: Tough-cookie is a cookie parsing and management library.

Versions 0.9.7 through 2.2.2 contain a vulnerable regular expression that, under certain conditions involving long strings of semicolons in the "Set-Cookie" header, causes the event loop to block for excessive amounts of time.

Remediation: Upgrade to at least version 2.3.0

A plugin using the "stats" event does not get an event for the last set of requests.

Comparing the aggregate.latencies.length for the "done" event, it appears that the number of samples is more than what was reported on the "stats" event.

For a plugin that wants to report latencies before the end of the test, using the "stats" event is insufficient as not all of the results will be reported and it will be necessary to handle the "done" event and determine which events have not been recorded/reported.

Our suggestion is that a final "stats" event should be raised during "done" processing, so that the balance of the events which have not been reported via the "stats" event can be provided.

Better handling of failed module loading.

When plug-in code throws an error, Artillery reports the plug-in as 'unable to load' and continues with the test. This is a problem for a couple of reasons:

  1. Assuming the user needed that plug-in to perform testing, attempting to continue is over-optimistic. Artillery should exit at that point and refuse to test.

  2. Artillery currently does not show any information about how or why the plug-in could not be loaded; at a minimum, if an exception was thrown, its message and stack should be reported.

Data got emit twice in socket.io engine

I have a JSON test script that looks like this:

{
  "config": {
    "target": "ws://192.168.74.17:8000",
    "phases": [
      {
        "duration": 1,
        "arrivalRate": 1
      }
    ],
    "processor": "./laura_processor.js",
    "variables": {
      "channelId": "1f49b260-94eb-41e2-8f9f-fca20ba9fc96",
      "channelToken": "ad09a6b1-19ea-47a1-9b8d-b2c0f42f0a98",
      "recipientId": "ad09a6b1-19ea-47a1-9b8d-b2c0f42f0a98",
      "chatId": "1f49b260-94eb-41e2-8f9f-fca20ba9fc96-{{senderId}}",
      "senderId": ""
    },
    "socketio": {
      "transports": [
        "websocket"
      ]
    }
  },
  "scenarios": [
    {
      "engine": "socketio",
      "flow": [
        {
          "emit": {
            "channel": "message",
            "data": {
              "action": "UNK",
              "dataType": "TEXT",
              "data": "Hello",
              "conversationInfo": {
                "channelId": "{{channelId}}",
                "channelToken": "{{channelToken}}",
                "recipientId": "{{recipientId}}",
                "chatId": "1f49b260-94eb-41e2-8f9f-fca20ba9fc96-b0528b2f-fbb2-4772-ae08-111c1cab15bc",
                "senderId": "b0528b2f-fbb2-4772-ae08-111c1cab15bc"
              }
            },
            "response": {
              "channel": "message",
              "capture": {
                "json": "$.data",
                "as": "answer"
              }
            }
          }
        }
      ]
    }
  ]
}

As long as the "response" node shows up in the test script, data will be emitted twice. If I remove the "response" node, everything works fine. I'm not sure if anyone else has faced the same issue.
The number of requests in this example was also doubled when I added a "response" node under the "add user" channel to capture the response body.

Allow request configuration (specifically TLS)

Hey @hassy, we use some self-signed certs for some of our staging servers.

At the moment, requests are (rightfully) rejected as untrusted when requesting an HTTPS endpoint. I can hack around this by setting process.env.NODE_TLS_REJECT_UNAUTHORIZED directly, but that's pretty lame. I'd also want this to be configurable per environment, so in staging I can use unsafe connections, but closer to and in production we'd want an actually secure connection.

It would be nice if we could pass in a config option that allowed the rejectUnauthorized property to be passed to the underlying https.request.
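
Something along these lines, for example (a sketch of a possible shape for the option, not an existing setting; the hostnames are placeholders):

config:
  target: "https://staging.example.com"
  tls:
    rejectUnauthorized: false           # proposed: forwarded to the underlying https.request
  environments:
    production:
      target: "https://api.example.com"
      tls:
        rejectUnauthorized: true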

Just passing config all the way through to the request like this: kjgorman@f8b06c8 will work, but is pretty brittle with respect to the request implementation (especially given artilleryio/artillery#6).

It also maybe generalises beyond just this setting/use case to how you can configure other properties of the request.

What do you reckon?

Question/bug?: customStats being overwritten

Background

  • I wrote a plugin that listens for both the stats and done events from Artillery.
  • My script utilizes a processor function that emits the customStats event in order to add custom metrics to the stats object

Situation

When the stats event is fired, I see the custom stats exactly as I would expect them to be.

When the done event is fired, the custom stats are overwritten with an object that's along the lines of

{
    min: Number,
    max: Number,
    median: Number,
    p95: Number,
    p99: Number
}

I believe I found the code that performs the overwrite:
Once all scenarios are complete:

  1. The runner calls Stats.combine(aggregate).report() then passes the result when it emits done
  2. .report() clears out the customStats object and overwrites it here

Does anyone know why the custom stats are overwritten? Is this the intended behavior, or should the custom stats be preserved when they're passed to emit('done')?

Add raw TCP socket engine

Add an engine that can be used to load test a server listening on a raw TCP socket, similar to the current socket.io support except that it simply uses Node.js's net.Socket.
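
For reference, the kind of client such an engine would wrap; a minimal sketch using Node.js's net.Socket (the host, port, and payload are placeholders):

// tcp-client-sketch.js - what a raw TCP engine step would boil down to
const net = require('net');

const socket = net.connect({ host: '127.0.0.1', port: 9000 }, () => {
  socket.write('PING\n');                 // send one request payload
});

socket.on('data', (chunk) => {
  console.log('received:', chunk.toString());
  socket.end();                           // close after the first response
});

socket.on('error', (err) => console.error('socket error:', err.message));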

Test hello_socketio Crash

I get the following error when trying to run the hello_socketio.json example:

> artillery run loadtest/hello_socketio.json
Log file: artillery_report_20161124_013708.json
Phase 0 started - duration: 15s
/usr/local/lib/node_modules/artillery/node_modules/artillery-core/lib/runner.js:281
        return engine.step(rs, scenarioEvents);
                     ^

TypeError: Cannot read property 'step' of undefined
    at /usr/local/lib/node_modules/artillery/node_modules/artillery-core/lib/runner.js:281:22
    at arrayMap (/usr/local/lib/node_modules/artillery/node_modules/artillery-core/node_modules/lodash/index.js:1406:25)
    at Function.map (/usr/local/lib/node_modules/artillery/node_modules/artillery-core/node_modules/lodash/index.js:6710:14)
    at /usr/local/lib/node_modules/artillery/node_modules/artillery-core/lib/runner.js:277:21
    at arrayMap (/usr/local/lib/node_modules/artillery/node_modules/artillery-core/node_modules/lodash/index.js:1406:25)
    at Function.map (/usr/local/lib/node_modules/artillery/node_modules/artillery-core/node_modules/lodash/index.js:6710:14)
    at runScenario (/usr/local/lib/node_modules/artillery/node_modules/artillery-core/lib/runner.js:274:27)
    at EventEmitter.<anonymous> (/usr/local/lib/node_modules/artillery/node_modules/artillery-core/lib/runner.js:155:5)
    at emitNone (events.js:67:13)
    at EventEmitter.emit (events.js:166:7)

I could not see the log file. I am running Node 5.0.0 on Linux SUSE. I am probably missing something.

Enhancement Request: match and capture for socket.io emit

I would like to reuse the match and capture functionality in a socket.io emit flow. I know that the socket.io engine is derived/dependent on the http engine, but I am not sure how reusable those capabilities are in the socket.io world.

My approach (based on only an introductory review of the code) is to expose captureOrMatch from the engine_http module.

Is that a reasonable approach?

Add socketio query param on connect event

Hi

I am trying to use artillery to test my socket.io server under artillery 1.6.0-2.
I want to pass a query param while connecting to the server.
My artillery config file:

config:
  target: "http://127.0.0.1:8000"
  phases:
    - duration: 28800
      arrivalRate: 1
  variables:
    var: ['1', '2']
  processor: "./functions.js"
  socketio:
    query: 'user_id=4f5bc00e-8516-49a9-8507-d475d40d06b5&session_id=4f5bc00e-8516-49a9-8507-d475d40d06b5'

scenarios:
  - name: "Connect and send a bunch of messages"
    weight: 100
    engine: "socketio"
    flow:
      - loop:
          - function: "setMessage"
          - emit:
              channel: 'sendMessage'
              data: {
                'user_id': "4f5bc00e-8516-49a9-8507-d475d40d06b5",
                'session_id': "4f5bc00e-8516-49a9-8507-d475d40d06b5",
              }
              namespace: "/chat_v1"
          - think: 4
        count: 1

Here is a simple Socket.io server in Python:

from sanic import Sanic

import socketio

sio = socketio.AsyncServer(async_mode='sanic')
app = Sanic()
sio.attach(app)


@sio.on('sendMessage', namespace='/chat_v1')
async def test_message(sid, message):
    print(message, '\n\n')


@sio.on('connect', namespace='/chat_v1')
async def test_connect(sid, environ):
    print(environ)


@sio.on('disconnect', namespace='/chat_v1')
def test_disconnect(sid):
    print('Client disconnected')


if __name__ == '__main__':
    app.run()

When the section with the socketio query param is enabled, there are no messages on the sendMessage event. But if I remove this param, everything works fine.

Crash when reading results after bad http request

I've run into a crash. I think it happens when the server returns a bad result and I have artillery configured to capture a JSON variable from the result.

Here are hopefully all of the relevant details.

Stack Trace

/usr/lib/node_modules/artillery/node_modules/artillery-core/lib/engine_http.js:414
  if (results.length > 1) {
             ^

TypeError: Cannot read property 'length' of undefined
    at extractJSONPath (/usr/lib/node_modules/artillery/node_modules/artillery-core/lib/engine_http.js:414:14)
    at /usr/lib/node_modules/artillery/node_modules/artillery-core/lib/engine_http.js:502:30
    at parseJSON (/usr/lib/node_modules/artillery/node_modules/artillery-core/lib/engine_http.js:404:10)
    at /usr/lib/node_modules/artillery/node_modules/artillery-core/lib/engine_http.js:497:7
    at /usr/lib/node_modules/artillery/node_modules/async/lib/async.js:181:20
    at iterate (/usr/lib/node_modules/artillery/node_modules/async/lib/async.js:262:13)
    at Object.async.forEachOfSeries.async.eachOfSeries (/usr/lib/node_modules/artillery/node_modules/async/lib/async.js:281:9)
    at Object.async.forEachSeries.async.eachSeries (/usr/lib/node_modules/artillery/node_modules/async/lib/async.js:214:22)
    at captureOrMatch (/usr/lib/node_modules/artillery/node_modules/artillery-core/lib/engine_http.js:483:9)
    at done (/usr/lib/node_modules/artillery/node_modules/artillery-core/lib/engine_http.js:239:17)

Partial Config

"post": {
    "url": "/Player/Register",
    "json": {
        "localPlayerId": "LoadTest",
        "playerName": "LoadTest",
        "lifetimeCookies": 0,
        "showcaseLevel": 0,
        "bakeryLevel": 0
    },
    "capture": {
        "json": "$.UID",
        "as": "UID"
    }
}

Source (method stack trace is from)

// doc is a JSON object
function extractJSONPath(doc, expr) {
  let results = jsonpath.eval(doc, expr);
  if (results.length > 1) {
    return results[randomInt(0, results.length - 1)];
  } else {
    return results[0];
  }
}
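
A defensive guard along these lines would avoid the crash when the response body isn't the expected JSON (a sketch of the idea, not the project's actual fix):

// doc is a JSON object
function extractJSONPath(doc, expr) {
  let results = jsonpath.eval(doc, expr);
  if (!results || results.length === 0) {
    return undefined;                     // nothing matched, or the body wasn't JSON
  }
  if (results.length > 1) {
    return results[randomInt(0, results.length - 1)];
  }
  return results[0];
}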

Version

$ artillery --version
1.5.0-12

Use loopCount as index on an array

Is it possible to use the loopCount as an index on an array?

This is what I would like to do:

       {
          "count": 5,
          "loop": [
            {
              "get": {
                "url": "http://someUrl/{{ invoiceIds[ $loopCount ] }}"
              }
            }
          ]
        }

Make namespaces configureable per emit action

Hey,

First of all, thank you for this tool. Best load tester I could find for testing Web Sockets. 👍

I started using Artillery yesterday, and due to the structure of my application it's required to communicate in multiple namespaces at the same time.

While digging through the documentation I wasn't able to figure out an existing way to do this, so I added this feature to the SocketIoEngine.

I added a new optional attribute to the emit action called namespace. Here you can enter the namespace you want the socket to communicate in for that specific action. Namespace socket connections are scoped to the scenario context and are only established once per scenario.

Once a scenario has been completed, all socket connections in its context are disconnected through the SocketIoEngine::closeContextSockets(context) method.

Example config:

config:
  target: http://localhost:1337
  phases:
  - duration: 60
    arrivalRate: 50
scenarios:
- engine: socketio
  flow:
  #Authenticate namespaces
  - emit:
      namespace: "/nsp1"
      channel: "authenticate"
      data:
        token: authtoken
  - emit:
      namespace: "/nsp2"
      channel: "authenticate"
      data:
        token: authtoken

Compare changes:
master...menzow:master

Artillery does not return with certain malformed requests

Sometimes when capturing variables to be used in follow-up calls, the capture can end up empty for a number of reasons. If that happens, the url is empty in requestParams and the request library throws an exception that is not handled by artillery-core.
A simple fix for this is to check that requestParams has a url before calling request() (see the sketch after the script below). I will submit a pull request that addresses this. This is an issue for people using artillery in CI/CD systems, as a single bad test request can cause it to hang forever. Here's a simple script to repro, with intentional errors in it:

config:
  target: https://aws.amazon.com
  phases:
    - duration: 20
      arrivalRate: 4

scenarios:
  -
    flow:
      -
        get:
          url: /
          capture:
            -
              json: "$.view.details"
              as: c0
        get:
          url: "{{c0}}"

Socket.IO Engine - Emit Callback Data

Hello,

In artillery-core/lib/engine_socketio.js:186, where the actual emit happens, the Socket.IO callback supports returning data as multiple arguments, so in some projects this may not return all the data the emit ack contains. Is it possible to return all of them by looking at the arguments object?
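
For illustration, an ack callback can collect every argument with a rest parameter (a sketch of the general Socket.IO pattern, not the engine's current code; the channel and payload names are placeholders):

// Sketch: capture every value the server passes to the emit acknowledgement
socket.emit('my-channel', payload, (...ackArgs) => {
  // ackArgs is an array of all acknowledgement arguments, not just the first one
  console.log('ack received:', ackArgs);
});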

WS get received data?

Let's say we have a user login via WS: the client sends a LOGIN message through WS and receives back a token via a WS message. How can I read and save the token data in order to send it with new requests?

Can I subscribe to the WS onmessage method?
