apocas / dockerode
Docker + Node = Dockerode (Node.js module for Docker's Remote API)
License: Apache License 2.0
This is more of a question, feel free to close. I am trying to determine whether this module will work in the browser with browserify. Since I don't have a Docker instance handy, I couldn't test it, but from looking at the source it seems like it should work, since Docker's Remote API supports CORS. Could you confirm?
I see no way to pass the flag options of the docker run command (such as -i, -p, -t, etc.) to docker.run, nor do I see any in the tests/examples. Is this possible?
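There is no single place to pass raw CLI flags; in the Remote API the equivalents of -i, -t, -p and friends are fields on the create/start payloads, which dockerode accepts as options objects. A rough sketch of the mapping (the helper cliFlagsToApiOptions and its shape are mine, for illustration only):

```javascript
// Sketch: translate a few common `docker run` CLI flags into the
// Remote API fields that createContainer/start accept.
// The helper and its shape are illustrative, not part of dockerode.
function cliFlagsToApiOptions(flags) {
  var create = {};                                // body for POST /containers/create
  var start = { Binds: [], PortBindings: {} };    // body for POST /containers/(id)/start

  if (flags.i) create.OpenStdin = true;           // -i  keep STDIN open
  if (flags.t) create.Tty = true;                 // -t  allocate a pseudo-TTY
  (flags.p || []).forEach(function (spec) {       // -p hostPort:containerPort
    var parts = spec.split(':');
    start.PortBindings[parts[1] + '/tcp'] = [{ HostPort: parts[0] }];
  });
  (flags.v || []).forEach(function (spec) {       // -v host:container[:mode]
    start.Binds.push(spec);
  });
  return { create: create, start: start };
}

var opts = cliFlagsToApiOptions({ i: true, t: true, p: ['8080:80'] });
// opts.create            -> { OpenStdin: true, Tty: true }
// opts.start.PortBindings -> { '80/tcp': [{ HostPort: '8080' }] }
```

Which payload a given field belongs to has shifted between API versions, so check the Remote API docs for the version you target.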
I need to override some of the default options when calling buildImage, since I want to set opts.path to '/build?t=nodeapp&q' to set a name/tag for the resulting image.
Unfortunately, the opts object passed by the caller gets completely ignored/overwritten by the defaults.
This seems to be a general problem, not only for docker.buildImage():
Docker.prototype.buildImage = function(file, opts, callback) {
  var self = this;
  var opts = {            // shadows the `opts` parameter, so the defaults cannot be overridden
    path: '/build?',
    method: 'POST',
    file: file,
    options: opts,
    isStream: true,
    statusCodes: {
      200: true,
      500: "server error"
    }
  };
  this.modem.dial(opts, function(err, data) {
    callback(err, data);
  });
};
I'm using the buildImage method:
var Docker = require('dockerode');
var docker = new Docker({socketPath: '/var/run/docker.sock'});
docker.buildImage('./build/server.tar', {t: 'projectname'}, function(err, resp) {
  // some code
});
It builds the image from the tar as expected; however, it re-runs each RUN instruction every time I run the script, without caching. When I run docker build . -t projectname it runs much faster and uses the cache.
If this is currently supported by dockerode, could you include a test / example of attaching to a container's stdin? And if it's not supported, what are the chances of that happening?
Thanks!
Does this module support importing images? The Docker Remote API supports importing images from stdin, so this could possibly be supported here.
ID=$(docker run -d desired-image /bin/bash)
docker export $ID | gzip -c > desired-image.tgz
docker rm $ID
# copy desired-image.tgz to another machine
gzip -dc desired-image.tgz | docker import - desired-image
If this module supported the final import line from a stream, that would be great:
var Docker = require('dockerode')
var fs = require('fs')
var docker = new Docker({socketPath: '/var/run/docker.sock'})
var readStream = fs.createReadStream('./desired-image.tgz')
// pipe to new image here
var image = docker.importImage('foo')
readStream.pipe(image)
Another use case is building from a Dockerfile on stdin http://docs.docker.io/en/latest/api/docker_remote_api_v1.5/#build-an-image-from-dockerfile-via-stdin
var Docker = require('dockerode')
var docker = new Docker({socketPath: '/var/run/docker.sock'})
var fs = require('fs')
var readStream = fs.createReadStream('/path/to/Dockerfile')
var image = docker.buildFromDockerfile('foo')
readStream.pipe(image)
I cannot find any usage of a tty parameter here: http://docs.docker.io/reference/api/docker_remote_api_v1.11/#21-containers . But there is a tty parameter to container.attach in the examples:
//tty: true
container.attach({stream: true, stdout: true, stderr: true, tty: true}, function (err, stream) {
  stream.pipe(process.stdout);
});

//tty: false
container.attach({stream: true, stdout: true, stderr: true, tty: false}, function (err, stream) {
  //dockerode may demultiplex attach streams for you :)
  container.modem.demuxStream(stream, process.stdout, process.stderr);
});
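For what it's worth, the demultiplexing is possible because with tty: false the daemon frames the single attach stream: each frame carries an 8-byte header whose first byte is the stream id (1 = stdout, 2 = stderr) and whose last four bytes are a big-endian payload length. A self-contained sketch of what demuxStream has to undo (the helper names are mine):

```javascript
// Minimal sketch of the attach multiplexing format: each frame is an
// 8-byte header (byte 0 = stream id, bytes 4-7 = big-endian length)
// followed by the payload. demuxFrames splits a buffer back apart.
function demuxFrames(buffer) {
  var out = { stdout: '', stderr: '' };
  var offset = 0;
  while (offset + 8 <= buffer.length) {
    var type = buffer[offset];                      // 1 = stdout, 2 = stderr
    var size = buffer.readUInt32BE(offset + 4);     // payload length
    var payload = buffer.slice(offset + 8, offset + 8 + size).toString('utf8');
    if (type === 2) out.stderr += payload;
    else out.stdout += payload;
    offset += 8 + size;
  }
  return out;
}

// Build one stdout frame ("hi\n") and one stderr frame ("oops") by hand.
function frame(type, text) {
  var payload = Buffer.from(text, 'utf8');
  var header = Buffer.alloc(8);
  header[0] = type;
  header.writeUInt32BE(payload.length, 4);
  return Buffer.concat([header, payload]);
}

var demuxed = demuxFrames(Buffer.concat([frame(1, 'hi\n'), frame(2, 'oops')]));
// demuxed -> { stdout: 'hi\n', stderr: 'oops' }
```

With tty: true there is no framing at all; the output is one raw stream, which is why the first example can pipe straight to process.stdout.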
How does this work?
I learned how to use this repo mostly by reading the source, then reading the API docs for Docker, and then comparing with what the Docker CLI actually does.
Ideally we could improve the documentation in a more API-driven way, generated by running the source through something like jsdoc or, ironically, docker.
When working with dockerode I often need to debug mistakes (or just want to see what is going on); some logging (or use of the debug module) would be very helpful.
It would be nice if the err actually had the statusCode of the response as a property. Right now the errors are returned in a human-readable form, but they are not friendly to interpret programmatically.
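A sketch of the kind of error object being requested: keep the human-readable message, but attach the status code as a property so callers can branch on it. This is illustrative, not dockerode's actual error shape:

```javascript
// Hypothetical error factory: same human-readable message as today,
// plus machine-readable fields (statusCode, reason, json).
function DockerError(statusCode, reason, json) {
  var err = new Error('HTTP code is ' + statusCode + ' which indicates error: ' + reason);
  err.statusCode = statusCode;
  err.reason = reason;
  err.json = json;
  return err;
}

var err = DockerError(404, 'no such container', null);
// err.statusCode -> 404, usable in a switch instead of string matching
```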
oh, i just saw the docker-modem issue for this...
apocas/docker-modem#9
After attaching and writing data to stdin I need to close it somehow. I have some code that is being executed and expects the stdin in the container to be closed at a certain point. Is there a way for this to be accomplished?
I'm looking into ways to get demuxed log output (stdout/stderr) for a "run" call.
As docker.run() takes a destination stream for logging, it may be possible to provide an instance of container.modem.demuxStream(), which gets fed by separate streams for stdout/stderr beforehand. While achieving this from the outside might be possible for the user, it would be far from convenient in practice. See the example.
From my point of view, two options should be added to the "run" functionality
I'm running dockerode synchronously using fibrous. In dockerode's error situations fibrous crashes and reports "Future resolved more than once" (for example when the Docker daemon is not running). I traced the problem to modem.js buildPayload: the passed callback function is executed more than once if err is truthy.
Also, I have a feature request: could options be added to the container commands where they are now missing? At least stop has no way to pass options, even though the stop command supports "t - number of seconds to wait before killing the container".
Thank you.
How can I close an attached stream? It is an http.IncomingMessage, so the only way I found is to call request.abort(), but that lives on the 'req', not the 'res'.
It's a very common task to need to build a new image from within a directory, i.e. a Dockerfile plus various source files that need to be ADDed.
This was my solution:
tar = require 'tar-fs'

console.log 'building new image'
tarStream = tar.pack(process.cwd())
docker.buildImage(tarStream, {t: 'pizza'}, (error, output) ->
  output.pipe(process.stdout)
)
Using this example I'm able to run bash in a container and then attach to it. That works as expected, and I'm able to leave the container with the exit command.
The exit command leaves the container and stops it, which triggers the wait event. In some cases I would like to leave the container without stopping it, so I thought I could use the CTRL+P CTRL+Q sequence, the same one I would use in the docker client, to leave the container and keep it running.
But the CTRL+P CTRL+Q sequence leaves the container and then freezes my prompt. As this sequence does not stop the container, it never triggers the wait event.
Does anyone have thoughts on how I could detach from a container and get back to the prompt without exiting and stopping it?
In rare cases Container.inspect() never returns, which seems to be a problem in Docker. To keep my app running I need to wrap that call in a timeout. However, I believe the right place to implement this would be directly in dockerode or docker-modem, on the underlying HTTP request: http://nodejs.org/api/http.html#http_request_settimeout_timeout_callback
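Until such a timeout lands upstream, the wrapper can be sketched in user land by racing the callback against a timer. The withTimeout helper below is my own, not part of dockerode or docker-modem:

```javascript
// Race a callback-style call against a timer, so a hung inspect()
// cannot stall the app forever. Whichever side finishes first wins;
// the `done` flag makes the callback fire at most once.
function withTimeout(ms, fn, callback) {
  var done = false;
  var timer = setTimeout(function () {
    if (done) return;
    done = true;
    callback(new Error('timed out after ' + ms + 'ms'));
  }, ms);
  fn(function (err, data) {
    if (done) return;          // timer already fired; ignore the late result
    done = true;
    clearTimeout(timer);
    callback(err, data);
  });
}

// Usage against a call that answers immediately:
var result;
withTimeout(100, function (cb) { cb(null, 'inspected'); }, function (err, data) {
  result = data;
});
// result -> 'inspected'
```

In real use, fn would be something like `function (cb) { container.inspect(cb); }`; doing it via request.setTimeout inside docker-modem would also abort the underlying socket, which this user-land version cannot.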
Hi Pedro,
I just wondered if I could get some advice on using dockerode - I hope that's OK and thanks:
I'm building an image successfully using:
docker.buildImage(archiveName, {t: tagName}, function(err, response) {
  console.log('finished building image', err, response.statusCode, response.complete);
});
I see that response.complete is false, and I suspect that's something to do with the streaming nature of dockerode. I'm not quite sure whether I should:
or
I guess this is part of my lack of knowledge of streaming. Does HTTP streaming still time out, or does it keep going as long as new data comes in?
Sorry to bother you with this, I just suspect you might know the answer...
So you can create containers and not wait for them to finish, as they might never finish; for example if you run a web server, etc.
Pull is fairly simple, but given that we have run, I think it would be really nice to have pull as well, since this is a common use case: https://github.com/dotcloud/docker/blob/master/commands.go#L1056
How do we take advantage of the OpenStdin and AttachStdin features? The stream doesn't seem to be a writable stream.
Thanks.
Not sure if this should be dealt with here or over in Docker itself, but it looks like the image tag call responds with a 201 status code, not 200. Dockerode expects 200, as the Docker documentation says, so it treats the response as an error. Even though Dockerode thinks it's an error, it does actually tag the image.
This is my stream output from the error
{ [Error: HTTP code is 201 which indicates error: undefined - null] reason: undefined, statusCode: 201, json: null }
I'm using Docker version 0.8.0
Any reason why the end option is set to false here (in docker run)?
stream.pipe(streamo, {end: false});
I am trying to send the stream to the browser as a response, and this false option means that I have no way to find out when the stream closes so that I can send a res.end to the browser. (I tried listening for the finish event, but it does not fire.)
Here's a small snippet showing what I'm trying to do:
http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.on('finish', function() {
    res.end('there will be no more data.'); // this never fires
  });
  docker.run('ubuntu', 'echo 1', res, true, function(err, data) {
  });
}).listen(9000);
First of all, many thanks for writing such an awesome wrapper.
Here is one case that I came across:
I am trying to get the list of all containers based on a user preference: whether they want to see all containers or only the ones shown by default.
The Docker API docs say to specify an "all" param in the call, which works perfectly when I make a raw curl call:
xx.x.x.x/containers/json?all=1/true // Fetches all
xx.x.x.x/containers/json?all=0/false // Fetches NOT all
xx.x.x.x/containers/json?all=garbage // Fetches all
Now, when I try to make the same call with the listContainers method, it treats everything given to all as true, i.e. it considers every value truthy, even true/false/0/1:
docker.listContainers({all: null/true/false/1/0}, function(err, containers) {...}) // Fetches all
The size option also behaves like this (I haven't tested the other options).
Please help me out with this. I am using "dockerode": "~2.0.0".
I got this once when attaching to a stream coming out of dockerode:
_stream_writable.js:201
var len = state.objectMode ? 1 : chunk.length;
^
TypeError: Cannot read property 'length' of null
at writeOrBuffer (_stream_writable.js:201:41)
at Writable.write (_stream_writable.js:180:11)
at IncomingMessage.<anonymous> (/home/vwoo/coderpad-tty/node_modules/dockerode/node_modules/docker-modem/lib/modem.js:133:18)
at IncomingMessage.EventEmitter.emit (events.js:92:17)
at emitReadable_ (_stream_readable.js:408:10)
at emitReadable (_stream_readable.js:404:5)
at readableAddChunk (_stream_readable.js:165:9)
at IncomingMessage.Readable.push (_stream_readable.js:127:10)
at HTTPParser.parserOnBody [as onBody] (http.js:142:22)
at Socket.socketOnData [as ondata] (http.js:1583:20)
at Pipe.onread (net.js:525:27)
my code looks like:
{Writable} = require 'stream'   # needed for the Writable subclasses below

container.attach stream: true, stdout: true, stderr: true, (err, stream) ->
  return errHandler() if err

  stdout = new Writable()
  stderr = new Writable()

  stdout._write = (chunk, encoding, cb) ->
    handleWrite 'stdout', chunk
    cb()

  stderr._write = (chunk, encoding, cb) ->
    handleWrite 'stderr', chunk
    cb()

  handleWrite = (stream, chunk) ->
    data = chunk.toString 'utf8'
    lastOutput = if results.length > 0 then results[results.length - 1] else false
    if lastOutput && lastOutput.stream == stream
      lastOutput.data += data
    else
      results.push { stream, data }

  container.modem.demuxStream stream, stdout, stderr
it looks like demuxStream sometimes calls .write with null?
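A defensive sketch for this crash: Node's Writable throws on write(null), so a guard that drops null/empty chunks before they reach the destination avoids it. The safeWrite helper is illustrative, not a dockerode API:

```javascript
// Drop null/undefined/empty chunks before handing them to a Writable,
// since stream.write(null) throws inside Node's Writable machinery.
function safeWrite(dest, chunk) {
  if (chunk === null || chunk === undefined || chunk.length === 0) return false;
  return dest.write(chunk);
}

// Fake sink standing in for a real Writable, so the example is self-contained.
var seen = [];
var sink = { write: function (c) { seen.push(c); return true; } };

safeWrite(sink, null);               // silently dropped
safeWrite(sink, Buffer.from('ok'));  // forwarded
// seen.length -> 1
```

The real fix would of course be for demuxStream (in docker-modem) not to emit null chunks in the first place.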
I'm not sure if this is a bug but I've been driving myself crazy trying to debug some weird things happening with the Remote API. Sometimes it just never responds. I narrowed it down to only happening with calls directly after I make an image tag call. And it looks like something is happening really low level where it never closes the connection to the Remote API so the next call will just sit in limbo. I was looking at the documentation and I'm not certain but I don't think the image.tag method should have the isStream flag.
This may just be more documentation changes or changes within Docker itself as was the case with #45 (Thanks for the quick fix BTW)
https://github.com/apocas/dockerode/blob/master/lib/docker.js#L236
It seems like a fairly easy thing to change the callback signature from:
callback(err, data);
to:
callback(err, container, data);
Now that we return the container in #20, it might be easier to deal with removal directly on the object (and remove some API surface from the already large .run method).
I'm getting ECONNREFUSED when trying to create a container.
var Docker = require('dockerode')
  , docker = new Docker({host: 'http://localhost', port: 3000});

docker.createContainer({
  Image: 'ubuntu:12.10',
  Cmd: ['bash'],
  AttachStdin: true,
  OpenStdin: true,
  // StdinOnce: true
}, function (err, container) {
  if (err) return console.log(err);
});

wget http://localhost:3000/containers/json works just fine.
See: https://github.com/apocas/dockerode/blob/master/lib/docker.js#L124 vs https://github.com/apocas/dockerode/blob/master/lib/docker.js#L130
I would argue that we should not return instances (since doing so may result in lost data, because the server responds with much more than just the ID we need) and instead make getContainer / getImage (or the constructors) smarter so they can accept these objects from the server and respond with an object.
The Docker Remote API v1.4 and above supports creating privileged containers. Dockerode does not seem to support this.
In my code specifically:
docker.createContainer opts, (err, container) ->
the opts object contains a "Privileged": true entry, but that doesn't seem to be reflected in Docker. Patching the API calls to /v1.4/ doesn't seem to break anything, but the behaviour continues.
Hi all, I'm using dockerode to manage my Docker containers and it's great so far, but the run command doesn't seem to be friendly to running containers in detached mode. The implementation now always tries to attach to a container and then waits for it to be done before running the callback. I'd like to just call run and get a confirmation that it is running. I would like to contribute and was wondering about the preferred way. My inclination is to add a new method, runDetached, a la:
var _ = require('underscore'); // needed for _.extend below

Docker.prototype.runDetached = function(image, cmd, options, callback) {
  if (!callback && typeof(options) === 'function') {
    callback = options;
    options = {};
  }

  function handler(err, container) {
    if (err) return callback(err, container);
    container.start(options, function(err, data) {
      if (err) return callback(err, data);
      callback(err, data, container);
    });
  }

  var optsc = {
    'Hostname': '',
    'User': '',
    'AttachStdin': false,
    'AttachStdout': false,
    'AttachStderr': false,
    'Tty': true,
    'OpenStdin': false,
    'StdinOnce': false,
    'Env': null,
    'Cmd': cmd,
    'Image': image,
    'Volumes': {},
    'VolumesFrom': ''
  };
  _.extend(optsc, options);
  this.createContainer(optsc, handler);
};
Thoughts? I'd be happy to submit a PR and include some tests, but I don't want to go through the effort if you'd rather augment the run method. If that's the case, I'd rather not overload the options object, but I don't have a better suggestion.
Thanks for the great lib!
Mike
I can't figure out how to bind a host volume into a container. According to the Remote API documentation this can be specified by setting 'Binds' like this...
docker.run('image', ['ls', '-al', '/tmp/'], null, {Binds: ["/tmp:/tmp:rw"]}, cb)
Unfortunately, this doesn't seem to work (it shows the contents of the original container's /tmp).
Can somebody point me into the right direction here? I wonder if this is a bug in the Remote API...
Btw, it works perfectly fine using the docker cli tool!
I was running into an issue where I was getting a random error in Image.prototype.inspect when I was not calling the function myself. It turned out that when I logged an image, the logging routines called inspect on the image with a number (recursionTimes, as I traced it), which caused the error.
This can be replicated quickly with the following:
var Docker = require('dockerode');
var docker = new Docker(/* your setup */);
var image = docker.getImage('imageName');
console.log(image); // ends up calling `image.inspect`
This ends up crashing as below, without pointing directly to console.log as the cause:
/path/to/my/code/node_modules/dockerode/lib/image.js:37
callback(err, data);
^
TypeError: number is not a function
at /path/to/my/code/node_modules/dockerode/lib/image.js:37:5
at Modem.buildPayload (/path/to/my/code/node_modules/dockerode/node_modules/docker-modem/lib/modem.js:135:5)
at IncomingMessage.<anonymous> (/path/to/my/code/node_modules/dockerode/node_modules/docker-modem/lib/modem.js:106:14)
at IncomingMessage.EventEmitter.emit (events.js:117:20)
at _stream_readable.js:920:16
at process._tickCallback (node.js:415:13)
I will leave it up to you if/how this should be handled, but it's something I wanted to make you aware of. I could create a pull request with a quick fix as well (checking whether the callback is a function, and returning the image name if it is not), but I don't know if that accounts for any other issues offhand.
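The quick fix suggested above can be sketched as a type guard: if the "callback" isn't a function, assume an inspector (console.log passes a recursion-depth number) is calling and return a plain label instead. guardedInspect is a hypothetical wrapper, not dockerode's code:

```javascript
// Guard inspect() against being invoked by console.log's object
// walker, which passes a depth number where a callback is expected.
function guardedInspect(realInspect, callback) {
  if (typeof callback !== 'function') {
    // Called by an inspector, not by user code: return a label
    // instead of trying to invoke a number.
    return '[Image]';
  }
  return realInspect(callback);
}

var label = guardedInspect(function (cb) { cb(null, {}); }, 3);
// label -> '[Image]' instead of "TypeError: number is not a function"
```

(Node later formalized this with a custom-inspection hook, but a plain typeof check is enough to stop the crash.)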
At the moment docker-modem just reads options.file and pipes the contents to the command. It would be really nice if one could pass a readable stream instead of a filename (e.g. a tar file being read from the web).
There doesn't appear to be any documentation or tests for binding container ports to host.
vagrant@precise64:$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
48f3a0350a6d vincentwoo/xxxxxxxxxxxx:latest /home/coderpad/shell 3 minutes ago Up 3 minutes distracted_ritchie
vagrant@precise64:$ coffee
coffee> docker = new (require('dockerode'))(socketPath: '/var/run/docker.sock')
{ modem:
{ socketPath: '/var/run/docker.sock',
host: undefined,
port: undefined,
version: undefined } }
coffee> docker.getContainer
[Function]
coffee> container = docker.getContainer('48f3a0350a6d')
undefined
coffee>
/home/vagrant/coderpad-tty/node_modules/dockerode/lib/container.js:20
callback(err, data);
^
TypeError: number is not a function
at /home/vagrant/coderpad-tty/node_modules/dockerode/lib/container.js:20:5
at Modem.buildPayload (/home/vagrant/coderpad-tty/node_modules/dockerode/node_modules/docker-modem/lib/modem.js:118:9)
at IncomingMessage.<anonymous> (/home/vagrant/coderpad-tty/node_modules/dockerode/node_modules/docker-modem/lib/modem.js:91:14)
at IncomingMessage.EventEmitter.emit (events.js:117:20)
at _stream_readable.js:920:16
at process._tickCallback (node.js:415:13)
vagrant@precise64:$
When I use dockerode to create Docker containers, container creation starts failing silently after a few times. This code illustrates the problem: https://gist.github.com/kishorenc/026354f451f544153a2f
The first 3 times I get the inside attach message on the console, but after that container creation just fails silently with no errors; only the before create message is logged.
I'm on OS X and using Dockerode 0.2.0 and Docker 1.0.
Hi, first of all thanks for this module! Second, I have a strange issue / question but I'm not sure if it's a Docker thing or a dockerode thing. Please see moby/moby#7375
Create a container with a name
I have a container I'd like to run with the PublishAllPorts option, but this is not being passed to the container.start call.
docker.run(['./startup.sh'], null, {PublishAllPorts: true}, callback);
Wondering if the fix could be as simple as passing the options to container.start here?
It's a detail but an important one for people who use data volumes.
Basically, the query parameter v can be used to remove the data volumes associated with a container. I think this is useful; as far as I can see Docker doesn't have any other interface for managing data volumes: http://docs.docker.io/en/latest/reference/api/docker_remote_api_v1.9/#remove-a-container
Note: adding opts would break API compatibility.
https://github.com/apocas/dockerode/blob/master/package.json#L18: using * for a dependency version will only make your downstream consumers very sad in the long run.
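A sketch of the pinned alternative, assuming a tilde range is wanted (the version number below is illustrative, not the actual docker-modem release to pin):

```json
{
  "dependencies": {
    "docker-modem": "~0.1.0"
  }
}
```

A tilde range still picks up patch releases while shielding consumers from breaking changes that a bare * would pull in.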
No license.txt
EDIT: Sorry, just noticed it's in the readme instead.
So I would like to know how to push to a private registry, but I can't seem to figure it out.
var tar = require('tar-fs');
var Docker = require('dockerode');
var fs = require('fs');

var socket = process.env.DOCKER_SOCKET || '/var/run/docker.sock';
var stats = fs.statSync(socket);
if (!stats.isSocket()) {
  throw new Error("Are you sure the docker is running?");
}

var docker = new Docker({
  socketPath: socket
});
var tarStream = tar.pack(process.cwd());
var testImage = 'pizza';

docker.buildImage(tarStream, {
  t: testImage,
  q: true
}, function(err, output) {
  console.log(err);
  output.pipe(process.stdout);
  output.on('end', function() {
    console.log('Successfully built');
    var image = docker.getImage(testImage);
    image.tag({
      repo: 'http://localhost.localdomain:5000/pizza'
    }, function(err, data) {
      console.log(err, data);
      image.push({
        registry: 'http://localhost.localdomain:5000/'
      }, function(err, data) {
        data.pipe(process.stdout);
        console.log(err);
      });
    });
  });
});
I keep getting this error:
{"message":"Error: Status 401 trying to push repository pizza: "},"error":"Error: Status 401 trying to push repository pizza: "}
This is the debug info from docker.
[debug] registry.go:534 [registry] PUT https://index.docker.io/v1/repositories/pizza/
[debug] registry.go:535 Image list pushed to index:
[{"id":"511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158"},{"id":"ef52fb1fe61037a1b531698c093205f214ade751c781e30ce4f9a7d33020a0f2"},{"id":"b7de3133ff989df914ae9382a1e8bb6771aeb7b07c5d7eeb8ee266b1ccff5709"},{"id":"9e937853e4f8c1de5d13f27f9e0497fe652689d55d28a80fb809b1144216c7a7"},{"id":"65b51c8843cdbf671379e1e3d9b2abc5135548daf2149fbf944c0f69a5f4e8e5"},{"id":"d61b8595012e24a798d326a23ad3fc2b9375a4bb5c3f3c5e0fb065ad60fc65a3"},{"id":"69e275ef0bcedffa0bfb1dce31bbbb393ac5ab5d4e9cdd01a888bc235e124d82","Tag":"latest"}]
[debug] http.go:168 https://index.docker.io/v1/repositories/pizza/ -- HEADERS: map[User-Agent:[docker/0.9.1 go/go1.2.1 kernel/3.14.2-200.fc20.x86_64 os/linux arch/amd64 ]]
Error: Status 401 trying to push repository pizza:
I'm not sure where this is coming from; in my code it looks like I missed a callback?
/web/node_modules/dockerode/lib/docker.js:51
callback(err, data);
^
TypeError: undefined is not a function
at /web/node_modules/dockerode/lib/docker.js:51:5
at Modem.buildPayload (/web/node_modules/dockerode/node_modules/docker-modem/lib/modem.js:126:7)
at ClientRequest.<anonymous> (/web/node_modules/dockerode/node_modules/docker-modem/lib/modem.js:82:12)
at ClientRequest.EventEmitter.emit (events.js:117:20)
at HTTPParser.parserOnIncomingClient [as onIncoming] (http.js:1658:21)
at HTTPParser.parserOnHeadersComplete [as onHeadersComplete] (http.js:119:23)
at Socket.socketOnData [as ondata] (http.js:1553:20)
at Pipe.onread (net.js:525:27)
Did I miss some configuration?
/web/node_modules/dockerode/node_modules/docker-modem/lib/modem.js:52
optionsf.headers['X-Registry-Auth'] = authconfig;
^
ReferenceError: authconfig is not defined
at Modem.dial (/web/node_modules/dockerode/node_modules/docker-modem/lib/modem.js:52:43)
at Docker.createImage (/web/node_modules/dockerode/lib/docker.js:50:14)
at Docker.pull (/web/node_modules/dockerode/lib/docker.js:234:15)
at dock_opts._createHandler (/web/api/controllers/AppImageController.js:69:18)
at /web/node_modules/dockerode/lib/docker.js:25:20
at Modem.buildPayload (/web/node_modules/dockerode/node_modules/docker-modem/lib/modem.js:123:5)
at IncomingMessage.<anonymous> (/web/node_modules/dockerode/node_modules/docker-modem/lib/modem.js:96:14)
at IncomingMessage.EventEmitter.emit (events.js:117:20)
at _stream_readable.js:910:16
at process._tickDomainCallback (node.js:459:13)