tritondatacenter / triton
Triton DataCenter: a cloud management platform with first class support for containers.
Home Page: https://www.tritondatacenter.com/
License: Mozilla Public License 2.0
"sdcadm update cloudapi" will re-provision cloudapi despite the fact that latest image was updated just a few seconds ago.
Looking at the latest release notes for CentOS 7, nested virtualization is enabled. This fulfills one of the criteria mentioned in the VirtualBox issue (demoing a standalone hypervisor). Is it possible to support KVM?
Note: I'm on elementary OS (based on Ubuntu 14.04 with the Ubuntu 15.04 kernel); libvirt is used on both.
I tried to configure link aggregation according to the docs here: https://docs.joyent.com/sdc7/setting-up-link-aggregation .
What happens is that you cannot manage link aggregation tags through the "NIC Tags" UI, only from the "Link Aggregation" UI (which, unfortunately, requires a reboot).
I think the issue is that the UI (through NAPI?) attempts to configure a NIC tag based on MAC address, but in the case of aggr interfaces there is a collision between a physical interface that belongs to the aggr and the aggr itself, and the backend gets confused.
The worst case scenario is when the admin network is over an aggregated link (which works on the HN because of the static usbkey configuration); that is where I've seen this happening on the CN.
The issue is very easy to reproduce with an aggregated link on the admin network. My speculations might not be correct, but I thought I could provide a starting point to look at.
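To make the suspected collision concrete, here is a hedged sketch using standard illumos/SmartOS global-zone commands (the MAC below is a placeholder, and the tag name is hypothetical):
dladm show-phys -m                     # physical NICs and their MAC addresses
dladm show-aggr                        # the aggr, which reuses a member NIC's MAC
nictagadm add storage0 0:c:29:xx:xx:xx # NIC tags are keyed by MAC, so the aggr
                                       # and its member interface are ambiguous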
Worked with joshw in IRC to diagnose issues with the queue of sysinfo jobs growing. This followed booting a new CN. The setup job was stuck in a queue:
# sdc-server jobs 44454c4c-5000-1044-805a-b3c04f483432 | grep queue
server-sysinfo-1.1.0 queued NaN 2015-02-10T22:14:04.799Z c20c54d5-ff9c-45b1-be91-a714a21f0807
server-sysinfo-1.1.0 queued NaN 2015-02-10T22:13:04.801Z b4dc5793-66ef-49f3-8801-60d7c5596e85
server-sysinfo-1.1.0 queued NaN 2015-02-10T22:12:04.799Z 8ed87b7a-1251-4828-89be-b0b1163c9df1
server-sysinfo-1.1.0 queued NaN 2015-02-10T22:11:04.799Z 2c74350e-2a1f-49fb-ab82-1c0ad68a3abf
server-sysinfo-1.1.0 queued NaN 2015-02-10T22:10:04.801Z a18a569a-e40f-46cb-b13f-00d0b050c74a
server-sysinfo-1.1.0 queued NaN 2015-02-10T22:09:04.807Z 1a0290f8-5925-4378-854b-ebdb2671b16e
server-sysinfo-1.1.0 queued NaN 2015-02-10T22:08:04.791Z d97b67fc-97fd-4844-b3ca-ef43101f6555
server-sysinfo-1.1.0 queued NaN 2015-02-10T22:07:04.809Z 0a9484f6-4bda-4520-9c81-9815cb55896d
server-sysinfo-1.1.0 queued NaN 2015-02-10T22:06:04.809Z 92d28e28-2bcb-4d4e-8720-2672d1da9f49
server-sysinfo-1.1.0 queued NaN 2015-02-10T22:05:04.817Z 81b2bfd1-3525-4803-a630-bf18834d9335
server-sysinfo-1.1.0 queued NaN 2015-02-10T22:04:04.810Z b76cc941-2e9c-4046-9682-423c57d7391a
server-sysinfo-1.1.0 queued NaN 2015-02-10T22:03:04.810Z 0a434be7-70d3-44ad-bf84-992f42532ae9
server-sysinfo-1.1.0 queued NaN 2015-02-10T22:02:04.810Z 0673e7aa-0976-45da-980c-cf38bce5378a
server-sysinfo-1.1.0 queued NaN 2015-02-10T22:01:04.810Z 99a875c4-ad4c-4ff9-b9fd-573abc504cf4
server-sysinfo-1.1.0 queued NaN 2015-02-10T22:00:04.822Z accb3776-8643-41fc-9eca-b2b2bea8c97f
server-sysinfo-1.1.0 queued NaN 2015-02-10T21:59:04.820Z e44cc60b-8191-4b38-bc1c-c4be640d3103
server-sysinfo-1.1.0 queued NaN 2015-02-10T21:58:04.819Z 24cb438d-e3d3-4971-82f2-3a267e156751
server-sysinfo-1.1.0 queued NaN 2015-02-10T21:57:04.823Z b0c67d12-c98c-43c5-b672-92341b958c6a
server-sysinfo-1.1.0 queued NaN 2015-02-10T21:56:04.821Z c0e3ded7-6df9-468b-9765-09df6a2f2821
server-sysinfo-1.1.0 queued NaN 2015-02-10T21:55:04.819Z 4ac7e63e-298f-4e28-9c7d-962ed20a129c
server-sysinfo-1.1.0 queued NaN 2015-02-10T21:54:04.818Z 8533bde3-d925-4660-aa46-3169d25d3cf3
server-sysinfo-1.1.0 queued NaN 2015-02-10T21:53:04.832Z 438bc43b-0207-4b55-92c2-492ac90503e1
server-sysinfo-1.1.0 queued NaN 2015-02-10T21:52:04.845Z d3403cd6-8149-4feb-b772-d5bf5bead397
server-sysinfo-1.1.0 queued NaN 2015-02-10T21:51:04.844Z fa21ec06-3a65-489e-a04a-02ff9731509c
server-sysinfo-1.1.0 queued NaN 2015-02-10T21:50:04.899Z 9d7850f4-5812-4a10-86ca-e48a3b21a5b7
server-sysinfo-1.1.0 queued NaN 2015-02-10T21:49:04.844Z 074be8ff-e61d-4eb9-8f47-507baff84897
server-sysinfo-1.1.0 queued 0.0 2015-02-10T21:48:04.843Z 09e46c98-4e9c-4a5d-bdb4-8c747924b01e
server-sysinfo-1.1.0 queued 1.4 2015-02-10T21:47:04.857Z 7b17eb70-0a5f-4c72-939f-9583069750c0
server-sysinfo-1.1.0 queued 1.4 2015-02-10T21:46:04.842Z 68aea24c-f6ff-404f-8c79-fe175f794088
server-sysinfo-1.1.0 queued 1.4 2015-02-10T21:45:04.852Z 076202a0-cb43-4174-a0f9-e8442ad15536
server-sysinfo-1.1.0 queued 1.4 2015-02-10T21:43:04.850Z 27f6ae71-699b-43cd-85db-503aa15020f1
server-sysinfo-1.1.0 queued 1.4 2015-02-10T21:41:04.851Z 575f58a4-72dc-433c-a5af-d08ad3dc3332
Upon checking the workflow zone, the wf-runner process was in maintenance (restarting too quickly). I cleared and restarted it, and it crashed again shortly after.
The setup job for the new CN is now marked as running, but has been stuck there.
I'm trying to create a custom image using Ubuntu 14.04 cloud edition from here: https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img.
What I do (on the headnode) is import the image with the following manifest:
{
  "v": 2,
  "uuid": "86bb31ca-a888-11e4-b6d2-c79183d409bf",
  "owner": "4a361963-67c1-4f04-daf2-e2d3cf54bb78",
  "name": "ubuntu-14.04",
  "version": "1.0.0",
  "state": "active",
  "disabled": false,
  "public": true,
  "type": "zvol",
  "os": "linux",
  "description": "Ubuntu 14.04 64-bit image with just essential packages installed. Ideal for users who are comfortable with setting up their own environment and tools.",
  "urn": "sdc:tcn:ubuntu-14.04:1.0.0",
  "requirements": {
    "networks": [
      {
        "name": "net0",
        "description": "public"
      }
    ],
    "ssh_key": true
  },
  "nic_driver": "virtio",
  "disk_driver": "virtio",
  "cpu_type": "qemu64",
  "image_size": 16384
}
Everything looks fine up to this point: the image is imported and shown by both the Operations Portal and CloudAPI.
Creating a VM from it stays in "provisioning" for ages until it times out. Meanwhile, the "provisioner" logs on the headnode (used as a compute node) show:
{"name":"/opt/smartdc/agents/lib/node_modules/provisioner/lib/tasks/machine_create","req_id":"f6ded340-a891-11e4-8ec8-a3d6bdbf6ad1","hostname":"headnode","pid":54623,"level":30,"action":"create","vm":"d3d96226-1aae-49e0-8069-de35708d6a07","time":"2015-01-30T15:09:23.620Z","component":"machine_create","msg":"defaulting to refreservation = 16384","v":0}
{"name":"/opt/smartdc/agents/lib/node_modules/provisioner/lib/tasks/machine_create","req_id":"f6ded340-a891-11e4-8ec8-a3d6bdbf6ad1","hostname":"headnode","pid":54623,"level":30,"action":"create","vm":"d3d96226-1aae-49e0-8069-de35708d6a07","time":"2015-01-30T15:09:23.620Z","component":"machine_create","msg":"/usr/sbin/zfs get -Ho value name zones/86bb31ca-a888-11e4-b6d2-c79183d409bf@final","v":0}
{"name":"/opt/smartdc/agents/lib/node_modules/provisioner/lib/tasks/machine_create","req_id":"f6ded340-a891-11e4-8ec8-a3d6bdbf6ad1","hostname":"headnode","pid":54623,"level":30,"action":"create","vm":"d3d96226-1aae-49e0-8069-de35708d6a07","time":"2015-01-30T15:09:23.673Z","component":"machine_create","msg":"/usr/sbin/zfs clone -F -o refreservation=16384M zones/86bb31ca-a888-11e4-b6d2-c79183d409bf@final zones/d3d96226-1aae-49e0-8069-de35708d6a07-disk0","v":0}
{"name":"/opt/smartdc/agents/lib/node_modules/provisioner/lib/tasks/machine_create","req_id":"f6ded340-a891-11e4-8ec8-a3d6bdbf6ad1","hostname":"headnode","pid":54623,"level":50,"action":"create","vm":"d3d96226-1aae-49e0-8069-de35708d6a07","err":{"message":"Command failed: cannot create 'zones/d3d96226-1aae-49e0-8069-de35708d6a07-disk0': 'refreservation' is greater than current volume size\n","name":"Error","stack":"Error: Command failed: cannot create 'zones/d3d96226-1aae-49e0-8069-de35708d6a07-disk0': 'refreservation' is greater than current volume size\n\n at ChildProcess.exithandler (child_process.js:637:15)\n at ChildProcess.EventEmitter.emit (events.js:98:17)\n at maybeClose (child_process.js:743:16)\n at Process.ChildProcess._handle.onexit (child_process.js:810:5)","code":1,"signal":null},"volume":{"image_uuid":"86bb31ca-a888-11e4-b6d2-c79183d409bf"
Full error log is here: https://gist.github.com/sigxcpu76/90b6042fbc4cc2752745
The volsize of the imported volume is 2.2GB (the size of the Ubuntu cloud QEMU image) instead of the 16GB set in the original zfs create.
Another option was to use an image_size and volsize of 2252M, which is the size of the raw file.
Now the error is thrown by https://github.com/joyent/smartos-live/blob/master/src/vm/node_modules/VM.js at line 607, saying that 2252 is incorrect and it should be 16384 (where it gets this value is beyond my understanding).
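As a diagnostic sketch (not a supported fix), one could compare the zvol's volsize against the manifest's image_size using the dataset name from the provisioner logs above; resizing the zvol by hand is an assumption here, not a documented workaround:
# volsize of the imported image zvol (dataset name taken from the logs)
zfs get -Ho value volsize zones/86bb31ca-a888-11e4-b6d2-c79183d409bf
# if it reports ~2.2G (the raw file size) instead of the 16384M image_size,
# growing the zvol before provisioning should satisfy the refreservation:
zfs set volsize=16g zones/86bb31ca-a888-11e4-b6d2-c79183d409bf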
At docs/faq.md: we have #smartos on Freenode, and [email protected] on Listbox. We should mention how to get started in these fora in README.md.
The Packages interface is very nice, but in order to provision a VM, an Image is required. If I'm creating a KVM VM and there isn't an available image to use (e.g. a Windows image, or if I wanted to create a Linux KVM VM from an ISO), then I must create the VM from a vmspec manually.
The provision UI, along with using packages, already does 90% of the work of manually creating a vmspec. It would be nice for it to either provide an output of what that vmspec would look like, so you can copy and paste it (if creating it manually is a must), or allow the VM to be created without being tied to an image.
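For illustration, a minimal sketch of what such a manually written KVM vmspec might look like when fed to vmadm on a compute node (all values here are made-up placeholders, not what the UI would actually emit):
vmadm create <<'EOF'
{
  "brand": "kvm",
  "alias": "hypothetical-kvm0",
  "ram": 2048,
  "vcpus": 2,
  "disks": [
    {"boot": true, "model": "virtio", "size": 20480}
  ],
  "nics": [
    {"nic_tag": "external", "ip": "dhcp", "model": "virtio"}
  ]
}
EOF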
After being added to the joyagers group, I'm still not able to obtain the SDC7 USB image. It fails thusly:
$ export MANTA_USER=joyager
$ latest=$(mget -q /joyager/stor/builds/headnode/master-latest)
mget: KeyDoesNotExistError: /joyager/keys/::::::::::** does not exist
I tried it as my own Manta user as well (that seemed like not the way to do it, per the readme, but I thought I'd give it a shot):
$ export MANTA_USER=jritorto
$ latest=$(mget -q /joyager/stor/builds/headnode/master-latest)
mget: AuthorizationFailedError: jritorto is not allowed to access /joyager/stor/builds/headnode/master-latest
Please add my key to /joyager/keys and/or advise if I'm off track here.
thx
jake
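For reference, a hedged sketch of the client-side setup the node-manta tools expect (the Manta URL below is an assumption); the KeyDoesNotExistError above suggests the signing key's public half is not yet under /joyager/keys:
$ export MANTA_URL=https://us-east.manta.joyent.com   # assumed endpoint
$ export MANTA_USER=joyager
$ export MANTA_KEY_ID=$(ssh-keygen -l -f ~/.ssh/id_rsa.pub | awk '{print $2}')
$ mget -q /joyager/stor/builds/headnode/master-latest
# mget signs requests with the key named by MANTA_KEY_ID, and the server looks
# for a matching public key under /joyager/keys/ -- hence the error until an
# operator adds the key to the joyager account.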
Currently in the coal docs there's a large section marked as TODO.
It would be really good to get that filled in with the 98% case. I can't remember what the general install options are, so it'd be good to just get the basics down so that I can get an admin interface at a minimum.
Hello,
The instructions are pretty disgusting, I'd say. Maybe it is clear to you how this software works, but what about the rest of us?
The instructions do not say ANYTHING useful about how to install and run it. Do I need to install SmartOS onto a "virgin" PC or not? Does this tool run on an existing VM in Windows or Linux?
Remember: if you want your software to be successful, SEND YOUR MESSAGE ACROSS CLEARLY; otherwise it is only a garage project.
Mario
About me: 25 years of experience working as an SCM Manager Consultant for large IT organisations.
Motto: if the tool doesn't hit, I scrap it.
I have two compute nodes with 64GB DRAM each, but SDC only reports 1.5GB total memory. The headnode is correctly reporting its own memory. All nodes run 20141127T065456Z. prtconf | grep Memory reports "Memory size: 1548 Megabytes". The compute nodes are IBM x3650 M4 and M3; the BIOS reports 64GB DRAM.
Both compute nodes run the latest BIOS and firmware.
I just tested another (third) compute node, an IBM x3650 M2 with 32GB DRAM, and it was correctly recognized.
When no HDD devices are installed or detected, SDC 7 fails with the following message in /var/svc/log/system-smartdc-init:default.log:
/mnt/usbkey/scripts/headnode.sh:241: cp /tmp/joysetup.3498 /zones/
cp: cannot create /zones/: Not a directory
I was hoping for a higher level message like:
unable to set up zpool, no disks detected
This happens after the initial setup config. An email complaining that the smartdc-init service failed is sent as well.
This was from an installer tgz generated on November 13, 2015.
Add a make clone-all-repos convenience target to get all the SDC-related repos cloned to build/repos... for reading/grepping, etc. See docs/building.md for real build instructions.
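A minimal sketch of what such a target could look like, assuming a hypothetical repos.txt listing git URLs (recipe lines must be tab-indented; this is not the real build machinery):
# clone every repo listed in repos.txt into build/repos/ for grepping
clone-all-repos:
	mkdir -p build/repos
	while read url; do \
	    git clone "$$url" "build/repos/$$(basename $$url .git)"; \
	done < repos.txt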
In theory, passwords in the config file should perhaps be enclosed in single quotation marks, with any single quotation marks in the password escaped.
Super low priority.
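For example (a generic shell-quoting sketch, not tied to any real SDC config key): inside single quotes, the sequence '\'' closes the string, emits an escaped quote, and reopens it:
# hypothetical config line; the literal password is: pa'ss"word
some_password='pa'\''ss"word'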
There doesn't appear to be any brand assets (logos, images, etc) for SDC/Triton that are available for the community to use.
Is there a plan for this?
The Joyent Engineering Guidelines link needs to reflect the change made for TOOLS-657.
I am trying to follow https://github.com/joyent/sdc/blob/master/docs/developer-guide/coal-setup.md but am experiencing some issues which prevent me from running sdcadm experimental update-docker --servers cns,headnode and sdcadm post-setup cloudapi. They might be related to some failed jobs listed in the web interface; if so, what should I do? Can I re-run those jobs?
sdcadm post-setup common-external-nics fails with:
sdcadm post-setup: error: {"id":"379cb210-a459-498e-ae94-dffaa3e11be4","task":"machine_update","server_uuid":"564df79a-15b3-4722-4e98-01c6a523ed85","status":"failure","timestamp":"2016-01-21T21:31:50.231Z","history":[{"name":"error","timestamp":"2016-01-21T21:31:50.585Z","event":{"error":{"message":"requests must originate from CNAPI address"}}},{"name":"finish","timestamp":"2016-01-21T21:31:50.585Z","event":{}}],"req_id":"e5d1f84c-4cdb-e9e6-ef75-bc4152ee86e8"}
This error was already mentioned in December by someone else on IRC, but nobody seems to have answered (see http://echelog.com/logs/browse/smartos/1450306800).
[root@headnode (coal-1) ~]# sdc-healthcheck
ZONE STATE AGENT STATUS
global running - online
assets running - online
sapi running - online
binder running - online
manatee running - online
moray running - online
amonredis running - online
redis running - online
ufds running - online
workflow running - online
amon running - online
sdc running - online
papi running - online
napi running - online
rabbitmq running - online
imgapi running - online
cnapi running - online
dhcpd running - online
fwapi running - online
vmapi running - online
ca running - online
mahi running - online
adminui running - online
global running ur online
global running smartlogin online
Env: MBPr, VMware Fusion Pro.
In SDC7, when selecting any VM, on the right side of the screen under the user icon there is always a message: "Unable to fetch User Information".
Hi,
According to the SDC documentation at https://docs.joyent.com/private-cloud/instances/compute-nodes, there is a "migrator.sh" script that can help migrate instances from one compute node to another.
Although using this script is not supported, it is provided by Joyent support.
Could you please consider open sourcing it?
Cheers,
Clément Hermann
When starting elasticsearch I receive this error:
[root@a46c61de-0e03-68b7-8fb1-f8a12597a646 ~]# /opt/local/share/elasticsearch/elasticsearch
Error: Could not find or load main class org.elasticsearch.bootstrap.Elasticsearch
Java:
[root@a46c61de-0e03-68b7-8fb1-f8a12597a646 ~]# java -version
openjdk version "1.7.0-internal"
OpenJDK Runtime Environment (build 1.7.0-internal-pkgsrc_2015_05_29_19_05-b00)
OpenJDK 64-Bit Server VM (build 24.76-b04, mixed mode)
Image UUID: 8b05ccbe-0890-11e5-8d30-57386d1d5482
Name: elasticsearch 15.1.1
Publish Date: 2015-06-01T19:00:59Z
Operating System: smartos
Image Type: zone-dataset
I'm not exactly sure if this is where I need to post issues with a Joyent image.
Currently it looks like it always goes to "serial" even if I've manually set console=text. It makes it look like absolutely nothing is going on, especially in error cases.
monolith:~ blake$ export MANTA_USER=bixu
monolith:~ blake$ latest=$(mget -q /joyager/stor/builds/headnode/master-latest)
mget: AuthorizationFailedError: bixu is not allowed to access /joyager/stor/builds/headnode/master-latest
We have been running our private cloud on SDC7 for the past few months and we migrated our production onto it recently. Even if the "major" operations are done through CloudAPI, most of our daily tasks, like VM re-sizing, use the AdminUI. We do not give custom aliases to our VMs and mostly go with Tags (which match the way we work with our CMS).
It would greatly improve the usability of the AdminUI if we could filter and search VMs by their Tags.
I agree that getting the Tag filter right might require some work and a lot of thought, but it would definitely be a great improvement. From the SDC-*API documentation, we can see that SDC is filled with very powerful features, like Tags, but unfortunately most of them are not available to the end users who, in most cases, use the AdminUI.
Not sure if I missed a step or if there is a problem with my VMware install, but I am not able to get things working using Ubuntu 14.04 and VMware 11.1.2.
When I run the setup script this is what I get:
% curl -s https://raw.githubusercontent.com/joyent/sdc/master/tools/coal-linux-vmware-setup | sudo bash
Admin network: network="10.99.99.0", mac ip="10.99.99.254", netmask="255.255.255.0"
External network: network="10.88.88.0", mac ip="10.88.88.1", netmask="255.255.255.0"
Setup VMWare networking: admin network 10.99.99.0, external network 10.88.88.0
Changing VMware locations settings...
Old locations settings backed up to the following files:
/etc/vmware/locations.pre_coal
Changing VMware networking settings...
Old networking settings backed up to the following files:
/etc/vmware/networking.pre_coal
Changing vmnet interface permissions...
Restarting VMware services...
Stopped Bridged networking on vmnet0
Stopped DHCP service on vmnet1
Disabled hostonly virtual adapter on vmnet1
Stopped DHCP service on vmnet8
Stopped NAT service on vmnet8
Disabled hostonly virtual adapter on vmnet8
Stopped all configured services on all networks
Started Bridge networking on vmnet0
Subnet on vmnet1 is no longer available for usage, please run the network editor to reconfigure different subnet
Started NAT service on vmnet8
Enabled hostonly virtual adapter on vmnet8
Started DHCP service on vmnet8
Failed to start some/all services
When I try to boot up the headnode by selecting 'Live 64-bit' I get a black screen that does not ever seem to progress from there. Pressing 'c' doesn't have an effect.
I really like what I have seen in regards to Triton with Docker integration... considering Joyent SDC is focused on the best in performance, I would like to see Joyent following the Mikelangelo movement:
"We develop sKVM, an optimised version of KVM, improve the cloud OS OSv, and we implement holistic monitoring and security modules for the cloud."
SDC + SmartOS + sKVM + OSv = big data, high performance computing, and I/O intensive applications in production.
http://www.mikelangelo-project.eu/technology/
http://www.mikelangelo-project.eu/deliverables/deliverable-d2-13/
http://osv.io/
http://www.seastar-project.org/
http://www.scylladb.com/
These projects are being led by the people responsible for KVM. I would love to see Joyent get involved with Mikelangelo, because I would rather use Triton and SDC than OpenStack.
sKVM is currently being co-developed by IBM; this will revolutionize the I/O speeds for VMs, and the OSv OS brings the security of a VM with the efficiency of a Docker container.
Console logging from boot should default to going to both "text" and serial for use with coal.
This is probably a dumb question. I'm a complete SmartOS ecosystem ignoramus trying to come up to speed, but I noticed that while it seems pretty straightforward to get a dev system going with SmartOS VirtualBox boxes and even some Vagrant plugins for zone management, for some reason this doesn't seem to be the case for open source SDC.
So, why is that the case? Is something like this in the works, not technically feasible, too large an effort, or just not of interest?
There is coal, but there sure seem to be a lot of caveats listed for it if you're running anything other than a Mac. Any insight here would be appreciated.
Why can't the USB key install?
The system services will fail to start when there are more zones than can be provisioned from the IP ranges given in the initial SDC 7 setup. Originally I chose 10.2.1.251-255 for admin and 10.2.1.240-248 for external, on the same subnet, not knowing that each zone would need its own IP.
It would have been nice to have this checked earlier. Something weird happened where zones were allocated IPs in an incremented network (10.2.2.0/24) and I was then unable to route to them.
After restarting the setup and choosing the ranges 100-175 and 175-250, the setup process and sdc-healthcheck finished successfully.
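A quick arithmetic sketch of why the first attempt failed (the core-zone count is read off the sdc-healthcheck listing earlier on this page; exact needs vary by setup):
$ echo $(( 255 - 251 + 1 ))   # admin range 10.2.1.251-255
5
$ echo $(( 175 - 100 + 1 ))   # admin range 10.2.1.100-175
76
# a default setup stands up roughly twenty core zones, each needing its own
# admin IP, so a 5-address range is exhausted almost immediately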
SDC is running with the CoAL package, build 20150305T101223Z. However, when booting over PXE with 6 CPU cores and 4GB of RAM, the process is extremely slow - like downloading over a 56K dial-up modem, with the grub bootloader drawing roughly one character every 1.5 seconds.
I've seen it boot over PXE normally once; after that it's always been slow. I'm not sure what's causing this issue.
Is there any inherent reason this shouldn't work in Parallels? I've converted the VM fine, but now I need to redo what the VMware setup script does...
When trying to build a usb/coal image, "make" reports nothing to do.
It looks like many SDC & Manta components have "private": true in their package.json. Although it might not be appropriate for others to publish them to NPM, these are not private repositories.
https://www.npmjs.org/doc/files/package.json.html#private
I'll clean this up quickly if confirmed.
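The change itself would just be dropping the flag from each package.json; here is a quick hedged sketch for finding the offenders, assuming the repos are checked out side by side (the grep pattern depends on each file's exact formatting):
$ grep -l '"private": true' */package.json
# removing that line (or setting it to false) lifts npm's publish guard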
At present, we do not have proper SSL support for smartos.org, so links to that site should be via HTTP.
sdcadm experimental update-agents --latest -y should be changed to sdcadm experimental update-agents --all --latest -y.
Also, more than one platform entry has "default" set to true, resulting in multiple PIs being assigned to LATEST_PLATFORM, which causes the sdcadm platform assign $LATEST_PLATFORM --all command to fail. I'm told this is due to the fact that the "default" column is new as of 20150820.
Fixes for both are included in #177.
A note in the README for the duration of the private beta would be nice, so I know who to email, etc.
Between Fusion 5 and 6, VMware changed the locations where a bunch of the support files live. I'm currently on Fusion 6, and there's also a Fusion 7 release at the moment.
We still have a couple of these drifting around; they need to be cleaned up.
The link at https://github.com/joyent/sdc/blob/master/docs/reference.md#agents in the first sentence, linking to the '...agent section in the glossary', is incorrect. It needs a 'docs' in the URL path.
After reading the docs and a bit of the code for manatee/node-manatee, I wonder if there is the potential for split brain under some network partitions.
The initial configuration is three manatee nodes and three clients:
A network partition occurs that causes manatee0 and client0 to be partitioned from the rest of the cluster, including ZooKeeper. manatee1 is promoted to primary, but the partitioned nodes are not aware of this, and diverging data is written to manatee1 and manatee0.
Is this scenario a possibility? If not, what safeguards prevent it?
Commit 16bafe9 documented that CAPI is deprecated. Maybe it should read "being deprecated", as my understanding is that there are still dependencies on CAPI in core components.
https://github.com/joyent/sdc-smart-login brought this to my mind.
The documentation here suggests downloading a new platform image via a customized example link.
Unless I missed something in the docs: where are these platform images located for download?
This is just a proposal to create an aggregated page that would allow SDC users to track known issues between SDC releases.
Currently it is hard to look up known issues between releases: one needs to check each SDC component on GitHub to figure out possible show-stoppers before an actual upgrade is undertaken.
Some background on recent experiences:
We encountered the regression TritonDataCenter/sdc-napi#8 literally a day before the next "fixed" release was pushed to the release channel. This was a known issue for roughly two weeks; it could have been avoided had we known about it in advance.
I was thinking of a simple page which clearly outlines regressions between releases, so folks can avoid possible pitfalls and surprises before upgrading to an affected release.
I looked at https://github.com/joyent/sdc/blob/master/docs/developer-guide/reference.md to find out where CAPI's code lived. Should a description be added there or somewhere else?