gluster / gluster-ansible-collection
License: GNU General Public License v2.0
It looks like this collection is effectively unmaintained. According to the current community guidelines for collections, we will consider removing it in a future version of the Ansible community package. Please see Unmaintained collection: gluster.gluster for more information.
At least one month after this announcement appears here and on Bullhorn, the Ansible Community Steering Committee will vote on whether this collection is considered unmaintained and will be removed, or whether it will be kept. If it will be removed, this will happen earliest in Ansible 10. Please note that people can still manually install the collection with ansible-galaxy collection install gluster.gluster even when it has been removed from Ansible.
GlusterFS has the following settings for bitrot:
However, these cannot be changed using the 'volume set' command and must be done with the 'volume bitrot' command.
"error running gluster (/usr/sbin/gluster --mode=script volume set test features.bitrot on) command (rc=1): volume set: failed: 'gluster volume set features.bitrot' is invalid command. Use 'gluster volume bitrot {enable|disable}' instead.
gluster volume start fails
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NoneType: None fatal: [gfs1]: FAILED! => {"changed": false, "msg": "error running gluster (/usr/sbin/gluster --mode=script volume start gfs0) command (rc=1): volume start: gfs0: failed: Commit failed on localhost. Please check log file for details.\n"}
I logged in to the remote machine via SSH, ran the following command, and got the same error:
/usr/sbin/gluster --mode=script volume start gfs0
However, when I pass the force parameter, the volume starts successfully:
/usr/sbin/gluster --mode=script volume start gfs0 force
Now, how do we pass the force parameter? I tried using the "force" option specified in the docs, but force is not being used by the Ansible collection; the command again runs without the force parameter.
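As a stopgap until force is honored, the same command the module builds can be issued directly with the force argument appended. A sketch under the assumption that idempotency can be approximated by tolerating an "already started" message (exact wording not verified):

```yaml
- name: Start gfs0 with force (workaround while the module drops the force param)
  ansible.builtin.command:
    cmd: /usr/sbin/gluster --mode=script volume start gfs0 force
  register: start_result
  changed_when: start_result.rc == 0
  failed_when:
    - start_result.rc != 0
    - "'already started' not in start_result.stdout + start_result.stderr"
```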
Copied from ansible-collections/community.general#631
I tried to remove an offline node from a 3-node Gluster volume.
This causes a Python exception in gluster_volume.py.
gluster_volume
ansible 2.9.10
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/benjamin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.3 (default, May 17 2020, 18:15:42) [GCC 10.1.0]
Output of "ansible-config dump --only-changed" is empty.
Cloud provider: Hetzner Cloud
VM Type: cx11 (1 CPU, 2 GB RAM) + 1 attached volume
Target OS: Debian 10
Create the Gluster volume with 3 nodes:
---
- hosts: all
  tasks:
    - name: create gluster volume
      gluster_volume:
        state: present
        name: test1
        bricks: /mnt/data/brick1
        replicas: 3
        host: "{{inventory_hostname}}"
        cluster:
          - "master-node-0"
          - "master-node-1"
          - "master-node-2"
      run_once: true
Now take one node offline (e.g. master-node-2) and run:
---
- hosts: all
  tasks:
    - name: create gluster volume
      gluster_volume:
        state: present
        name: test1
        bricks: /mnt/data/brick1
        replicas: 2
        host: "{{inventory_hostname}}"
        cluster:
          - "master-node-0"
          - "master-node-1"
      run_once: true
I would expect that the node would be removed from the volume.
TASK [create gluster volume] *******************************************************************************************************************************************************************************************************
task path: /home/benjamin/dev/elch_cloud/ansible/test.yml:4
Using module file /usr/lib/python3.8/site-packages/ansible/modules/storage/glusterfs/gluster_volume.py
Pipelining is enabled.
<116.203.72.109> ESTABLISH SSH CONNECTION FOR USER: root
<116.203.72.109> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=32323 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/home/benjamin/.ansible/cp/620cd8a3a3 116.203.72.109 '/bin/sh -c '"'"'python3 && sleep 0'"'"''
<116.203.72.109> (1, b'', b'Traceback (most recent call last):\n File "<stdin>", line 102, in <module>\n File "<stdin>", line 94, in _ansiballz_main\n File "<stdin>", line 40, in invoke_module\n File "/usr/lib/python3.7/runpy.py", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File "/usr/lib/python3.7/runpy.py", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File "/usr/lib/python3.7/runpy.py", line 85, in _run_code\n exec(code, run_globals)\n File "/tmp/ansible_gluster_volume_payload_jsve0o9j/ansible_gluster_volume_payload.zip/ansible/modules/storage/glusterfs/gluster_volume.py", line 611, in <module>\n File "/tmp/ansible_gluster_volume_payload_jsve0o9j/ansible_gluster_volume_payload.zip/ansible/modules/storage/glusterfs/gluster_volume.py", line 563, in main\n File "/tmp/ansible_gluster_volume_payload_jsve0o9j/ansible_gluster_volume_payload.zip/ansible/modules/storage/glusterfs/gluster_volume.py", line 420, in reduce_config\nValueError: invalid literal for int() with base 10: \'-\'\n')
<116.203.72.109> Failed to connect to the host via ssh: Traceback (most recent call last):
File "<stdin>", line 102, in <module>
File "<stdin>", line 94, in _ansiballz_main
File "<stdin>", line 40, in invoke_module
File "/usr/lib/python3.7/runpy.py", line 205, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/ansible_gluster_volume_payload_jsve0o9j/ansible_gluster_volume_payload.zip/ansible/modules/storage/glusterfs/gluster_volume.py", line 611, in <module>
File "/tmp/ansible_gluster_volume_payload_jsve0o9j/ansible_gluster_volume_payload.zip/ansible/modules/storage/glusterfs/gluster_volume.py", line 563, in main
File "/tmp/ansible_gluster_volume_payload_jsve0o9j/ansible_gluster_volume_payload.zip/ansible/modules/storage/glusterfs/gluster_volume.py", line 420, in reduce_config
ValueError: invalid literal for int() with base 10: '-'
fatal: [master-node-0]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 102, in <module>\n File \"<stdin>\", line 94, in _ansiballz_main\n File \"<stdin>\", line 40, in invoke_module\n File \"/usr/lib/python3.7/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.7/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib/python3.7/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_gluster_volume_payload_jsve0o9j/ansible_gluster_volume_payload.zip/ansible/modules/storage/glusterfs/gluster_volume.py\", line 611, in <module>\n File \"/tmp/ansible_gluster_volume_payload_jsve0o9j/ansible_gluster_volume_payload.zip/ansible/modules/storage/glusterfs/gluster_volume.py\", line 563, in main\n File \"/tmp/ansible_gluster_volume_payload_jsve0o9j/ansible_gluster_volume_payload.zip/ansible/modules/storage/glusterfs/gluster_volume.py\", line 420, in reduce_config\nValueError: invalid literal for int() with base 10: '-'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
The line mentioned in the exception runs gluster volume heal test1 info and tries to parse the output. Here is the output of this command:
Brick master-node-0:/mnt/data/brick1
Status: Connected
Number of entries: 0
Brick master-node-1:/mnt/data/brick1
Status: Connected
Number of entries: 0
Brick master-node-2:/mnt/data/brick1
Status: Transport endpoint is not connected
Number of entries: -
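A defensive fix in reduce_config would treat a non-numeric entry count as "unknown" instead of feeding it to int(). A minimal sketch of that parsing (parse_entry_counts is a hypothetical helper, not the module's actual function):

```python
def parse_entry_counts(heal_info_output):
    """Collect the 'Number of entries:' values from the output of
    `gluster volume heal <vol> info`, mapping '-' (brick unreachable)
    to None instead of crashing in int()."""
    counts = []
    for line in heal_info_output.splitlines():
        if line.startswith("Number of entries:"):
            value = line.split(":", 1)[1].strip()
            # '-' (and any other non-digit value) becomes None
            counts.append(int(value) if value.isdigit() else None)
    return counts

sample = """Brick master-node-0:/mnt/data/brick1
Status: Connected
Number of entries: 0
Brick master-node-2:/mnt/data/brick1
Status: Transport endpoint is not connected
Number of entries: -"""
print(parse_entry_counts(sample))  # [0, None]
```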
Dear maintainers,
This is important for your collections!
In accordance with the Community decision, we have created the news-for-maintainers repository for announcements of changes impacting collection maintainers (see the examples) instead of Issue 45, which will be closed soon.
To keep receiving the announcements, click the Watch button in the upper right corner of the repository's home page and subscribe to Issues.
Also, we would like to remind you about the Bullhorn contributor newsletter, which has recently started to be released weekly. To see what it looks like, check the past releases. Please subscribe and talk to the Community via Bullhorn!
Join us in #ansible-social (for news reporting & chat), #ansible-community (for discussing collection & maintainer topics), and other channels on Matrix/IRC.
Help the Community and the Steering Committee make the right decisions by taking part in discussing and voting on the Community Topics that impact the whole project and the collections in particular. Your opinion there will be much appreciated!
Thank you!
I have the following task:
- name: Create Gluster Volume (on first node)
  when: inventory_hostname == groups['gluster_nodes'][0]
  run_once: true
  gluster_volume:
    state: present
    name: "{{gluster_volume_name}}"
    bricks: "{{groups['gluster_nodes'][0]}}:{{gluster_volume_path}}/{{groups['gluster_nodes'][0]}}/brick,{{groups['gluster_nodes'][1]}}:{{gluster_volume_path}}/{{groups['gluster_nodes'][1]}}/brick,{{groups['gluster_nodes'][2]}}:{{gluster_volume_path}}/{{groups['gluster_nodes'][2]}}/brick"
    replicas: 3
    force: yes
    options:
      performance.cache-size: 128MB,
      auth.allow: "{{groups['gluster_nodes'][0]}},{{groups['gluster_nodes'][1]}},{{groups['gluster_nodes'][2]}}"
      auth.ssl-allow: "{{groups['gluster_nodes'][0]}},{{groups['gluster_nodes'][1]}},{{groups['gluster_nodes'][2]}}"
      ssl.cipher-list: 'HIGH:!SSLv2:!SSLv3'
      client.ssl: 'on'
      server.ssl: 'on'
But I get the following error (verbose output):
TASK [gluster : Create Gluster Volume (on first node)] *************************
task path: /home/cloufish/Projects/Ansible-HomeLab/playbooks/swarm/roles/gluster/tasks/init.yml:25
Wednesday 22 March 2023 14:47:18 +0100 (0:00:00.061) 0:00:11.626 *******
redirecting (type: modules) ansible.builtin.gluster_volume to gluster.gluster.gluster_volume
Using module file /home/cloufish/.ansible/collections/ansible_collections/gluster/gluster/plugins/modules/gluster_volume.py
Pipelining is enabled.
<192.168.1.130> ESTABLISH SSH CONNECTION FOR USER: root
<192.168.1.130> SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/cloufish/.ssh/ansible_ed25519"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/home/cloufish/.ansible/cp/9633dff9ae"' 192.168.1.130 '/bin/sh -c '"'"'/usr/bin/python3 && sleep 0'"'"''
<192.168.1.130> (1, b'\n{"exception": "NoneType: None\\n", "failed": true, "msg": "error running gluster (/usr/sbin/gluster --mode=script volume create gfs replica 3 transport tcp force) command (rc=1): No bricks specified\\n\\nUsage:\\nvolume create <NEW-VOLNAME> [[replica <COUNT> [arbiter <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> <TA-BRICK>... [force]\\n\\n", "invocation": {"module_args": {"state": "present", "name": "gfs", "bricks": "192.168.1.130:/data/gluster/192.168.1.130/brick,192.168.1.131:/data/gluster/192.168.1.131/brick,192.168.1.132:/data/gluster/192.168.1.132/brick", "replicas": 3, "force": true, "options": {"performance.cache-size": "128MB,", "auth.allow": "192.168.1.130,192.168.1.131,192.168.1.132", "auth.ssl-allow": "192.168.1.130,192.168.1.131,192.168.1.132", "ssl.cipher-list": "HIGH:!SSLv2:!SSLv3", "client.ssl": "on", "server.ssl": "on"}, "transport": "tcp", "start_on_create": true, "rebalance": false, "cluster": null, "host": null, "stripes": null, "arbiters": null, "disperses": null, "redundancies": null, "quota": null, "directory": null}}}\n', b'')
<192.168.1.130> Failed to connect to the host via ssh:
The full traceback is:
NoneType: None
fatal: [192.168.1.130]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"arbiters": null,
"bricks": "192.168.1.130:/data/gluster/192.168.1.130/brick,192.168.1.131:/data/gluster/192.168.1.131/brick,192.168.1.132:/data/gluster/192.168.1.132/brick",
"cluster": null,
"directory": null,
"disperses": null,
"force": true,
"host": null,
"name": "gfs",
"options": {
"auth.allow": "192.168.1.130,192.168.1.131,192.168.1.132",
"auth.ssl-allow": "192.168.1.130,192.168.1.131,192.168.1.132",
"client.ssl": "on",
"performance.cache-size": "128MB,",
"server.ssl": "on",
"ssl.cipher-list": "HIGH:!SSLv2:!SSLv3"
},
"quota": null,
"rebalance": false,
"redundancies": null,
"replicas": 3,
"start_on_create": true,
"state": "present",
"stripes": null,
"transport": "tcp"
}
},
"msg": "error running gluster (/usr/sbin/gluster --mode=script volume create gfs replica 3 transport tcp force) command (rc=1): No bricks specified\n\nUsage:\nvolume create <NEW-VOLNAME> [[replica <COUNT> [arbiter <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> <TA-BRICK>... [force]\n\n"
}
Just for testing purposes, I changed the brick to brick: /data/gluster, but the same error occurred.
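The module's documentation describes bricks as brick path(s) on the hosts, with the host:path pairs assembled from the cluster list, so embedding host: prefixes in bricks is likely what leaves the generated volume create command with no bricks. A hypothetical corrected task under that assumption (also note the stray trailing comma in the 128MB, option value above, which would be passed to Gluster verbatim):

```yaml
- name: Create Gluster Volume (sketch, path-only bricks)
  gluster_volume:
    state: present
    name: "{{ gluster_volume_name }}"
    bricks: "{{ gluster_volume_path }}/brick"   # path only, no host: prefix
    replicas: 3
    force: true
    cluster: "{{ groups['gluster_nodes'] }}"
```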
Hello,
Could you please fill in all fields of the galaxy.yml file? In its current state it gives a poor impression of the collection on Galaxy (https://galaxy.ansible.com/gluster/gluster).
Hi, I've hit this when trying to set up a volume for the first time. Defining cluster nodes as a list seems to raise TypeError: sequence item 6: expected str instance, list found. When running the same task with the cluster parameter set to a string of comma-separated entries, it works as expected. The docs say that the expected value for the cluster parameter is a list.
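The traceback below points at run_gluster, which hands the assembled argument list to run_command; if a list-valued parameter reaches it un-flattened, one argument is itself a list and the string join over the argument vector raises exactly this error. A standalone repro of the failing join (the argument vector here is illustrative, not the module's actual one):

```python
# One argument is a list instead of a string, as happens when the
# list-valued cluster/auth.ssl-allow parameter is passed through un-flattened.
args = ["gluster", "volume", "set", "test-vol1", "auth.ssl-allow",
        ["testing-01.lab.demo", "testing-02.lab.demo"], "force"]
try:
    " ".join(args)
except TypeError as exc:
    print(exc)  # sequence item 5: expected str instance, list found
```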
Ansible Version:
$ ansible --version
ansible 2.10.2
config file = None
configured module search path = ['/home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]
OS: Ubuntu 20.04
Task:
- name: Create GlusterFS Volume
  gluster_volume:
    name: "{{ item.name }}"
    host: "{{ inventory_hostname }}.{{ domain_name }}"
    bricks: "{{ item.bricks }}"
    replicas: "{{ item.gluster_nodes | length }}"
    options:
      server.ssl: 'on'
      client.ssl: 'on'
      auth.ssl-allow: "{{ item.gluster_nodes }}"
      ssl.cipher-list: 'HIGH:!SSLv2'
    #rebalance: yes
    cluster: "{{ item.gluster_nodes }}"
    state: "{{ item.state }}"
  loop: "{{ gluster_volumes }}"
  run_once: true
  when:
    - gluster_server_initiator is defined
    - gluster_server_initiator | bool
Vars:
gluster_volumes:
  - name: test-vol1
    bricks: "/mnt/cluster/brick-01/test-vol1"
    gluster_nodes: "{{ groups['testing_cluster'] | product([domain_name]) | map('join', '.') | list }}"
    state: present
TASK [glusterfs : Create GlusterFS Volume] *********************************************************************************************************************************************************************************************
task path: /home/ansible/Develop/ansible-linux/roles/glusterfs/tasks/main.yml:43
<testing-01.lab.demo> ESTABLISH SSH CONNECTION FOR USER: ansible
<testing-01.lab.demo> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/b2a5e7221e testing-01.lab.demo '/bin/sh -c '"'"'echo ~ansible && sleep 0'"'"''
<testing-01.lab.demo> (0, b'/home/ansible\n', b'')
<testing-01.lab.demo> ESTABLISH SSH CONNECTION FOR USER: ansible
<testing-01.lab.demo> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/b2a5e7221e testing-01.lab.demo '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/ansible/.ansible/tmp `"&& mkdir "` echo /home/ansible/.ansible/tmp/ansible-tmp-1604072353.5431116-373556-91835152736500 `" && echo ansible-tmp-1604072353.5431116-373556-91835152736500="` echo /home/ansible/.ansible/tmp/ansible-tmp-1604072353.5431116-373556-91835152736500 `" ) && sleep 0'"'"''
<testing-01.lab.demo> (0, b'ansible-tmp-1604072353.5431116-373556-91835152736500=/home/ansible/.ansible/tmp/ansible-tmp-1604072353.5431116-373556-91835152736500\n', b'')
redirecting (type: modules) ansible.builtin.gluster_volume to gluster.gluster.gluster_volume
Using module file /usr/local/lib/python3.8/dist-packages/ansible_collections/gluster/gluster/plugins/modules/gluster_volume.py
<testing-01.lab.demo> PUT /home/ansible/.ansible/tmp/ansible-local-3735132heab9gh/tmpyknma2uo TO /home/ansible/.ansible/tmp/ansible-tmp-1604072353.5431116-373556-91835152736500/AnsiballZ_gluster_volume.py
<testing-01.lab.demo> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/b2a5e7221e '[testing-01.lab.demo]'
<testing-01.lab.demo> (0, b'sftp> put /home/ansible/.ansible/tmp/ansible-local-3735132heab9gh/tmpyknma2uo /home/ansible/.ansible/tmp/ansible-tmp-1604072353.5431116-373556-91835152736500/AnsiballZ_gluster_volume.py\n', b'')
<testing-01.lab.demo> ESTABLISH SSH CONNECTION FOR USER: ansible
<testing-01.lab.demo> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/b2a5e7221e testing-01.lab.demo '/bin/sh -c '"'"'chmod u+x /home/ansible/.ansible/tmp/ansible-tmp-1604072353.5431116-373556-91835152736500/ /home/ansible/.ansible/tmp/ansible-tmp-1604072353.5431116-373556-91835152736500/AnsiballZ_gluster_volume.py && sleep 0'"'"''
<testing-01.lab.demo> (0, b'', b'')
<testing-01.lab.demo> ESTABLISH SSH CONNECTION FOR USER: ansible
<testing-01.lab.demo> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/b2a5e7221e -tt testing-01.lab.demo '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-rygxsdbkepwiazdnneqqfqdkxptfamzd ; /usr/bin/env python3 /home/ansible/.ansible/tmp/ansible-tmp-1604072353.5431116-373556-91835152736500/AnsiballZ_gluster_volume.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<testing-01.lab.demo> (1, b'Traceback (most recent call last):\r\n File "/tmp/ansible_gluster_volume_payload_fb2mfkeq/ansible_gluster_volume_payload.zip/ansible_collections/gluster/gluster/plugins/modules/gluster_volume.py", line 206, in run_gluster\r\n File "/tmp/ansible_gluster_volume_payload_fb2mfkeq/ansible_gluster_volume_payload.zip/ansible/module_utils/basic.py", line 2624, in run_command\r\n File "/tmp/ansible_gluster_volume_payload_fb2mfkeq/ansible_gluster_volume_payload.zip/ansible/module_utils/basic.py", line 2624, in <listcomp>\r\n File "/usr/lib/python3.8/posixpath.py", line 284, in expandvars\r\n path = os.fspath(path)\r\nTypeError: expected str, bytes or os.PathLike object, not list\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File "/home/ansible/.ansible/tmp/ansible-tmp-1604072353.5431116-373556-91835152736500/AnsiballZ_gluster_volume.py", line 102, in <module>\r\n _ansiballz_main()\r\n File "/home/ansible/.ansible/tmp/ansible-tmp-1604072353.5431116-373556-91835152736500/AnsiballZ_gluster_volume.py", line 94, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File "/home/ansible/.ansible/tmp/ansible-tmp-1604072353.5431116-373556-91835152736500/AnsiballZ_gluster_volume.py", line 40, in invoke_module\r\n runpy.run_module(mod_name=\'ansible_collections.gluster.gluster.plugins.modules.gluster_volume\', init_globals=None, run_name=\'__main__\', alter_sys=True)\r\n File "/usr/lib/python3.8/runpy.py", line 207, in run_module\r\n return _run_module_code(code, init_globals, run_name, mod_spec)\r\n File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File "/usr/lib/python3.8/runpy.py", line 87, in _run_code\r\n exec(code, run_globals)\r\n File "/tmp/ansible_gluster_volume_payload_fb2mfkeq/ansible_gluster_volume_payload.zip/ansible_collections/gluster/gluster/plugins/modules/gluster_volume.py", 
line 621, in <module>\r\n File "/tmp/ansible_gluster_volume_payload_fb2mfkeq/ansible_gluster_volume_payload.zip/ansible_collections/gluster/gluster/plugins/modules/gluster_volume.py", line 590, in main\r\n File "/tmp/ansible_gluster_volume_payload_fb2mfkeq/ansible_gluster_volume_payload.zip/ansible_collections/gluster/gluster/plugins/modules/gluster_volume.py", line 373, in set_volume_option\r\n File "/tmp/ansible_gluster_volume_payload_fb2mfkeq/ansible_gluster_volume_payload.zip/ansible_collections/gluster/gluster/plugins/modules/gluster_volume.py", line 211, in run_gluster\r\nTypeError: sequence item 6: expected str instance, list found\r\n', b'Shared connection to testing-01.lab.demo closed.\r\n')
<testing-01.lab.demo> Failed to connect to the host via ssh: Shared connection to testing-01.lab.demo closed.
<testing-01.lab.demo> ESTABLISH SSH CONNECTION FOR USER: ansible
<testing-01.lab.demo> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/b2a5e7221e testing-01.lab.demo '/bin/sh -c '"'"'rm -f -r /home/ansible/.ansible/tmp/ansible-tmp-1604072353.5431116-373556-91835152736500/ > /dev/null 2>&1 && sleep 0'"'"''
<testing-01.lab.demo> (0, b'', b'')
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_gluster_volume_payload_fb2mfkeq/ansible_gluster_volume_payload.zip/ansible_collections/gluster/gluster/plugins/modules/gluster_volume.py", line 206, in run_gluster
File "/tmp/ansible_gluster_volume_payload_fb2mfkeq/ansible_gluster_volume_payload.zip/ansible/module_utils/basic.py", line 2624, in run_command
File "/tmp/ansible_gluster_volume_payload_fb2mfkeq/ansible_gluster_volume_payload.zip/ansible/module_utils/basic.py", line 2624, in <listcomp>
File "/usr/lib/python3.8/posixpath.py", line 284, in expandvars
path = os.fspath(path)
TypeError: expected str, bytes or os.PathLike object, not list
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ansible/.ansible/tmp/ansible-tmp-1604072353.5431116-373556-91835152736500/AnsiballZ_gluster_volume.py", line 102, in <module>
_ansiballz_main()
File "/home/ansible/.ansible/tmp/ansible-tmp-1604072353.5431116-373556-91835152736500/AnsiballZ_gluster_volume.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/ansible/.ansible/tmp/ansible-tmp-1604072353.5431116-373556-91835152736500/AnsiballZ_gluster_volume.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible_collections.gluster.gluster.plugins.modules.gluster_volume', init_globals=None, run_name='__main__', alter_sys=True)
File "/usr/lib/python3.8/runpy.py", line 207, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_gluster_volume_payload_fb2mfkeq/ansible_gluster_volume_payload.zip/ansible_collections/gluster/gluster/plugins/modules/gluster_volume.py", line 621, in <module>
File "/tmp/ansible_gluster_volume_payload_fb2mfkeq/ansible_gluster_volume_payload.zip/ansible_collections/gluster/gluster/plugins/modules/gluster_volume.py", line 590, in main
File "/tmp/ansible_gluster_volume_payload_fb2mfkeq/ansible_gluster_volume_payload.zip/ansible_collections/gluster/gluster/plugins/modules/gluster_volume.py", line 373, in set_volume_option
File "/tmp/ansible_gluster_volume_payload_fb2mfkeq/ansible_gluster_volume_payload.zip/ansible_collections/gluster/gluster/plugins/modules/gluster_volume.py", line 211, in run_gluster
TypeError: sequence item 6: expected str instance, list found
failed: [testing-01] (item={'name': 'test-vol1', 'bricks': '/mnt/cluster/brick-01/test-vol1', 'gluster_nodes': ['testing-01.lab.demo', 'testing-02.lab.demo', 'testing-03.lab.demo'], 'state': 'present'}) => {
"ansible_loop_var": "item",
"changed": false,
"item": {
"bricks": "/mnt/cluster/brick-01/test-vol1",
"gluster_nodes": [
"testing-01.lab.demo",
"testing-02.lab.demo",
"testing-03.lab.demo"
],
"name": "test-vol1",
"state": "present"
},
"module_stderr": "Shared connection to testing-01.lab.demo closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_gluster_volume_payload_fb2mfkeq/ansible_gluster_volume_payload.zip/ansible_collections/gluster/gluster/plugins/modules/gluster_volume.py\", line 206, in run_gluster\r\n File \"/tmp/ansible_gluster_volume_payload_fb2mfkeq/ansible_gluster_volume_payload.zip/ansible/module_utils/basic.py\", line 2624, in run_command\r\n File \"/tmp/ansible_gluster_volume_payload_fb2mfkeq/ansible_gluster_volume_payload.zip/ansible/module_utils/basic.py\", line 2624, in <listcomp>\r\n File \"/usr/lib/python3.8/posixpath.py\", line 284, in expandvars\r\n path = os.fspath(path)\r\nTypeError: expected str, bytes or os.PathLike object, not list\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/ansible/.ansible/tmp/ansible-tmp-1604072353.5431116-373556-91835152736500/AnsiballZ_gluster_volume.py\", line 102, in <module>\r\n _ansiballz_main()\r\n File \"/home/ansible/.ansible/tmp/ansible-tmp-1604072353.5431116-373556-91835152736500/AnsiballZ_gluster_volume.py\", line 94, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/home/ansible/.ansible/tmp/ansible-tmp-1604072353.5431116-373556-91835152736500/AnsiballZ_gluster_volume.py\", line 40, in invoke_module\r\n runpy.run_module(mod_name='ansible_collections.gluster.gluster.plugins.modules.gluster_volume', init_globals=None, run_name='__main__', alter_sys=True)\r\n File \"/usr/lib/python3.8/runpy.py\", line 207, in run_module\r\n return _run_module_code(code, init_globals, run_name, mod_spec)\r\n File \"/usr/lib/python3.8/runpy.py\", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"/usr/lib/python3.8/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File 
\"/tmp/ansible_gluster_volume_payload_fb2mfkeq/ansible_gluster_volume_payload.zip/ansible_collections/gluster/gluster/plugins/modules/gluster_volume.py\", line 621, in <module>\r\n File \"/tmp/ansible_gluster_volume_payload_fb2mfkeq/ansible_gluster_volume_payload.zip/ansible_collections/gluster/gluster/plugins/modules/gluster_volume.py\", line 590, in main\r\n File \"/tmp/ansible_gluster_volume_payload_fb2mfkeq/ansible_gluster_volume_payload.zip/ansible_collections/gluster/gluster/plugins/modules/gluster_volume.py\", line 373, in set_volume_option\r\n File \"/tmp/ansible_gluster_volume_payload_fb2mfkeq/ansible_gluster_volume_payload.zip/ansible_collections/gluster/gluster/plugins/modules/gluster_volume.py\", line 211, in run_gluster\r\nTypeError: sequence item 6: expected str instance, list found\r\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
Changing it to a comma-separated string, as:
gluster_volumes:
  - name: test-vol1
    bricks: "/mnt/cluster/brick-01/test-vol1"
    gluster_nodes: "{{ groups['testing_cluster'] | product([domain_name]) | map('join', '.') | list | join(',') }}"
    state: present
Works fine.
Thanks.
I have a task that uses the gluster_volume module like this:
- name: delete glusterfs volume
  gluster_volume:
    state: absent
    name: "{{ volume.name }}"
  run_once: true
When the volume exists, it is correctly deleted, but when it doesn't exist, the task fails with the following message:
TASK [glusterfs/init/delete/server : delete glusterfs volume] ******************
fatal: [vm_sto1]: FAILED! => {"changed": false, "msg": "volume not found volume1"}
As a consequence, the play stops with FAILURE status.
What I would expect instead: the task result is "ok", indicating no change happened, and the play continues on.
Setting ignore_errors to "yes" on the task is not satisfactory because I don't want to ignore errors when actually failing to delete an existing volume; plus, I would like the task to have the "ok" status, not "failed but ignored".
I've been playing with failed_when conditions to try and mimic the behaviour I want. I ended up with this:
- name: delete glusterfs volume
  gluster_volume:
    state: absent
    name: "{{ volume.name }}"
  run_once: true
  register: output
  failed_when:
    - '"volume not found" not in output.msg'
    - not output.changed
but I'm not sure if this is robust and it definitely isn't elegant, so I would like to get rid of it if possible.
The only workaround I could find was to create another gluster_volume task where the state is set to "present" before my task tries to start a volume. In my case, the state is dynamically set from a list of volumes in a variable. The state can be any of "absent", "stopped", "present" and "started". This means that my playbook gets more complicated because the workaround task should not always be executed and also there are loops involved.
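Another way to keep the playbook flat is to probe for the volume first and gate the delete on the probe, instead of duplicating state logic per volume. A sketch only; treating the probe as read-only via changed_when/failed_when is an assumption about how you want failures surfaced:

```yaml
- name: Check whether the volume exists
  ansible.builtin.command:
    cmd: gluster --mode=script volume info {{ volume.name }}
  register: vol_info
  changed_when: false
  failed_when: false
  run_once: true

- name: delete glusterfs volume
  gluster_volume:
    state: absent
    name: "{{ volume.name }}"
  run_once: true
  when: vol_info.rc == 0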
gluster_volume module
ansible 2.9.7
config file = /data/work/master/mbcpos-platform-setup/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /data/work/master/mbcpos-platform-setup/.venv/lib/python3.6/site-packages/ansible
executable location = /data/work/master/mbcpos-platform-setup/.venv/bin/ansible
python version = 3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0]
Default
Ubuntu 18.04.4 LTS
- name: "Create GlusterFS volumes"
  gluster_volume:
    name: test_volume
    state: started
    cluster:
      - node1
      - node2
      - node3
    replicas: 3
    bricks: /glusterfs/test_volume/brick1/brick
The test_volume should be created and started.
When I execute my playbook on node1, I get the following error message:
volume not found test_volume
Based on the community decision to use true/false for boolean values in documentation and examples, we ask that you evaluate booleans in this collection and consider changing any that do not use true/false (lowercase).
See documentation block format for more info (specifically, option defaults).
If you have already implemented this or decide not to, feel free to close this issue.
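As a concrete illustration of the requested change (the option names below are from gluster_volume, but the before/after values are invented for the example):

```yaml
# Before: YAML 1.1 truthy aliases in documentation and examples
force: yes
rebalance: no

# After: lowercase true/false, per the community guidelines
force: true
rebalance: false
```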
P.S. This is an auto-generated issue; please raise any concerns here
This will be created as an issue in all collection repositories mentioned in https://github.com/ansible-community/ansible-build-data/blob/main/2.10/ansible.in once it's done.
This collection will be included in Ansible 2.10 because it contains modules and/or plugins that were included in Ansible 2.9. Please review:
The latest version of the collection available on August 18 will be included in Ansible 2.10.0, except possibly newer versions which differ only in the patch level. (For details, see the roadmap). Please release version 1.0.0 of your collection by this date! If 1.0.0 does not exist, the same 0.x.y version will be used in all of Ansible 2.10 without updates, and your 1.x.y release will not be included until Ansible 2.11 (unless you request an exception at a community working group meeting and go through a demanding manual process to vouch for backwards compatibility... you want to avoid this!).
Your collection versioning must follow all semver rules. This means:
Your collection should provide data for the Ansible 2.10 changelog and porting guide. The changelog and porting guide are automatically generated from ansible-base, and from the changelogs of the included collections. All changes from the breaking_changes, major_changes, removed_features and deprecated_features sections will appear in both the changelog and the porting guide. You have two options for providing changelog fragments to include:
changelogs/changelog.yaml inside your collection (see the documentation of changelogs/changelog.yaml format). If you cannot contribute to the integrated Ansible changelog using one of these methods, please provide a link to your collection's changelog by creating an issue in https://github.com/ansible-community/ansible-build-data/. If you do not provide changelogs/changelog.yaml or a link, users will not be able to find out what changed in your collection from the Ansible changelog and porting guide.
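For reference, a minimal changelogs/changelog.yaml might look like this (a sketch following the antsibull-changelog format; the version, date, and bugfix text are invented for illustration):

```yaml
# changelogs/changelog.yaml (hypothetical content)
ancestor: null
releases:
  1.0.0:
    changes:
      release_summary: First stable release of gluster.gluster.
      bugfixes:
        - gluster_volume - example bugfix entry, shown for illustration only.
    release_date: '2020-08-18'
```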
Run ansible-test sanity --docker -v in the collection with the latest ansible-base or stable-2.10 ansible/ansible checkout.
Be sure you're subscribed to:
If you have questions or want to provide feedback, please see the Feedback section in the collection requirements.
(Internal link to keep track of issues: ansible-collections/overview#102)
(Note: This issue was filed in a semi-automated fashion. Let me know if you see errors in this issue.)
As per the Ansible community package inclusion requirements, collections must pass ansible-test sanity tests. Version 1.0.2 of gluster.gluster, corresponding to the 1.0.2 tag in this repo, fails one or more of the required sanity tests.
Please see the errors below and address them. If these issues aren't addressed within a reasonable time period, the collection may be subject to removal from Ansible.
Thank you for your efforts and for being part of the Ansible package! We appreciate it.
The following tests were run using ansible-test version 2.16.1:
Note that this is only a subset of the required sanity tests. Please make sure you run them all in your CI.
💡 NOTE: Check the [explain] links below for more information about each test and how to fix failures.
See Sanity Tests: Ignores in the dev guide if, after reading the test-specific documentation, you still believe an error is a false positive.
The test ansible-test sanity --test validate-modules [explain] failed with 10 errors:
plugins/modules/gluster_heal_info.py:0:0: invalid-ansiblemodule-schema: AnsibleModule.supports_check_mode: required key not provided @ data['supports_check_mode']. Got None
plugins/modules/gluster_peer.py:0:0: doc-default-does-not-match-spec: Argument 'force' in argument_spec defines default as (None) but documentation defines default as (False)
plugins/modules/gluster_peer.py:0:0: doc-required-mismatch: Argument 'state' in argument_spec is not required, but is documented as being required
plugins/modules/gluster_peer.py:0:0: no-default-for-required-parameter: DOCUMENTATION.options.state: Argument is marked as required but specifies a default. Arguments with a default should not be marked as required for dictionary value @ data['options']['state']. Got {'choices': ['present', 'absent'], 'default': 'present', 'description': ['Determines whether the nodes should be attached to the pool or removed from the pool. If the state is present, nodes will be attached to the pool. If state is absent, nodes will be detached from the pool.'], 'required': True, 'type': 'str'}
plugins/modules/gluster_peer.py:0:0: parameter-list-no-elements: Argument 'nodes' in argument_spec defines type as list but elements is not defined
plugins/modules/gluster_peer.py:0:0: parameter-list-no-elements: DOCUMENTATION.options.nodes: Argument defines type as list but elements is not defined for dictionary value @ data['options']['nodes']. Got {'description': ['List of nodes that have to be probed into the pool.'], 'required': True, 'type': 'list'}
plugins/modules/gluster_volume.py:0:0: doc-default-does-not-match-spec: Argument 'force' in argument_spec defines default as (False) but documentation defines default as (None)
plugins/modules/gluster_volume.py:0:0: doc-default-does-not-match-spec: Argument 'options' in argument_spec defines default as ({}) but documentation defines default as (None)
plugins/modules/gluster_volume.py:0:0: parameter-list-no-elements: Argument 'cluster' in argument_spec defines type as list but elements is not defined
plugins/modules/gluster_volume.py:0:0: parameter-list-no-elements: DOCUMENTATION.options.cluster: Argument defines type as list but elements is not defined for dictionary value @ data['options']['cluster']. Got {'description': ['List of hosts to use for probing and brick setup.'], 'type': 'list'}
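Most of these are mechanical fixes: declare elements for every list option, and resolve the required/default conflict on state. A sketch of what the corrected gluster_peer DOCUMENTATION options could look like (YAML inside the module's DOCUMENTATION string; the same changes would need to be mirrored in the Python argument_spec, and the descriptions here are paraphrased):

```yaml
# Sketch of corrected DOCUMENTATION options for gluster_peer.
options:
  state:
    description:
      - Determines whether the nodes should be attached to the pool
        or removed from the pool.
    type: str
    choices: [present, absent]
    default: present        # has a default, so no longer marked required
  nodes:
    description:
      - List of nodes that have to be probed into the pool.
    type: list
    elements: str           # added: list options must declare elements
    required: true
  force:
    description:
      - Applicable only while removing a node from the pool.
    type: bool
    default: false          # documented default now matches argument_spec
```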
Hi all.
I started using GlusterFS at work and noticed a few things that could be improved with this collection.
I was going to work on it to help out but, looking at the repo, it kinda feels like it is dead.
Are any of the maintainers still active?
We are running sanity tests across every collection included in the Ansible community package (as part of this issue) and found that ansible-test sanity --docker against gluster.gluster 1.0.2 fails with ansible-core 2.13.0rc1 in ansible 6.0.0a2.
n/a
ansible [core 2.13.0rc1]
1.0.2
ansible-test sanity --docker
Tests are either passing or ignored.
ERROR: Found 2 import issue(s) on python 3.10 which need to be resolved:
ERROR: plugins/modules/gluster_heal_info.py:84:0: traceback: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
ERROR: plugins/modules/gluster_peer.py:81:0: traceback: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
ERROR: Found 9 validate-modules issue(s) which need to be resolved:
ERROR: plugins/modules/gluster_heal_info.py:0:0: invalid-ansiblemodule-schema: AnsibleModule.supports_check_mode: required key not provided @ data['supports_check_mode']. Got None
ERROR: plugins/modules/gluster_peer.py:0:0: doc-default-does-not-match-spec: Argument 'force' in argument_spec defines default as (None) but documentation defines default as (False)
ERROR: plugins/modules/gluster_peer.py:0:0: doc-required-mismatch: Argument 'state' in argument_spec is not required, but is documented as being required
ERROR: plugins/modules/gluster_peer.py:0:0: no-default-for-required-parameter: DOCUMENTATION.options.state: Argument is marked as required but specifies a default. Arguments with a default should not be marked as required for dictionary value @ data['options']['state']. Got {'choices': ['present', 'absent'], 'default': 'present', 'description': ['Determines whether the nodes should be attached to the pool or removed from the pool. If the state is present, nodes will be attached to the pool. If state is absent, nodes will be detached from the pool.'], 'required': True, 'type': 'str'}
ERROR: plugins/modules/gluster_peer.py:0:0: parameter-list-no-elements: Argument 'nodes' in argument_spec defines type as list but elements is not defined
ERROR: plugins/modules/gluster_peer.py:0:0: parameter-list-no-elements: DOCUMENTATION.options.nodes: Argument defines type as list but elements is not defined for dictionary value @ data['options']['nodes']. Got {'description': ['List of nodes that have to be probed into the pool.'], 'required': True, 'type': 'list'}
ERROR: plugins/modules/gluster_volume.py:0:0: doc-default-does-not-match-spec: Argument 'force' in argument_spec defines default as (False) but documentation defines default as (None)
ERROR: plugins/modules/gluster_volume.py:0:0: parameter-list-no-elements: Argument 'cluster' in argument_spec defines type as list but elements is not defined
ERROR: plugins/modules/gluster_volume.py:0:0: parameter-list-no-elements: DOCUMENTATION.options.cluster: Argument defines type as list but elements is not defined for dictionary value @ data['options']['cluster']. Got {'description': ['List of hosts to use for probing and brick setup.'], 'type': 'list'}
ERROR: The 2 sanity test(s) listed below (out of 43) failed. See error output above for details.
import --python 3.10
validate-modules
ERROR: Command "podman exec ansible-test-controller-IH9dfZhG /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible_collections/gluster/gluster LC_ALL=en_US.UTF-8 /usr/bin/python3.10 /root/ansible/bin/ansible-test sanity --containers '{}' --skip-test pylint --metadata tests/output/.tmp/metadata-gm28kjz5.json --truncate 0 --color no --host-path tests/output/.tmp/host-ec9vlkqt" returned exit status 1.
Create volume fail with msg: "Brick may be containing or be contained by an existing brick"
This is the task:
The error returned is:
TASK [glusterfs : storage1 : Creates volume] *******************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NoneType: None
fatal: [dips1-test]: FAILED! => {
"changed": false
}
MSG:
error running gluster (/usr/sbin/gluster --mode=script volume add-brick storage1 replica 3 dips1-test:/var/spool/gluster_brick//storage1/brick1 dips2-test:/var/spool/gluster_brick//storage1/brick1 dips3-test:/var/spool/gluster_brick//storage1/brick1) command (rc=1): volume add-brick: failed: Brick: dips1-test:/var/spool/gluster_brick/storage1/brick1 not available. Brick may be containing or be contained by an existing brick.
But on hosts the volume is created:
~# gluster volume info
Volume Name: storage1
Type: Replicate
Volume ID: d72f03a4-c128-4a01-9986-658eeb52ed2c
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: dips1-test:/var/spool/gluster_brick/storage1/brick1
Brick2: dips2-test:/var/spool/gluster_brick/storage1/brick1
Brick3: dips3-test:/var/spool/gluster_brick/storage1/brick1
Options Reconfigured:
auth.allow: 127.0.0.1,10.44.107.*
performance.quick-read: on
performance.write-behind: off
performance.cache-size: 128MB
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
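One plausible cause (an assumption, not confirmed from the report): the brick path the task passes contains a doubled slash (gluster_brick//storage1), while the brick already registered on the hosts has a single slash, so glusterd treats them as different paths and attempts an add-brick that then conflicts with the existing brick. Normalizing the path in the task would avoid the mismatch; the variable names below are invented:

```yaml
# Sketch: build the brick path with path_join so a trailing slash in
# gluster_brick_root cannot produce a double slash in the final path.
- name: Create volume storage1
  gluster_volume:
    state: present
    name: storage1
    replicas: 3
    cluster: "{{ gluster_cluster_hosts }}"
    bricks: "{{ [gluster_brick_root, 'storage1/brick1'] | ansible.builtin.path_join }}"
```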
Versions of tools:
~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.1 LTS
Release: 22.04
Codename: jammy
~# glusterd -V
glusterfs 10.1
~# ansible-galaxy collection list
gluster.gluster 1.0.2
~# ansible-playbook --version
ansible-playbook [core 2.14.3]
config file = /var/lib/playbooks/ubuntu/ansible.cfg
configured module search path = ['/var/lib/playbooks/generic/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /var/lib/playbooks/ansible_galaxy/collections
executable location = /usr/bin/ansible-playbook
python version = 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
Any help is appreciated.
Cheers,
Hi,
I have a problem with ansible 2.12.
My problem is that after the task that creates my volume has run, all the tasks that follow are skipped:
- name: GlusterFS | Create volume
  gluster_volume:
    state: present
    name: "{{ server.volume }}"
    bricks: "{{ item.bricks | join(',') }}"
    rebalance: true
    start_on_create: true
    cluster: "{{ server.servers }}"
    force: true
  loop: "{{ glusterfs_node }}"
  run_once: true

- name: debug test
  debug:
    msg: "test"
  changed_when: true
stdout:
TASK [sys-glusterfs : include_tasks] *******************************************
included: /home/rbureau/Documents/DigDeo-GitDev/ansible-2.12/sys-glusterfs/tasks/2-configure.yml for debian-10-bp-1, debian-10-bp-2, debian-10-bp-3 => (item={'bricks': ['/mnt/gv0'], 'servers': ['debian-10-bp-1', 'debian-10-bp-2', 'debian-10-bp-3'], 'volume': 'gv0'})
TASK [sys-glusterfs : GlusterFS | Create volume] *******************************
ok: [debian-10-bp-1] => (item={'bricks': ['/mnt/gv0'], 'servers': ['debian-10-bp-1', 'debian-10-bp-2', 'debian-10-bp-3'], 'volume': 'gv0'})
TASK [sys-glusterfs : debug test] **********************************************
skipping: [debian-10-bp-1]
skipping: [debian-10-bp-2]
skipping: [debian-10-bp-3]
Why?