terraform-provider-netapp-cloudmanager's Introduction

Terraform Provider for NetApp Cloud Volumes ONTAP for AWS, GCP and Azure

This is the repository for the Terraform Provider for NetApp Cloud Volumes ONTAP (CVO) for AWS, GCP and Azure. The Provider can be used with Terraform to work with Cloud Volumes ONTAP for AWS, GCP and Azure resources.

For general information about Terraform, visit the official website and the GitHub project page.

The provider plugin was developed by NetApp.

Naming Conventions

The APIs for NetApp Cloud Volumes ONTAP for AWS, GCP and Azure do not require resource names to be unique. Names are treated as 'labels' and resources are uniquely identified by 'ids'. However, these ids are not user-friendly, and because they are generated on the fly, they make it difficult to track resources and automate.

This provider assumes that resource names are unique, and enforces this within its scope. This is not an issue if everything is managed through Terraform, but it could raise conflicts if the rule is not respected outside of Terraform.

Using the Provider

The current version of this provider requires Terraform 0.13 or higher to run.

Terraform 0.13 introduced a provider registry, so you can use the provider directly without building it yourself. See https://registry.terraform.io/providers/NetApp/netapp-cloudmanager

If you want to build it, see the section below.

Note that you need to run terraform init to fetch the provider before deploying.

Provider Documentation

The documentation is available at: https://registry.terraform.io/providers/NetApp/netapp-cloudmanager/latest/docs

The provider documentation is also included in this repository.

Check the provider documentation for details on entering your connection information and how to get started with writing configuration for NetApp CVO resources.
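
As a minimal illustration (see the registry documentation for the full list of provider arguments), a provider block typically only supplies a Cloud Manager refresh token, here read from a variable so it stays out of source control:

provider "netapp-cloudmanager" {
  # Refresh token generated in NetApp Cloud Central
  refresh_token = var.cloudmanager_refresh_token
}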

Controlling the provider version

Note that you can also control the provider version. This is done with a required_providers block in your Terraform configuration.

The syntax is as follows:

terraform {
  required_providers {
    netapp-cloudmanager = {
      source = "NetApp/netapp-cloudmanager"
      version = "20.10.0"
    }
  }
}

Read more on provider version control.

Building The Provider

Prerequisites

If you wish to work on the provider, you'll first need Go installed on your machine (version 1.11+ is required). You'll also need to correctly set up a GOPATH, as well as add $GOPATH/bin to your $PATH.

The following go packages are required to build the provider:

	github.com/Azure/azure-sdk-for-go v46.4.0+incompatible
	github.com/Azure/go-autorest/autorest/azure/auth v0.5.3
	github.com/aws/aws-sdk-go v1.35.5
	github.com/fatih/structs v1.1.0
	github.com/hashicorp/terraform v0.13.4
	github.com/sirupsen/logrus v1.7.0
	golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43
	golang.org/x/tools v0.0.0-20201008025239-9df69603baec // indirect

Check go.mod for the latest list.

Cloning the Project

First, you will want to clone the repository to $GOPATH/terraform-provider-netapp-cloudmanager:

mkdir -p $GOPATH
cd $GOPATH
git clone https://github.com/NetApp/terraform-provider-netapp-cloudmanager.git

Running the Build

After the clone has been completed, you can enter the provider directory and build the provider.

cd $GOPATH/terraform-provider-netapp-cloudmanager
make build

Note: go install will move the binary to $GOPATH/bin

Installing the Local Plugin

With Terraform 0.13 or newer, see the sanity check section under Walkthrough example.

With earlier versions of Terraform, after the build is complete, copy the terraform-provider-netapp-cloudmanager binary into the same path as your terraform binary, and re-run terraform init.

After this, your project-local .terraform/plugins/ARCH/lock.json (where ARCH matches the architecture of your machine) file should contain a SHA256 sum that matches the local plugin. Run shasum -a 256 on the binary to verify the values match.

Developing the Provider

NOTE: Before you start work on a feature, please make sure to check the issue tracker and existing pull requests to ensure that work is not being duplicated. For further clarification, you can also ask in a new issue.

See Building the Provider for details on building the provider.

Testing the Provider

NOTE: Testing the provider for NetApp Cloud Volumes ONTAP for AWS, GCP and Azure is currently a complex operation, as it requires having a NetApp CVO subscription to test against. You can then use a .json file to expose your credentials.

Configuring Environment Variables

Most of the tests in this provider require a comprehensive list of environment variables to run. See the individual *_test.go files in the cloudmanager/ directory for more details. The next section also describes how you can manage a configuration file of the test environment variables.

Using the .tf-netapp-cloudmanager-devrc.mk file

The tf-netapp-cloudmanager-devrc.mk.example file contains an up-to-date list of the environment variables required to run the acceptance tests. Copy this to $HOME/.tf-netapp-cloudmanager-devrc.mk, change the permissions to something more secure (e.g. chmod 600 $HOME/.tf-netapp-cloudmanager-devrc.mk), and configure the variables accordingly.

Running the Acceptance Tests

After this is done, you can run the acceptance tests by running:

$ make testacc

If you want to run against a specific set of tests, run make testacc with the TESTARGS parameter containing the run mask, as shown below:

make testacc TESTARGS="-run=TestAccNetAppCVOOCCM"

This example would run all of the acceptance tests matching TestAccNetAppCVOOCCM. Change the mask for the specific tests you want to run.

Walkthrough example

Installing go and terraform

mkdir tf_na_netapp_cloudmanager
cd tf_na_netapp_cloudmanager

# if you want a private installation, use
export GO_INSTALL_DIR=`pwd`/go_install
mkdir $GO_INSTALL_DIR
# otherwise, Go recommends using
export GO_INSTALL_DIR=/usr/local

linux

curl -O https://dl.google.com/go/go1.15.2.linux-amd64.tar.gz
tar -C $GO_INSTALL_DIR -xvf go1.15.2.linux-amd64.tar.gz

export PATH=$PATH:$GO_INSTALL_DIR/go/bin

curl -O https://releases.hashicorp.com/terraform/0.13.4/terraform_0.13.4_linux_amd64.zip
unzip terraform_0.13.4_linux_amd64.zip
mv terraform $GO_INSTALL_DIR/go/bin

mac

curl -O https://dl.google.com/go/go1.15.2.darwin-amd64.tar.gz
tar -C $GO_INSTALL_DIR -xvf go1.15.2.darwin-amd64.tar.gz

export PATH=$PATH:$GO_INSTALL_DIR/go/bin

curl -O https://releases.hashicorp.com/terraform/0.13.4/terraform_0.13.4_darwin_amd64.zip
unzip terraform_0.13.4_darwin_amd64.zip
mv terraform $GO_INSTALL_DIR/go/bin

Installing dependencies

We're using go.mod to manage dependencies, so there is not much to do.

# make sure git is installed
which git

export GOPATH=`pwd`

Cloning the NetApp provider repository and building the provider

git clone https://github.com/NetApp/terraform-provider-netapp-cloudmanager.git
cd terraform-provider-netapp-cloudmanager
make build
# binary is in: $GOPATH/bin/terraform-provider-netapp-cloudmanager

The build step will install the provider in the $GOPATH/bin directory.

Sanity check

Local installation - linux

mkdir -p /tmp/terraform/netapp.com/netapp/netapp-cloudmanager/20.10.0/linux_amd64
cp $GOPATH/bin/terraform-provider-netapp-cloudmanager /tmp/terraform/netapp.com/netapp/netapp-cloudmanager/20.10.0/linux_amd64

Local installation - mac

mkdir -p ~/.terraform.d/plug-in/netapp.com/netapp/netapp-cloudmanager/20.10.0/darwin_amd64
cp $GOPATH/bin/terraform-provider-netapp-cloudmanager ~/.terraform.d/plug-in/netapp.com/netapp/netapp-cloudmanager/20.10.0/darwin_amd64

Check the provider can be loaded

cd examples/cloudmanager/local
export TF_CLI_CONFIG_FILE=`pwd`/terraform.rc
terraform init

This should do nothing but report that Terraform has been successfully initialized.
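
For reference, a minimal sketch of what such a terraform.rc CLI configuration can look like, assuming the Linux filesystem-mirror layout created above (the actual file shipped in examples/cloudmanager/local may differ):

provider_installation {
  filesystem_mirror {
    path    = "/tmp/terraform"
    include = ["netapp.com/netapp/netapp-cloudmanager"]
  }
  direct {
    exclude = ["netapp.com/netapp/netapp-cloudmanager"]
  }
}

With this in place, the example configuration can declare source = "netapp.com/netapp/netapp-cloudmanager" in its required_providers block, and terraform init resolves the provider from the locally built binary instead of the registry.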

terraform-provider-netapp-cloudmanager's People

Contributors

carchi8py, dmccaffery, lonico, mattrobinsonsre, wenjun666


terraform-provider-netapp-cloudmanager's Issues

Remove requirement for GCP service account JSON

Currently the netapp-cloudmanager_connector_gcp resource requires the service_account_path value to be set. This is insecure and often requires storing a key within a repository for Jenkins builds.

An alternative option would be to use the GCP SDK to authenticate using the local service account or gcloud auth login credentials.
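
For context, a hedged sketch of the current pattern this issue wants to avoid; the key file path is the argument in question, and the remaining argument names are illustrative, taken from other examples on this page:

resource "netapp-cloudmanager_connector_gcp" "this" {
  name       = "netapp-dev-connector"
  project_id = "my-project"
  zone       = "us-east4-c"
  company    = "Example Co"
  account_id = "account-xxxxxxxx"
  # The static key file that currently has to be stored and distributed:
  service_account_path = "/secrets/connector-sa-key.json"
}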

CVO-HA Azure deployment failure using terraform provider

While deploying a CVO-HA instance in Azure using the Terraform provider, we get the "az login" error below.

We would appreciate help understanding whether we are missing something here.

Error: Invoking Azure CLI failed with the following error: Please run 'az login' to setup account.
on cvo.tf line 1, in resource "netapp-cloudmanager_cvo_azure" "cvo_ha":
1: resource "netapp-cloudmanager_cvo_azure" "cvo_ha"

===================

Failed: Set labels of disk - in GCP

While building the Cloud Manager connector with provider version 21.3.0 in GCP, the VM fails to build after trying to set a label on the created boot disk. The error message in the activity log after successful disk creation is (sensitive info replaced with xxxxx...):

Failed: Set labels of disk

[email protected] failed to set labels of disk xxxxxxx-dev-vm-disk-boot
April 2, 2021 at 9:39:32 PM GMT-6

User
[email protected]

Resource name
projects/xxxxxxxxxxxxxxxxx/zones/us-west1-a/disks/xxxxxxxxxxxx-vm-disk-boot

Error message
Invalid argument (HTTP 400): Labels fingerprint either invalid or resource labels have changed

Response > error
  Code: 412
  Errors:
    Error 1:
      Domain: global
      Location: If-Match
      Location type: header
      Message: Labels fingerprint either invalid or resource labels have changed
      Reason: conditionNotMet
  Message: Labels fingerprint either invalid or resource labels have changed

Cannot create several aggregates at once using count or for_each (GCP).

Having an issue with creating multiple aggregates when using a count or for_each statement. Terraform creates only the first one defined in the list and then gives the error below (a sketch of the triggering configuration follows the error output):

11:41:41 module.aggregate["aggregate_cfg3"].netapp-cloudmanager_aggregate.cl-aggregate: Creation complete after 3m35s [id=what_aggr_3]
11:41:41
11:41:41 Error: code: 409, message: {"message":"Couldn't perform action Create Aggregate, because there are ongoing operations which might interfere with it: Create Aggregate","causeMessage":"OnGoingAsyncOperationException: Couldn't perform action Create Aggregate, because there are ongoing operations which might interfere with it: Create Aggregate"}
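
For reference, a hedged sketch of the kind of configuration that triggers this; the argument names are illustrative and not checked against the provider schema:

variable "working_environment_id" { type = string }
variable "client_id" { type = string }

variable "aggregates" {
  type = map(object({
    number_of_disks = number
  }))
}

resource "netapp-cloudmanager_aggregate" "cl-aggregate" {
  for_each               = var.aggregates
  name                   = each.key
  number_of_disks        = each.value.number_of_disks
  working_environment_id = var.working_environment_id
  client_id              = var.client_id
}

Until the provider serializes these calls itself, running terraform apply -parallelism=1 may work around the 409, since the error indicates the backend rejects concurrent Create Aggregate operations.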

Storage Account name derived from Connector name

Hi,

When creating a Cloud Connector, a Storage Account is created at the same time. There is currently no way of changing the Storage Account name.

The Storage Account name uses the Cloud Connector name and appends "sa" to the end.

The Cloud Connector name needs to meet the naming conventions of a Storage Account. For example, it has to be globally unique, cannot contain hyphens, etc.

As an option, can a parameter be added that allows the Storage Account name to be set?

Many thanks
Dave
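
A purely hypothetical sketch of what the requested option could look like (storage_account_name is not an existing argument of the resource, and other arguments are omitted):

resource "netapp-cloudmanager_connector_azure" "this" {
  name = "my-cloud-connector"
  # ... existing arguments ...

  # Hypothetical: name the Storage Account explicitly instead of deriving "<connector name>sa"
  storage_account_name = "mycompanyconnectorsa001"
}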

Terraform wants to create a connector in aws that already exists, and is already in the state file

I was adding to my Terraform code today and it wants to create a resource that already exists in the state file, and nothing I added is related to this resource or anything it depends on. I can't figure out why it wants to create a new one, so any help is appreciated. Here is the result of the plan for this resource and also the current resource shown in the state file:

root@PC:/mnt/e/Git/Github/aws-core-infrastructure# terraform plan -var-file=./config/test.tfvars -out test.tfplan -target netapp-cloudmanager_connector_aws.occm

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # netapp-cloudmanager_connector_aws.occm will be created
  + resource "netapp-cloudmanager_connector_aws" "occm" {
      + account_id                    = "account-xxxxxxxx"
      + associate_public_ip_address   = true
      + client_id                     = (known after apply)
      + company                       = "My Co"
      + enable_termination_protection = false
      + iam_instance_profile_name     = "iam_profile_name"
      + id                            = (known after apply)
      + instance_type                 = "t3.xlarge"
      + key_name                      = "my-key"
      + name                          = "netapp-connector"
      + region                        = "us-east-1"
      + security_group_id             = "sg-xxxxxxxx3"
      + subnet_id                     = "subnet-xxxxxxxx4"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Warning: Resource targeting is in effect

You are creating a plan with the -target option, which means that the result
of this plan may not represent all of the changes requested by the current
configuration.

The -target option is not for routine use, and is provided only for
exceptional situations such as recovering from errors or mistakes, or when
Terraform specifically suggests to use it as part of an error message.


------------------------------------------------------------------------

This plan was saved to: test.tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "test.tfplan"

root@PC:/mnt/e/Git/Github/aws-core-infrastructure# terraform state show netapp-cloudmanager_connector_aws.occm
# netapp-cloudmanager_connector_aws.occm:
resource "netapp-cloudmanager_connector_aws" "occm" {
    account_id                  = "account-xxxxxxxx"
    associate_public_ip_address = true
    client_id                   = "xxxxxxxx1"
    company                     = "My Co"
    iam_instance_profile_name   = "iam_profile_name"
    id                          = "xxxxxxxx2"
    instance_type               = "t3.xlarge"
    key_name                    = "my-key"
    name                        = "netapp-connector"
    region                      = "us-east-1"
    security_group_id           = "sg-xxxxxxxx3"
    subnet_id                   = "subnet-xxxxxxxx4"
}

Version starting 22.1.0 cannot find existing AWS connector

Hello,

We have a production workload of CVO on AWS deployed & managed using this provider. Recently we ran a Terraform plan (TF version 1.0.3) through our pipeline, which picks up the latest netapp-cloudmanager provider version.
We noticed the provider is not able to find the existing AWS connectors and after verifying they are still alive & healthy, we started pinning the provider version to older ones to see if we can find the problem there.
22.2.0 & 22.1.0 have this problem, while 21.12.0 works fine (sees the connectors and doesn't try to recreate them).

Error: InvalidParameterValue: Value (OCCM_AUTOMATION) for parameter iamInstanceProfile.name is invalid.

AWS - Creating Cloud Connector - I have successfully generated a cloud.netapp.com refresh token, created the AWS IAM role/policies, and kept the default naming convention of OCCM_AUTOMATION. It is definitely getting to AWS, as it validates my VPC and SGs, but it fails when trying to find the IAM policy OCCM_AUTOMATION with Error: InvalidParameterValue: Value (OCCM_AUTOMATION) for parameter iamInstanceProfile.name is invalid. I am aware there is a time frame after creating an AWS policy before it may show as active, but it has been 12 hours. Terraform apply bombs out, after I enter the refresh token and answer Yes, saying it can't find the above IAM policy. Hopefully I'm just doing something dumb. I can run aws iam list-policies to validate that my policy does exist.

Provider cannot find working environments when multiple environments are managed in the same Terraform statefile

When managing multiple CVO working environments (tested for AWS) within a single Terraform run (statefile), the provider fails to find the working environments.
Setup:
2 AWS Regions
(1x NetApp AWS Connector & 1x NetApp CVO AWS) per region


Not adding a debug log as this is in a production workload, but it can be easily reproduced.
Feels like an issue that can be caused by global variables/objects within the provider. Haven't tried creating multiple environments using the same NetApp Connector.

Encrypt EBS disks with AWS KMS for Cloud Manager and Cloud Volumes ONTAP

Would like the option to provide:
"aws_encryption": {
"kms_key_arn": "string",
"kms_key_id": "string"
}

and pass that through to AWS so the Cloud Manager instance is created with EBS disks encrypted by the defined key, as well as the ability to then pass that key as a variable while deploying CVO in AWS so all of its EBS disks are encrypted as well.

Currently, this would be available via the Cloud Manager API:
https://docs.netapp.com/us-en/occm/api.html#_awsencryption

Documentation needs example of CVO HA in GCP to show self-links

Currently, self-links aren't explained fully in the documentation to show how to reference them inside the provider or where to get them from Google (you cannot get them from the console, only from gcloud commands, or you have to build the path yourself).

Worth entering an example:
A self link for a VPC looks like this:
https://www.googleapis.com/compute/v1/projects/[project]/global/networks/[network_name]

An example would be:
https://www.googleapis.com/compute/v1/projects/my-project/global/networks/default

A self link for a subnet looks like this: https://www.googleapis.com/compute/v1/projects/[project]/regions/[region]/subnetworks/[subnet_name]

An example would be:
https://www.googleapis.com/compute/v1/projects/my-project/regions/us-east4/subnetworks/default
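
As a hedged illustration of how these self-links can be assembled in configuration instead of copied from gcloud (variable names are illustrative):

locals {
  vpc_self_link    = "https://www.googleapis.com/compute/v1/projects/${var.project}/global/networks/${var.network_name}"
  subnet_self_link = "https://www.googleapis.com/compute/v1/projects/${var.project}/regions/${var.region}/subnetworks/${var.subnet_name}"
}

# Alternatively, the Google provider exposes them directly:
data "google_compute_network" "vpc0" {
  name    = var.network_name
  project = var.project
}
# ...referenced as data.google_compute_network.vpc0.self_link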

Cannot change firewall_tags on netapp-cloudmanager_connector_gcp

Hello

We are deploying multiple connectors in the same GCP project but in different regions.
If I deploy the connector through the Cloud Manager UI then I can apply the custom firewall tag on the SVMs, but if I deploy it through Terraform, it always shows 'firewall-tag-bvsu'. This can be an issue if we have multiple connectors in the same GCP project because all the connectors will use the same firewall tags.
Are there any ways to customize the firewall tag on the connectors via netapp-cloudmanager_connector_gcp, as in Cloud Manager?

[Screenshot: firewall tags on the connector in GCP deployed via Cloud Manager]

[Screenshot: firewall tags on the connector in GCP deployed via Terraform]

support attaching existing role to connector

In the Cloud Manager SaaS UI, when deploying a connector you can select an existing role to apply to the connector. Terraform deployment of the connector does not support attaching a pre-existing role; this functionality needs to be added.
It currently allows you to name the instance profile, but not to use a role that already exists.

Cannot assign labels to GCP or Azure connector instances

Affected: netapp-cloudmanager_connector_gcp, netapp-cloudmanager_connector_azure

Would like an implementation close to CVO labelling, e.g.:

Azure:

azure_tag {
  tag_key   = "abcd"
  tag_value = "ABCD"
}

GCP:

gcp_label {
  label_key   = "abcd"
  label_value = "ABCD"
}

This value should be updatable after deployment is complete.

Cloud Manager Connector fails to be created in Terraform Cloud

Trying to deploy a Cloud Manager connector or Cloud Volumes ONTAP via Terraform Cloud fails as shown below in Azure. However, all the infrastructure using the AzureRM provider works without issues. This suggests the way Cloud Manager tries to authenticate using the CLI somehow clashes with the way the AzureRM provider authenticates to Azure.

[Screenshot: error from the Terraform Cloud run]

netapp-cloudmanager_cifs_server.AWS-cifs Error

When netapp-cloudmanager_cifs_server is used, I encountered the error below:

netapp-cloudmanager_cifs_server.AWS-cifs: Creating...

Error: Provider produced inconsistent result after apply

When applying changes to netapp-cloudmanager_cifs_server.AWS-cifs, provider
"registry.terraform.io/netapp/netapp-cloudmanager" produced an unexpected new
value: Root resource was present, but now absent.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.

More information from the created resources.

After the resources are created, there is very little usable information available to use elsewhere in terraform.
In our case, we are using AWS and would like to be able to get information such as,

  • The Network Interface IDs
  • The EC2 instance IDs
  • The EBS volume IDs
  • The IP addresses for each of Data, Management, Cluster, etc. (as a tag?)

Having these would allow us to create things like Route53 DNS entries, CloudWatch alarms, etc.

Resource Group created with the CVO in Azure

Hi,

When creating a CVO with netapp-cloudmanager_cvo_azure, the resource group is created at the same time. If you want to use an existing resource group, this is not possible and Terraform stops because the group already exists.

If this could be changed so the resource group can be built outside of the CVO and that group used for its resources, that would be great!

Thanks

Update data_floating_ips and svm_floating_ip for netapp-cloudmanager_cvo_aws

Hi.
Changes to data_floating_ip, data_floating_ip2, or svm_floating_ip force recreation of netapp-cloudmanager_cvo_aws. At the same time, it's possible to change data LIF addresses via the ONTAP CLI; Cloud Manager fetches these updated addresses and shows them in the web UI. There are also no issues with the mediator updating routing tables in AWS with the new addresses automatically.
It would be convenient to be able to change the floating IPs via Terraform.
As a workaround, I've tried to edit the floating IP addresses in the state file, but it looks like there is some metadata that is not updated when one changes data LIF addresses in the ONTAP CLI.

   # netapp-cloudmanager_cvo_aws.netapp-cvo must be replaced
 -/+ resource "netapp-cloudmanager_cvo_aws" "netapp-cvo" {
       ~ data_floating_ip              = "192.168.87.1" -> "192.168.87.51" # forces replacement
       ~ data_floating_ip2             = "192.168.87.2" -> "192.168.87.52" # forces replacement
       ~ id                            = "VsaWorkingEnvironment-*******" -> (known after apply)
         name                          = "****_netapp_cvo_lab"
       ~ svm_floating_ip               = "192.168.87.10" -> "192.168.87.60" # forces replacement
         # (30 unchanged attributes hidden)
 
         # (5 unchanged blocks hidden)
     }

terraform 1.0
netapp/netapp-cloudmanager v21.9.4

capacity-based licensing

Per a recent deployment with the following tf config, we are seeing the following error with the license:
module.netapp.module.systems["primary"].netapp-cloudmanager_cvo_gcp.this: Creating...

│ Error: license_type must be capacity-paygo

│ with module.netapp.module.systems["primary"].netapp-cloudmanager_cvo_gcp.this,

│ on ../../../tf-module-gcp-netapp/modules/system/cvo.tf line 1, in resource "netapp-cloudmanager_cvo_gcp" "this":

│ 1: resource "netapp-cloudmanager_cvo_gcp" "this" {

Here is the tf config example with customer info stripped for security:
~ netapp = {
+ system = {
+ primary = {
+ capacity_package_name = "Essential"
+ cluster_subnet = "netapp-cluster"
+ gcp_volume_size = "500"
+ gcp_volume_size_unit = "GB"
+ gcp_volume_type = "pd-ssd"
+ ha_cluster = false
+ ha_subnet = "netapp-ha"
+ instance_type = "n2-standard-8"
+ license_type = "gcp-cot-premium-byol"
+ primary_subnet = "netapp"
+ project = "xxxx"
+ region = "us-east4"
+ replication_subnet = "xxxx"
+ serial_number = "xxxx"
+ snapmirror_policy = "MirrorAllSnapshots"
+ source_ranges = [
+ volume_size = 500
}

firewall_tags in netapp-cloudmanager_connector_gcp should allow you to specify tags

Currently, there is no way to apply your own firewalls to the connector instance that you deploy.

I propose that the "firewall_tags" variable enables you to specify an array of tags that you wish to apply to the instance. This would enable you to use pre-defined firewalls that are created before deploying the CM instance.

This would require another boolean which would be "let_cloudmanager_create_firewall".

Currently, you would have to deploy the instance, add it to the GCP Terraform configuration, and then apply the tags manually (see the hypothetical sketch below).
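
A hypothetical sketch of the proposal (a list-valued firewall_tags and the let_cloudmanager_create_firewall boolean do not exist today):

resource "netapp-cloudmanager_connector_gcp" "this" {
  # ... existing arguments ...

  # Hypothetical: reuse pre-created firewall rules via their target tags
  firewall_tags                    = ["allow-cm-mgmt", "allow-cm-data"]
  let_cloudmanager_create_firewall = false
}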

Provider crash on changing CIFS permission for volume

Hello,

The provider is crashing when trying to change the permission on a CIFS volume.

Below is the panic snippet from TF TRACE log:

panic: runtime error: index out of range [0] with length 0
2021-07-22T14:06:04.973+0300 [DEBUG] plugin.terraform-provider-netapp-cloudmanager_v21.6.0:
2021-07-22T14:06:04.973+0300 [DEBUG] plugin.terraform-provider-netapp-cloudmanager_v21.6.0: goroutine 65 [running]:
2021-07-22T14:06:04.973+0300 [DEBUG] plugin.terraform-provider-netapp-cloudmanager_v21.6.0: github.com/netapp/terraform-provider-netapp-cloudmanager/cloudmanager.resourceCVOVolumeUpdate(0xc000463c20, 0x1c34ac0, 0xc00000a3c0, 0x24, 0x23bb460)
2021-07-22T14:06:04.973+0300 [DEBUG] plugin.terraform-provider-netapp-cloudmanager_v21.6.0: github.com/netapp/terraform-provider-netapp-cloudmanager/cloudmanager/resource_netapp_cloudmanager_volume.go:583 +0xdac
2021-07-22T14:06:04.973+0300 [DEBUG] plugin.terraform-provider-netapp-cloudmanager_v21.6.0: github.com/hashicorp/terraform/helper/schema.(*Resource).Apply(0xc00031c580, 0xc000215e30, 0xc0002899a0, 0x1c34ac0, 0xc00000a3c0, 0x1, 0x0, 0x0)
2021-07-22T14:06:04.973+0300 [DEBUG] plugin.terraform-provider-netapp-cloudmanager_v21.6.0: github.com/hashicorp/[email protected]/helper/schema/resource.go:314 +0x2b3
2021-07-22T14:06:04.973+0300 [DEBUG] plugin.terraform-provider-netapp-cloudmanager_v21.6.0: github.com/hashicorp/terraform/helper/schema.(*Provider).Apply(0xc00031cf00, 0xc0001b3a28, 0xc000215e30, 0xc0002899a0, 0xc000286050, 0x1e12428, 0xc000286050)
2021-07-22T14:06:04.973+0300 [DEBUG] plugin.terraform-provider-netapp-cloudmanager_v21.6.0: github.com/hashicorp/[email protected]/helper/schema/provider.go:297 +0x99
2021-07-22T14:06:04.973+0300 [DEBUG] plugin.terraform-provider-netapp-cloudmanager_v21.6.0: github.com/hashicorp/terraform/helper/plugin.(*GRPCProviderServer).ApplyResourceChange(0xc00030e068, 0x1e11660, 0xc00024fa10, 0xc000215c70, 0xc00030e068, 0xc00024fa10, 0xc0005c1ba0)
2021-07-22T14:06:04.973+0300 [DEBUG] plugin.terraform-provider-netapp-cloudmanager_v21.6.0: github.com/hashicorp/[email protected]/helper/plugin/grpc_provider.go:923 +0x8e5
2021-07-22T14:06:04.973+0300 [DEBUG] plugin.terraform-provider-netapp-cloudmanager_v21.6.0: github.com/hashicorp/terraform/internal/tfplugin5._Provider_ApplyResourceChange_Handler(0x1bfc940, 0xc00030e068, 0x1e11660, 0xc00024fa10, 0xc00030baa0, 0x0, 0x1e11660, 0xc00024fa10, 0xc00027a800, 0x719)
2021-07-22T14:06:04.973+0300 [DEBUG] plugin.terraform-provider-netapp-cloudmanager_v21.6.0: github.com/hashicorp/[email protected]/internal/tfplugin5/tfplugin5.pb.go:3303 +0x214
2021-07-22T14:06:04.973+0300 [DEBUG] plugin.terraform-provider-netapp-cloudmanager_v21.6.0: google.golang.org/grpc.(*Server).processUnaryRPC(0xc0001b6c40, 0x1e1a418, 0xc000001680, 0xc00021f100, 0xc000554a50, 0x237c660, 0x0, 0x0, 0x0)
2021-07-22T14:06:04.973+0300 [DEBUG] plugin.terraform-provider-netapp-cloudmanager_v21.6.0: google.golang.org/[email protected]/server.go:1180 +0x52b
2021-07-22T14:06:04.973+0300 [DEBUG] plugin.terraform-provider-netapp-cloudmanager_v21.6.0: google.golang.org/grpc.(*Server).handleStream(0xc0001b6c40, 0x1e1a418, 0xc000001680, 0xc00021f100, 0x0)
2021-07-22T14:06:04.973+0300 [DEBUG] plugin.terraform-provider-netapp-cloudmanager_v21.6.0: google.golang.org/[email protected]/server.go:1503 +0xd0c
2021-07-22T14:06:04.973+0300 [DEBUG] plugin.terraform-provider-netapp-cloudmanager_v21.6.0: google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc0000aa460, 0xc0001b6c40, 0x1e1a418, 0xc000001680, 0xc00021f100)
2021-07-22T14:06:04.973+0300 [DEBUG] plugin.terraform-provider-netapp-cloudmanager_v21.6.0: google.golang.org/[email protected]/server.go:843 +0xab
2021-07-22T14:06:04.973+0300 [DEBUG] plugin.terraform-provider-netapp-cloudmanager_v21.6.0: created by google.golang.org/grpc.(*Server).serveStreams.func1
2021-07-22T14:06:04.973+0300 [DEBUG] plugin.terraform-provider-netapp-cloudmanager_v21.6.0: google.golang.org/[email protected]/server.go:841 +0x1fd
2021-07-22T14:06:04.978+0300 [DEBUG] plugin: plugin process exited: path=.terraform/providers/registry.terraform.io/netapp/netapp-cloudmanager/21.6.0/darwin_amd64/terraform-provider-netapp-cloudmanager_v21.6.0 pid=13950 error="exit status 2"
2021-07-22T14:06:04.979+0300 [WARN] plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2021/07/22 14:06:04 [DEBUG] module.ntap_cvo_frankfurt.module.volume_app["dev"].netapp-cloudmanager_volume.volume: apply errored, but we're indicating that via the Error pointer rather than returning it: rpc error: code = Canceled desc = context canceled

Below is the plan output:

[Screenshot: terraform plan output]

And here's the output for terraform version:

[Screenshot: terraform version output]

The problem seems to be that the shareInfoUpdateRequest struct has a list of AC objects, unlike shareInfoRequest which has only the direct struct. The code that updates the AC object at line 583 in cloudmanager/resource_netapp_cloudmanager_volume.go then tries to access the first element in the list, but the list is empty.

[v21.8.1] AWS Connector reads only first security group.

Hello,

The recently released version, 21.8.1, has a bug in reading the AWS connector, setting only the first SG in the security_group_id property for the createOCCMDetails object.
Line 277 of cloudmanager/resource_netapp_cloudmanager_connector_aws.go
The value must be a comma-delimited list of security group ids associated with the connector EC2 instance.
This causes resource recreation which cascades down to entire CVO environments being recreated when more than one security group is used.

Cloudmanager provider issue - TFE plan tries to create a 4-month-old existing CM, as if it did not exist...

Hello.

Since yesterday afternoon, we are no longer able to provision any resources (AWS EFS, FSx, S3 buckets, ...) in our 4 PROD accounts in AWS via Terraform, because when running a simple terraform plan for any other resource (EFS or otherwise) not related to our Cloud Manager deployment done via the NetApp provider, the pipeline tries to create a new Cloud Manager, as if it did not find the existing one (which has existed for more than 4 months).

I verified the workspace state: it is listed in there, and no changes were made since the last apply.

For precision: we use Terraform Enterprise with Terraform v0.13.5 and the netapp-cloudmanager provider from the HashiCorp registry, version 21.1.1.

Below you can see the Terraform log showing the symptoms:

Terraform v0.13.5
Configuring remote state backend...
Initializing Terraform configuration...
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

....................


An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # netapp-cloudmanager_connector_aws.xxxxxxxxxxxxxxxxxx **will be created**
  + resource "netapp-cloudmanager_connector_aws" "xxxxxxxxxxxxx" {
      + account_id                    = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
      + associate_public_ip_address   = false
      + client_id                     = (known after apply)
      + company                       = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
      + enable_termination_protection = false
      + iam_instance_profile_name     = "occm"
      + id                            = (known after apply)
      + instance_type                 = "t3.xlarge"
      + key_name                      = "NetApp"
      + name                          = "ncmopgr01"
      + proxy_certificates            = [
          + "./xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.crt",
        ]
      + proxy_url                     = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
      + region                        = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
      + security_group_id             = "xxxxxxxxxxxxxxxxxxxxxx"
      + subnet_id                     = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
.
.
.

     
    }

Plan: **1 to add**, 0 to change, 0 to destroy.

capacityTier accepts "none" as valid entry, but creation fails because "none" isn't referenced in the if statement

if snapMirror.ReplicationVolume.DestinationCapacityTier == "" {

netapp-cloudmanager_cvo_gcp.cl-cvo-gcp: Refreshing state... [id=vsaworkingenvironment-3r3ljrnd]
netapp-cloudmanager_volume.cvo-volume-source: Refreshing state... [id=b2abcd64-76c0-11eb-93c7-4981710572cf]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:

  + create

Terraform will perform the following actions:

  # netapp-cloudmanager_snapmirror.cl-snapmirror will be created
  + resource "netapp-cloudmanager_snapmirror" "cl-snapmirror" {
      + capacity_tier                      = "none"
      + client_id                          = "DIIfQ0VYmWZNHuUloTNTo4jaAuIdgdsp"
      + destination_aggregate_name         = "aggr1"
      + destination_svm_name               = "svm_gcpreplnetapp"
      + destination_volume_name            = "tgt_vol1"
      + destination_working_environment_id = "vsaworkingenvironment-msywnwae"
      + id                                 = (known after apply)
      + max_transfer_rate                  = 102400
      + policy                             = "MirrorAllSnapshots"
      + schedule                           = "5min"
      + source_svm_name                    = "svm_terraformreplsrc"
      + source_volume_name                 = "src_vol1"
      + source_working_environment_id      = "vsaworkingenvironment-3r3ljrnd"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value: yes

netapp-cloudmanager_snapmirror.cl-snapmirror: Creating...

Error: code: 400, message: {"message":"Request contains invalid parameters.","violations":[{"path":"capacityTier","message":"must be one of [S3, Blob, cloudStorage]. Actual: none"}]}

on CVO_terraform.tf line 54, in resource "netapp-cloudmanager_snapmirror" "cl-snapmirror":
54: resource "netapp-cloudmanager_snapmirror" "cl-snapmirror" {

OCCM agent taking too long when creating the Cloud Connector

Hi all,

Only recently have I started having issues deploying a Cloud Connector using the latest version of the provider (21.9.4). After about 17 minutes the following error occurs in the console.

Error: Taking too long for OCCM agent to be active or not properly setup

  with netapp-cloudmanager_connector_azure.this,
  on cloud-connector.tf line 8, in resource "netapp-cloudmanager_connector_azure" "this":
   8: resource "netapp-cloudmanager_connector_azure" "this" {

I have had it deploy in the past with no issues.

Is anyone else having this problem or could it be my environment causing it?

Many thanks
Dave

Setting Ontap Version gives InvalidMetadataException. GCP CVO-HA

Hello,
Whichever version I try to set with ontap_version, I get the following error:
"InvalidMetadataException: There is no valid configuration for Ontap Version 9.11.0P1."
I also tried 9.10, 9.10.1, 9.10.3, and 9.11.1.
Could you please advise if my version formats are wrong?
Thanks

Cannot change subnet_id on netapp-cloudmanager_connector_gcp for shared VPC

We are deploying the connector on a shared VPC in GCP as shown below, but the subnet_id is not applied as expected.
We have to use the host1 subnet, but it seems there is no way to change the subnet. We also tried a hard-coded value but it did not work.
Could you have a look at the issue?

It should show:

  subnetwork: projects/host1/regions/us-east4/subnetworks/dev3

but it returns:

  subnetwork: projects/dev1/regions/us-east4/subnetworks/projects/ap-engg-develop-host1/regions/us-east4/subnetworks/dev3

Code:

provider "google" {
  project = "dev1"
  region  = "us-east4"
}

provider "google" {
  alias   = "shared"
  project = "host1"
  region  = "us-east4"
}

data "google_compute_subnetwork" "this" {
  provider = google.shared  
  name    = "dev3"
  region  = "us-east4"
  project = "host1"  
}

resource "netapp-cloudmanager_connector_gcp" "this" {
  depends_on            = [google_compute_firewall.this]
  name                  = "netapp-dev-connector"
  zone                  = "us-east4-c"
  company               = "xxxxx"
  service_account_email = google_service_account.deploy.email
  service_account_key   = base64decode(google_service_account_key.deploy.private_key)
  account_id            = "xxxxx"
  project_id            = "xxxxx"
  associate_public_ip   = false
  subnet_id             = data.google_compute_subnetwork.this.id
}

apply fails but connector created

Hello NetApp,

When applying code that creates a connector and CVO instance on AWS, the apply fails with the following:

netapp-cloudmanager_connector_aws.cl-occm-aws: Still creating... [3m30s elapsed]
netapp-cloudmanager_connector_aws.cl-occm-aws: Creation complete after 3m35s [id=i-0a7af118b944b5910]
netapp-cloudmanager_cvo_aws.cvo-aws: Creating...

Error: code: 400, message: Failure received for messageId OF99nQNr with context . Failure message: {"message":"Connection refused: /127.0.0.1:80"}

  on main.tf line 13, in resource "netapp-cloudmanager_cvo_aws" "cvo-aws":
  13: resource "netapp-cloudmanager_cvo_aws" "cvo-aws" {

but the connector has been successfully created and shows up as active in Cloud Manager. A subsequent apply initiated the creation of a CVO instance:

Thomass-MacBook-Pro:aws tomh$ terraform apply -auto-approve
netapp-cloudmanager_connector_aws.cl-occm-aws: Refreshing state... [id=i-0a7af118b944b5910]
netapp-cloudmanager_cvo_aws.cvo-aws: Creating...
netapp-cloudmanager_cvo_aws.cvo-aws: Still creating... [10s elapsed] 

...

netapp-cloudmanager_cvo_aws.cvo-aws: Creation complete after 23m16s [id=VsaWorkingEnvironment-28gXwFQr]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Terraform assumes ALL VPCs are shared when using shared VPC for VPC 0 in GCP

When setting network_project_id in the netapp-cloudmanager_cvo_gcp resource, Terraform uses that same network_project_id for all of the VPC paths, not just the path for VPC 0. This means that if we want to use a VPC residing in the project_id project, we cannot.

Current behavior:

VPC0: Shared VPC using network_project_id
VPC1: Private VPC using project_id
VPC2: Private VPC using project_id
VPC3: Private VPC using project_id

Path on API call for this should be:
VPC0: projects/[network_project_id]/global/networks/[vpc0_node_and_data_connectivity]
VPC1: projects/[network_project_id]/global/networks/[vpc1_cluster_connectivity]
VPC2: projects/[network_project_id]/global/networks/[vpc2_ha_connectivity]
VPC3: projects/[network_project_id]/global/networks/[vpc3_data_replication]

Desired behavior:

VPC0: Shared VPC using network_project_id
VPC1: Private VPC using project_id
VPC2: Private VPC using project_id
VPC3: Private VPC using project_id

Path on API call for this should be:
VPC0: projects/[network_project_id]/global/networks/[vpc0_node_and_data_connectivity]
VPC1: projects/[project_id]/global/networks/[vpc1_cluster_connectivity]
VPC2: projects/[project_id]/global/networks/[vpc2_ha_connectivity]
VPC3: projects/[project_id]/global/networks/[vpc3_data_replication]

Similarly, the subnet paths need to be updated in line with the VPC paths

Changing route_table_ids parameter in netapp-cloudmanager_cvo_aws shouldn't force resource recreation

terraform 1.0
netapp/netapp-cloudmanager v21.9.4

Editing list of route tables in route_table_ids parameter for netapp-cloudmanager_cvo_aws resource forces recreation of the CVO.

  # netapp-cloudmanager_cvo_aws.netapp-cvo-test must be replaced
-/+ resource "netapp-cloudmanager_cvo_aws" "netapp-cvo-test" {
      ~ id                            = "VsaWorkingEnvironment-6IjbneoS" -> (known after apply)
        name                          = "test"
      ~ route_table_ids               = [ # forces replacement
            # (5 unchanged elements hidden)
            "rtb-08b706c9cfa12fab9",
          + "rtb-007916e33e0b7a3bc",
        ]
        # (31 unchanged attributes hidden)

        # (5 unchanged blocks hidden)
    }

Plan: 1 to add, 0 to change, 1 to destroy

Such a change shouldn't force recreation of netapp-cloudmanager_cvo_aws, because it's possible to edit the list of route tables in the Cloud Manager web UI without disruption.

Unexpected SHA-256 hash for `21.9.2` release

It appears that the latest release published in the official Registry (5 days ago) doesn't match what was just published here on GitHub (1 hour ago).

This can be reproduced with terraform init and a corresponding required_providers entry, which results in the following:

Error: Failed to install provider

Error while installing netapp/netapp-cloudmanager v21.9.2: checksum list has
unexpected SHA-256 hash
d64332f424f8de615f347277c70532141aa0e4314250e557942c102c1cab40cc (expected
ecbeadad5b629628301a14da48c2f1b7242e00d26ff54ae278d562702dfd92a9)

Deploying CVO to an existing Resource group in Azure is getting ignored

I tried specifying an existing RG with the resource_group parameter, but it is ignored and a new RG is created. I understand that deploying CVO to an existing RG is not recommended; however, we have a dedicated RG for CVO components, which is our requirement. Has anyone deployed CVO in an existing RG before? Please share your views.

CVO VM type change requires CVO recreation (GCP)

Hello,

When I try to change the GCE machine type of the CVO, the CVO is recreated if the "writing_speed_state" argument is not specified:

[Screenshot: terraform plan showing the CVO being replaced]

If I specify the "writing_speed_state" argument, the CVO does not need to be recreated.

I believe the expected behaviour is the second one (a GCE machine type change should not require recreating the CVO), and that there is an issue with the optional "writing_speed_state" argument.

Thanks
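
A hedged sketch of the workaround described above, i.e. setting writing_speed_state explicitly when changing the machine type (the values shown are illustrative and other arguments are omitted):

resource "netapp-cloudmanager_cvo_gcp" "this" {
  # ... existing arguments ...

  instance_type       = "n2-standard-8"  # the attribute being changed
  writing_speed_state = "NORMAL"         # set explicitly so the change happens in place
}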

Import of netapp-cloudmanager_connector_aws fails

Import of netapp-cloudmanager_connector_aws fails.
Terraform versions: v0.14.10, v0.15.5, v1.0.1.
Provider version: registry.terraform.io/netapp/netapp-cloudmanager v21.6.0

Command I use: terraform import netapp-cloudmanager_connector_aws.cl-occm-aws i-0339289e5c20097d1

Error: Cannot import non-existent remote object

While attempting to import an existing object to
netapp-cloudmanager_connector_aws.cl-occm-aws, the provider detected that no
object exists with the given id. Only pre-existing objects can be imported;
check that the id is correct and that it is associated with the provider's
configured region or endpoint, or use "terraform apply" to create a new remote
object for this resource.

Way to reproduce:
Create the connector via Terraform, edit the state file and remove the section with the created connector, then try to import the connector.

Terraform code used for connector creation:

terraform {
  required_providers {
    netapp-cloudmanager = {
      source = "NetApp/netapp-cloudmanager"
      version = "21.6.0"
    }
  }
}

provider "netapp-cloudmanager" {
  refresh_token = "*******************************"
}


resource "netapp-cloudmanager_connector_aws" "cl-occm-aws" {
  provider      = netapp-cloudmanager
  name          = "netapp-cvo-poc-ConnectorAWS"
  region        = "eu-central-1"
  key_name      = "netapp-cvo-poc"
  company       = "Test Company"
  instance_type = "t3.xlarge"
  aws_tag {
    tag_key   = "Purpose"
    tag_value = "netapp-cvo-poc"
  }
  subnet_id                 = "subnet-09c67948dab3b2654"
  security_group_id         = "sg-0965aa20d2bd2af06"
  iam_instance_profile_name = "netapp-cvo-poc-connectorOCCM1612800196794-OCCMInstanceProfile-1LEQGFWBLJNEO"
  account_id                = ""*******************************""
}

State file before removing connector description:

{
  "version": 4,
  "terraform_version": "0.14.10",
  "serial": 16,
  "lineage": "dc5bda82-dd44-8beb-f574-8e7b222d7080",
  "outputs": {},
  "resources": [
    {
      "mode": "managed",
      "type": "netapp-cloudmanager_connector_aws",
      "name": "cl-occm-aws",
      "provider": "provider[\"registry.terraform.io/netapp/netapp-cloudmanager\"]",
      "instances": [
        {
          "schema_version": 0,
          "attributes": {
            "account_id": ""*******************************"",
            "ami": null,
            "associate_public_ip_address": true,
            "aws_tag": [
              {
                "tag_key": "Purpose",
                "tag_value": "netapp-cvo-poc12345"
              }
            ],
            "client_id": "A6sLVR1Hzk7uu6GFc8hmfMcYFUjc5LZd",
            "company": "Test Company",
            "enable_termination_protection": false,
            "iam_instance_profile_name": "netapp-cvo-poc-connectorOCCM1612800196794-OCCMInstanceProfile-1LEQGFWBLJNEO",
            "id": "i-0339289e5c20097d1",
            "instance_type": "t3.xlarge",
            "key_name": "netapp-cvo-poc",
            "name": "netapp-cvo-poc-ConnectorAWS",
            "proxy_certificates": null,
            "proxy_password": null,
            "proxy_url": null,
            "proxy_user_name": null,
            "region": "eu-central-1",
            "security_group_id": "sg-0965aa20d2bd2af06",
            "subnet_id": "subnet-09c67948dab3b2654"
          },
          "sensitive_attributes": [],
          "private": ""*******************************""
        }
      ]
    }
  ]
}

State file while I'm trying to import connector:

{
  "version": 4,
  "terraform_version": "0.14.10",
  "serial": 16,
  "lineage": "dc5bda82-dd44-8beb-f574-8e7b222d7080",
  "outputs": {},
  "resources": [
 
  ]
}

RFE - Support capacity licensing within provider

Current license types only include BYOL and PAYGO, but we need additional support for capacity packages (and freemium).

Would need:

  • an additional license_type: capacity-paygo
  • an additional parameter: capacity_package_name: essential/freemium/professional
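
Based on the error and configuration shown in the capacity-based licensing report earlier on this page, a hedged sketch of the requested combination could look like this (other arguments omitted):

resource "netapp-cloudmanager_cvo_gcp" "this" {
  # ... existing arguments ...

  license_type          = "capacity-paygo"
  capacity_package_name = "Essential"
}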

Parameter naming consistency in Data Connector Azure for vnet and subnet ID's

I was creating a data connector in Azure and found the parameter requirements for vnet_id and subnet_id a bit confusing, as they are not looking for the resource IDs but for the vnet and subnet names respectively. Looking at the code below (occm_azure.go lines 93-104), I can see the resource ID string gets created based on the parameters provided.

if occmDetails.VnetResourceGroup != "" {
    registerAgentTOService.Placement.Network = fmt.Sprintf("/subscriptions/%s/resourceGroups/%s/providers/Microsoft.Network/virtualNetworks/%s", occmDetails.SubscriptionID, occmDetails.VnetResourceGroup, occmDetails.VnetID)
} else {
    registerAgentTOService.Placement.Network = fmt.Sprintf("/subscriptions/%s/resourceGroups/%s/providers/Microsoft.Network/virtualNetworks/%s", occmDetails.SubscriptionID, occmDetails.ResourceGroup, occmDetails.VnetID)
}
registerAgentTOService.Placement.Subnet = fmt.Sprintf("%s/subnets/%s", registerAgentTOService.Placement.Network, occmDetails.SubnetID)
userData, newClientID, err := c.getCustomData(registerAgentTOService, proxyCertificates, clientID)
if err != nil {
    return OCCMMResult{}, err
}

I can see there are two options:

  1. Replace the parameters vnet_id and subnet_id with vnet_name and subnet_name, to be consistent with the AzureRM provider.

  2. Accept vnet_id and subnet_id as the actual resource IDs and remove the string manipulation that generates the resource ID within the code. When provisioning infrastructure with Terraform we can pass this ID on from the vnet and subnet creation, as those attributes are exported (e.g. by azurerm_virtual_network). However, that would mean a breaking change.

Hope that provides some context.

Cloud Manager removes existing RG tags in Azure

While deploying CVO to an existing RG in Azure, Cloud Manager forcibly removes the existing tags and replaces them with "ExistingRGDeploymentDenyDeletion : true".
Removing existing tags from a resource group might break other configuration management functionality. Hence, it should apply its own tag without affecting existing tags.

Importing Azure Working Environment fails with "Missing X-Agent-Id"

Command run:
shell>>:Prod felixmelligan$ terraform import 'netapp-cloudmanager_cvo_azure.cl-azure[0]' VsaWorkingEnvironment-wDPK77mG
netapp-cloudmanager_cvo_azure.cl-azure[0]: Importing from ID "VsaWorkingEnvironment-wDPK77mG"...
netapp-cloudmanager_cvo_azure.cl-azure[0]: Import prepared!
Prepared netapp-cloudmanager_cvo_azure for import
netapp-cloudmanager_cvo_azure.cl-azure[0]: Refreshing state... [id=VsaWorkingEnvironment-wDPK77mG]

Error: code: 400, message: Missing X-Agent-Id header

It looks like we need to specify the clientID of the connector, but I'm not sure where.

Thanks!

add support for darwin/arm64 (apple silicon)

The provider does not currently publish a build with support for Apple silicon. Support for this should be added, as Terraform now officially supports darwin/arm64 and HashiCorp has updated all of their providers.
