cloudfoundry / cloud-service-broker

OSBAPI service broker that uses Terraform to provision and bind services. Derived from https://github.com/GoogleCloudPlatform/gcp-service-broker

License: Apache License 2.0

Dockerfile 0.12% Makefile 0.38% Go 98.68% Shell 0.18% Mustache 0.08% HCL 0.56% Procfile 0.01%
osbapi terraform azure gcp aws cloud-foundry service-broker cff-wg-service-management opentofu

cloud-service-broker's Introduction


Warning: From version 1.0.0 onwards the Cloud Service Broker only supports OpenTofu. Custom brokerpaks need to specify an OpenTofu version, and the upgrade process must be followed for existing instances. Only upgrades from Terraform versions 1.5.x are supported. For more information, see the OpenTofu migration guide.

Cloud Service Broker

An OSBAPI-compliant service broker that uses OpenTofu to create service instances.

This is a service broker built to be used with Cloud Foundry and Kubernetes. It adheres to the Open Service Broker API v2.13.

Cloud Service Broker is a fork of the GCP Service Broker and uses Brokerpaks to expose services. As long as your target cloud has an OpenTofu provider, services can be provisioned via a common interface using standard cf CLI commands.

Some of the benefits over traditional, IaaS-provided, service brokers include:

  • Easily extensible and maintainable. Less talking to far-flung teams, more getting work done.
  • One common broker for all brokered services. Cloud Service Broker decouples the service broker functionality from the catalog of services that it exposes.
  • CredHub integration out of the box. CredHub encrypts and manages all the secrets associated with your usage of cloud services.
  • Community. When you expose a service via a Brokerpak, you can make it available to everyone who uses CSB.
  • Possible to migrate existing services using OpenTofu Import

Architecture

Architecture Diagram

Slack

Please reach out on the #cloudservicebroker channel in the Cloud Foundry Slack!

Installation

This service broker can be installed as a CF application. See the installation instructions for your target platform.

CSB-Provided Brokerpaks

To examine the Brokerpaks that have been created for the major public clouds (AWS, Azure, GCP), or to submit issues or pull requests to them, see their respective repositories.

Usage

For operators: see docs/configuration.md for details about configuring the service broker.

For developers: see the docs/ README for service options and details.

You can get documentation specific to your install from the /docs endpoint of your deployment.
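
For example, a deployed broker's documentation can be fetched with a plain HTTP request. This is only a sketch: the hostname and basic-auth credentials are placeholders, and whether the endpoint requires authentication depends on how the broker is configured.

# fetch the generated documentation for the brokerpaks served by this deployment
curl -u broker-user:broker-pass https://my-csb.example.com/docs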

Commands

The service broker can be run both as a server (the service broker) and as a general-purpose command-line utility. It supports the following sub-commands (a brief example follows the list):

  • client - A CLI client for the service broker.
  • config - Show and merge configuration options together.
  • help - Help about any command.
  • serve - Start the service broker.
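
For example, the serve and client sub-commands can be combined to exercise the broker end to end. This is a minimal sketch: the config file name, instance ID, GUIDs and parameters are placeholders, and the client flags mirror the provision example quoted later in this document.

# start the broker, reading settings from a config file and/or environment variables
./cloud-service-broker serve --config config.yml

# in another shell, use the built-in client to send OSBAPI requests to the running broker
./cloud-service-broker client provision --instanceid my-instance --serviceid <service-guid> --planid <plan-guid> --params '{"some_param": "some-value"}'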

Development

make is used to orchestrate most development tasks. Go is required to build the broker. If you don't have Go installed, you can use Docker to launch an interactive shell into a supported image containing all the necessary tools. For example:

# From the root of this repo run:
docker run -it --rm -v "${PWD}:/repo" --workdir "/repo" --entrypoint "/bin/bash" golang:latest
make

There are make targets for most common dev tasks. Running make without a target will list the possible targets.

command           action
make build        builds the broker into ./build
make test-units   runs unit tests
make clean        removes binaries and built brokerpaks

Local mimic commands

The mimic commands look and feel like Cloud Foundry CLI commands, but actually run CSB actions locally. They are useful when developing brokerpaks. By using the make install target, you can install the CSB as a local command called csb. The mimic commands are:

  • csb create-service - creates a service instance
  • csb services - lists created service instances
  • csb service - displays information on an existing service instance
  • csb update-service - updates a service instance
  • csb upgrade-service - upgrades a service instance
  • csb delete-service - deletes a service instance
  • csb create-service-key - creates a "binding" and prints credentials
  • csb service-keys - lists service keys
  • csb service-key - prints a service key
  • csb delete-service-key - deletes a "binding"

The mimic commands build a brokerpak, start an ephemeral CSB server and send OSBAPI requests to it in a similar style to what Cloud Foundry would do; a sketch of a typical workflow is shown below. The CSB database is stored as a file called .csb.db.
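
As an illustration, a typical local workflow might look like the following sketch. It assumes it is run from a directory containing a brokerpak; the offering (csb-azure-mysql), plan (small), instance name and parameters are placeholders for whatever your brokerpak defines, and the flags are assumed to mirror their cf CLI equivalents.

# install the CSB as a local command called csb
make install

# provision a service instance locally (offering, plan and params are examples only)
csb create-service csb-azure-mysql small my-db -c '{"location": "eastus"}'

# inspect what was created
csb services
csb service my-db

# create a "binding" and print its credentials
csb create-service-key my-db my-key
csb service-key my-db my-key

# clean up (state is kept in the local .csb.db file)
csb delete-service-key my-db my-key
csb delete-service my-db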

Additionally, there are commands which use the same framework to run the example tests (a short example follows the list). These are:

  • csb examples - list the example tests
  • csb run-examples - runs the specified example tests
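
For example (a sketch, run from a brokerpak directory):

# list the example tests defined by the brokerpak, then run them
csb examples
csb run-examples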

Bug Reports, Feature Requests, Documentation Requests & Support

File a GitHub issue for bug reports and documentation or feature requests. Please use the provided templates.

Contributing

We are always looking for folks to contribute Brokerpaks!

See Brokerpak Dissection and the user guides for more information on how to contribute to existing brokerpaks and how build one from scratch.

cloud-service-broker's People

Contributors

blgm, ccemeraldeyes, dependabot-preview[bot], dependabot[bot], dominikmueller, erniebilling, evandbrown, felisiam, fnaranjo-vmw, gberche-orange, jhvhs, jimbo459, johnsonj, josephlewis42, mbrukman, mkjelland, mogul, mszostok, omerbensaadon, pivotal-marcela-campo, rambleraptor, servicesenablement, sophiawho, stefanbotzenhartdmb2bcom, stevewallone, sujitdmello, svennela, svennela-pivotal, tinygrasshopper, zucchinidev


cloud-service-broker's Issues

[BUG] Tile's default "Location" not being used with service "csb-azure-mssql-server" and "csb-azure-mssql"

Description

When installing using the tile, we configure the default "Location" in the "Azure Config". Those default settings are not honored when using the "csb-azure-mssql-server" and "csb-azure-mssql" services. It appears that those defaults are overridden by the default location setting in the "azure-mssql.yml" and "azure-mssql-server.yml" files.

Expected Behavior

The default Location and Resource-group configured in the tile should be reflected across all Azure service provisioning.

Actual Behavior

"csb-azure-mssql-server" and "csb-azure-mssql" services default to "westus" , which are the default specified setting in the "azure-mssql.yml" and "azure-mssql-server.yml" files.

Possible Fix

Only use the default setting specified in "azure-mssql.yml" and "azure-mssql-server.yml" if it is not a tile installation or no default settings are configured.

Steps to Reproduce

  1. Install the tile version
  2. Set default location settings in "Azure Config"
  3. Create "csb-azure-mssql-server" and "csb-azure-mssql" service instances without specifying the "Location"

Context

We are trying to limit the parameters that developers need to enter when provisioning services, so we want to use the default settings as much as possible.

Your Environment

Version used: Cloud service broker for Microsoft Azure 0.0.35
Platform (Azure/AWS/GCP): Azure
Applicable Services: Bosh 2.10.1 TAS: 2.10.3

[DOCS] Update example to connect to a db in a second foundation in documentation

Documentation Requested

In the mssql-db-fog-config documentation, the example for connecting from a second foundation to an existing db needs to include the "instance_name" and "db_name".

URL:
https://github.com/pivotal/cloud-service-broker/blob/master/docs/mssql-db-fog-config.md

Section:
"And then connect to that db in the second foundation:

Example:
cf create-service csb-azure-mssql-db-failover-group existing medium-fog -c '{"server_pair":"pair1"}'

Needs to change to:
cf create-service csb-azure-mssql-db-failover-group existing medium-fog -c '{"server_pair":"pair1","instance_name": "csb-failover-group-test", "db_name":"test-db"}'

[FR] Control backup retention period for Azure SQL DB

Describe the solution you'd like
Would like to be able to control Azure SQL DB backup retention period as the default has been changed by Microsoft from 35 down to 7 days.

Describe alternatives you've considered
N/A

Additional Context
N/A

Priority
Medium

Priority Context
New default backup retention period does not meet business requirements.

Platform
Azure

Applicable Services
azure-mssql-db

Other notes
I will submit a pull request to implement this feature

[FR] Set default network_rules actions and authorized_network for the csb-azure-storage-account service

Is your feature request related to a problem? Please describe.
When deploying Azure storage accounts, we do not want to have them opened to the public. We want to be able to leverage the options to set network_rules in the storage account's "Firewall and virtual networks"

We want to be able to allow access from "Selected networks" only, in order to secure the storage account.

Describe the solution you'd like
We want to be able to set the Terraform default_action to "Deny" and then set authorized_networks for that storage account.

Describe alternatives you've considered
Putting in place a pipeline / automated process to set these default settings

Additional Context
We would like to be able to set these as defaults and also be able to set these options as a cf command flag parameter.

This is a feature that we would need to have in place before being able to go in production with the storage accounts provisioned by the CSB.

Priority
Medium

Priority Context
In order to secure the network access going to the storage account.

Platform
Azure

Applicable Services
csb-azure-storage-account

[BUG] Broker database name can't be set, even to default

Description

The broker does not pick up the name of the database to use from VCAP_SERVICES, the DB_NAME env var, or the default in the source code.

Expected Behavior

GIVEN I have authenticated with cf login -a api.fr.cloud.gov -sso
AND I have run cf create-service aws-rds medium-mysql servicebroker-db -t mysql
AND I have run cf push cloud-service-broker -c './cloud-service-broker serve --config config.yml' -b binary_buildpack --random-route --no-start
AND I have run cf bind-service cloud-service-broker servicebroker-db to bind the brokered MySQL service to the broker
AND the VCAP_SERVICES variable in cf env cloud-service-broker looks like:

VCAP_SERVICES=
{
  "aws-rds": [
    {
      "label": "aws-rds",
      "provider": null,
      "plan": "medium-mysql",
      "name": "servicebroker-db",
      "tags": [
        "database",
        "RDS",
        "mysql"
      ],
      "instance_name": "servicebroker-db",
      "binding_name": null,
      "credentials": {
        "db_name": "cgawsbrokerprodk8evyj9pv9fdotd",
        "host": "cg-aws-broker-prodk8evyj9pv9fdotd.ci7nkegdizyy.us-gov-west-1.rds.amazonaws.com",
        "password": "[REDACTED]",
        "port": "3306",
        "uri": "mysql://ux6uzfrqb2f61kei:[REDACTED]@cg-aws-broker-prodk8evyj9pv9fdotd.ci7nkegdizyy.us-gov-west-1.rds.amazonaws.com:3306/cgawsbrokerprodk8evyj9pv9fdotd",
        "username": "ux6uzfrqb2f61kei"
      },
      "syslog_drain_url": null,
      "volume_mounts": []
    }
  ]
}

WHEN I run cf start cloud-service-broker
THEN I should see the cloud-service-broker app start successfully

Actual Behavior

BUT I see

Waiting for app to start...
Start unsuccessful

TIP: use 'cf logs cloud-service-broker --recent' for more information
FAILED

AND I see in the app logs

23:29:29.781: [APP/PROC/WEB.0] {"timestamp":"1594880969.781164885","source":"cloud-service-broker","message":"cloud-service-broker.Connecting to MySQL Database","log_level":1,"data":{"host":"cg-aws-broker-prodk8evyj9pv9fdotd.ci7nkegdizyy.us-gov-west-1.rds.amazonaws.com","name":"","port":"3306"}}
23:29:29.801: [APP/PROC/WEB.0] (Error 1046: No database selected) 
23:29:29.801: [APP/PROC/WEB.0] [2020-07-16 06:29:29]  
23:29:29.804: [APP/PROC/WEB.0] (Error 1046: No database selected) 
23:29:29.804: [APP/PROC/WEB.0] [2020-07-16 06:29:29]  
23:29:29.806: [APP/PROC/WEB.0] (Error 1046: No database selected) 
23:29:29.806: [APP/PROC/WEB.0] [2020-07-16 06:29:29]  
23:29:29.810: [APP/PROC/WEB.0] (Error 1046: No database selected) 
23:29:29.810: [APP/PROC/WEB.0] [2020-07-16 06:29:29]  
23:29:29.813: [APP/PROC/WEB.0] panic: Error migrating database: Error 1046: No database selected
23:29:29.813: [APP/PROC/WEB.0] goroutine 1 [running]:
23:29:29.813: [APP/PROC/WEB.0] github.com/pivotal/cloud-service-broker/db_service.New.func1()
23:29:29.813: [APP/PROC/WEB.0] 	/tmp/build/80754af9/cloud-service-broker/db_service/db_service.go:37 +0x123
23:29:29.813: [APP/PROC/WEB.0] sync.(*Once).doSlow(0x1bd6ef8, 0xc00063fbe8)
23:29:29.813: [APP/PROC/WEB.0] 	/usr/local/go/src/sync/once.go:66 +0xec
23:29:29.813: [APP/PROC/WEB.0] sync.(*Once).Do(...)
23:29:29.813: [APP/PROC/WEB.0] 	/usr/local/go/src/sync/once.go:57
23:29:29.813: [APP/PROC/WEB.0] github.com/pivotal/cloud-service-broker/db_service.New(0x1409380, 0xc0006945a0, 0x1409380)
23:29:29.814: [APP/PROC/WEB.0] 	/tmp/build/80754af9/cloud-service-broker/db_service/db_service.go:34 +0x88
23:29:29.814: [APP/PROC/WEB.0] github.com/pivotal/cloud-service-broker/cmd.serve()
23:29:29.814: [APP/PROC/WEB.0] 	/tmp/build/80754af9/cloud-service-broker/cmd/serve.go:71 +0x70
23:29:29.814: [APP/PROC/WEB.0] github.com/pivotal/cloud-service-broker/cmd.init.5.func1(0xc0001e9180, 0xc000680fe0, 0x0, 0x2)
23:29:29.814: [APP/PROC/WEB.0] 	/tmp/build/80754af9/cloud-service-broker/cmd/serve.go:52 +0x20
23:29:29.814: [APP/PROC/WEB.0] github.com/spf13/cobra.(*Command).execute(0xc0001e9180, 0xc000680fc0, 0x2, 0x2, 0xc0001e9180, 0xc000680fc0)
23:29:29.814: [APP/PROC/WEB.0] 	/go/pkg/mod/github.com/spf13/[email protected]/command.go:766 +0x29d
23:29:29.814: [APP/PROC/WEB.0] github.com/spf13/cobra.(*Command).ExecuteC(0x1b9e9e0, 0x44299a, 0x1b32760, 0xc00005a778)
23:29:29.814: [APP/PROC/WEB.0] 	/go/pkg/mod/github.com/spf13/[email protected]/command.go:852 +0x2ea
23:29:29.814: [APP/PROC/WEB.0] github.com/spf13/cobra.(*Command).Execute(...)
23:29:29.814: [APP/PROC/WEB.0] 	/go/pkg/mod/github.com/spf13/[email protected]/command.go:800
23:29:29.814: [APP/PROC/WEB.0] github.com/pivotal/cloud-service-broker/cmd.Execute()
23:29:29.814: [APP/PROC/WEB.0] 	/tmp/build/80754af9/cloud-service-broker/cmd/root.go:43 +0x31
23:29:29.814: [APP/PROC/WEB.0] main.main()
23:29:29.814: [APP/PROC/WEB.0] 	/tmp/build/80754af9/cloud-service-broker/main.go:22 +0x20
23:29:29.856: [APP/PROC/WEB.0] Exit status 2

Possible Fix

The code is somehow coming up with "name":"" (the empty string) for the database name instead of either using the servicebroker default value or the db_name from VCAP_SERVICES.

I also tried setting db.name in the config.yml file, re-pushing, and then cf restart'ing the app, but found the same thing happened.

I also tried also setting the DB_NAME env var with cf set-env cloud-service-broker DB_NAME cgawsbrokerprodk8evyj9pv9fdotd and cf restart cloud-service-broker, but found the same thing happened.

The fact that none of these work and the name is empty makes me think there's a bug somewhere in here, but I haven't the Golang chops to find it.

Steps to Reproduce

(see GIVEN/WHEN/THEN steps above)

Context

I'm trying to run the cloud-service-broker successfully in cloud.gov, in preparation for writing my own brokerpak services and contributing them upstream.

Your Environment

[FR] Updating csb-azure-mssql-db and csb-azure-mssql-db-failover-group not supported by broker

Is your feature request related to a problem? Please describe.
In the cloud-service-broker, we are unable to run update-service in order to update the service to a bigger plan, as was possible with the MASB.

If we try to perform a "cf update-service csb-db -p bigger-plan" we get the following error.
Server error, status code: 400, error code: 11004, message: The service does not support changing plans."

Describe the solution you'd like
We would like to be able to increase the database plan in order to update to a bigger plan if plan quotas are reached.

Describe alternatives you've considered
The alternative is to create a new service and perform a migration from the original DB to a new database.

Additional context

Platform
Azure

Version used: Bosh 2.9.6, TAS 2.9.7, cf version 6.51.0+2acd15650.2020-04-07, azure-services-0.0.1-rc.97.brokerpak

Applicable Services
csb-azure-mssql-db and csb-azure-mssql-db-failover-group

[BUG] Unable to bind Azure MySQL Basic tier database instance

Description

When trying to create a service key for, or bind an application to, an Azure MySQL instance created with the Basic tier, the broker times out and the broker logs show the following error:

2020-11-10T09:13:02.14-0500 [APP/PROC/WEB/0] ERR {"timestamp":"1605017582.143054008","source":"cloud-service-broker","message":"cloud-service-broker.bind.unknown-error","log_level":2,"data":{"binding-id":"165cf2c3-bcee-48c9-b992-1d18d23e7aa1","error":"Error: Could not connect to server: Error 9009: Client connections to Basic tier servers through Virtual Network Service Endpoints are not supported. Virtual Network Service Endpoints are supported for General Purpose and Memory Optimized severs.\u0000  on brokertemplate/definition.tf line 42, in resource \"mysql_user\" \"newuser\":  42: resource \"mysql_user\" \"newuser\" { exit status 1","instance-id":"253d7f39-4747-472c-b2cd-42869276aa94","session":"16"}}

Network connectivity does appear to work though from the cloud service broker app:

cf ssh cloud-service-broker
...
vcap@cb9da572-9ca3-480e-6646-7ca4:~$ nc -zv 9o0zbkwe.mysql.database.azure.com 3306
Connection to 9o0zbkwe.mysql.database.azure.com 3306 port [tcp/mysql] succeeded!

Expected Behavior

Service instance should bind successfully.

Actual Behavior

Binding fails with a timeout.

Possible Fix

?

Steps to Reproduce

  1. cf create-service csb-azure-mysql basic test-db
  2. cf create-service-key test-db testkey OR bind service instance to an app

Context

We have created a custom plan for Basic tier MySQL database:

{
        "name": "basic",
        "id": "3de9246d-10da-47e7-afbd-614c1f2ffd2d",
        "description": "B_Gen5_1 with 5GB storage",
        "sku_name": "B_Gen5_1",
        "storage_gb": 5,
        "use_tls": true,
        "tls_min_version": "TLS1_2"
}

Your Environment

  • Version used: sb-0.2.0-rc.7-azure-1.0.0-rc.8
  • Platform (Azure/AWS/GCP): Azure
  • Applicable Services: csb-azure-mysql

[FR] Allow specifying Azure Redis max memory policy

Is your feature request related to a problem? Please describe.
Would like to be able to specify the max memory policy (e.g. allkeys-lru, noeviction) when creating Azure Redis service instances

Describe the solution you'd like
cf create-service csb-azure-redis small my-redis -c '{"maxmemory_policy": "value"}'

Describe alternatives you've considered
N/A

Priority
Low

Platform
Azure

Applicable Services
Redis

[BUG] Azure MySQL service creation fails with registration permission error when location is specified, even with skip_provider_registration set to true

Description

When creating an Azure MySQL service instance with skip_provider_registration set to true, the service creation is successful. However, when also specifying the location as "eastus", it fails with registration permission issues even though skip_provider_registration is still set to true.

Expected Behavior

Service instance should be created successfully

Actual Behavior

Service instance fails with error:

Error: Error ensuring Resource Providers are registered. Terraform automatically attempts to register the Resource Providers it supports to ensure it's able to provision resources. If you don't have permission to register Resource Providers you may wish to use
the "skip_provider_registration" flag in the Provider block to disable this functionality. Please note that if you opt out of Resource Provider Registration and Terraform tries to provision a resource from a Resource Provider which is unregistered, then the
errors may appear misleading - for example: > API version 2019-XX-XX was not found for Microsoft.Foo Could indicate either that the Resource Provider "Microsoft.Foo" requires registration, but this could also indicate that this Azure Region doesn't support this API
version. More information on the "skip_provider_registration" flag can be found here: https://www.terraform.io/docs/providers/azurerm/index.html#skip_provider_registration Original Error: Cannot register provider Microsoft.CustomProviders with Azure Resource
Manager: resources.ProvidersClient#Register: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailed" Message="The client 'b56542ac-6ea5-48d4-849a-4cf7a8285032' with object
id 'b56542ac-6ea5-48d4-849a-4cf7a8285032' does not have authorization to perform action 'Microsoft.CustomProviders/register/action' over scope '/subscriptions/957108b5-005f-4b11-bae1-9e62ba39e857' or the scope is invalid. If access was recently granted, please
refresh your credentials.". on brokertemplate/definition.tf line 33, in provider "azurerm": 33: provider "azurerm" { exit status 1

Possible Fix

N/A

Steps to Reproduce

Broker configuration:

csb-azure-mysql:
    provision:
      defaults: '{
       "skip_provider_registration": true,
       "resource_group":"mfc-use-pcf-psb",
       "location", "eastus"
      }'
  1. cf create-service csb-azure-mysql small test-csb-mysql-eastus

Context

Related to #68. When using the authorized_network parameter to add VNET access rules I had to also specify the location as "eastus" to match the location where our VNET is and ran into this error. I then tried removing the authorized_network parameter and just specifying the location as "eastus" and still got the same error.

Your Environment

  • Version used: sb-0.1.0-rc.47-azure-0.0.1-rc.126
  • Platform (Azure/AWS/GCP): Azure
  • Applicable Services: MySQL

[FR] Support specifying minimum TLS version for Azure MySQL

Is your feature request related to a problem? Please describe.
No default TLS minimum version is configured when use_tls is set to true.

Describe the solution you'd like
Would like to be able to specify the minimum TLS version when creating Azure MySQL database.

Describe alternatives you've considered
N/A

Priority
High

Priority Context
Security requirement

Platform
Azure

Applicable Services
MySQL

Proposed Solution
Use https://www.terraform.io/docs/providers/azurerm/r/mysql_server.html#ssl_minimal_tls_version_enforced

[FR] Azure SQL DTU based plans

Describe the solution you'd like
Would like to be able to create Azure SQL database instances using DTU based plans (S0, S1 etc.)

Describe alternatives you've considered
None

Priority
High

Priority Context

Platform
Azure

Applicable Services
csb-azure-mssql-db
csb-azure-mssql-db-failover-group

Additional notes
I plan to work on this feature myself

[BUG] Cloud service broker tile 0.0.28-azure-beta no stemcell associated

Description

We were able to successfully install the Beta cloud service broker tile manually using the opsman but when we try to deploy using platform automation, we get an error since no stemcell are specified in the release.

Expected Behavior

A stemcell is specified in Pivnet to download so that the errands are able to run successfully.

Actual Behavior

When running the platform automation, an error occurs when trying to get the information on the required stemcell. No stemcell is specified in the "download-product" section.


Possible Fix

  • Add a dependency on a stemcell as a "download-product" in the release so that the platform automation tools are able to download the proper stemcell and run through the other steps of the deployment.

  • We can change our platform automation pipeline so that it bypasses the stemcell download step if no stemcell will be specified for this tile in the "download-product" section.

Steps to Reproduce

  1. Use the platform automation pipeline to install the current Azure 0.0.28 beta release.

Context

We are able to install the tile manually but not able to use platform automation in order to perform update/upgrades.

Your Environment

  • Version used: Cloud service broker for Microsoft Azure 0.0.28
  • Platform (Azure/AWS/GCP): Azure
  • Applicable Services: Bosh 2.10.1 TAS: 2.10.3

[BUG] Azure SQL FOG subsumed instance bindings provide incorrect server and database names

Description

After subsuming an Azure SQL FOG, the bindings created provide "sqlServerName" and "sqldbName" parameters which do not match the subsumed database and thus cause apps to fail to connect to the subsumed database.

Expected Behavior

Bindings should provide credentials that are correct.

Actual Behavior

Credentials appear to provide randomly generated server and database name.

Possible Fix

Steps to Reproduce

Creating MASB FOG db:

cf create-service azure-sqldb StandardS0 test-fog-subsume-primary -c '{"sqlServerName": "azugessqlsandbox", "sqldbName": "test-fog-subsume"}'

cf create-service azure-sqldb-failover-group SecondaryDatabaseWithFailoverGroup test-fog-subsume-fog -c '{"primaryServerName": "azugessqlsandbox", "secondaryServerName": "azugessqlsandboxdr", "primaryDbName": "test-fog-subsume2", "failoverGroupName": "test-fog-subsume-fog"}'

Perform subsume

cf create-service csb-azure-mssql-db-failover-group subsume test-fog-subsume-primary2 -c <params>

where params are:

{
   azure_primary_db_id: '/subscriptions/957108b5-005f-4b11-bae1-9e62ba39e857/resourceGroups/MFC-USE-PCF-PSB/providers/Microsoft.Sql/servers/azugessqlsandbox/databases/test-fog-subsume',
   azure_secondary_db_id: '/subscriptions/957108b5-005f-4b11-bae1-9e62ba39e857/resourceGroups/MFC-USE-PCF-PSB/providers/Microsoft.Sql/servers/azugessqlsandboxdr/databases/test-fog-subsume',
   azure_fog_id: '/subscriptions/957108b5-005f-4b11-bae1-9e62ba39e857/resourceGroups/MFC-USE-PCF-PSB/providers/Microsoft.Sql/servers/azugessqlsandbox/failoverGroups/test-fog-subsume-fog'
}

The subsume completes successfully, but when binding the new subsumed service to an app or generating a service key the credentials have incorrect server and database names (although the jdbcUrls are correct):

{
 "databaseLogin": "XZhbWSGECNUUeAxy",
 "databaseLoginPassword": "<redacted>",
 "hostname": "test-fog-subsume-fog.database.windows.net",
 "jdbcUrl": "jdbc:sqlserver://test-fog-subsume-fog.database.windows.net:1433;database=test-fog-subsume;user=XZhbWSGECNUUeAxy;password=<redacted>;Encrypt=true;TrustServerCertificate=false;HostNameInCertificate=*.database.windows.net;loginTimeout=30",
 "jdbcUrlForAuditingEnabled": "jdbc:sqlserver://test-fog-subsume-fog.database.windows.net:1433;database=test-fog-subsume;user=XZhbWSGECNUUeAxy;password=<redacted>;Encrypt=true;TrustServerCertificate=false;HostNameInCertificate=*.database.windows.net;loginTimeout=30",
 "name": "test-fog-subsume",
 "password": "<redacted>",
 "port": 1433,
 "sqlServerFullyQualifiedDomainName": "test-fog-subsume-fog.database.windows.net",
 "sqlServerName": "csb-azsql-fog-b4b67d04-a213-4ed6-9c8e-7b1f6c5e2866",
 "sqldbName": "csb-fog-db-b4b67d04-a213-4ed6-9c8e-7b1f6c5e2866",
 "status": "created failover group test-fog-subsume-fog (id: /subscriptions/957108b5-005f-4b11-bae1-9e62ba39e857/resourceGroups/MFC-USE-PCF-PSB/providers/Microsoft.Sql/servers/azugessqlsandbox/failoverGroups/test-fog-subsume-fog), primary db test-fog-subsume-fog (id: /subscriptions/957108b5-005f-4b11-bae1-9e62ba39e857/resourceGroups/MFC-USE-PCF-PSB/providers/Microsoft.Sql/servers/azugessqlsandbox/databases/test-fog-subsume) on server azugessqlsandbox (id: /subscriptions/957108b5-005f-4b11-bae1-9e62ba39e857/resourceGroups/MFC-USE-PCF-PSB/providers/Microsoft.Sql/servers/azugessqlsandbox), secondary db test-fog-subsume (id: /subscriptions/957108b5-005f-4b11-bae1-9e62ba39e857/resourceGroups/MFC-USE-PCF-PSB/providers/Microsoft.Sql/servers/azugessqlsandboxdr/databases/test-fog-subsume) on server azugessqlsandboxdr (id: /subscriptions/957108b5-005f-4b11-bae1-9e62ba39e857/resourceGroups/MFC-USE-PCF-PSB/providers/Microsoft.Sql/servers/azugessqlsandboxdr) URL: https://portal.azure.com/#@5d3e2773-e07f-4432-a630-1a0f68a28a05/resource/subscriptions/957108b5-005f-4b11-bae1-9e62ba39e857/resourceGroups/MFC-USE-PCF-PSB/providers/Microsoft.Sql/servers/azugessqlsandbox/failoverGroup",
 "uri": "mssql://test-fog-subsume-fog.database.windows.net:1433/test-fog-subsume?encrypt=true\u0026TrustServerCertificate=false\u0026HostNameInCertificate=*.database.windows.net",
 "username": "XZhbWSGECNUUeAxy"
}

The JDBC URLs here are correct, but the "sqlServerName" and "sqldbName" values are incorrect:

"sqlServerName": "csb-azsql-fog-b4b67d04-a213-4ed6-9c8e-7b1f6c5e2866",
"sqldbName": "csb-fog-db-b4b67d04-a213-4ed6-9c8e-7b1f6c5e2866",

Context

This causes issues for apps, depending on which VCAP_SERVICES parameters they use to connect to the database.

Your Environment

  • Version used: sb-0.2.0-rc.9-azure-1.0.0-rc.13
  • Platform (Azure/AWS/GCP): Azure
  • Applicable Services: csb-azure-mssql-db-failover-group

[FR] Ability to change plans Azure SQL instances subsumed using azure-mssql-db-masb-subsume

Is your feature request related to a problem? Please describe.
After a database has been subsumed, there is no way that I can see to change the plan to scale up/down the database.

Describe the solution you'd like
Open to any possibilities.

Describe alternatives you've considered
None

Priority
High

Priority Context
The ability for teams to change their database service plans like they currently can after using the subsume capability is extremely important.

Platform
Azure

Applicable Services
azure-mssql-db-masb-subsume

[FR] Support for service broker dashboards

Is your feature request related to a problem? Please describe.

As a cloud-service-broker user (e.g. a CF developer provisioning AWS RDS through CSB)

Currently, CSB seems to always return an empty dashboard url

https://github.com/pivotal/cloud-service-broker/blob/99dab14683fc429dbcb04bb4c025cfa6fe378480/brokerapi/brokers/service_broker.go#L168

https://github.com/pivotal/cloud-service-broker/blob/99dab14683fc429dbcb04bb4c025cfa6fe378480/brokerapi/brokers/service_broker.go#L584

Describe the solution you'd like

A way to specify the dashboard in the provision action by selecting an input.

Describe alternatives you've considered

Additional Context

Regarding how a Cloud Foundry end user would authenticate to the AWS RDS dashboard, I would expect that support for AWS IAM external users would enable a brokerpak to provision web identity federation and a restricted IAM role, so that a signed-in CF user would be granted just enough permission to access the AWS RDS console to list backups/snapshots and restore them in a self-service manner.

Priority

Priority Context

Platform

Applicable Services

Does this apply to all services? Just one? Let us know

Most data services need access to backup/restore/metrics/logs.

The OSB API group has not yet prioritized the associated work (see openservicebrokerapi/servicebroker#486, openservicebrokerapi/servicebroker#485 and openservicebrokerapi/servicebroker#107), therefore service providers have no other choice than to direct users to a web UI through the dashboard URL.

[FR] "examples" parameter blocks should be able to refer to environment variables

Is your feature request related to a problem? Please describe.

Some services require security-sensitive parameters to be provided during provisioning or binding. However, I can't put security-sensitive parameters in the examples block of the YAML for my brokerpak, as doing so would result in those parameters appearing either in version control or generated docs. Since I can't change the parameters used for examples without rebuilding, I'm unable to make use of run-tests. This makes automated testing very difficult, and iterating on a brokerpak very cumbersome.

Describe the solution you'd like

I would like to be able to refer to environment variables for both the provisioning and binding parameters in the examples block. Then I want run-tests to use the local value of those environment variables when making client requests to the running broker, and serve-docs to just list the unexpanded variable names.

Describe alternatives you've considered

I've considered working around this by explicitly listing out the testing steps in a Makefile, but this feels like building additional scaffolding for little value. It also reduces the "self-contained, single-source-of-truth" benefit of using the examples block.

Additional Context

In the brokerpak I'm working on, provisioning requires the user to provide valid client credentials for an available kubernetes cluster; these cannot and should not be included in the YAML file, so I've had to remove the examples block.

I asked about this situation in Slack:

Hey CSB folks... Is there a way to specify provision_params values for examples: such that the values will be grabbed from an environment variable? In particular, I have an example I want to use with run-tests that refers to a fixture k8s deployment, and I don't want the actual credentials for the fixture k8s to be included in the .yml file or in the generated documentation.

At this time, there is no support for examples to reference environment variables.

Priority

Medium

Priority Context

While the lack of this feature doesn't prevent use of the broker or development of new brokerpaks, it adds friction to the development process that increases the effort needed to see changes successfully reflected in the broker. In other words, it kills iteration speed.

Platform

Local, but applies to any platform

Applicable Services

This is for my own custom brokerpak

[FR] Full support for OSB Context + originating-identity-header in request variable

Is your feature request related to a problem? Please describe.

As a brokerpak author

  • in order to apply some processing depending on OSB client metadata (such as opening network ACLs depending on caller, or recording audit traces for caller metadata such as organization_name, space_name, instance_name or K8S namespace)
  • I need the OSB context and the Originating Identity header to be exposed to brokerpak Terraform templates

Describe the solution you'd like

A new request.context field in the existing request variable. This field would contain the full context JSON object as a map, including the platform field with values among cloudfoundry and kubernetes

A new request.x_broker_api_originating_identity field in the existing request variable. This field would contain the base64 decoded full JSON object as a map, eg.

for cloudfoundry

{
  "user_id": "683ea748-3092-4ff4-b656-39cacc4d5360"
}

and for K8S:

{
  "username": "duke",
  "uid": "c2dde242-5ce4-11e7-988c-000c2946f14f",
  "groups": [ "admin", "dev" ],
  "extra": {
    "mydata": [ "data1", "data3" ]
  }
}

Describe alternatives you've considered

The existing request.default_labels expose and map part of the OSB context for cloudfoundry, but the rest of the context is missing.

Additional Context

Priority

Priority Context

Platform

Applicable Services

/CC @mlesaout

[BUG] sql failover group db update does not propagate changes to secondary db

Description

Running cf update-service fog-S0-update -p standard-S3 -c '{"server_pair":"pair_xxx"}' does not update the plan size on the secondary db

Expected Behavior

running cf update-service fog-S0-update -p standard-S3 -c '{"server_pair":"pair_xxx"}' should reflect changes in primary and secondary db

Actual Behavior

running cf update-service fog-S0-update -p standard-S3 -c '{"server_pair":"pair_xxx"}' only reflects changes on primary db

Possible Fix

Fix for terraform provider or the Azure APIs that the provider uses

Steps to Reproduce

  1. Create a FOG db instance
  2. Try and update that instance to a different plan size
  3. See that the primary db successfully updates
  4. See that the secondary db does not

Context

This is an extension of the FR submitted by @drebake in #47. The Feature Request was delivered, but the functionality is incomplete due to circumstances beyond the team's control (see #47 for context).

Your Environment

  • Version used: Bosh 2.9.6, TAS 2.9.7, cf version 6.51.0+2acd15650.2020-04-07, sb-0.1.0-rc.35-azure-0.0.1-rc.112
  • Operating System and version (desktop): N/A
  • Link to your project (if public): N/A
  • Platform (Azure/AWS/GCP): Azure
  • Applicable Services: Azure Failover Groups

[BUG] csb-azure-mssql-fog-run-failover fails with "Can't use a null value as an indexing key"

Description

Performing Azure SQL DB FOG failover with csb-azure-mssql-fog-run-failover fails

Expected Behavior

Should successfully create the csb-azure-mssql-fog-run-failover service instance and successfully perform the failover.

Actual Behavior

status:    create failed
message:   Error: 2 problems:- Invalid index: Can't use a null value as an indexing key.- Invalid index: Can't use a null value as an indexing key. exit status 1

Possible Fix

?

Steps to Reproduce

  1. cf create-service csb-azure-mssql-db-failover-group small csb-failover-group-test-2 -c '{"instance_name": "csb-failover-group-test-2", "db_name": "csb-db-test-2"}' (completes successfully)

  2. cf create-service csb-azure-mssql-fog-run-failover standard my-failover -c '{"fog_instance_name":"csb-failover-group-test-2"}' (fails)

Your Environment

Environment: Azure
Version: sb-0.1.0-rc.37-gcp-0.0.1-rc.76
Configuration:

csb-azure-mssql-db-failover-group:
    provision:
      defaults: '{
       "skip_provider_registration": true,
       "server_pair": "pair1",
       "server_credential_pairs": {
        "pair1": {
          "admin_username":"<redacted>",
          "admin_password":"<redacted>",
          "primary": {
            "server_name":"azugessqlsandbox",
            "resource_group":"MFC-USE-PCF-PSB"
          },
          "secondary": {
            "server_name":"azugessqlsandboxdr",
            "resource_group":"MFC-USE-PCF-PSB"
          }
        } 
       } 
      }'

[BUG] Provider package layout is incompatible with Terraform >0.12

My brokerpak attempts to use the Terraform registry despite provider packages being available locally.

Description

GIVEN the file manifest.yml contains (note the version of Terraform):

packversion: 1
name: solr-services-pak
version: 1.0.0
metadata:
  author: [redacted]
platforms:
- os: linux
  arch: "386"
- os: linux
  arch: amd64
terraform_binaries:
- name: terraform
  version: 0.13.3
  source: https://github.com/hashicorp/terraform/archive/v0.13.3.zip  
- name: terraform-provider-random
  version: 2.3.0
  source: https://releases.hashicorp.com/terraform-provider-random/2.3.0/terraform-provider-random_2.3.0_linux_amd64.zip
- name: terraform-provider-kubernetes
  version: 1.13.2
  source: https://releases.hashicorp.com/terraform-provider-kubernetes/1.13.2/terraform-provider-kubernetes_1.13.2_linux_amd64.zip
- name: terraform-provider-helm
  version: 1.3.0
  source: https://releases.hashicorp.com/terraform-provider-helm/1.3.0/terraform-provider-helm_1.3.0_linux_amd64.zip
service_definitions:
- services/solr-operator.yml
parameters: []
required_env_variables: []
env_config_mapping: {}

...AND the file services/solr-operator.yml contains:

version: 1
name: solr-operator
id: f145c5aa-4cee-4570-8a95-9a65f0d8d9da
description: Fault-tolerant and highly-available distributed indexing and searching using Apache Solr, in the Kubernetes cluster of your choice
display_name: Apache SolrCloud
image_url: https://lucene.apache.org/theme/images/solr/identity/Solr_Logo_on_white.png
documentation_url: https://lucene.apache.org/solr/resources.html
support_url: https://github.com/GSA/solr-brokerpak
tags: [apache, search, index, k8s]
plans:
- name: base
  id: 1779d7d5-874a-4352-b9c4-877be1f0745b
  description: Establish a beachhead in the provided k8s where you want SolrCloud services available
  display_name: SolrCloud beachhead in k8s
  bullets:
  - "REQUIRED prerequisite: Create an operator before creating instances of the SolrCloud service"
  properties: {}
provision:
  plan_inputs: []
  user_inputs:
  - required: true
    field_name: cluster_id
    type: string
    details: The cluster ID to target from the passed kubeconfig
  - required: true
    field_name: ingress_base_domain
    # Does the customer have to know? Can we derive this from an existing ingress
    # controller in the provided k8s?
    type: string
    details: "The base domain to expose Solr on, eg *.(ingress-base-domain)"
  - required: false
    field_name: operator_name
    type: string
    details: The namespace to use (only specify for demo purposes)
  computed_inputs: 
  - name: operator_name
    default: ""
    overwrite: false
    type: string
  outputs:
  - required: true
    field_name: operator
    type: string
    details: The name of the operator, to be used when creating SolrCloud instances
  template_ref: "services/solr-operator/operator-provision.tf"
bind:
  plan_inputs: []
  user_inputs: []
  computed_inputs:
  - name: operator
    default: ${instance.details["operator"]}
    overwrite: true
    type: string
  outputs:
  - required: true
    field_name: operator
    type: string
    details: The name of the operator, to be used when creating SolrCloud instances
  template_ref: "services/solr-operator/operator-bind.tf"
examples:
- name: Create an operator in Docker Desktop
  description: "This example creates an operator in a local k8s provided by
  Docker Desktop. It's presumes the k8s already has an ingress controller
  available that will handle ing.local.domain."
  plan_id: 1779d7d5-874a-4352-b9c4-877be1f0745b
  provision_params:
    cluster_id: docker-desktop
    ingress_base_domain: ing.local.domain
  bind_params: {}
- name: Create an operator in Docker Desktop in the default namespace
  description: "This example creates an operator in a local k8s provided by
  Docker Desktop. It presumes the k8s already has an ingress controller
  available that will handle ing.local.domain. The `default` namespace will be
  used in order to make it easy to demonstrate a provisioned SolrCloud in the
  ing.local.domain later."
  plan_id: 1779d7d5-874a-4352-b9c4-877be1f0745b
  provision_params:
    cluster_id: docker-desktop
    ingress_base_domain: ing.local.domain
  bind_params: {}
plan_updateable: false
requiredenvvars: []

AND I have run the CSB to create the brokerpak
AND I have run the CSB with the brokerpak with the serve parameter
WHEN I run cloud-service-broker client provision --instanceid myoperator --serviceid f145c5aa-4cee-4570-8a95-9a65f0d8d9da --planid 1779d7d5-874a-4352-b9c4-877be1f0745b --params '{"cluster_id":"docker-desktop", "ingress_base_domain":"ing.local.domain"}'

Expected Behavior

...THEN I should see a 2xx response indicating that the service instance was created

Actual Behavior

...BUT I see:

{
    "url": "http://user:pass@broker:80/v2/service_instances/myoperator?accepts_incomplete=true",
    "http_method": "PUT",
    "status_code": 500,
    "response": {
        "description": "Error: Failed to query available provider packagesCould not retrieve the list of available versions for providerhashicorp/random: provider registry.terraform.io/hashicorp/random was notfound in any of the search locations- /tmp/brokerpak000473515Error: Failed to query available provider packagesCould not retrieve the list of available versions for providerhashicorp/kubernetes: provider registry.terraform.io/hashicorp/kubernetes wasnot found in any of the search locations- /tmp/brokerpak000473515Error: Failed to query available provider packagesCould not retrieve the list of available versions for provider hashicorp/helm:provider registry.terraform.io/hashicorp/helm was not found in any of thesearch locations- /tmp/brokerpak000473515 exit status 1"
    }
}

Possible Fix

I suspect this may be due to changes in how Terraform works with the registry as of 0.13.x.

Steps to Reproduce

Check out this particular branch. Run:

cp .env.secrets-template .env.secrets
make build
make up
make test

Context

I'm trying to create my own brokerpak, but I can't get the brokerpak to work despite including provider packages matching my local Terraform config:

Terraform v0.13.3
+ provider registry.terraform.io/hashicorp/helm v1.3.0
+ provider registry.terraform.io/hashicorp/kubernetes v1.13.2
+ provider registry.terraform.io/hashicorp/random v2.3.0

Your Environment

[BUG] template_ref should search paths relative to the build directory, not the working directory

Description

The way template_ref is processed requires $PWD for the CSB invocation to be the same directory being built. CSB shouldn't require that!

Expected Behavior

When you run cloud-service-broker pak build /some/directory/name, template_ref entries in /some/directory/name/manifest.yml will get resolved by searching in /some/directory/name.

Actual Behavior

When you run cloud-service-broker pak build /some/directory/name, template_ref entries in /some/directory/name/manifest.yml fail to be resolved because the file is searched for in $PWD. You'll see something like this:

% build/cloud-service-broker.darwin pak build aws-brokerpak 
2020/08/17 22:12:39 Packing...
2020/08/17 22:12:39 Using temp directory: /var/folders/p_/cydmjpbj3b5dzzl4pgzyy5br0000gn/T/brokerpak330015225
2020/08/17 22:12:39 Packing sources...
2020/08/17 22:12:39      https://github.com/hashicorp/terraform/archive/v0.12.23.zip -> /var/folders/p_/cydmjpbj3b5dzzl4pgzyy5br0000gn/T/brokerpak330015225/src/terraform.zip
2020/08/17 22:12:43      https://github.com/terraform-providers/terraform-provider-aws/archive/v2.57.0.zip -> /var/folders/p_/cydmjpbj3b5dzzl4pgzyy5br0000gn/T/brokerpak330015225/src/terraform-provider-aws.zip
2020/08/17 22:12:47      https://releases.hashicorp.com/terraform-provider-random/2.2.1/terraform-provider-random_2.2.1_linux_amd64.zip -> /var/folders/p_/cydmjpbj3b5dzzl4pgzyy5br0000gn/T/brokerpak330015225/src/terraform-provider-random.zip
2020/08/17 22:12:48      https://releases.hashicorp.com/terraform-provider-mysql/1.9.0/terraform-provider-mysql_1.9.0_linux_amd64.zip -> /var/folders/p_/cydmjpbj3b5dzzl4pgzyy5br0000gn/T/brokerpak330015225/src/terraform-provider-mysql.zip
2020/08/17 22:12:48      https://github.com/terraform-providers/terraform-provider-postgresql/archive/v1.5.0.zip -> /var/folders/p_/cydmjpbj3b5dzzl4pgzyy5br0000gn/T/brokerpak330015225/src/terraform-provider-postgresql.zip
2020/08/17 22:12:50 Packing binaries...
2020/08/17 22:12:50      https://releases.hashicorp.com/terraform/0.12.23/terraform_0.12.23_linux_amd64.zip -> /var/folders/p_/cydmjpbj3b5dzzl4pgzyy5br0000gn/T/brokerpak330015225/bin/linux/amd64
2020/08/17 22:12:51      https://releases.hashicorp.com/terraform-provider-aws/2.57.0/terraform-provider-aws_2.57.0_linux_amd64.zip -> /var/folders/p_/cydmjpbj3b5dzzl4pgzyy5br0000gn/T/brokerpak330015225/bin/linux/amd64
2020/08/17 22:12:54      https://releases.hashicorp.com/terraform-provider-random/2.2.1/terraform-provider-random_2.2.1_linux_amd64.zip -> /var/folders/p_/cydmjpbj3b5dzzl4pgzyy5br0000gn/T/brokerpak330015225/bin/linux/amd64
2020/08/17 22:12:55      https://releases.hashicorp.com/terraform-provider-mysql/1.9.0/terraform-provider-mysql_1.9.0_linux_amd64.zip -> /var/folders/p_/cydmjpbj3b5dzzl4pgzyy5br0000gn/T/brokerpak330015225/bin/linux/amd64
2020/08/17 22:12:56      https://releases.hashicorp.com/terraform-provider-postgresql/1.5.0/terraform-provider-postgresql_1.5.0_linux_amd64.zip -> /var/folders/p_/cydmjpbj3b5dzzl4pgzyy5br0000gn/T/brokerpak330015225/bin/linux/amd64
2020/08/17 22:12:57 Packing definitions...
2020/08/17 22:12:57 error while packing "aws-brokerpak": couldn't load provision template terraform/aws-rds-provision.tf: open terraform/aws-rds-provision.tf: no such file or directory

Possible Fix

Steps to Reproduce

  1. Clone the cloud-service-broker repository.
  2. export USE_GO_CONTAINERS=true
  3. make build
  4. build/cloud-service-broker.darwin pak build aws-brokerpak (for the Darwin build/OS situation, obviously)

Context

This bug complicates delivering a brokerpak in a standalone repository. I'd like to use a single Docker image that can pak init, pak build, and client run-examples, but I have to awkwardly use the brokerpak source directory as the working directory, and it's hard to treat invocation of the Docker image the same way I would the CLI.

Your Environment

  • Version used: master as of the time of submitting this issue
  • Operating System and version (desktop): OSX High Sierra
  • Link to your project (if public):
  • Platform (Azure/AWS/GCP): N/A
  • Applicable Services: N/A

[FR] Improve legacy broker migration with additional properties for AWS RDS PostgreSQL plans

In order to better facilitate the migration from the legacy broker, a few additional properties would be beneficial

The following additional properties from the Legacy broker would allow for a cleaner migration of Legacy plans:

Enable Storage Encryption
DB Parameter Group Name
DB Subnet Group Name
Require SSL for communication
VPC Security Group Ids

This will allow customers to migrate using the same networking, parameter, and security groups that already exist from the Legacy Broker.

[BUG] csb-masb-mssql-db-subsume issue asking for server

Description

As described in the https://docs.pivotal.io/cloud-service-broker/1-0/reference.html#azure-mssql-fog-preconfig and https://github.com/pivotal/cloud-service-broker/blob/master/docs/mssql-fog-plans-and-config.md documentation, in order to subsume and gain control of the secondary database instance in a failover group, we must perform the following:

cf create-service csb-masb-mssql-db-subsume PLAN-NAME SERVICE-INSTANCE-NAME -c '{"azure_db_id":"DATABASE-ID"}'

When I perform that action, I get a "Service broker error: 1 error(s) occurred: server: server is required".

cf create-service csb-masb-mssql-db-subsume current abfog2-s -c '{"azure_db_id":"sqldb-xxxxxx-xxxxx-server01","azure_db_id":"/subscriptions/xxxxxxxxxx-xxxxx-xxxxx-xxxxx-xxxxxxxxxxxx/resourceGroups/sqldb-xxxxx-rg/providers/Microsoft.Sql/servers/sqldb-xxxx-xxx-server
01/databases/csb-fog-db-c3c7f60c-f6dd-452b-a232-26955f66f829"}'

Creating service instance abfog2-s in org xxxxxxxxx-org / space dev as admin...
Service broker error: 1 error(s) occurred: server: server is required
FAILED

In order to perform a successful subsume, I must add the "server" parameter in the -c.

cf create-service csb-masb-mssql-db-subsume current abfog2-s -c '{"server":"sqldb-xxxxxx-xxxxx-server01","azure_db_id":"/subscriptions/xxxxxxxxxx-xxxxx-xxxxx-xxxxx-xxxxxxxxxxxx/resourceGroups/sqldb-xxxxx-rg/providers/Microsoft.Sql/servers/sqldb-xxxx-xxx-server
01/databases/csb-fog-db-c3c7f60c-f6dd-452b-a232-26955f66f829"}'

In the documentation, I don't see any mention of the parameter having to be specified to gain control of the secondary database.

I also notice that in the acceptance tests for https://github.com/pivotal/cloud-service-broker/blob/master/acceptance-tests/azure/cf-test-masb-sql-db-subsume.sh, other parameters seem to be specified, but that appears to be more in regard to subsuming a MASB database and not the secondary FOG database.

Expected Behavior

As per the documentation, I shouldn't have to specify the "server" argument when I want to gain control of the secondary database within the original primary foundation FOG.

Actual Behavior

I must add the "server" parameter in the -c in the parameters.

ex: cf create-service csb-masb-mssql-db-subsume current abfog2-s -c '{"server":"sqldb-xxxxxx-xxxxx-server01","azure_db_id":"/subscriptions/xxxxxxxxxx-xxxxx-xxxxx-xxxxx-xxxxxxxxxxxx/resourceGroups/sqldb-xxxxx-rg/providers/Microsoft.Sql/servers/sqldb-xxxx-xxx-server
01/databases/csb-fog-db-c3c7f60c-f6dd-452b-a232-26955f66f829"}'

Possible Fix

Update documentation or resolve issue so that we do not have to specify the server.

Steps to Reproduce

  1. Added the JSON information in the "Existing SQL Server Credentials" - Azure SQL Config To Subsume MASB DB Instances (csb-masb-mssql-db-subsume) portion of the tile.
  2. Created a csb-azure-mssql-db-failover-group instance
  3. Tried to subsume to gain control of the secondary database of my FOG on the primary foundation.

Context

Creating end user documentation and test cases, and noticed this difference compared to previous versions of the tile.

Your Environment

  • Version used: Cloud service broker for Microsoft Azure ver 1.0.1
  • Operating System and version (desktop): linux
  • Platform (Azure/AWS/GCP): Azure
  • Applicable Services: csb-masb-mssql-db-subsume

[FR] Add vnet integration for Azure Redis Cache Premium plans

Is your feature request related to a problem? Please describe.
By default, all communication with Azure Redis Cache goes over the Internet. It would be good if all the communication stayed within the VNet.

Describe the solution you'd like
The azure terraform provider already has an option allowing to specify in which subnet the Redis instance will be.
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/redis_cache#subnet_id
We would like the option to be present in the CSB tile.

Describe alternatives you've considered
Add code to the CSB tile allowing us to specify which subnet the Redis instance will be in

Priority
High (for security)

Priority Context
To enhance security

Platform
Azure

Applicable Services

[FR] when building a brokerpak, package files needed by Terraform code

Is your feature request related to a problem? Please describe.
There's a local solr-crd directory containing a Helm chart that I want my Terraform to use. Here's the Terraform in question:

resource "helm_release" "solrcloud" {
  name = local.cloud_name
  chart = "./solr-crd"
  namespace = data.kubernetes_namespace.namespace.id
  cleanup_on_fail = true
  atomic = true
  wait = true
  timeout = 600
}

However, when the CSB builds the brokerpak, the referenced directory is not included. As a result, when I try to use the broker, I see:

2020/10/13 19:55:10 Last operation for "ex3065862978-" was "failed": Error: path "./solr-crd" not found  on brokertemplate/definition.tf line 62, in resource "helm_release" "solrcloud":  62: resource "helm_release" "solrcloud" { exit status 1

Describe the solution you'd like

The manifest.yml should include fields for specifying additional files to be included alongside the Terraform providers and service definition YAML.

(Alternatively, if Terraform provides a way, the broker could infer the needed files by inspecting the provided HCL.)

Describe alternatives you've considered

I can work around this particular problem by having the helm_release resource refer to a .tgz of the Helm chart hosted on GitHub.

However, this won't be the case for other Terraform code, eg when using a local file as a template; see below.

Additional Context

Priority

High - It's impossible to fully take advantage of Terraform and certain services cannot be brokered without this feature. For example, I'm about to start brokering AWS EKS. In my Terraform code, I have:

resource "helm_release" "prometheus" {
  count   = local.env == "default" ? 1 : 0
  name    = "prometheus"
  chart   = "stable/prometheus-operator"
  version = "8.13.11"
  namespace = "monitoring"

  values    = [
    templatefile("./charts/prometheus/values.yaml", { grafana_pwd = var.GRAFANA_PWD, base_domain = local.base_domain })
  ]
  provisioner "local-exec" {
    command = "helm --kubeconfig kubeconfig_${module.eks.cluster_id} test -n ${self.namespace} ${self.name}"
  }

  depends_on = [
    module.eks.cluster_id
  ]
}

Here we see that using templatefile(), a workhorse in lots of Terraform deployments, will not be possible with CSB.

Priority Context

It prevents brokering any but the most trivial Terraform deployments.

Platform

N/A

Applicable Services

Anything that needs to reference files other than the HCL code itself.

Please Add Topics and Give `All-Pivotal` Team Read Access

Hey @erniebilling ,

Thanks for creating a new repo in /pivotal!

One of the goals in the new org is to make code as internally accessible as possible. We hope this spirit of inner-sourcing will help inspire ideas and collaboration outside of sometimes-siloed teams/pairs.

To help with this (if it is possible), can you please grant the 'All-Pivotal' team read access and tag your repo with topics?

For All-Pivotal Access:

Since the owner of a repo will understand the content of that repo - and any secrecy needs around it - much better than we can, we are leaving this decision up to you. More on this can be found here: https://github.com/pivotal/read-me-first/blob/master/adding-collaborators.md

For Tags:

We ask that you tag your repo with the following information:

  • Team
  • Product
  • Coding Language(s)
  • Subject Area

You can find more information on this here: https://github.com/pivotal/read-me-first/blob/master/organization-convention.md

If you have any questions or concerns, please don't hesitate to reach out to [email protected] and a member of our admin team will be happy to assist.

Thank you!

/cc

[DOCS] MASB "azure-sqldb-failover-group / ExistingDatabaseInFailoverGroup" equivalence.

Documentation Requested

I would like to know if there is a way that we can connect to a csb-azure-mssql-db-failover-group secondary database from another PCF platform.

Basically, the MASB "azure-sqldb-failover-group / ExistingDatabaseInFailoverGroup" service enabled us to create a service instance on another PCF platform at another site in order to connect to the FOG via the secondary server.

What would be the equivalent of the MASB "azure-sqldb-failover-group / ExistingDatabaseInFailoverGroup" option when using the cloud-service-broker?

Thank you

[FR] Service to bind to existing AzureSQL failover group in a different PCF foundation

Is your feature request related to a problem? Please describe.
When we create an AzureSQL failover group, with e.g.:

cf create-service csb-azure-mssql-db-failover-group small my-failover-group -c '{"server_pair": "pair1", "instance_name": "csb-failover-group-test", "db_name": "csb-db-test"}'

there is no way to bind to this existing failover group from a different PCF org/space, which is required if we want an app deployed in a different DR region to be able to connect to and use the existing failover group.

Describe the solution you'd like
Something like:

cf create-service csb-azure-mssql-db-failover-group-existing small my-failover-group -c '{"server_pair": "pair1", "instance_name": "csb-failover-group-test"}'

which would create a service which provides connection details to the existing failover group.

Describe alternatives you've considered
N/A

Additional context
N/A

Platform
PCF

Applicable Services
Azure SQL

[BUG] Tile version of the Cloud Service Broker doesn't store csb-azure-storage-account credentials in CredHub.

Description

When creating a csb-azure-storage-account service, the credentials are not stored in CredHub the way they are for other services; access_keys and storage_account_name are exposed directly in VCAP_SERVICES. I recall this not being the case in the previous, non-tile version of the broker, where the CredHub information was used.

Expected Behavior

Binding information should be stored in CredHub.

Actual Behavior

access_keys and storage_account_name are exposed in VCAP_SERVICES.

Possible Fix

Store the binding information in CredHub, as is done for other services.

Steps to Reproduce

  1. Install the tile version of the broker
  2. cf create-service csb-azure-storage-account <plan> <instance-name>
  3. cf bind-service <app-name> <instance-name>
  4. cf env <app-name>

Context

While testing a recent PR with csb-azure-storage-account changes, we noticed that the default behaviour of storing binding information in CredHub is not followed for this service the way it is for other services.

Your Environment

Version used: Cloud service broker for Microsoft Azure 0.0.35
Platform (Azure/AWS/GCP): Azure
Applicable Services: Bosh 2.10.1 TAS: 2.10.3

[DOCS] Document the use of environment variables that may be too sensitive to be kept in YAML

A possible workaround: It appears that the client run-examples command accepts some parameters:

$ cloud-service-broker client run-examples --help
Run all examples generated by the use command through a
        provision/bind/unbind/deprovision cycle.

        Exits with a 0 if all examples were successful, 1 otherwise.

Usage:
  cloud-service-broker client run-examples [flags]

Flags:
      --example-name string   only run examples matching this name
      --filename string       json file that contains list of CompleteServiceExamples
  -h, --help                  help for run-examples
  -j, --jobs int              number of parallel client examples to run concurrently (default 1)
      --service-name string   name of the service to run tests for

Global Flags:
      --config string   Configuration file to be read

I suspect I can supply the test parameters using the --filename parameter, although there's no documentation about this option or what a CompleteServiceExamples might be. (I am guessing it is expecting a YAML block that corresponds to the examples block in the service YAML; will report back.)
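
For reference, a hedged example invocation using only the flags shown in the help output above; the file name, service name, and config file are placeholders, and the expected contents of the JSON file are exactly what this issue asks to have documented:

# Placeholder names; flags taken from the --help output above.
cloud-service-broker client run-examples \
  --config config.yml \
  --service-name csb-azure-mysql \
  --filename complete-service-examples.json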

This needs documentation!

Originally posted by @mogul in https://github.com/pivotal/cloud-service-broker/issue_comments/708887286

[BUG] When deleting a csb-azure-mssql-db-failover-group, the read-only database on the secondary server is not deleted.

Description

When deleting a csb-azure-mssql-db-failover-group, the read-only database on the secondary site is not deleted.

The failover group is deleted on both servers in the server "pair", but the database appears to be deleted only on the primary server and remains on the secondary failover-group server.

Expected Behavior

When deleting a failover group, the failover group and its databases are deleted on both servers that are configured in the "pair".

Actual Behavior

A manual cleanup is required for the database on the secondary server.

Possible Fix

Delete all database instances in the server "pair".

Steps to Reproduce

  1. Provision a new csb-azure-mssql-db-failover-group instance
    cf create-service csb-azure-mssql-db-failover-group small db-fog-small -c '{"server_pair":"pair1"}'

  2. Get the guid of the service
    cf service db-fog-small --guid

  3. Delete the csb-azure-mssql-db-failover-group.
    cf ds db-fog-small

  4. Go to the secondary server of the server "pair"; the database will still be present.

Context

Manual cleanup is required after deleting the service.

Your Environment

  • Version used: Bosh 2.9.6, TAS 2.9.7, cf version 6.51.0+2acd15650.2020-04-07, azure-services-0.0.1-rc.97.brokerpak
  • Operating System and version (desktop): elementary OS 5.1.6 Hera
  • Link to your project (if public):
  • Platform (Azure/AWS/GCP): Azure
  • Applicable Services: csb-azure-mssql-db-failover-group

[FR] We would like to be able to configure the "read_write_endpoint_failover_policy" to manual mode.

Is your feature request related to a problem? Please describe.
We would like to be able to configure "read_write_endpoint_failover_policy" to manual mode. Currently, a search of the repository (https://github.com/pivotal/cloud-service-broker/search?q=read_write_endpoint_failover_policy&unscoped_q=read_write_endpoint_failover_policy) suggests that the failover policy is hard-coded to automatic. We would like to be able to set it to manual so that we can control the failover.

Describe the solution you'd like
Make the failover policy configurable so that we can change it from automatic to manual, as sketched below.
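
A hedged sketch of the desired usage, assuming the property were exposed as a provisioning parameter; the parameter name follows the Terraform attribute and is not accepted by the service today, which is the point of this request:

# Illustrative only: "read_write_endpoint_failover_policy" is not currently a supported parameter.
cf create-service csb-azure-mssql-db-failover-group small my-fog \
  -c '{"server_pair":"pair1", "read_write_endpoint_failover_policy":"Manual"}'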

Describe alternatives you've considered
None

Additional context
We are currently using service_endpoints, so we cannot reach the secondary database in the second Azure location. If an automatic failover occurs, our application cannot reach the database, which in turn causes an outage on our side.

Platform
Azure

Applicable Services
csb-azure-mssql-db-failover-group

[FR] Add HEALTHCHECK to Dockerfile

Is your feature request related to a problem? Please describe.

When using the Docker image in automated testing, after we start the CSB and then run tests (with the eden OSBAPI client), we see:

Could not find service in catalog: Could not fetch catalog: Failed doing HTTP request: Get http://127.0.0.1:8080/v2/catalog: read tcp 127.0.0.1:54568->127.0.0.1:8080: read: connection reset by peer

This is because after startup of the CSB, there is a period where the broker is still reading files and not yet ready to serve requests, and there's no good way to know when the CSB is actually ready.

Describe the solution you'd like

The Dockerfile should include a HEALTHCHECK directive so that scripts and Makefiles can explicitly wait for the container to be healthy before proceeding.

Describe alternatives you've considered

We previously added a delay to wait at least X seconds after starting the image before proceeding to run tests, where X was picked by trial and error. This felt like a pretty poor workaround, given that performance between the GitHub Actions runner and our local machines varies widely, and we didn't want to inject any more delay than necessary into our manual iteration time.

Additional Context

I worked out what the healthcheck should be (link in the original issue), though you may want to change the interval and retry count. With the healthcheck in place, a one-liner can wait for the container to be healthy; a sketch is shown below.
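
In the absence of a built-in HEALTHCHECK, a rough sketch of the run-time workaround using docker run health flags plus a wait loop. The /v2/catalog probe, the basic-auth environment variables, and the assumption that curl exists in the image are taken from this issue's context or are illustrative; this is not a confirmed recipe:

# Start the broker with a run-time healthcheck (assumes curl is available in the image).
docker run -d --name csb -p 8080:8080 \
  -e SECURITY_USER_NAME -e SECURITY_USER_PASSWORD \
  --health-cmd 'curl -f -u "$SECURITY_USER_NAME:$SECURITY_USER_PASSWORD" -H "X-Broker-API-Version: 2.13" http://localhost:8080/v2/catalog || exit 1' \
  --health-interval 5s --health-retries 12 \
  cfplatformeng/csb serve

# Wait until Docker reports the container healthy before running tests.
until [ "$(docker inspect -f '{{.State.Health.Status}}' csb)" = "healthy" ]; do sleep 1; done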

Priority

Low

Priority Context

I'd like to remove as much complexity from my Makefile as possible, and having a built-in healthcheck will benefit other users of the CSB in various contexts. However, I'm clearly able to work around the problem now by supplying the docker run arguments.

Platform

N/A

Applicable Services

N/A

[BUG] Azure MySQL bind/create service key fails with broker timeout

Description

Binding an Azure MySQL service to an app or trying to create a service key fails with a broker timeout error

Expected Behavior

Service should successfully be bound to app.
Service key should be successfully created

Actual Behavior

Both operations fail with error:

Unexpected Response
Response code: 504
CC code:       0
CC error code:
Request ID:    a2ebe399-7231-4588-7c62-4c6744c33229::94abf6d5-bbaf-4ead-9023-f166af8eb06e
Description:   {
  "description": "The request to the service broker timed out: https://cloud-service-broker.apps.use.sandbox.pcf.manulife.com/v2/service_instances/23139856-e296-44f0-a05c-c474221d3ba1/service_bindings/b9ae8436-a67a-4be1-8e0f-46407a36106f?accepts_incomplete=true",
  "error_code": "CF-HttpClientTimeout",
  "code": 10001,
  "http": {
    "uri": "https://cloud-service-broker.apps.use.sandbox.pcf.manulife.com/v2/service_instances/23139856-e296-44f0-a05c-c474221d3ba1/service_bindings/b9ae8436-a67a-4be1-8e0f-46407a36106f?accepts_incomplete=true",
    "method": "PUT"
  }
}

Possible Fix

?

Steps to Reproduce

  1. Create service via cf create-service csb-azure-mysql small csb-mysql-test-db
  2. Create service key via cf create-service-key csb-mysql-test-db test-key
  3. Bind service to app via cf bind-service test-app csb-mysql-test-db

Your Environment

Environment: Azure
Version: sb-0.1.0-rc.37-gcp-0.0.1-rc.76
Configuration:

csb-azure-mysql:
  provision:
    defaults: '{
      "skip_provider_registration": true,
      "resource_group": "mfc-use-pcf-psb"
    }'

[FR] Subsume Azure SQL as a plan of csb-azure-mssql-db

Is your feature request related to a problem? Please describe.
We want the transition from MASB Azure SQL to CSB, and subsequent upgrades of service plans for subsumed instances, to match more closely the workflow that teams already use, and to reduce the effort required to modify existing pipelines.

Describe the solution you'd like
Instead of a standalone service (azure-mssql-db-masb-subsume) being used to subsume an Azure SQL instance and then upgrades being performed with internal parameters, e.g.:

cf create-service csb-azure-mssql-db subsume <instance-name> -c '{ <params required for subsume> }'
cf update-service <instance-name> -c '{"service_objective":"new objective"}'

we would like the subsume process to simply be a plan of the csb-azure-mssql-db service, so that after subsuming is performed, teams only need to change the service name in their existing pipeline scripts, and updates to service plans can be executed using plan names in the usual way, e.g.:

cf update-service <instance-name> -p <plan-name>

The desired overall flow would look like:

# Initial subsume
cf create-service csb-azure-mssql-db subsume <instance-name> -c '{ <params required for subsume> }'

# Subsequent update of service plan
cf update-service <instance-name> -p <plan-name>

Describe alternatives you've considered
Using the existing solution with parameter updates described above.

Additional Context
We currently wrap the CF APIs directly with our own CLI which acts as a replacement for the CF CLI, such that cf create-service and cf update-service are the same command and it is automatically detected if an update needs to be performed rather than a create.

For example a team's pipeline would contain a line such as:

./provisioning-cli service create --name <instance name> --serviceName <service name> --planName <plan name>

where if the same command is run again with a different plan or different parameters we internally convert this into the equivalent of a cf update-service via a PUT call to the /service_instances/<instanceGUID> API.

This new proposed solution would allow teams to simply change the <service name> value in their pipelines and the workflow for updating services would be identical to the process for newly created (non-subsumed) instances.

Priority
High

Priority Context
This proposed approach would significantly reduce the migration effort, and the amount of change communication required, for the many teams that use Azure SQL and will need to migrate.

Platform
Azure

Applicable Services
csb-azure-mssql-db
azure-mssql-db-masb-subsume

[BUG] When a service is bound to the cloud-service-broker-azure-X.X.XX app, we get "Error parsing VCAP_SERVICES: Error finding MySQL tag"

Description

We did a tile installation, so we created a p.mysql database for the cloud-service-broker database. We then created an access key for that service and used that information in the database settings of the "Service Broker Config" tab in the tile.

If we then bind a service to the cloud-service-broker-azure-X.X.XX app (Splunk log drain in our case), we get "Error parsing VCAP_SERVICES: Error finding MySQL tag" and the app doesn't start.

It appears that the broker is trying to find the database information in VCAP_SERVICES.

If I then bind the cloud-service-broker-azure-X.X.XX app to my MySQL database service, it finds the VCAP_SERVICES information, the app starts normally, and we can use the Splunk log drain.

Expected Behavior

When binding another service, the cloud-service-broker should still use the database information provided in the tile configuration.

Actual Behavior

When a service binding is present at app start time, the broker looks in VCAP_SERVICES for the MySQL database instead.

Possible Fix

Allow an external service to be bound while still using the configured database information; do not look in VCAP_SERVICES when those settings are provided by the tile configuration.

Steps to Reproduce

  1. Create a MySQL database
  2. Install the tile and provide the access-key information for the MySQL database.
  3. Bind an external service (e.g. a Splunk log drain)
  4. Restart the cloud-service-broker-azure-X.X.XX app

Context

We had to bind to the database, and we must also keep our service key in order to provide that information to the tile configuration. We therefore have to make two sets of credentials available to the cloud-service-broker app in order to use our log drain.

Your Environment

Version used: Cloud service broker for Microsoft Azure 0.0.35
Platform (Azure/AWS/GCP): Azure
Applicable Services: Bosh 2.10.1 TAS: 2.10.3

Full error log

2020-09-16T15:18:13.720-04:00 [APP/PROC/WEB/0] [OUT] {"timestamp":"1600283893.719773531","source":"cloud-service-broker","message":"cloud-service-broker.Invalid VCAP_SERVICES environment variable","log_level":2,"data":{"error":"Error parsing VCAP_SERVICES: Error finding MySQL tag: The variable VCAP_SERVICES must have one VCAP service with a tag of 'mysql'. There are currently 0 VCAP services with the tag 'mysql'."}}

2020-09-16T15:18:13.720-04:00 [APP/PROC/WEB/0] [ERR] {"timestamp":"1600283893.719773531","source":"cloud-service-broker","message":"cloud-service-broker.Invalid VCAP_SERVICES environment variable","log_level":2,"data":{"error":"Error parsing VCAP_SERVICES: Error finding MySQL tag: The variable VCAP_SERVICES must have one VCAP service with a tag of 'mysql'. There are currently 0 VCAP services with the tag 'mysql'."}}

2020-09-16T15:18:13.774-04:00 [APP/PROC/WEB/0] [OUT] Exit status 1

Deprovisioning MySQL Service on Azure leaves the servicebroker database in an inconsistent state

Running cloud-service-broker serve on my Windows PC using Windows Subsystem for Linux

Provision a new Azure MySQL database instance using the cloud-service-broker client
(successfully created in Azure)

Deprovision the above instance
(successfully deletes it from Azure)

Provision a new MySQL database instance
Expecting a new instance to be created in Azure, but instead this error is thrown on the client (screenshot in the original issue).

The server shows this error (screenshot in the original issue).

Manually delete the entries from the terraform_deployments table in the servicebroker database using the mysql CLI (a sketch is shown below).
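
For illustration, a hedged sketch of that manual cleanup. The table and database names come from this report; the column names are assumptions and should be checked against the actual schema before deleting anything:

# Inspect the leftover deployment rows first (column names are assumptions).
mysql -u root -p servicebroker -e "SELECT * FROM terraform_deployments;"

# Then remove the stale row for the already-deprovisioned instance.
mysql -u root -p servicebroker -e "DELETE FROM terraform_deployments WHERE id = '<stale-deployment-id>';"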

Provision a new MySQL database instance
(successfully created in Azure)

[BUG] Cannot delete a service instance that failed to create

Description

After a service instance creation (cf create-service) fails, the service instance ends up in a "create failed" state. Trying to delete this invalid service instance results in an error, preventing the service instance from being removed.

i.e.: create a service instance without setting the "skip_provider_registration" flag to "true", resulting in a failure of the service instance creation. Doing a cf delete-service on that service instance then results in the same error related to provider registration.

Expected Behavior

The service instance should be deleted successfully, perhaps behind a force option (the actual resource was never created successfully on Azure).

Actual Behavior

Error trying to remove a service instance that failed on creation

Possible Fix

Perhaps the service broker is trying to do a "terraform destroy" on the deletion of the service. Since the terraform template wasn't valid to begin with, this operation fails.

Steps to Reproduce

  1. Using an azure account that doesn't have access to register providers
  2. cf create a sample service without the "skip_provider_registration". This should error out with:

Error: Error ensuring Resource Providers are registered. Terraform automatically attempts to register the Resource Providers it supports to ensure it's able to provision resources. If you don't have permission to register Resource Providers you may wish to use the "skip_provider_registration" flag in the Provider block to disable this functionality. Please note that if you opt out of Resource Provider Registration and Terraform tries to provision a resource from a Resource Provider which is unregistered, then the errors may appear misleading - for example: API version 2019-XX-XX was not found for Microsoft.Foo. Could indicate either that the Resource Provider "Microsoft.Foo" requires registration, but this could also indicate that this Azure Region doesn't support this API version. More information on the "skip_provider_registration" flag can be found here: https://www.terraform.io/docs/providers/azurerm/index.html#skip_provider_registration Original Error: Cannot register provider Microsoft.Maintenance with Azure Resource Manager: resources.ProvidersClient#Register: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailed" Message="The client

  3. Do a cf delete-service on that failed service. You should see the same error.

Context

Initial setup and testing of the default settings for a service plan results in erroneous service instances that cannot be removed afterwards.

Your Environment

  • Version used: PCF 2.8, the latest CSB
  • Platform (Azure/AWS/GCP): Azure
  • Applicable Services: csb-azure-mssql-db

[DOCS] How to add "read_write_endpoint_failover_policy" to defaults in example-configs

Documentation Requested

Would it be possible to add, in the "example-configs" document, an example of setting "read_write_endpoint_failover_policy" as a default for "csb-azure-mssql-db-failover-group"?

File:
https://github.com/pivotal/cloud-service-broker/blob/master/docs/example-configs.md

Section:
Azure csb-azure-mssql-db-failover-group

Feature:
"read_write_endpoint_failover_policy":"Manual/Automatic"
0ebfac6

[FR] Subsume capability for Azure SQL MASB Failover Groups

Is your feature request related to a problem? Please describe.
We would like a streamlined way for users to migrate from an existing MASB Azure SQL failover group to a CSB failover group instance.

Describe the solution you'd like
Something similar to the existing csb-masb-mssql-db-subsume service

Describe alternatives you've considered
Exporting data from existing database, creating new CSB instances and importing the data to the new instance.

Additional Context

Priority
High

Priority Context
A solution for migrating from MASB to CSB Azure SQL failover groups with minimal application downtime and business disruption is required.

Platform
Azure

Applicable Services
Azure SQL Failover Group

[FR] GCP CloudSQL Project Configurability

Is your feature request related to a problem? Please describe.
GCP has quota limits on GCP projects for managed services such as CloudSQL service instances; the default is 40.
GCP recommends provisioning CloudSQL instances in a new GCP project once the service instance quota is reached.

Describe the solution you'd like
It would be a great feature for the cloud service broker to accept the GCP project as a parameter when provisioning services such as CloudSQL instances, so that the GCP service quota limit can be worked around when using cf create-service in TAS; a sketch is shown below.
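
A hypothetical sketch of the requested interface; the service and plan names are illustrative, and the project parameter does not exist today, which is exactly what this request asks for:

# Hypothetical: a "project" provisioning parameter targeting an overflow GCP project.
cf create-service csb-google-mysql small my-cloudsql-db \
  -c '{"project": "my-overflow-gcp-project"}'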

Describe alternatives you've considered
Bumping the GCP Quota for max CloudSQL instances in a project. Google recommends against this.

Additional Context
GCP Service broker currently cannot be configured to provision cloudsql instances in multiple GCP projects.

Priority
Low

Priority Context
This looks like it might already be possible with this broker; it just depends on whether the config option will be exposed in the tile config.

Platform
GCP

Applicable Services
CloudSQL

[BUG] aws-brokerpak Makefile target "run" broken

Description

When you try to run the aws-brokerpak with make run, the CSB binary will not start due to sqlite library problems.

Expected Behavior

The binary comes up and starts serving requests

Actual Behavior

bretamogilefsky aws-brokerpak $ make run
docker run --rm -v /Users/bretamogilefsky/Documents/Code/cloud-service-broker/aws-brokerpak:/brokerpak -w /brokerpak  \
        -p 8080:8080 \
        -e SECURITY_USER_NAME \
        -e SECURITY_USER_PASSWORD \
        -e AWS_ACCESS_KEY_ID \
    -e AWS_SECRET_ACCESS_KEY \
        -e "DB_TYPE=sqlite3" \
        -e "DB_PATH=/tmp/csb-db" \
        -e GSB_PROVISION_DEFAULTS \
        cfplatformeng/csb serve
{"timestamp":"1606849820.006082296","source":"cloud-service-broker","message":"cloud-service-broker.WARNING: DO NOT USE SQLITE3 IN PRODUCTION!","log_level":1,"data":{}}
{"timestamp":"1606849820.012327909","source":"cloud-service-broker","message":"cloud-service-broker.Database Setup","log_level":2,"data":{"error":"Binary was compiled with 'CGO_ENABLED=0', go-sqlite3 requires cgo to work. This is a stub"}}
{"timestamp":"1606849820.012327909","source":"cloud-service-broker","message":"cloud-service-broker.Database Setup","log_level":2,"data":{"error":"Binary was compiled with 'CGO_ENABLED=0', go-sqlite3 requires cgo to work. This is a stub"}}
make: *** [run] Error 1

Possible Fix

I'm unsure how you're building the Docker image, but the error message suggests that the broker binary in the image was built with CGO_ENABLED=0, which go-sqlite3 does not support; a sketch of possible fixes is shown below.
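
As a hedged sketch grounded only in the error message above (build commands are illustrative, not the project's actual build steps):

# Option 1: rebuild the broker binary with cgo enabled so go-sqlite3 works.
CGO_ENABLED=1 go build -o build/cloud-service-broker .

# Option 2: keep the existing image and point the broker at MySQL instead of sqlite3
# (e.g. DB_TYPE=mysql plus the usual DB_* connection settings), avoiding cgo entirely.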

Steps to Reproduce

  1. Clone the repository
  2. cd aws-brokerpak
  3. Set the required env vars
  4. make; make run

Context

I was trying to remove the need to use docker-compose and a separate MySQL DB in the Makefile for my own brokerpak. I expected that I could mimic this run target in my own project, but then ran into the errors mentioned. I tried it in your aws-brokerpak directory and found the same errors.

Your Environment
