
fog-google's Introduction

Fog::Google


The main maintainers for the Google sections are @icco, @Temikus and @plribeiro3000. Please send pull requests to them.

Important notices

  • As of v1.0.0, fog-google includes google-api-client as a dependency; there is no need to include it separately anymore.

  • fog-google is currently supported on Ruby 3.0+; see Supported Ruby Versions below for more info.

See MIGRATING.md for migration between major versions.

Sponsors

We're proud to be sponsored by MeisterLabs who are generously funding our CI stack. A small message from them:

"As extensive users of fog-google we are excited to help! Meister is the company behind the productivity tools MindMeister, MeisterTask, and MeisterNote. We are based in Vienna, Austria, and we have a very talented international team who build our products on top of Ruby on Rails, Elixir, React and Redux. We are constantly looking for great talent in Engineering, so if you feel like taking on a new Ruby or Elixir challenge, get in touch; open jobs can be found here."

Usage

Storage

There are two ways to access Google Cloud Storage: the legacy S3-compatible API and the newer JSON API. Fog::Storage::Google will automatically direct you to the appropriate API based on the credentials you provide it.
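
As a hedged sketch (project name, key path, and interoperability keys below are all placeholders), the two credential shapes look roughly like this; passing either hash to Fog::Storage.new selects the matching API:

```ruby
# Placeholder credentials illustrating the two shapes fog-google accepts.
# A service-account JSON key routes you to the JSON API:
json_api_credentials = {
  provider: "Google",
  google_project: "my-project",                  # placeholder project id
  google_json_key_location: "/path/to/key.json"  # placeholder key path
}

# Interoperability (HMAC) keys route you to the legacy S3-compatible API:
s3_api_credentials = {
  provider: "Google",
  google_storage_access_key_id: "GOOG_EXAMPLE_KEY",   # placeholder
  google_storage_secret_access_key: "EXAMPLE_SECRET"  # placeholder
}

# storage = Fog::Storage.new(json_api_credentials)
```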

Compute

Google Compute Engine is a virtual machine hosting service. It is currently built on v1 of the GCE API.

As of 2017-12-15, we are still working on making Fog for Google Compute Engine (Fog::Compute::Google) feature complete. If you are using Fog to interact with GCE, please keep Fog up to date and file issues for any anomalies you see or features you would like.
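
For orientation, a minimal connect-and-create flow looks roughly like this (the attribute names below are illustrative and the zone/machine type are placeholder values; the connection lines are commented out because they require live credentials):

```ruby
# Hypothetical minimal server attributes; adjust for your project.
server_params = {
  name: "fog-example-#{Time.now.to_i}",
  zone: "us-central1-a",          # placeholder zone
  machine_type: "n1-standard-1"   # placeholder machine type
}

# connection = Fog::Compute::Google.new(google_project: "my-project")
# server = connection.servers.create(server_params)
# server.wait_for { ready? }
```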

SQL

Fog implements v1beta4 of the Google Cloud SQL Admin API. As of 2017-11-06, Cloud SQL is mostly feature-complete. Please file issues for any anomalies you see or features you would like as we finish adding remaining features.

DNS

Fog implements v1 of the Google Cloud DNS API. We are always looking for people to improve our code and test coverage, so please file issues for any anomalies you see or features you would like.

Monitoring

Fog implements v3 of the Google Cloud Monitoring API. As of 2017-10-05, we believe Fog for Google Cloud Monitoring is feature complete for metric-related resources and are working on supporting groups.

We are always looking for people to improve our code and test coverage, so please file issues for any anomalies you see or features you would like.

Pubsub

Fog mostly implements v1 of the Google Cloud Pub/Sub API; however, some less common API methods are missing. Pull requests for additions would be greatly appreciated.

Installation

Add the following line to your application's Gemfile:

gem 'fog-google'

And then execute:

$ bundle

Or install it yourself as:

$ gem install fog-google

Testing

Integration tests can be kicked off via the following rake tasks. Important note: as these tests run against real APIs, YOU WILL BE BILLED.

rake test               # Run all integration tests
rake test:parallel      # Run all integration tests in parallel

rake test:compute       # Run Compute API tests
rake test:monitoring    # Run Monitoring API tests
rake test:pubsub        # Run PubSub API tests
rake test:sql           # Run SQL API tests
rake test:storage       # Run Storage API tests

Since some resources can be expensive to test, we have a self-hosted CI server. Due to security considerations, a repo maintainer needs to add the integrate label to kick off the CI.

Setup

Credentials

Follow the instructions to generate a private key. A sample credentials file can be found in .fog.example in this directory:

cat .fog.example >> ~/.fog # appends the sample configuration
vim ~/.fog                 # edit file with your config

As of 1.9.0, fog-google supports Google application default credentials (ADC). The auth method uses Google::Auth.get_application_default under the hood.

Example workflow for a GCE instance with service account scopes defined:

> connection = Fog::Compute::Google.new(:google_project => "my-project", :google_application_default => true)
=> #<Fog::Compute::Google::Real:32157700...
> connection.servers
=> [  <Fog::Compute::Google::Server ...  ]

CarrierWave integration

It is common to integrate Fog with CarrierWave. Here's a minimal config that's commonly put in config/initializers/carrierwave.rb:

CarrierWave.configure do |config|
    config.fog_provider = 'fog/google'
    config.fog_credentials = {
        provider: 'Google',
        google_project: Rails.application.secrets.google_cloud_storage_project_name,
        google_json_key_string: Rails.application.secrets.google_cloud_storage_credential_content
        # can optionally use google_json_key_location if using an actual key file
    }
    config.fog_directory = Rails.application.secrets.google_cloud_storage_bucket_name
end

This needs a corresponding secret in config/secrets.yml, e.g.:

development:
    google_cloud_storage_project_name: your-project-name
    google_cloud_storage_credential_content: '{
        "type": "service_account",
        "project_id": "your-project-name",
        "private_key_id": "REDACTED",
        "private_key": "-----BEGIN PRIVATE KEY-----REDACTED-----END PRIVATE KEY-----\n",
        "client_email": "[email protected]",
        "client_id": "REDACTED",
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://accounts.google.com/o/oauth2/token",
        "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
        "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/REDACTED%40your-project-name.iam.gserviceaccount.com"
    }'
    google_cloud_storage_bucket_name: your-bucket-name

SSH-ing into instances

If you want to be able to bootstrap SSH-able instances (using servers.bootstrap), be sure you have a key pair in ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub.
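
A sketch of how those default key paths are typically wired up (the connection call is commented out since it needs live credentials; the keyword argument names are illustrative):

```ruby
# Default key locations expected for bootstrapping.
private_key_path = File.expand_path("~/.ssh/id_rsa")
public_key_path  = File.expand_path("~/.ssh/id_rsa.pub")

# connection = Fog::Compute::Google.new
# server = connection.servers.bootstrap(
#   private_key_path: private_key_path,
#   public_key_path:  public_key_path
# )
```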

Quickstart

Once you've specified your credentials, you should be good to go!

$ bundle exec pry
[1] pry(main)> require 'fog/google'
=> true
[2] pry(main)> connection = Fog::Compute::Google.new
[3] pry(main)> connection.servers
=> [  <Fog::Compute::Google::Server
    name="xxxxxxx",
    kind="compute#instance",

Supported Ruby Versions

Fog-google is currently supported on Ruby 3.0+.

In general we support (and run our CI) for Ruby versions that are actively supported by Ruby Core - that is, Ruby versions that are not end of life. Older versions of Ruby may still work, but are unsupported and not recommended. See https://www.ruby-lang.org/en/downloads/branches/ for details about the Ruby support schedule.

Contributing

See CONTRIBUTING.md in this repository.

fog-google's People

Contributors

cowboyrushforth, dawidjanczak, deanputney, dependabot-preview[bot], dependabot[bot], easkay, emilymye, erjohnso, everlag, faberge-eggs, fog-google-bot, geemus, gscho, hattorious, icco, jayhsu21, kgaikwad, lcy0321, mlazarov, palladius, plribeiro3000, richardwallace, seanmalloy, selmanj, sethboyles, stanhu, temikus, tumido, wyosotis, yosiat


fog-google's Issues

servers::bootstrap should use disk/autoDelete

Right now, bootstrapped servers automatically create a disk to use with the server, but do not automatically destroy that disk when the server is destroyed. I propose we use the disk/autoDelete option when creating bootstrapped instances, to avoid orphaned disks, (more info here).

@icco You know this codebase better than I do; objections? Thanks.

Struggling to authenticate with service account (email & key)

I keep getting Missing required arguments: google_storage_access_key_id, google_storage_secret_access_key. I understand that I am supposed to put my credentials in a ~/.fog file, but I don't quite understand how that's supposed to work in the context of a Rails app. Can someone elaborate on how to configure this? I have tried passing the settings in an initializer (as suggested here), but they don't seem to get recognized in the validate_options method.

config/initializers/fog.rb

GoogleStorage = Fog::Storage.new(
  provider: 'Google',
  google_project: 'xxxxxxxxxxxxx',
  google_client_email: '[email protected]',
  google_key_location: 'private/google-cloud-service-key.p12'
)

How to perform CI testing with integration tests?

Per #18.

I've written an integration test framework and suite of tests that work with a live integration setup, and we need to figure out how to get those tests to not fail on Travis CI. We need these tests run regularly to make sure that the library actually works against the current API. Options I can think of:

  1. We could upload the cassettes as part of the codebase. This is contrary to fog/fog#2112, and the problems are numerous. The two big ones are:
    • We'll have to commit VCR cassettes into the codebase. That's a huge pain, and creates messy commits.
    • We or someone else might inadvertently commit sensitive information embedded in the cassettes, (e.g. authentication information).
  2. We can link tests to mocks. This is also not a great solution, since it means:
    • We have to keep the mocks up-to-date
    • We might get the CI test passing when it should fail against the live API. This is how we got to where we are now, where we don't know how much of the codebase still works and how much is built against an old API spec.
  3. We can run tests against a live project. This also has numerous issues:
    • If a test fails, it's (very) hard to clean up after, (i.e. it may have created resources that it didn't delete, so we might end up with VMs or other things lying around which will cause problems with future tests and also rack up costs for anyone else testing this stuff).
    • It means putting credentials for a project up on our Travis CI server. I don't know how secure that is.
    • They take a very long time to run, (at least 30 minutes,) and they aren't particularly consistent, (e.g. a bad connection can make a whole suite fail for unclear reasons).
  4. We can not run integration tests on Travis CI, and run them in some other environment, where we can store credentials. The brain-dead solution would be to just have me, (or someone else,) run them locally at every update. This is neither transparent nor sustainable.

@plribeiro3000 Your thoughts would be helpful here. Does the Fog community, as far as you know, have working solutions to this problem?

@erjohnso Do you have suggestions, based on how other projects are doing this?

Implement better examples

Currently all fog-google examples are essentially code snippets.
Take https://github.com/fog/fog-google/blob/master/examples/image_all.rb as an example:

def test
  connection = Fog::Compute.new({ :provider => "Google" })

  # If this doesn't raise an exception, everything is good.
  connection.images.all
end

If someone wants to fix something quickly, they first need to set up a development environment, which potentially deters quick changes to the library and getting started easily.

What I propose is:

  1. Briefly describe in the README how to get .fogrc working, as in:
default:
    google_project: "my-awesome-project"
    google_client_email: "[email protected]"
    google_json_key_location: "/tmp/gce-key.json"
  2. Adding development dependencies to a separate group in Gemfile, for example, for the latest git versions of fog and fog-core:
group :development do
   gem 'fog-core', git: "https://github.com/fog/fog-core"
   gem 'fog', git: "https://github.com/fog/fog"
   gem 'fog-google', path: "."
   gem 'fog-json'
end
  3. With that set up we just need to add 4 lines to any example:
require 'bundler'
Bundler.require(:default, :development)
# Comment this if you don't want to make real requests to GCE
WebMock.disable!

And voila! It becomes an actual working script:

temikus λ cat image_all.rb
require 'bundler'
Bundler.require(:default, :development)
# Comment this if you don't want to make real requests to GCE
WebMock.disable!

def test
  connection = Fog::Compute.new({ :provider => "Google" })

  # If this doesn't raise an exception, everything is good.
  connection.images.all
end

images = test
pp test

temikus λ ruby image_all.rb >> log
warning: parser/current is loading parser/ruby21, which recognizes
warning: 2.1.6-compliant syntax, but you are running 2.1.5.
[  <Fog::Compute::Google::Image
    name="centos-6-v20131120",
    id="11748647391859510935",
    kind="compute#image",
    archive_size_bytes="269993565",
    creation_timestamp="2013-11-25T15:13:50.611-08:00",
    deprecated={"state"=>"DEPRECATED", "replacement"=>"https://www.googleapis.com/compute/v1/projects/centos-cloud/global/images/centos-6-v20150226"},
    description="SCSI-enabled CentOS 6 built on 2013-11-20",
    disk_size_gb="10",
    self_link="https://www.googleapis.com/compute/v1/projects/centos-cloud/global/images/centos-6-v20131120",
    source_type="RAW",
    status="READY",
    project="centos-cloud",
    raw_disk={"source"=>"", "containerType"=>"TAR"}
...

This is just a suggestion, but IMO this will save a bunch of time for anyone who is just starting to work on this library. If you don't want any changes in Gemfile/examples, then maybe just a change in the README/CONTRIBUTING?

Let me know what you think.

Better errors for incorrect service_accounts

If one specifies an incorrect service_accounts field in the server parameters, for example:

  server = connection.servers.create(defaults = {
    :name => "fog-smoke-test-#{Time.now.to_i}",
    :disks => [disk],
    :machine_type => "n1-standard-1",
    :private_key_path => File.expand_path("~/.ssh/id_rsa"),
    :public_key_path => File.expand_path("~/.ssh/id_rsa.pub"),
    :zone_name => "europe-west1-b",
    :user => ENV['USER'],
    :tags => ["fog"],
    :service_accounts => [ 'foo', 'bar', 'baz' ],
  })

, we get a very ambiguous error back:

/Users/temikus/Code/vagrant-dev/fog-google/lib/fog/google.rb:222:in `build_excon_response': Code: 'CghJTlNUQU5DRRImNzY4MDk5NTM1NzQxLmZvZy1zbW9rZS10ZXN0LTE0MzMxNDcxMzM=' (Fog::Errors::Error)
    from /Users/temikus/Code/vagrant-dev/fog-google/lib/fog/google.rb:193:in `request'
    from /Users/temikus/Code/vagrant-dev/fog-google/lib/fog/google/requests/compute/get_zone_operation.rb:50:in `get_zone_operation'
    from /Users/temikus/Code/vagrant-dev/fog-google/lib/fog/google/models/compute/operations.rb:23:in `get'
    from /Users/temikus/Code/vagrant-dev/fog-google/lib/fog/google/models/compute/operation.rb:63:in `reload'
    from /Users/temikus/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/bundler/gems/fog-core-d117cd252d28/lib/fog/core/model.rb:70:in `block in wait_for'
    from /Users/temikus/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/bundler/gems/fog-core-d117cd252d28/lib/fog/core/wait_for.rb:7:in `block in wait_for'
    from /Users/temikus/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/bundler/gems/fog-core-d117cd252d28/lib/fog/core/wait_for.rb:6:in `loop'
    from /Users/temikus/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/bundler/gems/fog-core-d117cd252d28/lib/fog/core/wait_for.rb:6:in `wait_for'
    from /Users/temikus/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/bundler/gems/fog-core-d117cd252d28/lib/fog/core/model.rb:69:in `wait_for'
    from /Users/temikus/Code/vagrant-dev/fog-google/lib/fog/google/models/compute/server.rb:280:in `save'
    from /Users/temikus/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/bundler/gems/fog-core-d117cd252d28/lib/fog/core/collection.rb:51:in `create'
    from example_create.rb:22:in `test'
    from example_create.rb:58:in `<main>'

, where encoded string CghJTlNUQU5DRRImNzY4MDk5NTM1NzQxLmZvZy1zbW9rZS10ZXN0LTE0MzMxNDcxMzM= is just the instance name: INSTANCE&768099535741.fog-smoke-test-1433147133

Maybe we should add a bit of verbosity to it? At least "instance config rejected" or something?

What do you think?

Improve authentication mechanisms

google/storage uses a legacy Amazon-compatible authentication system that still works, but has some limitations and requires some hackery to get working in non-trivial cases. It looks for the parameters :google_storage_access_key_id and :google_storage_secret_access_key.

google/compute embraces the newer service account model, and accepts :google_project, :google_client_email, :google_key_location, :google_key_string and :google_client.

Instances provisioned on Google Compute Engine can be authorized at launch time with service_account_scopes, which preauthorize the instance on various Google OAuth scopes, e.g.: https://www.googleapis.com/auth/devstorage.full_control -- once this is done, a GET query to the Google metadata server from that instance will return a valid token for the service for that instance scoped to its own project -- no other service accounts required.

I would propose:

  1. expanding google/storage's vocabulary to accept the same service account parameters as google/compute

  2. expanding the vocabulary of google/compute to allow service_account_scopes to be set at instance launch time

  3. adding a parameter to both google/compute and google/storage to attempt using an OAuth token from the metadata service if fog is running on a preauthorized instance

This would allow a fog user to provision a Compute Engine node using fog and a provisioning service account, preauthorize that node to connect to Cloud Storage (and/or other Google OAuth scopes), and then have that node be able to run and interact with Cloud Storage, Datastore, etc. without needing to be issued its own unique service account.

I can work on this and it doesn't look too terribly difficult, but I haven't contributed to fog before and this is really my first time looking at its internals. Before I waste too much effort, does this all sound worthwhile, and is there anyone actively maintaining the google stuff that I can coordinate with?

ArgumentError ( is not a recognized provider): on Heroku, using paperclip and fog

(screenshot of the Heroku error omitted)

This works locally on dev machine, doesn't work on heroku though. Does anyone have any thoughts on this?

User Model

has_attached_file   :avatar,
                      styles: {:big => "200x200>", thumb: "50x50>"},
                      storage: :fog,
                      fog_credentials: "#{Rails.root}/config/gce.yml",
                      fog_directory: "google-bucket-name"

  validates_attachment_content_type :avatar, content_type: /\Aimage\/.*\Z/

Gemfile

gem "paperclip", git: "git://github.com/thoughtbot/paperclip.git"
gem 'fog'

multiple directories : one for each engine

Hello, is it possible to have a different fog configuration for each engine mounted on the main application ?
typically I'd like each engine to upload to its own bucket.

Exception raised in #get_target_pool_health when instance is terminated

Google instances can be terminated, yet still in a Target Pool. A Fog::Errors::Error exception is raised as 'resource is not ready', which prevents you from getting health for all other instances in that Target Pool.

Here's how I'm calling Target Pool #get_health

  if (t = load_balancers.target_pools.get(n)) && t.get_health.any?
          Hash[*t.get_health.map{|i, h|
                 i = i.split_link if split
                 [i, {:state => h.first['healthState'], :ip_address => h.first['ipAddress'] }]
               }.flatten]
  end

And the backtrace

#<Fog::Errors::Error: The resource 'projects/<PROJECT>/zones/us-central1-b/instances/<INSTANCE>' is not ready>
/Users/dacamp/.rvm/gems/ruby-2.0.0-p481/gems/fog-1.23.0/lib/fog/google/compute.rb:179:in `build_excon_response'
/Users/dacamp/.rvm/gems/ruby-2.0.0-p481/gems/fog-1.23.0/lib/fog/google/compute.rb:959:in `build_response'
/Users/dacamp/.rvm/gems/ruby-2.0.0-p481/gems/fog-1.23.0/lib/fog/google/requests/compute/get_target_pool_health.rb:21:in `block in get_target_pool_health'
/Users/dacamp/.rvm/gems/ruby-2.0.0-p481/gems/fog-1.23.0/lib/fog/google/requests/compute/get_target_pool_health.rb:19:in `map'
/Users/dacamp/.rvm/gems/ruby-2.0.0-p481/gems/fog-1.23.0/lib/fog/google/requests/compute/get_target_pool_health.rb:19:in `get_target_pool_health'
/Users/dacamp/.rvm/gems/ruby-2.0.0-p481/gems/fog-1.23.0/lib/fog/google/models/compute/target_pool.rb:79:in `get_health'

Error in Ruby's rescue clause

In the following file/commit/line:

1976f5c#commitcomment-10822546

Causes (at least) the following error:

/.../ruby/2.2.0/gems/fog-google-0.0.3/lib/fog/google/models/compute/images.rb:67:in `rescue in get': class or module required for rescue clause (TypeError)
    from /.../ruby/2.2.0/gems/fog-google-0.0.3/lib/fog/google/models/compute/images.rb:47:in `get'
    from /.../ruby/2.2.0/gems/fog-google-0.0.3/lib/fog/google/requests/compute/insert_disk.rb:77:in `insert_disk'
    from /.../ruby/2.2.0/gems/fog-google-0.0.3/lib/fog/google/models/compute/disk.rb:40:in `save'
    from /.../ruby/2.2.0/gems/fog-core-1.30.0/lib/fog/core/collection.rb:51:in `create'
    from ...

Models and requests should be unit tested in minitest

Right now, the shindo tests are backed by mocks in the codebase, which don't provide any assurance that the code actually works, only that it is internally consistent. This has led to drift away from the API as code was written and tested against the mocks, but not continually tested against the live API. In turn, the mocks are hard to keep up-to-date.

HTTPS link

config.asset_host     = 'https://assets.example.com' 

doesn't work in CarrierWave.configure

How can we make CarrierWave use the SSL version in the url method?

Incorrect arguments for some operations.get calls causes exceptions

The recent update to google/models/compute/target_pool.save and google/models/compute/forwarding_rule.save pass operations.get incorrect arguments which in turn throws an error when wait_for is called on a nil object. My colleague or I will be submitting a pull request soon to fix this

Add support for custom VMs

Custom VMs have been released; it would be a good thing to support them.

This shouldn't be extremely hard, since the API is very simple; one just needs to specify a custom machineType value:

zones/ZONE/machineTypes/custom-NUMBER_OF_CPUS-AMOUNT_OF_MEMORY

Since machineType is just passed in as a string in our case, we may just need to verify that it works.
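
The string above can be built with a tiny helper like this (the method name is hypothetical, purely for illustration):

```ruby
# Hypothetical helper: builds the custom machineType path described above.
def custom_machine_type(zone, cpus, memory_mb)
  "zones/#{zone}/machineTypes/custom-#{cpus}-#{memory_mb}"
end

custom_machine_type("us-central1-a", 2, 4096)
# => "zones/us-central1-a/machineTypes/custom-2-4096"
```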

More info here:
http://googlecloudplatform.blogspot.sg/2015/11/introducing-Custom-Machine-Types-the-freedom-to-configure-the-best-VM-shape-for-your-workload.html#gpluscomments
https://cloud.google.com/compute/docs/instances/creating-instance-with-custom-machine-type

Fog::DNS::Google::Records example is broken

get method in Fog::DNS::Google::Records supports only positional arguments

connection.records.get is structured like so:

def get(name, type)
  requires :zone

  records = service.list_resource_record_sets(zone.identity, { :name => name, :type => type }).body['rrsets'] || []
  records.any? ? new(records.first) : nil
rescue Fog::Errors::NotFound
  nil
end

, however, our example lists:

record = connection.records(zone: zone).get(name: 'tessts.example.org.',type: 'A')

, which leads to argument errors:

> record = connection.records(zone: zone).get(name: 'tessts.example.org.',type: 'A')
ArgumentError: wrong number of arguments (1 for 2)
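
For reference, a positional call matching the current def get(name, type) signature would look like this (values copied from the example above; the connection line is commented out since it needs a live zone):

```ruby
# Positional arguments matching the current signature (name, type):
name, type = "tessts.example.org.", "A"
# record = connection.records(zone: zone).get(name, type)
```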

Should I:

  • Fix the example?
    or
  • Fix the method to accept an options hash? (Ruby 2.0 doesn't support required named arguments 😞)

/CC @plribeiro3000 @icco

Scope aliases consistency with gcloud

We've had a discussion with @erjohnso about service account scopes inconsistency with official gcloud utility in mitchellh/vagrant-google#71.

If one wants to define a service account scope using a short name, we have the following (left is the gcloud compute alias, right is the fog scope: attribute), since gcloud compute aliases do not match up to API endpoints:

          compute-ro         compute.readonly
          compute-rw         compute
          computeaccounts-ro computeaccounts.readonly
          computeaccounts-rw computeaccounts
          logging-write      logging.write
          sql                sqlservice
          sql-admin          sqlservice.admin
          storage-full       devstorage.full_control
          storage-ro         devstorage.read_only
          storage-rw         devstorage.read_write

Excerpt from gcloud compute instances create --help:

          Alias              URI
          bigquery           https://www.googleapis.com/auth/bigquery
          cloud-platform     https://www.googleapis.com/auth/cloud-platform
          compute-ro         https://www.googleapis.com/auth/compute.readonly
          compute-rw         https://www.googleapis.com/auth/compute
          computeaccounts-ro https://www.googleapis.com/auth/computeaccounts.readonly
          computeaccounts-rw https://www.googleapis.com/auth/computeaccounts
          datastore          https://www.googleapis.com/auth/datastore
          logging-write      https://www.googleapis.com/auth/logging.write
          monitoring         https://www.googleapis.com/auth/monitoring
          sql                https://www.googleapis.com/auth/sqlservice
          sql-admin          https://www.googleapis.com/auth/sqlservice.admin
          storage-full       https://www.googleapis.com/auth/devstorage.full_control
          storage-ro         https://www.googleapis.com/auth/devstorage.read_only
          storage-rw         https://www.googleapis.com/auth/devstorage.read_write
          taskqueue          https://www.googleapis.com/auth/taskqueue
          userinfo-email     https://www.googleapis.com/auth/userinfo.email

I was thinking about implementing it by either:

  1. Keeping an additional mapping of those shortcuts and if one matches - expanding them (Since it's probably a bad idea to have only gcloud-style mappings available in scopes, as that may break things for existing users, who expect the old-style shortcut).
    or
  2. Creating a separate scope_aliases: attribute which will accept only aliased parameters in the same format as gcloud, expanding them and passing them on to service_accounts: later.

Any thoughts on this?
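
Option 1 could be sketched like this (the alias map is a partial copy of the table above; the method name and pass-through behavior are hypothetical):

```ruby
# Partial gcloud-style alias map, expanded to full OAuth scope URIs.
SCOPE_ALIASES = {
  "compute-ro"   => "compute.readonly",
  "compute-rw"   => "compute",
  "sql"          => "sqlservice",
  "sql-admin"    => "sqlservice.admin",
  "storage-full" => "devstorage.full_control",
  "storage-ro"   => "devstorage.read_only",
  "storage-rw"   => "devstorage.read_write"
}.freeze

# Hypothetical expansion: unknown shortcuts pass through unchanged,
# so existing fog-style shortcuts would keep working.
def expand_scope(scope)
  "https://www.googleapis.com/auth/#{SCOPE_ALIASES.fetch(scope, scope)}"
end

expand_scope("storage-ro")
# => "https://www.googleapis.com/auth/devstorage.read_only"
```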

P.S. This has already been done in libcloud:
https://github.com/apache/libcloud/blob/trunk/libcloud/compute/drivers/gce.py#L971
https://github.com/apache/libcloud/blob/trunk/libcloud/compute/drivers/gce.py#L4677

Support listing >1000 file directories via Fog::Storage::GoogleJSON

I am connecting to Google Cloud Storage where I have 20,000+ objects, but when I connect to the bucket with Fog and list all files I only get 1000. Is there a way to increase the number of files returned? I was hoping to programmatically go through all of these and make modifications, but now I'm stuck. Anyone? Thanks, and great work on this gem. 😄
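
As a hedged sketch only, a manual pagination loop would look something like the following; note the page_token option and next_page_token accessor are assumptions, not confirmed fog-google API, so check the pagination interface your version actually exposes:

```ruby
# Hypothetical pagination loop; option/accessor names are assumptions.
def all_files(directory)
  files = []
  page_token = nil
  loop do
    batch = directory.files.all(page_token: page_token)  # assumed option name
    files.concat(batch.to_a)
    page_token = batch.respond_to?(:next_page_token) ? batch.next_page_token : nil
    break if page_token.nil?
  end
  files
end
```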

Update CONTRIBUTORS.md

We've got to keep the CONTRIBUTORS.md file updated.

@geemus has a tool for that (osrcry), but it does not help much here because we already have a ton of contributors from fog who do not have commits here, and since the tool works on top of git, it would just drop all of them. What I'm doing so far is updating it by hand, but I guess a provider the size of this one can't keep going like this.

Perhaps we might send some patches to @geemus. =)

Obsolete development dependencies

Currently, fog-google.gemspec says that fog-google relies on pry, vcr, webmock, coveralls, and rubocop as development dependencies. Should some of these be removed?

Bootstrap method should look for gcloud ssh keys

Currently, the live bootstrap test assumes the user has ~/.ssh/id_rsa[.pub] for ssh keys. Google users will typically have ~/.ssh/google_compute_engine[.pub]. In that case, ssh will "Just Work"(tm), so the suggestion is for the live bootstrap test to first try the google key, then fall back to id_rsa.

wdyt @ihmccreery?

Reconcile fog-google and fog/lib/fog/google

Right now, fog-google and fog/lib/fog/google codebases don't reference each other. fog-google is (kind of) under development, but fog doesn't pull those changes in.

We need to freeze one of these codebases to prevent having changes to both fog-google and fog/lib/fog/google that are difficult to merge. I propose we either:

  1. freeze fog-google, develop in fog/lib/fog/google to the point where we're confident that the code is not broken and properly tested, then pull that codebase into fog-google; or
  2. freeze fog/lib/fog/google, pull it over to fog-google, (pretty much already done,) and deprecate fog/lib/fog/google so that all development is happening in fog-google.

I think the first option will be the easier one, and will have the highest probability of not exploding in our faces. Transferring the whole codebase, where a lot of it is of unknown correctness, could be dangerous. However, the first option goes back on the current trajectory, and also might mean that pulling that codebase into fog-google later will be more painful.

Image create example is broken

I'm using https://github.com/fog/fog/blob/master/lib/fog/google/examples/image_create.rb as a template. Specifically, at least two different issues:

First, connection.image.create should be connection.images.create -- trivial fix

Second, connection.images.create fails with:

/home/diwaker/.rvm/gems/ruby-2.0.0-p195@mgt_console/gems/fog-1.23.0/lib/fog/google/compute.rb:179:in `build_excon_response': Invalid value for field 'image.hasRawDisk': 'false'.  (Fog::Errors::Error)
        from /home/diwaker/.rvm/gems/ruby-2.0.0-p195@mgt_console/gems/fog-1.23.0/lib/fog/google/compute.rb:959:in `build_response'
        from /home/diwaker/.rvm/gems/ruby-2.0.0-p195@mgt_console/gems/fog-1.23.0/lib/fog/google/requests/compute/get_global_operation.rb:21:in `get_global_operation'
        from /home/diwaker/.rvm/gems/ruby-2.0.0-p195@mgt_console/gems/fog-1.23.0/lib/fog/google/models/compute/operations.rb:27:in `get'
        from /home/diwaker/.rvm/gems/ruby-2.0.0-p195@mgt_console/gems/fog-1.23.0/lib/fog/google/models/compute/image.rb:79:in `save'
        from /home/diwaker/.rvm/gems/ruby-2.0.0-p195@mgt_console/gems/fog-core-1.23.0/lib/fog/core/collection.rb:51:in `create'

Release 0.1.1

I'm gonna bundle up 0.1.1 to go into Fog's 2.0 release. This drops support for all versions of Ruby < 2.0.

Allow Range header to work with Google Cloud Storage

The gem seems to support the http Range: header in requests, however it doesn't accept the 206 response from google, which is required. (206 = Partial Content)

The following PR adds basic support for accepting 206 Partial Content, allowing the Range header to be used to get objects from GCS:

#106

v0.0.6 source and tag

rubygems.org has a 0.0.6 version published as of yesterday, but this repo doesn't seem to contain a tag or corresponding source code (lib/fog/google/version.rb says 0.0.5).

get and other business logic should be DRY-ed up in the models

This issue is coming from the pain point that different resources behave differently when #get('nonexistent-identity') is called:

  • for Addresses#get, Servers#get, and others, if the resource isn't found, it returns nil (this seems to be the preferred behavior), whereas
  • for UrlMaps#get, TargetHttpProxies#get, and others, if the resource isn't found, it raises a Fog::Errors::NotFound.

This particular issue has been patched up in ikehz/fog-google@4e1d5dd and others, but it should be solved more permanently by DRY-ing up the duplicated business logic (as well as by implementing more consistent tests).

It's worth noting that these are breaking changes, but they are minor enough that I'm willing to put them in v0.1, though I'm happy to hear dissent. If we decide not to change the behavior until v1, it will require some serious workarounds to test properly (per the work I've been doing moving to Minitest).
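One possible shape for the DRY-ed-up logic: funnel every collection's lookup through a shared helper that rescues the not-found error and returns nil. All names below are illustrative, and NotFound is a stand-in for Fog::Errors::NotFound so the sketch is self-contained.

```ruby
NotFound = Class.new(StandardError) # stand-in for Fog::Errors::NotFound

# Shared helper: a missing resource yields nil instead of raising,
# matching the behavior of Addresses#get and Servers#get.
module NilOnNotFound
  def get_or_nil
    yield
  rescue NotFound
    nil
  end
end

class UrlMaps
  include NilOnNotFound

  def get(identity)
    get_or_nil { fetch(identity) }
  end

  # Stub lookup standing in for the real service call.
  def fetch(identity)
    raise NotFound unless identity == "known-map"
    { "name" => identity }
  end
end
```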

For some resources, resource#get(nil) returns a resource

I would expect that, for example, Fog::Compute[:google].servers.get(nil) should either return nil or raise an error. Instead, if at least one server exists in my project,

> Fog::Compute[:google].servers.get(nil)
=> <Fog::Compute::Google::Server
      name="server-name",
      ...
    >

This is because of the way we find servers, disks, and other resources that are zoned/regioned. For example:

servers = service.list_aggregated_servers(:filter => "name eq .*#{identity}").body['items']

Every server matches that filter if identity is nil.
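A minimal sketch of a guard for this case: bail out before building the filter, since "name eq .*" matches every resource when identity is nil. The lookup below is a stub standing in for the real list_aggregated_servers call.

```ruby
class Servers
  def get(identity)
    return nil if identity.nil? # guard: a ".*" filter would match everything

    list_matching(identity).first
  end

  # Stub standing in for the aggregated-list call with a name filter.
  def list_matching(identity)
    %w[server-one server-two].select { |name| name == identity }
  end
end
```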

CI Linter introduction

Are you planning on introducing a linter (e.g. RuboCop) into the project?
It could save you some cycles on PR reviews if it's tied to Travis, and also fix some style-guide violations that were overlooked or inherited.

Cut new Gem to adjust for Google DNS API v.1 deprecation

v1beta1 will be deprecated very soon (docs state cutoff on Sept 25th)

We already had the support proposed in #64.
This needs to be acceptance-tested, but I don't quite know what the current setup is.

Is master stable enough to start cutting out a new gem?
If not - what is needed to push this through?

/CC @ihmccreery who has access to Jenkins.

Shindo should be replaced with minitest

Per fog/fog#1266 and fog/fog#2630, Minitest is the new testing framework to be used by fog projects.

  • #51 is open to cover compute in integration tests.
  • Nothing is unit-tested with minitest; it should be. Opening #49.
  • Furthermore, it's not clear whether or not we can run these integration tests in Travis CI (see #19). Regardless, unit tests should be run in Travis.
  • #51 only covers compute; dns, monitoring, sql, and storage all remain tested under shindo, and should be moved over to minitest.
  • The Rakefile is kind of a mess, and should eventually be cleaned up.

attribute names should be standardized

For example, Server has the following format for attributes (this seems to be the preferred format):

attribute :can_ip_forward, :aliases => 'canIpForward'
attribute :creation_timestamp, :aliases => 'creationTimestamp'

whereas UrlMap has the following format:

attribute :fingerprint, :aliases => 'fingerprint'
attribute :hostRules, :aliases => 'host_rules'

It's worth noting that the format UrlMap uses does not allow :underscored_symbol notation as it stands now (symbol keys are not automatically converted to or checked against strings when passing parameters around).
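Following Server's convention, UrlMap's declarations would use underscored symbols with camelCase string aliases. The attribute stub below merely records declarations so the sketch runs standalone; it is not fog's actual DSL.

```ruby
# Minimal stub of fog's attribute DSL, just to show the target shape.
ATTRIBUTES = {}
def attribute(name, aliases: nil)
  ATTRIBUTES[name] = aliases
end

# UrlMap rewritten in Server's preferred format:
attribute :fingerprint
attribute :host_rules, :aliases => 'hostRules'
```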

Implement SCRATCH disk type

Is it possible to create a SCRATCH disk instead of a PERSISTENT one? I've looked through the source and it seems that PERSISTENT is hard-coded into everything.

Ambiguity in asynchronous execution setting

While figuring out acceptance tests for vagrant-google, I found a weird piece of logic in the implementation of synchronous operations in Fog.
As an example, let's take a look at the Server class's destroy method:

def destroy(async=true)
  requires :name, :zone

  data = service.delete_server(name, zone_name)
  operation = Fog::Compute::Google::Operations.new(:service => service).get(data.body['name'], data.body['zone'])
  unless async
    operation.wait_for { ready? }
  end
  operation
end

The async parameter is just a true/false switch, so if we need to perform the operation synchronously (important for tests for example), we need to specify it like so:

instance.destroy(false)

I find this highly confusing for someone reading the code later.
Because the default is true, it's also hard to wrap in a descriptively named variable, since the result doesn't read sensibly either:

async_execution = false
instance.destroy(async_execution)

I was wondering: maybe it makes sense to make it a named parameter?
This will allow for:

  1. Logically sound statements: instance.start(async: false)
  2. The ability to add more execution-flow-control parameters easily if we need them.
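The proposal above can be sketched as a keyword form that keeps today's default. The body below stubs out the service call and operation polling so the example is self-contained; @waited stands in for operation.wait_for { ready? }.

```ruby
class Server
  attr_reader :waited

  # Keyword form of the proposed API; the default stays async.
  def destroy(async: true)
    @waited = true unless async # stands in for operation.wait_for { ready? }
    :operation                  # stands in for the returned operation object
  end
end

# Call sites become self-documenting:
#   instance.destroy(async: false)
```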

ACL

Hi, how do I deal with ACLs while uploading?

Accessing url is slow

I'm having a problem where accessing, say, image.url takes a little too long. For example, calling url on 100 images takes around 28 seconds when using Google Storage, whereas the same call with S3 takes around half a second.

This might be related to this CarrierWave issue. However, the problem they were having affected both Google Storage and S3. @geemus suggested in that issue to use public_url instead, but this is still slow compared to using fog + S3.

This might not be a fog issue but a Google Storage issue.

As a workaround, I'm currently just hardcoding my Google Storage URL and appending image.path to it, which is much faster.
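The workaround can be sketched as below; the bucket name is a placeholder, and this only works for objects that are publicly readable.

```ruby
# Build public URLs directly instead of calling image.url per file.
# "my-bucket" is a placeholder for the real bucket name.
GCS_BASE = "https://storage.googleapis.com/my-bucket".freeze

def fast_url(path)
  "#{GCS_BASE}/#{path}"
end
```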

Meta tests in `fog-google`

@plribeiro3000 Thanks again for doing the extraction.

I'm getting ready to cut a gem for fog-google, but I wanted to make sure all of the tests are passing properly and that we're not missing any components.

When I run the tests inside fog-google versus fog (using sh("export FOG_MOCK=#{mock} && bundle exec shindont tests/google")), pretty much everything looks the same, except that the fog-google test suite seems to have the following two lines that the fog tests don't, with 48 tests missing:

Fog::Schema::DataValidator (meta) +++++++++++++++++++++++
test_helper (meta) +++++++++++++++++++++++++

Should these tests be here? Seems weird to me that they are. It may very well be an artifact of other weird things that are going on.
