
manageiq-providers-amazon's Introduction

ManageIQ::Providers::Amazon


ManageIQ plugin for the Amazon provider.

Development

See the section on plugins in the ManageIQ Developer Setup

For a quick local setup, run bin/setup, which will clone the core ManageIQ repository under the spec directory and set up the necessary config files. If you have already cloned it, you can run bin/update to bring the core ManageIQ code up to date.

License

The gem is available as open source under the terms of the Apache License 2.0.

Contributing

  1. Fork it
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create a new Pull Request

manageiq-providers-amazon's People

Contributors

agrare, alexanderzagaynov, bdunne, blomquisg, bzwei, carbonin, cben, chessbyte, d-m-u, djberg96, durandom, fryguy, gberginc, gmcculloug, gtanzillo, hsong-rh, jerryk55, jprause, jrafanie, kbrock, ladas, mfeifer, miha-plesko, miq-bot, mzazrivec, nicklamuro, roliveri, skateman, slemrmartin, tumido


manageiq-providers-amazon's Issues

Allow instance types to be pulled from github.com

I was thinking about how instance types are not updated until the next refresh. Once we've merged #752, I wonder whether those values could be pulled live from this repo during refresh, or perhaps on demand. I haven't worked out all of the details, but here are some points I'm thinking about:

  1. We would need to include a version number or flag in the yaml file, and/or permanently ensure backward compatibility.
  2. Since it's unlikely to update that frequently, we might want to use an ETag to avoid pulling something that hasn't changed.
  3. Any failure to fetch or update should gracefully write a log message and move on.
  4. Users should be able to "opt-out" or "opt-in" in case they are deploying into a DMZ and don't want to be pulling from github.com.
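Points 2 and 3 above could look something like this minimal sketch, with the HTTP transport injected so the caching logic stands on its own. All names here are hypothetical, not the actual ManageIQ API; `transport.call(etag)` stands in for an HTTP GET that sends If-None-Match and returns `[status, body, new_etag]`.

```ruby
# Hedged sketch of ETag-based conditional fetching (point 2) with graceful
# failure handling (point 3). Names are illustrative.
def refresh_instance_types(transport, cache)
  status, body, etag = transport.call(cache[:etag])
  case status
  when 304                      # Not Modified: cached copy is still current
    cache[:data]
  when 200                      # changed upstream: replace the cached copy
    cache[:etag] = etag
    cache[:data] = body
  else                          # point 3: log a message and move on
    warn("instance type fetch failed (HTTP #{status}); keeping cached data")
    cache[:data]
  end
end
```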

Failure during smartstate agent setup leaves a zombie instance and job

If you start a scan job and it tries to docker login when credentials are missing, or one of the ssh commands fails, you get the expected error in the log, but the job and the instance seem to run forever. This is because the job is never part of the heartbeat check. Even so, we should ensure that any failure during setup is caught and handled directly rather than relying on the heartbeat.

AWS ECS

Hi all,
Is there a way to manage ECS-orchestrated containers?
Cyril

Write specs for tag setting

PR #183 still needs to be ported to the new graph refresh. Given that the spec runs against both the old and the new (graph) refresh, it will fail when the graph refresh is merged. For this reason, the tagging spec was commented out; it needs to be re-enabled once tagging support has been added to the graph refresh.

Amazon SSA not working

Hello,

Customer is trying to run SmartState Analysis on an AWS instance running CentOS.

In the user tasks this analysis fails with: "job timed out after 3060.770851409 seconds of inactivity. Inactivity threshold [3000 seconds]"

This is Infrastructure Management running as a virtual appliance on VMware.

In the provided logs (TS004217864/2020-09-27/ dir on ecurep) I can see:

{"@timestamp":"2020-10-07T14:04:52.176002 ","hostname":"1-amazon-agent-coordinator-0-55fd49dc87-hsgwd","pid":7,"tid":"2acb296b6e48","level":"debug","message":"[Aws::SQS::Client 400 0.057859 0 retries] get_queue_url(queue_name:\"smartstate_extract_reply-b0d2ab4b-40ed-4025-a5e2-788ba23ba35d\") Aws::SQS::Errors::NonExistentQueue The specified queue does not exist for this wsdl version."}
{"@timestamp":"2020-10-07T14:05:02.577138 ","hostname":"1-amazon-agent-coordinator-0-55fd49dc87-hsgwd","pid":7,"tid":"2acb257d795c","level":"debug","message":"[Aws::S3::Client 404 0.20764 0 retries] head_bucket(bucket:\"smartstate-b0d2ab4b-40ed-4025-a5e2-788ba23ba35d\") Aws::S3::Errors::NotFound "}

--> process agent coordinator running

12190 root 21 1 395768 204608 14740 R 15.4 0.8 0:09.24 MIQ: Amazon::CloudManager::EventCatcher id: 76, queue: ems_3

12188 root 30 10 361160 202108 13100 R 15.3 0.8 0:09.19 MIQ: Amazon::AgentCoordinatorWorker id: 75, queue: ems_agent_coordinator

12194 root 23 3 361408 195428 13128 R 14.7 0.8 0:08.85 MIQ: Amazon::CloudManager::MetricsCollectorWorker id: 78, queue: amazon

--> The related task info with GUID

[----] I, [2020-09-27T15:03:16.993192 #2228:2abfbf325970] INFO -- : MIQ(ManageIQ::Providers::Amazon::AgentCoordinatorWorker#start) Worker started: ID [75], PID [12188], GUID [5b493b88-02ff-4570-8697-6e393122260b]

--> I have looked at ews.log for GUID but, the only timeout error I found is the following:

[----] I, [2020-09-27T15:59:18.098238 #12188:2aeeb017595c] INFO -- : MIQ(ManageIQ::Providers::Amazon::AgentCoordinatorWorker::Runner#do_work) Alive agents in EMS(guid=1c9078f4-1231-4f90-9b63-29fbc205ce72): [].

.....

and then:

[----] I, [2020-09-27T15:59:23.405619 #2684:2ad3ca51b968] INFO -- : MIQ(MiqQueue#deliver) Message id: [5879], Delivering...

[----] W, [2020-09-27T15:59:23.409676 #2684:2ad3ca51b968] WARN -- : MIQ(ManageIQ::Providers::Amazon::CloudManager::Scanning::Job#timeout!) Job: guid: [17167f79-8f9d-428f-a382-4cb29ee8e498], job timed out after 3052.271594948 seconds of inactivity. Inactivity threshold [3000 seconds], aborting

[----] I, [2020-09-27T15:59:23.416496 #2684:2ad3ca51b968] INFO -- : MIQ(MiqQueue.put) Message id: [5882], id: [], Zone: [default], Role: [smartstate], Server: [], MiqTask id: [], Ident: [generic], Target id: [], Instance id: [1], Task id: [], Command: [Job.signal_abort], Timeout: [600], Priority: [100], State: [ready], Deliver On: [], Data: [], Args: ["job timed out after 3052.271594948 seconds of inactivity. Inactivity threshold [3000 seconds]", "error"]

[----] I, [2020-09-27T15:59:23.416693 #2684:2ad3ca51b968] INFO -- : MIQ(MiqQueue#delivered) Message id: [5879], State: [ok], Delivered in [0.01107655] seconds

[----] I, [2020-09-27T15:59:23.422359 #2684:2ad3ca51b968] INFO -- : MIQ(MiqGenericWorker::Runner#get_message_via_drb) Message id: [5881], MiqWorker id: [64], Zone: [], Role: [], Server: [], MiqTask id: [], Ident: [generic], Target id: [], Instance id: [], Task id: [], Command: [MiqQueue.check_for_timeout], Timeout: [600], Priority: [90], State: [dequeue], Deliver On: [], Data: [], Args: [], Dequeued in: [3.302457898] seconds

[----] I, [2020-09-27T15:59:23.422617 #2684:2ad3ca51b968] INFO -- : MIQ(MiqQueue#deliver) Message id: [5881], Delivering...

[----] I, [2020-09-27T15:59:23.423784 #2684:2ad3ca51b968] INFO -- : MIQ(MiqQueue#delivered) Message id: [5881], State: [ok], Delivered in [0.001177885] seconds

[----] I, [2020-09-27T15:59:28.470684 #2228:2abfbf325970] INFO -- : MIQ(MiqServer#populate_queue_messages) Fetched 1 miq_queue rows for queue_name=generic, wcount=4, priority=200

[----] I, [2020-09-27T15:59:29.506630 #2684:2ad3ca51b968] INFO -- : MIQ(MiqGenericWorker::Runner#get_message_via_drb) Message id: [5882], MiqWorker id: [64], Zone: [default], Role: [smartstate], Server: [], MiqTask id: [], Ident: [generic], Target id: [], Instance id: [1], Task id: [], Command: [Job.signal_abort], Timeout: [600], Priority: [100], State: [dequeue], Deliver On: [], Data: [], Args: ["job timed out after 3052.271594948 seconds of inactivity. Inactivity threshold [3000 seconds]", "error"], Dequeued in: [6.094388538] seconds

[----] I, [2020-09-27T15:59:29.506939 #2684:2ad3ca51b968] INFO -- : MIQ(MiqQueue#deliver) Message id: [5882], Delivering...

[----] I, [2020-09-27T15:59:29.656524 #2684:2ad3ca51b968] INFO -- : MIQ(MiqQueue.put) Message id: [5883], id: [], Zone: [default], Role: [automate], Server: [], MiqTask id: [], Ident: [generic], Target id: [], Instance id: [], Task id: [], Command: [MiqAeEngine.deliver], Timeout: [3600], Priority: [20], State: [ready], Deliver On: [], Data: [], Args: [{:object_type=>"ManageIQ::Providers::Amazon::CloudManager::Vm", :object_id=>34, :attrs=>{:event_type=>"vm_scan_abort", "VmOrTemplate::vm"=>34, :vm_id=>34, :host=>nil, "MiqEvent::miq_event"=>308, :miq_event_id=>308, "EventStream::event_stream"=>308, :event_stream_id=>308}, :instance_name=>"Event", :user_id=>1, :miq_group_id=>1, :tenant_id=>1, :automate_message=>nil}]

[----] E, [2020-09-27T15:59:29.656639 #2684:2ad3ca51b968] ERROR -- : MIQ(ManageIQ::Providers::Amazon::CloudManager::Scanning::Job#process_abort) job aborting, job timed out after 3052.271594948 seconds of inactivity. Inactivity threshold [3000 seconds]

[----] E, [2020-09-27T15:59:29.662790 #2684:2ad3ca51b968] ERROR -- : MIQ(MiqQueue#deliver) Message id: [5882], Error: [finish is not permitted at state aborting]

[----] E, [2020-09-27T15:59:29.662909 #2684:2ad3ca51b968] ERROR -- : [RuntimeError]: finish is not permitted at state aborting Method:[block (2 levels) in class:LogProxy]

[----] E, [2020-09-27T15:59:29.662994 #2684:2ad3ca51b968] ERROR -- : /var/www/miq/vmdb/app/models/job/state_machine.rb:50:in `signal'

Not sure if this is related to the issue.

Add a new region to regions.yml

Hi,

We have a cloud provider that offers an "AWS compatible" API that we would like to test with ManageIQ. I couldn't find a way to add a new region, and adding a custom Endpoint URL in the UI doesn't help.
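As a possible workaround: some ManageIQ versions merge extra Amazon regions from the advanced settings rather than requiring an edit to regions.yml. The fragment below is a hypothetical sketch of that shape; the region name, description, and hostname are made up, and the exact keys should be verified against your build before relying on it.

```yaml
# Hypothetical advanced-settings fragment (verify keys against your build)
:ems:
  :ems_amazon:
    :additional_regions:
      :my-custom-region-1:
        :name: my-custom-region-1
        :description: My AWS-compatible Region
        :hostname: ec2.my-custom-region-1.example.com
```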

Can't seem to use a Rails console

It would be nice to be able to run a Rails console from this repo, though I'm not sure how that would work, or if it's even possible.

Gather AWS instance types list from AWS's GitHub repo

The instance types list is currently hard-coded in the https://github.com/ManageIQ/manageiq-providers-amazon/blob/master/app/models/manageiq/providers/amazon/instance_types.rb#L7 file.

We can fetch it (either dynamically or at build time) from: https://github.com/aws/aws-sdk-ruby/blob/master/apis/ec2/2016-11-15/api-2.json#L11007

(The link points to the latest version date available; all the version dates can be seen here: https://github.com/aws/aws-sdk-ruby/blob/master/apis/ec2)
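A build-time extraction could be as simple as the sketch below. The JSON path ("shapes" → "InstanceType" → "enum") is an assumption based on the linked api-2.json's current layout, so treat it as illustrative:

```ruby
require "json"

# Sketch: pull the InstanceType enum values out of an aws-sdk api-2.json
# document. Returns an empty array when the expected path is absent.
def instance_type_names(api_json)
  JSON.parse(api_json).dig("shapes", "InstanceType", "enum") || []
end
```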

Are relationships between cloud_networks and network_routers correct?

After digging into the Amazon network provider, I'm confused about the way the relationships are set up between cloud_networks, network_routers and cloud_subnets.

To see what I'm talking about, I've created a VPC (dberger-vpc), two subnets (dberger-subnet1 and dberger-subnet2), and two network_routers (dberger-route1 and dberger-route2). Both subnets and both routers are associated with dberger-vpc; however, both subnets are associated with dberger-route1, and neither with dberger-route2.

There are two curious things currently happening. First, even though both routers are associated with the VPC on the AWS side, only one of them (the one associated with the subnets) actually has a relationship with the cloud network on the ManageIQ side:

cloud_network = ManageIQ::Providers::Amazon::NetworkManager::CloudNetwork.find_by(:name => "dberger-vpc")
cloud_network.network_routers # => only dberger-route1

Second, even though dberger-route1 can be seen via cloud_network.network_routers, the reverse is not true.

router = ManageIQ::Providers::Amazon::NetworkManager::NetworkRouter.find_by(:name => "dberger-route1")
router.cloud_network # => nil

Note that within the AWS console, the association with the VPC is definitely there (and you can see the vpc-id using the REST API).

I think I know where/why this is happening within the code, but I wanted to double check as to whether or not this was a deliberate design decision before I try to update it.

Verify access to SQS before starting the EventCatcher

INFO -- evm: MIQ(ManageIQ::Providers::Amazon::CloudManager#with_provider_connection) Connecting through ManageIQ::Providers::Amazon::CloudManager: [AWS]
ERROR -- evm: MIQ(ManageIQ::Providers::Amazon::CloudManager::EventCatcher::Runner#start_event_monitor) EMS [] as [AKIAZWHQRYM2XTM36DH5] Event Monitor Thread aborted because [Access to the resource https://sqs.us-east-1.amazonaws.com/ is denied.]
ERROR -- evm: [Aws::SQS::Errors::AccessDenied]: Access to the resource https://sqs.us-east-1.amazonaws.com/ is denied.  Method:[block (2 levels) in <class:LogProxy>]
ERROR -- evm: /opt/IBM/infrastructure-management-gemset/gems/aws-sdk-core-3.130.2/lib/seahorse/client/plugins/raise_response_errors.rb:17:in `call'
/opt/IBM/infrastructure-management-gemset/gems/aws-sdk-core-3.130.2/lib/aws-sdk-core/plugins/checksum_algorithm.rb:111:in `call'
/opt/IBM/infrastructure-management-gemset/gems/aws-sdk-core-3.130.2/lib/aws-sdk-core/plugins/jsonvalue_converter.rb:22:in `call'
/opt/IBM/infrastructure-management-gemset/gems/aws-sdk-core-3.130.2/lib/aws-sdk-core/plugins/idempotency_token.rb:19:in `call'
/opt/IBM/infrastructure-management-gemset/gems/aws-sdk-core-3.130.2/lib/aws-sdk-core/plugins/param_converter.rb:26:in `call'
/opt/IBM/infrastructure-management-gemset/gems/aws-sdk-core-3.130.2/lib/seahorse/client/plugins/request_callback.rb:71:in `call'
/opt/IBM/infrastructure-management-gemset/gems/aws-sdk-core-3.130.2/lib/aws-sdk-core/plugins/response_paging.rb:12:in `call'
/opt/IBM/infrastructure-management-gemset/gems/aws-sdk-core-3.130.2/lib/seahorse/client/plugins/response_target.rb:24:in `call'
/opt/IBM/infrastructure-management-gemset/gems/aws-sdk-core-3.130.2/lib/seahorse/client/request.rb:72:in `send_request'
/opt/IBM/infrastructure-management-gemset/gems/aws-sdk-sqs-1.51.1/lib/aws-sdk-sqs/client.rb:930:in `create_queue'
/opt/IBM/infrastructure-management-gemset/bundler/gems/bluecf-providers-amazon-b6ef87053a7a/app/models/manageiq/providers/amazon/cloud_manager/event_catcher/stream.rb:103:in `block in sqs_create_queue'
/var/www/miq/vmdb/app/models/ext_management_system.rb:606:in `with_provider_connection'
/opt/IBM/infrastructure-management-gemset/bundler/gems/bluecf-providers-amazon-b6ef87053a7a/app/models/manageiq/providers/amazon/cloud_manager/event_catcher/stream.rb:102:in `sqs_create_queue'
/opt/IBM/infrastructure-management-gemset/bundler/gems/bluecf-providers-amazon-b6ef87053a7a/app/models/manageiq/providers/amazon/cloud_manager/event_catcher/stream.rb:75:in `rescue in find_or_create_queue'
/opt/IBM/infrastructure-management-gemset/bundler/gems/bluecf-providers-amazon-b6ef87053a7a/app/models/manageiq/providers/amazon/cloud_manager/event_catcher/stream.rb:68:in `find_or_create_queue'
/opt/IBM/infrastructure-management-gemset/bundler/gems/bluecf-providers-amazon-b6ef87053a7a/app/models/manageiq/providers/amazon/cloud_manager/event_catcher/stream.rb:47:in `block in poll'
/var/www/miq/vmdb/app/models/ext_management_system.rb:606:in `with_provider_connection'
/opt/IBM/infrastructure-management-gemset/bundler/gems/bluecf-providers-amazon-b6ef87053a7a/app/models/manageiq/providers/amazon/cloud_manager/event_catcher/stream.rb:45:in `poll'
/opt/IBM/infrastructure-management-gemset/bundler/gems/bluecf-providers-amazon-b6ef87053a7a/app/models/manageiq/providers/amazon/cloud_manager/event_catcher/runner.rb:10:in `monitor_events'
/var/www/miq/vmdb/app/models/manageiq/providers/base_manager/event_catcher/runner.rb:156:in `block in start_event_monitor'
Jul 22 12:58:48 unused-9-37-37-179.rtp.raleigh.ibm.com evm[7153]:  INFO -- evm: MIQ(MiqQueue.put) Message id: [84078], Zone: [], Role: [], Server: [], MiqTask id: [], Handler id: [], Ident: [
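A cheap pre-flight probe along these lines could let credential validation fail fast instead of aborting the event monitor thread. All names are illustrative, and the error class is injected so the sketch doesn't depend on the aws-sdk gem being loaded:

```ruby
# Sketch: probe SQS once before starting the event catcher, translating an
# access-denied error into a boolean the caller can act on. In real use
# denied_error would be Aws::SQS::Errors::AccessDenied.
def sqs_accessible?(sqs_client, denied_error)
  sqs_client.list_queues   # any cheap read-only call works as a probe
  true
rescue denied_error
  false
end
```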

OS detection on AWS not working

Hi guys,

I added the AWS provider, and ManageIQ picked up my EC2 instances / hosts / networks / AZs etc., but OS detection is not working: every EC2 instance shows as Unknown OS in the guest OS chart, for example.

I'm using Debian 9 or Ubuntu 16 on my nodes.


This issue was moved to this repository from ManageIQ/manageiq#17118, originally opened by @cyrilbkr

After switching to aws-sdk v3 paged results are not handled

The aws-sdk v3 describe_* calls take max_results and next_token options, and the *Result object has a next_token.

The default page size when no max_results is specified appears to be 100.

If we do not loop through these pages, not all items will be collected when more than the default page size are present.

Ref:
https://docs.aws.amazon.com/sdkforruby/api/Aws/EC2/Client.html#describe_instances-instance_method
https://docs.aws.amazon.com/sdkforruby/api/Aws/EC2/Types/DescribeInstancesResult.html
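A generic paging loop over the next_token convention could look like this sketch. The client and its `items` accessor are duck-typed stand-ins, not the real aws-sdk response shape (describe_instances, for example, nests instances under reservations):

```ruby
# Sketch: exhaust all pages of a next_token-style API. `resp.items` stands in
# for the per-call result array; a real collector would call the specific
# describe_* method and dig into its result structure.
def paged_collect(client, api_method, params = {})
  results    = []
  next_token = nil
  loop do
    resp = client.public_send(api_method, params.merge(:max_results => 100, :next_token => next_token))
    results.concat(resp.items)
    next_token = resp.next_token
    break if next_token.nil? || next_token.empty?
  end
  results
end
```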

Raise exceptions if SSH commands fail when running the docker image

Amazon SSA uses SSH to connect to the agent and runs docker commands to start the container. If one of the commands fails, an exception should be raised so that the subsequent cleanup logic removes the agent properly. Make sure no zombie instance is left in the EC2 environment.
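The pattern is the usual check-and-raise around each command; here is a minimal local sketch, using Open3 as a stand-in for the SSH channel (the real fix would check the remote exit status the same way):

```ruby
require "open3"

# Sketch: run a command and raise on failure, so the caller's cleanup
# (terminating the agent instance) still gets a chance to run.
def run_or_raise!(*cmd)
  out, err, status = Open3.capture3(*cmd)
  raise "#{cmd.join(' ')} failed (exit #{status.exitstatus}): #{err}" unless status.success?
  out
end
```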

CloudObjectStoreContainer warning and MissingRegionError

Running on latest master with a refreshed database.

I spotted an error and a warning in the evm.log during an AWS refresh that may be of concern. Neither appears to affect the refresh, but they should probably be addressed.

The warning is: warning: toplevel constant CloudObjectStoreContainer referenced by ManageIQ::Providers::Amazon::StorageManager::S3::CloudObjectStoreContainer

The error is: [Aws::Errors::MissingRegionError]: missing region; use :region option or export region name to ENV['AWS_REGION'] Method:[rescue in block in refresh]
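One likely fix (an assumption, not a confirmed root cause) is to resolve a region explicitly before constructing the S3 resource instead of relying on ENV['AWS_REGION']. A tiny sketch of that resolution, with an illustrative default:

```ruby
# Sketch: pick an explicit region with a fallback so client construction
# never hits MissingRegionError. The us-east-1 default is illustrative.
def effective_region(explicit, env = ENV)
  explicit || env["AWS_REGION"] || "us-east-1"
end
```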

To duplicate, just add or refresh an AWS provider that contains resources. The full backtrace that I'm seeing is below:

[----] I, [2017-03-06T09:10:10.300053 #79505:3fc7d783fa0c]  INFO -- : MIQ(ManageIQ::Providers::Amazon::StorageManager::S3::Refresher#refresh_targets_for_ems) EMS: [amazon S3 Storage Manager], id: [8] Refreshing target ManageIQ::Providers::Amazon::StorageManager::S3 [amazon S3 Storage Manager] id [8]...
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-providers-amazon-b65b8993449b/app/models/manageiq/providers/amazon/inventory/parser/storage_manager/s3.rb:80: warning: toplevel constant CloudObjectStoreContainer referenced by ManageIQ::Providers::Amazon::StorageManager::S3::CloudObjectStoreContainer
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-providers-amazon-b65b8993449b/app/models/manageiq/providers/amazon/inventory/parser/storage_manager/s3.rb:80: warning: toplevel constant CloudObjectStoreContainer referenced by ManageIQ::Providers::Amazon::StorageManager::S3::CloudObjectStoreContainer
[----] E, [2017-03-06T09:10:11.343181 #79505:3fc7d783fa0c] ERROR -- : MIQ(ManageIQ::Providers::Amazon::StorageManager::S3::Refresher#refresh) EMS: [amazon S3 Storage Manager], id: [8] Refresh failed
[----] E, [2017-03-06T09:10:11.343482 #79505:3fc7d783fa0c] ERROR -- : [Aws::Errors::MissingRegionError]: missing region; use :region option or export region name to ENV['AWS_REGION']  Method:[rescue in block in refresh]
[----] E, [2017-03-06T09:10:11.343706 #79505:3fc7d783fa0c] ERROR -- : /Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/aws-sdk-core-2.6.50/lib/aws-sdk-core/plugins/regional_endpoint.rb:34:in `after_initialize'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/aws-sdk-core-2.6.50/lib/seahorse/client/base.rb:84:in `block in after_initialize'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/aws-sdk-core-2.6.50/lib/seahorse/client/base.rb:83:in `each'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/aws-sdk-core-2.6.50/lib/seahorse/client/base.rb:83:in `after_initialize'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/aws-sdk-core-2.6.50/lib/seahorse/client/base.rb:21:in `initialize'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/aws-sdk-core-2.6.50/lib/seahorse/client/base.rb:105:in `new'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/aws-sdk-resources-2.6.50/lib/aws-sdk-resources/resource.rb:169:in `extract_client'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/aws-sdk-resources-2.6.50/lib/aws-sdk-resources/resource.rb:15:in `initialize'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-providers-amazon-b65b8993449b/app/models/manageiq/providers/amazon/manager_mixin.rb:72:in `new'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-providers-amazon-b65b8993449b/app/models/manageiq/providers/amazon/manager_mixin.rb:72:in `raw_connect'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-providers-amazon-b65b8993449b/app/models/manageiq/providers/amazon/manager_mixin.rb:29:in `connect'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-providers-amazon-b65b8993449b/app/models/manageiq/providers/amazon/storage_manager/s3.rb:37:in `connect'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-providers-amazon-b65b8993449b/app/models/manageiq/providers/amazon/inventory/collector/storage_manager/s3.rb:27:in `aws_s3_regional'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-providers-amazon-b65b8993449b/app/models/manageiq/providers/amazon/inventory/collector/storage_manager/s3.rb:11:in `cloud_object_store_objects'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-providers-amazon-b65b8993449b/app/models/manageiq/providers/amazon/inventory/parser/storage_manager/s3.rb:49:in `process_objects'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-providers-amazon-b65b8993449b/app/models/manageiq/providers/amazon/inventory/parser/storage_manager/s3.rb:23:in `block in process_containers'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-providers-amazon-b65b8993449b/app/models/manageiq/providers/amazon/inventory/parser/storage_manager/s3.rb:21:in `each'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-providers-amazon-b65b8993449b/app/models/manageiq/providers/amazon/inventory/parser/storage_manager/s3.rb:21:in `process_containers'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-providers-amazon-b65b8993449b/app/models/manageiq/providers/amazon/inventory/parser/storage_manager/s3.rb:10:in `parse'
/Users/dberger/Repositories/manageiq-djberg96/app/models/manager_refresh/inventory.rb:23:in `block in inventory_collections'
/Users/dberger/Repositories/manageiq-djberg96/app/models/manager_refresh/inventory.rb:20:in `each'
/Users/dberger/Repositories/manageiq-djberg96/app/models/manager_refresh/inventory.rb:20:in `inventory_collections'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-providers-amazon-b65b8993449b/app/models/manageiq/providers/amazon/storage_manager/s3/refresher.rb:27:in `block in parse_targeted_inventory'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-gems-pending-fae141b817f2/lib/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-gems-pending-fae141b817f2/lib/gems/pending/util/extensions/miq-benchmark.rb:30:in `realtime_block'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-providers-amazon-b65b8993449b/app/models/manageiq/providers/amazon/storage_manager/s3/refresher.rb:25:in `parse_targeted_inventory'
/Users/dberger/Repositories/manageiq-djberg96/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:87:in `block in refresh_targets_for_ems'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-gems-pending-fae141b817f2/lib/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-gems-pending-fae141b817f2/lib/gems/pending/util/extensions/miq-benchmark.rb:30:in `realtime_block'
/Users/dberger/Repositories/manageiq-djberg96/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:86:in `refresh_targets_for_ems'
/Users/dberger/Repositories/manageiq-djberg96/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:24:in `block (2 levels) in refresh'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-gems-pending-fae141b817f2/lib/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-gems-pending-fae141b817f2/lib/gems/pending/util/extensions/miq-benchmark.rb:30:in `realtime_block'
/Users/dberger/Repositories/manageiq-djberg96/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:24:in `block in refresh'
/Users/dberger/Repositories/manageiq-djberg96/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:14:in `each'
/Users/dberger/Repositories/manageiq-djberg96/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:14:in `refresh'
/Users/dberger/Repositories/manageiq-djberg96/app/models/manageiq/providers/base_manager/refresher.rb:10:in `refresh'
/Users/dberger/Repositories/manageiq-djberg96/app/models/ems_refresh.rb:93:in `block in refresh'
/Users/dberger/Repositories/manageiq-djberg96/app/models/ems_refresh.rb:92:in `each'
/Users/dberger/Repositories/manageiq-djberg96/app/models/ems_refresh.rb:92:in `refresh'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_queue.rb:347:in `block in deliver'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/2.3.0/timeout.rb:91:in `block in timeout'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/2.3.0/timeout.rb:33:in `block in catch'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/2.3.0/timeout.rb:33:in `catch'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/2.3.0/timeout.rb:33:in `catch'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/2.3.0/timeout.rb:106:in `timeout'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_queue.rb:343:in `deliver'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_queue_worker_base/runner.rb:106:in `deliver_queue_message'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_queue_worker_base/runner.rb:134:in `deliver_message'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_queue_worker_base/runner.rb:152:in `block in do_work'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_queue_worker_base/runner.rb:146:in `loop'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_queue_worker_base/runner.rb:146:in `do_work'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_worker/runner.rb:334:in `block in do_work_loop'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_worker/runner.rb:331:in `loop'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_worker/runner.rb:331:in `do_work_loop'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_worker/runner.rb:153:in `run'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_worker/runner.rb:128:in `start'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_worker/runner.rb:21:in `start_worker'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_worker.rb:343:in `block in start_runner'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/nakayoshi_fork-0.0.3/lib/nakayoshi_fork.rb:24:in `fork'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/nakayoshi_fork-0.0.3/lib/nakayoshi_fork.rb:24:in `fork'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_worker.rb:341:in `start_runner'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_worker.rb:352:in `start'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_worker.rb:270:in `start_worker'
/Users/dberger/Repositories/manageiq-djberg96/app/models/mixins/per_ems_worker_mixin.rb:68:in `start_worker_for_ems'
/Users/dberger/Repositories/manageiq-djberg96/app/models/mixins/per_ems_worker_mixin.rb:46:in `block in sync_workers'
/Users/dberger/Repositories/manageiq-djberg96/app/models/mixins/per_ems_worker_mixin.rb:45:in `each'
/Users/dberger/Repositories/manageiq-djberg96/app/models/mixins/per_ems_worker_mixin.rb:45:in `sync_workers'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server/worker_management/monitor.rb:53:in `block in sync_workers'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server/worker_management/monitor.rb:50:in `each'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server/worker_management/monitor.rb:50:in `sync_workers'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server/worker_management/monitor.rb:22:in `monitor_workers'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server.rb:348:in `block in monitor'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-gems-pending-fae141b817f2/lib/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-gems-pending-fae141b817f2/lib/gems/pending/util/extensions/miq-benchmark.rb:30:in `realtime_block'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server.rb:348:in `monitor'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server.rb:370:in `block (2 levels) in monitor_loop'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-gems-pending-fae141b817f2/lib/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-gems-pending-fae141b817f2/lib/gems/pending/util/extensions/miq-benchmark.rb:30:in `realtime_block'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server.rb:370:in `block in monitor_loop'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server.rb:369:in `loop'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server.rb:369:in `monitor_loop'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server.rb:252:in `start'
/Users/dberger/Repositories/manageiq-djberg96/lib/workers/evm_server.rb:65:in `start'
/Users/dberger/Repositories/manageiq-djberg96/lib/workers/evm_server.rb:91:in `start'
/Users/dberger/Repositories/manageiq-djberg96/lib/workers/bin/evm_server.rb:4:in `<main>'
[----] E, [2017-03-06T09:10:11.343815 #79505:3fc7d783fa0c] ERROR -- : MIQ(ManageIQ::Providers::Amazon::StorageManager::S3::Refresher#refresh) EMS: [amazon S3 Storage Manager], id: [8] Unable to perform refresh for the following targets:
[----] E, [2017-03-06T09:10:11.343939 #79505:3fc7d783fa0c] ERROR -- : MIQ(ManageIQ::Providers::Amazon::StorageManager::S3::Refresher#refresh)  --- ManageIQ::Providers::Amazon::StorageManager::S3 [amazon S3 Storage Manager] id [8]
[----] I, [2017-03-06T09:10:11.414577 #79505:3fc7d783fa0c]  INFO -- : MIQ(ManageIQ::Providers::Amazon::StorageManager::S3::Refresher#refresh) Refreshing all targets...Complete
[----] E, [2017-03-06T09:10:11.414792 #79505:3fc7d783fa0c] ERROR -- : MIQ(MiqQueue#deliver) Message id: [25761], Error: [missing region; use :region option or export region name to ENV['AWS_REGION']]
[----] E, [2017-03-06T09:10:11.414962 #79505:3fc7d783fa0c] ERROR -- : [EmsRefresh::Refreshers::EmsRefresherMixin::PartialRefreshError]: missing region; use :region option or export region name to ENV['AWS_REGION']  Method:[rescue in deliver]
[----] E, [2017-03-06T09:10:11.415119 #79505:3fc7d783fa0c] ERROR -- : /Users/dberger/Repositories/manageiq-djberg96/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:50:in `refresh'
/Users/dberger/Repositories/manageiq-djberg96/app/models/manageiq/providers/base_manager/refresher.rb:10:in `refresh'
/Users/dberger/Repositories/manageiq-djberg96/app/models/ems_refresh.rb:93:in `block in refresh'
/Users/dberger/Repositories/manageiq-djberg96/app/models/ems_refresh.rb:92:in `each'
/Users/dberger/Repositories/manageiq-djberg96/app/models/ems_refresh.rb:92:in `refresh'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_queue.rb:347:in `block in deliver'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/2.3.0/timeout.rb:91:in `block in timeout'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/2.3.0/timeout.rb:33:in `block in catch'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/2.3.0/timeout.rb:33:in `catch'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/2.3.0/timeout.rb:33:in `catch'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/2.3.0/timeout.rb:106:in `timeout'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_queue.rb:343:in `deliver'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_queue_worker_base/runner.rb:106:in `deliver_queue_message'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_queue_worker_base/runner.rb:134:in `deliver_message'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_queue_worker_base/runner.rb:152:in `block in do_work'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_queue_worker_base/runner.rb:146:in `loop'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_queue_worker_base/runner.rb:146:in `do_work'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_worker/runner.rb:334:in `block in do_work_loop'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_worker/runner.rb:331:in `loop'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_worker/runner.rb:331:in `do_work_loop'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_worker/runner.rb:153:in `run'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_worker/runner.rb:128:in `start'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_worker/runner.rb:21:in `start_worker'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_worker.rb:343:in `block in start_runner'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/nakayoshi_fork-0.0.3/lib/nakayoshi_fork.rb:24:in `fork'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/nakayoshi_fork-0.0.3/lib/nakayoshi_fork.rb:24:in `fork'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_worker.rb:341:in `start_runner'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_worker.rb:352:in `start'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_worker.rb:270:in `start_worker'
/Users/dberger/Repositories/manageiq-djberg96/app/models/mixins/per_ems_worker_mixin.rb:68:in `start_worker_for_ems'
/Users/dberger/Repositories/manageiq-djberg96/app/models/mixins/per_ems_worker_mixin.rb:46:in `block in sync_workers'
/Users/dberger/Repositories/manageiq-djberg96/app/models/mixins/per_ems_worker_mixin.rb:45:in `each'
/Users/dberger/Repositories/manageiq-djberg96/app/models/mixins/per_ems_worker_mixin.rb:45:in `sync_workers'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server/worker_management/monitor.rb:53:in `block in sync_workers'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server/worker_management/monitor.rb:50:in `each'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server/worker_management/monitor.rb:50:in `sync_workers'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server/worker_management/monitor.rb:22:in `monitor_workers'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server.rb:348:in `block in monitor'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-gems-pending-fae141b817f2/lib/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-gems-pending-fae141b817f2/lib/gems/pending/util/extensions/miq-benchmark.rb:30:in `realtime_block'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server.rb:348:in `monitor'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server.rb:370:in `block (2 levels) in monitor_loop'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-gems-pending-fae141b817f2/lib/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/Users/dberger/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/bundler/gems/manageiq-gems-pending-fae141b817f2/lib/gems/pending/util/extensions/miq-benchmark.rb:30:in `realtime_block'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server.rb:370:in `block in monitor_loop'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server.rb:369:in `loop'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server.rb:369:in `monitor_loop'
/Users/dberger/Repositories/manageiq-djberg96/app/models/miq_server.rb:252:in `start'
/Users/dberger/Repositories/manageiq-djberg96/lib/workers/evm_server.rb:65:in `start'
/Users/dberger/Repositories/manageiq-djberg96/lib/workers/evm_server.rb:91:in `start'
/Users/dberger/Repositories/manageiq-djberg96/lib/workers/bin/evm_server.rb:4:in `<main>'
[----] I, [2017-03-06T09:10:11.415274 #79505:3fc7d783fa0c]  INFO -- : MIQ(MiqQueue#delivered) Message id: [25761], State: [error], Delivered in [1.44828] seconds
[----] I, [2017-03-06T09:10:12.183275 #79502:3fc7d783fa0c]  INFO -- : MIQ(EmsRefresh.save_ems_block_storage_inventory) EMS: [amazon EBS Storage Manager], id: [7] Saving EMS Inventory...
Session:	 Hash of Size 7089, Elements 29

rake -T failure

manageiq-providers-amazon [master]>rake -T

[!] There was an error parsing `Gemfile`: No such file or directory @ rb_sysopen - /Users/dberger/Repositories/manageiq-providers-amazon/spec/manageiq/Gemfile. Bundler cannot continue.

 #  from /Users/dberger/Repositories/manageiq-providers-amazon/Gemfile:16
 #  -------------------------------------------
 #  # Load Gemfile with dependencies from manageiq
 >  eval_gemfile(File.expand_path("spec/manageiq/Gemfile", __dir__))
 #  -------------------------------------------

euwe branch CI broken

The euwe branch CI is red for this repo:
https://github.com/ManageIQ/manageiq-providers-amazon/branches

Looks like something with coveralls changed:

.[Coveralls] Set up the SimpleCov formatter.
[Coveralls] Using SimpleCov's 'rails' settings.
W, [2017-01-11T20:33:07.411751 #21089]  WARN -- :       This usage of the Code Climate Test Reporter is now deprecated. Since version
      1.0, we now require you to run `SimpleCov` in your test/spec helper, and then
      run the provided `codeclimate-test-reporter` binary separately to report your
      results to Code Climate.
      More information here: https://github.com/codeclimate/ruby-test-reporter/blob/master/README.md
[Coveralls] Submitting to https://coveralls.io/api/v1
[Coveralls] Couldn't find a repository matching this job.
Coverage is at 0.23%.
Coverage report sent to Coveralls.

It seems we are using something different than miq core here
https://github.com/ManageIQ/manageiq-providers-amazon/blob/master/spec/spec_helper.rb#L1-L4

@jrafanie can you have a look?

This blocks #100

cc @Ladas

Event filtering through configuration

This issue identifies how event filtering is done today under the covers and explores a way to make it configurable to the end user.

Option 1 - Blacklist table

There is a table called the "blacklisted_events" that stores event types that will be filtered out for a specific provider type. The only way to populate this table today is by:

  1. Hardcoding the event types in the CloudManager.rb file of the providers, eg:
    https://github.com/ManageIQ/manageiq-providers-amazon/blob/master/app/models/manageiq/providers/amazon/cloud_manager.rb#L105:111

  2. Adding rows through the rails console, like this:
    BlacklistedEvent.create(:event_name => "filter_me", :provider_model => "ManageIQ::Providers::Amazon::CloudManager")

Option 2 - Event storms

Duplicate events that are received en masse within a short timeframe are considered an event storm and are de-duplicated for the VMware provider.
I am not sure this functionality can be leveraged for this purpose; configuring the handling of event storms is not exposed to the end user, and it addresses a different use case.

Given that the BlacklistedEvents class and the corresponding table already exist for the purpose of event filtering, I think it makes sense to leverage them. One place to configure this blacklist is the settings.yml file. That file already has an event_handling section, but it is enormous and looks like it needs to be moved out to its own file. I think a new events_setting.yml file should be created for each provider, as follows:
crawl - it would support a blacklist of event types to be filtered
walk - move the entire event_handling section over
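As a hypothetical sketch of how such a blacklist could be applied at event-handling time: the `EventFilter` class and the `:event_type` key below are illustrative names only, not the existing ManageIQ implementation.

```ruby
require "set"

# Hypothetical sketch: drop any incoming event whose type appears in a
# configured per-provider blacklist. Class and key names are assumptions.
class EventFilter
  def initialize(blacklist)
    @blacklist = blacklist.to_set
  end

  # Returns only the events whose type is not blacklisted.
  def accept(events)
    events.reject { |e| @blacklist.include?(e[:event_type]) }
  end
end

filter = EventFilter.new(%w[AWS_EC2_VPC_UPDATE AWS_EC2_Subnet_UPDATE])
events = [
  {:event_type => "AWS_EC2_Instance_CREATE"},
  {:event_type => "AWS_EC2_VPC_UPDATE"}
]
filter.accept(events) # => [{:event_type => "AWS_EC2_Instance_CREATE"}]
```

The same filter could be fed from either the blacklisted_events table or a per-provider settings file; only the source of the list differs.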

AWS Network(VPC) in CFME has no relationship with Network Router(Route Table)

Original BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1746862

Description of problem:
AWS Network(VPC) in CFME has no relationship with Network Router(Route Table)

How reproducible:
Always

Steps to Reproduce:

  1. Add a VPC in AWS
  2. AWS automatically creates subresources such as the Main Route Table, which is associated with the VPC

Actual results:
In CFME, Network -> Networks -> AWS Network (VPC) shows zero associations to Network Routers under Relationships, even though a VPC must have at least the main route table.

Allow AWS provider to be defined without secret/key in order to assume instance credentials

The AWS Ruby SDK v3 states the following order of credential usage for service interaction.

The SDK searches the following locations for credentials:

  • ENV['AWS_ACCESS_KEY_ID'] and ENV['AWS_SECRET_ACCESS_KEY']
  • Unless ENV['AWS_SDK_CONFIG_OPT_OUT'] is set, the shared configuration files (~/.aws/credentials and ~/.aws/config) will be checked for a role_arn and source_profile, which if present will be used to attempt to assume a role.
  • The shared credentials ini file at ~/.aws/credentials (more information)
  • Unless ENV['AWS_SDK_CONFIG_OPT_OUT'] is set, the shared configuration ini file at ~/.aws/config will also be parsed for credentials.
  • From an instance profile when running on EC2, or from the ECS credential provider when running in an ECS container with that feature enabled.
  • If using ~/.aws/config or ~/.aws/credentials a :profile option can be used to choose the proper credentials.

If a user were to run ManageIQ on an EC2 instance that had the appropriate instance profile defined (Instance Profile = IAM Role for an EC2 instance, rather than a user), no credentials would be required to add an AWS Cloud Provider to ManageIQ (assuming all interaction with AWS is done via Ruby SDK).

Currently ManageIQ requires a user to hard code a secret and key combination when defining an AWS Provider, which is a poor choice for security reasons (forces manual rotation of keys, passing the secrets in the clear in order to enter them, etc..). Allowing an instance profile to be used would remove the need for ManageIQ to store / maintain AWS-related credentials, and reduce the overall threat vector that comes with storing those secrets (keys are temporary, and rotated automatically).

As the AWS Ruby SDK already supports this functionality, when ManageIQ calls the SDK, simply not providing a secret or key should allow the SDK to cycle through the above listed options for obtaining credentials. I would request this feature / change to allow the definition of an AWS provider without specifying credentials, and when none are specified ManageIQ functionality should not attempt to pass any credentials to the SDK.
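A minimal sketch of that behavior, assuming a helper that builds the client options; the method and argument names are illustrative, not existing MIQ code. Static credentials are included only when both values were entered, otherwise the SDK falls through its default credential chain (env vars, shared config files, instance profile).

```ruby
# Sketch only: omit static credentials from the Aws client options when
# none were entered, so the SDK resolves credentials itself.
def aws_client_options(region, access_key = nil, secret_key = nil)
  opts = {:region => region}
  if access_key.to_s.strip != "" && secret_key.to_s.strip != ""
    opts[:access_key_id]     = access_key
    opts[:secret_access_key] = secret_key
  end
  opts
end

aws_client_options("us-east-1")
# => {:region=>"us-east-1"}  (SDK resolves credentials via its default chain)
```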


This issue was moved to this repository from ManageIQ/manageiq#17709, originally opened by @namebrandon

Memory metrics on AWS are not captured

The AWS provider doesn't capture data related to the memory usage of instances, so no memory data is shown in the UI.

[1] manageiq-providers-amazon/app/models/manageiq/providers/amazon/cloud_manager/metrics_capture.rb
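One likely complication: EC2 does not publish guest memory metrics to CloudWatch by default, so a guest-side agent would have to push a custom metric before capture could work. The helper below is only a sketch of the reshaping step a capture might need; it takes get_metric_statistics-style datapoints and produces {timestamp => value} pairs. It is not the actual metrics_capture.rb logic.

```ruby
# Sketch: reshape CloudWatch-style datapoints into ordered
# {timestamp => value} pairs as a metrics capture step might consume them.
def memory_datapoints(statistics_result)
  statistics_result[:datapoints]
    .sort_by { |dp| dp[:timestamp] }
    .map     { |dp| [dp[:timestamp], dp[:average]] }
    .to_h
end

result = {:datapoints => [
  {:timestamp => Time.utc(2020, 1, 1, 0, 5), :average => 41.0},
  {:timestamp => Time.utc(2020, 1, 1, 0, 0), :average => 37.5}
]}
memory_datapoints(result)
# => {2020-01-01 00:00:00 UTC => 37.5, 2020-01-01 00:05:00 UTC => 41.0}
```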

Gather AWS instances details from AWS's Price List API

Instance details are currently hard-coded in https://github.com/ManageIQ/manageiq-providers-amazon/blob/master/app/models/manageiq/providers/amazon/instance_types.rb#L12.

We can get them (either dynamically or at build time) following the Amazon's Price List API instruction: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/using-ppslong.html

P.S. It may also be useful to subscribe to price list change notifications: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/price-notification.html
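As an illustrative sketch of the parsing side, the helper below extracts instance-type attributes from a hash shaped like the EC2 offer file served by the Price List bulk API. Only the "products" / "attributes" layout is assumed here; real offer files carry many more fields per SKU, and the fetching/caching strategy is out of scope.

```ruby
# Sketch: collect {instance_type => attributes} from an offer-file-shaped
# hash. SKUs without an instanceType attribute are skipped.
def instance_types_from_offer(offer)
  offer.fetch("products", {}).each_value.with_object({}) do |product, types|
    attrs = product["attributes"] || {}
    type  = attrs["instanceType"]
    next unless type
    types[type] ||= {:vcpu => attrs["vcpu"], :memory => attrs["memory"]}
  end
end

offer = {"products" => {
  "SKU1" => {"attributes" => {"instanceType" => "t2.micro", "vcpu" => "1", "memory" => "1 GiB"}},
  "SKU2" => {"attributes" => {"operation" => "Hourly"}}
}}
instance_types_from_offer(offer)
# => {"t2.micro"=>{:vcpu=>"1", :memory=>"1 GiB"}}
```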

Reuse setup files from manageiq

I'm wondering if, instead of having a separate tools/ci/before_install.sh, we can reuse /spec/manageiq/bin/setup. Or perhaps change ManageIQ's tools/ci/before_install.sh to separate out the shared operations, allowing it to be called from this repo's before_install.sh.

AWS_EC2_Instance_DELETE events are not always received

I have a simple test case that will,

  1. Provision an instance on EC2
  2. Delete the instance
  3. Look for the AWS_EC2_Instance_DELETE event on the timeline of the instance

Out of three runs of this test, only one received the AWS_EC2_Instance_DELETE event, and it was not associated with the instance name. When running this manually, I check the DB for the events; from those three test runs, the events received look like:

vmdb_production=# select id, event_type, vm_name, source, timestamp from event_streams where source = 'AMAZON';
 id  |           event_type            |        vm_name         | source |        timestamp        
-----+---------------------------------+------------------------+--------+-------------------------
 567 | AWS_EC2_NetworkInterface_CREATE |                        | AMAZON | 2020-04-27 12:26:03.122
 568 | AWS_EC2_VPC_UPDATE              |                        | AMAZON | 2020-04-27 12:26:03.098
 569 | AWS_EC2_Subnet_UPDATE           |                        | AMAZON | 2020-04-27 12:26:03.175
 570 | AWS_EC2_SecurityGroup_UPDATE    |                        | AMAZON | 2020-04-27 12:26:03.16
 571 | AWS_EC2_Volume_CREATE           |                        | AMAZON | 2020-04-27 12:26:03.143
 572 | AWS_EC2_Instance_CREATE         | test-event-vcqj-jdupuy | AMAZON | 2020-04-27 12:26:03.201
 573 | AWS_EC2_Instance_DELETE         |                        | AMAZON | 2020-04-27 12:28:19.663
 574 | AWS_EC2_SecurityGroup_UPDATE    |                        | AMAZON | 2020-04-27 12:28:19.696
 575 | AWS_EC2_Subnet_UPDATE           |                        | AMAZON | 2020-04-27 12:28:19.713
 576 | AWS_EC2_VPC_UPDATE              |                        | AMAZON | 2020-04-27 12:28:19.679
 583 | AWS_EC2_Volume_CREATE           |                        | AMAZON | 2020-04-27 13:16:44.214
 584 | AWS_EC2_NetworkInterface_CREATE |                        | AMAZON | 2020-04-27 13:16:44.187
 585 | AWS_EC2_Instance_CREATE         | test-event-wwf4-jdupuy | AMAZON | 2020-04-27 13:16:44.274
 586 | AWS_EC2_SecurityGroup_UPDATE    |                        | AMAZON | 2020-04-27 13:16:44.231
 587 | AWS_EC2_VPC_UPDATE              |                        | AMAZON | 2020-04-27 13:16:44.161
 588 | AWS_EC2_Subnet_UPDATE           |                        | AMAZON | 2020-04-27 13:16:44.248
 590 | AWS_EC2_NetworkInterface_CREATE |                        | AMAZON | 2020-04-27 13:26:50.395
 591 | AWS_EC2_Volume_CREATE           |                        | AMAZON | 2020-04-27 13:26:50.371
 592 | AWS_EC2_VPC_UPDATE              |                        | AMAZON | 2020-04-27 13:26:50.347
 593 | AWS_EC2_SecurityGroup_UPDATE    |                        | AMAZON | 2020-04-27 13:26:50.411
 594 | AWS_EC2_Subnet_UPDATE           |                        | AMAZON | 2020-04-27 13:26:50.426
 595 | AWS_EC2_Instance_CREATE         | test-event-wg6c-jdupuy | AMAZON | 2020-04-27 13:26:50.451

Testing was carried out in the downstream build of 5.11.5.2. Since the create events are received, I would also expect the delete events to be received as well.

Map AWS Tags to MIQ Tags

This GitHub issue outlines the steps involved in translating AWS Tags, referred to as labels here, to ManageIQ Tags.

  1. A user will map an AWS label name to a MIQ category through the "Map Tags" tab in the configuration page of the UI. For example:
        AWS Label:        AWS_Tier = Gold
        MIQ Mapping:    AWS_Tier = MIQ_Tier
    At this point:
    a) A row in the Tags table will be created (check Classifications table too), note we call this a "category".
    b) A ContainerLabelTagMapping, or a generic version of this class, will be instantiated with the AWS label name and the Tag ID of the category created in a) above, this can be inspected through the rails console.

  2. An EMS refresh will be kicked off for AWS

  3. During the refresh:
    a) The AWS Labels are parsed for a particular resource
    b) A Tag object is created for the label value
    c) A Tagging object is instantiated and saved to the taggings table, storing the resource ID and the Tag ID, but only if a mapping was created for this label in step 1).

Continuing with the example above, any resource with the AWS label "AWS_Tier" = "Gold" will have a "MIQ_Tier" = "Gold" tag assigned to it, visible in the "My Company Tag" box of the resource Summary page. Any AWS labels without a mapping will be dropped.
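A minimal sketch of step 3, assuming the user-defined mapping from step 1 is available as {label_name => category_name}; the names are illustrative and this is not the actual ContainerLabelTagMapping code. Unmapped labels are dropped, as described above.

```ruby
# Sketch: translate a resource's AWS labels into MIQ tag assignments
# using the user-defined label-name mapping; unmapped labels are dropped.
def mapped_tags(labels, mapping)
  labels.each_with_object({}) do |(label_name, value), tags|
    category = mapping[label_name]
    tags[category] = value if category
  end
end

mapping = {"AWS_Tier" => "MIQ_Tier"}
labels  = {"AWS_Tier" => "Gold", "Team" => "infra"}
mapped_tags(labels, mapping) # => {"MIQ_Tier"=>"Gold"}
```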

AWS VPC Security Groups and Network Ports (Network Interfaces) don't use the Name tag for naming in CFME

Original BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1746864

Description of problem:

How reproducible:
Always

Steps to Reproduce:

  1. Create a Security Group in AWS and set its Name tag to something
  2. Create a Network Interface in AWS and set its Name tag to something

Actual results:
For Security Groups, the name is taken from the Group Name.
For Network Interfaces, the name is taken from the Interface ID.

Expected results:
It should take the name from the Name tag when it is set, the same as for other AWS resources.
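The expected rule can be sketched as a small helper: prefer the Name tag when present and non-empty, otherwise fall back to the resource's own identifier (the group name or interface id). Method and argument names here are illustrative, not the actual parser code.

```ruby
# Sketch: derive a display name from EC2-style tags ({:key, :value} pairs),
# falling back to a resource identifier when no usable Name tag exists.
def display_name(tags, fallback)
  name_tag = tags.find { |t| t[:key] == "Name" }
  value    = name_tag && name_tag[:value].to_s.strip
  value.nil? || value.empty? ? fallback : value
end

display_name([{:key => "Name", :value => "web-eni"}], "eni-0abc123") # => "web-eni"
display_name([], "eni-0abc123")                                     # => "eni-0abc123"
```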

Refresh fails normalizing OS name for image with no image_location

MIQ(ManageIQ::Providers::Amazon::CloudManager::Refresher#refresh) EMS: [AWS-55], id: [10] Refresh failed
[NoMethodError]: undefined method `downcase' for nil:NilClass  Method:[block (2 levels) in <class:LogProxy>]
/app/models/operating_system.rb:76:in `normalize_os_name'
/manageiq-providers-amazon/app/models/manageiq/providers/amazon/inventory/parser/cloud_manager.rb:76:in `image_hardware'
/manageiq-providers-amazon/app/models/manageiq/providers/amazon/inventory/parser/cloud_manager.rb:66:in `block in images'
/manageiq-providers-amazon/app/models/manageiq/providers/amazon/inventory/parser/cloud_manager.rb:47:in `each'
/manageiq-providers-amazon/app/models/manageiq/providers/amazon/inventory/parser/cloud_manager.rb:47:in `images'
/manageiq-providers-amazon/app/models/manageiq/providers/amazon/inventory/parser/cloud_manager.rb:39:in `public_images'
/manageiq-providers-amazon/app/models/manageiq/providers/amazon/inventory/parser/cloud_manager.rb:19:in `parse'
/app/models/manageiq/providers/inventory.rb:42:in `block in parse' 
/app/models/manageiq/providers/inventory.rb:39:in `each'
/app/models/manageiq/providers/inventory.rb:39:in `parse'
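A guard for the failure above could look like the sketch below: an image may have no image_location, so the normalization should only run when a location string is present. The method here is a simplified stand-in for the real OperatingSystem.normalize_os_name call, not the actual implementation.

```ruby
# Sketch: nil-safe guest OS detection from an image location string.
# Returns "unknown" instead of raising when the location is absent.
def normalize_guest_os(image_location)
  return "unknown" if image_location.nil? || image_location.empty?
  image_location.downcase.include?("windows") ? "windows" : "linux"
end

normalize_guest_os(nil)                     # => "unknown"
normalize_guest_os("amzn-ami-hvm-2018.03")  # => "linux"
```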

A better way for building instance types list

Currently we are using this rake task, which also involves pre-rake manual steps to retrieve info from multiple websites.

This procedure has to be done before each MIQ release. In between releases, users can now use this (yet to be released) mechanism to capture new instance types that missed the release train.

A better solution would be an always-available web service that MIQ can retrieve the data from (similar to what Azure has).

What we can do for now is to

  1. contribute to https://github.com/powdahound/ec2instances.info to enhance its json output to contain what we need

  2. Adapt and move the logic in this rake task to the ManageIQ::Providers::Amazon::CloudManager::RefreshParser.get_flavors and/or ManageIQ::Providers::Amazon::InstanceTypes.instance_types

  3. (MIQ would need access to http://www.ec2instances.info/)

I have 2 PRs moving in that direction.

Make subscribing to AWS event services configurable

Define a blacklist of AWS services we do not wish to capture events from.
Pivotal story: https://www.pivotaltracker.com/story/show/138951121

A new setting would be added to config/settings.yml, something like this:

  :ems_amazon:
    :disabled_event_services: ["AWS_Config", "CloudTrail", "EC2", "ELB", "EBS"]

Events from a service included in :disabled_event_services will not be collected on AWS.

ManageIQ currently receives events from the following three AWS services:

  • AWS_Config,
  • CloudTrail,
  • CloudWatch.

The process for enabling or disabling event collection from each is not the same; AWS_Config and CloudTrail have an ON/OFF switch that can be flipped. CloudWatch is more complicated; it contains a set of rules, where a rule applies to a single service (e.g. EC2, ELB, EBS). A rule will be disabled if the service it applies to has been marked as disabled in disabled_event_services. For example:

  :ems_amazon:
    :disabled_event_services: ["EC2", "ELB"]

CloudWatch event rules that apply to the EC2 and ELB services will be disabled on AWS preventing events for ELB and EC2 from being emitted, all other CloudWatch event rules will remain enabled.
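The rule handling described above can be sketched as a simple filter; the rule hashes below are made up for the example and do not reflect the actual CloudWatch rule objects.

```ruby
# Sketch: keep only the CloudWatch event rules whose source service is not
# listed in :disabled_event_services (case-insensitive match).
def enabled_rules(rules, disabled_services)
  disabled = disabled_services.map(&:downcase)
  rules.reject { |rule| disabled.include?(rule[:service].downcase) }
end

rules = [
  {:name => "ec2-state-change", :service => "EC2"},
  {:name => "ebs-snapshot",     :service => "EBS"}
]
enabled_rules(rules, ["EC2", "ELB"]) # => [{:name=>"ebs-snapshot", :service=>"EBS"}]
```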

Consideration(s):

  • Maybe not directly relevant to this GitHub issue, but during CloudTrail setup a trail can be applied to all regions; see the screenshot below. In ManageIQ, an EMS is tied to a specific region, which means each EMS will receive the same CloudTrail events. An excerpt from the AWS documentation:

"When you configure notifications for a trail that applies to all regions, notifications from all regions are sent to the Amazon SNS topic that you specify. If you have one or more region-specific trails, you must create a separate topic for each region and subscribe to each individually."

[Screenshot: CloudTrail trail configuration, 2017-02-03]
