
arc's Introduction

Cheers! 🍻


arc's Issues

Support for additional S3 upload options and/or built-in support for MIME types

S3 defaults to application/octet-stream for any file uploaded without a Content-Type header. As a result, linking to any file uploaded via Arc causes the browser to download the file instead of displaying it inline (PDFs, JPGs, etc.). Other upload libraries (CarrierWave, for example) set the MIME type based on the filename before uploading. Plug.MIME has path/1, which returns the MIME type for a given filename (though using it would introduce a dependency on Plug).

As an alternative, or maybe in addition, it would be great to add another overridable function, similar to storage_dir/2, that returns additional S3 PUT options. Right now Arc.Storage.S3.put/3 hardcodes the options list to [acl: acl], but it could become something along the lines of [acl: acl] ++ definition.upload_options(version, {file, scope}). This way, even if MIME support weren't built in, the end user could still examine the filename and provide a content_type option if necessary.
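For illustration, a sketch of how the proposed callback might look in a definition (upload_options/2 is hypothetical until such a change lands; Plug.MIME.path/1 exists today):

defmodule MyApp.Attachment do
  use Arc.Definition

  # hypothetical callback from the proposal above: extra S3 PUT options
  def upload_options(_version, {file, _scope}) do
    [content_type: Plug.MIME.path(file.file_name)]
  end
end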

Thoughts? I'm happy to submit a pull request for either or both of these but wanted to check in first. Thanks!

More secure file validation?

Hi

How can I safely validate avatars uploaded by users?

It seems to me that a user could easily just rename a virus.exe to avatar.png and pass the default validation.

Any ideas?
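One common approach (a minimal sketch of something Arc does not do for you): check the file's magic bytes instead of trusting the extension. For example, in a definition that only accepts PNGs:

# PNG files start with this eight-byte signature; reading it from the
# upload's temp file catches a renamed virus.exe that an extension
# whitelist would let through
@png_magic <<0x89, "PNG", 0x0D, 0x0A, 0x1A, 0x0A>>

def validate({file, _scope}) do
  case File.open(file.path, [:read, :binary], &IO.binread(&1, 8)) do
    {:ok, @png_magic} -> true
    _ -> false
  end
end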

Absolute links for local urls

What do you think about having absolute links when generating urls for the local storage?

Today the URLs are relative (no forward slash at the start), which means that if I try to use one on a page that is not at the website root, the file will not be found.

The change is very simple: just add a / at the start of the URL.

I can't think of any problem it may cause; it could also be a configuration option if you don't want to change the current behaviour.

Removing unused assets

I noticed that assets I uploaded to S3 persist when I update my 'avatar'.

I don't know if it's an Amazon thing or something I should/can handle.

Arc.Storage.S3 changeset is always invalid

Hi,

I'm trying to upload avatars. My setup is pretty standard given the examples, but when I try to upload an image with Storage.S3 I always get is_invalid; if I change it to Arc.Storage.S3 it works fine. I created a gist with all the relevant files/snippets (the controller action, the user model, and the uploader). I'm also using ecto 2.0.2:

https://gist.github.com/kainlite/58202fbffbe948ad240cfd3967e75c71

config.exs
config :arc,
  bucket: "myuploads",
  virtual_host: true

config :ex_aws, :httpoison_opts,
  recv_timeout: 60_000,
  hackney: [recv_timeout: 60_000, pool: false]

config :myapp, :ex_aws,
  debug_requests: true,
  access_key_id: System.get_env("AWS_ACCESS_KEY_ID"),
  secret_access_key: System.get_env("AWS_SECRET_ACCESS_KEY")
  # region: ["us-west-2"]
  # s3: [
  #   scheme: "https://",
  #   host: "s3-us-west-2.amazonaws.com",
  #   region: "us-west-2"
  # ]

mix.exs

  def application do
    [mod: {MyApp, []},
     applications: [:phoenix, :phoenix_html, :cowboy, :logger, :gettext,
                    :phoenix_ecto, :postgrex, :comeonin, :canary, :canada,
                    :hound, :ex_machina, :mailgun, :guardian,
                    :ex_aws, :httpoison, :arc, :arc_ecto
                  ]]
  end

  defp deps do
    [{:phoenix, "~> 1.1.4"},
     {:postgrex, "~> 0.11.2", [hex: :postgrex, optional: true]},
     {:phoenix_ecto, "~> 3.0.0", override: true},
     {:ecto, "~> 2.0.2", override: true},
     {:phoenix_html, "~> 2.6"},
     {:phoenix_live_reload, "~> 1.0", only: :dev},
     {:comeonin, "~> 2.0"},
     {:guardian, "~> 0.10.1"},
     {:gettext, "~> 0.9"},
     {:mailgun, "~> 0.1.2"},
     {:canary, "~> 0.14.1", override: true},
     {:canada, github: "jarednorman/canada", override: true},
     {:credo, "~> 0.1.6", only: [:dev, :test]},
     {:hound, "~> 1.0"},
     {:mix_test_watch, "~> 0.2", only: :dev},
     {:ex_machina, "~> 0.6.1"},
     {:exrm, "~> 1.0"},
     {:cowboy, "~> 1.0"},
     {:arc, "~> 0.5.3"},
     {:arc_ecto, "~> 0.4.2"},
     {:ex_aws, "~> 0.4.10"},
     {:httpoison, "~> 0.7"},
     {:poison, "~> 1.2"}]
  end

Thanks.

Does arc need to be added to the applications list?

Hi!

When I compile for production I get this:

[...]
Building release with MIX_ENV=prod.

You have dependencies (direct/transitive) which are not in :applications!

The following apps should be added to :applications in mix.exs:

        arc => arc is missing from my_app

Continue anyway? Your release may not work as expected if these dependencies are required! [Yn]: 

Is it necessary to add arc to the applications list as well? If so, we should update the readme :)

If not, can I ignore this warning somehow?
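For reference, a minimal sketch of what the release tool is asking for (the neighbouring entries are just examples):

def application do
  [mod: {MyApp, []},
   applications: [:logger, :phoenix, :ex_aws, :httpoison, :arc, :arc_ecto]]
end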

scope.id nil, upload not nested.

When creating a new user, I can upload a profile image, but since the user is being created at the same time as the image is being uploaded, my scope.id is nil. Do I need to use a UUID as the user.id? Is there something else I'm missing?

  def storage_dir(version, {file, scope}) do
    "uploads/user/profile_photo/#{scope.id}"
  end

Because scope.id is nil, all my uploads end up in uploads/user/profile_photo/ instead of uploads/user/profile_photo/#{user.id}.
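A common workaround (a sketch; avatar_changeset/2 is an assumed second changeset that casts only the attachment) is to persist the record first so that scope.id exists, then attach the upload:

with {:ok, user} <- %User{} |> User.changeset(params) |> Repo.insert(),
     {:ok, user} <- user |> User.avatar_changeset(params) |> Repo.update() do
  {:ok, user}
end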

Erlang list_to_integer error while uploading

Hi, I am using arc and arc_ecto.

I have included the bucket name like

config :arc,
  bucket: "arn:aws:s3:::bucket-name/"

And I have ex_aws: configuration

config :ex_aws,
  debug_requests: true,
  access_key_id: [{:system, "AWS_ACCESS_KEY_ID"}],
  secret_access_key: [{:system, "AWS_SECRET_ACCESS_KEY"}],
  region: "us-west-2"

config :ex_aws, :httpoison_opts,
    recv_timeout: 60_000,
    hackney: [recv_timeout: 60_000, pool: false]

Now when I try to upload a file, with phoenix, I have an error:

[debug] Processing by Expense.TransactionController.create/2
  Parameters: %{"_csrf_token" => "MWtWVml4EAADHE0mB2V+AEoQEQJz7oN3K68xg9QQ==", "_utf8" => "βœ“", "account_id" => "1", "transaction" => %{"amount" => "1234", "date" => %{"day" => "1", "month" => "6", "year" => "2016"}, "description" => "", "receipt" => %Plug.Upload{content_type: "image/png", filename: "Screenshot from 2016-05-26 16:17:21.png", path: "/tmp/plug-1465/multipart-534432-578580-2"}}}
  Pipelines: [:browser]
[debug] SELECT u0.`id`, u0.`name`, u0.`email`, u0.`encrypted_password`, u0.`verified_at`, u0.`verified`, u0.`role`, u0.`inserted_at`, u0.`updated_at` FROM `users` AS u0 WHERE (u0.`id` = ?) [1] OK query=0.7ms
[debug] SELECT a0.`id`, a0.`name`, a0.`activated`, a0.`user_id`, a0.`inserted_at`, a0.`updated_at` FROM `accounts` AS a0 WHERE (a0.`id` = ?) [1] OK query=0.9ms queue=0.1ms
[debug] SELECT t0.`id`, t0.`amount`, t0.`date`, t0.`description`, t0.`approved_at`, t0.`deleted_at`, t0.`receipt`, t0.`approved_by`, t0.`deleted_by`, t0.`user_id`, t0.`account_id`, t0.`inserted_at`, t0.`updated_at` FROM `transactions` AS t0 WHERE (t0.`account_id` = ?) [1] OK query=1.9ms queue=0.1ms
[debug] SELECT a0.`id`, a0.`name`, a0.`activated`, a0.`user_id`, a0.`inserted_at`, a0.`updated_at` FROM `accounts` AS a0 WHERE (a0.`id` = ?) [1] OK query=1.8ms queue=0.1ms
[debug] Request URL: "https://arn:aws:s3:::memento-is-dev/.s3-us-west-2.amazonaws.com/uploads/Screenshot from 2016-05-26 16:17:21.png"
[debug] Request HEADERS: [{"Authorization", "AWS4-HMAC Credential=accesskey/20160610/us-west-2/s3/aws4_request,SignedHeaders=content-length;host;x-amz-acl;x-amz-content-sha256;x-amz-date,Signature=9e9ced01075"}, {"host", "arn"}, {"x-amz-date", "20160610T045352Z"}, {"content-length", 75189}, {"x-amz-acl", "private"}, {"x-amz-content-sha256", "2d969d852e270a4c0aa2a73"}]
[debug] Request BODY: <<137, 80, 78, 71, 13, 10, 26, 10, 0, 0, 0, 13, 73, 72, 68, 82, 0, 0, 5, 86, 0, 0, 3, 0, 8, 6, 0, 0, 0, 207, 62, 60, 194, 0, 0, 0, 4, 115, 66, 73, 84, 8, 8, 8, 8, 124, 8, 100, 136, 0, ...>>
[error] Task #PID<0.734.0> started from #PID<0.721.0> terminating
** (ArgumentError) argument error
    :erlang.list_to_integer('aws:s3:::test-bucket/')
    (hackney) src/hackney_url.erl:196: :hackney_url.parse_netloc/2
    (hackney) src/hackney.erl:341: :hackney.request/5
    (httpoison) lib/httpoison/base.ex:396: HTTPoison.Base.request/9
    (ex_aws) lib/ex_aws/request/httpoison.ex:35: ExAws.Request.HTTPoison.request/4
    (ex_aws) lib/ex_aws/request.ex:38: ExAws.Request.request_and_retry/7
    lib/arc/storage/s3.ex:14: Arc.Storage.S3.put/3
    (elixir) lib/task/supervised.ex:89: Task.Supervised.do_apply/2
    (elixir) lib/task/supervised.ex:40: Task.Supervised.reply/5
    (stdlib) proc_lib.erl:240: :proc_lib.init_p_do_apply/3
Function: #Function<4.37961548/0 in Arc.Actions.Store.async_put_version/3>
    Args: []
[error] Ranch protocol #PID<0.721.0> (:cowboy_protocol) of listener Expense.Endpoint.HTTP terminated
** (exit) an exception was raised:
    ** (ArgumentError) argument error
        :erlang.list_to_integer('aws:s3:::test-bucket')
        (hackney) src/hackney_url.erl:196: :hackney_url.parse_netloc/2
        (hackney) src/hackney.erl:341: :hackney.request/5
        (httpoison) lib/httpoison/base.ex:396: HTTPoison.Base.request/9
        (ex_aws) lib/ex_aws/request/httpoison.ex:35: ExAws.Request.HTTPoison.request/4
        (ex_aws) lib/ex_aws/request.ex:38: ExAws.Request.request_and_retry/7
        lib/arc/storage/s3.ex:14: Arc.Storage.S3.put/3
        (elixir) lib/task/supervised.ex:89: Task.Supervised.do_apply/2
        (elixir) lib/task/supervised.ex:40: Task.Supervised.reply/5
        (stdlib) proc_lib.erl:240: :proc_lib.init_p_do_apply/3
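The crash appears to come from the bucket being configured as an ARN: the ARN ends up in the request URL, and hackney parses everything after the first colon of the host as a port number, hence the list_to_integer failure. A guess at the fix:

# use the plain bucket name, not the ARN
config :arc,
  bucket: "bucket-name"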

Moving from erlcloud to ex_aws ?

I think it may be worth moving from erlcloud to ex_aws for accessing S3 services: the latter gives far more useful responses, is easier to configure, and has some other advantages. You can find it here. What do you think?

Authentication issue.

I get this error:

 {:aws_error, {:http_error, 400, 'Bad Request', "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error>
<Code>InvalidRequest</Code><Message>The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.</Message>
<RequestId>C27DE9ADAC91C66F</RequestId>

As you can see, there is a problem with the authentication. erlcloud is now migrating to what they refer to as sign_v4. You can see the issue here: alertlogic/erlcloud#11

Some of the modules have been migrated, including s3 (alertlogic/erlcloud@12ca9a9).

The problem might be that my bucket is in a region (Frankfurt) that only supports Signature Version 4:
http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html

Generate random file name

Hi!

My model can have multiple images, so I need the filenames to be unique. Is there any way to do this? I have tried to do it in my changeset function, but I can't make it work since cast_attachments uploads the image directly.

Any ideas?
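One approach (a sketch; scope.uuid is an assumed field, populated e.g. with Ecto.UUID.generate/0 before the attachment is cast, so that filename/2 returns the same value at store and url time):

def filename(version, {_file, scope}) do
  "#{scope.uuid}-#{version}"
end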

Namespaced Uploads

I'm building out a server that supports multiple sites. Each site is identified by its domain. Therefore, I'd like to store files for each site in its own folder. Is there a way to do that with Arc?

Thanks,
Lee
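For illustration, a sketch of per-site folders via storage_dir/2 (the site_domain field on the scope is an assumption):

def storage_dir(_version, {_file, scope}) do
  # one folder per site, keyed by its domain
  "uploads/#{scope.site_domain}"
end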

Transformations can't handle complex command args

I'm building up an ffmpeg command that sets metadata and I can't pass this transform in its current state:

fn(input, output) -> ~s(-f mp3 -i #{input} -metadata title="The Title" -f mp3 #{output}) end

I believe the problem arises in Arc.Transformations.Convert where it always takes the args variable and sends it to ~w() before passing it to System.cmd. The string in this case is:

"-f mp3 -i in -metadata title=\"The Title\" -f mp3 out"

Calling ~w() on that produces:

["-f", "mp3", "-i", "in", "-metadata", "title=\"The", "Title\"", "-f", "mp3", "out"]

ffmpeg chokes on this because it improperly splits the title in the metadata.

A possible solution: check the type of args before sending it to System.cmd. If it's a string, pass it through ~w(); if it's already a list, leave it alone. This would allow me to build up the argument list inside my function and return it instead of a string.
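For illustration, a sketch of what a transform could look like if that change landed (hypothetical until then):

def transform(:tagged, _) do
  {:ffmpeg,
   fn input, output ->
     # returning a list sidesteps the ~w split, so the title keeps its spaces
     ["-f", "mp3", "-i", input, "-metadata", "title=The Title", "-f", "mp3", output]
   end, :mp3}
end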

If that solution works for you, I'm happy to PR. Please let me know, thanks.

Proposal: add `get` or `fetch` to storage

Currently the storage expects a url function that generates the URL pointing to the uploaded file. However, there is no way to fetch an asset back from the storage location once it has been uploaded.

An example of when this would be convenient is proxying requests: for instance, doing a permission check before allowing a user to download a file. You can do this on S3 with signed URLs, but sometimes that is not desirable; perhaps you wish to serve the content over a different protocol (such as a [web]socket). In this case it would be nice to have a convenient function to read the asset into memory from the storage location.
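For illustration, a hypothetical shape for such a function (nothing like this exists in Arc today):

# mirrors url/2: read the stored version back into memory
{:ok, binary} = MyApp.Avatar.get({user.avatar, user}, :original)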

If this is something that you think is a good fit for the project then I can take a look at implementing it.

Delay transformations

Would it be possible to store the original upload, but delay transformations until later?

For instance, video encoding takes forever, especially if the clip is of any significant length. It would make sense to offload this task to a pool of workers, which would handle encoding later on, once the upload (to S3, etc) has completed.

As I understand it, Arc currently runs transformations automatically. For images, that's often fine. But larger files could benefit from manual control.

SignatureDoesNotMatch error on Arc.Storage.S3

I'm just starting with Elixir and Phoenix and trying to use Arc (arc_ecto, actually) to upload files to S3.

I have the access keys set up in the config file, but when trying to upload I get this:

[error] Ranch protocol #PID<0.341.0> (:cowboy_protocol) of 
                listener X.Endpoint.HTTP terminated
** (exit) an exception was raised:
    ** (ErlangError) erlang error: {:aws_error, 
              {:http_error, 403, 'Forbidden', 
              "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n
              <Error><Code>SignatureDoesNotMatch</Code>
              <Message>The request signature we calculated does not match 
              the signature you provided. Check your key and signing method.
              </Message>
              <AWSAccessKeyId>XXXX</AWSAccessKeyId>
              <StringToSign>PUT\n+NFGtaSc2qJy2E/IjvTHHA==\n\n
              Tue, 15 Sep 2015 17:45:58 GMT\nx-amz-acl:private\n
              /lukla-dev/uploads/profile.jpg</StringToSign>
              <SignatureProvided>VqTEbMI37SMKvavENE0eP+mQF2I=</SignatureProvided>
              <StringToSignBytes>50 55 54 0a 2b 4e 46 47 74 61 53 63 32 71 4a 79 32 45 
              2f 49 6a 76 54 48 48 41 3d 3d 0a 0a 54 75 65 2c 20 31 35 20 53 65 70 20 32 
              30 31 35 20 31 37 3a 34 35 3a 35 38 20 47 4d 54 0a 78 2d 61 6d 7a 2d 61 63 
              6c 3a 70 72 69 76 61 74 65 0a 2f 6c 75 6b 6c 61 2d 64 65 76 2f 75 70 6c 
              6f 61 64 73 2f 70 72 6f 66 69 6c 65 2e 6a 70 67</StringToSignBytes>
              <RequestId>A8A1CB9362F3DB59</RequestId>
              <HostId>46tQHK/X1Vl26jl+HDQr6e00/
              QqTqHxKtEy4Aq2FlFq686KYTw+0xPXzCgp3npKyVLljaufO3uE=</HostId>
              </Error>"
        }}

        (erlcloud) src/erlcloud_s3.erl:911: :erlcloud_s3.s3_request/8
        (erlcloud) src/erlcloud_s3.erl:611: :erlcloud_s3.put_object/6
        lib/arc/storage/s3.ex:9: Arc.Storage.S3.put/3
        (elixir) lib/task/supervised.ex:74: Task.Supervised.do_apply/2
        (elixir) lib/task/supervised.ex:19: Task.Supervised.async/3
        (stdlib) proc_lib.erl:239: :proc_lib.init_p_do_apply/3

I tried to further investigate, but I'm stuck.

The strange thing to me is that Arc.Storage.S3.put/3 calls:

:erlcloud_s3.put_object(bucket, s3_key, binary, [acl: acl], erlcloud_config)

which looks to me like a put_object/5,

yet the stack trace says :erlcloud_s3.put_object/6 was called, which expects a list of HTTP headers as the 5th parameter. (Presumably put_object/5 delegates to put_object/6 with an empty header list, which would explain the arity in the trace.)

I'm confused πŸ˜•

Not sure if it's a bug or a misconfiguration on my part.

Also, my mix.lock is

%{"arc": {:hex, :arc, "0.1.2"},
  "arc_ecto": {:hex, :arc_ecto, "0.2.0"},
  "cowboy": {:hex, :cowboy, "1.0.3"},
  "cowlib": {:hex, :cowlib, "1.0.1"},
  "decimal": {:hex, :decimal, "1.1.0"},
  "ecto": {:hex, :ecto, "1.0.2"},
  "erlcloud": {:hex, :erlcloud, "0.9.2"},
  "fs": {:hex, :fs, "0.9.2"},
  "jsx": {:hex, :jsx, "2.1.1"},
  "lhttpc": {:hex, :lhttpc, "1.3.0"},
  "meck": {:hex, :meck, "0.8.3"},
  "mock": {:hex, :mock, "0.1.1"},
  "phoenix": {:hex, :phoenix, "1.0.2"},
  "phoenix_ecto": {:hex, :phoenix_ecto, "1.2.0"},
  "phoenix_html": {:hex, :phoenix_html, "2.2.0"},
  "phoenix_live_reload": {:hex, :phoenix_live_reload, "1.0.0"},
  "plug": {:hex, :plug, "1.0.0"},
  "poison": {:hex, :poison, "1.5.0"},
  "poolboy": {:hex, :poolboy, "1.5.1"},
  "postgrex": {:hex, :postgrex, "0.9.1"},
  "ranch": {:hex, :ranch, "1.1.0"}}

:http_error, 307, 'Temporary Redirect'

I am getting the error below when trying to store files to S3.

iex(1)> Myapp.SiteImageUploader.store("path/to/image.jpg")
** (EXIT from #PID<0.703.0>) an exception was raised:
    ** (ErlangError) erlang error: {:aws_error, {:http_error, 307, 'Temporary Redirect', "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>TemporaryRedirect</Code><Message>Please re-send this request to the specified temporary endpoint. Continue to use the original request endpoint for future requests.</Message><Bucket>mybucket-dev</Bucket><Endpoint>mybucket-dev.s3-us-west-2.amazonaws.com</Endpoint><RequestId>801142CBF4589FAC</RequestId><HostId>RJuvEnRxWxmQMBMCLA7Bn1ie+znHQeWhunRFkjk1sjMsARTDeu92N0EmeU8xTOAP2gMR5ydLP1Q=</HostId></Error>"}}
        (erlcloud) src/erlcloud_s3.erl:1022: :erlcloud_s3.s3_request/8
        (erlcloud) src/erlcloud_s3.erl:682: :erlcloud_s3.put_object/6
        lib/arc/storage/s3.ex:9: Arc.Storage.S3.put/3
        (elixir) lib/task/supervised.ex:74: Task.Supervised.do_apply/2
        (elixir) lib/task/supervised.ex:19: Task.Supervised.async/3
        (stdlib) proc_lib.erl:240: :proc_lib.init_p_do_apply/3

My hunch is that it's related to not being able to specify the region? My bucket is in the us-west-2 region.
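If so, a guess at the configuration (mirroring the commented-out block in an earlier issue above; key names assume the ex_aws 0.x config shape):

config :ex_aws,
  s3: [
    scheme: "https://",
    host: "s3-us-west-2.amazonaws.com",
    region: "us-west-2"
  ]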

Cannot upload files bigger than 700KB to S3

Every time I upload a file bigger than 700KB, I get this error:

[error] #PID<0.550.0> running AppscoastFm.Endpoint terminated
Server: localhost:4000 (http)
Request: POST /episodes
** (exit) exited in: Task.await(%Task{owner: #PID<0.550.0>, pid: #PID<0.553.0>, ref: #Reference<0.0.1.14482>}, 10000)
    ** (EXIT) time out

I tried this: config :arc, version_timeout: 100_000_000 # milliseconds, and:

  plug Plug.Parsers,
    parsers: [:urlencoded, :multipart, :json],
    pass: ["*/*"],
    json_decoder: Poison,
    length: 100_000_000

Please help.

Upload progress

Hi,

It would be very nice if the async process could report a progress status, either in the form of 15324/2123456 bytes or as a percentage.

I've been looking at both hackney and httpotion but could not find any hooks for this myself.
I hope you might have more insight into it :)

Gerard

error in lib/arc/storage/s3.ex:69: Arc.Storage.S3.bucket/0

I get this error:

[error] Task #PID<0.487.0> started from #PID<0.463.0> terminating
** (MatchError) no match of right hand side value: :error
    lib/arc/storage/s3.ex:69: Arc.Storage.S3.bucket/0
    lib/arc/storage/s3.ex:14: Arc.Storage.S3.put/3
    (elixir) lib/task/supervised.ex:89: Task.Supervised.do_apply/2
    (elixir) lib/task/supervised.ex:40: Task.Supervised.reply/5
    (stdlib) proc_lib.erl:240: :proc_lib.init_p_do_apply/3
Function: #Function<4.60738616/0 in Arc.Actions.Store.async_put_version/3>
    Args: []

but I didn't use S3 storage.

avatar params:

%{avatar: %Plug.Upload{content_type: "image/png", filename: "Klogo副本.png",
   path: "/var/folders/pn/jhz7bfx944b0ftdxrxtlsv7h0000gn/T//plug-1459/multipart-849582-337479-2"},
  id: 12}

user params:

%ChinaPhoneix.User{__meta__: #Ecto.Schema.Metadata<:loaded>, admin: false,
 age: nil, avatar: nil, comments: [], email: "[email protected]", id: 12,
 inserted_at: #Ecto.DateTime<2016-04-05T08:54:52Z>, name: "ssj4429108",
 password: nil, password_confirmation: nil,
 password_digest: "$2b$12$FOGrltWbsFXEMu24RXf1VOriDQaB3fXPS3WfWi5NzHflvDOYma/kC",
 posts: [], score: 1, updated_at: #Ecto.DateTime<2016-04-05T09:00:50Z>}

The error occurs in:

          changeset = User.update_changeset(user, params)
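The MatchError in Arc.Storage.S3.bucket/0 suggests Arc fell back to its default S3 backend without a :bucket configured. If local storage is intended, a guess at the fix:

# select the local backend explicitly; S3 is Arc's default
config :arc, storage: Arc.Storage.Local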

Transformation not working (file not found)

I'm trying to do a basic transformation, but I'm getting the following error:

[error] Task #PID<0.797.0> started from #PID<0.793.0> terminating
** (stop) :enoent
    (elixir) lib/system.ex:435: System.cmd("convert", ["/var/folders/2g/b9yxmq5d4bzc1yrxpwpgtdth0000gn/T//plug-1451/multipart-778711-631234-1", "-strip", "-thumbnail", "200x200", "-format", "png", "/var/folders/2g/b9yxmq5d4bzc1yrxpwpgtdth0000gn/T/CXJPJE5N3S7NFJZCIVL67NTHAXACTWRD"], [stderr_to_stdout: true])
    lib/arc/transformations/convert.ex:5: Arc.Transformations.Convert.apply/2
    lib/arc/actions/store.ex:50: Arc.Actions.Store.put_version/3
    (elixir) lib/task/supervised.ex:74: Task.Supervised.do_apply/2
    (elixir) lib/task/supervised.ex:19: Task.Supervised.async/3
    (stdlib) proc_lib.erl:240: :proc_lib.init_p_do_apply/3
Function: #Function<2.27415194/0 in Arc.Actions.Store.async_put_version/3>
    Args: []
[error] Ranch protocol #PID<0.793.0> (:cowboy_protocol) of listener Bookroo.Endpoint.HTTP terminated
** (exit) an exception was raised:
    ** (ErlangError) erlang error: :enoent
        (elixir) lib/system.ex:435: System.cmd("convert", ["/var/folders/2g/b9yxmq5d4bzc1yrxpwpgtdth0000gn/T//plug-1451/multipart-778711-631234-1", "-strip", "-thumbnail", "200x200", "-format", "png", "/var/folders/2g/b9yxmq5d4bzc1yrxpwpgtdth0000gn/T/CXJPJE5N3S7NFJZCIVL67NTHAXACTWRD"], [stderr_to_stdout: true])
        lib/arc/transformations/convert.ex:5: Arc.Transformations.Convert.apply/2
        lib/arc/actions/store.ex:50: Arc.Actions.Store.put_version/3
        (elixir) lib/task/supervised.ex:74: Task.Supervised.do_apply/2
        (elixir) lib/task/supervised.ex:19: Task.Supervised.async/3
        (stdlib) proc_lib.erl:240: :proc_lib.init_p_do_apply/3

It seems like this has to do with a missing file (though :enoent from System.cmd can also mean the convert executable itself wasn't found, i.e. ImageMagick isn't installed or not on $PATH). This is my definition file:

defmodule Bookroo.BookImage do
  use Arc.Definition
  use Arc.Ecto.Definition

  # To add a thumbnail version:
  @versions [:original, :thumb]

  @extension_whitelist ~w(.jpg .jpeg .gif .png)

  def validate({file, _}) do
    file_extension = file.file_name |> Path.extname |> String.downcase
    Enum.member?(@extension_whitelist, file_extension)
  end

  def transform(:thumb, _) do
    {:convert, "-strip -thumbnail 200x200 -format png"}
  end

  # Override the persisted filenames:
  def filename(version, _) do
    version
  end

  # Override the storage directory:
  def storage_dir(version, {_, scope}) do
    "uploads/books/#{scope.uuid}/"
  end

  # Provide a default URL if there hasn't been a file uploaded
  def default_url(:thumb, _) do
    "http://placehold.it/200x200"
  end
end

Any thoughts? Could this have something to do with permissions?

Attaching multiple images to Arc.Ecto model

Arc allows me to upload multiple images, but I need to access those images through the arc_ecto model. As explained in the tutorial, I added one field ("avatar") to my user model, as shown below:

schema "users" do
    field :name, :string
    field :age, :integer
    field :gender, :string
    field :user_name, :string
    field :email, :string
    field :crypted_password, :string
    field :password, :string, virtual: true
    field :avatar, LoginService.Avatar.Type
    field :token, :string, virtual: true
    timestamps
  end

Field "avatar" allows me to access image associated with the user model and I can access the avatar path by with following method:
Avatar.url({user.avatar, user})

but I can't find a way to associate multiple images with the user so that I can access them with the Avatar.url method, the same way I accessed the avatar image.
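One approach (a sketch; the schema and module names are assumptions): move the attachment onto a separate schema, so each user can have many rows, each carrying its own image.

# in an Ecto schema module using Arc.Ecto.Schema
schema "user_images" do
  field :image, LoginService.Avatar.Type
  belongs_to :user, LoginService.User
  timestamps
end

Each image would then be reachable via Avatar.url({user_image.image, user_image}).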

Resize animated gif

I tried resizing an animated GIF, but it messes up the animation. I found out that there are two steps required to resize an animated GIF.

Example

convert do.gif -coalesce temporary.gif
convert -size <original size> temporary.gif -resize 24x24 smaller.gif

Is there a way to get a resized version of an animated GIF with arc?
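Possibly by folding the -coalesce step into a single transformation (a sketch; it contains no quoted arguments, so arc's ~w splitting should be safe):

def transform(:small, _) do
  # -coalesce flattens the frame offsets first so -resize keeps the animation
  {:convert, "-coalesce -resize 24x24", :gif}
end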

Doesn't work with most recent version of Phoenix?

Hello, I'm trying to use your library. I set up everything as in the README, but it keeps giving this error:

== Compilation error on file web/uploaders/attachment.ex ==
** (CompileError) web/uploaders/attachment.ex:2: module Arc.Definition is not loaded and could not be found
(elixir) expanding macro: Kernel.use/1
web/uploaders/attachment.ex:2: MyApp.Attachment (module)
(elixir) lib/kernel/parallel_compiler.ex:100: anonymous fn/4 in Kernel.ParallelCompiler.spawn_compilers/8

I've got the use Arc.Definition macro in place, and I have mix.exs set up correctly as well as config.exs (I think).

How to use a transformer which doesn't accept an output file name as parameter?

Arc.Transformations.Convert expects the program to leave a temporarily-named file in the file system, but soffice (LibreOffice's converter), for example, doesn't take an output path option (only --outdir).

With the following configuration

  def transform(:html, _) do
    {
      :soffice,
      &args/2,
      :html
    }
  end

  def args(input, _output) do
    " --headless --convert-to html #{input} "
  end

My application throws:

** (File.CopyError) could not copy from /var/folders/g7/dtxf2tc57z71whsmx20slp0h0000gn/T/YC7IUQLEUHTWUPUF77L67YFIE6N22MZA to uploads/documents/html-609W56th.xlsx.html: no such file or directory

If I set --outdir #{System.tmp_dir} it still doesn't know what file name to look for.

How can we work around this issue?

Thanks for your work on this library! :)
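One possible workaround (a sketch; soffice_to is a hypothetical wrapper script on $PATH that runs soffice into an --outdir and then moves the produced file to its second argument):

def transform(:html, _) do
  # soffice_to <input> <output>, roughly:
  #   soffice --headless --convert-to html --outdir "$(dirname "$2")" "$1"
  #   mv "$(dirname "$2")/$(basename "${1%.*}").html" "$2"
  {:soffice_to, fn input, output -> "#{input} #{output}" end, :html}
end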

File fingerprint

Is there a way to get the fingerprint of the uploaded file? I want to append the fingerprint to the filename.
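One way (a sketch): compute the hash before storing and keep it on the scope, so that filename/2 returns the same value at store and url time. scope.fingerprint is an assumed field; :crypto ships with Erlang/OTP.

# e.g. in the changeset, before the attachment is cast
hash = :crypto.hash(:sha256, File.read!(upload.path)) |> Base.encode16(case: :lower)

def filename(version, {file, scope}) do
  "#{scope.fingerprint}-#{version}-#{file.file_name}"
end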

Getting a "SignatureDoesNotMatch" when trying to access files.

Uploading works fine, but I get a "SignatureDoesNotMatch" error when trying to access signed files.

Here is my config:

mix file:

     applications: [:phoenix, :phoenix_html, :cowboy, :logger, :gettext,
                    :phoenix_ecto, :postgrex, :ex_aws, :httpoison]]

deps:

    [{:phoenix, "~> 1.1"},
     {:phoenix_ecto, "~> 2.0"},
     {:postgrex, ">= 0.0.0"},
     {:phoenix_html, "~> 2.3"},
     {:phoenix_live_reload, "~> 1.0", only: :dev},
     {:cowboy, "~> 1.0"},
     {:gettext, "~> 0.9"},
     {:arc,  "~> 0.2.2"},
     {:arc_ecto, github: "stavro/arc_ecto"},
     {:ex_aws, "~> 0.4.10"},
     {:httpoison, "~> 0.7"}]

dev config file:

config :arc,
  bucket: "verktyget-development"

import_config "dev.secret.exs"

In dev.secret.exs:

config :ex_aws,
  access_key_id: "KEY",
  secret_access_key: "SECRET"

My attachment module:

defmodule MyApp.Image do
  use Arc.Definition

  # Include ecto support (requires package arc_ecto installed):
  use Arc.Ecto.Definition

  @versions [:original]

  # To add a thumbnail version:
  # @versions [:original, :thumb]

  # Whitelist file extensions:
  def validate({file, _}) do
    ~w(.jpg .jpeg .gif .png) |> Enum.member?(Path.extname(file.file_name))
  end

  # Define a thumbnail transformation:
  # def transform(:thumb, _) do
  #   {:convert, "-strip -thumbnail 250x250^ -gravity center -extent 250x250 -format png"}
  # end

  # Override the persisted filenames:
  def filename(version, _) do
    version
  end

  # Override the storage directory:
  def storage_dir(version, {file, scope}) do
    "uploads/media/#{scope.id}"
  end

  # Provide a default URL if there hasn't been a file uploaded
  # def default_url(version, scope) do
  #   "/images/avatars/default_#{version}.png"
  # end
end

To get the file I use:

MyApp.Image.url({modell.image, modell}, :original, signed: true)

Uploading of big files to S3 storage causes out of memory

Hi,

when I want to upload some big (ca. 3-6 GB) video files to S3, I get an out-of-memory exception:

2016-05-31 12:21:02.694 [error] Task #PID<0.761.0> started from #PID<0.759.0> terminating
** (File.Error) could not read file /tmp/plug-1464/multipart-697199-932634-1: not enough memory
    (elixir) lib/file.ex:244: File.read!/1
    (arc) lib/arc/storage/s3.ex:7: Arc.Storage.S3.put/3

What do you think about changing the implementation so that we could define a kind of threshold, and if a file is bigger than that, upload it in chunks as a multipart upload instead of a simple put_object?

See https://aws.amazon.com/blogs/aws/amazon-s3-multipart-upload/
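For illustration, a sketch of the streaming idea against the newer ex_aws S3 API (ExAws.S3.Upload.stream_file/1 and ExAws.S3.upload/3, which postdate the ex_aws version in this report):

# stream the file to S3 in parts instead of File.read!/1
# pulling it all into memory first
file.path
|> ExAws.S3.Upload.stream_file()
|> ExAws.S3.upload("my-bucket", s3_key)
|> ExAws.request!()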

Cheers
florian

Scope in storage_dir on create

Would it be easy/straightforward to have the scope in storage_dir after it has passed through some callbacks? My use case: I have a model with a slug column, and I want that column to be part of the uploaded path. The content of this column is generated in a before_insert callback. The goal is to have a unique path for each record, with an identifier persisted in the database. As it stands in arc, the scope contains no id and none of the fields set in callbacks.

Could this be achieved with arc? My hack right now is to inject the generated slug value in the controller before it goes into the changeset.

Thank you for this package, by the way, it’s really fun and easy to use πŸ˜„

No function clause - transform :noaction and ffmpeg

I'm attempting to use Arc's transformation functionality to reformat audio with ffmpeg before storing it. Currently I'm running into an issue with both the :noaction transformation and the "action" transformation.

When a user uploads a file in the desired format, where I don't need to reformat it, and I attempt the :noaction route, I get the following error:

[error] Task #PID<0.553.0> started from #PID<0.550.0> terminating
** (FunctionClauseError) no function clause matching in Arc.Processor.apply_transformation/2
    (arc) lib/arc/processor.ex:6: Arc.Processor.apply_transformation(%Arc.File{file_name: "audio.wav", path: "/tmp/plug-1458/multipart-862094-378542-2"}, :noaction)
    (arc) lib/arc/actions/store.ex:50: Arc.Actions.Store.put_version/3
    (elixir) lib/task/supervised.ex:89: Task.Supervised.do_apply/2
    (elixir) lib/task/supervised.ex:40: Task.Supervised.reply/5
    (stdlib) proc_lib.erl:240: :proc_lib.init_p_do_apply/3
Function: #Function<2.66792300/0 in Arc.Actions.Store.async_put_version/3>
    Args: []

audio_file.ex

defmodule AudioFile do
  use Arc.Definition

  @extension_whitelist ~w(.wav)

  def acl(_, _), do: :private

  def filename(_, {_, scope}) do
    "#{scope.storage_key}"
  end

  def storage_dir(_, {_, scope}) do
    "#{scope.name}/"
  end

  def transform(_version_atom, {file, _scope}) do
    file_extension = get_file_extension(file)

    if Enum.member?(@extension_whitelist, file_extension) do
      :noaction
    else
      {:ffmpeg, fn(input, output) -> "#{input} -format wav #{output}" end, :wav}
    end
  end

  defp get_file_extension(file) do
    file.file_name |> Path.extname |> String.downcase
  end
end

When I attempt the deprecated method listed at lib/arc/processor.ex:7, wrapping the :noaction in a tuple like {:noaction},

I instead see what looks like the transform function running multiple times: twice without error, and a third time without including the parameters at all.

Based on the lib/arc/processor.ex file it looks like that function should be defined, and have an appropriately matching pattern.

When attempting to actually run the transformation {:ffmpeg, fn(input, output) -> "#{input} -format png #{output}" end, :wav} I run into a similar issue:

[error] Task #PID<0.552.0> started from #PID<0.549.0> terminating
** (FunctionClauseError) no function clause matching in Arc.Processor.apply_transformation/2
    (arc) lib/arc/processor.ex:6: Arc.Processor.apply_transformation(%Arc.File{file_name: "mp3_test.mp3", path: "/tmp/plug-1458/multipart-864421-249072-1"}, {:ffmpeg, #Function<1.113334142/2 in AudioFile.transform/2>})
    (arc) lib/arc/actions/store.ex:50: Arc.Actions.Store.put_version/3
    (elixir) lib/task/supervised.ex:89: Task.Supervised.do_apply/2
    (elixir) lib/task/supervised.ex:40: Task.Supervised.reply/5
    (stdlib) proc_lib.erl:240: :proc_lib.init_p_do_apply/3
Function: #Function<2.66792300/0 in Arc.Actions.Store.async_put_version/3>
    Args: []

This error doesn't seem to make sense to me, given that the clauses on lines 8 and 12 of lib/arc/processor.ex look like they should match.

I'm wondering if I'm misunderstanding how Arc should be used, or if there are other issues at play here.

Upload size 413

Hi

When uploading larger images on my production app, I receive the following error:

413 Request Entity Too Large

How can I raise the maximum upload size, and is this done at the application layer or in NGINX?

connection error with version 0.2.x

In the new version, with ex_aws instead of erlcloud, I get a connection error.
If the S3 bucket is in a region other than US East, do you need to specify that in the configuration?

Scope is nil in nested models

Thanks for a great uploader.
I ran into an issue when attaching files to a nested model.

I have modified the uploader so I can upload an image on creation:

def storage_dir(version, {file, scope}) do
  "uploads/banner/images/#{scope.storage_id}" # storage_id is a UUID
end

This works great :-)

But if I use the same uploader on a nested model, e.g. a user has many party_images, then scope is nil when I try to create/update a party_image; version and file are OK.

Signed URLs

Has anyone done signed URLs in any region other than us-east-1?

I get SignatureDoesNotMatch from amazonaws

I tried

config :arc,
  virtual_host: true,
  bucket: "bucketname",
  asset_host: "https://bucketname.s3-eu-west-1.amazonaws.com/"

but I still get the same error.

Noobs here.....

I'd like to thank you for such amazing libraries....

  1. arc_ecto doesn't seem to work with the current version of phoenix_ecto, but I managed by using arc_ecto 0.3.2.
  2. While using arc_ecto 0.3.2 I realized that scope.id doesn't show up in the storage dir... how do I handle this?

Crashing when I submit my form if I use scope; maybe it's something I'm doing wrong

# web/uploaders/avatar.ex
defmodule Digiramp.Avatar do

..
..

def filename(version, {file, scope}) do
    "#{scope.id}_#{version}_#{file.file_name}"
end

from the console

[error] Task #PID<0.951.0> started from #PID<0.949.0> terminating
Function: #Function<0.47797856/0 in Arc.Actions.Store.async_put_version/3>
    Args: []
** (exit) an exception was raised:
    ** (UndefinedFunctionError) undefined function: nil.id/0
        nil.id()
        (digiramp) web/uploaders/avatar.ex:31: Digiramp.Avatar.filename/2
        (arc) lib/arc/definition/versioning.ex:10: Arc.Definition.Versioning.resolve_file_name/3
        (arc) lib/arc/actions/store.ex:49: Arc.Actions.Store.put_version/3
        (elixir) lib/task/supervised.ex:74: Task.Supervised.do_apply/2
        (elixir) lib/task/supervised.ex:19: Task.Supervised.async/3
        (stdlib) proc_lib.erl:237: :proc_lib.init_p_do_apply/3
[error] Ranch protocol #PID<0.949.0> (:cowboy_protocol) of listener Digiramp.Endpoint.HTTP terminated
** (exit) an exception was raised:
    ** (UndefinedFunctionError) undefined function: nil.id/0
        nil.id()
        (digiramp) web/uploaders/avatar.ex:31: Digiramp.Avatar.filename/2
        (arc) lib/arc/definition/versioning.ex:10: Arc.Definition.Versioning.resolve_file_name/3
        (arc) lib/arc/actions/store.ex:49: Arc.Actions.Store.put_version/3
        (elixir) lib/task/supervised.ex:74: Task.Supervised.do_apply/2
        (elixir) lib/task/supervised.ex:19: Task.Supervised.async/3
        (stdlib) proc_lib.erl:237: :proc_lib.init_p_do_apply/3

Same thing when overriding the storage directory:

  def storage_dir(version, {file, scope}) do
    "uploads/users/avatars/#{scope.id}"
  end

Different public path for uploads

Path the application uses to access the file:
priv/static/system/trainers/avatars/17/thumb-ms_big_yellow.png.png?v=63620674899

Public path to the file:
http://localhost:4000/system/trainers/avatars/17/thumb-ms_big_yellow.png.png?v=63620674899

I tried it with:

MyApp.Avatar.url({@trainer.avatar, @trainer}, :thumb)

but it returns priv/static/system/trainers/avatars/17/thumb-ms_big_yellow.png.png?v=63620674899, which of course can't be accessed by the user, since the domain is rooted at the priv/static directory.

How do I get this to work? I couldn't find anything in the docs.
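One workaround at the call site (a sketch, assuming Plug.Static serves priv/static at the site root):

MyApp.Avatar.url({@trainer.avatar, @trainer}, :thumb)
|> String.replace_prefix("priv/static", "")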

Local file is processed and uploaded to S3 but nothing is stored in database

I'm following the examples in the README closely, but I cannot get it to work. I think something is wrong here.

I'm using Elixir's Arc with Ecto and Amazon S3 to store files that I have previously downloaded. Everything seems to work, and the files end up on S3, but nothing is stored in my database. So if I try to generate a URL, I always just get the default image back.

This is how I store a file:

iex > user = Repo.get(User, 3)
iex > Avatar.store({"/tmp/my_file.png", user})
{:ok, "my_file.png"}
iex > user.avatar
nil

But the user.avatar field is still nil.

My user module:


defmodule MyApp.User do
  use MyApp.Web, :model
  use Arc.Ecto.Schema

  alias MyApp.Repo

  schema "users" do
    field :name, :string
    field :email, :string
    field :avatar, MyApp.Avatar.Type    
    embeds_many :billing_emails, MyApp.BillingEmail
    embeds_many :addresses, MyApp.Address
    timestamps
  end

  @required_fields ~w(name email)
  @optional_fields ~w(avatar)

  def changeset(model, params \\ :empty) do
    model
    |> cast(params, @required_fields, @optional_fields)
    |> cast_embed(:billing_emails)
    |> cast_embed(:addresses)
    |> validate_required([:name, :email])
    |> validate_format(:email, ~r/@/)
    |> unique_constraint(:email)
    |> cast_attachments(params, [:avatar])
  end

end

The Avatar uploader:

defmodule MyApp.Avatar do
  use Arc.Definition

  # Include ecto support (requires package arc_ecto installed):
  use Arc.Ecto.Definition

  @acl :public_read

  # To add a thumbnail version:
  @versions [:original, :thumb]

  # Whitelist file extensions:
  def validate({file, _}) do
    ~w(.jpg .jpeg .gif .png) |> Enum.member?(Path.extname(file.file_name))
  end

  # Define a thumbnail transformation:
  def transform(:thumb, _) do
    {:convert, "-strip -thumbnail 250x250^ -gravity center -extent 250x250 -format png", :png}
  end

  def transform(:original, _) do
    {:convert, "-format png", :png}
  end

  def filename(version,  {file, scope}), do: "#{version}-#{file.file_name}"

  # Override the storage directory:
  def storage_dir(version, {file, scope}) do
    "uploads/user/avatars/#{scope.id}"
  end

  # Provide a default URL if there hasn't been a file uploaded
  def default_url(version, scope) do
    "/images/avatars/default_#{version}.png"
  end

end
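For what it's worth, a sketch of the flow the README intends: store/1 on its own never touches the database; persistence happens through cast_attachments inside the changeset, e.g.

user = Repo.get(User, 3)

upload = %Plug.Upload{
  path: "/tmp/my_file.png",
  filename: "my_file.png",
  content_type: "image/png"
}

{:ok, user} =
  user
  |> User.changeset(%{"avatar" => upload})
  |> Repo.update()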

s3 host is incorrect for other regions

Hello. Thanks for your app.

I have run into an issue with the S3 storage. I have registered an S3 bucket in the eu-west-1 region and have these lines in my configuration:

config :arc,
  bucket: "bucketname"

config :ex_aws,
  access_key_id: System.get_env("AWS_ACCESS_KEY_ID"),
  secret_access_key: System.get_env("AWS_SECRET_ACCESS_KEY"),
  s3: [
    scheme: "https://",
    host: "s3-eu-west-1.amazonaws.com",
    region: "eu-west-1"
  ]

But when trying to build the URL it gives back https://s3.amazonaws.com/bucketname/filename.jpg, which causes an error when accessing that path:

<Error>
  <Code>PermanentRedirect</Code>
  <Message>
    The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
  </Message>
  <Bucket>bucketname</Bucket>
  <Endpoint>bucketname.s3.amazonaws.com</Endpoint>
  <RequestId>some-request-id</RequestId>
  <HostId>
    some-host-id
  </HostId>
</Error>

The reason is these lines in s3.ex:

defp default_host do
  case virtual_host do
    true -> "https://#{bucket}.s3.amazonaws.com"
    _    -> "https://s3.amazonaws.com/#{bucket}"
  end
end

So I have changed the configuration to be:

config :arc,
  asset_host: "https://s3-eu-west-1.amazonaws.com/bucketname"

It solved the issue. Adding virtual_host: true also solves it. But maybe it is possible to reuse the configuration from ex_aws, since it already has everything needed? If so, I can make a PR for this.
