superbalist / flysystem-google-cloud-storage
Flysystem Adapter for Google Cloud Storage
License: MIT License
I am running into an issue reading a directory of files from a google cloud storage bucket.
ErrorException in GoogleStorageAdapter.php line 193: Undefined index: contentType
/home/vagrant/Code/TestApplication/vendor/superbalist/flysystem-google-storage/src/GoogleStorageAdapter.php line 193
The error occurs in the normalizeObject function when I run where "testing" is my directory name:
$files = Storage::disk('gcs')->files('testing');
dd($files);
And here is the dump of the object that causes the exception:
array:16 [
"kind" => "storage#object"
"id" => "testbucket/testing//1486146319340058"
"selfLink" => "https://www.googleapis.com/storage/v1/b/testapplication/o/testing%2F"
"name" => "testing/"
"bucket" => "testbucket"
"generation" => "1486146319340058"
"metageneration" => "1"
"timeCreated" => "2017-02-03T18:25:19.324Z"
"updated" => "2017-02-03T18:25:19.324Z"
"storageClass" => "STANDARD"
"timeStorageClassUpdated" => "2017-02-03T18:25:19.324Z"
"size" => "0"
"md5Hash" => "1B2M2Y8AsgTpgAmY7PhCfg=="
"mediaLink" => "https://www.googleapis.com/download/storage/v1/b/testapplication/o/testing%2F?generation=1486146319340058&alt=media"
"crc32c" => "AAAAAA=="
"etag" => "CJqky7vG9NECEAE="
]
If I mount the bucket on my local machine via the gcsfuse tool and use it as a local storage disk everything works fine and won't trigger this exception.
A possible quick fix is to check if contentType exists when setting the mimetype:
'mimetype' => (isset($info['contentType']) ? $info['contentType'] : ''),
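A fuller sketch of that guard as a standalone helper (the function name is illustrative; the field names match the dump above, where the zero-byte "folder" placeholder object has no contentType key):

```php
<?php
// Sketch: tolerate GCS directory placeholder objects that omit contentType.
// These are the zero-byte objects whose names end in "/", as in the dump above.
function normaliseMimetype(array $info): string
{
    // Fall back to an empty string when the API omits contentType.
    return isset($info['contentType']) ? $info['contentType'] : '';
}

$dirObject  = ['name' => 'testing/', 'size' => '0'];          // no contentType key
$fileObject = ['name' => 'a.txt', 'contentType' => 'text/plain'];

echo normaliseMimetype($dirObject), "\n";  // prints an empty line
echo normaliseMimetype($fileObject), "\n"; // prints "text/plain"
```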
Hi
I am getting the following error running the service on Laravel 5.6 on a PHP 7.2 stack
Google \ Cloud \ Core \ Exception \ ServiceException
count(): Parameter must be an array or an object that implements Countable
Generating an uri with whitespaces or any other characters besides alphanumeric characters or "-_.~" will not be RFC 3986 compliant. They will work in most browsers because they are automatically encoded but may fail when passed to other software modules which are not doing automatic encoding.
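One way to produce RFC 3986 compliant paths is to percent-encode each path segment individually, since rawurlencode() already leaves the unreserved set (alphanumerics and "-_.~") untouched. A minimal sketch (the helper name is illustrative):

```php
<?php
// Sketch: encode each path segment per RFC 3986 while preserving "/" separators.
function encodePath(string $path): string
{
    return implode('/', array_map('rawurlencode', explode('/', $path)));
}

echo encodePath('my folder/file name.txt'); // my%20folder/file%20name.txt
```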
Pull request #90 will fix this bug. For anyone who has the same problem at the moment the bug can be solved by pointing at our bugfix:
{
"require": {
"superbalist/flysystem-google-storage": "dev-bugfix-rfc3986-urls"
},
"repositories": [
{
"type": "vcs",
"url": "https://github.com/mailspice/flysystem-google-cloud-storage"
}
]
}
I have a bucket with some nested folders and I am getting duplication for some reason, e.g.:
So, big problem with moving a file inside a GCS bucket.
I have a file "test.css" in the root of a bucket, alongside some folders. I drag-n-dropped the CSS file into a folder. elFinder does its thing and seems to have moved the file. I navigate into the folder (CSS), and yup, there is the file.
However, when I head to my Google console and check the bucket, the test.css file is still in the root of the bucket, but has been renamed "css\test.css".
I tried with an image file, moving it to another folder, same thing. It simply changes the name of the file.
Creating a new file in the folder also results in a file being created in the root of the bucket, with the file name prepended with the folder name.
Screenshot of the bucket in my Google Console http://imgur.com/wck5YlA
I'm not sure if this is a problem with FlysystemStreamWrapper or this library, but using similar code with the AWS S3 adapter works fine. If the problem is with FlysystemStreamWrapper, I can open a separate issue there.
The problem is that when I register GoogleStorageAdapter as a stream wrapper and write to a file using file_put_contents, the file is uploaded fine but I get PHP warnings.
PHP version:
> php -v
PHP 7.1.13 (cli) (built: Jan 5 2018 15:31:15) ( NTS )
Copyright (c) 1997-2017 The PHP Group
Zend Engine v3.1.0, Copyright (c) 1998-2017 Zend Technologies
composer.json
{
"require": {
"league/flysystem": "dev-master",
"twistor/flysystem-stream-wrapper": "dev-master",
"superbalist/flysystem-google-storage": "dev-master"
}
}
test.php
<?php
use League\Flysystem\Filesystem;
use Twistor\FlysystemStreamWrapper;
use Google\Cloud\Storage\StorageClient;
use Superbalist\Flysystem\GoogleStorage\GoogleStorageAdapter;
require __DIR__ . '/vendor/autoload.php';
$keyFilePath = '/Users/eero/flystream/google-application-credentials.json';
$bucketName = 'yourbucketname';
$projectId = 'yourgoogleprojectid';
$basePath = 'sandbox';
$clientConfig = [
'projectId' => $projectId,
'keyFilePath' => $keyFilePath
];
$client = new StorageClient($clientConfig);
$bucket = $client->bucket($bucketName);
$adapter = new GoogleStorageAdapter($client, $bucket, $basePath);
$filesystem = new Filesystem($adapter);
FlysystemStreamWrapper::register('eerotest', $filesystem);
$targetPath = 'eerotest://testfile';
file_put_contents($targetPath, 'foo');
Now when I run php test.php, I would expect no errors, but here is what I actually get:
PHP Warning: fseek(): supplied resource is not a valid stream resource in /Users/eero/flystream/vendor/twistor/flysystem-stream-wrapper/src/FlysystemStreamWrapper.php on line 421
Warning: fseek(): supplied resource is not a valid stream resource in /Users/eero/flystream/vendor/twistor/flysystem-stream-wrapper/src/FlysystemStreamWrapper.php on line 421
PHP Warning: fclose(): supplied resource is not a valid stream resource in /Users/eero/flystream/vendor/twistor/flysystem-stream-wrapper/src/FlysystemStreamWrapper.php on line 387
Warning: fclose(): supplied resource is not a valid stream resource in /Users/eero/flystream/vendor/twistor/flysystem-stream-wrapper/src/FlysystemStreamWrapper.php on line 387
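Until the root cause is pinned down, a possible workaround (a sketch against the test script above, not a fix for the wrapper itself) is to write through the Flysystem instance directly and skip the stream wrapper:

```php
// Workaround sketch: bypass the stream wrapper and write through Flysystem
// directly, which avoids the fseek()/fclose() warnings on the wrapped handle.
// Assumes $filesystem is the League\Flysystem\Filesystem from test.php above.
$filesystem->put('testfile', 'foo');

// For larger payloads, hand Flysystem a real local stream instead:
$handle = fopen('/tmp/source.dat', 'rb'); // path is illustrative
$filesystem->putStream('testfile', $handle);
if (is_resource($handle)) {
    fclose($handle); // guard: some adapters close the stream themselves
}
```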
https://cloud.google.com/storage/docs/access-control/signed-urls
Thanks for the excellent work!
Could someone provide examples on how to use the new GCloud API?
I've upgraded from v1 to v3 (this library) and I can no longer use Google_Auth_AssertionCredentials
as the Google API library was changed to GCloud.
I get the following error when trying to move a file from "my-bucket/sub-folder/file.txt" to "my-other-bucket/file.txt"
Fatal error: Nesting level too deep - recursive dependency? in C:\wamp\www\my-app\application\libraries\elfinder\elFinderVolumeDriver.class.php on line 1887
Update: I get the same error when no sub-folder is involved.
Is there anything preventing support for the newer versions of the google/cloud library?
When trying to move folders, I get a File Not Found error.
I also find that my breakpoint on the copy function (line 173, GoogleStorageAdapter.php) is not being hit when moving folders, but is when moving files?
Is moving (copy/cut and pasting effectively) folders simply not supported/working?
As the title says, it's time for a new release.
We are testing against those versions, but we cannot use them.
I am using the GCS FlySystem with some success, but when I have a GCS bucket set as a root volume, that has a lot of sub-folders, it is taking forever to load.
Is there anything I can do to speed this up at all?
Hello there,
The getOptionsFromConfig method should accept the metadata key as a possible value.
I require this for the following properties:
"contentType": string
"contentLanguage": string
"cacheControl": string
The official google cloud documentation shows that the StorageObject allows the metadata property.
https://googlecloudplatform.github.io/google-cloud-php/#/docs/v0.20.1/storage/storageobject?method=update
And the JSON API docs shows what properties the StorageObject can contain.
https://cloud.google.com/storage/docs/json_api/v1/objects#resource
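A minimal sketch of the mapping this would need, written as a standalone helper rather than the adapter's actual method (the option shape follows the linked StorageObject/JSON API docs; treat the exact key names as assumptions):

```php
<?php
// Sketch: map a Flysystem config array to upload options, forwarding the
// metadata properties the JSON API object resource supports.
function optionsFromConfig(array $config): array
{
    $options = [];
    foreach (['contentType', 'contentLanguage', 'cacheControl'] as $key) {
        if (isset($config[$key])) {
            // The google/cloud-storage upload/update calls accept these
            // under a 'metadata' option, per the docs linked above.
            $options['metadata'][$key] = $config[$key];
        }
    }
    return $options;
}

print_r(optionsFromConfig(['contentType' => 'text/html', 'cacheControl' => 'no-cache']));
```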
Last night my GCS root volume worked fine, this morning, it has started acting "odd".
The root volume now thinks it is locked, but I have no access controls on it at all.
I put a break point on the code where I initialise the adapter, and it looks like it has stopped authorizing it for some reason?
Code:
function googleCloud($bucket)
{
$credentials = new \Google_Auth_AssertionCredentials(
'[email protected]',
[\Google_Service_Storage::DEVSTORAGE_FULL_CONTROL],
file_get_contents(set_realpath('my-project.p12')),
'notasecret'
);
$client = new \Google_Client();
$client->setAssertionCredentials($credentials);
$client->setDeveloperKey('MY_KEY');
$service = new \Google_Service_Storage($client);
$adapter = new GoogleStorageAdapter($service, $bucket);
return $adapter;
}
I am looking to do some huge uploads (using elFinder and Google Cloud Storage) and it looks like I am running out of memory when doing so.
elFinder's debug information shows that it's using a peak memory of 5mb.
The upload is chunked into 10mb segments as well.
I'm wondering if there is anything that can be done?
Testing uploading to a local file system, everything is OK, and a file that is under 2GB (the current memory limit of PHP) also works, but I'm wondering if there is a way to have the upload streamed in chunks, rather than as the whole file?
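For what it's worth, Flysystem's stream API is the usual way to avoid buffering the whole file; a sketch (paths illustrative, assuming $filesystem is a Flysystem instance backed by this adapter):

```php
// Sketch: stream the upload instead of loading the whole file into memory.
// writeStream() hands the adapter a resource, so PHP only buffers chunks.
$handle = fopen('/path/to/huge-file.bin', 'rb');
if ($handle === false) {
    throw new RuntimeException('could not open source file');
}

$filesystem->writeStream('uploads/huge-file.bin', $handle);

if (is_resource($handle)) {
    fclose($handle); // guard: some adapters close the stream themselves
}
```

Whether the adapter then streams to GCS or buffers internally depends on its upload implementation, which is what this issue is about.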
It seems I am unable to edit text files inside a GCS bucket.
I tried both files created with elFinder and files uploaded directly in the Google Console.
Oddly, I can edit CSS files and I can even use the image resizer/cropper/rotator.
Thanks very much for this great package. Would you consider moving to semver-compatible versioning? I recently had to submit a change to the Drupal module using this package to pull in 7.0.0, however it doesn't appear there are any real breaking changes from 5.x, which is what it had been on. Using semantic versioning would make it easier for packages depending on this one to identify breaking changes and keep API compatibility for more minor changes. It appears every release at the moment bumps the major version.
After composer updating, I now get the error:
PHP Fatal error: Uncaught Error: Class 'Google_Auth_AssertionCredentials' not found
This was introduced in #5 when we allowed for both ~1.1|^2.0.0@RC of the google/apiclient to be installed. At some point during RC and 2.0.0 final, the Google_Auth_AssertionCredentials class was removed.
See googleapis/google-api-php-client#748 and https://github.com/google/google-api-php-client/blob/master/UPGRADING.md#google_auth_assertioncredentials-has-been-removed
When we pushed this release allowing 2.0.0@RC, we incorrectly bumped the minor version and kept major as is.
I have a problem uploading a file (image) to Firebase Storage from PHP.
Currently, I can't find any function that solves this problem.
Please help.
Thanks!
I think the code just hangs when you try to delete or list a massive directory.
Is there any way to limit the number of rows returned, or do it in small chunks?
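Going to the bucket directly is one way to process a large prefix incrementally; a sketch, with option names taken from the google/cloud-storage docs (the prefix is illustrative):

```php
// Sketch: list a huge prefix page-by-page instead of materialising everything.
// 'maxResults' caps the page size per API call; the returned iterator fetches
// further pages lazily as the loop consumes it.
$objects = $bucket->objects([
    'prefix'     => 'massive-directory/',
    'maxResults' => 1000,
]);

foreach ($objects as $object) {
    $object->delete(); // delete incrementally rather than buffering the listing
}
```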
How do I get a URL from a Firebase Storage file path?
Dear Sir/Madam,
Could you please provide a configuration guide for using elFinder with flysystem-google-cloud-storage?
Best Regards,
I just received the following from google cloud:
Hello Google Cloud Storage Customer,
We are writing to let you know that starting June 20, 2019, Cloud Storage will allow
JSON API requests to be sent to storage.googleapis.com in addition to
www.googleapis.com (the current endpoint).
What do I need to know?
On June 20, 2019, we will begin updating the Cloud Client Libraries and gsutil to
use the new endpoint: storage.googleapis.com. After the update, your JSON API
requests will start using the new endpoint.
What do I need to do?
If your production or test code doesn't check for endpoint-specific details, no
action is required on your part.
If your production or test code checks for endpoint-specific details, you will need to
modify them before June 20, 2019 as follows:
* If your code checks that the ‘baseUrl’ or ‘rootUrl’ fields in the JSON API
Discovery document point to www.googleapis.com, you will need to modify
those checks to allow either storage.googleapis.com or www.googleapis.com.
Note that the oauth2 scopes fields in the Discovery document will not change
and will continue to point to www.googleapis.com.
* If your code checks that the ‘selfLink’ field in bucket or object metadata
points to www.googleapis.com, you will need to modify that check to allow
either storage.googleapis.com or www.googleapis.com.
* If you access Cloud Storage through a firewall, you will need to ensure that
requests to storage.googleapis.com are allowed by your firewall rules.
I would imagine the plugin will be affected by this?
I get the following error show up in elFinder when deleting a file from a GCS bucket:
HTTP Error: Unable to connect: 'fopen(compress.zlib://https://www.googleapis.com/storage/v1/b/MY_BUCKET/o/MY_FILE.JPG?key=d687c7417ba524c0fe64905a23250af5e5e0c332): failed to open stream: operation failed'
The file is actually deleted, but I absolutely cannot have this error pop up!
I'm testing out v3 of this library as I need to upload huge files, and the change to a streaming method should help.
However, now when I upload a large file (for now, just 100-200MB) elFinder is behaving oddly, and I think it may be linked to this library.
Previously, the upload progress was accurate, and there would be lots of small chunked requests.
Now, there are a few small XHR requests (depending on the chunk size set in elFinder) which fill the progress bar to 100%, then there is a final request (elFinder displays "Doing something") that takes X amount of time and does the full upload of the file.
This is giving me issues with timeouts if the file isn't written in enough time. I was under the impression the streaming method you implemented would solve timeout issues?
It would be great if the consumer of the package could decide which version of the Google API Client libraries to use. The signatures of the relevant classes should be stable and should not diverge.
We in particular are excited for new authentication methods for our API Client integration https://github.com/websightgmbh/l5-google-client
To allow a graceful transition, I propose a new major version of your package (which would block composer from updating the api-client libraries from 1.x in older / existing projects). This new version could also get the change from #4, which may interfere with your current production environment.
I will pull-request the change to your package but leave the ticket here for discussion.
Thank you in advance!
I'm unable to delete a directory on a Cloud Storage bucket since the update to v7.2.0. My application worked fine with v7.1.0.
I believe this was introduced by PR#94.
To illustrate the issue, please run the following snippet after doing composer require superbalist/flysystem-google-cloud-storage.
require_once __DIR__ . '/vendor/autoload.php';
use Google\Cloud\Storage\StorageClient;
use League\Flysystem\Filesystem;
use Superbalist\Flysystem\GoogleStorage\GoogleStorageAdapter;
$storageClient = new StorageClient([
'projectId' => '<valid-project-id>',
'keyFilePath' => '</path/to/service/account/file.json>',
]);
$bucket = $storageClient->bucket('<bucket-name>');
$adapter = new GoogleStorageAdapter($storageClient, $bucket);
$filesystem = new Filesystem($adapter);
$filesystem->createDir('test');
$filesystem->put('test/file.txt', 'contents');
// The above works fine, but the following call deletes the file yet fails
// to delete directory `test` from the bucket.
$filesystem->deleteDir('test');
I inspected the changes to the ::deleteDir method from said PR, and I think the issue lies at line 264. At that point, $object['path'] for the case where $object['type'] === 'dir' isn't normalised, while $dirname is, so the directory we request to delete is never added to $filtered_objects.
To verify, I edited that loop to look like:
$filtered_objects = [];
foreach ($objects as $object) {
if ($object['type'] === 'dir') { // normalise path for directories
$object['path'] = $this->normaliseDirName($object['path']);
}
if (strpos($object['path'], $dirname) !== false) {
$filtered_objects[] = $object;
}
}
This seems to work for me. I have now locked this dependency at v7.1.0 to continue using it.
Thank you.
The adapter only needs the DEVSTORAGE_READ_WRITE scope.
Does this adapter work with any form of caching from Flysystem?
In the following lines, we check if the visibility has been specified, and if it hasn't we default to private. https://github.com/Superbalist/flysystem-google-cloud-storage/blob/master/src/GoogleStorageAdapter.php#L143-L149
Is there any reason we need to default to anything at all? If we specify nothing, the default object ACL of the bucket should be applied (by default private, visible to owner). This seems like more desirable behaviour imo – if users specify a default object ACL on their buckets, I believe it's expected that it will apply to uploaded files.
If there is a good reason this is being done, let's keep it but make it clear in the documentation that we're applying private visibility by default.
Could you please bump the version number, so we can get the latest changes without requiring dev-master?
Thank you :)
Whilst developing locally, using v5.0.0 of this library, I am getting a cURL error about the SSL certificate.
How can I disable SSL checks for this library?
Or, if I use cacert.pem, where do I need to put that to get it to work?
I think I found a big problem with the GCS adapter.
I am having issues where directories are not being listed in elFinder, when they do exist in the GCS console.
I set up some break points, namely on listContents(), and it seems that directories are being normalised (with the normaliseObject method) as files. I think this is stopping them from being shown in elFinder.
Using elFinder, I am connected to a storage bucket. After I make a new directory, it is visible, until I reload the page.
The directory is still present in the bucket, but is simply not loaded again.
I am unable to move files from a GCS Bucket to a local file volume, but I can move it from the local file volume to a GCS Bucket.
No apparent PHP errors, just elFinder fails and shows a friendly error popup.
We made a Laravel ServiceProvider for your adapter (thanks a bunch for that! :) ) and now run into the issue that files uploaded with the adapter are not manageable through the cloud console.
websightgmbh/l5-google-cloud-storage#1 shows the problem with a picture. I wonder if there are any missing attributes on initial creation of the objects?
[Symfony\Component\Config\Definition\Exception\InvalidConfigurationException]
Unrecognized option "googlecloudstorage" under "oneup_flysystem.adapters.catalog_storage_adapter"
Is there any reason for this constraint?
https://github.com/Superbalist/flysystem-google-cloud-storage/blob/master/composer.json#L14
I am using google/cloud-storage 1.5, that is why I cannot install a package which fails on this dependency.
Thanks
Currently superbalist/flysystem-google-storage requires google/cloud:<0.50, but my project needs google/cloud:0.53.
I found that Google has split google/cloud into many sub-projects, such as google/cloud-pubsub and google/cloud-storage.
https://packagist.org/packages/google/cloud-storage
So maybe it would be better to require google/cloud-storage instead.
Using elFinder to create a new directory in the root of a bucket, an error is thrown.
Whilst elFinder shows an error, the directory is actually created in the GCS (checked using the GCS console).
I think the problem is with this adapter though.
In the upload() method, uploadType is always set as media, which may cause a conflict with elFinder.
It also seems to always want to set data to $contents, which is passed an empty string via createDir(), which again is likely to cause issues?
Right now you can pass an array containing upload config to writeStream, updateStream etc. after retrieving the filesystem driver via $disk->getDriver() which will essentially pass the arguments to
protected function upload($path, $contents, Config $config)
where the actual config of the google bucket upload happens.
I noticed that the config arguments are prepared by another function:
protected function getOptionsFromConfig(Config $config)
{
$options = [];
if ($visibility = $config->get('visibility')) {
$options['predefinedAcl'] = $this->getPredefinedAclForVisibility($visibility);
} else {
// if a file is created without an acl, it isn't accessible via the console
// we therefore default to private
$options['predefinedAcl'] = $this->getPredefinedAclForVisibility(AdapterInterface::VISIBILITY_PRIVATE);
}
return $options;
}
The problem I have is that only the 'visibility' argument from $config is actually parsed; additional arguments like 'chunkSize' or 'resumable' that are essential for chunked file uploads are discarded.
I would suggest adding something along the line of this, to enable those features:
if ($config->has('chunkSize')) {
$options['chunkSize'] = $config->get('chunkSize');
}
if ($config->has('resumable')) {
$options['resumable'] = $config->get('resumable');
}
Currently I have a site that uploads a file stored in the temp directory of the server. This is a path like "/private/var/tmp/phprROvf6".
We then open that path and try to upload it to GCS. Example:
$handle = fopen($path, 'r');
if ($handle === false) {
throw new InvalidArgumentException("$path could not be opened for reading");
}
$result = $filesystem->putStream($fileID, $handle);
fclose($handle);
return $result;
The issue is that whenever it tries to close the $handle variable after uploading, it states it is no longer a stream and errors out. I can confirm that the handle is a stream originally, that the file does upload correctly, and that the returned result is true. But after the upload in https://github.com/Superbalist/flysystem-google-cloud-storage/blob/master/src/GoogleStorageAdapter.php line 174 (protected function upload($path, $contents, Config $config)), where
$object = $this->bucket->upload($contents, $options);
runs, the stream becomes corrupt.
var_dump($handle) before it tries to upload -
resource(13) of type (stream)
var_dump($handle) after the stream is uploaded -
resource(13) of type (Unknown)
Any idea why this would happen or how to prevent it?
In the tests, the stream is explicitly tested as being an instance of StreamInterface.
This stream is returned back further in Filesystem::readStream().
The problem here, however, is that the Filesystem interface states that the method should be returning a resource.
It's not a problem for me to check if I'm getting a StreamInterface or a resource, but this is probably a little bit misleading. I also realise that this is a backwards-compatibility breaking change, so that raises the stakes a bit. Or, maybe I'm wrong. In that case do not hesitate to call me a fool.
Is there any option to download a file?
I have this error:
RuntimeException: Cannot read from non-readable stream in D:\xampp\htdocs\bitbucket\classified\vendor\guzzlehttp\psr7\src\Stream.php:208
Please suggest a solution. Should I update the key file?
Are you going to support getUrl()? It is used by Laravel and Spark, and a change to illuminate/filesystem allows the driver to produce a URL for files now. It looks like it is going to be released with the next Laravel release.
Doing tests with huge file uploads (10GB+), it seems that this adapter does not stream the upload; rather it just writes the file, presumably to memory, then finally to the bucket.
Is there any chance of getting the read and write stream functionality added in?
Does this library support resumable uploads?
If not, is it something you can implement as a new feature?
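Resumable uploads are supported by google/cloud-storage itself, so in the meantime they can be reached by going to the bucket directly. A sketch with method names taken from the google/cloud-storage docs (paths illustrative, error handling trimmed):

```php
// Sketch: resumable upload via the underlying google/cloud-storage client.
// Assumes $bucket is a Google\Cloud\Storage\Bucket instance.
$uploader = $bucket->getResumableUploader(
    fopen('/path/to/huge-file.bin', 'rb'),
    ['name' => 'huge-file.bin']
);

try {
    $object = $uploader->upload();
} catch (\Google\Cloud\Core\Exception\GoogleException $e) {
    // Resume from where the transfer stopped.
    $resumeUri = $uploader->getResumeUri();
    $object = $uploader->resume($resumeUri);
}
```

Exposing this through the adapter would presumably mean forwarding a 'resumable' option to the bucket's upload call.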