bigQueryR's Introduction

bigQueryR

Introduction

This is a package for interacting with BigQuery from within R.

See the bigQueryR website for examples, details and tutorials.

Installation


This package is on CRAN, but you can also install the latest version from the cloudyr drat repository:

# latest stable version
install.packages("bigQueryR", repos = c(getOption("repos"), "http://cloudyr.github.io/drat"))

Or, to pull a potentially unstable version directly from GitHub:

if(!require("ghit")){
    install.packages("ghit")
}
ghit::install_github("cloudyr/bigQueryR")


bigQueryR's People

Contributors

django-djack, dkulp2, ekocsis3, husseyd, leeper, markedmondson1234, peteratemarsys


bigQueryR's Issues

bqr_delete_table gives JSON related error

### Using some arbitrary names
bqr_delete_table("My_Project", "My_data_set", "My_table")

Request Status Code: 204
Error: parse error: premature EOF

                     (right here) ------^

Tested this with bigrquery::delete_table on the same data set and it works perfectly.

This error also appears with bqr_upload_data(..., overwrite = TRUE), as that function runs bqr_delete_table underneath.

put a wait loop function in

A common user need is this:

  status2 <- FALSE
  time2 <- Sys.time()
  while(!status2){
    Sys.sleep(5)
    message("Waiting for BigQuery extract job....job time:", format(difftime(Sys.time(), time2), format = "%H:%M:%S"))
    extract_job <- bigQueryR::bqr_get_job(projectId, extract_job$jobReference$jobId)

    if(extract_job$status$state == "DONE"){
      status2 <- TRUE 
    } else {
      status2 <- FALSE
    }
  }
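
A hedged sketch of how this loop could be wrapped into a reusable polling helper; wait_for_bq_job() is an illustrative name (not part of the package), and bqr_get_job() is the only bigQueryR call used:

wait_for_bq_job <- function(projectId, jobId, poll_seconds = 5) {
  # poll the job until BigQuery reports it as DONE, then return the job object
  start <- Sys.time()
  repeat {
    job <- bigQueryR::bqr_get_job(projectId, jobId)
    if (identical(job$status$state, "DONE")) return(job)
    message("Waiting for BigQuery job... elapsed: ",
            format(difftime(Sys.time(), start)))
    Sys.sleep(poll_seconds)
  }
}

# e.g. extract_job <- wait_for_bq_job(projectId, extract_job$jobReference$jobId)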

DELETE ROW -- Wrote diagnostic object to 'gar_parse_error.rds'

Hello, thanks for your contribution. I'm facing the following error.

I'm working on a Shiny app with bigQueryR where I need to create, erase and update client information based on two variables: Name and User. For this I created two functions, one to append information and another to erase a row in BigQuery from R. The first one adds clients properly. The second function, deleteDB, also works, but it gives me the following parsing error from the API, which I do not know how to handle. When I check the data in the BigQuery console, the row was erased properly.

Any idea what is behind this error?
Best regards
Tomás

deleteDB = function(client=NULL,user=NULL,table=table){ 
  query = paste0("DELETE FROM ",table," WHERE Name = '",client,"' AND User = '",user,"'") 
  QueryReturn <- try(bqr_query(projectId = project_id,
                               datasetId = datasetid, query,useLegacySql = FALSE)) 
}

>  deleteDB(client = client,user = user)

Error in matrix(unlist(unlist(x$rows)), ncol = length(schema$name), byrow = TRUE) : 
  'data' must be of a vector type, was 'NULL'
Error : API Data failed to parse.  
             Wrote diagnostic object to 'gar_parse_error.rds', use googleAuthR::gar_debug_parse('gar_parse_error.rds') to 
             debug the data_parse_function.
Warning message:
In bqr_query(projectId = cred$project_id, datasetId = cred$datasetid,  :
  API Data failed to parse.  Wrote diagnostic object to 'gar_parse_error.rds', use googleAuthR::gar_debug_parse('gar_parse_error.rds') to debug the data_parse_function.


> googleAuthR::gar_debug_parsing(filename = "gar_parse_error.rds")

2020-02-12 13:18:28> # When creating a GitHub issue, please include this output.
List of 3
 $ request       :List of 4
  ..$ req_url     : chr "https://www.googleapis.com/bigquery/v2/projects/'project'/queries"
  ..$ request_type: chr "POST"
  ..$ the_body    :List of 6
  .. ..$ kind          : chr "bigquery#queryRequest"
  .. ..$ query         : chr "DELETE FROM clientes WHERE Name = 'diego' AND User = 'uribe'"
  .. ..$ maxResults    : num 1000
  .. ..$ useLegacySql  : logi FALSE
  .. ..$ useQueryCache : logi TRUE
  .. ..$ defaultDataset:List of 2
  .. .. ..$ datasetId: chr "datasetid"
  .. .. ..$ projectId: chr "proyect"
  ..$ customConfig: NULL
 $ response      :List of 3
  ..$ data_parse_args: list()
  ..$ data_parse_func:function (x)  
  ..$ content        :List of 7
  .. ..$ kind               : chr "bigquery#queryResponse"
  .. ..$ schema             :List of 1
  .. .. ..$ fields:'data.frame':	2 obs. of  4 variables:
  .. .. .. ..$ name       : chr [1:2] "Name" "User"
  .. .. .. ..$ type       : chr [1:2] "STRING" "STRING"
  .. .. .. ..$ mode       : chr [1:2] "NULLABLE" "NULLABLE"
  .. .. .. ..$ description: chr [1:2] "" ""
  .. ..$ jobReference       :List of 3
  .. .. ..$ projectId: chr "proyect"
  .. .. ..$ jobId    : chr "job_1rpsitZzGa8ndzuDeBIN0eNBPCHA"
  .. .. ..$ location : chr "US"
  .. ..$ totalBytesProcessed: chr "587"
  .. ..$ jobComplete        : logi TRUE
  .. ..$ cacheHit           : logi FALSE
  .. ..$ numDmlAffectedRows : chr "0"
 $ authentication:List of 1
  ..$ token:Classes 'TokenServiceAccount', 'Token2.0', 'Token', 'R6' <TokenServiceAccount>
  Inherits from: <Token2.0>
  Public:
    app: NULL
    cache: function (path) 
    cache_path: FALSE
    can_refresh: function () 
    clone: function (deep = FALSE) 
    credentials: list
    endpoint: oauth_endpoint
    hash: function () 
    init_credentials: function () 
    initialize: function (endpoint, secrets, params) 
    load_from_cache: function () 
    params: list
    print: function (...) 
    private_key: NULL
    refresh: function () 
    revoke: function () 
    secrets: list
    sign: function (method, url) 
    validate: function ()  
 - attr(*, "class")= chr "gar_parse_error"
2020-02-12 13:18:29> - Attempting data parsing
$request
$request$req_url
[1] "https://www.googleapis.com/bigquery/v2/projects/'project'/queries"

$request$request_type
[1] "POST"

$request$the_body
$request$the_body$kind
[1] "bigquery#queryRequest"

$request$the_body$query
[1] "DELETE FROM clientes WHERE Name = 'diego' AND User = 'uribe'"

$request$the_body$maxResults
[1] 1000

$request$the_body$useLegacySql
[1] FALSE

$request$the_body$useQueryCache
[1] TRUE

$request$the_body$defaultDataset
$request$the_body$defaultDataset$datasetId
[1] "datasetID"

$request$the_body$defaultDataset$projectId
[1] "project"

$request$customConfig
NULL

$response
$response$data_parse_args
list()

$response$data_parse_func
function (x) 
{
    converter <- list(integer = as.integer, float = as.double, 
        boolean = as.logical, string = identity, timestamp = function(x) as.POSIXct(as.integer(x), 
            origin = "1970-01-01", tz = "UTC"), date = function(x) as.Date(x, 
            format = "%Y-%m-%d"))
    schema <- x$schema$fields
    data_f <- as.data.frame(matrix(unlist(unlist(x$rows)), ncol = length(schema$name), 
        byrow = TRUE), stringsAsFactors = FALSE)
    types <- tolower(schema$type)
    converter_funcs <- converter[types]
    for (i in seq_along(converter_funcs)) {
        data_f[, i] <- converter_funcs[[i]](data_f[, i])
    }
    names(data_f) <- schema$name
    out <- data_f
    out <- as.data.frame(out, stringsAsFactors = FALSE)
    attr(out, "jobReference") <- x$jobReference
    attr(out, "pageToken") <- x$pageToken
    out
}
<bytecode: 0x000001f4832d22b8>
<environment: namespace:bigQueryR>

$response$content
$response$content$kind
[1] "bigquery#queryResponse"

$response$content$schema
$response$content$schema$fields
  name   type     mode description
1 Name STRING NULLABLE            
2 User STRING NULLABLE            


$response$content$jobReference
$response$content$jobReference$projectId
[1] "project"

$response$content$jobReference$jobId
[1] "job_1rpsitZzGa8ndzuDeBIN0eNBPCHA"

$response$content$jobReference$location
[1] "US"


$response$content$totalBytesProcessed
[1] "587"

$response$content$jobComplete
[1] TRUE

$response$content$cacheHit
[1] FALSE

$response$content$numDmlAffectedRows
[1] "0"



$authentication
$authentication$token
<Token>
<oauth_endpoint>
 authorize: https://accounts.google.com/o/oauth2/auth
 access:    https://accounts.google.com/o/oauth2/token
 validate:  https://www.googleapis.com/oauth2/v1/tokeninfo
 revoke:    https://accounts.google.com/o/oauth2/revoke
NULL
<credentials> access_token, expires_in, token_type
---


attr(,"class")
[1] "gar_parse_error"

> sessionInfo()

R version 3.6.0 (2019-04-26)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 18362)

Matrix products: default

locale:
[1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United States.1252   
[3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C                          
[5] LC_TIME=English_United States.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] sp_1.3-1                  bigQueryR_0.5.0           googleCloudStorageR_0.5.0
 [4] googlesheets4_0.1.0.9000  googledrive_1.0.0         DBI_1.0.0                
 [7] bigrquery_1.2.0           scales_1.0.0              ggthemes_4.2.0           
[10] shinybusy_0.2.0           fullcalendar_0.0.0.9000   htmlwidgets_1.3          
[13] plotly_4.9.0              ggplot2_3.2.1             jsonlite_1.6             
[16] dplyr_0.8.3               leaflet_2.0.2             shinyalert_1.0           
[19] shinyBS_0.61              shinycssloaders_0.2.0     shinyWidgets_0.4.9       
[22] shinyjs_1.0               shiny_1.3.2               shinydashboard_0.7.1     
[25] htmltools_0.3.6           crayon_1.3.4             

loaded via a namespace (and not attached):
 [1] Rcpp_1.0.1        lattice_0.20-38   lubridate_1.7.4   tidyr_1.0.0       assertthat_0.2.1 
 [6] zeallot_0.1.0     digest_0.6.19     mime_0.8          R6_2.4.0          cellranger_1.1.0 
[11] backports_1.1.4   httr_1.4.1        pillar_1.4.2      rlang_0.4.2       lazyeval_0.2.2   
[16] curl_4.3          rstudioapi_0.10   data.table_1.12.2 googleAuthR_1.1.1 stringr_1.4.0    
[21] bit_1.1-14        munsell_0.5.0     compiler_3.6.0    httpuv_1.5.1      pkgconfig_2.0.2  
[26] askpass_1.1       openssl_1.4.1     tidyselect_0.2.5  tibble_2.1.3      viridisLite_0.3.0
[31] withr_2.1.2       later_0.8.0       grid_3.6.0        xtable_1.8-4      gtable_0.3.0     
[36] lifecycle_0.1.0   magrittr_1.5      zip_2.0.3         stringi_1.4.3     fs_1.3.1         
[41] promises_1.0.1    vctrs_0.2.1       tools_3.6.0       bit64_0.9-7       glue_1.3.1       
[46] purrr_0.3.3       crosstalk_1.0.0   yaml_2.2.0        colorspace_1.4-1  gargle_0.4.0     
[51] memoise_1.1.0  

bqr_query() returning 404

Hi,
We have been using the bigQueryR package in our R Shiny app and things were fine.
Recently bqr_query() started returning a 404 error code and we haven't been able to find the cause. On our local RStudio setup it runs fine. Please help if you can.
Following is the log:
Request failed [404]. Retrying in 1.9 seconds...
Request Status Code: 404
Error : lexical error: invalid char in json text.

(A screenshot of the dashboard error was attached to the original issue.)

bqr_upload_data() wait = FALSE not returning for large uploads

cc @analytics-ml

I also want to mention that the wait = FALSE argument in bqr_upload_data() doesn't return immediately; it waits for the job to complete (i.e. until the entire dataset is uploaded).
What am I doing wrong?

I want to upload a dataset of around 1 million rows and 39 columns to BigQuery daily (without going through Google Cloud Storage, or is that the only option?).
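
For reference, a minimal sketch of the call being described, assuming the project, dataset and table names are placeholders and my_large_df is the data.frame being uploaded:

job <- bqr_upload_data("my-project", "my_dataset", "my_table", my_large_df,
                       wait = FALSE)  # expected to return a job object immediately
# the job could then be polled with bqr_get_job("my-project", job$jobReference$jobId)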

When results have only one column, only the first row is returned

The data is returned by the API, but the type-conversion data parsing in R only recognises the first row.

library(bigQueryR)
bqr_auth()

result <- bqr_query("big-query-r","samples",
                     "SELECT repository.url FROM [publicdata:samples.github_nested] LIMIT 10")

## result is only 1 row, not 10

Unauthorized (HTTP 401). Failed to get an access token.

I encounter the following error while authenticating with Google (bqr_auth()).

Error in oauth2.0_access_token(endpoint, app, code = code, user_params = user_params, :
Unauthorized (HTTP 401). Failed to get an access token.
In addition: Warning message:
In googleAuthR::gar_auto_auth(required_scopes, new_user = new_user, :
travis_environment_var argument is now unsupported and does nothing

It was working perfectly before I updated the package to version 0.3.2.

Other libraries like bigrquery authenticate perfectly, but not bigQueryR.

Where is the problem? Any suggestions?
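
One thing that may be worth trying (a hedged suggestion; the new_user argument is implied by the gar_auto_auth() warning above) is forcing a fresh OAuth flow rather than reusing a cached token:

# discard any cached token and re-authenticate from scratch
bqr_auth(new_user = TRUE)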

Error 409 : "JSON fetch error: Already Exists: "

Hi,

I created a script to export data from BigQuery with the bigQueryR library.
When I use bqr_query, the script returns results.
When I use bqr_query_asynch, the script fails and returns an error:
"Auto-refreshing stale OAuth token.
Request Status Code: 409

Error in checkGoogleAPIError(req) :
JSON fetch error: Already Exists: Job trim-saga-436:0yUe56n5wGynh5f "

This job doesn't exist. I checked with: bqr_list_jobs(projectId, allUsers = FALSE, projection = c("full","minimal"), stateFilter = c("done", "pending", "running")).

Do you have any idea?

Thank you in advance.
Best regards

Using autodetect=TRUE for data.frame uploads removes top row

For bqr_upload_data(), when uploading data.frames there is no need to use autodetect = TRUE, as the schema is inferred from the R classes of the data.frame's columns.

Setting autodetect = TRUE means this is ignored (from June 28th; a Google API change?) and the schema is instead inferred from the top row of the data, meaning the first row is removed from the uploaded table.
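
A minimal sketch of the difference described above, where df is any data.frame and the project, dataset and table names are placeholders:

# schema inferred from the data.frame's column classes (no autodetect needed)
bqr_upload_data("my-project", "my_dataset", "my_table", df)

# autodetect = TRUE: schema inferred from the first row of the data,
# which is then missing from the uploaded table (the behaviour reported above)
bqr_upload_data("my-project", "my_dataset", "my_table", df, autodetect = TRUE)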

library(googleCloudStorageR) resets scope, thus preventing bigQueryR authentication

When I load the bigQueryR package and authenticate with a service key, its functions work just fine with the scope set to https://www.googleapis.com/auth/cloud-platform (the default scope). However, after loading the googleCloudStorageR package I'm not able to use bigQueryR functions anymore due to the scope being reset to https://www.googleapis.com/auth/devstorage.full_control.

Can you please suggest a way to explicitly set multiple scopes and then re-authenticate to be able to use both packages at the same time? Thank you
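
A hedged sketch of one possible workaround, assuming both scopes can be set via the googleAuthR.scopes.selected option after the packages have loaded and before authenticating (the service key path is a placeholder):

library(bigQueryR)
library(googleCloudStorageR)

# set both scopes after loading, since loading a package can reset the option
options("googleAuthR.scopes.selected" = c(
  "https://www.googleapis.com/auth/cloud-platform",
  "https://www.googleapis.com/auth/devstorage.full_control"
))
googleAuthR::gar_auth_service("my-service-key.json")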

Add paging through table results

Need more tables to test against. This commit at least lets you download more than the 50 to 1000 limit and adds page tokens, but look into detecting and looping through pages automatically.

Error: attempt to apply non-function in bqr_query - using DATE schema instead of TIMESTAMP

Hi,

Thanks for making this awesome library; it seems like the only way to connect BigQuery with R and Shiny, and the effort you have put in is appreciated.

I am trying to do a simple query with it and it is failing (this might just be my lack of knowledge of R).

It returns the list of projects and data sets correctly, so I know it is connected.

I followed the guides on querying:

"bqr_query(projectId, datasetId, query, maxResults = 1000)"

This is the command I put in:

result <- bqr_query("bigqyerytestproject2", "TestDataSet1", "SELECT * FROM TestTable3", maxResults = 1000)

and I get the error:

Error : attempt to apply non-function
Warning message:
In q(the_body = body, path_arguments = list(projects = projectId)) :
  API Data failed to parse.  Returning parsed from JSON content.
                    Use this to test against your data_parse_function.

But then I checked BigQuery and the query is going through successfully:

I am just testing with a small amount of data before I move a large data set, but the results are:
[
{
"Name": "Season",
"Date": "2010-06-30",
"ID": "1"
},
{
"Name": "Test",
"Date": "2010-06-30",
"ID": "2"
}
]

Thanks for your help

Path exists and overwrite is FALSE

In
bqr_download_extract(eJob, "~/data/df1.csv")
I am getting the error:
Error: Path exists and overwrite is FALSE

How do I turn overwrite to TRUE then?
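
A hedged workaround sketch: the error suggests the target file already exists on disk, so removing (or renaming) it before calling bqr_download_extract() again avoids the overwrite check (the path is the one from the example above):

out_file <- "~/data/df1.csv"
if (file.exists(out_file)) file.remove(out_file)  # clear the existing file first
bqr_download_extract(eJob, out_file)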

Error when passing in #standardSQL

Hello!

If I use:

bqr_query(query = "#standardSQL [.....]")

I get an error:

API returned: 1.336 - 1.336: No query found.

The query works if I don't pass #standardSQL in the query string but instead use the parameter useLegacySql = FALSE in the function call.
Maybe the docs could be updated to say that passing in #standardSQL will break the query and that the flag should be used instead?
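
For reference, a minimal sketch of the workaround described above (project and dataset IDs are placeholders, and the query is a trivial stand-in):

# pass useLegacySql = FALSE rather than prefixing the query with #standardSQL
bqr_query(projectId = "my-project",
          datasetId = "my_dataset",
          query = "SELECT 1 AS x",
          useLegacySql = FALSE)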

Thanks a bunch for the very useful package.

Error when using bqr_query to create views

I'm getting an error when creating a view with bqr_query(), though it still creates the view.

bqr_query(
  projectId = projectId,
  datasetId = datasetId,
  query = "CREATE VIEW 'project_id.dataset_id.view_name' AS SELECT date, sum(volume) AS volume FROM 'project_id.dataset_id.table_id' GROUP BY date",
  useLegacySql = FALSE
)

Error in matrix(unlist(unlist(x$rows)), ncol = length(schema$name), byrow = TRUE) :
'data' must be of a vector type, was 'NULL'

Error : API Data failed to parse. Wrote diagnostic object to 'gar_parse_error.rds', use googleAuthR::gar_debug_parse('gar_parse_error.rds') to debug the data_parse_function.

error
1 SQL Error
message:
1 API Data failed to parse. \n Wrote diagnostic object to 'gar_parse_error.rds', use googleAuthR::gar_debug_parse('gar_parse_error.rds') to \n debug the data_parse_function.

Warning message:
In bqr_query("project_id", "dataset_id", "CREATE VIEW 'project_id.dataset_id.view_name' AS SELECT date, sum(volume) as volume FROM 'project_id.dataset_id.table_id' GROUP BY date", :
API Data failed to parse.
Wrote diagnostic object to 'gar_parse_error.rds', use googleAuthR::gar_debug_parse('gar_parse_error.rds') to
debug the data_parse_function.

Parsing one row results fails

When x$rows$f is only one row, the bug makes data_f a data.frame of N obs. of 1 variable instead of the expected 1 obs. of N variables.

grr

  data_f <- as.data.frame(Reduce(rbind, lapply(x$rows$f, function(x) x$v)), 
                          stringsAsFactors = FALSE)
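
A hedged sketch of one possible fix, with x as in the parse code above: do.call(rbind, ...) keeps a single row as a 1 x N matrix, whereas Reduce(rbind, ...) collapses it to a plain vector:

  # do.call(rbind, list(one_row)) returns a 1-row matrix, so a single row
  # still becomes 1 obs. of N variables after as.data.frame()
  data_f <- as.data.frame(do.call(rbind, lapply(x$rows$f, function(f) unlist(f$v))),
                          stringsAsFactors = FALSE)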

Unable to specify schema for bqr_upload_data() for non-gs:// data sources

fields = schema_fields(upload_data)

Example:

# Create data.frame with field of class Date
df <- data.frame(
  dates = seq.Date(Sys.Date()-9, Sys.Date(), by = 1),
  x = 1:10
)

# Attempt to append to an existing BigQuery table with the following schema:
# [{name: dates, type: DATE}, {name: x, type: INTEGER}]
bqr_upload_data(bqProject, bqDataset, bqTable, df)

# Returns:
Error: API returned: Provided Schema does not match Table myProject:myDataset.myTable. Field dates has changed type from DATE to TIMESTAMP

Explanation:

I'm unable to append data from a non-gs:// source to an existing BigQuery table due to a mismatch between the existing schema and the schema produced by schema_fields().

Specifically, Date values are being interpreted as Timestamps in the schema being passed to BigQuery.

Is there any way of passing custom schema to bqr_upload_data() for non cloud storage data sources?

download_url continues to display "job not done"

job_extract <- bqr_extract_data("marketing-insights", "Rob", "my_tbl", "causey")

bqr_wait_for_job(bqr_get_job('marketing-insights', job_extract$jobReference$jobId))

bqr_extract_data("marketing-insights", "Rob", "my_tbl", "causey")
### Error in bqr_grant_extract_access(job_extract, "[email protected]") : 
  Job not done
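
A hedged sketch of the pattern that avoids the "Job not done" error: wait for the extract job to reach DONE before requesting access (project, dataset, table and bucket names are those from the example above; the email is a placeholder):

job_extract  <- bqr_extract_data("marketing-insights", "Rob", "my_tbl", "causey")
job_done     <- bqr_wait_for_job(job_extract)
download_url <- bqr_grant_extract_access(job_done, "someone@example.com")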

Turn into a fork of bigrquery

Ideally the features of this library would be in bigrquery, but that would involve a rewrite of bigrquery's authentication, which I'm not wanting to tackle before gargle or http2 is released.

Authorization issue with Service Account

@MarkEdmondson1234 I am working on some BigQuery API calls using this package. I love how the package works, and I am successful in working with the package when authenticating with my own credentials using httr. However, I am struggling to get authorization using a service account to work.

I get an Insufficient Permission error for all functions within the package after authenticating with googleAuthR. I am using:
gar_auth_service('\file path\xxx.json')

This is the error I get when I run any function, even bqr_list_projects():
Request Status Code: 403
Error in checkGoogleAPIError(req) :
JSON fetch error: Insufficient Permission

I have validated that permissions for the service account are set correctly. In this case I have set the service account to have the highest (Owner/Admin) permission.

I have no issues using the service account with other packages of yours.

Here is my sessionInfo():

R version 3.3.2 (2016-10-31)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1

locale:
[1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252 LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C LC_TIME=English_United States.1252

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] googleAuthR_0.4.0.9000 bigQueryR_0.2.0

loaded via a namespace (and not attached):
[1] Rcpp_0.12.9 googleCloudStorageR_0.2.0 digest_0.6.12 crayon_1.3.2 withr_1.0.2
[6] mime_0.5 R6_2.2.0 jsonlite_1.2 xtable_1.8-2 magrittr_1.5
[11] httr_1.2.1 curl_2.3 testthat_1.0.2 devtools_1.12.0 tools_3.3.2
[16] shiny_1.0.0 httpuv_1.3.3 memoise_1.0.0 htmltools_0.3.5 openssl_0.9.6

Warning message - if multiple scopes set

Hi - small issue, but when authenticating via gar_auth(), I get a warning message if I have set multiple scopes, per the below:

options("googleAuthR.scopes.selected" = c("https://www.googleapis.com/auth/bigquery", 
                                          "https://www.googleapis.com/auth/bigquery.insertdata",
                                          "https://www.googleapis.com/auth/cloud-platform",
                                          "https://www.googleapis.com/auth/cloud-platform.read-only",
                                          "https://www.googleapis.com/auth/devstorage.full_control",
                                          "https://www.googleapis.com/auth/devstorage.read_only",
                                          "https://www.googleapis.com/auth/devstorage.read_write"))

googleAuthR::gar_auth()

library(bigQueryR)

bq <- bqr_query(projectId = "my_project",
                     datasetId = "hacker_news",
                     query = "SELECT * FROM [bigquery-public-data:hacker_news.stories] LIMIT 10")
# Warning message:
#   In if (!getOption("googleAuthR.scopes.selected") %in% cloud_scopes) { :
#       the condition has length > 1 and only the first element will be used

Not sure if this is intended as the scopes are correctly set - I wondered if it could maybe be rectified along the following lines:

is_in_cloud_scopes <- purrr::map_lgl(getOption("googleAuthR.scopes.selected"), 
    ~ .x %in% cloud_scopes)

if(!any(is_in_cloud_scopes)) {
    # do stuff
}

No big deal, but the warning message confuses me and I wondered if it was deliberate.

Query fails that works in UI

#35

select tbl1.user_id as user_id from * (select user_id from flatten([project:dataset.table],key) where y >= 1) tbl1 inner join (select user_id from [project2:dataset.table]) tbl2 on tbl1.user_id = tbl2.user_id limit 10

Fix bqr_grant_extract_access vignette example

In the vignette under the Asynchronous Queries paragraph the bqr_grant_extract_access example is incorrect. It says:

## Create the data extract from BigQuery to Cloud Storage
job_extract <- bqr_extract_data("your_project",
                                "your_dataset",
                                "bigResultTable",
                                "your_cloud_storage_bucket_name")
                                
## poll the extract job to check its status
## its done when job$status$state == "DONE"
bqr_get_job("your_project", job_extract$jobReference$jobId)

## to download via a URL and not logging in via Google Cloud Storage interface:
## Use an email that is Google account enabled
## Requires scopes:
##  https://www.googleapis.com/auth/devstorage.full_control
##  https://www.googleapis.com/auth/cloud-platform
## set via options("bigQueryR.scopes") and reauthenticate if needed

download_url <- bqr_grant_extract_access(job_extract, "[email protected]")

The issue is with the last line: the job_extract variable's Status is RUNNING, and the stored object won't change to DONE even after the job is actually done.

> job_extract
==Google BigQuery Job==
JobID:           job_ovhibn_6GQBXELWsHOzYTEXXXXXX
ProjectID:       XXXX
Status:          RUNNING 
User:            XXXX
Created:         2018-01-05 15:46:17 
Start:           2018-01-05 15:46:17 
End:              
## View job configuration via job$configuration

To fix this, use the following code snippet instead: download_url <- bqr_grant_extract_access(bqr_wait_for_job(job_extract), "[email protected]")

> bqr_wait_for_job(job_extract)
2018-01-05 15:57:14 -- Waiting for job: job_ovhibn_6GQBXELWsHOzYTEXXXXXX - Job timer: 5.001426 secs
2018-01-05 15:57:15 -- Job status: DONE
==Google BigQuery Job==
JobID:           job_ovhibn_6GQBXELWsHOzYTEXXXXXX 
ProjectID:       XXXX 
Status:          DONE 
User:            XXXX
Created:         2018-01-05 15:46:17 
Start:           2018-01-05 15:46:17 
End:             2018-01-05 15:46:18 
## View job configuration via job$configuration
