Create a new extract job (deprecated)
Please use api-job and api-perform instead.
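A minimal sketch of the replacement interface. The functions used here, bq_table(), bq_perform_extract(), and bq_job_wait() from api-perform, and all project, dataset, table, and bucket names, are assumptions for illustration; check the current bigrquery reference for the exact signatures.
# Sketch only; "my-project", "my_dataset", "my_table", and the bucket are placeholders.
src <- bq_table("my-project", "my_dataset", "my_table")
job <- bq_perform_extract(
  src,
  destination_uris = "gs://my-bucket/extract-*.json",
  destination_format = "NEWLINE_DELIMITED_JSON",
  compression = "NONE"
)
bq_job_wait(job)  # poll until the extract job completes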
insert_extract_job(
  project,
  dataset,
  table,
  destination_uris,
  compression = "NONE",
  destination_format = "NEWLINE_DELIMITED_JSON",
  ...,
  print_header = TRUE,
  billing = project
)
project
Project identifier
dataset
Dataset identifier
table
Name of the table to extract values from
destination_uris
Fully qualified Google Cloud Storage URL. For large extracts you may need to specify a wildcard, since exports larger than 1 GB must be split across multiple files.
compression
Compression type ("NONE", "GZIP")
destination_format
Destination format ("CSV", "AVRO", or "NEWLINE_DELIMITED_JSON")
...
Additional arguments passed on to the underlying API call. snake_case names are automatically converted to camelCase.
print_header
Include a row of column headers in the results?
billing
Project ID to use for billing
A job resource list, as documented at https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs
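A minimal usage sketch of the deprecated function itself; the project, dataset, table, and bucket names are placeholders.
# Sketch only; all identifiers below are placeholders.
job <- insert_extract_job(
  project = "my-project",
  dataset = "my_dataset",
  table = "my_table",
  destination_uris = "gs://my-bucket/extract-*.csv.gz",
  destination_format = "CSV",
  compression = "GZIP"
)
job <- wait_for(job)  # block until the extract job completes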
Other jobs: get_job(), insert_query_job(), insert_upload_job(), wait_for()