1. Access Control
    1. OBJECT_PRIVILEGES
      1. OBJECT_CATALOG
        1. The project ID of the project that contains the resource.
      2. OBJECT_SCHEMA
        1. The name of the dataset that contains the resource. This is null for dataset resource types.
      3. OBJECT_NAME
        1. The name of the table, view, or dataset the policy applies to.
      4. OBJECT_TYPE
        1. The resource type, such as SCHEMA (dataset), TABLE, VIEW, and EXTERNAL.
      5. PRIVILEGE_TYPE
        1. The role ID, such as roles/bigquery.dataEditor
      6. GRANTEE
        1. The user type and user that the role is granted to.
    2. How do I
      1. Retrieve all columns from the INFORMATION_SCHEMA.OBJECT_PRIVILEGES view
    3. A WHERE object_name = '<dataset>' filter is mandatory when querying this view
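As a sketch, the retrieval above with its mandatory object_name filter might look like this; `region-us` and `mydataset` are placeholders for your region qualifier and dataset:

```sql
-- List all grants on a dataset. The object_name filter is required by this view.
SELECT
  object_name,
  object_type,
  privilege_type,
  grantee
FROM
  `region-us`.INFORMATION_SCHEMA.OBJECT_PRIVILEGES
WHERE
  object_name = 'mydataset';
```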
  2. BI Engine
    1. BI_CAPACITIES
      1. project_id
        1. The project ID of the project that contains BI Engine capacity
      2. project_number
        1. The project number of the project that contains BI Engine capacity
      3. bi_capacity_name
        1. The name of the object. Currently there can only be one capacity per project, hence the name is always set to default
      4. size
        1. BI Engine RAM in bytes
      5. preferred_tables
        1. Set of preferred tables this BI Engine capacity must be used for. If set to null, BI Engine capacity is used for all queries in the current project
    2. BI_CAPACITY_CHANGES
      1. change_timestamp
        1. Timestamp when the current update to BI Engine capacity was made
      2. project_id
        1. The project ID of the project that contains BI Engine capacity
      3. project_number
        1. The project number of the project that contains BI Engine capacity
      4. bi_capacity_name
        1. The name of the object. Currently there can only be one capacity per project, hence the name is always default
      5. size
        1. BI Engine RAM in bytes
      6. user_email
        1. Email address of the user or workforce identity federation subject that made the change. The value is google for changes made by Google, or NULL if the email address is unknown
      7. preferred_tables
        1. The set of preferred tables this BI Engine capacity must be used for. If set to null, BI Engine capacity is used for all queries in the current project
    3. How do I
      1. Retrieve current BI Engine capacity changes
      2. Return the size of BI Engine capacity in gigabytes for the query project
      3. Get all changes made to BI Engine capacity by a user
      4. Get BI Engine capacity changes for the last seven days
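The last task above, capacity changes for the past seven days with size converted to gigabytes, can be sketched as follows (the `region-us` qualifier is an assumption; adjust to your capacity's location):

```sql
SELECT
  change_timestamp,
  user_email,
  size / POW(1024, 3) AS size_gb
FROM
  `region-us`.INFORMATION_SCHEMA.BI_CAPACITY_CHANGES
WHERE
  change_timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
ORDER BY
  change_timestamp DESC;
```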
  3. Configurations
    1. EFFECTIVE_PROJECT_OPTIONS
      1. OPTION_NAME
        1. default_time_zone
          1. The effective default time zone for this project
        2. default_kms_key_name
          1. The effective default key name for this project
        3. default_query_job_timeout_ms
          1. The effective default query timeout in milliseconds for this project
        4. default_interactive_query_queue_timeout_ms
          1. The effective default timeout in milliseconds for queued interactive queries for this project
        5. default_batch_query_queue_timeout_ms
          1. The effective default timeout in milliseconds for queued batch queries for this project
      2. OPTION_DESCRIPTION
        1. The option description
      3. OPTION_TYPE
        1. The data type of the OPTION_VALUE
      4. OPTION_SET_LEVEL
        1. The level in the hierarchy at which the setting is defined, with possible values of DEFAULT, ORGANIZATION, or PROJECT
      5. OPTION_SET_ON_ID
        1. The value depends on OPTION_SET_LEVEL: NULL if DEFAULT, "" if ORGANIZATION, or the project ID if PROJECT
      6. OPTION_VALUE
        1. The current value of the option
    2. ORGANIZATION_OPTIONS
      1. OPTION_NAME
        1. default_time_zone
          1. The default time zone for this organization
        2. default_kms_key_name
          1. The default key name for this organization
        3. default_query_job_timeout_ms
          1. The default query timeout in milliseconds for this organization
        4. default_interactive_query_queue_timeout_ms
          1. The default timeout in milliseconds for queued interactive queries for this organization
        5. default_batch_query_queue_timeout_ms
          1. The default timeout in milliseconds for queued batch queries for this organization
      2. OPTION_DESCRIPTION
        1. The option description
      3. OPTION_TYPE
        1. The data type of the OPTION_VALUE
      4. OPTION_VALUE
        1. The current value of the option
    3. PROJECT_OPTIONS
      1. OPTION_NAME
        1. default_time_zone
          1. The default time zone for this project
        2. default_kms_key_name
          1. The default key name for this project
        3. default_query_job_timeout_ms
          1. The default query timeout in milliseconds for this project
        4. default_interactive_query_queue_timeout_ms
          1. The default timeout in milliseconds for queued interactive queries for this project
        5. default_batch_query_queue_timeout_ms
          1. The default timeout in milliseconds for queued batch queries for this project
      2. OPTION_DESCRIPTION
        1. The option description
      3. OPTION_TYPE
        1. The data type of the OPTION_VALUE
      4. OPTION_VALUE
        1. The current value of the option
    4. How do I
      1. Retrieve options
      2. Retrieve organization options
      3. Retrieve project options
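Retrieving the effective options for a project, including the level at which each was set, can be sketched as (region qualifier assumed):

```sql
SELECT
  option_name,
  option_value,
  option_set_level,
  option_set_on_id
FROM
  `region-us`.INFORMATION_SCHEMA.EFFECTIVE_PROJECT_OPTIONS;
```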
  4. Datasets
    1. SCHEMATA
      1. CATALOG_NAME
        1. The name of the project that contains the dataset
      2. SCHEMA_NAME
        1. The dataset's name, also referred to as the datasetId
      3. SCHEMA_OWNER
        1. The value is always NULL
      4. CREATION_TIME
        1. The dataset's creation time
      5. LAST_MODIFIED_TIME
        1. The dataset's last modified time
      6. LOCATION
        1. The dataset's geographic location
      7. DDL
        1. The CREATE SCHEMA DDL statement that can be used to create the dataset
      8. DEFAULT_COLLATION_NAME
        1. The name of the default collation specification if it exists; otherwise, NULL
    2. SCHEMATA_LINKS
      1. CATALOG_NAME
        1. The name of the project that contains the source dataset
      2. SCHEMA_NAME
        1. The name of the source dataset. The dataset name is also referred to as the datasetId
      3. LINKED_SCHEMA_CATALOG_NUMBER
        1. The project number of the project that contains the linked dataset
      4. LINKED_SCHEMA_CATALOG_NAME
        1. The project name of the project that contains the linked dataset
      5. LINKED_SCHEMA_NAME
        1. The name of the linked dataset. The dataset name is also referred to as the datasetId
      6. LINKED_SCHEMA_CREATION_TIME
        1. The time when the linked dataset was created
      7. LINKED_SCHEMA_ORG_DISPLAY_NAME
        1. The display name of the organization in which the linked dataset is created
    3. SCHEMATA_OPTIONS
      1. CATALOG_NAME
        1. The name of the project that contains the dataset
      2. SCHEMA_NAME
        1. The dataset's name, also referred to as the datasetId
      3. OPTION_NAME / OPTION_TYPE / OPTION_VALUE
        1. default_partition_expiration_days
          1. The default lifetime, in days, of all partitioned tables in the dataset
        2. default_table_expiration_days
          1. The default lifetime, in days, of all tables in the dataset
        3. max_time_travel_hours
          1. The time travel window in hours: a multiple of 24, between 48 (2 days) and 168 (7 days)
        4. description
        5. friendly_name
        6. labels
          1. An array of STRUCTs that represent the labels on the dataset
        7. storage_billing_model
          1. The storage billing model of the dataset
    4. SHARED_DATASET_USAGE
      1. project_id
        1. (Clustering column) The ID of the project that contains the shared dataset
      2. dataset_id
        1. (Clustering column) The ID of the shared dataset
      3. table_id
        1. The ID of the accessed table
      4. data_exchange_id
        1. The resource path of the data exchange
      5. listing_id
        1. The resource path of the listing
      6. job_start_time
        1. (Partitioning column) The start time of this job
      7. job_end_time
        1. The end time of this job
      8. job_id
        1. The job ID. For example, bquxjob_1234
      9. job_project_number
        1. The number of the project this job belongs to
      10. job_location
        1. The location of the job
      11. linked_project_number
        1. The project number of the subscriber's project
      12. linked_dataset_id
        1. The linked dataset ID of the subscriber's dataset
      13. subscriber_org_number
        1. The organization number in which the job ran. This is the organization number of the subscriber. This field is empty for projects that don't have an organization
      14. subscriber_org_display_name
        1. A human-readable string that refers to the organization in which the job ran. This is the organization display name of the subscriber. This field is empty for projects that don't have an organization
      15. num_rows_processed
        1. The number of rows processed from this table by the job
      16. total_bytes_processed
        1. The total bytes processed from this table by the job
    5. SCHEMATA_REPLICAS
      1. catalog_name
        1. The project ID of the project that contains the dataset.
      2. schema_name
        1. The dataset ID of the dataset.
      3. replica_name
        1. The name of the replica.
      4. location
        1. The region or multi-region the replica was created in.
      5. replica_primary_assigned
        1. If the value is TRUE, the replica has the primary assignment.
      6. replica_primary_assignment_complete
        1. If the value is TRUE, the primary assignment is complete. If the value is FALSE, the replica is not (yet) the primary replica, even if replica_primary_assigned equals TRUE.
      7. creation_time
        1. The replica's creation time. When the replica is first created, it is not fully synced with the primary replica until creation_complete equals TRUE. The value of creation_time is set before creation_complete equals TRUE.
      8. creation_complete
        1. If the value is TRUE, the initial full sync of the primary replica to the secondary replica is complete.
      9. replication_time
        1. The value for replication_time indicates the staleness of the dataset. Some tables in the replica might be ahead of this timestamp. This value is only visible in the secondary region. If the dataset contains a table with streaming data, the value of replication_time will not be accurate.
    6. How do I
      1. Get all the datasets
      2. List all linked datasets by a shared dataset
      3. Retrieve the default table expiration time for all datasets
      4. Retrieve labels for all datasets in a project
      5. Get the total number of jobs executed on all shared tables
      6. Get the most used shared table based on the number of rows processed
      7. Find the top organizations that consume your tables
      8. Get usage metrics for your data exchange
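For example, the default table expiration per dataset (task 3 above) can be read from SCHEMATA_OPTIONS; this sketch assumes you run it in the project and location that hold the datasets:

```sql
SELECT
  schema_name,
  option_value AS default_table_expiration_days
FROM
  INFORMATION_SCHEMA.SCHEMATA_OPTIONS
WHERE
  option_name = 'default_table_expiration_days';
```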
  5. Jobs
    1. JOBS / JOBS_BY_USER / JOBS_BY_FOLDER / JOBS_BY_ORGANIZATION
      1. bi_engine_statistics
        1. If the project is configured to use the BI Engine SQL Interface, then this field contains BiEngineStatistics. Otherwise NULL.
      2. cache_hit
        1. Whether the query results of this job were from a cache. If you have a multi-query statement job, cache_hit for your parent query is NULL.
      3. creation_time
        1. (Partitioning column) Creation time of this job. Partitioning is based on the UTC time of this timestamp.
      4. destination_table
        1. Destination table for results, if any.
      5. dml_statistics
        1. If the job is a query with a DML statement, the value is a record with the following fields:
        2. inserted_row_count: The number of rows that were inserted.
        3. deleted_row_count: The number of rows that were deleted.
        4. updated_row_count: The number of rows that were updated.
        5. For all other jobs, the value is NULL.
        6. This column is present in the INFORMATION_SCHEMA.JOBS_BY_USER and INFORMATION_SCHEMA.JOBS_BY_PROJECT views.
      6. end_time
        1. The end time of this job, in milliseconds since the epoch. This field represents the time when the job enters the DONE state.
      7. error_result
        1. Details of any errors as ErrorProto objects.
      8. job_id
        1. The ID of the job. For example, bquxjob_1234.
      9. job_stages
        1. Query stages of the job.
      10. job_type
        1. The type of the job. Can be QUERY, LOAD, EXTRACT, COPY, or NULL. A NULL value indicates an internal job, such as a script job statement evaluation or a materialized view refresh.
      11. labels
        1. Array of labels applied to the job as key-value pairs.
      12. parent_job_id
        1. ID of the parent job, if any.
      13. priority
        1. The priority of this job. Valid values include INTERACTIVE and BATCH.
      14. project_id
        1. (Clustering column) The ID of the project.
      15. project_number
        1. The number of the project.
      16. query
        1. SQL query text. Only the JOBS_BY_PROJECT view has the query column.
      17. referenced_tables
        1. Array of tables referenced by the job. Only populated for query jobs.
      18. reservation_id
        1. Name of the primary reservation assigned to this job, in the format RESERVATION_ADMIN_PROJECT:RESERVATION_LOCATION.RESERVATION_NAME.
        2. RESERVATION_ADMIN_PROJECT: the name of the Google Cloud project that administers the reservation
        3. RESERVATION_LOCATION: the location of the reservation
        4. RESERVATION_NAME: the name of the reservation
      19. session_info
        1. Details about the session in which this job ran, if any.
      20. start_time
        1. The start time of this job, in milliseconds since the epoch. This field represents the time when the job transitions from the PENDING state to either RUNNING or DONE.
      21. state
        1. Running state of the job. Valid states include PENDING, RUNNING, and DONE.
      22. statement_type
        1. The type of query statement. For example, DELETE, INSERT, SCRIPT, SELECT, or UPDATE. See QueryStatementType for list of valid values.
      23. timeline
        1. Query timeline of the job. Contains snapshots of query execution.
      24. total_bytes_billed
        1. If the project is configured to use on-demand pricing, then this field contains the total bytes billed for the job. If the project is configured to use flat-rate pricing, then you are not billed for bytes and this field is informational only. Note: This column's values are empty for queries that read from tables with row-level access policies. For more information, see best practices for row-level security in BigQuery.
      25. total_bytes_processed
        1. Total bytes processed by the job.
      26. total_modified_partitions
        1. The total number of partitions the job modified. This field is populated for LOAD and QUERY jobs.
      27. total_slot_ms
        1. Slot milliseconds for the job over its entire duration in the RUNNING state, including retries.
      28. transaction_id
        1. ID of the transaction in which this job ran, if any.
      29. user_email
        1. (Clustering column) Email address or service account of the user who ran the job.
      30. query_info
        1. .resource_warning
          1. The warning message that appears if the resource usage during query processing is above the internal threshold of the system. A successful query job can have the resource_warning field populated. With resource_warning, you get additional data points to optimize your queries and to set up monitoring for performance trends of an equivalent set of queries by using query_hashes.
        2. .query_hashes.normalized_literals
          1. Contains the hashes of the query. normalized_literals is a hexadecimal STRING hash that ignores comments, parameter values, UDFs, and literals. This field appears for successful GoogleSQL queries that are not cache hits.
        3. .performance_insights
          1. Performance insights for the job.
      31. transferred_bytes
        1. Total bytes transferred for cross-cloud queries, such as BigQuery Omni cross-cloud transfer jobs.
      32. materialized_view_statistics
        1. Statistics of materialized views considered in a query job.
    2. How do I
      1. Calculate average slot utilization
      2. Load job history
      3. Get the number of load jobs to determine the daily job quota used
      4. Get the last 10 failed jobs
      5. Query the list of long running jobs
      6. Bytes processed per user identity
      7. Hourly breakdown of bytes processed
      8. Query jobs per table
      9. Most expensive queries by project
      10. Get details about a resource warning
      11. Monitor resource warnings grouped by date
      12. Estimate slot usage and cost for queries
      13. View performance insights for queries
      14. View metadata refresh jobs
      15. Analyze performance over time for identical queries
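The "last 10 failed jobs" task above can be sketched against the JOBS view; `region-us` is a placeholder region qualifier:

```sql
SELECT
  job_id,
  creation_time,
  user_email,
  error_result.reason AS error_reason,
  error_result.message AS error_message
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS
WHERE
  error_result IS NOT NULL
ORDER BY
  creation_time DESC
LIMIT 10;
```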
  6. Jobs by timeslice
    1. JOBS_TIMELINE / JOBS_TIMELINE_BY_USER / JOBS_TIMELINE_BY_FOLDER / JOBS_TIMELINE_BY_ORGANIZATION
      1. period_start
        1. Start time of this period.
      2. period_slot_ms
        1. Slot milliseconds consumed in this period.
      3. period_shuffle_ram_usage_ratio
        1. Shuffle usage ratio in the selected time period.
      4. project_id
        1. (Clustering column) ID of the project.
      5. project_number
        1. Number of the project.
      6. folder_numbers
        1. Number IDs of the folders that contain the project, starting with the folder that immediately contains the project, followed by the folder that contains the child folder, and so forth. For example, if `folder_numbers` is `[1, 2, 3]`, then folder `1` immediately contains the project, folder `2` contains `1`, and folder `3` contains `2`.
      7. user_email
        1. (Clustering column) Email address or service account of the user who ran the job.
      8. job_id
        1. ID of the job. For example, bquxjob_1234.
      9. job_type
        1. The type of the job. Can be QUERY, LOAD, EXTRACT, COPY, or null. Job type null indicates an internal job, such as script job statement evaluation or materialized view refresh.
      10. statement_type
        1. The type of query statement, if valid. For example, SELECT, INSERT, UPDATE, or DELETE.
      11. job_creation_time
        1. (Partitioning column) Creation time of this job. Partitioning is based on the UTC time of this timestamp.
      12. job_start_time
        1. Start time of this job.
      13. job_end_time
        1. End time of this job.
      14. state
        1. Running state of the job at the end of this period. Valid states include PENDING, RUNNING, and DONE.
      15. reservation_id
        1. Name of the primary reservation assigned to this job at the end of this period, if applicable.
      16. total_bytes_processed
        1. Total bytes processed by the job.
      17. error_result
        1. Details of error (if any) as an ErrorProto.
      18. cache_hit
        1. Whether the query results of this job were from a cache.
      19. period_estimated_runnable_units
        1. Units of work that can be scheduled immediately in this period. Additional slots for these units of work accelerate your query, provided no other query in the reservation needs additional slots.
    2. How do I
      1. Calculate the slot utilization for every second in the last day
      2. Find number of RUNNING and PENDING jobs over time
      3. See resource usage by jobs at a specific point in time
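Per-second slot utilization over the last day (task 1 above) can be sketched by summing period_slot_ms per period; `region-us` is a placeholder:

```sql
SELECT
  period_start,
  SUM(period_slot_ms) / 1000 AS total_slot_seconds
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE
WHERE
  job_creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
GROUP BY
  period_start
ORDER BY
  period_start;
```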
  7. Reservations
    1. ASSIGNMENTS
      1. ddl
        1. The DDL statement used to create this assignment.
      2. project_id
        1. ID of the administration project.
      3. project_number
        1. Number of the administration project.
      4. assignment_id
        1. ID that uniquely identifies the assignment.
      5. reservation_name
        1. Name of the reservation that the assignment uses.
      6. job_type
        1. The type of job that can use the reservation. Can be PIPELINE, QUERY, ML_EXTERNAL, or BACKGROUND.
      7. assignee_id
        1. ID that uniquely identifies the assignee resource.
      8. assignee_number
        1. Number that uniquely identifies the assignee resource.
      9. assignee_type
        1. Type of assignee resource. Can be organization, folder, or project.
    2. ASSIGNMENT_CHANGES
      1. change_timestamp
        1. Time when the change occurred.
      2. project_id
        1. ID of the administration project.
      3. project_number
        1. Number of the administration project.
      4. assignment_id
        1. ID that uniquely identifies the assignment.
      5. reservation_name
        1. Name of the reservation that the assignment uses.
      6. job_type
        1. The type of job that can use the reservation. Can be PIPELINE or QUERY.
      7. assignee_id
        1. ID that uniquely identifies the assignee resource.
      8. assignee_number
        1. Number that uniquely identifies the assignee resource.
      9. assignee_type
        1. Type of assignee resource. Can be organization, folder, or project.
      10. action
        1. Type of event that occurred with the assignment. Can be CREATE, UPDATE, or DELETE.
      11. user_email
        1. Email address of the user or workforce identity federation subject that made the change. The value is google for changes made by Google, or NULL if the email address is unknown.
      12. state
        1. State of the assignment. Can be PENDING or ACTIVE.
    3. CAPACITY_COMMITMENTS
      1. ddl
        1. The DDL statement used to create this capacity commitment.
      2. project_id
        1. ID of the administration project.
      3. project_number
        1. Number of the administration project.
      4. capacity_commitment_id
        1. ID that uniquely identifies the capacity commitment.
      5. commitment_plan
        1. Commitment plan of the capacity commitment.
      6. state
        1. State the capacity commitment is in. Can be PENDING or ACTIVE.
      7. slot_count
        1. Slot count associated with the capacity commitment.
      8. edition
        1. The edition associated with this reservation.
      9. is_flat_rate
        1. Whether the commitment is associated with the legacy flat-rate capacity model or an edition. If FALSE, the commitment is associated with an edition. If TRUE, the commitment uses the legacy flat-rate capacity model.
      10. renewal_plan
        1. New commitment plan after the end of current commitment plan. You can change the renewal plan for a commitment at any time until it expires.
    4. CAPACITY_COMMITMENT_CHANGES
      1. change_timestamp
        1. Time when the change occurred.
      2. project_id
        1. ID of the administration project.
      3. project_number
        1. Number of the administration project.
      4. capacity_commitment_id
        1. ID that uniquely identifies the capacity commitment.
      5. commitment_plan
        1. Commitment plan of the capacity commitment.
      6. state
        1. State the capacity commitment is in. Can be PENDING or ACTIVE.
      7. slot_count
        1. Slot count associated with the capacity commitment.
      8. action
        1. Type of event that occurred with the capacity commitment. Can be CREATE, UPDATE, or DELETE.
      9. user_email
        1. Email address of the user or workforce identity federation subject that made the change. The value is google for changes made by Google, or NULL if the email address is unknown.
      10. commitment_start_time
        1. The start of the current commitment period. Only applicable for ACTIVE capacity commitments, otherwise this is NULL.
      11. commitment_end_time
        1. The end of the current commitment period. Only applicable for ACTIVE capacity commitments, otherwise this is NULL.
      12. failure_status
        1. For a FAILED commitment plan, provides the failure reason, otherwise this is NULL. RECORD consists of code and message.
      13. renewal_plan
        1. The plan this capacity commitment is converted to after commitment_end_time passes. After the plan is changed, the committed period is extended according to the commitment plan. Only applicable for ANNUAL and TRIAL commitments, otherwise this is NULL.
      14. edition
        1. The edition associated with this reservation.
      15. is_flat_rate
        1. Whether the commitment is associated with the legacy flat-rate capacity model or an edition. If FALSE, the commitment is associated with an edition. If TRUE, the commitment uses the legacy flat-rate capacity model.
    5. RESERVATIONS
      1. ddl
        1. The DDL statement used to create this reservation.
      2. project_id
        1. ID of the administration project.
      3. project_number
        1. Number of the administration project.
      4. reservation_name
        1. User provided reservation name.
      5. ignore_idle_slots
        1. If false, any query using this reservation can use unused idle slots from other capacity commitments.
      6. slot_capacity
        1. Baseline of the reservation.
      7. target_job_concurrency
        1. The target number of queries that can execute simultaneously, which is limited by available resources. If zero, then this value is computed automatically based on available resources.
      8. autoscale
        1. current_slots
          1. The number of slots added to the reservation by autoscaling
        2. max_slots
          1. The maximum number of slots that could be added to the reservation by autoscaling
      9. edition
        1. The edition associated with this reservation.
    6. RESERVATION_CHANGES
      1. change_timestamp
        1. Time when the change occurred.
      2. project_id
        1. ID of the administration project.
      3. project_number
        1. Number of the administration project.
      4. reservation_name
        1. User provided reservation name.
      5. ignore_idle_slots
        1. If false, any query using this reservation can use unused idle slots from other capacity commitments.
      6. action
        1. Type of event that occurred with the reservation. Can be CREATE, UPDATE, or DELETE.
      7. slot_capacity
        1. Baseline of the reservation.
      8. user_email
        1. Email address of the user or workforce identity federation subject that made the change. The value is google for changes made by Google, or NULL if the email address is unknown.
      9. target_job_concurrency
        1. The target number of queries that can execute simultaneously, which is limited by available resources. If zero, then this value is computed automatically based on available resources.
      10. autoscale
        1. current_slots
          1. the number of slots added to the reservation by autoscaling.
        2. max_slots
          1. the maximum number of slots that could be added to the reservation by autoscaling.
      11. edition
        1. The edition associated with this reservation.
    7. RESERVATION_CHANGES_TIMELINE
      1. period_start
        1. Start time of this one-minute period.
      2. project_id
        1. ID of the administration project.
      3. project_number
        1. Number of the administration project.
      4. reservation_name
        1. User provided reservation name.
      5. ignore_idle_slots
        1. If false, any query using this reservation can use unused idle slots from other capacity commitments.
      6. slots_assigned
        1. The number of slots assigned to this reservation.
      7. slots_max_assigned
        1. The maximum slot capacity for this reservation, including slot sharing. If ignore_idle_slots is true, this is the same as slots_assigned, otherwise this is the total number of slots in all capacity commitments in the admin project.
      8. autoscale
        1. current_slots
          1. the number of slots added to the reservation by autoscaling.
        2. max_slots
          1. the maximum number of slots that could be added to the reservation by autoscaling.
      9. reservation_id
        1. For joining with the jobs_timeline table. This is of the form project_id:location.reservation_name.
    8. How do I
      1. Get a project's currently assigned reservation and its slot capacity
      2. Display the user who has made the latest assignment update to a particular assignment within a specified date
      3. Return a list of active capacity commitments for the current project
      4. Find user who has made the latest capacity commitment update to the current project within the specified date
      5. See slot usage, slot capacity, and assigned reservation for a project with a reservation assignment, over the past hour
      6. Get the history of changes for a given reservation
      7. Show per-minute slot usage from projects assigned to YOUR_RESERVATION_ID across all jobs
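Task 1 above, a project's assigned reservation and its slot capacity, can be sketched by joining ASSIGNMENTS to RESERVATIONS; `admin-project` and `region-us` are placeholders for the administration project and its location:

```sql
SELECT
  a.assignee_id,
  a.assignee_type,
  a.reservation_name,
  r.slot_capacity,
  r.edition
FROM
  `admin-project`.`region-us`.INFORMATION_SCHEMA.ASSIGNMENTS AS a
JOIN
  `admin-project`.`region-us`.INFORMATION_SCHEMA.RESERVATIONS AS r
  ON a.reservation_name = r.reservation_name
WHERE
  a.job_type = 'QUERY';
```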
  8. Routines
    1. PARAMETERS
      1. SPECIFIC_CATALOG
        1. The name of the project that contains the dataset in which the routine containing the parameter is defined
      2. SPECIFIC_SCHEMA
        1. The name of the dataset that contains the routine in which the parameter is defined
      3. SPECIFIC_NAME
        1. The name of the routine in which the parameter is defined
      4. ORDINAL_POSITION
        1. The 1-based position of the parameter, or 0 for the return value
      5. PARAMETER_MODE
        1. The mode of the parameter, either IN, OUT, INOUT, or NULL
      6. IS_RESULT
        1. Whether the parameter is the result of the function, either YES or NO
      7. PARAMETER_NAME
        1. The name of the parameter
      8. DATA_TYPE
        1. The type of the parameter; ANY TYPE if the parameter is defined as an ANY TYPE
      9. PARAMETER_DEFAULT
        1. The default value of the parameter as a SQL literal value, always NULL
      10. IS_AGGREGATE
        1. Whether this is an aggregate parameter, always NULL
    2. ROUTINES
      1. SPECIFIC_CATALOG
        1. The name of the project that contains the dataset where the routine is defined
      2. SPECIFIC_SCHEMA
        1. The name of the dataset that contains the routine
      3. SPECIFIC_NAME
        1. The name of the routine
      4. ROUTINE_CATALOG
        1. The name of the project that contains the dataset where the routine is defined
      5. ROUTINE_SCHEMA
        1. The name of the dataset that contains the routine
      6. ROUTINE_NAME
        1. The name of the routine
      7. ROUTINE_TYPE
        1. FUNCTION
          1. BigQuery persistent user-defined function
        2. PROCEDURE
          1. BigQuery stored procedure
        3. TABLE FUNCTION
          1. BigQuery table function
      8. DATA_TYPE
        1. The data type that the routine returns. NULL if the routine is a stored procedure
      9. ROUTINE_BODY
        1. How the body of the routine is defined: SQL, or EXTERNAL if the routine is a JavaScript user-defined function
      10. ROUTINE_DEFINITION
        1. The definition of the routine
      11. EXTERNAL_LANGUAGE
        1. JAVASCRIPT
          1. If the routine is a JavaScript user-defined function
        2. NULL
          1. If the routine was defined with SQL
      12. IS_DETERMINISTIC
        1. YES if the routine is known to be deterministic, NO if it is not, or NULL if unknown
      13. SECURITY_TYPE
        1. Security type of the routine, always NULL
      14. CREATED
        1. The routine's creation time
      15. LAST_ALTERED
        1. The routine's last modification time
      16. DDL
        1. The DDL statement that can be used to create the routine, such as CREATE FUNCTION or CREATE PROCEDURE
    3. ROUTINE_OPTIONS
      1. SPECIFIC_CATALOG
        1. The name of the project that contains the routine where the option is defined
      2. SPECIFIC_SCHEMA
        1. The name of the dataset that contains the routine where the option is defined
      3. SPECIFIC_NAME
        1. The name of the routine
      4. OPTION_NAME
        1. description
          1. The description of the routine, if defined
        2. library
          1. The names of the libraries referenced in the routine. Only applicable to JavaScript UDFs
        3. data_governance_type
          1. The name of a supported data governance type. For example, DATA_MASKING.
      5. OPTION_TYPE
      6. OPTION_VALUE
    4. How do I
      1. Retrieve all parameters from the PARAMETERS view?
      2. Retrieve all values from ROUTINES view?
      3. Retrieve all values from ROUTINE_OPTIONS view?
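The three questions above are plain SELECTs against the routine views. A minimal sketch, assuming a dataset named `mydataset` in the current project (the dataset name is a placeholder):

```sql
-- Retrieve all parameters for routines defined in `mydataset`.
SELECT *
FROM mydataset.INFORMATION_SCHEMA.PARAMETERS;

-- Retrieve all routines, including their type and return data type.
SELECT *
FROM mydataset.INFORMATION_SCHEMA.ROUTINES;

-- Retrieve all routine options, such as descriptions and libraries.
SELECT *
FROM mydataset.INFORMATION_SCHEMA.ROUTINE_OPTIONS;
```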
  9. Search indexes
    1. SEARCH_INDEXES
      1. index_catalog
        1. The name of the project that contains the dataset.
      2. index_schema
        1. The name of the dataset that contains the index.
      3. table_name
        1. The name of the base table that the index is created on.
      4. index_name
        1. The name of the index.
      5. index_status
        1. ACTIVE
          1. Index is usable or being created. Refer to the coverage_percentage to see the progress of index creation.
        2. PENDING DISABLEMENT
          1. Total size of indexed base tables exceeds your organization's limit; the index is queued for deletion. While in this state, the index is usable in search queries and you are charged for the search index storage.
        3. TEMPORARILY DISABLED
          1. Either the total size of indexed base tables exceeds your organization's limit, or the base indexed table is smaller than 10GB. While in this state, the index is not used in search queries and you are not charged for the search index storage.
        4. PERMANENTLY DISABLED
          1. There is an incompatible schema change on the base table, such as changing the type of an indexed column from STRING to INT64.
      6. creation_time
        1. The time the index was created.
      7. last_modification_time
        1. The last time the index configuration was modified. For example, deleting an indexed column.
      8. last_refresh_time
        1. The last time the table data was indexed. A NULL value means the index is not yet available.
      9. disable_time
        1. The time the status of the index was set to DISABLED. The value is NULL if the index status is not DISABLED.
      10. disable_reason
        1. The reason the index was disabled. NULL if the index status is not DISABLED.
      11. DDL
        1. The DDL statement used to create the index.
      12. coverage_percentage
        1. The approximate percentage of table data that has been indexed. 0% means the index is not usable in a SEARCH query, even if some data has already been indexed.
      13. unindexed_row_count
        1. The number of rows in the base table that have not been indexed.
      14. total_logical_bytes
        1. The number of billable logical bytes for the index.
      15. total_storage_bytes
        1. The number of billable storage bytes for the index.
      16. analyzer
        1. The text analyzer to use to generate tokens for the search index.
    2. SEARCH_INDEXES_COLUMNS
      1. index_catalog
        1. The name of the project that contains the dataset.
      2. index_schema
        1. The name of the dataset that contains the index.
      3. table_name
        1. The name of the base table that the index is created on.
      4. index_name
        1. The name of the index.
      5. index_column_name
        1. The name of the top-level indexed column.
      6. index_field_path
        1. The full path of the expanded indexed field, starting with the column name. Fields are separated by a period.
    3. How do I
      1. Show all active search indexes on tables in the dataset
      2. Create a search index on all columns of a table
      3. View the search index status and the data type of each column in a dataset
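A sketch of the first two tasks, assuming placeholder names `mydataset`, `my_table`, and `my_index`:

```sql
-- Show all active search indexes on tables in `mydataset`,
-- including how much of the data has been indexed so far.
SELECT table_name, index_name, coverage_percentage, ddl
FROM mydataset.INFORMATION_SCHEMA.SEARCH_INDEXES
WHERE index_status = 'ACTIVE';

-- Create a search index on all columns of a table.
CREATE SEARCH INDEX my_index
ON mydataset.my_table(ALL COLUMNS);
```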
  10. Sessions
    1. SESSIONS_BY_PROJECT
      1. creation_time
        1. (Partitioning column) Creation time of this session. Partitioning is based on the UTC time of this timestamp.
      2. expiration_time
        1. (Partitioning column) Expiration time of this session. Partitioning is based on the UTC time of this timestamp.
      3. is_active
        1. Whether the session is still active. TRUE if yes, otherwise FALSE.
      4. last_modified_time
        1. (Partitioning column) Time when the session was last modified. Partitioning is based on the UTC time of this timestamp.
      5. principal_subject
        1. (Clustering column) Principal identifier of the user who ran the job.
      6. project_id
        1. (Clustering column) ID of the project.
      7. project_number
        1. Number of the project.
      8. session_id
        1. ID of the session. For example, bquxsession_1234.
      9. user_email
        1. (Clustering column) Email address or service account of the user who ran the session.
    2. SESSIONS_BY_USER
      1. creation_time
        1. (Partitioning column) Creation time of this session. Partitioning is based on the UTC time of this timestamp.
      2. expiration_time
        1. (Partitioning column) Expiration time of this session. Partitioning is based on the UTC time of this timestamp.
      3. is_active
        1. Whether the session is still active. TRUE if yes, otherwise FALSE.
      4. last_modified_time
        1. (Partitioning column) Time when the session was last modified. Partitioning is based on the UTC time of this timestamp.
      5. principal_subject
        1. (Clustering column) Principal identifier of the user who ran the job.
      6. project_id
        1. (Clustering column) ID of the project.
      7. project_number
        1. Number of the project.
      8. session_id
        1. ID of the session. For example, bquxsession_1234.
      9. user_email
        1. (Clustering column) Email address or service account of the user who ran the session.
    3. How do I
      1. List all users or service accounts that created sessions for a given project within the last day
      2. List sessions that were created by the current user
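Both tasks can be sketched as follows; the region qualifier `region-us` is an example, substitute your own region:

```sql
-- List all users or service accounts that created sessions in this
-- project within the last day.
SELECT DISTINCT user_email
FROM `region-us`.INFORMATION_SCHEMA.SESSIONS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY);

-- List sessions created by the current user.
SELECT session_id, creation_time, is_active
FROM `region-us`.INFORMATION_SCHEMA.SESSIONS_BY_USER;
```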
  11. Streaming
    1. STREAMING_TIMELINE, STREAMING_TIMELINE_BY_FOLDER, STREAMING_TIMELINE_BY_ORGANIZATION
      1. start_timestamp
        1. (Partitioning column) Start timestamp of the 1 minute interval for the aggregated statistics.
      2. project_id
        1. (Clustering column) ID of the project.
      3. project_number
        1. Number of the project.
      4. dataset_id
        1. (Clustering column) ID of the dataset.
      5. table_id
        1. (Clustering column) ID of the table.
      6. error_code
        1. Error code returned for the requests specified by this row. NULL for successful requests.
      7. total_requests
        1. Total number of requests within the 1 minute interval.
      8. total_rows
        1. Total number of rows from all requests within the 1 minute interval.
      9. total_input_bytes
        1. Total number of bytes from all rows within the 1 minute interval.
    2. How do I
      1. Calculate the per minute breakdown of total failed requests for all tables in the project
      2. Get per minute breakdown for all requests with error codes
      3. List tables with the most incoming traffic
      4. Get streaming error ratio for a table
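A sketch of the first task, using the project-level view name as given in this outline and an example region qualifier `region-us`:

```sql
-- Per-minute breakdown of total failed streaming insert requests for
-- all tables in the project. Rows with a NULL error_code represent
-- successful requests, so they are filtered out.
SELECT
  start_timestamp,
  SUM(total_requests) AS num_failed_requests
FROM `region-us`.INFORMATION_SCHEMA.STREAMING_TIMELINE
WHERE error_code IS NOT NULL
GROUP BY start_timestamp
ORDER BY start_timestamp DESC;
```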
  12. Tables
    1. COLUMNS
      1. TABLE_CATALOG
        1. The project ID of the project that contains the dataset
      2. TABLE_SCHEMA
        1. The name of the dataset that contains the table, also referred to as the datasetId
      3. TABLE_NAME
        1. The name of the table or view, also referred to as the tableId
      4. COLUMN_NAME
        1. The name of the column
      5. ORDINAL_POSITION
        1. The 1-indexed offset of the column within the table; if it's a pseudo column such as _PARTITIONTIME or _PARTITIONDATE, the value is NULL
      6. IS_NULLABLE
        1. YES or NO depending on whether the column's mode allows NULL values
      7. DATA_TYPE
        1. The column's GoogleSQL data type
      8. IS_GENERATED
        1. The value is always NEVER
      9. GENERATION_EXPRESSION
        1. The value is always NULL
      10. IS_STORED
        1. The value is always NULL
      11. IS_HIDDEN
        1. YES or NO depending on whether the column is a pseudo column such as _PARTITIONTIME or _PARTITIONDATE
      12. IS_UPDATABLE
        1. The value is always NULL
      13. IS_SYSTEM_DEFINED
        1. YES or NO depending on whether the column is a pseudo column such as _PARTITIONTIME or _PARTITIONDATE
      14. IS_PARTITIONING_COLUMN
        1. YES or NO depending on whether the column is a partitioning column
      15. CLUSTERING_ORDINAL_POSITION
        1. The 1-indexed offset of the column within the table's clustering columns; the value is NULL if the table is not a clustered table
      16. COLLATION_NAME
        1. The name of the collation specification if it exists; otherwise, NULL
        2. If a STRING or ARRAY<STRING> is passed in, the collation specification is returned if it exists; otherwise NULL is returned
      17. COLUMN_DEFAULT
        1. The default value of the column if it exists; otherwise, the value is NULL
      18. ROUNDING_MODE
        1. The mode of rounding that's used for values written to the field if its type is a parameterized NUMERIC or BIGNUMERIC; otherwise, the value is NULL
    2. COLUMN_FIELD_PATHS
      1. TABLE_CATALOG
        1. The project ID of the project that contains the dataset
      2. TABLE_SCHEMA
        1. The name of the dataset that contains the table also referred to as the datasetId
      3. TABLE_NAME
        1. The name of the table or view also referred to as the tableId
      4. COLUMN_NAME
        1. The name of the column
      5. FIELD_PATH
        1. The path to a column nested within a `RECORD` or `STRUCT` column
      6. DATA_TYPE
        1. The column's GoogleSQL data type
      7. DESCRIPTION
        1. The column's description
      8. COLLATION_NAME
        1. The name of the collation specification if it exists; otherwise, NULL
        2. If a STRING, ARRAY<STRING>, or STRING field in a STRUCT is passed in, the collation specification is returned if it exists; otherwise, NULL is returned
      9. ROUNDING_MODE
        1. The mode of rounding that's used when applying precision and scale to parameterized NUMERIC or BIGNUMERIC values; otherwise, the value is NULL
    3. CONSTRAINT_COLUMN_USAGE
      1. TABLE_CATALOG
        1. The name of the project that contains the dataset.
      2. TABLE_SCHEMA
        1. The name of the dataset that contains the table. Also referred to as the datasetId.
      3. TABLE_NAME
        1. The name of the table. Also referred to as the tableId.
      4. COLUMN_NAME
        1. The column name.
      5. CONSTRAINT_CATALOG
        1. The constraint project name.
      6. CONSTRAINT_SCHEMA
        1. The constraint dataset name.
      7. CONSTRAINT_NAME
        1. The constraint name. It can be the name of the primary key if the column is used by the primary key or the name of foreign key if the column is used by a foreign key.
    4. KEY_COLUMN_USAGE
      1. CONSTRAINT_CATALOG
        1. The constraint project name.
      2. CONSTRAINT_SCHEMA
        1. The constraint dataset name.
      3. CONSTRAINT_NAME
        1. The constraint name.
      4. TABLE_CATALOG
        1. The project name of the constrained table.
      5. TABLE_SCHEMA
        1. The name of the constrained table dataset.
      6. TABLE_NAME
        1. The name of the constrained table.
      7. COLUMN_NAME
        1. The name of the constrained column.
      8. ORDINAL_POSITION
        1. The ordinal position of the column within the constraint key (starting at 1).
      9. POSITION_IN_UNIQUE_CONSTRAINT
        1. For foreign keys, the ordinal position of the column within the primary key constraint (starting at 1). This value is NULL for primary key constraints.
    5. PARTITIONS
      1. TABLE_CATALOG
        1. The project ID of the project that contains the table
      2. TABLE_SCHEMA
        1. The name of the dataset that contains the table, also referred to as the datasetId
      3. TABLE_NAME
        1. The name of the table, also referred to as the tableId
      4. PARTITION_ID
        1. A single partition's ID. For unpartitioned tables, the value is NULL. For partitioned tables that contain rows with NULL values in the partitioning column, the value is __NULL__.
      5. TOTAL_ROWS
        1. The total number of rows in the partition
      6. TOTAL_LOGICAL_BYTES
        1. The total number of logical bytes in the partition
      7. LAST_MODIFIED_TIME
        1. The time when the data was most recently written to the partition
      8. STORAGE_TIER
        1. ACTIVE
          1. the partition is billed as active storage
        2. LONG_TERM
          1. the partition is billed as long-term storage
    6. TABLES
      1. table_catalog
        1. The project ID of the project that contains the dataset.
      2. table_schema
        1. The name of the dataset that contains the table or view. Also referred to as the datasetId.
      3. table_name
        1. The name of the table or view. Also referred to as the tableId.
      4. table_type
        1. BASE TABLE
        2. CLONE
        3. SNAPSHOT
        4. VIEW
        5. MATERIALIZED VIEW
        6. EXTERNAL
      5. is_insertable_into
        1. YES or NO depending on whether the table supports DML INSERT statements
      6. is_typed
        1. The value is always NO
      7. creation_time
        1. The table's creation time
      8. base_table_catalog
        1. For table clones and table snapshots, the base table's project. Applicable only to tables with table_type set to CLONE or SNAPSHOT.
      9. base_table_schema
        1. For table clones and table snapshots, the base table's dataset. Applicable only to tables with table_type set to CLONE or SNAPSHOT.
      10. base_table_name
        1. For table clones and table snapshots, the base table's name. Applicable only to tables with table_type set to CLONE or SNAPSHOT.
      11. snapshot_time_ms
        1. For table clones and table snapshots, the time when the clone or snapshot operation was run on the base table to create this table. If time travel was used, then this field contains the time travel timestamp. Otherwise, the snapshot_time_ms field is the same as the creation_time field. Applicable only to tables with table_type set to CLONE or SNAPSHOT.
      12. ddl
        1. The DDL statement that can be used to recreate the table, such as CREATE TABLE or CREATE VIEW
      13. default_collation_name
        1. The name of the default collation specification if it exists; otherwise, NULL.
      14. upsert_stream_apply_watermark
        1. For tables that use change data capture (CDC), the time when row modifications were last applied. For more information, see Monitor table upsert operation progress.
    7. TABLE_OPTIONS
      1. TABLE_CATALOG
        1. The project ID of the project that contains the dataset
      2. TABLE_SCHEMA
        1. The name of the dataset that contains the table or view, also referred to as the datasetId
      3. TABLE_NAME
        1. The name of the table or view, also referred to as the tableId
      4. OPTION_NAME
        1. partition_expiration_days
          1. The default lifetime, in days, of all partitions in a partitioned table
        2. expiration_timestamp
          1. The time when this table expires
        3. kms_key_name
          1. The name of the Cloud KMS key used to encrypt the table
        4. friendly_name
          1. The table's descriptive name
        5. description
          1. A description of the table
        6. labels
          1. An array of STRUCTs that represent the labels on the table
        7. require_partition_filter
          1. Whether queries over the table require a partition filter
        8. enable_refresh
          1. Whether automatic refresh is enabled for a materialized view
        9. refresh_interval_minutes
          1. How frequently a materialized view is refreshed
        10. allow_jagged_rows
          1. If true, allow rows that are missing trailing optional columns. Applies to CSV data.
        11. allow_quoted_newlines
          1. If true, allow quoted data sections that contain newline characters in the file.
        12. bigtable_options
          1. Only required when creating a Bigtable external table. Specifies the schema of the Bigtable external table in JSON format.
        13. compression
          1. The compression type of the data source. Supported values include: GZIP. If not specified, the data source is uncompressed. Applies to CSV and JSON data.
        14. decimal_target_types
          1. Determines how to convert a Decimal type. Equivalent to ExternalDataConfiguration.decimal_target_types. Example: ["NUMERIC", "BIGNUMERIC"].
        15. enable_list_inference
          1. If true, use schema inference specifically for Parquet LIST logical type.
        16. enable_logical_types
          1. If true, convert Avro logical types into their corresponding SQL types. For more information, see Logical types.
        17. encoding
          1. The character encoding of the data. Supported values include: UTF8 (or UTF-8), ISO_8859_1 (or ISO-8859-1). Applies to CSV data.
        18. enum_as_string
          1. If true, infer Parquet ENUM logical type as STRING instead of BYTES by default.
        19. expiration_timestamp
          1. The time when this table expires. If not specified, the table does not expire.
        20. field_delimiter
          1. The separator for fields in a CSV file.
        21. format
          1. The format of the external data. Supported values for CREATE EXTERNAL TABLE include: AVRO, CSV, DATASTORE_BACKUP, GOOGLE_SHEETS, NEWLINE_DELIMITED_JSON (or JSON), ORC, PARQUET, CLOUD_BIGTABLE. Supported values for LOAD DATA include: AVRO, CSV, NEWLINE_DELIMITED_JSON (or JSON), ORC, PARQUET.
        22. hive_partition_uri_prefix
          1. A common prefix for all source URIs before the partition key encoding begins. Applies only to hive-partitioned external tables. Applies to Avro, CSV, JSON, Parquet, and ORC data.
        23. file_set_spec_type
          1. FILE_SYSTEM_MATCH
            1. Expands source URIs by listing files from the object store. This is the default behavior if FileSetSpecType is not set.
          2. NEW_LINE_DELIMITED_MANIFEST
            1. Indicates that the provided URIs are newline-delimited manifest files, with one URI per line. Wildcard URIs are not supported in the manifest files.
        24. ignore_unknown_values
          1. If true, ignore extra values that are not represented in the table schema, without returning an error. Applies to CSV and JSON data.
        25. json_extension
          1. For JSON data, indicates a particular JSON interchange format. If not specified, BigQuery reads the data as generic JSON records. Supported values include: GEOJSON (newline-delimited GeoJSON data).
        26. max_bad_records
          1. The maximum number of bad records to ignore when reading the data.
        27. max_staleness
          1. Applicable for BigLake tables and object tables. Specifies whether cached metadata is used by operations against the table, and how fresh the cached metadata must be in order for the operation to use it. To disable metadata caching, specify 0. This is the default.
        28. metadata_cache_mode
          1. Applicable for BigLake tables and object tables. Specifies whether the metadata cache for the table is refreshed automatically or manually. Set to AUTOMATIC for the metadata cache to be refreshed at a system-defined interval, usually somewhere between 30 and 60 minutes. Set to MANUAL if you want to refresh the metadata cache on a schedule you determine. In this case, you can call the BQ.REFRESH_EXTERNAL_METADATA_CACHE system procedure to refresh the cache.
        29. null_marker
          1. The string that represents NULL values in a CSV file.
        30. object_metadata
          1. Only required when creating an object table. Set the value of this option to SIMPLE when creating an object table.
        31. preserve_ascii_control_characters
          1. If true, then the embedded ASCII control characters which are the first 32 characters in the ASCII table, ranging from '\x00' to '\x1F', are preserved.
        32. projection_fields
          1. A list of entity properties to load.
        33. quote
          1. The string used to quote data sections in a CSV file. If your data contains quoted newline characters, also set the allow_quoted_newlines property to true.
        34. reference_file_schema_uri
          1. User provided reference file with the table schema. Applies to Parquet/ORC/AVRO data.
        35. require_hive_partition_filter
          1. If true, all queries over this table require a partition filter that can be used to eliminate partitions when reading data. Applies only to hive-partitioned external tables. Applies to Avro, CSV, JSON, Parquet, and ORC data.
        36. sheet_range
          1. Range of a Google Sheets spreadsheet to query from. Applies to Google Sheets data.
        37. skip_leading_rows
          1. The number of rows at the top of a file to skip when reading the data. Applies to CSV and Google Sheets data.
        38. uris
          1. For external tables, including object tables, that aren't Cloud Bigtable tables
      5. OPTION_TYPE
      6. OPTION_VALUE
    8. TABLE_CONSTRAINTS
      1. CONSTRAINT_CATALOG
        1. The constraint project name.
      2. CONSTRAINT_SCHEMA
        1. The constraint dataset name.
      3. CONSTRAINT_NAME
        1. The constraint name.
      4. TABLE_CATALOG
        1. The constrained table project name.
      5. TABLE_SCHEMA
        1. The constrained table dataset name.
      6. TABLE_NAME
        1. The constrained table name.
      7. CONSTRAINT_TYPE
        1. Either PRIMARY KEY or FOREIGN KEY.
      8. IS_DEFERRABLE
        1. YES or NO depending on whether a constraint is deferrable. Only NO is supported.
      9. INITIALLY_DEFERRED
        1. Only NO is supported.
      10. ENFORCED
        1. YES or NO depending on whether the constraint is enforced.
        2. Only NO is supported.
    9. TABLE_SNAPSHOTS
      1. table_catalog
        1. The name of the project that contains the table snapshot
      2. table_schema
        1. The name of the dataset that contains the table snapshot
      3. table_name
        1. The name of the table snapshot
      4. base_table_catalog
        1. The name of the project that contains the base table
      5. base_table_schema
        1. The name of the dataset that contains the base table
      6. base_table_name
        1. The name of the base table
      7. snapshot_time
        1. The time that the table snapshot was created
    10. TABLE_STORAGE, TABLE_STORAGE_BY_ORGANIZATION
      1. PROJECT_ID
        1. The project ID of the project that contains the dataset
      2. TABLE_CATALOG
        1. The project ID of the project that contains the dataset
      3. PROJECT_NUMBER
        1. The project number of the project that contains the dataset
      4. TABLE_SCHEMA
        1. The name of the dataset that contains the table or materialized view, also referred to as the datasetId
      5. TABLE_NAME
        1. The name of the table or materialized view, also referred to as the tableId
      6. CREATION_TIME
        1. The table's creation time
      7. DELETED
        1. Indicates whether or not the table is deleted
      8. STORAGE_LAST_MODIFIED_TIME
        1. The most recent time that data was written to the table.
      9. TOTAL_ROWS
        1. The total number of rows in the table or materialized view
      10. TOTAL_PARTITIONS
        1. The number of partitions present in the table or materialized view. Unpartitioned tables return 0.
      11. TOTAL_LOGICAL_BYTES
        1. Total number of logical (uncompressed) bytes in the table or materialized view
      12. ACTIVE_LOGICAL_BYTES
        1. Number of logical (uncompressed) bytes that are less than 90 days old
      13. LONG_TERM_LOGICAL_BYTES
        1. Number of logical (uncompressed) bytes that are more than 90 days old
      14. TOTAL_PHYSICAL_BYTES
        1. Total number of physical (compressed) bytes used for storage, including active, long term, and time travel (deleted or changed data) bytes
      15. ACTIVE_PHYSICAL_BYTES
        1. Number of physical (compressed) bytes less than 90 days old, including time travel (deleted or changed data) bytes
      16. LONG_TERM_PHYSICAL_BYTES
        1. Number of physical (compressed) bytes more than 90 days old
      17. TIME_TRAVEL_PHYSICAL_BYTES
        1. Number of physical (compressed) bytes used by time travel storage (deleted or changed data)
      18. FAIL_SAFE_PHYSICAL_BYTES
        1. Number of physical (compressed) bytes used by fail-safe storage (deleted or changed data)
      19. TABLE_TYPE
        1. The type of table. For example, `EXTERNAL` or `BASE TABLE`
    11. TABLE_STORAGE_USAGE_TIMELINE, TABLE_STORAGE_USAGE_TIMELINE_BY_ORGANIZATION
      1. USAGE_DATE
        1. The billing date for the bytes shown
      2. PROJECT_ID
        1. The project ID of the project that contains the dataset
      3. TABLE_CATALOG
        1. The project ID of the project that contains the dataset
      4. PROJECT_NUMBER
        1. The project number of the project that contains the dataset
      5. TABLE_SCHEMA
        1. The name of the dataset that contains the table or materialized view, also referred to as the datasetId
      6. TABLE_NAME
        1. The name of the table or materialized view, also referred to as the tableId
      7. BILLABLE_TOTAL_LOGICAL_USAGE
        1. The total logical usage, in MB-seconds. Returns 0 if the dataset uses the physical storage billing model.
      8. BILLABLE_ACTIVE_LOGICAL_USAGE
        1. The logical usage that is less than 90 days old, in MB-seconds. Returns 0 if the dataset uses the physical storage billing model.
      9. BILLABLE_LONG_TERM_LOGICAL_USAGE
        1. The logical usage that is more than 90 days old, in MB-seconds. Returns 0 if the dataset uses the physical storage billing model.
      10. BILLABLE_TOTAL_PHYSICAL_USAGE
        1. The total usage in MB-seconds. This includes physical bytes used for fail-safe and time travel storage. Returns 0 if the dataset uses the logical storage billing model.
      11. BILLABLE_ACTIVE_PHYSICAL_USAGE
        1. The physical usage that is less than 90 days old, in MB-seconds. This includes physical bytes used for fail-safe and time travel storage. Returns 0 if the dataset uses the logical storage billing model.
      12. BILLABLE_LONG_TERM_PHYSICAL_USAGE
        1. The physical usage that is more than 90 days old, in MB-seconds. Returns 0 if the dataset uses the logical storage billing model.
    12. How do I
      1. Retrieve columns metadata for a table?
      2. Retrieve metadata from the column field paths?
      3. Show the constraints for a single table in a dataset
      4. DDL statements to create a primary key table and a foreign key table
      5. Calculate the number of logical bytes used by each storage tier in all of the tables in a dataset
      6. Create a column that extracts the partition type from the partition_id field and aggregates partition information at the table level of the dataset
      7. Retrieve table metadata for all of the tables in the dataset
      8. Retrieve table metadata for all tables of type CLONE or SNAPSHOT
      9. Retrieve table_name and ddl columns
      10. Retrieve the default table expiration times for all tables in a dataset
      11. Retrieve metadata about all tables in the dataset that contains %test% description
      12. Get all the snapshots
      13. Show total logical bytes billed for the current project
      14. Forecast the price difference per dataset between logical and physical billing models?
      15. Sum the storage usage by day for projects in a specified region
      16. Show the storage usage for a specified day for tables in a dataset that uses logical storage
      17. Show the storage usage for the most recent usage date for tables in a dataset that uses physical storage
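A few of the tasks above, sketched with placeholder names `mydataset` and `mytable`:

```sql
-- Show the constraints for a single table in a dataset.
SELECT *
FROM mydataset.INFORMATION_SCHEMA.TABLE_CONSTRAINTS
WHERE table_name = 'mytable';

-- Number of logical bytes used by each storage tier across all
-- tables in the dataset.
SELECT
  storage_tier,
  SUM(total_logical_bytes) AS logical_bytes
FROM mydataset.INFORMATION_SCHEMA.PARTITIONS
GROUP BY storage_tier;

-- Retrieve table_name and ddl columns for all tables in the dataset.
SELECT table_name, ddl
FROM mydataset.INFORMATION_SCHEMA.TABLES;
```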
  13. Views
    1. VIEWS
      1. TABLE_CATALOG
        1. The name of the project that contains the dataset
      2. TABLE_SCHEMA
        1. The name of the dataset that contains the view, also referred to as the datasetId
      3. TABLE_NAME
        1. The name of the view, also referred to as the tableId
      4. VIEW_DEFINITION
        1. The SQL query that defines the view
      5. CHECK_OPTION
        1. The value returned is always NULL
      6. USE_STANDARD_SQL
        1. YES if the view was created by using a GoogleSQL query; NO if useLegacySql is set to true
    2. MATERIALIZED_VIEWS
      1. TABLE_CATALOG
        1. The name of the project that contains the dataset. Also referred to as the projectId.
      2. TABLE_SCHEMA
        1. The name of the dataset that contains the materialized view. Also referred to as the datasetId.
      3. TABLE_NAME
        1. The name of the materialized view. Also referred to as the tableId.
      4. LAST_REFRESH_TIME
        1. The time when this materialized view was last refreshed.
      5. REFRESH_WATERMARK
        1. The refresh watermark of the materialized view. The data contained in materialized view base tables up to this time are included in the materialized view cache.
      6. LAST_REFRESH_STATUS
        1. Error result of the last automatic refresh job as an ErrorProto object. If present, indicates that the last automatic refresh was unsuccessful.
    3. How do I
      1. Get all the views
      2. Retrieve all the unhealthy materialized views
      3. Retrieve all materialized views
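A sketch of these queries, assuming a placeholder dataset `mydataset`:

```sql
-- Get all the views, with the SQL that defines each one.
SELECT table_name, view_definition
FROM mydataset.INFORMATION_SCHEMA.VIEWS;

-- Retrieve the unhealthy materialized views: a non-NULL
-- last_refresh_status means the last automatic refresh failed.
SELECT table_name, last_refresh_time, last_refresh_status
FROM mydataset.INFORMATION_SCHEMA.MATERIALIZED_VIEWS
WHERE last_refresh_status IS NOT NULL;
```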