
I am encountering an error while trying to copy a CSV file into Snowflake from an S3 bucket. This process worked correctly until an incident occurred in Snowflake on July 3rd. The error message points to a problem parsing JSON (the table has one column, 'src', of type VARIANT) in my CSV file:

Error parsing JSON: 0xB10xBB_0xA0_id0xA0667bcf7b2aec09997408c3890xA0activeɠapplications0xAD0x820xB10xA0custom0xB1M0xA0applicationjnaccoffurltx��application File 'file_name_and_path.csv.gz', line 2, character 0 Row 1, column "DATA"["SRC":3] If you would like to continue loading when an error is encountered, use other values such as 'SKIP_FILE' or 'CONTINUE' for the ON_ERROR option. For more information on loading options, please run 'info loading_data' in a SQL client.

Here is the COPY INTO command I am using:

COPY INTO MY_TABLE
FROM 's3://<stage>'
FILE_FORMAT = (
    TYPE = CSV 
    FIELD_DELIMITER = ',' 
    NULL_IF = ('\\N')
    FIELD_OPTIONALLY_ENCLOSED_BY = '"'
    SKIP_HEADER = 1
    ERROR_ON_COLUMN_COUNT_MISMATCH = TRUE
    COMPRESSION = GZIP
)
CREDENTIALS = (AWS_KEY_ID = '***************' AWS_SECRET_KEY = '************************');

This command worked perfectly for about three years across the roughly 25 pipelines I currently run, until the incident.
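
To see exactly what the CSV parser receives for the failing row, the staged file can be queried directly. This is a minimal sketch: it assumes a named stage MY_STAGE pointing at the same S3 path and a named file format MY_CSV_FORMAT containing the options above (both names are placeholders, not objects from my setup).

-- Inspect the first raw CSV column of the first few rows, to check whether
-- line 2 is plain text or something unexpected (e.g. binary / double-compressed data).
SELECT $1
FROM @MY_STAGE/file_name_and_path.csv.gz
  (FILE_FORMAT => 'MY_CSV_FORMAT')
LIMIT 5;

If that output already looks like garbage, the file itself changed; if it looks like normal CSV, the parsing side is the suspect.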

What did I try?

  • Verified the integrity of the CSV file by checking its content and structure.

  • Ensured that the S3 bucket and file path are correct.

  • Confirmed that the AWS credentials are valid and have the necessary permissions.

  • Reviewed the Snowflake documentation for any changes in the COPY INTO command syntax or file format options.

  • Attempted to use the ON_ERROR = 'SKIP_FILE' and ON_ERROR = 'CONTINUE' options to bypass the error, but the issue persisted (see the sketch after this list).
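
For reference, this is roughly how the ON_ERROR retry looked, followed by the VALIDATE table function to list the rows rejected by that load. It is a sketch rather than my exact statement; I have reordered the clauses into the documented layout (CREDENTIALS directly after the location, copy options at the end), and '_last' refers to the most recent COPY into MY_TABLE.

-- Retry the load, continuing past bad rows instead of aborting on the first error.
COPY INTO MY_TABLE
FROM 's3://<stage>'
CREDENTIALS = (AWS_KEY_ID = '***************' AWS_SECRET_KEY = '************************')
FILE_FORMAT = (
    TYPE = CSV
    FIELD_DELIMITER = ','
    NULL_IF = ('\\N')
    FIELD_OPTIONALLY_ENCLOSED_BY = '"'
    SKIP_HEADER = 1
    ERROR_ON_COLUMN_COUNT_MISMATCH = TRUE
    COMPRESSION = GZIP
)
ON_ERROR = 'CONTINUE';

-- List the rows rejected by the most recent COPY into MY_TABLE.
SELECT * FROM TABLE(VALIDATE(MY_TABLE, JOB_ID => '_last'));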

What was I expecting?

  • I expected the CSV file to be successfully copied into the Snowflake table without encountering a JSON parsing error, as it did before the incident on July 3rd. The data should be loaded correctly, respecting the specified file format options.
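
For context, the load history before and after July 3rd can be compared with the COPY_HISTORY table function; this is a sketch, assuming MY_TABLE resolves in the current database and schema.

-- Compare recent loads of MY_TABLE around the incident date; a shift in
-- status or first_error_message shows when the behaviour changed.
SELECT file_name, last_load_time, status, row_count, first_error_message
FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(
    TABLE_NAME => 'MY_TABLE',
    START_TIME => DATEADD(day, -30, CURRENT_TIMESTAMP())))
ORDER BY last_load_time;
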
  • It's hard to answer this on Stack Overflow, as we can't reproduce it, but please open a ticket with Snowflake support so they can look into it. (Comment, Jul 9 at 22:40)
