I have a large pipe-delimited file that I need to load into a Teradata database.
The file is approximately 1,000,000 records long and 150 fields wide.
What I would like to do is write any error records to an error log txt file and keep the load running until the end.
Is this possible? If so, what are the steps?
On a readCSV task, you can set the Skip Invalid Records value (on the Advanced tab) to true. Then, if a record doesn't have the correct number of fields, the load keeps running and an entry is written to the error log specifying which record was in error, how many fields it had, and how many it was supposed to have.
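If you ever need to do the same thing outside the tool, the field-count check it performs can be sketched in plain Python. This is a generic illustration, not the readCSV task's actual implementation; the function name and the 150-field width (taken from the question) are assumptions:

```python
import csv

EXPECTED_FIELDS = 150  # field count from the question above

def split_valid_invalid(input_path, good_path, error_log_path,
                        delimiter="|", expected=EXPECTED_FIELDS):
    """Stream a delimited file, keeping valid rows and logging bad ones.

    Valid records go to good_path for loading; invalid records are
    noted in error_log_path, and processing continues to the end.
    """
    with open(input_path, newline="") as src, \
         open(good_path, "w", newline="") as good, \
         open(error_log_path, "w") as errors:
        reader = csv.reader(src, delimiter=delimiter)
        writer = csv.writer(good, delimiter=delimiter)
        for lineno, row in enumerate(reader, start=1):
            if len(row) == expected:
                writer.writerow(row)
            else:
                # mirror the tool's log: record number, actual vs. expected fields
                errors.write(
                    f"record {lineno}: {len(row)} fields, expected {expected}\n"
                )
```

Because the file is streamed row by row rather than read into memory, this scales fine to a million records; the cleaned output can then be handed to your Teradata load.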