csv file with different number of columns in rows

Post any question you may have in regards to GoAnywhere Director and let our talented support staff and other users assist you.
5 posts Page 1 of 1

dprice583

Posts: 1
Joined: Thu Jul 17, 2014 8:59 am

Post by dprice583 » Thu Jul 17, 2014 9:14 am
Hi,

We are reading in a CSV file where there are different record formats. The first two records in the CSV file have 2 columns and 7 columns respectively. All of the other records in the CSV have 10 columns. Right now I am getting around this by making the first two records have 10 columns also. I was wondering if you had any advice on another way to handle this problem.

Thanks,
Don Price for the Kansas Turnpike Authority
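The padding workaround Don describes (filling the short leading records out to the full column count) could be sketched as a pre-processing step outside GoAnywhere; this is a minimal Python illustration, and the function name and sample data are assumptions, not part of the product:

```python
import csv
import io

def pad_rows(csv_text, width):
    """Pad every short record with empty fields so all rows have `width` columns."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    return [row + [""] * (width - len(row)) for row in rows]

# Mimic the file in question: a 2-column row, a 7-column row, then 10-column detail rows.
sample = "a,b\n1,2,3,4,5,6,7\n" + ",".join(str(i) for i in range(10))
padded = pad_rows(sample, 10)
```

After padding, every row has the same width, so a CSV reader that requires a consistent structure can process the file.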

Support_Rick

Support Specialist
Posts: 592
Joined: Tue Jul 17, 2012 2:12 pm
Location: Phoenix, AZ

Post by Support_Rick » Mon Jul 21, 2014 12:07 pm
Don,

The only options you have here are to do as you are ... (adding columns so that all records have 10) or to remove the first 2 records (separating them from the table) and read those values on their own from a Temp file.

In either case, the ReadCSV must have an equal number of columns for each row, even the header (sub-header) rows.
Rick Elliott
Lead Solutions Consultant
(402) 944.4242
(800) 949-4696
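Rick's second option (splitting the leading records off so the remaining rows are uniform before ReadCSV) could look roughly like this outside GoAnywhere; a minimal Python sketch, with the split point and sample data as assumptions:

```python
import csv
import io

def split_leading_records(csv_text, n_leading=2):
    """Separate the first `n_leading` records from the uniform detail rows
    so each group can be read with a consistent column count."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    return rows[:n_leading], rows[n_leading:]

# First two records have 2 and 7 columns; the detail rows have 10.
sample = "h1,h2\ns1,s2,s3,s4,s5,s6,s7\n" + "d," * 9 + "d"
leading, detail = split_leading_records(sample)
```

The `leading` rows would be written to a temp file and read as separate values, while `detail` is now a uniform 10-column rowset.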

madiom

Posts: 2
Joined: Fri Sep 19, 2014 1:21 pm

Post by madiom » Fri Oct 21, 2016 6:28 pm
Is this still true in the current version of MFT? All posts I found referencing this seem to be a few years old, and I am hoping there is a workaround in the current system, because I am running into the same issue, and manually deleting rows from a file makes the automation features of MFT pointless.

Support_Rick

Support Specialist
Posts: 592
Joined: Tue Jul 17, 2012 2:12 pm
Location: Phoenix, AZ

Post by Support_Rick » Mon Oct 24, 2016 9:50 am
Yes, this is true for all CSV files as the structure of the file must be consistent.

You do have the option of using the ModifyRowset task to manipulate your data before writing it out ... but, in either case, if you're using CSV, you will need the same number of columns for all rows (including headers).

You could read flat files this way (basing it on the end-of-record marker instead of the field markers) ... but then you have to parse out each field based on record size and/or type.

There are multiple ways to do this, just have to pick which is easier for you to accomplish.
Rick Elliott
Lead Solutions Consultant
(402) 944.4242
(800) 949-4696
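The flat-file approach Rick mentions (keying on the end-of-record marker, then parsing each record yourself by size or type) might be sketched like this; the grouping-by-field-count logic is an illustration, and note that a plain string split does not respect quoted fields:

```python
def parse_flat(text, delim=","):
    """Read the file as flat records (split on line endings), then split each
    record on the delimiter and group records by their field count.
    Note: a naive split does not handle quoted fields containing `delim`."""
    by_width = {}
    for line in text.splitlines():
        fields = line.split(delim)
        by_width.setdefault(len(fields), []).append(fields)
    return by_width

# 2-column, 7-column, and 10-column records in one file.
groups = parse_flat("a,b\n1,2,3,4,5,6,7\nx,x,x,x,x,x,x,x,x,x")
```

Each group can then be handled according to its record type, which is the parsing work the CSV task would otherwise refuse to do on inconsistent rows.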

kgutierrez

Posts: 6
Joined: Fri Aug 28, 2015 11:42 am

Post by kgutierrez » Tue Nov 15, 2016 12:50 pm
I am having this same issue. Is it possible to split the different row types out to separate temp storage and then read those temp files individually to add to the database?

For all my files, the header record is always different from the detail records, and ignoring the first row isn't working. The header record in this case has 4 fields and the detail records have 256 fields, but even when the first row is ignored, the insert into SQL always tries to use 4 fields instead of 256 and generates an error. This is causing quite a few headaches in processing these files, so I am looking for alternate solutions.
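Splitting the two row types into separate temp outputs, as asked above, could be sketched like this outside GoAnywhere (the 4-field header / 256-field detail widths come from the post; the routing function itself is illustrative, not a built-in task):

```python
import csv
import io

def route_by_width(csv_text, header_width=4):
    """Write header rows and detail rows to separate CSV buffers so each
    uniform set can be read and inserted into the database on its own."""
    headers, details = io.StringIO(), io.StringIO()
    hw, dw = csv.writer(headers), csv.writer(details)
    for row in csv.reader(io.StringIO(csv_text)):
        (hw if len(row) == header_width else dw).writerow(row)
    return headers.getvalue(), details.getvalue()

# One 4-field header record followed by a 256-field detail record.
sample = "h1,h2,h3,h4\n" + ",".join(["d"] * 256)
head_csv, detail_csv = route_by_width(sample)
```

In practice each buffer would go to its own temp file, giving the SQL insert a rowset with a single consistent field count.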