For testing or business purposes, it is sometimes necessary to load data into a HANA table.
Here is a simple six-step process one can follow:
1. Create a CSV file with your data, or otherwise arrange to get one
If you are moving data from one HANA system to another, use HDBSQL on the HANA source:
hdbsql -S <HANASID> -n <Hostname>:3<Instance#>15 -u system -p <password>
[the password can also be entered later]
hdbsql=> \o <Destination_Pathname>/<filename.CSV>
hdbsql=> select top 7000000 * from <SCHEMA_Name>.<TABLE_Name>
I experienced problems when choosing more than 8000000 records - feedback from others is welcome!
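If there is no source HANA system to export from, here is a minimal sketch of building a small CSV by hand (the path, column layout, and values below are purely illustrative):

```shell
# Build a tiny sample CSV by hand; path and values are hypothetical.
mkdir -p /tmp/hanatest
cat > /tmp/hanatest/test.csv <<'EOF'
1,"Alice",100.50
2,"Bob",200.75
3,"Carol",300.00
EOF
# Quick sanity check: count the records before loading them.
wc -l < /tmp/hanatest/test.csv
```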
2. Copy the file to the target HANA host (via FTP/SCP)
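A hedged sketch of the SCP variant (the user name, host name, and paths are placeholders, not taken from any real system):

```shell
# Copy the CSV to the target host; user, host, and paths are placeholders.
# The echo makes this a dry run - remove it to perform the real transfer.
src=/debajit/hanatest/test.csv
dest='hanaadm@target-host:/debajit/hanatest/'
echo scp "$src" "$dest"
```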
3. Create a table in HANA with the structure of your CSV data file
- Normally you have to create it manually to match the CSV file.
- If you are exporting a table from one HANA system to another, you can copy the structure from the HANA source using HANA Studio:
Select the table in "Open Definition" mode; right-click and choose "Export SQL".
This opens the SQL editor; copy the required SQL and paste it into the target HANA server's "SQL Editor".
NOTE : 1) You MUST change the schema name
       2) If required, change the table name
Then execute the statement - it will create the table.
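For illustration only, the statement pasted into the target system will look roughly like this (table name and columns are hypothetical; only the schema name has been changed for the target):

```sql
-- Hypothetical DDL as produced by "Export SQL", with the schema renamed.
CREATE COLUMN TABLE "TARGET_SCHEMA"."TEST" (
    "ID"     INTEGER,
    "NAME"   NVARCHAR(100),
    "AMOUNT" DECIMAL(15,2)
);
```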
4. Create the control file that describes how the data is to be loaded
- Sample
cat > /debajit/hanatest/test.ctl <<'EOF'
import data
into table [SCHEMA_Name].[TABLE_Name]
from '/debajit/hanatest/test.csv'
record delimited by '\n'
field delimited by ','
optionally enclosed by '"'
error log /debajit/hanatest/test.bad
EOF
5. Give proper access permissions to the newly created control file "test.ctl"
- chown/chmod as needed; the file must be readable by the HANA server process, since the import runs server-side.
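A minimal sketch (the 640 mode and the path are illustrative; the key point is that the file must end up readable by the HANA <sid>adm user):

```shell
# Create a placeholder control file and set its permissions.
# Path and mode are illustrative only; adjust for your landscape.
mkdir -p /tmp/hanatest
touch /tmp/hanatest/test.ctl
chmod 640 /tmp/hanatest/test.ctl
# chown <sid>adm:sapsys /tmp/hanatest/test.ctl   # typically needs root
stat -c '%a' /tmp/hanatest/test.ctl
```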
6. Open HDBSQL again on the destination system and execute the following:
hdbsql => import from '/debajit/hanatest/test.ctl' with table lock without type check
Note : importing a large CSV file with HANA Studio is very slow - it is therefore better to import with hdbsql (and by using threads and batches you can make it even quicker).
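As a sketch of that last point: the IMPORT statement accepts THREADS and BATCH options; the values below are illustrative and should be tuned for your hardware and table size:

```sql
-- Parallel import sketch: 4 threads, commits every 10000 rows.
-- The THREADS/BATCH values here are examples, not recommendations.
IMPORT FROM '/debajit/hanatest/test.ctl'
WITH THREADS 4 BATCH 10000;
```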