Applied Solutions 2

Take a shortcut

When a quick and simple data transfer is needed, Teradata command lines can help.

Data warehouses and broader analytical ecosystems rely on managed and automated data integration processes for smooth operation. Extract, transform and load (ETL) and extract, load and transform (ELT) jobs are designed, developed and tested before being put into production. Additionally, production data movement jobs are integrated with the enterprise data warehouse (EDW) load processes so that any data marts are kept up to date as required by their applications.

As critical as ongoing production processes are, many one-time events occur that require loading data into the EDW or moving data between systems. These might be standard operations not typically repeated, such as moving a unique subset of data to a test system for a one-time performance run, or they might be completely unexpected one-time jobs that are required for unusual situations. Sometimes, these “one-off” jobs seem as common as routine ones.

Analytic ecosystem

An environment of one or more systems, in which data-movement and -replication processes are managed to derive the maximum possible value from an organization’s enterprise data.

Load and data-movement tools

The load and data-movement products from Teradata include high-powered and flexible control interfaces for running any type of job and for handling all sorts of error and exception conditions:
  • Teradata Parallel Transporter is the primary product for loading data from other sources into the Teradata Database. It includes a scripting language and an application programming interface (API), which defines the load job, source data and target tables, as well as identifies which load operator to use. These interfaces can take full advantage of the product’s performance and error-handling capabilities.
  • Teradata Data Mover transfers data and objects between Teradata systems within an analytical ecosystem. A metadata table-driven interface is included for event-based data movement, and a flexible XML file defines jobs initiated from the command line.
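To give a sense of the scripting interface, a minimal Teradata Parallel Transporter script for a flat-file load defines a schema, a producer operator, a load operator and an APPLY step. The sketch below is abbreviated and all system, table and file names are hypothetical; consult the product documentation for the complete set of operator attributes:

```
DEFINE JOB simple_load
DESCRIPTION 'Sketch of a scripted flat-file load'
(
  DEFINE SCHEMA emp_schema
  (
    emp_id   VARCHAR(10),
    emp_name VARCHAR(50)
  );

  /* Reads delimited records from the flat file */
  DEFINE OPERATOR file_reader
  TYPE DATACONNECTOR PRODUCER
  SCHEMA emp_schema
  ATTRIBUTES
  (
    VARCHAR FileName      = 'emp.csv',
    VARCHAR Format        = 'Delimited',
    VARCHAR TextDelimiter = ',',
    VARCHAR OpenMode      = 'Read'
  );

  /* Loads the rows into the target table */
  DEFINE OPERATOR load_op
  TYPE LOAD
  SCHEMA *
  ATTRIBUTES
  (
    VARCHAR TdpId        = 'proddb',
    VARCHAR UserName     = 'etl_user',
    VARCHAR UserPassword = 'secret',
    VARCHAR TargetTable  = 'hr.employees'
  );

  APPLY ('INSERT INTO hr.employees VALUES (:emp_id, :emp_name);')
  TO OPERATOR (load_op)
  SELECT * FROM OPERATOR (file_reader);
);
```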



These are powerful and flexible tools, but what about those one-off jobs that don’t require such potency? Rather than working with the full functionality and options that these tools provide, DBAs or ETL programmers can employ the easy-to-use command line interfaces that are also a part of these products.

Teradata Parallel Transporter Easy Loader

The Teradata Parallel Transporter Easy Loader command line interface allows rapid job definition for loading data from a flat file into the Teradata Database. A job is invoked with the single command “tdload” followed by a series of UNIX-style options, each preceded by a hyphen (or by two hyphens for the long option names).

For example, the -f <filename> option (or --SourceFilename) specifies the name of the file containing the source data to be loaded. Several other options define basic job parameters, such as the user name, target table, job variables and a delimiter other than the default comma. If preferred, command options can be specified in a job variables file rather than on the command line.
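For instance, assuming a pipe-delimited file emp.csv to be loaded into a table hr.employees (hypothetical system, user and object names throughout), the basic job parameters could be collected in a job variables file; the same parameters could equally be given directly as long options, as this sketch shows:

```shell
# Write a job variables file holding the basic job parameters
# (hypothetical system, user, table and file names).
cat > jobvars.txt <<'EOF'
TargetTdpId         = 'proddb',
TargetUserName      = 'etl_user',
TargetTable         = 'hr.employees',
SourceFilename      = 'emp.csv',
SourceTextDelimiter = '|'
EOF

# Equivalently, the parameters can be passed as long options
# (commented out here, since tdload requires a Teradata installation):
# tdload --SourceFilename emp.csv --SourceTextDelimiter '|' \
#        --TargetTdpId proddb --TargetUserName etl_user \
#        --TargetTable hr.employees my_load_job
```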

Some Teradata Parallel Transporter job tuning functions are also available by specifying these command line options in the load job:

  • --LoadInstances <number>
  • --MaxSessions <number>
  • --MinSessions <number>
  • --ErrorLimit <number>
  • --TargetWorkingDatabase <database_name>
  • --TargetAccountID <accountid>

Because the schema of the load job is derived from the definition of the target table, the layout of the data records in the flat file must match that of the Teradata Database table.
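Combining the tuning options with the basic job parameters, a tuned one-off load might look like the following sketch. All names are hypothetical, and because tdload is only present where the Teradata tools are installed, the command is merely assembled and printed here rather than executed:

```shell
# Assemble a tdload command that limits sessions and aborts the job
# after 100 rejected rows (hypothetical system and object names).
tdload_cmd="tdload \
  --SourceFilename sales.csv \
  --TargetTdpId proddb \
  --TargetUserName etl_user \
  --TargetWorkingDatabase sales \
  --TargetTable daily_sales \
  --MinSessions 2 --MaxSessions 8 \
  --ErrorLimit 100 \
  sales_load_job"

echo "$tdload_cmd"
```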

Teradata Data Mover

The Teradata Data Mover command line interface can be used on its own for fairly simple one-time or exception jobs, or together with an XML parameter file to define more elaborate ones. Its commands serve two purposes:

  • The “move” command enables an experienced DBA to quickly execute a one-time job to transfer a subset of data from the production system to the test system.
  • The “create” command, accompanied by an XML file, will store and run a complex job or define a repeatable production job that is run periodically at a scheduled time.

A complete “move” command consists of its keyword followed by a series of parameter name and value pairs. For example, source_tdpid would be followed by the system name for the source system. Parameters exist that give the user complete flexibility to specify the system, user name and account string to apply on each system, as well as the table or other objects to be copied. The “query” command can be employed before the “move” command to get a list of databases and tables on the source system.

This sample command will move a table from one system (source_tdpid) to another (target_tdpid):

datamove move -job_name job1 \
  -source_tdpid Checks \
  -source_user TD_API_user \
  -source_account_string 77112345 \
  -target_tdpid Leo \
  -target_user NN1945a4 \
  -target_account_string Tstr1233 \
  -table db1.table2

Teradata Data Mover can also transfer other database objects such as tables with less common characteristics (e.g., partitioned primary index, large objects and user-defined type columns), global temp tables, statistics, join and hash indexes, and triggers. Some objects, such as triggers, can be moved only along with the associated table.

Another way to exercise Teradata Data Mover is to start with an existing saved job and modify it. This allows the full use of features of an XML parameter file and other parameters. Because XML is simple to read and understand, a previously created job can be easily modified with any editor and saved as a new job. A short and simple command line can execute the job by referencing the XML file. Over time, job templates can be accumulated by an experienced DBA so that most short-term data- and object-movement tasks can be quickly handled with little effort or risk of error. (See table for list of commands.)
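As an illustration, the move job shown earlier could be captured as an XML parameter file and saved as a repeatable job. The sketch below reuses the names from the sample command; the element names mirror the command line parameters, but the exact schema and the required namespace declaration (omitted here) should be taken from the Teradata Data Mover documentation:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch of a Data Mover job definition; verify element names and
     the required namespace against the product documentation. -->
<dmCreate>
  <job_name>job1</job_name>
  <source_tdpid>Checks</source_tdpid>
  <source_user>TD_API_user</source_user>
  <source_account_string>77112345</source_account_string>
  <target_tdpid>Leo</target_tdpid>
  <target_user>NN1945a4</target_user>
  <target_account_string>Tstr1233</target_account_string>
  <database selection="unselected">
    <name>db1</name>
    <table selection="included">
      <name>table2</name>
    </table>
  </database>
</dmCreate>
```

A typical sequence would then create and start the job from the command line, along the lines of “datamove create -f job1.xml” followed by “datamove start -job_name job1”; again, check the option names against the Data Mover command line documentation.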

Quick transfer

Sometimes a graphical user interface is required to complete a complex task. Other times a full-featured language or API with production job attributes is necessary. For simple and one-off jobs that need to move data from one system to another, the command lines available through Teradata Parallel Transporter Easy Loader and Teradata Data Mover offer a quick option.
