UDM - Transfer Modes and Attributes
Setting the Transfer Type
There are two basic types of file transfers:
Binary | Moves the data as it is, without any translation.
Text | Translates the data from the source server's code page to the destination server's code page as it is transferred from one server to another.
The default transfer type for UDM is binary.
To set the transfer type to binary or text, use the mode command.
Issuing the mode command by itself displays the current transfer mode. The mode command also can be used to tell UDM to trim trailing spaces at the end of each line (or record, for record-based file systems such as dd and dsn in z/OS).
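As a sketch (assuming the mode command accepts type and trim parameters), the following selects text transfers with trailing spaces trimmed, and then redisplays the current settings by issuing mode on its own:
mode type=text trim=yes
mode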
Transfer Attributes
While the mode command is used to control the settings for transfer operations as a whole, the attrib command can be used to set up the handling of transfer operations for each side of the transfer session.
The attrib command can set transfer attributes that apply to either the primary or secondary server. It takes the following form:
attrib lname[={dd|dsn|hfs}] [attribute1=value1] ... [attributen=valuen]
Where lname is the logical name of the server to which the attributes are to be applied.
By default, any attributes listed in the attrib command are applied to the currently selected file system, unless a specific file system is assigned to the logical name; in that case, the attributes are applied to the specified file system.
The remainder of the attrib command contains a series of attributes and their values, some of which will be discussed in further detail in the remainder of this section. If the attrib command is issued with just a logical name, UDM will list the currently set attributes for the corresponding server.
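For example, assuming a server opened with the hypothetical logical name myzos, the following sketch applies attributes to its dsn file system and then lists the attributes currently in effect for that server:
attrib myzos=dsn linelen=80 lineop=wrap
attrib myzos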
A Stonebranch Tip
When you change file systems for a server using the filesys command, the currently set attributes are those that were applied to that file system type.
In other words, attributes are not carried over from one file system to another.
End of Line Sequence
In UDM, text mode transfers operate on the concept of a line. For record-oriented file systems, such as z/OS's DD and DSN, and IBM i's LIB, each line is a single record. However, for UNIX, Windows, and the HFS file system under USS and IBM i, there is no inherent structure imposed by the operating system on file data.
To determine what constitutes a line in the data for these types of files, UDM looks for an end of line sequence on the source side of a transfer. This can be any sequence of characters (including a zero-length sequence, in which case the entire file is considered to be a single line). UDM considers a line of data complete when this sequence is encountered.
In addition to the normal printable character sets on each platform, an end of line sequence also can be:
- The \r sequence, to denote a carriage return character.
- The \l sequence, to denote a line feed character.
- The \n sequence, to denote a new line character.
A Stonebranch Tip
When UDM transfers a line of text data from one server to another, it does not transfer the end of line sequence.
Instead, UDM transfers all of the data in each line up to the end of line sequence.
The end of line sequence also is used on the destination side of a text transfer. The end of line sequence set for the destination side of the transfer is appended to the end of each line of data.
UDM does this for record-oriented file systems as well. By managing the end of line sequence this way, UDM easily can be used to translate end of line characters across platforms (such as a transfer from UNIX to Windows), strip end of line characters from the data completely, or even add a completely new end of line sequence for use by other applications. For most operations, though, the end of line sequence will not need to be changed.
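As a sketch of the UNIX-to-Windows case, assuming servers opened with the hypothetical logical names src and dst, the translation can be made explicit by setting the eol attribute (described in the next subsection) on each side of the transfer:
attrib src eol=\n
attrib dst eol=\r\n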
eol Attribute
The end of line sequence is set with the eol attribute.
The default value for eol depends on the platform and file system selected:
- For Windows-based platforms, the default value is \r\n.
- For UNIX platforms and the HFS file system under USS, the default value is \n.
- For the HFS file system under IBM i, the default is FILE, which makes the end of line terminator consistent with the file's CCSID.
- For record-oriented file systems (z/OS's dd and dsn, and IBM i's LIB), the value for eol is not set.
To provide consistent eol definitions under the IBM i HFS file system, specific ASCII and EBCDIC values are defined for the symbolic values.
- In ASCII, \n = x0A, \r = x0D, \t = x09, and \l = x0A.
- In EBCDIC, \n = x15, \r = x0D, \t = x05, and \l = x25.
By default, the file CCSID determines the type of eol, ASCII vs. EBCDIC. The default ASCII eol is \n and the default EBCDIC eol is \r\l.
It is important to note the difference between eol definitions as just described and eol characters when transferred as data. Due to code page translations and Unicode mappings that take place during data transfer, translated values may be surprising.
Please refer to appropriate translation tables or Unicode mapping tables to understand the values used when eol and other control characters are transferred as data. UDM provides default definitions and allows user-defined eol attribute overrides in order to avoid translation surprises and associated difficulties.
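For instance, assuming an IBM i server opened with the hypothetical logical name myibmi, the FILE default for its HFS file system can be overridden with an explicit sequence:
attrib myibmi=hfs eol=\n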
The following example sets an end of line sequence of an exclamation point (!) for a transfer server:
attrib mylogicalname eol=!
Line Length and Line Operations
Note
The attributes discussed in this subsection apply solely to the destination side of the transfer.
Other attributes can be used to manipulate transferred data as well.
The linelen attribute is used to specify the length, in characters, of a line of data that has been transferred. This value is independent of the end of line sequence and, for record-oriented file systems, the transfer type. If linelen is set to a value other than zero (its default value), UDM will manipulate the data according to the method specified with the lineop attribute.
The lineop attribute specifies what happens to each line (or record, for z/OS's dd and dsn file systems) of data coming from the source side of the transfer (see the example following this list).
- If the value for lineop is none, the line/record is written as is. However, if its length from the source is greater than the value of linelen, UDM issues an error.
- If the value of lineop is stream, the data from the source side of the transfer is treated as a single record and is subdivided when it is written as a series of lines or records (depending on the file system) each linelen characters in length.
- If the value of lineop is trunc, each record or line from the source is truncated so that it is at most linelen characters in length.
- If the value of lineop is wrap, each line or record from the source side of the transfer that is longer than linelen characters is wrapped into multiple lines/records so that the maximum length of each line on the destination side is at most linelen characters long.
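As a sketch, assuming a destination server opened with the hypothetical logical name dst, the following wraps any long source lines into lines of at most 80 characters on the destination side:
attrib dst linelen=80 lineop=wrap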
A Stonebranch Tip
Binary data that is transferred from a Windows or UNIX platform (including HFS under USS) is treated by UDM as one large line or record of source data.
The same can be said when transferring text data from these platforms if the end of line sequence is zero length for the source server or the end of line sequence does not exist in the source data.
Under z/OS (except for the HFS file system), if the value of linelen is zero, UDM will set the linelen attribute to be the same as the lrecl allocation option for new data sets or the LRECL DCB attribute of existing data sets. UDM also will set the lineop attribute to a value appropriate for the transfer type and destination allocation attributes if lineop has not previously been set.
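For example, assuming a z/OS destination server opened with the hypothetical logical name myzos, explicitly setting linelen and lineop for its dsn file system keeps UDM from deriving them from the data set's LRECL and allocation attributes:
attrib myzos=dsn linelen=80 lineop=trunc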