Universal Agent Database Configuration

z/OS UNIX File System Introduction

The z/OS implementation of Universal Agent databases utilizes the z/OS UNIX File System. Stonebranch recommends the use of the z/OS File System (zFS) for use by the Universal Broker and Universal Enterprise Controller started tasks. In a Sysplex environment, use of zFS is required.

zFS is a file system used by z/OS UNIX System Services (USS). It is a POSIX-conforming hierarchical file system stored in one or more zFS data sets bound together into one hierarchical directory structure. A single zFS data set consists of a directory tree and files. Refer to the IBM UNIX System Services Planning manual for a complete discussion of the z/OS UNIX file system and its administration.

A zFS data set must be mounted before a program can access any file or directory within it. A mount operation binds the root directory of the zFS data set to an existing directory in the hierarchical file system, referred to as the mount point. After the mount operation completes, the zFS data set's directory structure becomes part of the file system hierarchy starting at the mount point. A zFS data set can be mounted at only one mount point at a time.

The mount operation makes the files and directories within the zFS data set accessible to all users. User access is controlled with directory and file permissions contained within the zFS data set. Initially, a zFS data set's root directory is owned by the user that allocated the data set, and the directory permissions are set so that only that user has read, write, and execute permissions (permission mode 700). No other users have access.

zFS Configuration

The zFS data sets used by Universal Broker and Universal Enterprise Controller are created by the installation JCL.

A zFS data set is referred to as a zFS aggregate. Universal Agent uses compatibility mode aggregates only.

The Universal Broker zFS data set names must be specified with the UNIX_DB_DATA_SET and UNIX_SPOOL_DATA_SET Universal Broker configuration options. The Universal Enterprise Controller zFS data set name must be specified with the UNIX_DB_DATA_SET Universal Enterprise Controller configuration option.
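For example, a Universal Broker configuration member might contain entries similar to the following. This is only a sketch: the keyword value format is assumed to match your configuration member, and the data set names are the sample names used in the mount examples below.

unix_db_data_set    UNV.UNVDB
unix_spool_data_set UNV.UNVSPOOL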

Note

Unless otherwise stated in the release notes or installation instructions, backward compatibility is always preserved in the Universal Broker and Universal Enterprise Controller databases. After completing the steps in any of the upgrade scenarios listed here, any existing databases used by the old version can also be used after the upgrade. This means that creating new databases with install job UNVIN07 is not necessary when upgrading.

Mounting and Unmounting the Databases

When the Universal Broker and Universal Enterprise Controller started tasks are started, they check whether their zFS data sets have been mounted. If they are mounted, the started tasks will attempt to use them. If they are not mounted, the started tasks will attempt to mount the data sets dynamically.

Note

In a Sysplex, the data set specified on UNIX_SPOOL_DATA_SET must be mounted in RWSHARE mode on a file system that is shared across the Sysplex. The mount point specified on SHARED_MOUNT_POINT must exist on a shared file system.

Dynamic Mounts

The started tasks will mount the zFS data sets if they are not already mounted. The data sets are mounted at mount points created under the directories specified by the following Universal Broker configuration options:

  • MOUNT_POINT, which defaults to the /tmp directory, is used as the location to mount file systems which are not shared in a Sysplex.
  • SHARED_MOUNT_POINT, which defaults to the location specified for MOUNT_POINT, is used as the location to mount file systems which are shared in a Sysplex (currently only UNVSPOOL).

In a non-Sysplex environment, SHARED_MOUNT_POINT can be ignored. The mount points are sub-directories named after the data set names. For example, if the zFS data set name is UNV.UNVDB, the mount point is /tmp/UNV.UNVDB.
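As an illustration, the corresponding Broker configuration entries might look like the following (the directory paths are hypothetical and the keyword value format is assumed):

mount_point        /tmp
shared_mount_point /shared

With these values, a shared data set named UNV.UNVSPOOL would be mounted at /shared/UNV.UNVSPOOL.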

When the started tasks mount a zFS data set, the mount parameter AGGRGROW is used to specify that the zFS data set should automatically utilize secondary extents to expand if it runs out of allocated space.

The zFS data sets are not unmounted when the started tasks are stopped, because it is not known whether other users are using the mounted data sets.

Manual Mounts

The started tasks will use the existing mounts of the zFS data sets. Dynamic mounts provide the easiest administration, but you may want to manually mount the data sets to take advantage of several available mount options. For example, the FSFULL PARM value can be used to issue operator messages when a file system reaches a specified percent full.
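For instance, a manual TSO mount that adds a hypothetical FSFULL threshold to AGGRGROW might look like the following; the FSFULL values shown (issue messages at 85% full and at each further 5% increment) are illustrative only, so consult the zFS documentation for the exact syntax and suitable values:

MOUNT FILESYSTEM('UNV.UNVDB') MOUNTPOINT('/opt/unvdb') TYPE(ZFS) PARM('AGGRGROW,FSFULL(85,5)')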

When mounting zFS data sets, the mount parameter AGGRGROW should be used to specify that the zFS data set should automatically utilize secondary extents to expand if it runs out of allocated space.

In a Sysplex, the mount parameter RWSHARE should be used when mounting the UNVSPOOL file system.

When the zFS data sets are manually mounted, the mount point can be any z/OS UNIX directory that satisfies the requirements of the file system. The name of the directory does not matter. The started tasks will locate the mount point regardless of location or name.

zFS data sets can be mounted using the TSO MOUNT command, the USS mount command, or with PARMLIB member BPXPRMxx at IPL. The TSO MOUNT and USS mount commands mount the data set for the current IPL only, while the BPXPRMxx member mounts the data set at each IPL.

zFS data sets can be unmounted using the TSO UNMOUNT or USS unmount commands or with the MODIFY BPXOINIT console command.

TSO Commands

The TSO commands to mount and unmount zFS data set UNV.UNVDB at mount point /opt/unvdb are illustrated below:

zFS Mount Command

MOUNT FILESYSTEM('UNV.UNVDB') MOUNTPOINT('/opt/unvdb') TYPE(ZFS) PARM(AGGRGROW)

Unmount Command

UNMOUNT FILESYSTEM('UNV.UNVDB')


The TSO command to mount zFS data set UNV.UNVSPOOL at mount point /shared/unvspool in a Sysplex is illustrated below:

zFS Mount Command

MOUNT FILESYSTEM('UNV.UNVSPOOL') MOUNTPOINT('/shared/unvspool') TYPE(ZFS) PARM('AGGRGROW,RWSHARE')


The user ID that issues the mount or unmount commands must have an OMVS UID of 0 or READ access to the BPX.SUPERUSER profile in the FACILITY class.
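The equivalent USS shell commands are sketched below; the paths and options are assumptions carried over from the TSO examples above:

/usr/sbin/mount -t ZFS -f UNV.UNVDB -o AGGRGROW /opt/unvdb
/usr/sbin/unmount /opt/unvdb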

Console Commands

The console command to unmount zFS data set UNV.UNVDB is illustrated below, along with the console command to list currently mounted file systems.

Unmount Command

F BPXOINIT,FILESYS=UNMOUNT,FILESYSTEM=UNV.UNVDB

Note

A console reply message will ask for confirmation.

Display Command

D OMVS,FILE

BPXPRMxx

The BPXPRMxx statement to mount zFS data set UNV.UNVDB at mount point /opt/unvdb is illustrated below:

MOUNT FILESYSTEM('UNV.UNVDB') TYPE(ZFS) MODE(RDWR) MOUNTPOINT('/opt/unvdb') PARM('AGGRGROW')

Both zFS data sets must be mounted with mode read/write, which is the default.

In a Sysplex configuration, file system UNVSPOOL should be mounted with PARM('AGGRGROW,RWSHARE').
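For example, a BPXPRMxx statement for the spool file system in a Sysplex might look like the following, reusing the sample data set name and mount point from the earlier TSO example:

MOUNT FILESYSTEM('UNV.UNVSPOOL') TYPE(ZFS) MODE(RDWR) MOUNTPOINT('/shared/unvspool') PARM('AGGRGROW,RWSHARE')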

Data Set Initialization

When the started tasks start, they find the mount point for their zFS data sets. Regardless of whether the zFS data sets were dynamically mounted or statically mounted, the started tasks check for an initialization flag file named .inited in the root directory of the mounted data set.

If the file is not found, which is the case when they are first mounted, the started tasks change the owner of the root directory to the user ID with which they are executing and change the permission mode to the MOUNT_POINT_MODE configuration option value, which defaults to 750. In a Sysplex configuration, SHARED_MOUNT_POINT_MODE, which defaults to the value specified on MOUNT_POINT_MODE, can be used to specify a different set of permissions for the shared mount point.

If you want to customize either the owner or permission of the directories, manually create the .inited file in the root directory of the zFS data set to prevent the started tasks from performing the initialization when they start. The USS command touch .inited can be used to create an empty file.
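For example, the following USS shell commands, issued from the root directory of the mounted zFS data set, pre-create the flag file and apply a custom owner and permission mode. The user ID and mode shown are illustrative only:

touch .inited     # prevents the started task from re-initializing the directory
chown UBRUSER .   # hypothetical started-task user ID
chmod 770 .       # custom permission mode in place of the MOUNT_POINT_MODE default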

Memory Management

Berkeley DB uses a temporary cache in memory to manage its databases. If this cache becomes sufficiently large, it must be written to disk.

Berkeley DB has a default location for storing temporary cache files. If UEC cannot access that location, or there is no space there to write the files, the following error can occur and UEC shuts down:

UNV4301D Database error: 'temporary: write failed for page XXXXX'

To work around this issue, the following steps write the temporary cache files to the UEC database directory:

Step 1

Mount the UECDB zFS data set.

Step 2

Inside the mount point, create a text file named DB_CONFIG.

Step 3

Inside the DB_CONFIG file, add the following line:
set_tmp_dir dbpath
where dbpath is the path to the location in which the database files reside.

Step 4

Start / restart UEC.
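Putting the steps together, a USS sketch might look like the following, assuming the UECDB zFS data set is mounted at the hypothetical path /opt/uecdb and the database files reside there:

cd /opt/uecdb                               # hypothetical UECDB mount point
echo 'set_tmp_dir /opt/uecdb' > DB_CONFIG   # write the temporary cache files to the database directory
cat DB_CONFIG                               # verify the contents before restarting UEC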