Chapter 10. WS GRAM Configuration

10.1. Introduction

This guide contains advanced configuration information for system administrators working with WS GRAM. It provides references to information on procedures typically performed by system administrators, including installing, configuring, deploying, and testing the installation. It also describes additional prerequisites and host settings necessary for WS GRAM operation. Readers should be familiar with the Key Concepts and Implementation Approach documents for WS GRAM in order to understand the motivation for, and interaction between, the various deployed components.

This is a partially complete DocBook translation of the WS GRAM Admin Guide. Please see that document for additional information.

10.2. Local Prerequisites

WS GRAM requires the following:

10.2.1. Host credentials

In order to use WS GRAM, the services running in the WSRF hosting environment require access to an appropriate host certificate.
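
On a default installation these are the host certificate and key in /etc/grid-security. As a quick check (the paths and the listing below assume the default layout), verify that both files exist and that the private key is readable only by root:

% ls -l /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem
-rw-r--r--  1 root   root   1785 Oct 14 14:42 /etc/grid-security/hostcert.pem
-r--------  1 root   root    887 Sep 29 09:59 /etc/grid-security/hostkey.pem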

10.2.2. GRAM service account

WS GRAM requires a dedicated local account in which the WSRF hosting environment and GRAM services will execute. This account will often be a globus account used for all local services, but it may also be specialized to host only WS GRAM. User jobs will run in separate accounts, as specified in the grid-mapfile or the associated authorization policy configuration of the host.
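
If no such account exists yet, it can be created like any ordinary local account. A minimal sketch for a typical Linux host, run as root, follows; the account name globus and the useradd invocation are only illustrative:

% useradd -m globus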

10.2.3. Gridmap authorization of user account

In order to authorize a user to call GRAM services, the security configuration must map the Distinguished Name (DN) of the user to a local account name on the system where the GRAM services run. Here are the configuration steps:

  1. In order to obtain the DN, which is the subject of the user certificate, run the bin/grid-cert-info command in $GLOBUS_LOCATION on the submission machine:

    % bin/grid-cert-info -identity
    /O=Grid/OU=GlobusTest/OU=simpleCA-foo.bar.com/OU=bar.com/CN=John Doe

  2. Create the file /etc/grid-security/grid-mapfile. The syntax is one line per user: the distinguished name, followed by whitespace, followed by the user's account name on the GRAM machine. Since the distinguished name usually contains whitespace, it must be enclosed in quotation marks, as in:

    "/O=Grid/OU=GlobusTest/OU=simpleCA-foo.bar.com/OU=bar.com/CN=John Doe" johndoe

10.2.4. Functioning sudo

WS GRAM requires that the sudo command be installed and functioning on the service host where the WS GRAM software will execute.

Authorization rules must be added to the sudoers file to allow the WS GRAM service account to execute the local scheduler adapters (without a password) in the accounts of authorized GRAM users. See Configuring sudo below.
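
A quick way to confirm that sudo is present before adding any GRAM rules is to check for the binary and its version; the path and version shown below are only examples:

% which sudo
/usr/bin/sudo
% sudo -V
Sudo version 1.6.8p12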

10.2.5. Local scheduler

WS GRAM depends on a local mechanism for starting and controlling jobs. If only the fork-based WS GRAM mode is to be used, no special software is required. For batch scheduling mechanisms, the local scheduler must be installed and configured for local job submission before WS GRAM is deployed and operated. The batch schedulers supported in the GT 3.9.5 release are PBS, Condor, and LSF.
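
For example, before enabling the PBS adapter, make sure the PBS client commands work from the GRAM service account. A rough sanity check (the paths and the returned job identifier are only illustrative) is:

% which qsub qstat pbsnodes
/usr/local/bin/qsub
/usr/local/bin/qstat
/usr/local/bin/pbsnodes
% echo "/bin/true" | qsub
16.foo.bar.com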

10.2.6. RFT Dependency

WS GRAM depends on RFT for file staging (from the client host to the compute host and vice versa) and for file cleanup. RFT in turn requires PostgreSQL to be installed and configured; see the RFT configuration instructions for details.

Important

Jobs requesting these functions will fail if RFT is not properly set up.
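
As a rough check that the RFT database back end is reachable, connect to it with psql; the database name rftDatabase and the connecting account are whatever you chose while following the RFT setup instructions, so adjust accordingly:

% psql -U globus -d rftDatabase -c '\dt'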

10.3. Configuring

10.3.1. Configuration settings


10.3.2. Setting up service credentials

In a default build and install of the Globus Toolkit, the local account is configured to use host credentials at /etc/grid-security/containercert.pem and /etc/grid-security/containerkey.pem. If you already have host certificates, you can simply copy them to the new names and set ownership:

% cd /etc/grid-security
% cp hostcert.pem containercert.pem
% cp hostkey.pem containerkey.pem
% chown globus.globus container*.pem

Replace globus.globus with the user and group the container is installed as.

You should now have something like:

/etc/grid-security$ ls -l *.pem
-rw-r--r--  1 globus globus 1785 Oct 14 14:47 containercert.pem
-r--------  1 globus globus  887 Oct 14 14:47 containerkey.pem
-rw-r--r--  1 root   root   1785 Oct 14 14:42 hostcert.pem
-r--------  1 root   root    887 Sep 29 09:59 hostkey.pem

The result is a copy of the host credentials which are accessible by the container.

If this is not an option, you can either configure an alternate location for the host credentials or configure the container to use just a user proxy (personal mode); both options are described under Extra steps for non-default installation below.

10.3.3. Enabling Local Scheduler Adapter

The batch scheduler interface implementations included in the release tarball are PBS, Condor, and LSF. To install one of the batch scheduler adapters, follow these steps (shown here for PBS):

% cd $GLOBUS_LOCATION/gt3.9.5-all-source-installer
% make gt4-gram-pbs postinstall
% gpt-postinstall

Using PBS as the example, make sure the batch scheduler commands are in your path (qsub, qstat, pbsnodes).

For PBS, another setup step is required to configure the remote shell for rsh access:

% cd $GLOBUS_LOCATION/setup/globus
% ./setup-globus-job-manager-pbs --remote-shell=rsh

The last step is to define the GRAM and GridFTP file system mapping for PBS in $GLOBUS_LOCATION/etc/gram-service/globus_gram_fs_map_config.xml (see Non-default GridFTP server below).

Done! You have added the PBS scheduler adapters to your GT installation.
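
To confirm the PBS adapter was actually deployed, you can look for the PBS job manager module under the installation; the path shown is from a GT 4.0-era layout and may differ in other releases:

% ls $GLOBUS_LOCATION/lib/perl/Globus/GRAM/JobManager/pbs.pm
/opt/globus/GT3.9.5/lib/perl/Globus/GRAM/JobManager/pbs.pm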

10.3.4. Configuring sudo

When the credentials of the service account and the job submitter differ (multi-user mode), GRAM prepends a call to sudo to the local adapter callout command.

Important

If sudo is not configured properly, the command, and thus the job, will fail.

As root, add the following two entries to the /etc/sudoers file for each GLOBUS_LOCATION installation, replacing /opt/globus/GT3.9.5 with the GLOBUS_LOCATION of your installation:

# Globus GRAM entries
globus  ALL=(username1,username2) \
        NOPASSWD: /opt/globus/GT3.9.5/libexec/globus-gridmap-and-execute \
        /opt/globus/GT3.9.5/libexec/globus-job-manager-script.pl *
globus  ALL=(username1,username2) \
        NOPASSWD: /opt/globus/GT3.9.5/libexec/globus-gridmap-and-execute \
        /opt/globus/GT3.9.5/libexec/globus-gram-local-proxy-tool *
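
After editing, it is worth asking visudo to verify the file; the -c (check-only) option is available in most sudo releases and reports whether /etc/sudoers parses cleanly:

% visudo -c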

10.3.5. Extra steps for non-default installation

10.3.5.1. Non-default service credentials

10.3.5.2. Alternate location for host credentials

If setting up host credentials in the default location of /etc/grid-security/containercert.pem and containerkey.pem is not an option for you, then you can configure an alternate location to point to host credentials.

Security descriptor configuration is described in detail elsewhere; the quick change is to edit $GLOBUS_LOCATION/etc/globus_wsrf_core/global_security_descriptor.xml so that the cert and key paths point to host credentials that the service account owns.
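
The credential section of global_security_descriptor.xml then looks roughly like the following; the element names are taken from a typical GT 4.0-era descriptor and the paths are placeholders, so verify both against the file shipped with your installation:

<securityConfig xmlns="http://www.globus.org">
    <credential>
        <key-file value="/path/to/containerkey.pem"/>
        <cert-file value="/path/to/containercert.pem"/>
    </credential>
    ...
</securityConfig>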

10.3.5.3. User proxy

To run the container using just a user proxy, simply comment out the containerSecDesc parameter in $GLOBUS_LOCATION/etc/globus_wsrf_core/server-config.wsdd as follows:

<!--
    <parameter 
        name="containerSecDesc" 
        value="etc/globus_wsrf_core/global_security_descriptor.xml"/>
 -->
  

When running in personal mode (user proxy), an additional GRAM configuration setting is required. For GRAM to authorize the RFT service when performing staging functions, it needs to know the subject DN for verification. Here are the steps:

% cd $GLOBUS_LOCATION/setup/globus
% ./setup-gram-service-common \
    --staging-subject="/DC=org/DC=doegrids/OU=People/CN=Stuart Martin 564720"

You can get your subject DN by running this command:

% grid-cert-info -subject

10.3.5.4. Non-default GridFTP server

By default, the GridFTP server is assumed to run as root on localhost:2811. If this is not true for your site, you must update the configuration by editing the GRAM and GridFTP file system mapping config file: $GLOBUS_LOCATION/etc/gram-service/globus_gram_fs_map_config.xml.

10.3.5.5. Non-default container port

By default, the Globus services assume that the container is using port 8443. However, the container can be run on a non-standard port, for example:

% globus-start-container -p 4321

When doing this, GRAM needs to be told the port to use to contact the RFT service, like so:

% cd $GLOBUS_LOCATION/setup/globus
% ./setup-gram-service-common --staging-port="4321"

10.3.5.6. Non-default gridmap

If you wish to specify a non-standard gridmap file in a multi-user installation, two basic configurations need to be changed:

  • $GLOBUS_LOCATION/etc/globus_wsrf_core/global_security_descriptor.xml

    As specified in the gridmap config instructions, add a <gridmap value="..."/> element to the file appropriately.

  • /etc/sudoers

    Add "-g /path/to/grid-mapfile" as the first argument to all instances of the globus-gridmap-and-exec command.

Example: global_security_descriptor.xml

...
<gridmap value="/opt/grid-mapfile"/>
...

Example: sudoers

...
# Globus GRAM entries
globus  ALL=(username1,username2) \
        NOPASSWD: /opt/globus/GT3.9.5/libexec/globus-gridmap-and-execute \
        -g /opt/grid-mapfile \
        /opt/globus/GT3.9.5/libexec/globus-job-manager-script.pl *
globus  ALL=(username1,username2) \
        NOPASSWD: /opt/globus/GT3.9.5/libexec/globus-gridmap-and-execute \
        -g /opt/grid-mapfile \
        /opt/globus/GT3.9.5/libexec/globus-gram-local-proxy-tool *
...

10.3.5.7. Non-default job resource limit

The current limit on the number of job resources (both exec and multi) allowed to exist at any one time is 1000. This limit was chosen from scalability tests as an appropriate precaution to avoid out-of-memory errors. To change this value to, say, 150, use the setup-gram-service-common script as follows:

% cd $GLOBUS_LOCATION/setup/globus
% ./setup-gram-service-common --max-job-limit="150"

10.4. Testing

See the WS GRAM users guide for information about submitting a test job.

10.5. Security Considerations


10.6. Troubleshooting

[todo]