GT 4.2 Quickstart


This is a quickstart that shows a full installation of the Toolkit on two Debian 3.1 machines. It covers installing the prerequisites, building the toolkit, creating certificates, and configuring services. It is designed to supplement the main admin guide.

1. Setting up the first machine

1.1. Pre-requisites

I will be installing the toolkit from source, so I'm going to double-check my system for the prerequisites. The full list of prereqs is available at Software Prerequisites in the GT 4.2 Admin Guide.

First I'll check for security libraries:

elephant % openssl version
OpenSSL 0.9.7e 25 Oct 2004

elephant % dpkg --list | grep zlib
ii  zlib-bin       1.2.2-4.sarge. compression library - sample programs
ii  zlib1g         1.2.2-4.sarge. compression library - runtime
ii  zlib1g-dev     1.2.2-4.sarge. compression library - development

openssl 0.9.7 (or newer; 0.9.8 is okay) and the zlib development libraries are required.


The package names for zlib may vary on non-Debian systems. The RPM name to look for is zlib-devel.
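If you want to script the version check, here's a minimal sketch. It assumes GNU sort (for the `-V` version-sort flag) is available:

```shell
# Sketch: compare dotted version strings using GNU sort -V.
# version_at_least REQUIRED HAVE -> exit 0 if HAVE >= REQUIRED.
version_at_least() {
    required="$1"
    have="$2"
    # the smaller of the two versions must sort first and equal REQUIRED
    [ "$(printf '%s\n%s\n' "$required" "$have" | sort -V | head -n1)" = "$required" ]
}

# e.g. feed it the second field of `openssl version`:
#   version_at_least 0.9.7 "$(openssl version | awk '{print $2}')"
```

The same helper works for checking zlib or any other dotted version string.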

I also have j2sdk1.5-sun installed under /usr/lib/j2sdk1.5-sun from the sun-j2sdk1.5 dpkg:

elephant % dpkg -S /usr/lib/j2sdk1.5-sun
sun-j2sdk1.5: /usr/lib/j2sdk1.5-sun


Note that GT4.2 requires Java 5 or higher. Java 1.4.2 is no longer supported.

I also have ant installed:

elephant % ls /home/dsl/javapkgs/apache-ant-1.6.5/
docs  KEYS     LICENSE.dom  NOTICE	    welcome.html
etc   lib      LICENSE.sax  README	    WHATSNEW


Most RedHat and Fedora Core boxes already ship with ant, but it is configured to use gcj. We don't want to use gcj! To fix this, look for an /etc/ant.conf file. If you have one, rename it to /etc/ant.conf.orig for the duration of this quickstart.
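The rename can be scripted (and reversed later); a sketch, where the path argument lets you point it at /etc/ant.conf on a real system:

```shell
# Sketch: move an ant.conf aside so ant stops picking up the gcj defaults.
# Pass the config path explicitly; on RedHat/Fedora that's /etc/ant.conf.
disable_ant_conf() {
    conf="$1"
    if [ -f "$conf" ]; then
        mv "$conf" "$conf.orig"    # restore with: mv "$conf.orig" "$conf"
        echo "renamed $conf -> $conf.orig"
    else
        echo "no $conf present; nothing to do"
    fi
}
```

Remember to move the file back once you're done with the quickstart.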

My system already has C/C++ compilers:

elephant % which gcc
elephant % which g++

GNU versions of tar/make/sed:

elephant % tar --version
tar (GNU tar) 1.14
Copyright (C) 2004 Free Software Foundation, Inc.
This program comes with NO WARRANTY, to the extent permitted by law.
You may redistribute it under the terms of the GNU General Public License;
see the file named COPYING for details.
Written by John Gilmore and Jay Fenlason.
elephant % sed --version
GNU sed version 4.1.2
Copyright (C) 2003 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; to the extent permitted by law.
elephant % make --version
GNU Make 3.80
Copyright (C) 2002  Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.

Finally, I have sudo and XML::Parser for GRAM:

elephant % sudo -V
Sudo version 1.6.8p7
elephant % locate XML/

1.2. Building the Toolkit

That completes the list of build prerequisites, so now I will download the installer and build it. The long version of these instructions is at Installing GT. First I created a globus user, and I will run the installation as that user, starting by setting ANT_HOME and JAVA_HOME:

globus@elephant:~$ export ANT_HOME=/home/dsl/javapkgs/apache-ant-1.6.5/
globus@elephant:~$ export JAVA_HOME=/usr/lib/j2sdk1.5-sun
globus@elephant:~$ export PATH=$ANT_HOME/bin:$JAVA_HOME/bin:$PATH
globus@elephant:~$ tar xzf gt4.2.1-all-source-installer.tar.gz
globus@elephant:~$ cd gt4.2.1-all-source-installer
globus@elephant:~/gt4.2.1-all-source-installer$ ./configure --prefix=/sandbox/globus/globus-4.2.1/
checking build system type... i686-pc-linux-gnu
checking for javac... /usr/lib/j2sdk1.5-sun/bin/javac
checking for ant... /home/dsl/javapkgs/apache-ant-1.6.5//bin/ant
configure: creating ./config.status
config.status: creating Makefile


The machine I am installing on doesn't have access to a scheduler. If it did, I would have specified one of the wsgram scheduler options, like --enable-wsgram-condor, --enable-wsgram-lsf, or --enable-wsgram-pbs.


I could have used the binary installer for this example, because Debian ia32 binaries are available. To make the quickstart more general, I decided to use source instead.

Now it's time to build the toolkit:

globus@elephant:~/gt4.2.1-all-source-installer$ make | tee installer.log
cd gpt-3.2autotools2004 && OBJECT_MODE=32 ./build_gpt
build_gpt ====> installing GPT into /sandbox/globus/globus-4.2.1/
Time for a coffee break here; the build will take over an hour, possibly
longer depending on how fast your machine is.
Your build completed successfully.  Please run make install.

globus@elephant:~/gt4.2.1-all-source-installer$ make install


1.3. Setting up security on your first machine

All of the work we're going to do now requires that we be authenticated and authorized. We use certificates for this purpose. The Distinguished Name (DN) of a certificate serves as our authenticated identity, and that identity is then authorized. In this simple tutorial, the authorization happens via a file lookup.

We will need identities for both the services and the users. For the services, we will use an identity equal to the hostname. For the users, we'll use their full names. To create the certificates, we're going to use the SimpleCA that is distributed with the toolkit. Here's how we set it up, based on the instructions at SimpleCA Admin:

root@elephant:~# export GLOBUS_LOCATION=/sandbox/globus/globus-4.2.1
root@elephant:~# source $GLOBUS_LOCATION/etc/
root@elephant:~# cd ~globus/gt4.2.1-all-source-installer
root@elephant:gt4.2.1-all-source-installer# perl -y
Setting up /sandbox/globus/globus-4.2.1/
Please enter a password of at least four characters for the CA: 
Confirm password:
Creating a new simpleCA, logging to gt-server-ca.log...
Running setup-gsi...
Your CA hash is: 1bcdfe89
It is located at /sandbox/globus/globus-4.2.1//share/certificates/1bcdfe89.0
Your host DN is /O=Grid/OU=GlobusTest/
The hostcert is located at /sandbox/globus/globus-4.2.1//etc/hostcert.pem


This will fail if /tmp is mounted noexec. If it does, try setting GLOBUS_SH_TMP=`pwd` and running it again.
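You can check for the noexec mount option up front; a sketch (it parses `mount` output, whose format varies by system):

```shell
# Sketch: warn if /tmp is mounted noexec before running the CA setup.
# tmp_is_noexec takes the mount table as text so it can be tested directly.
tmp_is_noexec() {
    echo "$1" | grep -E 'on /tmp .*[(,]noexec[,)]' >/dev/null
}

if tmp_is_noexec "$(mount)"; then
    # fall back to the current directory for temporary build files
    export GLOBUS_SH_TMP="$(pwd)"
fi
```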

Here's what has happened:

root@elephant:~# ls ~/.globus/
root@elephant:~# ls ~/.globus/simpleCA/
cacert.pem  globus_simple_ca_1bcdfe89_setup-0.18.tar.gz  newcerts
certs       grid-ca-ssl.conf                             private
crl         index.txt                                    serial

That's the directory where my simpleCA has been created. These files are all explained in the Security Admin Guide.

Our last step is to copy that signed certificate into /etc:

root@elephant:~# mkdir /etc/grid-security
root@elephant:~# mv $GLOBUS_LOCATION/etc/host*.pem /etc/grid-security/

We'll make the container certs owned by globus:

root@elephant:~# cd /etc/grid-security
root@elephant:/etc/grid-security# cp hostcert.pem containercert.pem
root@elephant:/etc/grid-security# cp hostkey.pem containerkey.pem
root@elephant:/etc/grid-security# chown globus:globus container*.pem
root@elephant:/etc/grid-security# ls -l *.pem
-rw-r--r--  1 globus globus 2724 2008-06-16 14:26 containercert.pem
-r--------  1 globus globus  887 2008-06-16 14:26 containerkey.pem
-rw-r--r--  1 root root     2724 2008-06-16 14:26 hostcert.pem
-rw-r--r--  1 root root     1404 2008-06-16 14:26 hostcert_request.pem
-r--------  1 root root      887 2008-06-16 14:26 hostkey.pem

1.4. Creating a MyProxy server

We are going to create a MyProxy server on elephant, following the instructions at configuring MyProxy. It will be used to store our users' certificates. Recall that so far we have made a host certificate, but we don't have any certificates for end users yet.

root@elephant:~# export GLOBUS_LOCATION=/sandbox/globus/globus-4.2.1/
root@elephant:~# cp $GLOBUS_LOCATION/share/myproxy/myproxy-server.config /etc
root@elephant:~# vim /etc/myproxy-server.config 
root@elephant:~# diff /etc/myproxy-server.config $GLOBUS_LOCATION/share/myproxy/myproxy-server.config
< accepted_credentials  "*"
< authorized_retrievers "*"
< default_retrievers    "*"
< authorized_renewers   "*"
< default_renewers      "none"
< authorized_key_retrievers "*"
< default_key_retrievers "none"
> #accepted_credentials  "*"
> #authorized_retrievers "*"
> #default_retrievers    "*"
> #authorized_renewers   "*"
> #default_renewers      "none"
> #authorized_key_retrievers "*"
> #default_key_retrievers "none"
root@elephant:~# cat $GLOBUS_LOCATION/share/myproxy/ >> /etc/services 
root@elephant:~# tail /etc/services 
binkp           24554/tcp                       # binkp fidonet protocol
asp             27374/tcp                       # Address Search Protocol
asp             27374/udp
dircproxy       57000/tcp                       # Detachable IRC Proxy
tfido           60177/tcp                       # fidonet EMSI over telnet
fido            60179/tcp                       # fidonet EMSI over TCP
# Local services
myproxy-server  7512/tcp                        # Myproxy server
root@elephant:~# cp $GLOBUS_LOCATION/share/myproxy/etc.xinetd.myproxy /etc/xinetd.d/myproxy
root@elephant:~# vim /etc/xinetd.d/myproxy 
root@elephant:~# cat /etc/xinetd.d/myproxy 
service myproxy-server
{
  socket_type  = stream
  protocol     = tcp
  wait         = no
  user         = root
  server       = /sandbox/globus/globus-4.2.1/sbin/myproxy-server
  env          = GLOBUS_LOCATION=/sandbox/globus/globus-4.2.1 LD_LIBRARY_PATH=/sandbox/globus/globus-4.2.1/lib 1
  disable      = no
}
root@elephant:~# /etc/init.d/xinetd reload
Reloading internet superserver configuration: xinetd.
root@elephant:~# netstat -an | grep 7512
tcp        0      0  *               LISTEN     

1 Your system may require a different environment variable than LD_LIBRARY_PATH if you're using MacOS X or IRIX.
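If you're templating this xinetd entry for several platforms, you could pick the variable name from `uname`; a sketch (the IRIX name here is my assumption, for n32 ABI binaries):

```shell
# Sketch: choose the dynamic-loader search-path variable per platform.
libpath_var() {
    case "$1" in
        Darwin) echo DYLD_LIBRARY_PATH ;;
        IRIX*)  echo LD_LIBRARYN32_PATH ;;   # assumption: n32 ABI binaries
        *)      echo LD_LIBRARY_PATH ;;
    esac
}

# e.g. libpath_var "$(uname -s)"
```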

Now that myproxy is set up, we'll get a usercert for bacon. The globus user will add a new credential into myproxy. I have to specify a full name and a login name. I'll be using "Charles Bacon" and "bacon" for my user. I have to supply two different passwords. The first password is going to be the bacon user's password. The second password is my SimpleCA password from when I set up the CA earlier.

root@elephant:~ # myproxy-admin-adduser -c "Charles Bacon" -l bacon
A certificate request and private key is being created.
You will be asked to enter a PEM pass phrase.
This pass phrase is akin to your account password, 
and is used to protect your key file.
If you forget your pass phrase, you will need to
obtain a new certificate.

Generating a 1024 bit RSA private key
writing new private key to '/tmp/myproxy_adduser_HUTit8/myproxy_adduser_key.pem'
Enter PEM pass phrase: bacon's new password
Verifying - Enter PEM pass phrase: bacon's new password
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Level 0 Organization [Grid]:
Level 0 Organizational Unit [GlobusTest]:
Level 1 Organizational Unit []:
Level 2 Organizational Unit []:
Name (e.g., John M. Smith) []:

A private key and a certificate request has been generated with the subject:

/O=Grid/OU=GlobusTest/ Bacon

If the CN=Charles Bacon is not appropriate, rerun this
script with the -force -cn "Common Name" options.

Your private key is stored in /tmp/myproxy_adduser_HUTit8/myproxy_adduser_key.pem
Your request is stored in /tmp/myproxy_adduser_HUTit8/myproxy_adduser_cert_request.pem

Please e-mail the request to the Globus Simple CA 
You may use a command similar to the following:

  cat /tmp/myproxy_adduser_HUTit8/myproxy_adduser_cert_request.pem | mail 

Only use the above if this machine can send AND receive e-mail. If not, please
mail using some other method.

Your certificate will be mailed to you within two working days.
If you receive no response, contact Globus Simple CA at 

To sign the request
please enter the password for the CA key: SimpleCA password

The new signed certificate is at: /homes/globus/.globus/simpleCA//newcerts/05.pem

using storage directory /var/myproxy
Credential stored successfully

Our last act will be to create a grid-mapfile as root for authorization. You can copy and paste the /O=Grid/OU=... subject name from the output above:

root@elephant:/etc/grid-security# vim /etc/grid-security/grid-mapfile
"/O=Grid/OU=GlobusTest/ Bacon" bacon


The globus user doesn't need a user certificate! It's a dummy account that we're using to own the GLOBUS_LOCATION. When it starts the container, it will use the containercert. Only real people need user certs.
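Incidentally, if you script grid-mapfile additions, it's worth guarding against duplicate lines; a small sketch:

```shell
# Sketch: append a DN-to-user mapping to a grid-mapfile, exactly once.
add_gridmap_entry() {
    dn="$1"; user="$2"; mapfile="$3"
    entry="\"$dn\" $user"
    # -x: match the whole line, -F: treat the entry as a fixed string
    grep -qxF "$entry" "$mapfile" 2>/dev/null || echo "$entry" >> "$mapfile"
}
```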

1.5. Set up GridFTP

Now that we have our host and user credentials in place, we can start a service. This setup comes from the GridFTP Admin Guide.

root@elephant:/etc/grid-security# vim /etc/xinetd.d/gridftp 1
root@elephant:/etc/grid-security# cat /etc/xinetd.d/gridftp
service gsiftp
{
instances               = 100
socket_type             = stream
wait                    = no
user                    = root
env                     += GLOBUS_LOCATION=/sandbox/globus/globus-4.2.1
env                     += LD_LIBRARY_PATH=/sandbox/globus/globus-4.2.1/lib 2
server                  = /sandbox/globus/globus-4.2.1/sbin/globus-gridftp-server
server_args             = -i
log_on_success          += DURATION
disable                 = no
}
root@elephant:/etc/grid-security# vim /etc/services 
root@elephant:/etc/grid-security# tail /etc/services 
vboxd           20012/udp
binkp           24554/tcp                       # binkp fidonet protocol
asp             27374/tcp                       # Address Search Protocol
asp             27374/udp
dircproxy       57000/tcp                       # Detachable IRC Proxy
tfido           60177/tcp                       # fidonet EMSI over telnet
fido            60179/tcp                       # fidonet EMSI over TCP

# Local services
myproxy-server  7512/tcp                        # Myproxy server
gsiftp          2811/tcp
root@elephant:/etc/grid-security# /etc/init.d/xinetd reload
Reloading internet superserver configuration: xinetd.
root@elephant:/etc/grid-security# netstat -an | grep 2811
tcp        0      0  *               LISTEN     


I already had xinetd installed:

bacon@elephant:~$ dpkg --list xinetd
| Status=Not/Installed/Config-files/Unpacked/Failed-config/Half-installed
|/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad)
||/ Name           Version        Description
ii  xinetd         2.3.13-3       replacement for inetd with many enhancements

You can use inetd instead; see "Configuring the GridFTP server to run under xinetd/inetd" in the System Administrator's Guide for details. For now, though, you might want to apt-get install xinetd.

2 On MacOS X, this would be DYLD_LIBRARY_PATH. Check your system documentation if LD_LIBRARY_PATH doesn't work on your system.

Now the gridftp server is waiting for a request, so we'll run a client and transfer a file:

bacon@elephant $ myproxy-logon -s elephant
Enter MyProxy pass phrase: ******
A credential has been received for user bacon in /tmp/x509up_u1817.
bacon@elephant $ globus-url-copy gsi file:///tmp/bacon.test.copy
bacon@elephant $ diff /tmp/bacon.test.copy /etc/group
bacon@elephant $ 

Okay, so the GridFTP server works. If you had trouble, check the security troubleshooting section in the Security Admin Guide. Now we can move on to starting the webservices container.

1.6. Starting the webservices container

Now we'll set up an /etc/init.d entry for the webservices container. You can find more details about the container in the Java WS Core Admin Guide.

root@elephant:~# cp $GLOBUS_LOCATION/etc/init.d/globus-ws-java-container /etc/init.d

globus@elephant:~$ /etc/init.d/globus-ws-java-container start
Starting Globus container. PID: 29985

At this point, we can use one of the sample clients/services to interact with the container:

bacon@elephant $ globus-check-remote-environment -s https://localhost:8443

### Remote Endpoint Version Information ###
Axis Version on remote endpoint https://localhost:8443:
Apache Axis version: 1.4
Built on Mar 01, 2007 (10:42:15 CST)

Java WS Core Version on remote endpoint https://localhost:8443:

That is the expected output, so it looks like the container is up and running.
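When scripting container startup, you may want to wait for port 8443 to come up before running clients; a sketch, assuming `nc` is installed:

```shell
# Sketch: poll until a TCP port accepts connections, with a retry cap.
wait_for_port() {
    host="$1"; port="$2"; tries="${3:-30}"
    n=0
    while [ "$n" -lt "$tries" ]; do
        nc -z "$host" "$port" 2>/dev/null && return 0
        n=$((n + 1))
        sleep 1
    done
    return 1
}

# e.g. wait_for_port localhost 8443 && run your client
```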

1.7. Configuring RFT

We will use the globus-crft command to start a reliable file transfer. It takes an input file containing one source/destination URL pair per line, and uses RFT to manage the transfer of all the URLs in that file. For this example, we'll just move a single file:

bacon@elephant $ cat transfer
gsi gsi
bacon@elephant $ globus-crft -ez -f transfer
Communicating with delegation service.
Creating the RFT service.
Starting the RFT service.
Waiting for the RFT transfers to complete.
Transfered 1 of 1			| Status: Done       

bacon@elephant $ diff /etc/group /tmp/asdf
bacon@elephant $ 

RFT did its job, starting up a reliable transfer and notifying us of the status and results. The globus-crft command has many options; you may want to explore using it asynchronously. See its -help output for details.
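The transfer file format (one "source destination" pair per line) is easy to generate; a sketch, where the gsiftp URLs in the comment are only illustrative:

```shell
# Sketch: emit an RFT transfer file, one source/destination URL pair per line.
# Usage: make_transfer_file OUTFILE SRC1 DST1 [SRC2 DST2 ...]
make_transfer_file() {
    out="$1"; shift
    : > "$out"                      # truncate any previous contents
    while [ "$#" -ge 2 ]; do
        printf '%s %s\n' "$1" "$2" >> "$out"
        shift 2
    done
}

# e.g. (hypothetical URLs):
#   make_transfer_file transfer \
#       gsiftp://elephant:2811/etc/group gsiftp://cognito:2811/tmp/asdf
```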

1.8. Setting up GRAM4

Now that we have GridFTP and RFT working, we can set up GRAM for resource management. First we have to set up sudo so the globus user can start jobs as a different user. For reference, see the System Administrator's Guide.

root@elephant:~# visudo
root@elephant:~# cat /etc/sudoers 
Runas_Alias GLOBUSUSERS = ALL, !root;
globus ALL=(GLOBUSUSERS) NOPASSWD: /sandbox/globus/globus-4.2.1/libexec/globus-gridmap-and-execute
-g /etc/grid-security/grid-mapfile /sandbox/globus/globus-4.2.1/libexec/ *
globus  ALL=(GLOBUSUSERS) NOPASSWD: /sandbox/globus/globus-4.2.1/libexec/globus-gridmap-and-execute
-g /etc/grid-security/grid-mapfile /sandbox/globus/globus-4.2.1/libexec/globus-gram-local-proxy-tool *

Make sure they're all on one line. I split them up in the HTML to keep the page width down. With that addition, we can now run jobs:

bacon@elephant $ globusrun-ws -submit -c /bin/true
Submitting job...Done.
Job ID: uuid:a4b5e324-3bec-11dd-95ac-003048241085
Termination time: 06/16/3008 21:39 GMT
Current job state: Active
Current job state: CleanUp
Current job state: Done
Destroying job...Done.
bacon@elephant $ echo $?
bacon@elephant $ globusrun-ws -submit -c /bin/false
Submitting job...Done.
Job ID: uuid:b49462c0-3bec-11dd-9441-003048241085
Termination time: 06/16/3008 21:39 GMT
Current job state: Active
Current job state: CleanUp
Current job state: Done
Destroying job...Done.

bacon@elephant $ echo $?

Success. Now we've got a working GRAM installation.

2. Setting up your second machine

2.1. Setting up your second machine: Prereqs

Alas, it's not much of a grid with just one machine. So let's start up on another machine and add it to this little test grid. For a change of pace, I'm going to use the binary installer on this machine.

globus@cognito:~$ tar xzf gt4.2.1-ia32_debian_3.1-binary-installer.tar.gz
globus@cognito:~$ export ANT_HOME=/home/dsl/javapkgs/apache-ant-1.6.5/
globus@cognito:~$ export JAVA_HOME=/usr/lib/j2sdk1.5-sun
globus@cognito:~$ export PATH=$ANT_HOME/bin:$JAVA_HOME/bin:$PATH

2.2. Setting up your second machine: Installation

Now we can install from binaries:

globus@cognito:~/gt4.2.1-ia32_debian_3.1-binary-installer$ ./configure \
checking for javac... /usr/lib/j2sdk1.5//bin/javac
checking for ant... /home/dsl/javapkgs/apache-ant-1.6.5//bin/ant
configure: creating ./config.status
config.status: creating Makefile
globus@cognito:~/gt4.2.1-ia32_debian_3.1-binary-installer$ make
cd gpt-3.2autotools2004 && OBJECT_MODE=32 ./build_gpt
Binaries are much faster!  This is done in less than 10 minutes.
tar -C /usr/local/globus-4.2.1 -xzf binary-trees/globus_wsrf_rft_test-*/*.tar.gz
tar -C /usr/local/globus-4.2.1 -xzf binary-trees/globus_rendezvous-*/*.tar.gz
Your build completed successfully.  Please run make install.
globus@cognito:~/gt4.2.1-ia32_debian_3.1-binary-installer$ make install
ln -s /usr/local/globus-4.2.1/etc/gpt/packages /usr/local/globus-4.2.1/etc/globus_packages
config.status: creating

2.3. Setting up your second machine: Security

Now let's set up security on the second machine. We're just going to add trust for the original SimpleCA to this new machine; there's no need to create a new one. All we need to do is copy the $GLOBUS_LOCATION/share/certificates directory from our first machine to our second:

globus@cognito:~$ export GLOBUS_LOCATION=/usr/local/globus-4.2.1
globus@cognito:~$ scp -r elephant:/sandbox/globus/globus-4.2.1/share/certificates $GLOBUS_LOCATION/share

We're going to create the host certificate for cognito, but we create it on elephant:

root@elephant:~# myproxy-admin-addservice -c "" -l cognito

Then as root on cognito:

root@cognito:~# export GLOBUS_LOCATION=/usr/local/globus-4.2.1
root@cognito:~# source $GLOBUS_LOCATION/
root@cognito:~# myproxy-retrieve -s elephant -k -l cognito
Enter MyProxy pass phrase:******
Credentials for bacon have been stored in
/etc/grid-security/hostcert.pem and
root@cognito:~# cd /etc/grid-security
root@cognito:/etc/grid-security# cp hostcert.pem containercert.pem
root@cognito:/etc/grid-security# cp hostkey.pem containerkey.pem
root@cognito:/etc/grid-security# chown globus:globus container*.pem
root@cognito:/etc/grid-security# ls -l *.pem
-rw-------  1 root root 912 2008-06-19 13:50 containercert.pem
-rw-------  1 root root 887 2008-06-19 13:50 containerkey.pem
-rw-------  1 root root 912 2008-06-19 13:45 hostcert.pem
-rw-------  1 root root 887 2008-06-19 13:45 hostkey.pem
root@cognito:/etc/grid-security# myproxy-destroy -s  elephant -k -l cognito
MyProxy credential '' for user cognito was successfully removed.

There. Now cognito is set up with host and container certs, and it trusts the CA of my grid. The last step for root is to create a grid-mapfile for myself again:

root@cognito:/etc/grid-security# vim grid-mapfile
root@cognito:/etc/grid-security# cat grid-mapfile 
"/O=Grid/OU=GlobusTest/ Bacon" bacon

2.4. Setting up your second machine: GridFTP

GridFTP setup on the second machine is identical to the first. I'll just list the commands here; see Section 1.5, “Set up GridFTP” for the file contents, or just copy them from the first machine.

root@cognito:/etc/grid-security# vim /etc/xinetd.d/gridftp
root@cognito:/etc/grid-security# vim /etc/services 
root@cognito:/etc/grid-security# /etc/init.d/xinetd reload
Reloading internet superserver configuration: xinetd.

Now we can test it:

cognito % setenv GLOBUS_LOCATION /usr/local/globus-4.2.1
cognito % source $GLOBUS_LOCATION/etc/globus-user-env.csh
cognito % myproxy-logon -s elephant
Enter MyProxy pass phrase: ******
A credential has been received for user bacon in /tmp/x509up_u1817.
cognito % globus-url-copy gsi \

That was a slightly fancier test than the one I ran on elephant. In this case, I did a third-party transfer between two GridFTP servers. It worked, so I have the local and remote security set up correctly.

If it did not work, perhaps you have a firewall between the two machines? GridFTP needs to communicate on data ports, not just port 2811. The error looks like:

error: globus_ftp_client: the server responded with an error
500 500-Command failed. : callback failed.
500-globus_xio: Unable to connect to
500-globus_xio: System error in connect: No route to host
500-globus_xio: A system call failed: No route to host
500 End.

You can set up a range of ports to be open on the firewall and configure GridFTP to use them; see the GridFTP Firewall HOWTO for details. That document also contains firewall information for the rest of the services.
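For reference, GT reads the GLOBUS_TCP_PORT_RANGE environment variable for the data-channel port range; a sketch that validates the range before exporting it:

```shell
# Sketch: restrict GridFTP data channels to a firewall-opened port range.
# GT reads GLOBUS_TCP_PORT_RANGE in "min,max" form.
set_port_range() {
    min="$1"; max="$2"
    if [ "$min" -lt "$max" ] 2>/dev/null; then
        GLOBUS_TCP_PORT_RANGE="$min,$max"
        export GLOBUS_TCP_PORT_RANGE
        return 0
    fi
    echo "invalid range: $min-$max" >&2
    return 1
}

# e.g. set_port_range 50000 51000, then open 50000-51000 on the firewall
```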

2.5. Setting up your second machine: Webservices

Setting up the container on the second machine is a lot like the first. I'll list the commands here; see Section 1.6, “Starting the webservices container”, or just copy the files from the first machine. First root installs the start-stop script, then globus starts the container:

root@cognito:~# cp $GLOBUS_LOCATION/etc/init.d/globus-ws-java-container /etc/init.d

globus@cognito:~ $ /etc/init.d/globus-ws-java-container start
Starting Globus container. PID: 19745

2.6. Setting up your second machine: GRAM4

As with last time, we'll need to set up the sudoers file. See Section 1.8, “Setting up GRAM4” for the sudo contents, or copy the sudoers from the first machine.

root@cognito:/etc/grid-security# visudo

Now we can submit a staging job. This job will copy the /bin/echo command from cognito to a file called $HOME/my_echo, run it with some arguments, and capture the stderr/stdout. Finally, it will clean up the my_echo file when execution is done.

cognito % vim a.rsl
cognito % cat a.rsl


cognito % globusrun-ws -submit -S -f a.rsl
Delegating user credentials...Done.
Submitting job...Done.
Job ID: uuid:1223d7e6-3e35-11dd-a209-003048241085
Termination time: 06/19/3008 19:22 GMT
Current job state: StageIn
Current job state: Active
Current job state: CleanUp
Current job state: Done
Destroying job...Done.
Cleaning up any delegated credentials...Done.
cognito % cat ~/stdout
Hello World!
cognito % ls ~/my_echo
ls: /home/bacon/my_echo: No such file or directory

You can get other examples of GRAM RSL files from GRAM usage scenarios.
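For reference, a GT4 job description is an XML document; here's a minimal non-staging sketch (element names follow the GRAM4 job description schema; paths are illustrative):

```xml
<job>
    <executable>/bin/echo</executable>
    <argument>Hello World!</argument>
    <stdout>${GLOBUS_USER_HOME}/stdout</stdout>
    <stderr>${GLOBUS_USER_HOME}/stderr</stderr>
</job>
```

The staging job above additionally uses fileStageIn and fileCleanUp sections; see the GRAM usage scenarios for complete examples.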

3. VO-level services

3.1. Setting up an Index Service hierarchy

Now that we have two machines, we can also set up some information services to monitor them together. Let's have cognito register its index service into choate so we can have an aggregated view of the two machines, as described at Building VOs in the MDS documentation:

globus@cognito:~$ vim /usr/local/globus-4.2.1/etc/globus_wsrf_mds_index/hierarchy.xml 
globus@cognito:~$ grep upstream $GLOBUS_LOCATION/etc/globus_wsrf_mds_index/hierarchy.xml

<!-- <upstream> elements specify remote index services that the local index
    Set an upstream entry for each VO index that you wish to participate in.

globus@cognito:~$ /etc/init.d/globus-ws-java-container restart
Stopping Globus container. PID: 18069
Container stopped
Starting Globus container. PID: 18405

Now I can run some index service clients and check that the registration worked:

cognito % setenv JAVA_HOME /usr/java/j2sdk1.4.2_10/
cognito % setenv ANT_HOME /usr/local/apache-ant-1.6.5/
cognito % setenv PATH $ANT_HOME/bin:$JAVA_HOME/bin:$PATH
cognito % host cognito has address
cognito % wsrf-query -s '/*' | grep | wc -l

So we've got seven entries in the remote index that reference our machine. That means our upstream registration was processed successfully. But what do those entries look like? Here's an example:

      <ns15:Address xmlns:ns15=""></ns15:Address>

It's hard to read, isn't it? That's an entry in choate that points to the GRAM4 service running on cognito that we just set up. Our life would be easier if we set up WebMDS to visualize the contents of the Index Service, so let's do that next.


Notice that I hadn't set up my java variables yet, but the GRAM client above worked just fine. That's because it's written in C, even though it interacts with the java container. Language neutrality is one of the features of webservices.

3.2. Configuring WebMDS

WebMDS has a dependency on the Tomcat container, so we'll install that now. The recommended version is 5.0.28, which is available from the Apache Tomcat website. We're following the standard install instructions from the WebMDS Admin Guide.

root@cognito:/usr/local# tar xzf jakarta-tomcat-5.0.28.tar.gz 
root@cognito:/usr/local# chown -R globus:globus jakarta-tomcat-5.0.28
root@cognito:/usr/local# cd /etc/grid-security
root@cognito:/etc/grid-security# ln -sf $GLOBUS_LOCATION/share/certificates certificates

Now the globus user can configure WebMDS:

globus@cognito:~$ vim $GLOBUS_LOCATION/lib/webmds/conf/indexinfo
globus@cognito:~$ grep choate /usr/local/globus-4.2.1/lib/webmds/conf/indexinfo
globus@cognito:~$ export CATALINA_HOME=/usr/local/jakarta-tomcat-5.0.28
globus@cognito:~$ $GLOBUS_LOCATION/lib/webmds/bin/webmds-create-context-file \
globus@cognito:~$ $CATALINA_HOME/bin/
Using CATALINA_BASE:   /usr/local/jakarta-tomcat-5.0.28
Using CATALINA_HOME:   /usr/local/jakarta-tomcat-5.0.28
Using CATALINA_TMPDIR: /usr/local/jakarta-tomcat-5.0.28/temp
Using JAVA_HOME:       /usr/java/j2sdk1.4.2_10/

That started Tomcat on port 8080, so now I can browse to the /webmds directory on that port of my machine (mine is behind a firewall, but you can visit your own). Now I can read the info stored in the index in a human-readable format. For instance, I can see this:

RFT	0 active transfer resources, transferring 0 files.
26.06 KB transferred in 2 files since start of database.

Those two RFT transfers were the one I ran by hand in the RFT section and the one that happened because my GRAM job used file staging. I can also see some information about my GRAM services:

GRAM	1 queues, submitting to 0 cluster(s) of 0 host(s).

If I click for details, I get:

Name: default
UniqueID: default
TotalCPUs: 1

This works because the GRAM and RFT services are configured to register into the local index service automatically. When we edited the hierarchy.xml file to point to choate, all of that information started to be cached centrally.