Frequently Asked Questions

What is High Performance Computing (HPC)?

From the first reference given below: “High Performance Computing most generally refers to the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer or workstation in order to solve large problems in science, engineering, or business.”

A high performance computer, sometimes called a supercomputer or a parallel computer, is one that has access to some number of computing resources, in particular processors. The computing resources are used together to solve a problem. They are connected by some network or bus, and they work together by passing messages over that network.

The individual resource could be a simple collection of processors similar to what you might have in a laptop computer. Another type of resource that is becoming popular is the GPU, or graphics processing unit. These are similar to the chips that drive a lot of video games.

The basic idea behind HPC is that if you have a problem that takes 64 hours on a single computer, then why not use 64 computers and run it in 1 hour? Or you may have a problem that does not fit on a single computer, so you could split it across several.

You may hear of a supercomputer being called a parallel computer. They are parallel because the processors work together in parallel. Parallel computing can be thought of as computing by committee, with all the same advantages and disadvantages. Each processor (committee member) works on a section of the problem. If the processors all have about the same amount of work to do, the committee approach may work well. If there is too much communication (too many committee meetings) and/or if, say, one processor is lagging behind the others, the calculation will be slowed.

An important point: the individual resources in a supercomputer might not be any more powerful than your laptop. What makes a supercomputer fast is using many such resources together. So if you have software that runs on your laptop, moving it to a supercomputer might not make it run any faster unless it is rewritten to take advantage of multiple resources.

Some References:

What is high performance computing – insideHPC
http://insidehpc.com/hpc-basic-training/what-is-hpc/
Overview of High Performance Computing
http://osage.mines.edu/~tkaiser/hpc_at_mines/tutorials/HPC-Overview.pdf
Tell me about supercomputing (SC)!

These posters were presented in the Mines booth at the SC16 International Conference for High Performance Computing held in Salt Lake City in November 2016.

We have pictures and a video of a walk-around of our booth here.

SC17 was in Denver November 12-17, 2017.

First
Second
Third
Fourth

How can I get help?
  1. Use our email hotline, hpcinfo! The HPC group responds as soon as possible to hpcinfo requests. Our service hours are 8am – 5pm M-F, with off-hours assistance at our discretion. We exist to facilitate researchers’ computational goals, and we all proudly take that mission seriously. Email us at hpcinfo@mines.edu!
  2. Consult the FAQs on our website!
    Our FAQ page offers additional useful links.
  3. Avail yourself of our plentiful Examples and Tutorials!
    • The “How do I do a simple build and run” section shows how to build and run a simple example. It is a good resource if you encounter problems during your research; you can check your approach by trying this example again;
    • The “local HPC tutorials” section has links to many tutorials;
    • See the “How do I connect?” section for information about connecting to HPC platforms;
    • If you are new to Linux then you might find the “I am new to linux, help!” section useful.
  4. For higher-level, targeted scenarios, examine our campus HPC-specific Tech Reports!
    The Mines HPC Group Tech Reports provide discussions of advanced, sometimes obscure, topics that you may find useful.
I am new to linux, help!

A computer operating system is a program, or rather a collection of programs, running on a computer that enables people to interact with it. It allows the computer to be controlled, it presents information to the user, and it permits the user to pass information and issue instructions.

Windows is an operating system. Apple’s OSX is an operating system, as is iOS on iPhones.

Linux is an operating system that is used on many high performance computing machines, as well as smaller computers. There are versions of Linux that use graphical user interfaces (GUIs) and those that just use a command line (typing) interface. Most of the interactions with our HPC platforms are via a command line interface.
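For example, a short command-line session might look like the following. This is only an illustrative sketch; the prompt, directory names, and files shown are placeholders and will differ on your machine.

[joeuser@petra ~]$ pwd
/home/joeuser
[joeuser@petra ~]$ ls
bins  scratch
[joeuser@petra ~]$ cd scratch
[joeuser@petra scratch]$ ls -l
total 0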
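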

After you get a feel for Linux you will be comfortable at just about any high performance computing site. You may be surprised to find yourself more comfortable using the lower-level features of the Mac’s OSX. As for Windows, you may feel a bit more comfortable there as well, or you may even want to start using Linux on your laptop.

There are many tutorials available on Linux. Here is a short list.

Tutorials:

General Interest:

We also have a rather extensive presentation developed locally. There are three versions available:

  1. a slide show,
  2. a PDF version that you can print, and
  3. a movie version.

You may want to print out the PDF version before watching the movie. The information can be found at:
http://geco.mines.edu/files/userguides/techReports/linux/index.html.

You may be interested in our local scripting tutorials “Advanced Scripts” also under the “How can I get help” section of our FAQs.

Finally, you may also be interested in the “How do I connect” section of our FAQs. It describes the basics of connecting to our HPC platforms as well as some advanced techniques to make your life easier, showing how you can “hop” from one machine to another without needing to enter a password.

Show me some local HPC tutorials:
Introduction to High Performance computing
What is high performance computing? Why is it of interest? When is it applicable, and when is it not? Overview of hardware.
Slides
Linux for HPC
A very fast-paced introduction to the operating system common to most HPC systems. Lots of tips and tricks. If you have only ever worked on a Windows machine, this session is a must.
Slides
Message Passing Interface (MPI) Introduction
The Message Passing Interface (MPI) is a message-passing library standard. MPI is the basis of most large-scale parallel HPC applications. This session provides a “hello world” introduction and a discussion of some of the most commonly used calls.
Slides 1, Slides 2, Slides 3
Message Passing Interface – Sample Applications
We will show how to build a “simple” MPI application.
Slides 1, Slides 2, Slides 3
OpenMP – Single node threaded applications
OpenMP specifies a collection of compiler directives, library routines, and environment variables that can be used to specify shared-memory parallelism in C, C++ and Fortran programs.
OpenMP
Batch Scripting for HPC
Shows a number of techniques and tricks for batch scripting for parallel jobs.
batch_slurm
batch_slurm the movie
Bag of Tasks / Embarrassingly Parallel / Large numbers of serial applications
Say you have a number of similar but independent jobs to run. This session shows you ways to do that.
Memory Profiling and Building for multiple architectures
Two unrelated short topics. First we will show subroutine calls for tracking memory usage and then talk about building applications that need to run on several generations of X86 chips.
Slides for both
Hybrid Applications and Thread Affinity
We will combine MPI and OpenMP to make a hybrid program. Also, we will show how to ensure that you are using all available cores.
Slides
Debugging
Introduction to the DDT program debugger
Slides
Introduction to GPUs and Machine learning (Running Tensorflow)
Discusses GPUs, GPU programming, and the in-demand TensorFlow framework for machine learning. (See the section below.)
Technical Session
Discussion of a technique for finding an optimal function F(x) such that F(x) closely matches a target function T(x) while having low curvature.
Libraries

Laptop software recommendations:

If you have a Linux laptop you should be good to go.

If you have a Macintosh, it is suggested that you install XQuartz. This will be needed for some of the GUI-based topics such as Debugging and Profiling.

Windows Laptop software recommendations:

If you run Windows on your laptop we have a set of recommendations for software. Each of these recommendations will give you various levels of functionality.

Easy install and basic functionality

Most difficult install — high functionality

This option gives you a nearly full Linux operating system running alongside Windows. The instructions under Bash on Ubuntu on Windows show how to install the base system. Unfortunately, the X Window system needed for running GUI-based programs is a separate install. One way to get the required components is to install Xming and XLaunch. Note: these can also provide X Window support for the PuTTY and Bitvise ssh clients, but we are not recommending either of those two clients at this time.

The following page discusses the setup of Xming. It also discusses PuTTY, which has been deprecated: http://www.geo.mtu.edu/geoschem/docs/putty_install.html

Relatively Easy install — good functionality — Easy to use

MobaXterm provides another Linux-like subsystem operating under Windows. It also adds GUI-based terminal connection tools, file transfer tools, and an editor. It also supports remote X Windows.

A few notes:

The free version works fine for most people. There are actually two free versions; the “Installer edition” is most likely the better choice.

The shortcut installed on the Windows desktop does not work. Delete it and start from the menu.

When you start MobaXterm, if you see the message “CygUtils not installed on your system,” follow the directions to install it. The plugin needs to be installed in the same folder as the MobaXterm program. You may need to save it to your desktop first and then drag it into your install directory.

 


Materials from older workshops and a guide
for setting up remote access to BlueM

Linux for HPC: Linux
HPC Overview: HPC-Overview.pdf
Overview of BlueM: newblue.pdf
MPI Part 1: mpi01.pdf
MPI Part 2: mpi02.pdf
Finite Difference Code in MPI (description): stoma.pdf
Finite Difference Code in MPI (basic versions): stomb.pdf
Finite Difference Code in MPI (advanced versions): stomc.pdf
OpenMP: openmp.pdf
Hybrid MPI/OpenMP: hybrid.pdf
Source Code for the Above Tutorials: examples
Full List of Tutorial Examples (very large, growing list): Examples
Fortran 90 for Fortran .le. 77 Programmers: Fortran 90
Batch Scripting for Parallel Systems (updated to include SLURM examples): Batch
Connecting to Mio/AuN/Mc2, Setting up Keys: Connecting

How do I connect?
General Overview:

We have three High Performance Computing (HPC) systems on campus: Mio, Mc2 (Energy), and AuN (Golden). This document describes how to log on to these systems once you have been granted an account. For information about how to get an account, see the “How do I get an account?” FAQ section.

After you have logged in please see the “How do I do a simple build and run” FAQ section to see how to build and run applications.

The only way to access the HPC platforms is by using ssh. Unix and Unix-like operating systems (OSX, Linux, Unicos…) have ssh built in. If you are using a Windows-based machine then you must use a terminal package that supports ssh, such as PuTTY (available from http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html). We have a description of how to connect using PuTTY from a Windows-based machine at: http://geco.mines.edu/ssh.

All of the HPC platforms are behind the campus firewall. The firewall blocks access from off campus, so you need to be on campus to get access, or you need to use the VPN software discussed on the CCIT VPN page. A third method for gaining access is discussed below under Setting up keys to make your life much easier. This method will allow you access for a fixed period of time without needing to reenter your password, transparently tunneling to AuN and Mc2.

  • Log in to Mio or AuN

Assuming you are on campus and you are using a machine that supports ssh directly, you can get to Mio or AuN by entering one of the following in a terminal window:

ssh mio.mines.edu

or

ssh aun.mines.edu

You will be asked for your password. The password required here is your MultiPass password. The session should look like the following with “joeuser” replaced with your username and “petra” replaced with the name of the machine from which you are connecting.

Mio

[joeuser@petra ~]$ ssh mio.mines.edu
joeuser@mio.mines.edu's password: 
*****************************
** For Mio questions email **
**    hpcinfo@mines.edu    **
*****************************
[joeuser@mio001 ~]$ 

AuN

[joeuser@petra ~]$ ssh aun.mines.edu
joeuser@aun.mines.edu's password: 
*****************************
** For AuN questions email **
**    hpcinfo@mines.edu    **
*****************************
[joeuser@aun ~]$ 


Setting up keys to make your life much easier:

Using ssh keys might make your life easier. This works from both on campus and off. Also, the procedure discussed below will allow you to log in while entering a passphrase only once every 8 hours.

The following is a quick guide for setting up keys and tunnels to access aun.mines.edu and mio.mines.edu from an on-campus Linux box or OSX (Mac) machine. The commands you will enter are shown in red. The procedure for setting up off-campus access via tunneling is similar, but the configuration file is different and there is an extra step; this is documented below. Note: non-Mines people are not allowed to tunnel into campus and must use VPN. After VPN is set up, off-campus users can use the procedure outlined for on-campus usage.

For Windows users, information on setting up PuTTY and tunneling with PuTTY can be found at
http://geco.mines.edu/ssh/
and
http://howto.ccs.neu.edu/howto/windows/ssh-port-tunneling-with-putty

Setting up access from an on campus Linux or OSX box:

Generate your key pair (do not use an empty passphrase):

osage:~ joeuser$ ssh-keygen -f $HOME/.ssh/forbluem -tdsa
Generating public/private dsa key pair.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /Users/joeuser/.ssh/forbluem.
Your public key has been saved in /Users/joeuser/.ssh/forbluem.pub.
The key fingerprint is:
67:60:3c:5e:42:64:23:c5:79:70:62:d1:da:74:97:45 joeuser@osage.mines.edu
The key's randomart image is:
+--[ DSA 1024]----+
|      .+@=.    +E|
|       *o++ . o  |
|        *=.. .   |
|       o.=.      |
|        S o      |
|         o       |
|                 |
|                 |
|                 |
+-----------------+
osage:~ joeuser$ 

Copy the public key to BlueM:

osage:.ssh joeuser$ cat ~/.ssh/forbluem.pub | ssh bluem.mines.edu "cat >> ~/.ssh/authorized_keys"

Copy the public key to Mio:

If you have an account on Mio then you will want to copy your new key there also, allowing you to log in using the same key.

osage:.ssh joeuser$ cat ~/.ssh/forbluem.pub | ssh mio.mines.edu "cat >> ~/.ssh/authorized_keys"

Add the following lines to your ~/.ssh/config file. Create one if it does not exist. Replace “joeuser” with your Mines username.

#Next 5 lines are optional if you don't do X-Windows.  The location of XAuthLocation might be different.
ForwardAgent yes
ForwardX11 yes
ForwardX11Trusted yes
XAuthLocation /Users/joeuser/.Xauthority
#XAuthLocation /opt/X11/bin/xauth
ServerAliveInterval 60
PubkeyAcceptedKeyTypes=+ssh-dss
AddKeysToAgent yes 

Host mio mio.mines.edu
HostName 138.67.132.244
User joeuser
Identityfile2 ~/.ssh/forbluem

Host aun aun.mines.edu
HostName aun.mines.edu
User joeuser
Identityfile2 ~/.ssh/forbluem

Note: You can run the following command on your local machine to get a copy of this template.

curl http://geco.mines.edu/prototype/How_do_I_connect/config_template -o config_template

Set the permissions on your config file:

chmod 600 ~/.ssh/config

Run the following to set an 8-hour limit on your key:

ssh-add -t 28800 ~/.ssh/forbluem

Log in to AuN or Mio using ssh:

ssh mio

This time you should not need to enter a password.


Setting up access from an off campus Linux or OSX box:

Generate your key pair (do not use an empty passphrase):

petra:~ joeuser$ ssh-keygen -f $HOME/.ssh/forbluem -tdsa
Generating public/private dsa key pair.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /Users/joeuser/.ssh/forbluem.
Your public key has been saved in /Users/joeuser/.ssh/forbluem.pub.
The key fingerprint is:
67:60:3c:5e:42:64:23:c5:79:70:62:d1:da:74:97:45 joeuser@osage.mines.edu
The key's randomart image is:
+--[ DSA 1024]----+
|      .+@=.    +E|
|       *o++ . o  |
|        *=.. .   |
|       o.=.      |
|        S o      |
|         o       |
|                 |
|                 |
|                 |
+-----------------+
petra:~ joeuser$ 

Copy the public key to jumpbox and set the permission for the keys file:

[joeuser@petra ~]$  cat ~/.ssh/forbluem.pub | ssh jumpbox.mines.edu "cat >> ~/.ssh/authorized_keys"
[joeuser@petra ~]$  ssh jumpbox.mines.edu "chmod 600 ~/.ssh/authorized_keys"

Add the following lines to your ~/.ssh/config file. Create one if it does not exist. Replace “joeuser” with your Mines username.

#Next 5 lines are optional if you don't do X-Windows.  The location of XAuthLocation might be different.
ForwardAgent yes
ForwardX11 yes
ForwardX11Trusted yes
XAuthLocation /Users/joeuser/.Xauthority
#XAuthLocation /opt/X11/bin/xauth
ServerAliveInterval 60
PubkeyAcceptedKeyTypes=+ssh-dss
AddKeysToAgent yes 

Host MIO
Hostname mio.mines.edu
User joeuser
ProxyCommand ssh jumpbox.mines.edu -W %h:%p
Identityfile2 ~/.ssh/forbluem

Host AUN
Hostname aun.mines.edu
User joeuser
ProxyCommand ssh jumpbox.mines.edu -W %h:%p
Identityfile2 ~/.ssh/forbluem


Host jumpbox jumpbox.mines.edu
Hostname jumpbox.mines.edu
User joeuser
Identityfile2 ~/.ssh/forbluem
#ControlMaster auto
#ControlPath   /Users/joeuser/.ssh/tmp/%h_%p_%r

Note: You can run the following command on your local machine to get a copy of this template.


curl http://geco.mines.edu/prototype/How_do_I_connect/config_template -o config_template

Run the following to set an 8-hour limit on your key:

ssh-add -t 28800 ~/.ssh/forbluem

This command should be run as needed to renew your key. You will enter the passphrase that you used to set up the key.

Log in to jumpbox using ssh:

ssh jumpbox

Copy your key from jumpbox to AuN and/or Mio.

Copy the public key to Mio. You should not need to set the permissions.

[joeuser@petra ~]$ cat ~/.ssh/forbluem.pub | ssh mio.mines.edu "cat >> ~/.ssh/authorized_keys"
[joeuser@petra ~]$ ssh mio.mines.edu "chmod 600 ~/.ssh/authorized_keys"

Do the same thing for AuN.

You should now be able to ssh directly to Mio or AuN from off campus using the capitalized machine names, AUN and/or MIO.

[joeuser@petra ~]$ ssh MIO
Last login: Thu Jul  5 11:58:57 2018 from 138.67.123.231
[joeuser@mio001 ~]$ 

Show me some Power8 and GPU examples!

Mio has two IBM Power 8 GPU enhanced nodes. Each node has 20 Power cores and two Nvidia K80 GPU cards, each with two GPUs. See the description: Power8Nodes.pdf

Building and running on these nodes is slightly different.

  1. There are several versions of MPI, one of which requires a special launch command.
  2. The vendor-supplied math library is ESSL/PESSL, not MKL.
  3. They have GPUs.

We have here examples showing the build and run procedures for these cases:

  1. Different versions of MPI
  2. PESSL
  3. FFTW3 examples
  4. GPU examples
  5. Machine Learning with both CPU and GPU examples

The document Threading on Power nodes discusses mapping of hybrid MPI/OpenMP programs to cores.

Show me some Machine Learning examples!

We have the IBM PowerAI machine learning framework available on Mio’s Power8 GPU enabled nodes. PowerAI release 3.4 provides software packages for several Deep Learning frameworks, supporting libraries, and tools:

  • Bazel
  • Caffe – BVLC, IBM, and NVIDIA variants
  • Chainer
  • DIGITS
  • NCCL
  • OpenBLAS
  • TensorFlow
  • Theano
  • Torch

Click here for additional information.

Machine Learning on Power8 and x86 nodes (deprecated)

The package Theano can be used for machine learning. It is available both on the regular Mio X86 nodes and on the Power 8 nodes. It runs well on the GPUs attached to the Power 8 nodes. We have some run scripts and some slightly modified examples from the Deep Learning Tutorial from the University of Montreal.

As you can see above, Theano is available as part of IBM PowerAI. It is advised that you use the Power 8 GPU enabled nodes to run machine learning codes. However, if you need to run on the Power 8 nodes and/or the X86 nodes, click here.

Show me some Intel Phi examples!

Node Configuration

Mio has two Intel Phi enhanced nodes. Each of the nodes has 4 Phi 5110P cards. Each Phi card contains 60 cores and 8GB of memory, and supports 240 threads. The configuration is diagrammed below.

diagram

An “information dump” for one of the cards is given here. Except for the device number and name the information is identical for each card.

If you are logged on to phi001 or phi002 you can reference the cards connected to that node, for the purpose of launching a job, as mic0, mic1, mic2, and mic3. They can also be referenced from mio or another node as phi001-mic0, phi001-mic1, phi001-mic2, and phi001-mic3, and phi002-mic0, phi002-mic1, phi002-mic2, and phi002-mic3.

The specifications of the card family can be found here along with a Product Brief.

Modes of operation

The cards can be run in several modes. They support:

  • MPI jobs
    1. On card
    2. Across multiple cards
    3. With phi00x participating with one or more cards
  • Threading (OpenMP)
  • MKL
    1. Programs that make calls to the MKL library running on the card
    2. Offload – programs running on phi00x making MKL calls that are actually run on the card
  • Offload
    1. Programs run on phi00x can call programs on the card
    2. Programs run on phi00x call subroutines to run on the card.

Examples of running in these modes can be found here.

To run the examples on Mio…

[joeuser@mio001 ~]$ mkdir dophi
[joeuser@mio001 ~]$ cd dophi
[joeuser@mio001 dophi]$ wget http://geco.mines.edu/prototype/Show_me_Intel_Phi_examples/source/phi.tgz
[joeuser@mio001 dophi]$ tar -xzf phi.tgz 
[joeuser@mio001 dophi]$ ls */README
basic/README  coi/README  directive/README  mpi_openmp/README
[joeuser@mio001 dophi]$ 

Then follow the instructions in the README files.

Some Links

How do I get an account?

Overview

We have two distinct HPC platforms at Mines:

  • Mio.mines.edu
  • BlueM.mines.edu

For machine details, see the “What HPC Platforms do we have?” section.

BlueM

BlueM actually has two separate compute platforms, AuN.mines.edu and Mc2.mines.edu. AuN (Golden) and Mc2 (Energy) share a file system and are accessed through the frontend machine, BlueM.

Mio

Mio is a shared resource funded in part by the Mines Administration and in part by money from individual researchers. Mio came online in March 2010. Initially it was a relatively small cluster dedicated to a single group of research projects. Mio quickly grew into a supercomputing-class machine, now bigger than AuN.

Mines funds provide the infrastructure; individual researchers can purchase compute nodes that are added to the cluster. The researchers own their nodes, that is, they have exclusive access when they need them. A number of the nodes were purchased using TechFee money so they belong to students.

Getting Accounts

Mio

A researcher who owns nodes on Mio can add people to the machine by emailing hpcinfo@mines.edu. Researchers who do not own nodes are not allowed to access Mio.

Students who are not currently working for a professor can also email hpcinfo@mines.edu. Students who are working for a professor are not allowed to get Mio accounts unless their professor owns nodes. This is to prevent a professor from getting free access to a machine for which others have paid.

Information about purchasing nodes can also be obtained via hpcinfo@mines.edu. The most recent (Feb 2016) specs for nodes are:

  • Supermicro, Inc. motherboard and enclosure;
  • 2 12-core Intel Xeon E5-2680 processors at 2.5 GHz for a total of 24 cores;
  • 64 Gbytes of RAM;
  • 2 TB disk;
  • FDR InfiniBand network;
  • Cost about $5,600.

BlueM

Access to BlueM is via a proposal process. We periodically have a call for proposals. In between calls researchers can still request an account by filling out the form at http://petra.mines.edu/proposal/index.html. Only faculty are allowed to request accounts. After the account is granted they can request that their students be authorized to have an account also.

How do I do a simple build and run?

This page shows you how to build and run a simple example on AuN, Mc2, or Mio. To run the example, copy/paste the text shown below in red:

If you would like a copy of the example files and you don’t have an account on one of Mines’ machines the files discussed here can also be obtained from:
http://hpc.mines.edu/bluem/quickfiles/example.tgz

While these examples were completed on Mio, the procedure is the same on Mc2 and AuN, with a minor exception noted below.

Note that the “makefile” and run scripts discussed here can be used as templates for other applications.

To run the quick start example, create a directory for your example and go to it.

[joeuser@mio001 bins]$  mkdir guide
[joeuser@mio001 bins]$  cd guide

Copy the file that contains our example code to your directory and unpack it.

[joeuser@mio001 guide]$  wget http://geco.mines.edu/prototype/How_do_I_do_a_simple_build_and_run/example.tgz
[joeuser@mio001 guide]$  tar -xzf *

If you like, do an ls to see what you have.

[joeuser@mio001 guide]$  ls
aun_script     docol.f90    helloc.c      makefile    mio1_script  phostname.c  
power_script   simple       slurm_script  color.f90   example.tgz  info.html  
mc2_script     out.dat      phostone.c    set_alias   simple_slurm

Special instructions for building and running on the ppc001 and ppc002 (Power) nodes of Mio.

Mio has two nodes, ppc001 and ppc002, that are based on IBM Power processors instead of the more common Intel x86 processor family. There are minor changes to the build and run procedures for these nodes. See the section about this below.


Next we want to ensure that your environment is set up to run parallel applications. The following two commands will give you a clean, tested environment:

[joeuser@mio001 guide]$  module purge
[joeuser@mio001 guide]$  module load StdEnv

Make the program:

[joeuser@mio001 guide]$ make
echo mio001
mio001
mpif90 -c color.f90
mpicc -DNODE_COLOR=node_color_  helloc.c color.o -lifcore -o helloc
rm -rf *.o

On AuN and Mc2 you need to supply an account number to run parallel applications. Mio does not require account numbers. So, next find out which accounts you are authorized to use on each machine:

[joeuser@aun002 auto]$  /opt/utility/accounts
Account
--------------------
science
test

If you run this command on Mio you will get:

[joeuser@mio001 guide]$  /opt/utility/accounts 
Accounts strings are not required on Mio

So, to run a parallel application on Mio you would do the following:

[joeuser@mio001 guide]$  sbatch  simple_slurm
Submitted batch job 1993

On AuN and Mc2 you add a -A option to the command line followed by the account string from the command given above.

[joeuser@aun001 guide]$  sbatch  -A test  simple_slurm
Submitted batch job 1993

If you receive the message shown below that means that the account you have specified has run out of time. Try another.

sbatch: error: Batch job submission failed: Job violates accounting/QOS policy 
(job submit limit, user's size and/or time limits)

If you quickly enter the command below you may see your job waiting to run or running. A status (“ST”) of “PD” implies that it is waiting; “R” means it is running.

[joeuser@mio001 guide]$  squeue -u $USER
  JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
   1993   compute   hybrid  joeuser PD       0:00      2 (Priority)

If this command returns no jobs listed then your job is finished. If the machine is very busy then it could take some time to run.

When the job is complete there will be an output file in your directory whose name starts with the word “slurm”, then contains the jobid from the sbatch command, followed by “.out”.

For example:

[joeuser@mio001 guide]$  ls slurm*
slurm-722122.out

This simple test program is a glorified parallel “hello world” program. You will see 16 lines that start with the name of the node on which you are running, followed by the MPI task id, which should be in the range 0-15, and then the number 16, which is the number of tasks you are running. Next we have a number which will be either 0 or 8. This is the MPI task number of the lowest task running on a node.

You will also see two additional lines that contain basically the same output described above but prefixed with the words “First task”. There is one such line output per node.

The command cat slurm*.out will show you the output of the job. To see your output in a nice order you can use the sort command:

[joeuser@mio001 guide]$  sort slurm*.out  -k1,1 -k2,2n | grep 16
compute028 0 16 0
compute028 1 16 0
compute028 2 16 0
compute028 3 16 0
compute028 4 16 0
compute028 5 16 0
compute028 6 16 0
compute028 7 16 0
compute029 8 16 8
compute029 9 16 8
compute029 10 16 8
compute029 11 16 8
compute029 12 16 8
compute029 13 16 8
compute029 14 16 8
compute029 15 16 8
First task on node compute028 is 0 16 0
First task on node compute029 is 8 16 8

Just to note, the sort option -k1,1 sorts on the first field in the output. The next option, -k2,2n, sorts on the second field numerically. The grep command filters out every line that does not contain “16”, giving us only the lines of interest.

Congratulations, you have run your first supercomputing program.

The script complex_slurm runs the same program but it adds a number of features to the run. It first creates a new directory for your run, then goes to it and runs your program there.

The script threads_slurm shows how to run a hybrid MPI/OpenMP program. The program it runs is /opt/utility/phostname. This is again a glorified “hello world” program that also prints thread ID. Note the source for this program is included in the directory and it can be made using the command make phostname.

Queue and Partition Information

On Mio, individual research groups own nodes. They have priority access to their nodes. You request priority access to your nodes by specifying a partition. Please ask your PI or instructor which partition you should be using on Mio.

On AuN there is a debug partition which allows for short small jobs, no more than 15 minutes and up to 4 nodes.

Add the string -p PARTITION_NAME to your sbatch command line. For example:

[joeuser@aun001 guide]$  sbatch  -A test  -p debug simple_slurm
Submitted batch job 1993

Special instructions for building and running on the ppc001 and ppc002 (Power) nodes of Mio

Mio has two nodes, ppc001 and ppc002, that are based on IBM Power processors instead of the more common Intel x86 processor family. It is not possible to build applications for these nodes on the Mio headnode. You must launch an interactive session on one of these two nodes to build applications for them. An interactive session can be launched by running the command:

[joeuser@mio001 guide]$ srun -N 1 --tasks-per-node=1 -p ppc-build --share --time=1:00:00  --pty bash
[joeuser@ppc002 guide]$ 

Note that the prompt has changed to ppc002 or ppc001 to show that you are now on the Power nodes.

Alternatively, the file set_alias creates an alias p8 for this command. You can do a

source set_alias
p8

You may want to add this alias to your .bashrc file so it is available every time you login.
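If you prefer to define the alias yourself, it would look something like the following, based on the srun command shown above. Treat this as a sketch; the exact contents of set_alias may differ.

alias p8='srun -N 1 --tasks-per-node=1 -p ppc-build --share --time=1:00:00 --pty bash'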

Running this command is a little different from doing an ssh. In particular, you are placed in the directory from which you launched the command instead of your home directory.

Also, if the nodes are busy running batch jobs you may not get the interactive session immediately.

After you have obtained the interactive session you proceed as shown above.

We want to ensure that your environment is set up to run parallel applications. The following two commands will give you a clean, tested environment:

[joeuser@mio001 guide]$  module purge
[joeuser@mio001 guide]$  module load StdEnv

Make the program:

[joeuser@mio001 guide]$ make
make
mpicc -DNODE_COLOR=node_color_  helloc.c color.o -lgfortran -lmpi_mpifh -o helloc
rm -rf *.o

At this point you should exit your interactive session by entering exit.

[joeuser@ppc002 guide]$ exit
exit

So, to run a parallel application on the Mio Power nodes you would do the following:

[joeuser@mio001 guide]$  sbatch -p ppc power_script
Submitted batch job 1299071

The option -p ppc forces your job to run on the Power nodes. This can also be specified in the script.

Click here to see typical output.

There are a few special requirements for scripts for the Power nodes. Here is a slightly edited version of the run script with the important differences in red.

Script for Mio Power Nodes

#!/bin/bash 
#SBATCH --job-name="hybrid"
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --ntasks=4
##SBATCH --exclusive
#SBATCH --time=00:05:00
#SBATCH -p ppc
#SBATCH --export=NONE
#SBATCH --get-user-env=10L


# Go to the directory from
# which our job was launched
cd $SLURM_SUBMIT_DIR

module purge
module load StdEnv

srun --mpi=pmi2 --export=ALL ./helloc

Explanation of the differences:

#SBATCH -p ppc

Forces the job to run on the Power nodes

#SBATCH --export=NONE

Prevents the environmental variables set on Mio from being used in the script

#SBATCH --get-user-env=10L

Sets up an environment similar to what you would get if you logged on to the node.

--mpi=pmi2

Required to get the proper setting for MPI

--export=ALL

Makes all of the Power node environment variables available to your program

Example files:

info.html: this list
example.tgz: all of the files in this directory
color.f90: part of the hello world example
docol.f90: part of the hello world example
helloc.c: part of the hello world example
phostname.c: source for /opt/utility/phostname
phostone.c: same as phostname.c but pure C
out.dat: example output from phostname with help file
makefile: makefile for the examples
slurm_script: a fancy run script (same as mio1_script)
aun_script: a fancy run script (same as mio1_script)
mio1_script: a fancy run script (same as aun_script)
mc2_script: a fancy run script with a few extras for Mc2
simple_slurm: runs on all platforms except the ppc001 and ppc002 nodes of Mio
power_script: script for running on the ppc001 and ppc002 nodes of Mio
set_alias: an alias for a command to get an interactive session on the ppc001 and ppc002 nodes of Mio
What is the Machine Status?

Ganglia

Ganglia is a scalable distributed monitoring system for high-performance computing systems such as clusters and Grids.

BlueM Ganglia
Mio Ganglia

Running Jobs

The following links show a web page displaying the running jobs (same info as the command line tool).

AuN Jobs
Mc2 Jobs
Mio Jobs

Node Usage

The following links show a web page displaying each node’s status (same info as the command line tool).

BlueM Nodes
Mio Nodes

What HPC platforms do we have?

What do we have?

General Overview

We have three High Performance Computing (HPC) systems on campus: Mio, Mc2 (Energy), and AuN (Golden). Mc2 and AuN are collectively known as BlueM.

BLUEM OVERVIEW

BlueM is a unique high performance computing system from IBM. The overall specifications are:

Feature: Value
Teraflop rating: 154 teraflops (roughly 7x RA)
Memory: 17.4 terabytes
Nodes: 656
Cores: 10,496
Disk: 480 terabytes

One of the unique characteristics of this machine is its small footprint, both in physical size and in energy usage. It requires only 85 kW. The machine occupies a total of five racks: three compute racks, a management rack, and a file system rack.

The IBM supercomputer is also unique in configuration. It contains two independent compute partitions that share a common file system. The combined compute partitions and their file system are known collectively as BlueM.

The two partitions are built using different architectures. The first partition, known as Mc2 (Energy), runs on an IBM BlueGene Q (BGQ). The second partition, known as AuN (Golden), uses the iDataPlex architecture. Each of the architectures is optimized for a particular type of parallel application.

MC2 (ENERGY)

Mc2, the IBM BlueGene Q is designed to handle programs that can take advantage of large numbers of compute cores. Also, the BGQ is designed to run applications that use multiple levels of parallelism, such as combining threading and message passing. Multilevel parallelism is expected to be the dominant paradigm in the future of HPC. Our BGQ contains 512 nodes with each node having 16 cores. It has 8.192 terabytes of memory and a peak computational rate of 104.9 teraflops. The BGQ rack is currently half populated. That is, there is room for an additional 512 nodes within the same cabinet.

Specification: Features
Blue Gene Q: New architecture
PowerPC A2 17 Core: Designed for large core count jobs
512 Nodes: Highly scalable
8,192 Cores: Multilevel parallelism, the direction of HPC
8,192 Gbytes: Room to grow
104 Tflops: Future-looking machine

The processors on the Blue Gene are in a different family from those on RA and Mio, and the network is significantly different. Code will need to be recompiled to run on Mc2. We have below two lists of programs and libraries that have been built on other Blue Gene machines. Some of the listed items are from an earlier model of the Blue Gene, the “P”. Some listed items have been ported, but a high level of optimization has not been performed.

AuN (Golden)

AuN, based on the IBM iDataPlex platform, is designed to handle applications that may require more memory per core. The nodes employ the x86 Sandy Bridge generation architecture. Each of the 144 nodes has 64 gigabytes of memory and 16 processor cores, for a total of 2,304 cores and 9.216 terabytes of memory.

AuN uses the same compiler suite as Mines’ Mio supercomputer. Many applications that are being run on these machines today could run on the new machine without a recompile. However, because of the updated processor instruction set available on the new machine, we would expect improved performance with a recompile.

Specification: Feature
iDataPlex: Latest generation Intel processors
Intel 2 x 8-core Sandy Bridge: Large memory per node
144 Nodes: Common architecture
2,304 Cores: Similar user environment to RA and Mio
9,216 Gbytes: Quickly get researchers up and running
50 Tflops

MIO OVERVIEW

The machine Mio.mines.edu represents a new concept in computing at Mines. Mio is a shared resource funded in part by the Mines Administration and in part by money from individual researchers. Mio came online in March 2010. Initially it was a relatively small cluster dedicated to a single group of research projects. We expect that Mio will quickly grow into a supercomputing-class machine.

Concept

Supercomputing has become an important part of engineering and scientific research. Most current generation supercomputers actually consist of a collection of compute nodes, with each node containing several compute cores. Such machines are often called clusters. A typical cluster supercomputer might have hundreds to thousands of compute cores. The individual compute cores work on the same computation simultaneously. The compute nodes and cores communicate with each other via a high speed network. The nodes are normally housed in a rack with infrastructure such as the communications hardware, management nodes, network connections, and power supplies.

The Mio concept is simple. Mines funds provide the infrastructure discussed above and individual professors purchase compute nodes that are added to the cluster. The professors own their nodes, that is, they have exclusive access when they need them. When they are not in use by the owners the nodes are available for use by others.

Mio will be managed by the High Performance Computing group at Mines. The advantage of Mio for the professors is that they:

  • Don’t need to manage resources;
  • Have full access to their resource;
  • Have access to other professor’s resources;
  • Get the infrastructure provided by the school for free. This includes the InfiniBand network, which will greatly improve the scalability of multinode applications.

What’s in a name?

The name “Mio” is a play on words. It is a Spanish translation of the word “mine” as in belongs to me, not the hole in the ground. The phrase “The computer is mine.” can be translated as “El ordenador es mío.”

Financial Considerations

The Mines Administration has purchased the initial infrastructure for Mio at a cost of roughly $19,000. Professors can purchase nodes at a cost of $5,500-$6,600. These nodes contain high-end processors with 16 cores and are populated with 4 Gbytes of memory per core, or 64 Gbytes per node.

Initial Configuration

Initially, Mio consisted of a Relion 2701 Head Node, 2 Relion 1702 Twin Compute Nodes (each Relion 1702 contains 2 nodes in a 1U enclosure), InfiniBand and Ethernet connectivity, power supplies, and a single enclosure rack. Each of the compute nodes contained two Intel 5570 Nehalem processors running at 2.93 GHz. Each Intel 5570 Nehalem processor contains 4 cores. There were a total of 4 nodes x 2 processors per node x 4 cores per processor = 32 cores. For a complete machine description click on the Configuration link.

Current Compute Node Configuration (Updated 07/14/17):

  • Supermicro SuperServer 6018TF
  • 2 x Intel Xeon E5-2680v4 14 Core 2.4 GHz Processor (28 core total)
  • 256GB (DDR4 2133 2Rx4 ECC REG DIMM )
  • 2 TB Seagate Enterprise Class SATA HDD, 7200 RPM
  • Mellanox ConnectX-3 FDR IB PCIe 1-port

That is, each node contains 2 x Intel 14-core processors for a total of 28 cores, along with 256 GB of memory, 2 TB of internal disk, and an FDR InfiniBand connection. These come grouped in a 2-node box and the cost per node is about $6,600. With 64 GB of memory the cost is about $5,300.

Mio also contains a number of “special” nodes. It has two Intel Phi nodes, three x86 nodes with GPUs and two IBM Power8 nodes with K80 GPUs. The IBM nodes have 20 cores, supporting 160 threads across 256 GB of memory. They have two K80 cards each with two GPUs. We have a number of specific examples for these nodes; start with these sections of the FAQs:

  1. “Show me some Power8 and GPU examples!”
  2. “Show me some Machine Learning examples!”

With the purchase of a Mio node you gain several advantages. You will not need to manage the node. You have the infrastructure provided by the school, including the InfiniBand network, which will greatly increase the scalability of your multinode applications. You will gain the option of using other people’s nodes when they are not in use. To purchase a node or get pricing information email Dr. Timothy Kaiser tkaiser@mines.edu.

Who owns nodes on Mio and what are their specs?

Mio Block Diagram

Current Mio Configuration

Owner | Department | Reference | Nodes
Brennecka, Geoff | Metallurgical & Materials Eng. | gbrennec | compute[198-201]
Brune, Juergen | Mining Eng. | jbrune | compute[032-033], compute[036-037], compute[100-101]
Carr, Lincoln | Physics | lcarr | compute024, compute[062-067], compute[073-077], compute[128-129], compute[172-173], compute196
Ciobanu, Cristian | Mechanical Eng. | cciobanu | compute054, compute[090-091]
Durfee, Chip | Physics | cdurfee | compute[176-177]
Eberhart, Mark | Chemistry | meberhar | compute[194-195]
Ganesh, Mahadevan | Applied Mathematics & Statistics | mganesh | compute[056-059], compute061, compute[160-167], gpu003
Gomez Gualdron, Diego | Chemical & Biological Eng. | gualdron | compute[180-191], compute197
Gregg, Karen (Leiderman) | Applied Mathematics & Statistics | kleiderman | compute025, compute[178-179]
Kaiser, Tim | HPC | hpc | compute[078-079], compute[084-089], compute[192-193], ppc[001-002]
Kappes, Branden | Mechanical Eng. | bkappes | compute[174-175]
Kazemi, Hossein | Petroleum Eng. | hkazemi | compute080
Lusk, Mark | Physics | mlusk | compute[038-039], compute[092-093], compute[126-127]
Mooney, Mike | Civil & Environmental Eng. | mooney | compute[049-050]
Newman, Alexandra | Mechanical Eng. | anewman | compute055
Packard, Corinne | Metallurgical & Materials Eng. | cpackard | compute125
Pankavich, Stephen | Applied Mathematics & Statistics | pankavic | compute026, compute124
Sava, Paul | Geophysics | psava | compute083, compute103, compute[105-112], compute[114-121], compute[136-159]
Shragge, Jeffrey | GEOP | geop | compute[000-011]
Sullivan, Neal | Mechanical Eng. | nsulliva | compute[122-123], compute[132-135]
Sum, Amadeu | Chemical & Biological Eng. | asum | compute[051-052], compute[094-099]
Taylor, Pat | Metallurgical & Materials Eng. | prtaylor | gpu004
Thomas, Brian | Mechanical Eng. | bgthomas | compute[168-171]
Tilton, Nils | Mechanical Eng. | ntilton | compute[130-131], compute[202-203]
Tucker, Garritt | Mechanical Eng. | tucker | compute[204-219]
Tura, Ali | RCP | rcp | compute[012-023]
Vyas, Shubham | Chemistry | svyas | compute[040-041], compute[043-045], compute[068-072]
Zimmerman, Jeramy | Physics | jdzimmer | compute027

The commands:

/opt/utility/slurmnodes -fAvailableFeatures -fRealMemory | /opt/utility/jlines 3

and

sinfo -a

will return the number of cores, memory, and ownership information for the nodes.

compute[030-031]
2x(Intel X5570) 8 cores 2.93 GHz 24 GB
compute[032-033,036-041,043-045,049-052,054-059,061]
2x(Intel X5670) 12 Cores 2.93 GHz 24 GB
compute[062-081,083-101]
2x(Intel X5675) 12 cores 3.06 GHz 24 GB
compute[102-103,105-112,114-125]
2x(Intel e5-2680) 16 Cores 2.70 GHz 64 GB
compute[126-131]
2x(Intel e5-2690) 20 Cores 2.70 GHz 64 GB
compute[132-173]
2x(Intel e5-2680) 24 Cores 2.50 GHz 64 GB
compute[174-179]
2x(Intel e5-2680) 24 Cores 2.50 GHz 256 GB
compute197
2x(Intel e5-2680 V4) 28 Cores 2.40 GHz 64 GB
compute[000-027,180-196,198-219]
2x(Intel e5-2680 V4) 28 Cores 2.40 GHz 256 GB
gpu003
2x(Intel X5670) 12 Cores 2.93 GHz 48 GB, 3 x Fermi GPUs
gpu004
Skylake Gold 6130 16 Cores 2.1 GHz 192 GB, Pascal GPU

 

How do I use the file system?

Important notes:

  • The file system on HPC platforms is provided by the school. No individual group owns any portion of the file system.
  • The file system is shared by all groups.
  • No group or user will be allowed to jeopardize the access to HPC platforms by abusing the file system.
  • Backups are not done of users’ data.

Each user has three base directories, which can be accessed either by their name or by their environmental variable:

Directory Environmental variable
Your home directory $HOME
$HOME/bins $BINS
$HOME/scratch $SCRATCH

In addition a group may have a $SETS directory which is designed for semipermanent data sets that will be used repeatedly by the group. $SETS can contain things like equations of state or velocity fields. It may also contain programs used by multiple members of a group. $SETS will be readable on the compute nodes. Not all groups have $SETS directories.

$HOME – Should be kept very small, holding only startup scripts and other simple scripts. Output from parallel jobs cannot be directed to $HOME. It should only be read from compute nodes.

$BINS – Should contain programs users have built for personal use, small data sets, and run scripts. Output from parallel jobs cannot be directed to $BINS. It will be read-only from compute nodes.

$SCRATCH – The main area for running applications. Output from parallel runs should be directed to this directory.
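For example, a typical workflow is to create a per-job directory under $SCRATCH and run from there. This is only a sketch; the program and script names are placeholders.

[joeuser@mio001 ~]$ mkdir -p $SCRATCH/myrun
[joeuser@mio001 ~]$ cd $SCRATCH/myrun
[joeuser@mio001 myrun]$ cp $BINS/myprogram $BINS/my_script .
[joeuser@mio001 myrun]$ sbatch my_script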

FILE SYSTEM QUOTAS

Machine   $SCRATCH          $HOME + $BINS (Combined Total)
AuN/Mc2   2,000,000 Files   20 GB
Mio       2,000,000 Files   20 GB

Note: most unix-style file systems will see a performance decrease as the number of files per directory increases. This will be noticeable as the number of files per directory gets into the hundreds, and it will cause a performance hit for all users when someone accesses files in a directory that contains a large number of files. Please keep the number of files per directory reasonable.
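If you are not sure how many files a directory holds, you can count them with standard commands, for example (using the placeholder directory from the sketch above):

[joeuser@mio001 ~]$ cd $SCRATCH/myrun
[joeuser@mio001 myrun]$ ls -1 | wc -l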

The organizational structure of the file system is the same on Mio, AuN and Mc2; however, Mio has its own file system while AuN and Mc2 actually share the same file system. Also, from BlueM it is possible to see the AuN/Mc2 file system and the Mio file system. Technically we say that BlueM mounts the AuN/Mc2 file system and it mounts the Mio file system.

GETTING AROUND THE VARIOUS FILE SYSTEMS

When you first login to AuN or Mc2 you will see that you have the directories:

On AuN:
bins scratch mc2
On Mc2:
bins scratch aun

The Mc2 directory on AuN is a link to your home directory on Mc2 and the AuN directory on Mc2 is the reverse.

Scratch is shared directly across AuN and Mc2. This is where runs should be done, not in your home directory. The bins directory is distinct on the two machines. Files created in bins on Mc2 are not in bins on AuN. The bins directory is where you should store applications that you build.

When you log in to BlueM you will see a directory remote that contains:

remote/aun/:
bins home scratch
remote/mc2/:
bins home scratch

And possibly

remote/mio/:
bins home scratch

These “remote” directories have links to bins, home and scratch on the given machine. Thus, to copy a file from your desktop machine to AuN you only need to copy it to remote/aun on BlueM. The same holds for remote directories for Mc2 and Mio.

If you have an account on Mio the remote directory on BlueM will contain subdirectories for Mio. Thus it is possible to move files among Mio, AuN and Mc2 by doing a “cp”.
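As a sketch, assuming you have a BlueM account and replacing the file name with your own, copying a file from your desktop machine to your AuN scratch directory might look like:

[joeuser@petra ~]$ scp mydata.txt joeuser@bluem.mines.edu:remote/aun/scratch/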

On Mio, by default, you have only bins and scratch directories. There is no remote directory.

How do I run?

Loading modules

Coming soon.

Scripts (simple)

Please see the “How do I do a simple build and run?” section for an example of a simple script.

Running scripts

Coming soon.

Seeing what’s running

The command squeue shows what is currently running and the command sinfo shows what nodes are in use. This section will be expanded with additional information shortly.

Advanced scripts

Please see the “I want to run complex scripts; any advice?” section for examples of advanced scripts.

Managing jobs

The command scancel followed by a JOBID will delete that job. Please note that it can take a few minutes for a job to be removed from the list of jobs shown by squeue.
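For example, to cancel the job with JOBID 1993 shown earlier and then confirm that it is gone:

[joeuser@mio001 ~]$ scancel 1993
[joeuser@mio001 ~]$ squeue -u $USER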

I want to run complex scripts; any advice?

The scheduler we use on our HPC platforms is SLURM. You may want to look at the documentation at: http://slurm.schedmd.com/documentation.html

We have a tutorial on scripting at: http://geco.mines.edu/scripts. Subjects include:

    • Bash useful concepts
    • Basic Scripts
    • Using Variables in Scripts
    • Redirecting Output, getting output before a job finishes
    • Getting Notifications
    • Keeping a record of what you did
    • Creating directories on the fly for each job
    • Using local disk space

Multiple jobs on a node

      • Sequential
      • Multiple scripts – one node
      • One Script – different MPI jobs on different cores

Mapping tasks to nodes

      • Less than N tasks per node
      • Running on heterogeneous nodes using all cores
      • Different executables working together
      • Hybrid MPI/OpenMP jobs (MPI and Threading)

Chaining jobs

    • Job dependencies
    • Jobs submitting new jobs

The HPC Tech Reports page at: http://geco.mines.edu/files/userguides/techReports has a link to:

Chaining jobs in Slurm and dealing with script errors

This note discusses how you can set up dependencies in Slurm jobs so that a second job waits for a first to finish before automatically starting. In particular, it shows how to set things up so that if the first job fails, the second will not start.
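As a minimal sketch of a job dependency (see the tech report above for the full discussion, including handling script errors), the --parsable option makes sbatch print just the job ID, which can then be passed to --dependency. The script names here are placeholders:

[joeuser@mio001 ~]$ jobid=$(sbatch --parsable first_slurm)
[joeuser@mio001 ~]$ sbatch --dependency=afterok:$jobid second_slurm

With afterok, the second job starts only if the first finishes successfully.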

What prebuilt apps and libs do we have? The Module System

General Overview

HPC@MINES MODULE SYSTEM

HPC@Mines has a module system. The module system allows setting up the environment for running applications using one or two simple commands. Module commands can be run from the command line or they can be placed in your .bashrc file. The primary module command is

module load Name_of_module_to_load

This would load a module, which sets your environment to run some application. This typically would involve changing your PATH environmental variable and possibly your LD_LIBRARY_PATH variable. There are also modules for setting up one of several different programming environments.

Module loads “go away” when you log out. That is, you need to load modules every time you log in, or put the module load commands in your .bashrc file so they are run automatically when you log in.

It is important to load only the modules you need. If, for example, you were to load every module, your interactive session would not work properly because key environmental variables would be overloaded.

Most nonstandard Linux applications on our machines have modules associated with them.
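For example, a typical session might look like the following; replace Name_of_module_to_load with an actual module name taken from the module avail output:

[joeuser@mio001 ~]$ module purge
[joeuser@mio001 ~]$ module load StdEnv
[joeuser@mio001 ~]$ module avail
[joeuser@mio001 ~]$ module load Name_of_module_to_load
[joeuser@mio001 ~]$ module list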

AVAILABLE MODULES

There are two ways to see the available modules: on a web page, or by running the module avail command.

Links to list of available modules:

The module avail command:

Running the command

module avail

on the machine in question will give you a current list.

MODULE Notate Bene and FAQs:

RESETTING YOUR ENVIRONMENT:

Running the commands

module purge
module load StdEnv

will reset your environment to a known simple working state.

RESOLVING PYTHON MODULE ISSUES:

The information below describes a common issue with Python modules:

As a general rule, and as displayed above, HPC recommends doing a module purge, then loading the StdEnv module into your environment. The StdEnv module in turn loads the following modules:

  1. PrgEnv/intel/15.0.090
  2. PrgEnv/mpi/openmpi/intel/1.6.5
  3. PrgEnv/python/gcc/3.4.3

With regard to Python, after loading StdEnv, version 3.4.3 is now available to you. This is the most recent version accessible on Mio, and requires the command “python3” at the prompt to run. By default, version 2.6.6 (the system version) is in your path; the command ‘python’ will run version 2.6.6. The significance of this setup is that the system version of Python (2.6.6) is kept clean, while later versions (which require the appropriate module be loaded to the environment) include non-standard Python modules.

Another salient point is that the StdEnv module forces the loading of an Intel compiler module (see the list above). This module links the MKL libraries into the environment, which are required by all Python versions. An error such as ‘ImportError: libmkl_rt.so: cannot open shared object file: No such file or directory’ implies that the MKL libraries made accessible by the Intel module are most likely missing from your environment.
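To check which Python versions you are getting after loading StdEnv, something like the following should work; the exact version numbers reported may differ:

[joeuser@mio001 ~]$ module purge
[joeuser@mio001 ~]$ module load StdEnv
[joeuser@mio001 ~]$ python --version
Python 2.6.6
[joeuser@mio001 ~]$ python3 --version
Python 3.4.3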

How can I run better?

Important man pages

Intel C compiler
icc
Intel Fortran compiler
ifort
Portland Group C compiler (power version)
pgcc
Portland Group Fortran compiler (power version)
pgfortran
IBM C compiler (power version)
xlc
IBM Fortran compiler (power version)
xlf90
sbatch – Submit a batch script to Slurm.
sbatch
scancel – Used to signal Slurm jobs
scancel
sinfo – view information about Slurm nodes and partitions.
sinfo
squeue – view information about jobs located in the Slurm scheduling queue.
squeue
srun – Run parallel jobs
srun

Tech Reports

We have a collection of longer articles that describe aspects of high performance computing. This includes:

  1. FFTs and other wrapper library calls available in MKL 03/31/15
  2. Chaining jobs in Slurm and dealing with script errors 03/31/15
  3. OpenMP threading on Mio and AuN 04/01/15
  4. Qbox – Hybrid MPI/threading on Mc2 04/16/15
  5. Quantum Espresso – Optimization on Mc2 06/04/15
  6. Linux for High Performance Computing 06/09/15
  7. Threading on Power Nodes 01/10/17

Debugging

For now, we provide links to descriptions on ways to help you debug programs. The first link is for a page that discusses command line options you can use when you build your applications to try to help track down problems. The second link discusses the steps necessary to debug a program using the Allinea ddt debugger.

  1. Command line options for debugging
  2. Starting the DDT debugger
    DDT User Guide
    Movie of ddt starting under X
    Movie of ddt starting the remote client

Optimization

Determining where your program spends its time is an important part of source-code-level optimization. We have a number of slides and a short video that show how to get started with the Allinea MAP profiler.

  1. Starting the MAP profiler
    MAP User Guide

There are many optimizations that can be performed simply by selecting compile line options. We have full compiler documentation available on campus. (Note the pages listed below will not open off campus.)

  1. Intel compiler and library Documentation
  2. Portland Group compiler, debugger, profiler, and OpenACC documentation
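
As a rough sketch (these flags are common examples, not a recommendation for any particular code), an optimized build might look like:

# Intel: aggressive optimization, tune for the host processor,
# and enable interprocedural optimization
ifort -O3 -xHost -ipo example.f90 -o example

# Portland Group: -fast enables a bundle of common optimizations
pgfortran -fast example.f90 -o example
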
How do I select MY nodes?

Reservations, Node Selection, Interactive Runs

Reservations on AuN are not currently supported.

Reservations on Mio

Reservations are no longer required on Mio to evict people from your nodes. In the past people would set a reservation for their nodes and in doing so purge jobs from users not belonging to their group. Now, people need only run the job, selecting to run in their group’s partition. See Selecting Nodes on Mio and Running only on nodes you own below.

Selecting Nodes on Mio

There are two ways to manually select nodes on which to run. They can be listed on the command line or by selecting a partition. The “partition” method is discussed in the next section.

We have below a section of the man page for srun command describing how to specify a list of nodes on which to run:

-w, --nodelist=<host1,host2,... or filename>
    Request a specific list of hosts. The job will contain at least these hosts.
    The list may be specified as a comma-separated list of hosts, a range of hosts
    (compute[1-5,7,...] for example), or a filename. The host list will be assumed to
    be a filename if it contains a "/" character. If you specify a max node count
    (-N1-2) and there are more than 2 hosts in the file, only the first 2 nodes will
    be used in the request list. Rather than repeating a host name multiple
    times, an asterisk and a repetition count may be appended to a host name. For
    example "compute1,compute1" and "compute1*2" are equivalent.

Example: running the script myscript on compute001, compute002, and compute003…

[joeuser@mio001 ~]sbatch --nodelist=compute[001-003]  myscript

Example: running the “hello world” program /opt/utility/phostname interactively on compute001, compute002, and compute003…

[joeuser@mio001 ~]srun --nodelist=compute[001-003]  --tasks-per-node=4 /opt/utility/phostname
compute001
compute001
compute001
compute001
compute002
compute002
compute002
compute002
compute003
compute003
compute003
compute003
[joeuser@mio001 color]$ 

Running only on nodes with particular features such as number of cores

There are several generations of nodes on Mio, each with different “features.” You can see the features by running the command:

[joeuser@mio001 ~]/opt/utility/slurmnodes -fAvailableFeatures
compute000
   Features core8,nehalem,mthca,ddr
compute001
   Features core8,nehalem,mthca,ddr
...
compute032
   Features core12,westmere,mthca,ddr
compute033
   Features core12,westmere,mthca,ddr
...
compute157
   Features core24,haswell,mlx4,fdr
...
...

Features can be used to select subsets of nodes. For example, if you want to run on nodes with 24 cores you can add the option --constraint=core24 to your sbatch command line or script.

[joeuser@mio001 ~]sbatch --constraint=core24 simple_slurm 
Submitted batch job 1289851
[joeuser@mio001 ~]

Which gives us:

[joeuser@mio001 ~]squeue -u joeuser
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
           1289851   compute   hybrid  joeuser  R       0:01      2 compute[157-158]
[joeuser@mio001 ~]

Running only on nodes you own (or in a particular partition)

Every normal compute node (exceptions are GPU and PHI nodes) on mio is part of two partitions or groupings. They are part of the compute partition and they are part of a partition that is assigned to a research group. That is, each research group has a partition and their nodes are in that partition. The GPU and PHI nodes are in their own partition to prevent people from accidentally running on them.

You can see the partitions that you are allowed to use (compute, phi, gpu, and your group’s partitions) by running the command sinfo. sinfo -node will display which partitions you are allowed to run in. sinfo -a will show all partitions. sinfo -a --format="%P %N" shows a compact list of all partitions and nodes.

Add the option -p partition_name to your srun or sbatch command to run in the named partition. The default partition is compute, which contains all of the normal nodes; by default your job can end up on any of these nodes. Specifying your group’s partition will restrict your job to “your” nodes.

Also, starting a job in your group’s partition will preempt (purge) any job running on your nodes that was submitted to the default partition. Thus, it is not necessary to create a reservation to gain access to your nodes. Conversely, if you do not run in your own partition, your jobs have the potential to be deleted by the group owning the nodes.
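
For example (a sketch; "joesgroup" and "myscript" are placeholders for your own group's partition and batch script):

# submit a batch script to your group's partition instead of the
# default "compute" partition
sbatch -p joesgroup myscript

Jobs submitted this way run only on your group's nodes and may preempt default-partition jobs running there, as described above.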

There is a shortcut command that will show you the partitions in which you can run, /opt/utility/partitions. For example:

[joeuser@mio001 utility]$ /opt/utility/partitions
Partitions and their nodes available to joeuser
    compute   compute[000-003,008-013,016-033,035-041,043-047,049-052,054-081,083-193]
        phi   phi[001-002]
        gpu   gpu[001-003]
  joesgroup   compute[056-061,160-167]
[joeuser@mio001 utility]$ 

We see that joeuser can run on nodes in the compute partition. The partitions compute, phi, and gpu are available to everyone. Joe’s group “owns” compute[056-061,160-167], and running in the joesgroup partition will allow preemption.

Running threaded jobs and/or running with less than N MPI tasks per node

Slurm will try to pack as many tasks onto a node as it can, so that there is at least one task or thread per core. So if you are running fewer than N MPI tasks per node, where N is the number of cores, Slurm may put additional jobs on your node.

You can prevent this from happening by setting values for the flags --tasks-per-node and --cpus-per-task on your sbatch command line or in your Slurm script. The value of --tasks-per-node times --cpus-per-task should equal the number of cores on the node. For example, if you are running on two 16-core nodes and want 8 MPI tasks, you might say

--nodes=2 --tasks-per-node=4 --cpus-per-task=4

where 2*4*4=32, the total number of cores on the two nodes, and 2*4=8 is the total number of MPI tasks.

You can also prevent additional jobs from running on your nodes by using the --exclusive flag.
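
Putting this together, a minimal hybrid MPI/OpenMP batch script for the two 16-core-node example above might look like the sketch below (my_hybrid_app is a placeholder; setting OMP_NUM_THREADS from SLURM_CPUS_PER_TASK is a common convention for threaded codes, not a requirement):

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --tasks-per-node=4
#SBATCH --cpus-per-task=4
#SBATCH --exclusive

# give each MPI task as many OpenMP threads as it has cores
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# launch 8 MPI tasks (2 nodes x 4 tasks per node), each with 4 threads
srun ./my_hybrid_app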

How do I build applications?

Common Compiler Options

The following are common options for various vendors’ C/C++ and Fortran compilers. In particular they show how to:

  1. Generate optimized code
  2. Enable OpenMP
  3. Enable traceback features

For production builds you may want to remove the debug “-g” and traceback options. For development you may want to set optimization to -O0.

Portland Group Compilers

  • pgf77 -g -O3 -traceback -mp example.f
  • pgf90 -g -O3 -traceback -mp example.f90
  • pgcc -g -O3 -traceback -mp example.c
  • pgc++ -g -O3 -traceback -mp example[.c|.C|.cc|.cpp]

Intel Compilers

  • ifort -g -O3 -traceback -qopenmp example[.f|.f90]
  • icc -g -O3 -traceback -qopenmp example[.c|.C|.cc]

Intel Compilers Notes:

The Intel C++ compiler can be invoked using the command icpc. The icpc command uses the same compiler options as the icc command. Invoking the compiler as icpc compiles .c and .i files as C++, while invoking it as icc compiles .c and .i files as C. Using icpc always links in the C++ libraries; using icc only links in the C++ libraries if C++ source is provided on the command line.

For ifort, filenames with the suffix .f90 are interpreted as free-form Fortran 95/90 source files. Filenames with the suffix .f, .for, or .ftn are interpreted as fixed-form Fortran source files.

IBM Compilers

  • xlc_r -g -O3 -qtbtable -qsmp=omp example[.c|.C|.cpp|.cxx|.cc|.cp|.c++]
  • xlf90_r -g -O3 -qtbtable -qsmp=omp example[.f|.f77|.f90|.f95|.f03|.f08]

IBM Compilers for Blue Gene Q (Mc2)

  • bgxlc_r -g -O3 -qtbtable -qsmp=omp example[.c|.C|.cpp|.cxx|.cc|.cp|.c++]
  • bgxlf90_r -g -O3 -qtbtable -qsmp=omp example[.f|.f77|.f90|.f95|.f03|.f08]

IBM Compiler notes:

The IBM C compilers can be invoked using any of the commands: xlc, xlc++, xlC, cc, c89, c99, xlc_r, xlc++_r, xlC_r, cc_r, c89_r, c99_r. All invocations with a suffix of _r allow for thread-safe compilation. See the man page below for additional information on the differences in the invocations.

The IBM fortran compilers can be invoked using any of the commands: xlf, xlf_r, f77, fort77, xlf90, xlf90_r, f90, xlf95, xlf95_r, f95, xlf2003, xlf2003_r, f2003, xlf2008, xlf2008_r. All invocations with a suffix of _r allow for thread-safe compilation. See the man page below for additional information on the differences in the invocations.

Portland Group Compiler Documentation
(Served from Portland Group Site)

Portland Group Compiler manpages


Intel Compiler Documentation

Intel C compiler
(Only available on Campus)

Intel Fortran compiler
(Only available on Campus)

Samples
(Only available on Campus)

AuN: There are many sample programs in the directory: /opt/intel/2016/samples/en_US
On Mio, see: /opt/intel/2016/parallel_studio_xe_2016.0.047/samples_2016/en

MKL (Math Kernel Library)
(Only available on Campus)

For additional information on the Intel Compilers see:
https://software.intel.com/en-us/intel-software-technical-documentation

Intel Compiler manpages

IBM Compiler Documentation

There are different compilers for the Mc2 compute nodes and the front end node because they have different (but similar) processors. If you build an application with the IBM compilers it will most likely not work on the front end node. If you build an application with the default gcc or gfortran compilers it will most likely not work on the compute nodes. See below for some examples.

Blue Gene Q manpages

Power manpages and reference docs

We have a complete example of building and running with both the IBM and GNU compilers on Mc2. To see this example, do a wget on Mc2 followed by a tar command, then look at the README file.

mkdir test
cd test
wget http://geco.mines.edu/prototype/How_do_you_build_applications/bgq.tgz
tar -xzf bgq.tgz
cat README

Mc2 gcc and gfortran versions – more information

There are different compilers for the Mc2 compute nodes and the front end node because they have different (but similar) processors. Error messages of the form:


2016-11-08 09:30:30.074 (FATAL) [0xfff81c88f40]
3574:ibm.runjob.client.Job: could not start job: job failed to start
2016-11-08 09:30:30.074 (FATAL) [0xfff81c88f40] 3574:ibm.runjob.client.Job: 
Load failed on R00-ID-J02: Application executable ELF header contains 
invalid value, errno 8 Exec format error

are caused by trying to run an application on the compute nodes that was built with the head node version of the compiler. The opposite can also occur.

Consider the following simple Fortran program. It sums values of sin over one full period and prints the total; the output should be close to 0.

      program sinsum
       integer, parameter:: b8 = selected_real_kind(14)
       real(b8), parameter :: pi = 3.141592653589793239_b8
       real(b8) :: da,a,s,tot
       da=2.0_b8*pi/100.0_b8
       tot=0.0
       do i=0,100
          a=da*i
          s=sin(a)
          tot=tot+s
       end do
       write(*,*)tot
      end program

Or consider the similar C program.

[joeminer@mc2 fort]$ cat dosin.c
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
int main() {

#define pi 3.141592653589793239
  double  da,a,s,tot;
  int i;
  da=2.0*pi/100.0;
  tot=0.0;
  for (i=0;i<=100;i++) {
    a=da*i;
    s=sin(a);
    tot=tot+s;
  }
  printf("%lg\n",tot);
  return(0);
}

We will build these programs with the default gcc (/usr/bin/gcc) and gfortran (/usr/bin/gfortran) compilers and see that they do not work properly on the compute nodes. We will run interactively on a compute node using the “srun” command.

   
[joeminer@mc2 fort]$ which gcc
/usr/bin/gcc

[joeminer@mc2 fort]$ gcc dosin.c -lm -o dosin_c.head

[joeminer@mc2 fort]$ which gfortran
/usr/bin/gfortran

[joeminer@mc2 fort]$ gfortran dosin.f90 -o dosin_f.head

[joeminer@mc2 fort]$ ls -l *head
-rwxrwxr-x 1 joeminer joeminer 7864 Nov  8 09:29 dosin_c.head
-rwxrwxr-x 1 joeminer joeminer 9577 Nov  8 09:30 dosin_f.head


[joeminer@mc2 fort]$ srun -n 1 dosin_c.head
2016-11-08 09:30:30.074 (FATAL) [0xfff81c88f40]
3574:ibm.runjob.client.Job: could not start job: job failed to start
2016-11-08 09:30:30.074 (FATAL) [0xfff81c88f40] 3574:ibm.runjob.client.Job: 
Load failed on R00-ID-J02: Application executable ELF header contains 
invalid value, errno 8 Exec format error

[joeminer@mc2 fort]$ srun -n 1 dosin_f.head
2016-11-08 09:30:36.314 (FATAL) [0xfff8a308f40] 
3586:ibm.runjob.client.Job: could not start job: job failed to start
2016-11-08 09:30:36.314 (FATAL) [0xfff8a308f40] 3586:ibm.runjob.client.Job: 
Load failed on R00-ID-J02: Application executable ELF header contains 
invalid value, errno 8 Exec format error

Next we load the module that points to the compute node versions of the compilers. We will rebuild our applications and then run them on a compute node.

[joeminer@mc2 fort]$ module load PrgEnv/gcc/gcc-4.7.2.bgq

[joeminer@mc2 fort]$ which gcc
/bgsys/drivers/ppcfloor/gnu-linux-4.7.2/powerpc64-bgq-linux/bin/gcc


[joeminer@mc2 fort]$ gcc dosin.c -lm -o dosin_c.comp


[joeminer@mc2 fort]$ which gfortran
/bgsys/drivers/ppcfloor/gnu-linux-4.7.2/powerpc64-bgq-linux/bin/gfortran



[joeminer@mc2 fort]$ gfortran dosin.f90 -o dosin_f.comp

[joeminer@mc2 fort]$ ls -l *comp
-rwxrwxr-x 1 joeminer joeminer 4183006 Nov  8 09:33 dosin_c.comp
-rwxrwxr-x 1 joeminer joeminer 4996830 Nov  8 09:31 dosin_f.comp


[joeminer@mc2 fort]$ srun -n 1 dosin_c.comp
7.34622e-15

[joeminer@mc2 fort]$ srun -n 1 dosin_f.comp
   7.3462205710450375E-015

How do I manage jobs?
Simple example Scripts
See: http://geco.mines.edu/prototype/How_do_I_do_a_simple_build_and_run/
Complex Scripts
See: http://geco.mines.edu/prototype/I_want_to_run_complex_scripts_any_advice/
Launching a job
sbatch script
Launching a job using a particular account
sbatch -A ACCOUNT_NUMBER script
Show the accounts I can use on AuN or Mc2
/opt/utility/accounts
Launching a job with exclusive access (recommended)
sbatch --exclusive script
Launching a job in a particular partition or set of nodes or running interactively
See: http://geco.mines.edu/prototype/How_do_I_select_MY_nodes/
See all jobs in the queue
squeue -a
Seeing what jobs I have in the queue
squeue -u $LOGNAME
Show an estimate of when a job will start
squeue --start --job JOB_NUMBER
Killing a job
scancel JOB_NUMBER
Show what partitions I am allowed to use:
sinfo -node
Show what partitions I am allowed to use and the nodes:
sinfo --summarize
Show all partitions and their nodes:
sinfo -a
Formatted Slurm Man Pages
See: http://slurm.schedmd.com/man_index.html
Text versions of the man pages
A cross reference for other work load managers
http://slurm.schedmd.com/rosetta.html
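
As a point of reference, several of the quick-reference items above can be combined into a small batch script. The sketch below uses the phostname "hello world" program mentioned elsewhere in this guide; the account line is only needed on AuN and Mc2, and the values shown are placeholders:

#!/bin/bash
#SBATCH --job-name=hello
#SBATCH --nodes=1
#SBATCH --tasks-per-node=16
#SBATCH --time=00:10:00
#SBATCH --exclusive
##SBATCH -A ACCOUNT_NUMBER    # uncomment on AuN/Mc2; see /opt/utility/accounts

# run the parallel "hello world" program
srun /opt/utility/phostname

Submit it with "sbatch script_name" and check on it with "squeue -u $LOGNAME".
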
What information is sent to new users?

Your account on Mio has been activated.

Your account on BlueM, AuN, and Mc2 has been activated

Our user guides can be found at

HPC FAQ
http://inside.mines.edu/HPC-Home

At a minimum it is recommended that you look at the following links off of this page

How do I connect?
http://geco.mines.edu/prototype/How_do_I_connect

and

How do I do a simple build and run?
http://geco.mines.edu/prototype/How_do_I_do_a_simple_build_and_run

These guides will explain the process of logging into our HPC platforms and show how to build and run a parallel “Hello World” example.

Our HPC platforms (Mio, BlueM, AuN, and Mc2) are Linux-based machines. You need to be familiar with how to “get around” on a Linux platform; that is, you are expected to know how to work in a Linux environment. The following page from our user’s guide:

I am new to linux. Help!
http://geco.mines.edu/prototype/I_am_new_to_linux_Help/

has a number of links to tutorials. In particular we have a local tutorial:

Linux for HPC
http://geco.mines.edu/files/userguides/techReports/linux

All of our HPC platforms run the same scheduling software, Slurm. Slurm has three important concepts: exclusivity, partitions, and accounts.

If you are running fewer than N MPI tasks per node, where N is the number of cores on the node, you should add the --exclusive option in your run script or on the sbatch command line. This will prevent multiple jobs from running on the same node.

All nodes on AuN and Mc2 have 16 cores.

The number of cores on Mio nodes can be seen by running the command:

/opt/utility/slurmnodes | egrep "NodeAddr|CPUTot"

Partitions are a collection of nodes.

Mio has a number of partitions, with nodes owned by a particular group belonging to that group’s partition. Most of the nodes on Mio also belong to the default partition “compute”.

AuN has two partitions: “aun”, which is the default partition, and “debug”. The debug partition allows quicker turnaround for short jobs.

Mc2 has only a single partition, so the partition setting is not important on that machine.

You can see how to run in particular partitions and select particular nodes on the page:

How do I select MY nodes?
http://geco.mines.edu/prototype/How_do_I_select_MY_nodes

As a quick reference the command

    sinfo -node 

will show which partitions you can use.

The command

    squeue -a

will show which nodes are currently in use.

Accounts are only important on AuN and Mc2. On these machines every job must be associated with an account. You can see which accounts you are allowed to use by running the command:

    /opt/utility/accounts

The account number must be specified when you run a job as discussed in the

How do I do a simple build and run?
http://geco.mines.edu/prototype/How_do_I_do_a_simple_build_and_run

Questions should be sent to: hpcinfo@mines.edu

How do I see my scratch usage?

Managing your scratch space usage.

Until recently we did not have a good way for people to monitor their usage of scratch space on Mio and AuN. We can now easily show total usage, and with a bit more effort you can also show the aging of your files and directories.

We have enabled the command mmlsquota which will show your usage.

You can do a

[joeuser@mio001 ~]$ man mmlsquota

to see the full description of the command or

[joeuser@mio001 ~]$ mmlsquota -h 

to get a short description.

When you run the mmlsquota command you will get more information than is useful. You will see two Filesystems listed, lb and sb. The one that describes your scratch usage is lb. The sb Filesystem report is not important. You may also see a line that lists a “sets” Fileset. Again, this is not important.

We have a command /opt/utility/scsize that filters out most of the unimportant information. For example:

[joeuser@mio001 ~]$ /opt/utility/scsize
Block Limits
Filesystem Fileset type GB quota limit in_doubt grace
lb root USR 19 76800 102400 0 none
[joeuser@mio001 ~]$

This shows that joeuser has 19 GB in scratch. The quota is a theoretical upper limit on the amount of space you could use; in practice you will draw the attention of the HPC group long before you get anywhere close to that limit (think a small fraction of it).

As you know the HPC group reserves the right to remove files in scratch as necessary to keep the system running. Scratch by definition is for temporary storage of data. If you plan on keeping data it should be moved off of the machine.

There has been some question and debate about automatically removing files after they reach a certain age. Some institutions do that. We don’t, for three reasons: people are generally responsible about cleaning up after themselves; routinely purging files is actually an expensive operation; and, for the few who are not responsible, it is too easy to “game” the aging tests.

However, we now have the ability for users to show their file aging information. This is a multistep process. The first step can be time consuming and hits the file system pretty hard so it is not something you will want to do on a daily basis.

The new command is /opt/utility/agedu. Again, you can get the man page for this command.

For the first step cd to your scratch directory and then run the command

[joeuser@mio001 joeuser]$cd $SCRATCH
[joeuser@mio001 joeuser]$/opt/utility/agedu --no-progress -f $HOME/adedu.dat -s $SCRATCH

This will create an inventory of your scratch directory in the file given with the -f option ($HOME/adedu.dat in the example above). This can take several minutes; in a recent test for a user with a large number of files it took about 20 minutes, but for most users it should run in a minute or two.

Please delete your inventory file, $HOME/adedu.dat, after you are done with it. It can be rather large and becomes out of date as soon as you modify your directory. The file is binary and can only be viewed as discussed below.

Once the inventory is created there are many options for displaying the data. You can:

  • Filter by age
  • Create a text file report
  • Create a static HTML page that can be viewed offline
  • Create a navigable web page that can show subdirectories

Here are some examples of generating a text report filtered by age. The first column is the amount of data, in kilobytes, in the given directory that is of the specified age or older.

Find data over 2 years old…

[joeuser@mio001 joeuser]$ /opt/utility/agedu -a 2y -f  $HOME/adedu.dat -t $SCRATCH
89247072 /scratch/joeuser/DMOL
42528 /scratch/joeuser/QuIET
48716960 /scratch/joeuser/Siesta
395154304 /scratch/joeuser

Find data over 1 year old…

[joeuser@mio001 joeuser]$ /opt/utility/agedu -a 1y -f  $HOME/adedu.dat -t $SCRATCH
89247072 /scratch/joeuser/DMOL
2170464 /scratch/joeuser/Octopus
42528 /scratch/joeuser/QuIET
48717024 /scratch/joeuser/Siesta
1952 /scratch/joeuser/ddscat
397326784 /scratch/joeuser

Find data over 1 month old…

[joeuser@mio001 joeuser]$ /opt/utility/agedu -a 1m -f  $HOME/adedu.dat -t $SCRATCH
89247072 /scratch/joeuser/DMOL
2170528 /scratch/joeuser/Octopus
512941760 /scratch/joeuser/Qchem
42528 /scratch/joeuser/QuIET
48717024 /scratch/joeuser/Siesta
1952 /scratch/joeuser/ddscat
910268608 /scratch/joeuser
[joeuser@mio001 joeuser]$

Notice the size changes as we change the reporting period. You can also specify subdirectories to get more detailed information.

[joeuser@mio001 joeuser]$ /opt/utility/agedu -a 9m -f  $HOME/adedu.dat -t $SCRATCH/Qchem
8726752 /scratch/joeuser/Qchem/Aniline
1931872 /scratch/joeuser/Qchem/Benzene
32448 /scratch/joeuser/Qchem/Coronene
27296 /scratch/joeuser/Qchem/H2
135328 /scratch/joeuser/Qchem/H2O
34905824 /scratch/joeuser/Qchem/TPA
96 /scratch/joeuser/Qchem/TPBoron
20947296 /scratch/joeuser/Qchem/TPCarbon
50214656 /scratch/joeuser/Qchem/TPP
9971424 /scratch/joeuser/Qchem/TPSilicon
40411488 /scratch/joeuser/Qchem/Trinapamine
4786048 /scratch/joeuser/Qchem/Triphenylarsenic
172090528 /scratch/joeuser/Qchem
[joeuser@mio001 joeuser]$

Create a static web page for offline viewing…

[joeuser@mio001 joeuser]$ /opt/utility/agedu -a 1y -f  $HOME/adedu.dat -H $SCRATCH/Qchem > agedu.html

You can then copy the file agedu.html to your local machine for viewing. This will give you a static very top level view of your directory structure.

The next option is much more interesting.

Create a navigable web page …

Finally, maybe the most useful option is to create a navigable web page that allows you to dive into subdirectories. When the page is created you can view your directory as a tree structure and navigate to see the size and ages of directories and files.

[joeuser@mio001 joeuser]$ /opt/utility/agedu -a 2y -f  $HOME/adedu.dat -w --address mio001.mines.edu --auth basic
Username: agedu
Password: p35n1vnd94nmx9cy
URL: http://mio001.mines.edu:34372/

This command will block until you type Control-C. It prints a username (agedu), a password, and a URL. agedu actually starts a mini web server that serves your data at the given URL; you will need to enter that username and password when prompted.


Fig1. – An example agedu dynamic web page login screen.

On a live version of the page you can click on the directory name on the right to see details.


Fig2. – A static screen dump of a navigable web page created with agedu.

Please note, this page is not updated if you delete files; you will need to regenerate the inventory file to see your updates.

Finally, please delete your inventory file, $HOME/adedu.dat, after you are done with it. It can be rather large, becomes out of date as soon as you modify your directory, and is a binary file that can only be viewed as discussed above.

Less Frequently Asked Questions:
Show me an archive of past emails sent to users: