Monday A: Getting Started with AWS, Singularity, and JEDI

In this activity you will:

  • Learn how to access AWS through the academy JupyterLab

  • Get to know the JEDI development container

  • Build JEDI fv3-bundle from source

  • Run the unit tests for fv3-bundle

  • Explore the JEDI source code and directory structure

Step 1: Icebreaker

You should now be in your Zoom breakout room with a few fellow padawans. Maybe you already know some of them, maybe you don't. In any case, you will be together for the rest of the week so please take a moment to meet each other. A JEDI master may also be joining you for this activity.

Everyone turn on your cameras and microphones and take turns introducing yourselves. From the list of questions below, please answer numbers 1, 2, and 3, then pick one or two more, whichever you feel inspired to answer.

Please limit your introduction to about 3-4 minutes; this icebreaker step should last no more than 15 minutes. But feel free to return to these questions (and others you can come up with!) throughout the week. You may wish to ask your fellow padawans (or a JEDI master!) one of these questions as you are waiting for codes to compile and run…

  1. What is your name?

  2. Where do you live and work?

  3. What do you plan to do with JEDI?

  4. What did you work on for your thesis (PhD, master's, or undergraduate; answer whichever you wish)?

  5. Where are you from originally? What was that place like?

  6. Have you attended other virtual conferences? How did they go?

  7. Show off something from your home workspace: maybe your son? Your dog? Your favorite plant? The sandwich you made for lunch?

  8. What’s your favorite book? Or movie? Or TV show? Or song?

  9. If you were to pick a second career, what would you choose?

  10. Do you have a superpower? What is it?

Step 2: Access your AWS Instance

As a padawan in this JEDI academy, you already have a compute node on the Amazon cloud waiting for you. To start this activity, a JEDI master will materialize in your virtual group and give each of you an IP address and a password. You will use these throughout the week to access your AWS node.

When you receive your IP address and password, you should proceed to log into your compute node as described here. These AWS access instructions are set apart from the activity instructions because you will repeat them every day when you do the activities.

Step 3: Explore the JupyterLab Interface

Take a few moments to familiarize yourself with the web interface provided by JupyterLab. Select the terminal tab in the main window to access the Linux command line. Find an image file in the directory tree and see what happens when you select it.

Go to the Python console window (likely labeled Console 1) and do a calculation: estimate how many seconds you have left of the Academy (and rejoice in the result!). Hint: <shift>-<enter> executes a particular cell; see the Run menu for more options. Open a new ssh terminal by selecting the artist's palette on the left and scrolling down to Terminal → New Terminal. Switch to a dark background for your terminal window if you wish.

Step 4: Download and enter the JEDI Container

In order to build JEDI, we will need some of the software packages it depends on. These include build tools such as CMake and ecbuild, IO utilities such as NetCDF, and linear algebra libraries such as LAPACK and Eigen. We'll also need C++ and Fortran compilers and an MPI implementation. For the activities in this academy, we will acquire these dependencies by means of a software development container. For an overview of what software containers are and why we use them, see the JEDI documentation.

Various JEDI containers exist for different platforms (Singularity, Charliecloud, and Docker), different compilers (GNU, Clang, and Intel), and different MPI implementations (OpenMPI, MPICH, and Intel MPI). For the Academy we'll be using the gnu-openmpi Singularity container, which you can obtain by executing the following commands:

cd $HOME
singularity pull library://jcsda/public/jedi-gnu-openmpi-dev

If the pull was successful, you should see a new file in your current directory with the name jedi-gnu-openmpi-dev_latest.sif. If you wish, you can verify that the container came from JCSDA by entering:

singularity verify jedi-gnu-openmpi-dev_latest.sif

Now you can enter the container with the following command:

singularity shell -e jedi-gnu-openmpi-dev_latest.sif

To exit the container at any time (not now!), simply enter

exit

It is worth noting here that this JEDI Singularity container is public: as long as you have access to the Singularity software (available on many HPC systems and downloadable to your laptop), you can obtain it. You do not need to be on AWS.
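If you want to try this on another system, such as your laptop or an HPC login node, a quick way to check whether Singularity is installed (and which version) is:

singularity --version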

Step 5: Get to know the Container

When you ran the singularity shell command at the end of Step 4, you entered a new world, or at least a new computing environment. Take a moment to explore it.

First, notice that you are in the same directory as before:

echo $PWD

So, things may look the same, though your command line prompt has likely changed. And, you can see that your username is the same as before and your home directory has not changed:

whoami
echo $HOME
cd ~
ls

You are still the same person. And, more importantly from a system administrator’s perspective, you still have the same access permissions that you did outside of the container. You can still see all the files in your home directory. And, you can still edit them and create new files (give it a try). But things have indeed changed. Enter this:

lsb_release --all

This tells you that you are now running an Ubuntu 20.04 operating system, regardless of what host computer you are on and what operating system it has. Furthermore, take a look at some of the system directories such as:

ls /usr/local/lib

There you will see a host of JEDI dependencies, such as NetCDF, LAPACK, and eckit, that may not be installed on your host system. Thus, Singularity provides its own version of system directories such as /usr but shares other directories with the host system, such as $HOME. If you're familiar with any of these libraries, you can run some commands, for example:

nc-config --all
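You can also check the compilers, MPI, and build tools the container provides; for example (these should all be available in the gnu-openmpi development container):

gcc --version         # GNU C/C++ compiler
gfortran --version    # GNU Fortran compiler
mpirun --version      # OpenMPI
cmake --version       # CMake, used by the ecbuild build system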

Step 6: Build fv3-bundle

JEDI packages are organized into bundles. Each bundle identifies the different GitHub repositories that are needed to run the applications and orchestrates how all of these repositories are built and linked together.

In this tutorial we will build fv3-bundle. We will put the code in a directory called jedi under your home directory:

mkdir -p $HOME/jedi
cd $HOME/jedi
git clone https://github.com/jedi-da-academy/fv3-bundle.git

This should create a new directory called $HOME/jedi/fv3-bundle.

To see what code repositories will be built, cd to the fv3-bundle directory and view the file CMakeLists.txt. Look for the lines that begin with ecbuild_bundle.

ecbuild is a collection of CMake utilities that forms the basis of the JEDI build system. The ecbuild_bundle() function calls specify different GitHub repositories and integrate them into the building of the bundle, in order of dependency.
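If you prefer the command line to the JupyterLab file browser, one quick way to list all of these calls is with grep:

grep ecbuild_bundle $HOME/jedi/fv3-bundle/CMakeLists.txt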

You will see references there to core JEDI repositories like OOPS, SABER, IODA, and UFO. You will also see references to repositories used to construct observation operators, such as JCSDA’s Community Radiative Transfer Model (CRTM). And, finally, you will see references to GitHub repositories that contain code needed to build the FV3-GFS and FV3-GEOS models and integrate them with JEDI. These include the linearized FV3 model used for 4D Variational DA, and the FV3-JEDI repository that provides the interface between JEDI and models based on the FV3 dynamical core.

Now, an important tip: never build a bundle from the main bundle directory (in our example, the top-level $HOME/jedi/fv3-bundle directory). Building from this directory would cause CMake to create new files that conflict with the original source code.

So, we will create a new build directory and run ecbuild from there:

mkdir -p $HOME/jedi/build
cd $HOME/jedi/build
ecbuild --build=RelWithDebInfo ../fv3-bundle

The --build=RelWithDebInfo option builds the code with optimization, but also with debugging symbols in the executables and libraries, which can be used to trace execution and identify where problems occur. Other build options include Release and Debug. If you omit the --build option, it defaults to RelWithDebInfo. The only required argument of ecbuild is the path to the bundle directory.
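For illustration, here is what a couple of alternative invocations would look like (do not run these now; we will use the RelWithDebInfo build configured above):

ecbuild --build=Debug ../fv3-bundle    # unoptimized build with full debugging support
ecbuild ../fv3-bundle                  # no --build option: defaults to RelWithDebInfo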

Warning

Some of the scripts in later activities assume that the JEDI build directory is $HOME/jedi/build as described here. If you give your build directory a different name or path, then you may have to modify these scripts accordingly.

We have not yet compiled the code; we have merely set the stage. To appreciate part of what these commands have done, take a quick look at the bundle directory:

ls ../fv3-bundle

Do you notice anything different? The bundle directory now includes directories that contain the code repositories that were specified by all those ecbuild_bundle calls in the CMakeLists.txt file as described above (apart from a few that are optional): oops, saber, ioda, ufo, fv3-jedi, etc. If you wish, you can look in those directories and find the source code.

So, one of the things that ecbuild does is check whether the repositories are already there. If they are not, it will retrieve (clone) them from GitHub. Running the make update command makes this explicit:

make update

Here ecbuild more clearly tells you which repositories it is pulling from GitHub and which branches. Running make update ensures that you get the latest versions of the various branches that are on GitHub. Though this is not necessary for tagged releases (which do not change), it is a good habit to get into if you seek to contribute to the JEDI source code.
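If you want to confirm which branch a particular repository has been checked out on, you can ask git directly; for example, for oops:

git -C $HOME/jedi/fv3-bundle/oops status    # reports the current branch and working-tree state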

All that remains is to actually compile the code (be sure to cd back to the build directory to run this):

make -j8

The -j8 option tells make to do a parallel build with 8 parallel processes.
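The best value depends on how many cores your node provides; for future builds you could check the core count with nproc (available on Ubuntu) and match the two:

nproc              # report the number of available cores
make -j$(nproc)    # one make process per core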

While JEDI is building, you can proceed to Step 7.

Step 7: Explore the JEDI code

The JEDI code is organized into multiple git repositories, each with its own web interface on GitHub. You may recognize some of the repository names from today’s lectures - names like oops, ufo, ioda, saber, and fv3-jedi. If you don’t recognize these yet, you will by the end of the week.

Now explore some of the repositories themselves by navigating the directory tree with the menu on the left. Most have a src directory where the code is held as well as a test directory that mimics the structure of the src directory to test every class, function, module, and subroutine. An exception is oops, which, as the highest-level organizational component, is organized a bit differently. Here the QG and Lorenz 95 toy models have their own source and test directories (oops/qg/test and oops/l95/test, respectively). Navigate to the oops/src/oops/interface directory to behold some of the generic C++ templates that set JEDI apart from other DA systems.
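If you prefer the terminal, you can also browse these interface templates from the command line; for example:

ls $HOME/jedi/fv3-bundle/oops/src/oops/interface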

Note that when you select files of different types (C++, Python, etc.), the JupyterLab interface will bring them up in a new window, often with appropriate formatting.

If you wish, you can also explore the JEDI repositories on GitHub and the documentation for many of the components in our online users manual.

Step 8: Run the tests

Running the tests gives you an appreciation for how thoroughly the JEDI code is tested. Here we will only run the Tier 1 tests; more computationally intensive higher-tier tests are run regularly with varying frequency. These thoroughly test all the applications, functions, methods, class constructors, and other JEDI components. As emphasized in our working principles, no code is added to JEDI unless there is a test to make sure that it is working and that it continues to work as the code evolves.

A common source of spurious test failures is an insufficient stack size, which can lead to segmentation faults. To avoid this, run the following commands before running the JEDI ctests:

ulimit -s unlimited
ulimit -v unlimited

Now, to run the test suite, enter the following:

cd $HOME/jedi/build
ctest

When the tests complete, you can view the test log as follows (starting from the ~/jedi/build directory):

cd Testing/Temporary
vi LastTest.log
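ctest also provides options for listing, filtering, and re-running tests, which can save time when you are only interested in a subset. Run these from the build directory; the oops pattern below is just an illustration:

ctest -N                # list the available tests without running them
ctest -R oops           # run only the tests whose names match a pattern
ctest --rerun-failed    # re-run only the tests that failed on the previous run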

Note

If you selected files from the JupyterLab menu, this creates hidden checkpoint files that can cause failures in the coding norms tests, for example oops_coding_norms. You can ignore these failures if you wish. Or, if you want the tests to pass again, you can go to the directory in question and remove the Jupyter notebook checkpoint files:

rm -rf `find -type d -name .ipynb_checkpoints`

For further tips on working with ecbuild and ctest see the JEDI Documentation on building and testing.

Step 9: Save the test data

When you ran the test suite you may have noticed one or more of these tests:

Test   #1: get_crtm_coeffs
Test #305: get_data_saber_data.tar.gz
Test #306: get_data_saber_data_mpi.tar.gz
Test #307: get_data_saber_data_oops.tar.gz
Test #308: get_data_saber_ref_1.tar.gz
Test #309: get_data_saber_ref_mpi_1.tar.gz
Test #310: get_data_saber_ref_cgal.tar.gz
Test #311: get_data_saber_ref_mpi_cgal.tar.gz
Test #312: get_data_saber_ref_oops.tar.gz
Test #566: get_ioda_test_data
Test #622: ufo_get_ufo_test_data
Test #623: ufo_get_crtm_test_data
Test #970: fv3jedi_get_fv3-jedi_test_data
Test #971: fv3jedi_get_crtm_test_data

As their names suggest, these tests retrieve test data files from an external server, in this case UCAR's Digital Asset Services Hub (DASH). These test data files are used both as input to JEDI applications (e.g. observations and backgrounds) and as reference solutions to check the results of the tests. Many are binary NetCDF files that are compared against the application output files, to within a specified tolerance, using the NetCDF comparison utility nccmp. As such, they are relatively large files that should not be stored directly on GitHub.
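For reference, a typical nccmp comparison of two such files, to within an absolute tolerance, might look like this (the file names here are purely illustrative):

nccmp -d -t 1.0e-5 my_output.nc reference_solution.nc    # compare data variables to within 1.0e-5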

In the academy practicals, we will not need to modify any of the test data. Furthermore, we want to ensure that the feature branch we will create on Wednesday will use the same test data (otherwise we may get some test failures).

So, the next step is to make a copy of the data that we can use throughout the week:

cp -r $HOME/jedi/fv3-bundle/test-data-release $HOME/jedi
cp -r $HOME/jedi/build/test_data/saber $HOME/jedi/test-data-release
ln -sf $HOME/jedi/test-data-release/crtm/2.3.0 $HOME/jedi/test-data-release/crtm/develop
ln -sf $HOME/jedi/test-data-release/ufo/1.1.0 $HOME/jedi/test-data-release/ufo/develop

The two link statements are needed because later this week we will create feature branches, and the default for feature branches is to use data from the develop branch, following the git flow paradigm.

Now we need to tell JEDI to use this test data when running the tests. This is done by setting the LOCAL_PATH_JEDI_TESTFILES environment variable, which JEDI will detect:

export LOCAL_PATH_JEDI_TESTFILES=$HOME/jedi/test-data-release

If this environment variable is set, then the get tests listed above will use the test data found locally instead of retrieving the data from the external server. This saves time when doing a fresh build and it can be useful for development purposes if you are introducing new test files or modifying existing test files. We’ll revisit this topic later this week.
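Keep in mind that this variable is only set for your current shell session; if you open a new terminal or re-enter the container later, you will need to export it again. You can quickly confirm that it is set and that the copied data is in place with:

echo $LOCAL_PATH_JEDI_TESTFILES
ls $LOCAL_PATH_JEDI_TESTFILES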