GenericIO

GenericIO is a write-optimized library for writing self-describing scientific data files on large-scale parallel file systems.

References

Habib et al., "HACC: Simulating Future Sky Surveys on State-of-the-Art Supercomputing Architectures," New Astronomy, 2015. http://arxiv.org/abs/1410.2805

Source Code

A source archive is available here: genericio-20170925.tar.gz (previous releases: genericio-20160829.tar.gz, genericio-20160412.tar.gz, genericio-20150608.tar.gz), or from git:

  git clone http://git.mcs.anl.gov/genericio.git
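
For a first look at the API, here is a minimal sketch of writing a file through the library's C++ interface, modeled on the bundled GenericIOBenchmarkWrite program. The gio namespace, the VarHasExtraSpace and VarIsPhysCoord* flags, and requestedExtraSpace() are taken from the library's headers, but treat the details as illustrative; they can change between releases.

  #include "GenericIO.h" // from the genericio source tree
  #include <mpi.h>

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    {
      gio::GenericIO GIO(MPI_COMM_WORLD, "particles.gio");

      const std::size_t N = 100000; // rows contributed by this rank

      // The bundled benchmarks reserve a little extra space at the end of
      // each array (used internally for checksums) and pass the
      // VarHasExtraSpace flag when registering each variable.
      const std::size_t Extra = GIO.requestedExtraSpace();
      std::vector<float> x(N + Extra / sizeof(float));
      std::vector<float> y(N + Extra / sizeof(float));
      std::vector<float> z(N + Extra / sizeof(float));
      std::vector<int64_t> id(N + Extra / sizeof(int64_t));
      // ... fill the first N entries of each array with this rank's data ...

      GIO.setNumElems(N);      // per-rank row count
      GIO.setPhysOrigin(0.0);  // physical domain origin, recorded in the header
      GIO.setPhysScale(256.0); // physical domain extent, recorded in the header

      const unsigned EF = gio::GenericIO::VarHasExtraSpace;
      GIO.addVariable("x", x, EF | gio::GenericIO::VarIsPhysCoordX);
      GIO.addVariable("y", y, EF | gio::GenericIO::VarIsPhysCoordY);
      GIO.addVariable("z", z, EF | gio::GenericIO::VarIsPhysCoordZ);
      GIO.addVariable("id", id, EF);

      GIO.write(); // collective; the library chooses the subfile layout
    }

    MPI_Finalize();
    return 0;
  }

Compile against the genericio sources with your MPI C++ compiler (e.g., mpicxx).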

Output file partitions (subfiles)

If you're running on an IBM BG/Q supercomputer, the number of subfiles (partitions) is chosen automatically based on the I/O nodes. Otherwise, by default, the GenericIO library picks the number of subfiles using a fairly naive hostname-based hashing scheme. This works reasonably well on small clusters, but not on larger systems. On a larger system, you might want to set these environment variables:

  GENERICIO_PARTITIONS_USE_NAME=0
  GENERICIO_RANK_PARTITIONS=256
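
For example, assuming a bash-like shell and that mpirun is your launcher (the launcher, rank count, and output path below are site-specific placeholders), you might export the variables before launching a run:

  export GENERICIO_PARTITIONS_USE_NAME=0
  export GENERICIO_RANK_PARTITIONS=256
  mpirun -np 1024 ./mpi/GenericIOBenchmarkWrite /path/on/lustre/out.gio 123456 2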

The number of partitions (256 above) determines the number of subfiles used. If you're using a Lustre file system, for example, an optimal number of files satisfies:

# of files * stripe count ~ # OSTs

On Titan, for example, there are 1008 OSTs, and a default stripe count of 4, so we use approximately 256 files.
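
That is, plugging Titan's numbers into the relation above:

  # of files ~ 1008 / 4 = 252, or roughly 256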

Benchmarks

Once you build the library and associated programs (using make), you can run, for example:

  $ mpirun -np 8 ./mpi/GenericIOBenchmarkWrite /tmp/out.gio 123456 2
  Wrote 9 variables to /tmp/out (4691036 bytes) in 0.2361s: 18.9484 MB/s
  $ mpirun -np 8 ./mpi/GenericIOBenchmarkRead /tmp/out.gio
  Read 9 variables from /tmp/out (4688028 bytes) in 0.223067s: 20.0426 MB/s [excluding header read]

The read benchmark always reads all of the input data. The write benchmark takes two numerical parameters: the first is the number of data rows to write, and the second is a random seed (which slightly perturbs the per-rank output sizes). Each row is 36 bytes for these benchmarks.

The write benchmark can be passed the -c parameter to enable output compression. Both benchmarks take an optional -a parameter to request that homogeneous aggregates (e.g., "float4") be used instead of separate arrays for each position/velocity component.
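
For completeness, the read path (roughly what GenericIOBenchmarkRead does) looks like the sketch below; the same caveats as the write example above apply, with openAndReadHeader(), readNumElems(), and readData() taken from the library's headers.

  #include "GenericIO.h" // from the genericio source tree
  #include <mpi.h>

  #include <cstddef>
  #include <vector>

  int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    {
      gio::GenericIO GIO(MPI_COMM_WORLD, "particles.gio");
      GIO.openAndReadHeader(); // parse the self-describing header

      const std::size_t N = GIO.readNumElems(); // rows assigned to this rank
      const std::size_t Extra = GIO.requestedExtraSpace();

      // Register only the columns you want to read; others stay on disk.
      std::vector<float> x(N + Extra / sizeof(float));
      GIO.addVariable("x", x, gio::GenericIO::VarHasExtraSpace);

      GIO.readData(); // collective read of the registered variables
      // x[0] .. x[N-1] now hold this rank's portion of the "x" column.
    }

    MPI_Finalize();
    return 0;
  }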