NetCDF User's Guide for C
The external types supported by the netCDF interface are:

    char            8-bit characters intended for representing text.
    byte            8-bit signed or unsigned integers (see discussion below).
    short           16-bit signed integers.
    int             32-bit signed integers.
    float or real   32-bit IEEE floating-point.
    double          64-bit IEEE floating-point.
These types were chosen to provide a reasonably wide range of trade-offs between data precision and the number of bits required for each value. These external data types are independent of whatever internal data types are supported by a particular machine and language combination.
These types are called "external", because they correspond to the portable external representation for netCDF data. When a program reads external netCDF data into an internal variable, the data is converted, if necessary, into the specified internal type. Similarly, if you write internal data into a netCDF variable, this may cause it to be converted to a different external type, if the external type for the netCDF variable differs from the internal type.
The separation of external and internal types and automatic type conversion have several advantages. You need not be aware of the external type of numeric variables, since automatic conversion to or from any desired numeric type is available. You can use this feature to simplify code, by making it independent of external types, using a sufficiently wide internal type, e.g., double precision, for numeric netCDF data of several different external types. Programs need not be changed to accommodate a change to the external type of a variable.
If conversion to or from an external numeric type is necessary, it is handled by the library. This automatic conversion and separation of external data representation from internal data types will become even more important in a future version of netCDF, when new external types will be added for packed data for which there may be no natural corresponding internal type, for example, packed arrays of 11-bit values.
Converting from one numeric type to another may result in an error if the target type is not capable of representing the converted value. For example, an internal short integer type may not be able to hold data stored externally as an integer. When accessing an array of values, a range error is returned if one or more values are out of the range of representable values, but other values are converted properly.
Note that mere loss of precision in type conversion does not return an error. Thus, if you read double precision values into a single-precision floating-point variable, for example, no error results unless the magnitude of the double precision value exceeds the representable range of single-precision floating point numbers on your platform. Similarly, if you read a large integer into a float incapable of representing all the bits of the integer in its mantissa, this loss of precision will not result in an error. If you want to avoid such precision loss, check the external types of the variables you access to make sure you use an internal type that has adequate precision.
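As an illustrative sketch of how such a range error surfaces through the C interface (ncid and varid are assumed to identify an open dataset and a numeric variable, NVALS is assumed to be large enough for the variable, and handle_error stands for a user-supplied error routine):

    #include <netcdf.h>
    ...
    short svals[NVALS];
    int status = nc_get_var_short(ncid, varid, svals);
    if (status == NC_ERANGE) {
        /* one or more external values would not fit in a short;
           the values that were in range were still converted */
    } else if (status != NC_NOERR) {
        handle_error(status);
    }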
The names for the primitive external data types (byte, char, short, int, float or real, and double) are reserved words in CDL, so the names of variables, dimensions, and attributes must not be type names.
It is possible to interpret byte data as either signed (-128 to 127) or unsigned (0 to 255). However, when reading byte data to be converted into other numeric types, it is interpreted as signed.
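As a small sketch of the consequence (ncid and varid are assumed to identify an open dataset and a variable of external type byte, and handle_error is a user-supplied routine):

    int ival;
    size_t index[1] = {0};
    /* If the stored byte is 0xFF, reading it through the int interface
       yields -1 rather than 255, because the conversion treats it as signed. */
    int status = nc_get_var1_int(ncid, varid, index, &ival);
    if (status != NC_NOERR)
        handle_error(status);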
See Section 2.3 "Variables," page 11, for the correspondence between netCDF external data types and the data types of a language.
Access to data is direct, which means you can access a small subset of data from a large dataset efficiently, without first accessing all the data that precedes it. Reading and writing data by specifying a variable, instead of a position in a file, makes data access independent of how many other variables are in the dataset, making programs immune to data format changes that involve adding more variables to the data.
In the C and FORTRAN interfaces, datasets are not specified by name every time you want to access data, but instead by a small integer called a dataset ID, obtained when the dataset is first created or opened.
Similarly, a variable is not specified by name for every data access either, but by a variable ID, a small integer used to identify each variable in a netCDF dataset.
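A minimal sketch of obtaining these IDs in C follows; the file name "example.nc" and the variable name "temp" are illustrative, and handle_error stands for a user-supplied error routine:

    #include <netcdf.h>
    ...
    int ncid, temp_id, status;

    status = nc_open("example.nc", NC_NOWRITE, &ncid);    /* dataset ID */
    if (status != NC_NOERR)
        handle_error(status);

    status = nc_inq_varid(ncid, "temp", &temp_id);         /* variable ID */
    if (status != NC_NOERR)
        handle_error(status);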
An array section is a "slab" or contiguous rectangular block that is specified by two vectors. The index vector gives the indices of the element in the corner closest to the origin. The count vector gives the lengths of the edges of the slab along each of the variable's dimensions, in order. The number of values accessed is the product of these edge lengths.
A subsampled array section is similar to an array section, except that an additional stride vector is used to specify sampling. This vector has an element for each dimension giving the length of the strides to be taken along that dimension. For example, a stride of 4 means every fourth value along the corresponding dimension. The total number of values accessed is again the product of the elements of the count vector.
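For instance, a hedged sketch (not taken from the guide's examples) of reading every fourth value of a one-dimensional variable of length 40; ncid and varid are assumed, and handle_error stands for a user-supplied routine:

    static size_t    start[]  = {0};
    static size_t    count[]  = {10};   /* number of values selected: 40 / 4 */
    static ptrdiff_t stride[] = {4};    /* take every fourth value */
    float vals[10];
    int status;

    status = nc_get_vars_float(ncid, varid, start, count, stride, vals);
    if (status != NC_NOERR)
        handle_error(status);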
A mapped array section is similar to a subsampled array section except that an additional index mapping vector allows one to specify how data values associated with the netCDF variable are arranged in memory. The offset of each value from the reference location is given by the sum of the products of each index (of the imaginary internal array which would be used if there were no mapping) and the corresponding element of the index mapping vector. The number of values accessed is the same as for a subsampled array section.
The use of mapped array sections is discussed more fully below, but first we present an example of the more commonly used array-section access.
Suppose we want to read all the data for the temp variable at one level (say, the second), and assume that there are currently three records (time values) in the netCDF dataset. Recall that the dimensions are defined as

    lat = 5, lon = 10, level = 4, time = unlimited;

and the variable temp is declared as

    float temp(time, level, lat, lon);

in the CDL notation.
A corresponding C variable that holds data for only one level might be declared as
    #define LATS 5
    #define LONS 10
    #define LEVELS 1
    #define TIMES 3    /* currently */
    ...
    float temp[TIMES*LEVELS*LATS*LONS];

to keep the data in a one-dimensional array, or
    ...
    float temp[TIMES][LEVELS][LATS][LONS];

using a multidimensional array declaration.
To specify the block of data that represents just the second level, all times, all latitudes, and all longitudes, we need to provide a start index and some edge lengths. The start index should be (0, 1, 0, 0) in C, because we want to start at the beginning of each of the time, lon, and lat dimensions, but we want to begin at the second value of the level dimension. The edge lengths should be (3, 1, 5, 10) in C, since we want to get data for all three time values, only one level value, all five lat values, and all 10 lon values. We should expect to get a total of 150 floating-point values returned (3 * 1 * 5 * 10), and should provide enough space in our array for this many. The order in which the data will be returned is with the last dimension, lon, varying fastest:
    temp[0][1][0][0]
    temp[0][1][0][1]
    temp[0][1][0][2]
    temp[0][1][0][3]
    ...
    temp[2][1][4][7]
    temp[2][1][4][8]
    temp[2][1][4][9]

Different dimension orders for the C, FORTRAN, or other language interfaces do not reflect a different order for values stored on the disk, but merely different orders supported by the procedural interfaces to the languages. In general, it does not matter whether a netCDF dataset is written using the C, FORTRAN, or another language interface; netCDF datasets written from any supported language may be read by programs written in other supported languages.
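Assuming ncid and temp_id have been obtained as sketched earlier, this array-section read might look like the following in C (handle_error again stands for a user-supplied error routine):

    static size_t start[] = {0, 1, 0, 0};     /* all times, second level, all lats, all lons */
    static size_t count[] = {3, 1, 5, 10};    /* 3 * 1 * 5 * 10 = 150 values */
    float temp[TIMES][LEVELS][LATS][LONS];    /* as declared above */
    int status;

    status = nc_get_vara_float(ncid, temp_id, start, count, &temp[0][0][0][0]);
    if (status != NC_NOERR)
        handle_error(status);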
With mapped array access, the offset (number of array elements) from the origin of a memory-resident array to a particular point is given by the inner product of the index mapping vector with the point's coordinate offset vector. A point's coordinate offset vector gives, for each dimension, the offset from the origin of the containing array to the point. In C, a point's coordinate offset vector is the same as its coordinate vector.
The index mapping vector for a regular array section would have, in order from the most rapidly varying dimension to the most slowly varying, a constant 1, the product of that value with the edge length of the most rapidly varying dimension of the array section, then the product of that value with the edge length of the next most rapidly varying dimension, and so on. In a mapped array, however, the correspondence between netCDF variable disk locations and memory locations can be different.
For example, the following C definitions
    struct vel {
        int flags;
        float u;
        float v;
    } vel[NX][NY];
    ptrdiff_t imap[2] = {
        sizeof(struct vel),
        sizeof(struct vel)*NY
    };

where imap is the index mapping vector, can be used to access the memory-resident values of the netCDF variable vel(NY,NX), even though the dimensions are transposed and the data is contained in a 2-D array of structures rather than a 2-D array of floating-point values. A detailed example of mapped array access is presented in the description of the interfaces for mapped array access. See Section 7.9 "Write a Mapped Array of Values: nc_put_varm_type," page 62.
Note that, although the netCDF abstraction allows the use of subsampled or mapped array-section access, their use is not required. If you do not need these more general forms of access, you may ignore these capabilities and use single-value access or regular array-section access instead.
If the netCDF external type for a variable is char, only character data representing text strings can be written to or read from the variable. No automatic conversion of text data to a different representation is supported.
If the type is numeric, however, the netCDF library allows you to access the variable data as a different type and provides automatic conversion between the numeric data in memory and the data in the netCDF variable. For example, if you write a program that deals with all numeric data as double-precision floating point values, you can read netCDF data into double-precision arrays without knowing or caring what the external types of the netCDF variables are. On reading netCDF data, integers of various sizes and single-precision floating-point values will all be converted to double precision, if you use the data access interface for double-precision values. Of course, you can avoid automatic numeric conversion by using the netCDF interface for a value type that corresponds to the external data type of each netCDF variable, where such value types exist.
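For example, a sketch of reading an entire numeric variable as double precision, whatever its external type (ncid and varid are assumed, nvals is assumed to have been computed from the variable's dimensions, and handle_error is a user-supplied routine):

    #include <stdlib.h>
    ...
    double *dvals = (double *) malloc(nvals * sizeof(double));
    int status = nc_get_var_double(ncid, varid, dvals);   /* converts as needed */
    if (status != NC_NOERR)
        handle_error(status);
    ...
    free(dvals);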
The automatic numeric conversions performed by netCDF are easy to understand, because they behave just like assignment of data of one type to a variable of a different type. For example, if you read floating-point netCDF data as integers, the result is truncated towards zero, just as it would be if you assigned a floating-point value to an integer variable. Such truncation is an example of the loss of precision that can occur in numeric conversions.
Converting from one numeric type to another may result in an error if the target type is not capable of representing the converted value. For example, an integer may not be able to hold data stored externally as an IEEE floating-point number. When accessing an array of values, a range error is returned if one or more values are out of the range of representable values, but other values are converted properly.
Note that mere loss of precision in type conversion does not result in an error. For example, if you read double precision values into an integer, no error results unless the magnitude of the double precision value exceeds the representable range of integers on your platform. Similarly, if you read a large integer into a float incapable of representing all the bits of the integer in its mantissa, this loss of precision will not result in an error. If you want to avoid such precision loss, check the external types of the variables you access to make sure you use an internal type that has a compatible precision.
Whether a range error occurs in writing a large floating-point value near the boundary of representable values may depend on the platform. The largest floating-point value you can write to a netCDF float variable is the largest floating-point number representable on your system that is less than 2 to the 128th power. The largest double precision value you can write to a double variable is the largest double-precision number representable on your system that is less than 2 to the 1024th power.
This automatic conversion and separation of external data representation from internal data types will become even more important in a future version of netCDF, when new external types will be added for packed data for which there is no natural corresponding internal type, for example, arrays of 11-bit values.
It is possible to build other kinds of data structures from sets of arrays by adopting various conventions regarding the use of data in one array as pointers into another array. The netCDF library won't provide much help or hindrance with constructing such data structures, but netCDF provides the mechanisms with which such conventions can be designed.
The following example stores a ragged array ragged_mat using an attribute row_index to name an associated index variable giving the index of the start of each row. In this example, the first row contains 12 elements, the second row contains 7 elements (19 - 12), and so on.

    float ragged_mat(max_elements);
        ragged_mat:row_index = "row_start";
    int row_start(max_rows);
    data:
        row_start = 0, 12, 19, ...

As another example, netCDF variables may be grouped within a netCDF dataset by defining attributes that list the names of the variables in each group, separated by a conventional delimiter such as a space or comma. Using a naming convention for attribute names for such groupings permits any number of named groups of variables. A particular conventional attribute for each variable might list the names of the groups of which it is a member. Use of attributes, or variables that refer to other attributes or variables, provides a flexible mechanism for representing some kinds of complex structures in netCDF datasets.
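Returning to the ragged-array example above, a sketch of reading one row using that convention might look as follows (assuming the dataset is open as ncid, that MAX_ROWS and MAX_ELEMENTS correspond to the CDL dimensions, and with error checking omitted for brevity):

    int ragged_id, row_start_id;
    int row_start[MAX_ROWS];
    float row[MAX_ELEMENTS];
    size_t start[1], count[1];
    int i = 1;                        /* the second row: 19 - 12 = 7 elements */

    nc_inq_varid(ncid, "ragged_mat", &ragged_id);
    nc_inq_varid(ncid, "row_start", &row_start_id);
    nc_get_var_int(ncid, row_start_id, row_start);    /* read the row index */

    start[0] = row_start[i];
    count[0] = row_start[i+1] - row_start[i];
    nc_get_vara_float(ncid, ragged_id, start, count, row);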