Logging

magnum.np provides three logger classes to log the simulation state: the ScalarLogger, the FieldLogger, and the combined Logger. The ScalarLogger logs scalar values such as the time or the averaged magnetization, the FieldLogger logs arbitrary scalar and vector fields alongside the simulation time, and the Logger logs both scalar and field values by internally using a ScalarLogger and a FieldLogger. Furthermore, magnum.np includes a resume mechanism which enables the user to restart a simulation from the last logged state.
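
To put the loggers into context, the following minimal sketch shows how a combined Logger could be wired into a typical time loop. The mesh, field terms, and LLGSolver setup are assumptions based on the usual magnum.np example scripts and are not part of this page; only the Logger calls follow the documentation below.

from magnumnp import *

# assumed demo-style setup (not covered on this page)
mesh = Mesh((100, 25, 1), (5e-9, 5e-9, 3e-9))
state = State(mesh)
# ... set state.material and the initial magnetization state.m here ...

llg = LLGSolver([DemagField(), ExchangeField()])

# log t and the averaged magnetization on every call, the full field on every 100th call
logger = Logger('data', scalars = ['t', 'm'], fields = ['m'], fields_every = 100)
logger.resume(state)   # continue from existing log files if possible

while state.t < 1e-9:
    llg.step(state, 1e-11)
    logger << state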

Logger

class magnumnp.Logger(directory, scalars=[], fields=[], scalars_every=1, fields_every=1, scale=1.0)[source]

Bases: object

Combined Scalar- and Field-Logger class

Arguments
directory (str)

The name of the directory the log files are written to

scalars ([str | function])

The scalar columns to be written to the scalar log file

scalars_every (int)

Write scalars to the log file every nth call

fields ([str | function])

The fields to be written to the field log file

fields_every (int)

Write fields to the log file every nth call

scale (float)

Scale factor for the spatial dimensions of the field output (e.g. 1e9 for nm units)

Example
# provide key strings which are available in state
logger = Logger('data', ['m', demag.h], ['m'], fields_every = 100)

# Actually log fields
state = State(mesh)
logger << state
__init__(directory, scalars=[], fields=[], scalars_every=1, fields_every=1, scale=1.0)[source]

is_resumable()[source]

Returns True if logger can resume from log files.

Returns
bool

True if resumable, False otherwise

resume(state)[source]

Tries to resume from existing log files. If resume is possible, the state object is updated with the latest possible values of t and m and the different log files are aligned and resumed. If resume is not possible, the state is not modified and the simulation starts from the beginning.

Arguments
state (State)

The state to be resumed from the log data

The resume function is currently only available for the time t and the magnetization m. It relies on existing log files and requires a field logger for the magnetization. Since this is not always present, resuming a state is not always possible. You can check whether the state can be resumed by calling is_resumable(), which returns True if resume is possible. Calling resume() then updates the state object with the latest logged values of t and m. The following example creates a logger for the magnetization, checks whether the state can be resumed, resumes it if possible, and finally logs the state.

logger = Logger('data', scalars = ['t', 'm'], fields = ['m'])
if logger.is_resumable():
    logger.resume(state)
logger << state

ScalarLogger

class magnumnp.ScalarLogger(filename, columns, every=1, fsync_every=1)[source]

Bases: object

__init__(filename, columns, every=1, fsync_every=1)[source]

Simple logger class to log scalar values into a tab separated file.

Arguments
filename (str)

The name of the log file

columns ([str | function])

The columns to be written to the log file

every (int)

Write row to log file every nth call

fsync_every (int)

Call fsync every nth write to flush the OS buffer

Example
# provide key strings which are available in state
logger = ScalarLogger('log.dat', ['t','m'])

# provide func(state) or tuple (name, func(state))
logger = ScalarLogger('log.dat', [('t[ns]', lambda state: state.t*1e9)])

# provide predefined functions
logger = ScalarLogger('log.dat', [demag.h, demag.E])

# Actually log a row
state = State(mesh)
logger << state

resumable_step()[source]

Returns the last step the logger can resume from, e.g. if the logger logs every 10th step and the first (i = 0) step was already logged, the result is 10.

Returns
int

The step number the logger is able to resume from

resume(i)[source]

Try to resume existing log file from log step i. The log file is truncated accordingly.

Arguments
i (int)

The log step to resume from
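
The last two methods can be combined to restart several scalar log files at a consistent step. The following sketch is only an illustration; the file names and columns are assumptions:

logger_m = ScalarLogger('data/m.dat', ['t', 'm'])
logger_E = ScalarLogger('data/E.dat', ['t', demag.E], every = 10)

# resume both files at the last step that is available in both of them
i = min(logger_m.resumable_step(), logger_E.resumable_step())
logger_m.resume(i)
logger_E.resume(i)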

To explain how to log a custom scalar function we will look at some further examples. In principle you can use the ScalarLogger to log any function that accepts the state as a parameter and returns a scalar or a list/tuple. For example, you can log the magnetization at a certain cell by defining a lambda function like so:

myfunc = lambda state: state.m[1,2,3,:]

This function returns the magnetization vector at cell (1, 2, 3). To log it, simply add myfunc to the list of columns that are being logged:

ScalarLogger('log.dat', ['m', 't', myfunc])

You might also want to log the average magnetization, which you can do as follows:

myfunc = lambda state: state.m.avg()
ScalarLogger('log.dat', ['m', 't', myfunc])

Or simply:

ScalarLogger('log.dat', ['m', 't', lambda state: state.m.avg()])
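
A custom column can also be given an explicit header by using the (name, function) tuple form from the ScalarLogger example above; the column name 'mx_center' is an arbitrary illustrative choice:

# log the x-component at a chosen cell under a custom column name
ScalarLogger('log.dat', ['t', 'm', ('mx_center', lambda state: state.m[1, 2, 3, 0])])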

FieldLogger

class magnumnp.FieldLogger(filename, fields, every=1, scale=1.0)[source]

Bases: object

__init__(filename, fields, every=1, scale=1.0)[source]

Logger class for fields

Arguments
filename (str)

The name of the log file

fields ([str | function])

The columns to be written to the log file

every (int)

Write row to log file every nth call

scale (float)

Scale factor for the spatial dimensions (e.g. 1e9 for nm units)

Example
# provide key strings which are available in state
logger = FieldLogger('data/m.pvd', ['m', demag.h])

# Actually log fields
state = State(mesh)
logger << state
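
If the mesh is defined in SI units, the documented scale argument can be used to write the geometry in more convenient units; the following line is a usage sketch based on that parameter:

# write the spatial coordinates in nm instead of m
logger = FieldLogger('data/m.pvd', ['m'], scale = 1e9)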

last_recorded_step()[source]

Returns the number of the last step logged, or None if no step has been logged yet.

Returns
int or None

Number of the last step recorded

resume(i)[source]

Try to resume existing log file from log step i. The log file is truncated accordingly.

Arguments
i (int)

The log step to resume from

step_data(i, field=None)[source]

Returns field and time to a given step number.

Arguments
i (int)

The step number

field (str)

The field to be read

Returns
(torch.Tensor, float)

The field of step i and the corresponding time
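
A short sketch of how step_data() might be used to read back a previously logged field; the file name and step index are illustrative:

field_logger = FieldLogger('data/m.pvd', ['m'])
# ... after several steps have been logged ...
m_i, t_i = field_logger.step_data(5, 'm')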

Just like the ScalarLogger, the FieldLogger can also log custom functions. All you need is a function that takes the state as a parameter and returns a vector field of shape (nx, ny, nz, 3). The function itself is then simply added to the list of fields given to the FieldLogger. For example, to log the function nsk you would do the following:

from math import pi
import torch

def nsk(state): # TODO: document and improve interface
    # skyrmion number 1/(4*pi) * int m . (dx_m x dy_m) dx dy of the z-averaged magnetization
    m = state.m.mean(axis=2)
    dxm = torch.stack(torch.gradient(m, spacing = state.mesh.dx[0], dim = 0), dim = -1).squeeze(-1)
    dym = torch.stack(torch.gradient(m, spacing = state.mesh.dx[1], dim = 1), dim = -1).squeeze(-1)
    return 1./(4.*pi) * (m * torch.linalg.cross(dxm, dym)).sum() * state.mesh.dx[0] * state.mesh.dx[1]

FieldLogger("fields.pvd", ['m'], 'nsk', every  = 100)

Here every = 100 was added to log the fields only on every 100th call.
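
For a custom function that returns an actual field, the same pattern applies. The helper m_inplane below is a purely illustrative assumption, not part of magnum.np:

def m_inplane(state):
    # zero out the z-component of the magnetization and log the result as an additional field
    m = state.m.clone()
    m[..., 2] = 0.
    return m

FieldLogger("fields.pvd", ['m', m_inplane], every = 100)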