protomotions.agents.evaluators.metrics module

class protomotions.agents.evaluators.metrics.MotionMetrics(num_motions, motion_lens, max_motion_len, num_sub_features=1, device=None, dtype=...)[source]

Bases: object

Store and compute metrics for motion data.

Stores raw data in the shape [num_motions, max_motion_len, num_sub_features] and supports basic reduction operations for computing final metrics.

__init__(num_motions, motion_lens, max_motion_len, num_sub_features=1, device=None, dtype=...)[source]

Initialize the metrics tracker.

Parameters:
  • num_motions (int) – Number of motions to track

  • motion_lens (torch.Tensor) – Number of frames in each motion sequence

  • max_motion_len (int) – Conservative maximum number of frames allocated for data storage, so that tensor shapes stay consistent across GPUs when aggregating

  • num_sub_features (int) – Number of sub-features per data point (default: 1)

  • device (torch.device) – Device to store the tensors on

  • dtype (torch.dtype) – Data type for the tensors
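
Example (a minimal sketch; the sizes and values shown are illustrative, not part of the API):

>>> import torch
>>> from protomotions.agents.evaluators.metrics import MotionMetrics
>>> motion_lens = torch.tensor([50, 80, 65])  # frames per motion
>>> metrics = MotionMetrics(
...     num_motions=3,
...     motion_lens=motion_lens,
...     max_motion_len=100,  # conservative bound shared across GPUs
...     num_sub_features=1,
...     device=torch.device("cpu"),
... )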

update(motion_ids, values, frame_indices=None)[source]

Update the metrics data for specified motions.

Parameters:
  • motion_ids (torch.Tensor) – Tensor of motion IDs to update [batch_size]

  • values (torch.Tensor) – Tensor of values to update [batch_size, num_sub_features]

  • frame_indices (torch.Tensor | None) – Optional tensor of frame indices [batch_size]. If None, the current frame count for each motion is used
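
Example (continuing the construction sketch above; the IDs, values, and frame indices are illustrative):

>>> motion_ids = torch.tensor([0, 2])
>>> values = torch.tensor([[0.12], [0.34]])  # [batch_size, num_sub_features]
>>> metrics.update(motion_ids, values)  # written at each motion's current frame count
>>> metrics.update(motion_ids, values, frame_indices=torch.tensor([5, 7]))  # explicit frames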

get_unfilled_mask()[source]

Get a mask of the unfilled values in the data.
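
Usage sketch (the mask shape is an assumption inferred from the storage layout, not documented above):

>>> mask = metrics.get_unfilled_mask()  # assumed shape: [num_motions, max_motion_len], True where no value has been written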

max_reduce_each_motion(with_frame=False)[source]

Reduce the data by taking the max of each motion.

min_reduce_each_motion()[source]

Reduce the data by taking the min of each motion.

mean_reduce_each_motion()[source]

Reduce the data by taking the mean of each motion.

ops_mean_reduce(op)[source]

First reduce the data by applying op over the valid frames of each motion, then take the mean across motions.

max_mean_reduce()[source]

Take the max over valid frames of each motion, then the mean across motions.

min_mean_reduce()[source]

Take the min over valid frames of each motion, then the mean across motions.

mean_mean_reduce()[source]

Take the mean over valid frames of each motion, then the mean across motions.

mean_max_reduce()[source]

First reduce each motion by taking the mean over valid frames, then take the max across all motions.

Returns:

Maximum of the per-motion means (worst-performing motion)

Return type:

torch.Tensor

mean_min_reduce()[source]

First reduce each motion by taking the mean over valid frames, then take the min across all motions.

Returns:

Minimum of the per-motion means (best-performing motion)

Return type:

torch.Tensor
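
Usage sketch for the reduction family (continuing the example above; the [num_motions] shape of the per-motion reductions is an assumption inferred from the storage layout):

>>> per_motion = metrics.mean_reduce_each_motion()  # assumed shape: [num_motions]
>>> worst = metrics.mean_max_reduce()  # mean per motion, then max (worst-performing motion)
>>> best = metrics.mean_min_reduce()   # mean per motion, then min (best-performing motion)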

compute_finite_difference_jitter_reduce_each_motion(num_bodies, aggregate_method='mean', order=2, field_description='data')[source]

Generic method to compute jitter using finite differences of the specified order. The output is zero-padded at the beginning so its length matches the input.

Parameters:
  • num_bodies (int) – Number of rigid bodies (to reshape the flattened data)

  • aggregate_method (str) – How to aggregate across bodies (“mean”, “max”, “sum”)

  • order (int) – Order of finite differences (1 for velocity-like, 2 for acceleration-like)

  • field_description (str) – Description of the field for error messages

Returns:

Jitter values with shape [num_motions, max_motion_len] (same as input)

Return type:

torch.Tensor
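
To illustrate the finite differences involved (plain torch, independent of this class; here x holds one scalar feature over five frames):

>>> x = torch.tensor([0., 1., 4., 9., 16.])
>>> x[1:] - x[:-1]                  # 1st order (velocity-like)
tensor([1., 3., 5., 7.])
>>> x[2:] - 2 * x[1:-1] + x[:-2]    # 2nd order (acceleration-like)
tensor([2., 2., 2.])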

compute_jitter_reduce_each_motion(num_bodies, aggregate_method='mean')[source]

Compute jitter (2nd order finite differences of positions) and reduce across body dimensions.

This method is specifically designed for rigid_body_pos data with shape [num_motions, max_motion_len, num_bodies*3]. It computes the L2 norm of 2nd order finite differences (pos[t+1] - 2*pos[t] + pos[t-1]) for each body, then aggregates across all bodies using the specified method. Output is zero-padded at the beginning to match input length.

Parameters:
  • num_bodies (int) – Number of rigid bodies (to reshape the flattened data)

  • aggregate_method (str) – How to aggregate across bodies (“mean”, “max”, “sum”)

Returns:

Jitter values with shape [num_motions, max_motion_len] (same as input)

Return type:

torch.Tensor
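
Usage sketch (a minimal sketch assuming the tracker stores flattened rigid-body positions, i.e. num_sub_features = num_bodies * 3; num_bodies=24 is illustrative):

>>> num_bodies = 24
>>> pos_metrics = MotionMetrics(
...     num_motions=3,
...     motion_lens=motion_lens,
...     max_motion_len=100,
...     num_sub_features=num_bodies * 3,
... )
>>> # ... fill pos_metrics with rigid_body_pos values via update() ...
>>> per_frame = pos_metrics.compute_jitter_reduce_each_motion(
...     num_bodies=num_bodies, aggregate_method="mean"
... )  # [num_motions, max_motion_len], as documented above
>>> per_motion = pos_metrics.jitter_mean_reduce_each_motion(
...     num_bodies=num_bodies
... )  # [num_motions], see jitter_mean_reduce_each_motion below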

compute_rotation_jitter_reduce_each_motion(num_bodies, aggregate_method='mean')[source]

Compute rotation jitter (1st order finite differences of angular velocities) and reduce across body dimensions.

This method is specifically designed for rigid_body_ang_vel data with shape [num_motions, max_motion_len, num_bodies*3]. It computes the L2 norm of 1st order finite differences (ang_vel[t+1] - ang_vel[t]) for each body, then aggregates across all bodies using the specified method. Output is zero-padded at the beginning to match input length.

Parameters:
  • num_bodies (int) – Number of rigid bodies (to reshape the flattened data)

  • aggregate_method (str) – How to aggregate across bodies (“mean”, “max”, “sum”)

Returns:

Rotation jitter values with shape [num_motions, max_motion_len] (same as input)

Return type:

torch.Tensor

jitter_mean_reduce_each_motion(num_bodies, aggregate_method='mean')[source]

Compute jitter and then take the mean over time for each motion.

Parameters:
  • num_bodies (int) – Number of rigid bodies

  • aggregate_method (str) – How to aggregate across bodies (“mean”, “max”, “sum”)

Returns:

Mean jitter value for each motion [num_motions]

Return type:

torch.Tensor

rotation_jitter_mean_reduce_each_motion(num_bodies, aggregate_method='mean')[source]

Compute rotation jitter and then take the mean over time for each motion.

Parameters:
  • num_bodies (int) – Number of rigid bodies

  • aggregate_method (str) – How to aggregate across bodies (“mean”, “max”, “sum”)

Returns:

Mean rotation jitter value for each motion [num_motions]

Return type:

torch.Tensor

copy_from(other)[source]

Copy data from another MotionMetrics object.

copy_from_motion_ids(other, motion_ids)[source]

Copy data from another MotionMetrics object for specific motions.

merge_from(other)[source]

Merge data from another MotionMetrics object.
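
A sketch of aggregating trackers from multiple sources, e.g. one per GPU rank (rank0_metrics and rank1_metrics are hypothetical per-rank trackers built with the same max_motion_len):

>>> global_metrics = MotionMetrics(num_motions=3, motion_lens=motion_lens, max_motion_len=100)
>>> global_metrics.merge_from(rank0_metrics)  # hypothetical per-rank tracker
>>> global_metrics.merge_from(rank1_metrics)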

reset()[source]

Reset all stored data and frame counts.

to(device)[source]

Move metrics to specified device.