smartgrid.rewards.numeric.differentiated.over_consumption.OverConsumption

class smartgrid.rewards.numeric.differentiated.over_consumption.OverConsumption[source]

Bases: Reward

Reward representing the over-consumption of an Agent.

The over-consumption is the quantity of energy that was consumed by the society of agents but was not available in the grid. (We assume that the grid automatically buys the missing energy from the national grid to compensate, which has a negative impact; over-consumption should thus be avoided.)

We compare the quantity of energy taken (i.e., consumed + stored from the grid) by all agents to the quantity of energy over-consumed by all agents. This gives us a global component (the current environment).

Then, we compare the quantity of energy over-consumed, minus the agent’s taken energy, to the sum of energy taken by all agents. This gives us a local component (the hypothetical environment, had the agent not acted).

The reward follows the Difference Reward principle, and thus is the global component minus the local component.
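
As an illustration, the sketch below mirrors this computation in Python. The accessor names (world.agents, world.available_energy, agent.enacted_action.grid_consumption, agent.enacted_action.store_energy) are assumptions made for the example, not the documented World/Agent API, and the actual implementation may normalize the two components differently:

    def over_consumption_difference_reward(world, agent):
        # Energy taken from the grid by each agent (consumed + stored).
        # The `enacted_action` attribute names are assumptions.
        taken = {
            a: a.enacted_action.grid_consumption + a.enacted_action.store_energy
            for a in world.agents
        }
        total_taken = sum(taken.values())
        epsilon = 1e-12  # avoid dividing by zero when nothing was taken

        # Over-consumption: energy taken beyond what the grid had available.
        over_consumed = max(0.0, total_taken - world.available_energy)

        # Global component: performance of the society in the current
        # environment (1 = no over-consumption at all).
        global_component = 1.0 - over_consumed / (total_taken + epsilon)

        # Local component: the same measure in the hypothetical environment
        # where this agent had not acted, i.e., its taken energy is
        # subtracted from the over-consumption.
        hypothetical_oc = max(0.0, over_consumed - taken[agent])
        local_component = 1.0 - hypothetical_oc / (total_taken + epsilon)

        # Difference Reward principle: global minus local component.
        return global_component - local_component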

__init__()[source]

Methods

__init__()

calculate(world, agent)

Compute the reward for a specific Agent at the current time step.

is_activated(world, agent)

Determines whether the reward function should produce a reward.

reset()

Reset the reward function.

Attributes

name

Uniquely identifying, human-readable name for this reward function.

calculate(world: World, agent: Agent)[source]

Compute the reward for a specific Agent at the current time step.

Parameters:
  • world – The World, used to get the current state and determine consequences of the agent’s action.

  • agent – The Agent that is rewarded, used to access particular information about the agent (personal state) and its action.

Returns:

A reward, i.e., a single value describing how well the agent performed. The higher the reward, the better its action was. Typically a value in [0, 1], but any range can be used.
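
For instance, a hypothetical helper could reward every agent at the current time step; only calculate() and is_activated() below are part of the documented API:

    from smartgrid.rewards.numeric.differentiated.over_consumption import OverConsumption

    def compute_rewards(world, agents):
        # Hypothetical helper: map each agent to its reward for this step,
        # skipping agents for which the reward function is not activated.
        reward_fn = OverConsumption()
        return {
            agent: reward_fn.calculate(world, agent)
            for agent in agents
            if reward_fn.is_activated(world, agent)
        }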

is_activated(world: World, agent: Agent) → bool

Determines whether the reward function should produce a reward.

This function can be used to enable/disable the reward function at will, allowing for a variety of use cases (changing the reward function over time, using different reward functions for different agents, etc.).

By default, it returns True, so that subclasses are not forced to define this function. To specify when the reward function should be activated, there are two possibilities:

  • Wrap the Reward object in a constraint class, e.g., TimeConstrainedReward.

  • Override this method in the subclass to implement the desired activation mechanism, as in the sketch below.
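
For example, the second option could look like the following sketch; the world.current_step attribute is an assumption made for the example, not necessarily part of the actual World API:

    from smartgrid.rewards.numeric.differentiated.over_consumption import OverConsumption

    class LateOverConsumption(OverConsumption):
        """OverConsumption reward that only activates after a warm-up period."""

        def __init__(self, start_step=100):
            super().__init__()
            self.start_step = start_step

        def is_activated(self, world, agent):
            # `world.current_step` is an assumed attribute; adapt to the
            # actual World API.
            return world.current_step >= self.start_step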

Parameters:
  • world – The World in which the reward function may be activated.

  • agent – The Agent that should (potentially) be rewarded by this reward function.

Returns:

A boolean indicating whether the reward function should produce a reward at this moment (for this state of the world and this learning agent).

name: str

Uniquely identifying, human-readable name for this reward function.

reset()

Reset the reward function.

This function must be overridden by reward functions that use a state, so that the state is reset along with the environment. By default, it does nothing, as most reward functions do not use a state.
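
For example, a stateful reward function could override reset() as in the sketch below; the base-class import path and constructor signature are assumptions made for the example:

    from smartgrid.rewards.reward import Reward  # assumed import path

    class CumulativeReward(Reward):
        """Hypothetical reward that accumulates a running total across steps."""

        def __init__(self):
            super().__init__()
            self.total = 0.0  # internal state, must be cleared on reset

        def calculate(self, world, agent):
            # Hypothetical per-step value; a real reward function would
            # derive it from `world` and `agent`.
            step_value = 1.0
            self.total += step_value
            return self.total

        def reset(self):
            # Clear the accumulated state so a new episode starts fresh.
            self.total = 0.0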