smartgrid.rewards.numeric.differentiated.equity.Equity
- class smartgrid.rewards.numeric.differentiated.equity.Equity
Bases: Reward
Reward based on the equity of comforts.
We first get the comforts of all agents in the society and compute the equity of these comforts (see smartgrid.util.equity, and especially smartgrid.util.equity.hoover(), for details). This gives us a global component (the current environment). Then, we compute the equity of only the others' comforts, i.e., of all agents except the one currently being judged. This gives us a local component (the hypothetical environment, had the agent not acted).
The reward follows the Difference Reward principle, and thus is the global component minus the local component.
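As a rough illustration, the computation can be sketched as follows. This sketch assumes that smartgrid.util.equity.hoover() returns an inequality index in [0, 1] (0 meaning perfect equality), so that equity is taken as 1 minus that index; the helper name difference_equity is purely illustrative:

    from smartgrid.util.equity import hoover

    def difference_equity(comforts, judged_index):
        # Global component: equity of the comforts of the whole society,
        # including the agent currently being judged.
        global_equity = 1.0 - hoover(comforts)
        # Local component: equity of the others' comforts only, i.e. the
        # hypothetical environment had the judged agent not acted.
        others = comforts[:judged_index] + comforts[judged_index + 1:]
        local_equity = 1.0 - hoover(others)
        # Difference Reward: global minus local, i.e. the judged agent's
        # marginal contribution to the society's equity.
        return global_equity - local_equity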
Methods
- __init__()
- calculate(world, agent): Compute the reward for a specific Agent at the current time step.
- reset(): Reset the reward function.
Attributes
- name: Uniquely identifying, human-readable name for this reward function.
- calculate(world, agent)
Compute the reward for a specific Agent at the current time step.
- Parameters:
world – The World, used to get the current state and determine consequences of the agent’s action.
agent – The Agent that is rewarded, used to access particular information about the agent (personal state) and its action.
- Returns:
A reward, i.e., a single value describing how well the agent performed. The higher the reward, the better its action was. Typically a value in [0, 1], but any range can be used.
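A hypothetical usage sketch, where world and agent are assumed to come from a running smartgrid simulation:

    from smartgrid.rewards.numeric.differentiated.equity import Equity

    reward_fn = Equity()
    # `world` and `agent` are assumed to be provided by the simulation loop.
    reward = reward_fn.calculate(world, agent)  # single float; higher is better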
- reset()
Reset the reward function.
This function must be overridden by reward functions that use a state, so that the state is reset with the environment. By default, does nothing, as most reward functions do not use a state.
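For instance, a reward function that accumulates state across time steps would override reset(). A minimal sketch, in which the base-class import path, its constructor signature, and the agent's comfort attribute are all assumptions:

    from smartgrid.rewards.reward import Reward  # assumed import path

    class CumulativeComfort(Reward):
        # Hypothetical stateful reward: sums an agent's comfort over time.

        def __init__(self):
            super().__init__()  # assumes the base constructor takes no arguments
            self.total = 0.0  # internal state, persists across time steps

        def calculate(self, world, agent):
            self.total += agent.comfort  # `comfort` attribute is an assumption
            return self.total

        def reset(self):
            # Clear the accumulated state when the environment resets.
            self.total = 0.0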