Simulation#
Longest queue(s) first#
- class stochastic_matching.simulator.longest.Longest(model, shift_rewards=True, **kwargs)[source]#
Matching simulator derived from
ExtendedSimulator
. By default, the policy is greedy: whenever multiple edges are actionable, the one with the longest queue (sum of the queues of its adjacent nodes) is chosen. Multiple options are available to tweak this behavior.
- Parameters:
model (
Model
) – Model to simulate.shift_rewards (
bool
, default=True) – Longest only selects edges with a positive score. By default, all actionable edges have a positive score, which makes the policy greedy. However, if negative rewards are added into the mix, this is no longer true. Setting shift_rewards to True ensures that default edge scores are non-negative, so reward-based selection stays greedy. Setting it to False enables a non-greedy reward-based selection.**kwargs – Keyword parameters of
ExtendedSimulator
.
Examples
Let’s start with a working triangle. Note that the results are the same for all greedy simulators because there are no decisions to make in a triangle (at most one non-empty queue under a greedy policy).
>>> import stochastic_matching as sm >>> sim = Longest(sm.Cycle(rates=[3, 4, 5]), n_steps=1000, seed=42, max_queue=10) >>> sim.run() >>> sim.plogs Arrivals: [287 338 375] Traffic: [125 162 213] Queues: [[838 104 41 13 3 1 0 0 0 0] [796 119 53 22 8 2 0 0 0 0] [640 176 92 51 24 9 5 3 0 0]] Steps done: 1000
A non-stabilizable diamond (the simulation ends before completion due to drift).
>>> sim = Longest(sm.CycleChain(rates='uniform'), n_steps=1000, seed=42, max_queue=10) >>> sim.run() >>> sim.plogs Arrivals: [85 82 85 86] Traffic: [38 38 7 37 40] Queues: [[126 74 28 37 21 32 16 1 2 1] [326 8 3 1 0 0 0 0 0 0] [321 12 4 1 0 0 0 0 0 0] [ 90 80 47 37 37 23 11 3 5 5]] Steps done: 338
A stabilizable candy (but candies are not good for greedy policies).
>>> sim = Longest(sm.HyperPaddle(rates=[1, 1, 1.5, 1, 1.5, 1, 1]), n_steps=1000, seed=42, max_queue=25) >>> sim.run() >>> sim.plogs Arrivals: [46 26 32 37 70 35 45] Traffic: [24 17 2 23 33 12 13] Queues: [[ 23 32 45 38 22 43 31 34 20 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [290 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [290 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [ 9 1 7 9 3 3 26 37 4 8 10 9 2 10 40 11 2 16 3 3 21 27 22 1 7] [212 49 22 5 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [233 41 6 7 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [231 33 16 4 6 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]] Steps done: 291
Using the greedy rewards-based longest policy (inspired by Stolyar's EGPD technique), one can try to reach some vertices.
That works for the Fish graph:
>>> fish = sm.KayakPaddle(m=4, l=0, rates=[4, 4, 3, 2, 3, 2]) >>> sim = Longest(fish, rewards=[0, 2, 2, 0, 1, 1, 0], beta=.01, n_steps=10000, seed=42) >>> sim.run()
The flow avoids one edge:
>>> sim.flow array([2.8764, 1.089 , 1.0008, 0. , 0.8532, 2.079 , 1.0062])
The price is one node having a big queue:
>>> avg_queues = sim.avg_queues >>> avg_queues[-1] np.float64(61.1767)
Other nodes are not affected:
>>> np.round(np.mean(avg_queues[:-1]), decimals=4) np.float64(0.7362)
Playing with the beta parameter allows one to adjust the trade-off (smaller queue, but some leak on the forbidden edge):
>>> sim = Longest(fish, rewards=[0, 2, 2, 0, 1, 1, 0], beta=.1, n_steps=10000, seed=42) >>> sim.run() >>> sim.flow array([2.9574, 1.008 , 0.9198, 0.018 , 0.9972, 2.061 , 1.0242]) >>> sim.avg_queues[-1] np.float64(8.4628)
Alternatively, one can use the k-filtering technique:
>>> diamond = sm.CycleChain(rates=[1, 2, 2, 1]) >>> diamond.run('longest', forbidden_edges=[0, 4], seed=42, ... k=100, n_steps=1000, max_queue=1000) True >>> diamond.simulation array([0. , 0.954, 0.966, 0.954, 0. ])
The same result can be achieved by putting low rewards on edges 0 and 4 and setting forbidden_edges to True.
>>> diamond.run('longest', rewards=[1, 2, 2, 2, 1], seed=42, forbidden_edges=True, ... k=100, n_steps=1000, max_queue=1000) True >>> diamond.simulation array([0. , 0.954, 0.966, 0.954, 0. ])
Note that if the threshold is too low, some exceptions are made to limit the queue sizes.
>>> diamond.run('longest', rewards=[1, 2, 2, 2, 1], seed=42, forbidden_edges=True, ... k=10, n_steps=1000, max_queue=1000) True >>> diamond.simulation array([0.108, 0.93 , 0.966, 0.93 , 0.024])
Having no relaxation usually leads to large, possibly overflowing, queues. However, this does not show up in small simulations.
>>> diamond.run('longest', rewards=[1, 2, 2, 2, 1], seed=42, forbidden_edges=True, ... n_steps=1000, max_queue=100) True >>> diamond.simulator.avg_queues array([7.515, 7.819, 1.067, 1.363]) >>> diamond.simulation array([0. , 0.954, 0.966, 0.954, 0. ])
For comparison, here is the priority-based pure greedy version:
>>> diamond.run('priority', weights=[1, 2, 2, 2, 1], n_steps=1000, max_queue=1000, seed=42) True >>> diamond.simulation array([0.444, 0.63 , 0.966, 0.63 , 0.324])
We get similar results with the Stolyar-inspired greedy longest policy:
>>> diamond.run('longest', rewards=[-1, 1, 1, 1, -1], beta=.01, n_steps=1000, max_queue=1000, seed=42) True >>> diamond.simulation array([0.444, 0.63 , 0.966, 0.63 , 0.324])
However, if you remove the greediness, you can converge to the vertex:
>>> diamond.run('longest', rewards=[-1, 1, 1, 1, -1], beta=.01, ... n_steps=1000, max_queue=1000, seed=42, shift_rewards=False) True >>> diamond.simulation array([0. , 0.954, 0.966, 0.954, 0. ])
Another example with other rates.
>>> diamond.rates=[4, 5, 2, 1]
Optimize, with the first and last edges providing a lower reward.
>>> diamond.run('longest', rewards=[1, 2, 2, 2, 1], seed=42, forbidden_edges=True, ... k=100, n_steps=1000, max_queue=1000) True >>> diamond.simulation array([3.264, 0.888, 0.948, 0.84 , 0. ])
Increase the reward on the first edge.
>>> diamond.run('longest', rewards=[4, 2, 2, 2, 1], seed=42, forbidden_edges=True, ... k=100, n_steps=1000, max_queue=1000) True >>> diamond.simulation array([4.152, 0. , 0.996, 0. , 0.84 ])
On bijective graphs, no edge is forbidden, whatever the rewards.
>>> paw = sm.Tadpole() >>> paw.run('longest', rewards=[6, 3, 1, 2], seed=42, forbidden_edges=True, ... k=100, n_steps=1000, max_queue=1000) True >>> paw.simulation array([1.048, 1.056, 1.016, 0.88 ])
- name = 'longest'#
Name that can be used to list all non-abstract classes.
- stochastic_matching.simulator.longest.longest_core(logs, arrivals, graph, n_steps, queue_size, scores, forbidden_edges, k)[source]#
Jitted function for policies based on longest-queue first.
- Parameters:
logs (
Logs
) – Monitored variables.arrivals (
Arrivals
) – Item arrival process.graph (
JitHyperGraph
) – Model graph.n_steps (
int
) – Number of arrivals to process.queue_size (
ndarray
) – Number of waiting items of each class.scores (
ndarray
) – Default value for edges. Enables EGPD-like selection mechanism.forbidden_edges (
list
, optional) – Edges that are disabled.k (
int
, optional) – Queue size above which forbidden edges become available again.
- Return type:
None
Virtual queue policy#
- class stochastic_matching.simulator.virtual_queue.VirtualQueue(model, max_edge_queue=None, **kwargs)[source]#
Non-Greedy Matching simulator derived from
ExtendedSimulator
. It always picks the best edge according to a scoring function, even if that edge cannot be used (yet).
- Parameters:
model (
Model
) – Model to simulate.max_edge_queue (
int
, optional) – In some extreme situations, the default space allocated for the edge virtual queue may be too small. If that happens, use this parameter to increase the memory allocated to the virtual queue.**kwargs – Keyword parameters of
ExtendedSimulator
.
Examples
Let’s start with a working triangle.
>>> import stochastic_matching as sm >>> sim = VirtualQueue(sm.Cycle(rates=[3, 4, 5]), n_steps=1000, seed=42, max_queue=10) >>> sim.run() >>> sim.plogs Arrivals: [287 338 375] Traffic: [125 162 213] Queues: [[837 105 41 13 3 1 0 0 0 0] [784 131 53 22 8 2 0 0 0 0] [629 187 92 51 24 9 5 3 0 0]] Steps done: 1000
Unstable diamond (simulation ends before completion due to drift).
>>> sim = VirtualQueue(sm.CycleChain(rates='uniform'), n_steps=1000, seed=42, max_queue=10) >>> sim.run() >>> sim.plogs Arrivals: [85 82 85 86] Traffic: [34 42 7 41 36] Queues: [[141 55 34 25 23 16 24 12 7 1] [318 16 3 1 0 0 0 0 0 0] [316 17 4 1 0 0 0 0 0 0] [106 67 64 37 22 24 7 3 6 2]] Steps done: 338
Stable diamond without reward optimization:
>>> sim = VirtualQueue(sm.CycleChain(rates=[1, 2, 2, 1]), n_steps=1000, seed=42, max_queue=10) >>> sim.run() >>> sim.plogs Arrivals: [179 342 320 159] Traffic: [ 95 84 161 84 75] Queues: [[823 120 38 19 0 0 0 0 0 0] [625 215 76 46 28 9 1 0 0 0] [686 212 70 24 7 1 0 0 0 0] [823 118 39 12 4 4 0 0 0 0]] Steps done: 1000
Let’s optimize (kill traffic on first and last edges).
>>> sim = VirtualQueue(sm.CycleChain(rates=[1, 2, 2, 1]), rewards=[0, 1, 1, 1, 0], beta=.01, ... n_steps=1000, seed=42, max_queue=10) >>> sim.run() >>> sim.plogs Arrivals: [22 51 55 27] Traffic: [ 0 22 24 27 0] Queues: [[136 16 3 0 0 0 0 0 0 0] [112 14 8 6 7 4 3 1 0 0] [ 73 21 13 7 9 8 4 5 12 3] [105 31 11 5 2 1 0 0 0 0]] Steps done: 155
OK, it’s working, but we reached the maximal queue size quite fast. Let’s reduce the pressure.
>>> sim = VirtualQueue(sm.CycleChain(rates=[1, 2, 2, 1]), rewards=[0, 1, 1, 1, 0], ... beta=.8, n_steps=1000, seed=42, max_queue=10) >>> sim.run() >>> sim.plogs Arrivals: [179 342 320 159] Traffic: [ 32 146 161 146 13] Queues: [[691 222 68 18 1 0 0 0 0 0] [514 153 123 110 48 32 16 4 0 0] [662 136 91 62 38 4 2 2 2 1] [791 128 50 18 7 6 0 0 0 0]] Steps done: 1000
Let’s play a bit with alt rewards, starting with adversarial rewards:
>>> rewards = [-1, 1, 1, 1, 2.9] >>> sim = VirtualQueue(sm.CycleChain(rates=[1, 2, 2, 1]), rewards=rewards, ... beta=.8, n_steps=1000, seed=42, max_queue=10) >>> sim.run() >>> sim.plogs Arrivals: [179 342 320 159] Traffic: [ 70 109 161 109 50] Queues: [[664 213 84 28 8 3 0 0 0 0] [460 247 135 70 50 25 10 3 0 0] [824 122 38 10 5 1 0 0 0 0] [902 71 20 7 0 0 0 0 0 0]] Steps done: 1000
With those rewards, we are far from the target. Let’s be gentle:
>>> sim = VirtualQueue(sm.CycleChain(rates=[1, 2, 2, 1]), rewards=rewards, ... alt_rewards="gentle", beta=.8, n_steps=1000, seed=42, max_queue=10) >>> sim.model.gentle_rewards(rewards) array([-2, 2, 2, 2, -2]) >>> sim.run() >>> sim.plogs Arrivals: [29 58 68 32] Traffic: [ 0 29 29 29 1] Queues: [[168 16 3 0 0 0 0 0 0 0] [143 15 9 5 7 4 3 1 0 0] [ 70 24 16 9 22 18 11 8 4 5] [ 95 32 23 10 19 7 1 0 0 0]] Steps done: 187
OK, it works, but the queue pressure is back. Let’s normalize:
>>> sim = VirtualQueue(sm.CycleChain(rates=[1, 2, 2, 1]), rewards=rewards, ... alt_rewards="normalize", beta=.8, n_steps=1000, seed=42, max_queue=10) >>> sim.model.normalize_rewards(rewards) array([ 0. , 0. , 0. , 0. , -0.1]) >>> sim.run() >>> sim.plogs Arrivals: [179 342 320 159] Traffic: [ 95 84 161 84 75] Queues: [[823 120 38 19 0 0 0 0 0 0] [625 215 76 46 28 9 1 0 0 0] [686 212 70 24 7 1 0 0 0 0] [823 118 39 12 4 4 0 0 0 0]] Steps done: 1000
Not enough pressure in this example!
Let’s now use a k-filtering approach for this diamond:
>>> sim = VirtualQueue(sm.CycleChain(rates=[1, 2, 2, 1]), forbidden_edges=True, ... rewards=[0, 1, 1, 1, 0], n_steps=1000, seed=42, max_queue=10) >>> sim.run() >>> sim.plogs Arrivals: [27 57 65 31] Traffic: [ 0 27 29 28 0] Queues: [[158 16 3 3 0 0 0 0 0 0] [136 19 8 4 5 4 3 1 0 0] [ 70 25 13 14 12 15 21 8 1 1] [ 96 24 13 12 9 16 8 2 0 0]] Steps done: 180
Let us reduce the pressure.
>>> sim = VirtualQueue(sm.CycleChain(rates=[1, 2, 2, 1]), forbidden_edges=True, rewards=[0, 1, 1, 1, 0], ... n_steps=1000, seed=42, max_queue=10, k=6) >>> sim.run() >>> sim.plogs Arrivals: [179 342 320 159] Traffic: [ 31 145 161 145 14] Queues: [[600 171 122 81 20 6 0 0 0 0] [444 151 137 104 81 65 14 4 0 0] [698 83 52 59 57 38 7 3 2 1] [730 130 64 35 26 9 2 4 0 0]] Steps done: 1000
A stable candy. While candies are not good for greedy policies, the virtual queue policy is designed to deal with them.
>>> sim = VirtualQueue(sm.HyperPaddle(rates=[1, 1, 1.5, 1, 1.5, 1, 1]), n_steps=1000, seed=42, max_queue=25) >>> sim.run() >>> sim.plogs Arrivals: [140 126 154 112 224 121 123] Traffic: [109 29 17 59 58 62 107] Queues: [[301 83 94 83 54 60 43 48 41 32 49 60 31 3 8 7 3 0 0 0 0 0 0 0 0] [825 101 27 14 26 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [899 64 20 8 9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [110 49 93 108 120 102 48 28 53 32 7 30 16 42 28 34 31 30 26 11 2 0 0 0 0] [276 185 151 95 60 40 49 52 47 33 12 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [755 142 67 36 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [709 228 47 13 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]] Steps done: 1000
Last but not least: Stolyar’s example from https://arxiv.org/abs/1608.01646
>>> stol = sm.Model(incidence=[[1, 0, 0, 0, 1, 0, 0], ... [0, 1, 0, 0, 1, 1, 1], ... [0, 0, 1, 0, 0, 1, 1], ... [0, 0, 0, 1, 0, 0, 1]], rates=[1.2, 1.5, 2, .8])
Without optimization, only self-matches are performed (no queue at all):
>>> sim = VirtualQueue(stol, n_steps=1000, seed=42, max_queue=25) >>> sim.run() >>> sim.logs.traffic.astype(int) array([236, 279, 342, 143, 0, 0, 0]) >>> sim.avg_queues array([0., 0., 0., 0.])
>>> rewards = [-1, -1, 1, 2, 5, 4, 7]
With optimization, we get the desired results at the price of a huge queue:
>>> sim = VirtualQueue(stol, rewards=rewards, beta=.01, n_steps=3000, seed=42, max_queue=300) >>> sim.run() >>> sim.logs.traffic.astype(int) array([ 0, 0, 761, 242, 591, 0, 205]) >>> sim.avg_queues array([86.17266667, 0.53233333, 93.03 , 0.33533333])
The same traffic can be achieved with much smaller queues by enforcing forbidden edges:
>>> sim = VirtualQueue(stol, rewards=rewards, forbidden_edges=True, n_steps=3000, seed=42, max_queue=300) >>> sim.run() >>> sim.logs.traffic.astype(int) array([ 0, 0, 961, 342, 691, 0, 105]) >>> sim.avg_queues array([2.731 , 0.69633333, 0.22266667, 0.052 ])
- name = 'virtual_queue'#
Name that can be used to list all non-abstract classes.
- stochastic_matching.simulator.virtual_queue.vq_core(logs, arrivals, graph, n_steps, queue_size, scores, ready_edges, edge_queue, forbidden_edges, k)[source]#
Jitted function for virtual-queue policy.
- Parameters:
logs (
Logs
) – Monitored variables.arrivals (
Arrivals
) – Item arrival process.graph (
JitHyperGraph
) – Model graph.n_steps (
int
) – Number of arrivals to process.queue_size (
ndarray
) – Number of waiting items of each class.scores (
ndarray
) – Default value for edges. Enables EGPD-like selection mechanism.ready_edges (
ndarray
) – Tells if given edge is actionable.edge_queue (
MultiQueue
) – Edges waiting to be matched.forbidden_edges (
list
, optional) – Edges that are disabled.k (
int
, optional) – Queue size above which forbidden edges become available again.
- Return type:
None
- class stochastic_matching.simulator.constant_regret.ConstantRegret(model, rewards=None, fading=None, max_edge_queue=None, **kwargs)[source]#
Non-Greedy Matching simulator that implements https://sophieyu.me/constant_regret_matching.pdf. It always picks the best positive edge according to a scoring function, even if that edge cannot be used (yet).
- Parameters:
model (
Model
) – Model to simulate.rewards (
ndarray
) – Edge rewards.fading (callable) – Jitted function that drives the relative weight of the rewards.
max_edge_queue (
int
, optional) – In some extreme situations, the default space allocated for the edge virtual queue may be too small. If that happens, use this parameter to increase the memory allocated to the virtual queue.**kwargs – Keyword parameters of
Simulator
.
Examples
Let’s start with a working triangle.
>>> import stochastic_matching as sm >>> sim = ConstantRegret(sm.Cycle(rates=[3, 4, 5]), n_steps=1000, seed=42, max_queue=10) >>> sim.run() >>> sim.plogs Arrivals: [287 338 375] Traffic: [125 162 213] Queues: [[837 105 41 13 3 1 0 0 0 0] [784 131 53 22 8 2 0 0 0 0] [629 187 92 51 24 9 5 3 0 0]] Steps done: 1000
Unstable diamond (simulation ends before completion due to drift).
>>> sim = ConstantRegret(sm.CycleChain(rates='uniform'), n_steps=1000, seed=42, max_queue=10) >>> sim.run() >>> sim.plogs Arrivals: [85 82 85 86] Traffic: [34 42 7 41 36] Queues: [[141 55 34 25 23 16 24 12 7 1] [318 16 3 1 0 0 0 0 0 0] [316 17 4 1 0 0 0 0 0 0] [106 67 64 37 22 24 7 3 6 2]] Steps done: 338
Stable diamond without reward optimization:
>>> sim = ConstantRegret(sm.CycleChain(rates=[1, 2, 2, 1]), n_steps=1000, seed=42, max_queue=10) >>> sim.run() >>> sim.plogs Arrivals: [179 342 320 159] Traffic: [ 95 84 161 84 75] Queues: [[823 120 38 19 0 0 0 0 0 0] [625 215 76 46 28 9 1 0 0 0] [686 212 70 24 7 1 0 0 0 0] [823 118 39 12 4 4 0 0 0 0]] Steps done: 1000
Let’s optimize (kill traffic on first and last edges).
>>> sim = ConstantRegret(sm.CycleChain(rates=[1, 2, 2, 1]), rewards=[0, 1, 1, 1, 0], ... n_steps=1000, seed=42, max_queue=10) >>> sim.run() >>> sim.plogs Arrivals: [20 44 51 23] Traffic: [ 3 17 25 16 0] Queues: [[136 2 0 0 0 0 0 0 0 0] [127 4 6 1 0 0 0 0 0 0] [ 24 22 9 13 14 20 16 13 4 3] [ 40 17 22 21 22 14 1 1 0 0]] Steps done: 138
OK, it’s mostly working, but we reached the maximal queue size quite fast. Let’s reduce the pressure.
>>> slow_fading = njit(lambda t: np.sqrt(t+1)/15) >>> sim = ConstantRegret(sm.CycleChain(rates=[1, 2, 2, 1]), rewards=[0, 1, 1, 1, 0], ... fading=slow_fading, n_steps=1000, seed=42, max_queue=10) >>> sim.run() >>> sim.plogs Arrivals: [179 342 320 159] Traffic: [ 43 136 161 136 23] Queues: [[900 79 18 2 1 0 0 0 0 0] [840 98 37 17 6 2 0 0 0 0] [483 224 122 78 61 26 5 1 0 0] [551 255 100 50 25 10 9 0 0 0]] Steps done: 1000
A stable candy. While candies are not good for greedy policies, virtual-queue policies are designed to deal with them.
>>> sim = ConstantRegret(sm.HyperPaddle(rates=[1, 1, 1.5, 1, 1.5, 1, 1]), n_steps=1000, seed=42, max_queue=25) >>> sim.run() >>> sim.plogs Arrivals: [140 126 154 112 224 121 123] Traffic: [109 29 17 59 58 62 107] Queues: [[301 83 94 83 54 60 43 48 41 32 49 60 31 3 8 7 3 0 0 0 0 0 0 0 0] [825 101 27 14 26 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [899 64 20 8 9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [110 49 93 108 120 102 48 28 53 32 7 30 16 42 28 34 31 30 26 11 2 0 0 0 0] [276 185 151 95 60 40 49 52 47 33 12 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [755 142 67 36 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [709 228 47 13 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]] Steps done: 1000
Last but not least: Stolyar’s example from https://arxiv.org/abs/1608.01646
>>> ns = sm.NS19(rates=[1.2, 1.5, 2, .8])
Without optimization, only self-matches are performed (no queue at all):
>>> sim = ConstantRegret(ns, n_steps=1000, seed=42, max_queue=25) >>> sim.run() >>> sim.logs.traffic.astype(int) array([236, 279, 342, 143, 0, 0, 0]) >>> sim.avg_queues array([0., 0., 0., 0.])
Let’s introduce some rewards:
>>> rewards = [-1, -1, 1, 2, 5, 4, 7] >>> ns.optimize_rates(rewards) array([0. , 0. , 1.7, 0.5, 1.2, 0. , 0.3]) >>> ns.normalize_rewards(rewards) array([-2., -5., 0., 0., 0., -1., 0.])
With optimization, we get the desired results:
>>> sim = ConstantRegret(ns, rewards=rewards, n_steps=3000, seed=42, max_queue=300) >>> sim.run() >>> sim.logs.traffic.astype(int) array([ 0, 0, 961, 342, 691, 0, 105]) >>> sim.avg_queues array([2.731 , 0.69633333, 0.22266667, 0.052 ])
One can check that this is the same as using a regular virtual queue with forbidden edges:
>>> from stochastic_matching.simulator.virtual_queue import VirtualQueue >>> sim = VirtualQueue(ns, rewards=rewards, forbidden_edges=True, n_steps=3000, seed=42, max_queue=300) >>> sim.run() >>> sim.logs.traffic.astype(int) array([ 0, 0, 961, 342, 691, 0, 105]) >>> sim.avg_queues array([2.731 , 0.69633333, 0.22266667, 0.052 ])
- name = 'constant_regret'#
Name that can be used to list all non-abstract classes.
- stochastic_matching.simulator.constant_regret.cr_core(logs, arrivals, graph, n_steps, queue_size, scores, ready_edges, edge_queue, fading)[source]#
Jitted function for constant-regret policy.
- Parameters:
logs (
Logs
) – Monitored variables.arrivals (
Arrivals
) – Item arrival process.graph (
JitHyperGraph
) – Model graph.n_steps (
int
) – Number of arrivals to process.queue_size (
ndarray
) – Number of waiting items of each class.scores (
ndarray
) – Normalized edge rewards (<0 on taboo edges). Enables EGPD-like selection mechanism.ready_edges (
ndarray
) – Tells if given edge is actionable.edge_queue (
MultiQueue
) – Edges waiting to be matched.fading (callable) – Jitted function that drives reward relative weight.
- Return type:
None
Epsilon-Filtering policy#
- class stochastic_matching.simulator.e_filtering.EFiltering(model, base_policy='longest', forbidden_edges=None, rewards=None, epsilon=0.01, **base_policy_kwargs)[source]#
Epsilon-filtering policy where incoming items are tagged with a spin. A match on a forbidden edge requires at least one “+” item. In practice, the simulator works on an expanded graph with twice the initial number of nodes to represent “+” and “-” copies (a conceptual sketch of the tagging rule is given after the parameter list).
- Parameters:
model (
Model
) – Model to simulate.base_policy (
str
orSimulator
) – Type of simulator to instantiate. Cannot have mandatory extra-parameters (e.g. NOT ‘priority’).forbidden_edges (
list
orndarray
, optional) – Edges that should not be used.weights (
ndarray
, optional) – Target rewards on edges. If weights are given, the forbidden edges are computed to match the target (overrides forbidden_edges argument).epsilon (
float
, default=.01) – Proportion of “+” arrivals (lower means lower odds of selecting a forbidden edge).**base_policy_kwargs – Keyword arguments. Only universal parameters should be used (seed, max_queue, n_steps).
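To fix ideas, here is a minimal conceptual sketch of the tagging rule described above (an illustration under simplifying assumptions, not the actual implementation, which works on the expanded graph instead):

import numpy as np

rng = np.random.default_rng(42)
epsilon = 0.01

def draw_spin():
    # Each incoming item gets a "+" spin with probability epsilon, "-" otherwise.
    return '+' if rng.random() < epsilon else '-'

def match_allowed(edge, forbidden_edges, item_spins):
    # A forbidden edge may only be matched if at least one of its items carries a "+".
    return edge not in forbidden_edges or '+' in item_spins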
Examples
Consider the following diamond graph with injective vertex [1, 2, 3]:
>>> import stochastic_matching as sm >>> diamond = sm.CycleChain(rates=[1, 2, 2, 1])
Without any specific argument, epsilon-filtering acts as a regular longest policy:
>>> sim = EFiltering(diamond, n_steps=1000, seed=42) >>> sim.run() >>> sim.plogs Arrivals: [162 342 348 148] Traffic: [ 76 86 190 76 72] Queues: [[882 91 23 ... 0 0 0] [661 109 72 ... 0 0 0] [737 111 63 ... 0 0 0] [870 96 27 ... 0 0 0]] Steps done: 1000
Let us use epsilon-filtering by specifying forbidden_edges:
>>> sim = EFiltering(diamond, forbidden_edges=[0, 4], n_steps=1000, seed=42) >>> sim.run() >>> sim.plogs Arrivals: [162 342 348 148] Traffic: [ 1 158 189 147 1] Queues: [[571 44 61 ... 0 0 0] [560 37 52 ... 0 0 0] [529 66 67 ... 0 0 0] [537 62 59 ... 0 0 0]] Steps done: 1000
Switch to an FCFM policy:
>>> sim = EFiltering(diamond, base_policy='fcfm', forbidden_edges=[0, 4], n_steps=1000, seed=42) >>> sim.run() >>> sim.plogs Arrivals: [162 342 348 148] Traffic: [ 1 158 189 147 1] Queues: [[569 99 118 ... 0 0 0] [569 35 37 ... 0 0 0] [528 58 57 ... 0 0 0] [551 84 98 ... 0 0 0]] Steps done: 1000
Switch to virtual queue:
>>> sim = EFiltering(diamond, base_policy='virtual_queue', forbidden_edges=[0, 4], n_steps=1000, seed=42) >>> sim.run() >>> sim.plogs Arrivals: [162 342 348 148] Traffic: [ 0 157 186 144 1] Queues: [[311 93 110 ... 0 0 0] [108 120 111 ... 0 0 0] [ 67 113 76 ... 0 0 0] [174 142 137 ... 0 0 0]] Steps done: 1000
Stolyar’s example, to see the behavior on a hypergraph:
>>> stol = sm.Model(incidence=[[1, 0, 0, 0, 1, 0, 0], ... [0, 1, 0, 0, 1, 1, 1], ... [0, 0, 1, 0, 0, 1, 1], ... [0, 0, 0, 1, 0, 0, 1]], rates=[1.2, 1.5, 2, .8]) >>> rewards = [-1, -1, 1, 2, 5, 4, 7]
>>> sim = EFiltering(stol, base_policy='virtual_queue', rewards=rewards, n_steps=1000, epsilon=.0001, seed=42) >>> sim.run() >>> sim.plogs Arrivals: [216 282 371 131] Traffic: [ 0 0 311 76 213 0 55] Queues: [[ 30 114 181 ... 0 0 0] [ 22 64 117 ... 0 0 0] [ 24 590 217 ... 0 0 0] [995 5 0 ... 0 0 0]] Steps done: 1000 >>> int(sim.logs.traffic @ rewards) 1913 >>> sim.avg_queues array([3.577, 4.856, 1.65 , 0.005])
For comparison, the original EGPD policy:
>>> sim = sm.VirtualQueue(stol, rewards=rewards, n_steps=1000, seed=42, beta=.01) >>> sim.run() >>> sim.plogs Arrivals: [236 279 342 143] Traffic: [ 0 0 100 3 139 0 140] Queues: [[ 2 3 5 ... 0 0 0] [844 8 6 ... 0 0 0] [ 0 1 3 ... 0 0 0] [597 230 96 ... 0 0 0]] Steps done: 1000 >>> int(sim.logs.traffic @ rewards) 1781 >>> sim.avg_queues array([54.439, 1.597, 78.51 , 0.691])
- name = 'e_filtering'#
Name that can be used to list all non-abstract classes.
Tools for batch evaluations#
- class stochastic_matching.simulator.parallel.Iterator(parameter, values, name=None, process=None)[source]#
Provides an easy way to make a parameter vary.
- Parameters:
parameter (str) – Name of the parameter to vary.
values (iterable) – Values given to the parameter.
name (str, optional) – Name used in the logs (defaults to parameter).
process (callable, optional) – Function applied to each value to produce the actual parameter value (the raw value is kept in the logs).
- Returns:
Examples
Imagine one wants a parameter x2 to vary amongst squares of integers.
>>> iterator = Iterator('x2', [i**2 for i in range(4)]) >>> for kwarg, log in iterator: ... print(kwarg, log) {'x2': 0} {'x2': 0} {'x2': 1} {'x2': 1} {'x2': 4} {'x2': 4} {'x2': 9} {'x2': 9}
You can do the same thing by iterating over integers and applying a square function.
>>> iterator = Iterator('x2', range(4), 'x', lambda x: x**2) >>> for kwarg, log in iterator: ... print(kwarg, log) {'x2': 0} {'x': 0} {'x2': 1} {'x': 1} {'x2': 4} {'x': 2} {'x2': 9} {'x': 3}
- class stochastic_matching.simulator.parallel.Runner(metrics)[source]#
- Parameters:
metrics (
list
) – Metrics to extract.
- class stochastic_matching.simulator.parallel.SingleXP(name, iterator=None, **params)[source]#
Produce a single experiment, with possibly one variable parameter.
- class stochastic_matching.simulator.parallel.XP(name, iterator=None, **params)[source]#
Produce an experiment, with possibly one variable parameter. Addition can be used to concatenate experiments.
- stochastic_matching.simulator.parallel.cached(func)[source]#
Enables functions that take a long time to compute to store their output in a file.
If the original function is called with an additional parameter cache_name, it will look for a file named cache_name.pkl.gz. If that file exists, its content is returned; otherwise, the result is computed, stored in the file, and returned.
The additional parameter cache_path indicates where to look for the file (defaults to the current directory).
The additional parameter cache_overwrite forces the function to be recomputed even if the file exists. Defaults to False.
- Parameters:
func (callable) – Function to make cacheable.
- Returns:
The function with three additional arguments: cache_name, cache_path, cache_overwrite.
- Return type:
callable
Examples
>>> from tempfile import TemporaryDirectory as TmpDir >>> @cached ... def f(x): ... return x**3 >>> with TmpDir() as tmp: ... content_before = [f.name for f in Path(tmp).glob('*')] ... res = f(3, cache_name='cube', cache_path=Path(tmp)) ... content_after = [f.name for f in Path(tmp).glob('*')] ... res2 = f(3, cache_name='cube', cache_path=Path(tmp)) # This does not run f! >>> res 27 >>> res2 27 >>> content_before [] >>> content_after ['cube.pkl.gz']
- stochastic_matching.simulator.parallel.evaluate(xps, metrics=None, pool=None)[source]#
Evaluate some experiments. Results can be cached.
- Parameters:
- Returns:
Result of the experiment(s).
- Return type:
Examples
>>> import stochastic_matching as sm >>> import numpy as np >>> diamond = sm.CycleChain() >>> base = {'model': diamond, 'n_steps': 1000, 'seed': 42, 'rewards': [1, 2.9, 1, -1, 1]} >>> xp = XP('Diamond', simulator='longest', **base) >>> res = evaluate(xp, ['income', 'steps_done', 'regret', 'delay']) >>> for name, r in res.items(): ... print(name) ... for k, v in r.items(): ... print(f"{k}: {np.round(v, 4)}") Diamond income: [216 304 293 187] steps_done: 1000 regret: 0.088 delay: 0.1634 >>> def q0(simu): ... return simu.avg_queues[0] >>> res = evaluate(xp, ['flow', 'avg_queues', q0]) >>> res['Diamond']['flow'] array([1.19, 0.97, 0.97, 0.88, 0.99]) >>> res['Diamond']['avg_queues'] array([0.531, 0.214, 0.273, 0.616]) >>> res['Diamond']['q0'] np.float64(0.531) >>> xp1 = XP('e-filtering', simulator='e_filtering', **base, ... iterator=Iterator('epsilon', [.01, .1, 1], name='e')) >>> xp2 = XP(name='k-filtering', simulator='longest', forbidden_edges=True, ... iterator=Iterator('k', [0, 10, 100]), **base) >>> xp3 = XP(name='egpd', simulator='virtual_queue', ... iterator=Iterator('beta', [.01, .1, 1]), **base) >>> xp = sum([xp1, xp2, xp3]) >>> len(xp) 9 >>> import multiprocess as mp >>> with mp.Pool(processes=2) as p: ... res = evaluate(xp, ['regret', 'delay'], pool=p) >>> for name, r in res.items(): ... print(name) ... for k, v in r.items(): ... print(f"{k}: {np.array(v)}") e-filtering e: [0.01 0.1 1. ] regret: [0.002 0.017 0.103] delay: [1.0538 0.695 0.1952] k-filtering k: [ 0 10 100] regret: [ 8.8000000e-02 2.0000000e-03 -8.8817842e-16] delay: [0.1634 0.7342 1.3542] egpd beta: [0.01 0.1 1. ] regret: [0.003 0.043 0.076] delay: [9.2024 1.013 0.1888]
Metrics#
- class stochastic_matching.simulator.metrics.CCDF[source]#
Complementary Cumulative Distribution Function of queue size for each item class.
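As an illustration of this quantity, here is a small sketch computing the CCDF of a single class from a histogram of queue sizes (an assumption-based example; the numbers are borrowed from the Simulator example further down this page, where each row of the Queues logs counts the steps spent at each queue size):

import numpy as np

# Hypothetical histogram: hist[q] = number of steps with exactly q waiting items.
hist = np.array([118, 47, 26, 15, 14, 7, 1, 1])
ccdf = np.concatenate(([1.0], 1 - np.cumsum(hist) / hist.sum()))  # fraction of time with at least q items
print(np.round(ccdf, 4))
# Approximately [1, 0.4847, 0.2795, 0.1659, 0.1004, 0.0393, 0.0087, 0.0044, 0]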
- class stochastic_matching.simulator.metrics.Delay[source]#
Average waiting time (i.e. average total queue size divided by global arrival rate).
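With this definition, the delay can be recovered from the average queue sizes and the arrival rates, in the spirit of Little's law (a small illustration; the numbers mirror the Simulator example later on this page):

import numpy as np

avg_queues = np.array([1.083, 0.262, 0.074, 0.856])  # average queue size per class
rates = np.array([2, 2.1, 1.1, 1])                   # arrival rates per class
delay = avg_queues.sum() / rates.sum()               # average total queue / global arrival rate
print(round(delay, 4))  # approximately 0.3669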
- class stochastic_matching.simulator.metrics.Metric[source]#
Blueprint for extracting a metric from the simulator.
- class stochastic_matching.simulator.metrics.Regret[source]#
Given the matched items observed, the difference between the best possible matching of these items and the actual matching.
Notes
Regret depends on the assigned rewards.
Unmatched items are ignored to avoid side effects (e.g. when the optimal allocation is unstable because of negative weights).
First-Come First Served#
- class stochastic_matching.simulator.fcfm.FCFM(model, n_steps=1000000, seed=None, max_queue=1000)[source]#
Greedy Matching simulator derived from
Simulator
. When multiple choices are possible, the oldest item is chosen.
Examples
Let’s start with a working triangle. One can notice that the results are the same for all greedy simulators because there are never multiple choices in a triangle (at most one non-empty queue under a greedy policy).
>>> import stochastic_matching as sm >>> sim = FCFM(sm.Cycle(rates=[3, 4, 5]), n_steps=1000, seed=42, max_queue=10) >>> sim.run() >>> sim.plogs Arrivals: [287 338 375] Traffic: [125 162 213] Queues: [[838 104 41 13 3 1 0 0 0 0] [796 119 53 22 8 2 0 0 0 0] [640 176 92 51 24 9 5 3 0 0]] Steps done: 1000
Unstable diamond (simulation ends before completion due to drift).
>>> sim = FCFM(sm.CycleChain(rates=[1, 1, 1, 1]), n_steps=1000, seed=42, max_queue=10) >>> sim.run() >>> sim.plogs Arrivals: [85 82 85 86] Traffic: [34 42 7 41 36] Queues: [[126 70 22 26 29 12 23 15 10 5] [326 8 3 1 0 0 0 0 0 0] [321 12 4 1 0 0 0 0 0 0] [105 80 65 28 31 15 4 2 6 2]] Steps done: 338
A stable candy (but candies are not good for greedy policies).
>>> sim = FCFM(sm.HyperPaddle(rates=[1, 1, 1.5, 1, 1.5, 1, 1]), ... n_steps=1000, seed=42, max_queue=25) >>> sim.run() >>> sim.plogs Arrivals: [46 26 32 37 70 35 45] Traffic: [24 17 2 23 33 12 13] Queues: [[ 23 32 45 38 22 43 31 34 20 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [290 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [290 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [ 9 1 7 9 3 3 26 37 4 8 10 9 2 10 40 11 2 16 3 3 21 27 22 1 7] [212 49 22 5 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [233 41 6 7 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [231 33 16 4 6 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]] Steps done: 291
- name = 'fcfm'#
Name that can be used to list all non-abstract classes.
- stochastic_matching.simulator.fcfm.fcfm_core(logs, arrivals, graph, n_steps, queue_size, queues)[source]#
Jitted function for first-come, first-matched policy.
- Parameters:
logs (
Logs
) – Monitored variables.arrivals (
Arrivals
) – Item arrival process.graph (
JitHyperGraph
) – Model graph.n_steps (
int
) – Number of arrivals to process.queue_size (
ndarray
) – Number of waiting items of each class.queues (
MultiQueue
) – Waiting items of each class.
- Return type:
None
Other policies#
- class stochastic_matching.simulator.priority.Priority(model, weights, threshold=None, counterweights=None, **kwargs)[source]#
Greedy policy based on pre-determined preferences on edges.
A threshold can be specified to alter the weights if the queue sizes get too big (a conceptual sketch of this mechanism is given after the parameter list).
- Parameters:
model (
Model
) – Model to simulate.weights (
list
orndarray
) – Priorities associated to the edges.threshold (
int
, optional) – Limit on max queue size to apply the weight priority.counterweights (
list
orndarray
, optional) – Priority to use above the threshold (if not provided, the reversed weights are used).**kwargs – Keyword arguments.
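Here is a conceptual sketch of the threshold mechanism (an assumption-based illustration, not the library's actual selector): below the threshold, edges are ranked by weights; once some queue exceeds the threshold, the counterweights take over.

import numpy as np

def effective_weights(queue_size, weights, threshold, counterweights=None):
    # Illustration only: switch priorities when a queue gets too long.
    if counterweights is None:
        counterweights = -np.asarray(weights)  # reversed priorities by default (assumption)
    if threshold is not None and np.max(queue_size) > threshold:
        return np.asarray(counterweights)
    return np.asarray(weights)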
Examples
>>> import stochastic_matching as sm >>> fish = sm.KayakPaddle(m=4, l=0, rates=[4, 4, 3, 2, 3, 2]) >>> fish.run('priority', weights=[0, 2, 2, 0, 1, 1, 0], ... threshold=50, counterweights = [0, 0, 0, 1, 2, 2, 1], ... n_steps=10000, seed=42) True
These priorities are efficient at stabilizing the policy while avoiding edge 3.
>>> fish.simulation array([2.925 , 1.0404, 0.9522, 0. , 0.9504, 2.0808, 1.0044])
The last node is the pseudo-unstable node.
>>> fish.simulator.avg_queues[-1] np.float64(38.346) >>> np.round(np.mean(fish.simulator.avg_queues[:-1]), decimals=2) np.float64(0.75)
Choosing proper counterweights is important.
>>> fish.run('priority', weights=[0, 2, 2, 0, 1, 1, 0], ... threshold=50, ... n_steps=10000, seed=42) True >>> fish.simulation array([2.9232, 1.0422, 0.9504, 0.216 , 0.7344, 1.8666, 1.2186]) >>> fish.simulator.avg_queues[-1] np.float64(38.6016)
- name = 'priority'#
Name that can be used to list all non-abstract classes.
- stochastic_matching.simulator.priority.make_priority_selector(weights, threshold, counterweights)[source]#
Make a jitted edge selector based on priorities.
- class stochastic_matching.simulator.random_edge.RandomEdge(model, n_steps=1000000, seed=None, max_queue=1000)[source]#
Greedy Matching simulator derived from
Simulator
. When multiple choices are possible, one edge is chosen uniformly at random.
- Parameters:
model (
Model
) – Model to simulate.**kwargs – Keyword arguments.
Examples
Let’s start with a working triangle. One can notice that the results are not the same as for the other greedy simulators. This may seem odd because there are never multiple choices in a triangle (at most one non-empty queue under a greedy policy). The root cause is that the arrival process shares the same random generator as the random edge selection.
>>> import stochastic_matching as sm >>> triangle = sm.Cycle(rates=[3, 4, 5]) >>> sim = RandomEdge(triangle, n_steps=1000, seed=42, max_queue=11) >>> sim.run() >>> sim.plogs Arrivals: [276 342 382] Traffic: [118 158 224] Queues: [[865 92 32 10 1 0 0 0 0 0 0] [750 142 62 28 12 3 2 1 0 0 0] [662 164 73 36 21 7 10 12 8 5 2]] Steps done: 1000
Sanity check: results are unchanged if the graph is treated as a hypergraph.
>>> triangle.adjacency = None >>> sim = RandomEdge(triangle, n_steps=1000, seed=42, max_queue=11) >>> sim.run() >>> sim.plogs Arrivals: [276 342 382] Traffic: [118 158 224] Queues: [[865 92 32 10 1 0 0 0 0 0 0] [750 142 62 28 12 3 2 1 0 0 0] [662 164 73 36 21 7 10 12 8 5 2]] Steps done: 1000
An ill diamond graph (simulation ends before completion due to drift).
>>> sim = RandomEdge(sm.CycleChain(rates='uniform'), n_steps=1000, seed=42, max_queue=10) >>> sim.run() >>> sim.plogs Arrivals: [24 20 16 23] Traffic: [10 8 2 8 6] Queues: [[ 9 4 7 13 5 5 10 24 5 1] [83 0 0 0 0 0 0 0 0 0] [76 4 3 0 0 0 0 0 0 0] [13 8 15 19 14 8 2 1 2 1]] Steps done: 83 >>> sim.flow array([0.48192771, 0.38554217, 0.09638554, 0.38554217, 0.28915663])
A working candy (but candies are not good for greedy policies).
>>> sim = RandomEdge(sm.HyperPaddle(rates=[1, 1, 1.5, 1, 1.5, 1, 1]), n_steps=1000, seed=42, max_queue=25) >>> sim.run() >>> sim.plogs Arrivals: [47 43 67 50 92 47 54] Traffic: [24 22 19 28 35 19 26] Queues: [[305 77 15 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [305 53 32 9 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [341 36 18 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [ 7 6 9 13 7 34 41 47 23 9 2 3 2 7 6 2 12 46 16 14 39 43 2 5 5] [275 51 38 30 6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [324 50 21 4 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [305 47 23 9 2 3 5 5 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]] Steps done: 400
Note that you can reset the simulator before starting another run.
>>> sim.reset() >>> sim.run() >>> sim.plogs Arrivals: [47 43 67 50 92 47 54] Traffic: [24 22 19 28 35 19 26] Queues: [[305 77 15 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [305 53 32 9 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [341 36 18 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [ 7 6 9 13 7 34 41 47 23 9 2 3 2 7 6 2 12 46 16 14 39 43 2 5 5] [275 51 38 30 6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [324 50 21 4 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [305 47 23 9 2 3 5 5 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]] Steps done: 400
You can display the distribution of queue sizes as a ccdf:
>>> sim.show_ccdf() <Figure size ...x... with 1 Axes>
- name = 'random_edge'#
Name that can be used to list all non-abstract classes.
- stochastic_matching.simulator.random_edge.random_edge_selector(graph, queue_size, node)[source]#
Selects a feasible edge at random.
- class stochastic_matching.simulator.random_item.RandomItem(model, n_steps=1000000, seed=None, max_queue=1000)[source]#
Greedy matching simulator derived from
Simulator
. When multiple choices are possible, an edge is chosen proportionally to the sizes of the queues (or the sum of the queues for hyperedges).
- Parameters:
model (
Model
) – Model to simulate.**kwargs – Keyword arguments.
Examples
Let’s start with a working triangle.
>>> import stochastic_matching as sm >>> sim = RandomItem(sm.Cycle(rates=[3, 4, 5]), n_steps=1000, seed=42, max_queue=11) >>> sim.run() >>> sim.plogs Arrivals: [276 342 382] Traffic: [118 158 224] Queues: [[865 92 32 10 1 0 0 0 0 0 0] [750 142 62 28 12 3 2 1 0 0 0] [662 164 73 36 21 7 10 12 8 5 2]] Steps done: 1000
An ill Braess graph (simulation ends before completion due to drift).
>>> sim = RandomItem(sm.CycleChain(rates='uniform'), n_steps=1000, seed=42, max_queue=10) >>> sim.run() >>> sim.plogs Arrivals: [25 17 13 12] Traffic: [10 6 2 5 5] Queues: [[ 9 4 7 7 5 4 9 8 11 3] [67 0 0 0 0 0 0 0 0 0] [60 4 3 0 0 0 0 0 0 0] [15 18 16 9 4 5 0 0 0 0]] Steps done: 67
A working candy (but candies are not good for greedy policies).
>>> sim = RandomItem(sm.HyperPaddle(rates=[1, 1, 1.5, 1, 1.5, 1, 1]), n_steps=1000, seed=42, max_queue=25) >>> sim.run() >>> sim.plogs Arrivals: [112 106 166 105 206 112 115] Traffic: [66 46 39 61 64 51 81] Queues: [[683 125 63 25 8 13 2 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [659 130 65 34 32 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [757 89 36 19 12 9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [ 23 10 23 30 28 56 56 109 81 85 114 55 22 38 17 36 32 30 7 4 3 5 13 20 25] [673 123 73 28 19 6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [769 112 33 4 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [708 122 55 17 4 5 5 5 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]] Steps done: 922
- name = 'random_item'#
Name that can be used to list all non-abstract classes.
- stochastic_matching.simulator.random_item.random_item_selector(graph, queue_size, node)[source]#
Selects a feasible edge at random proportionally to the number of waiting items.
Generic Class#
The root abstract class for all simulators.
- class stochastic_matching.simulator.simulator.Simulator(model, n_steps=1000000, seed=None, max_queue=1000)[source]#
Abstract class that describes the generic behavior of matching simulators. See subclasses for detailed examples.
- Parameters:
model (
Model
) – Model to simulate.n_steps (
int
, optional) – Number of arrivals to simulate.seed (
int
, optional) – Seed of the random generator.max_queue (
int
) – Max queue size. Necessary for speed and for detecting instability. For stable systems very close to the instability border, the max_queue may be reached.
- internal#
Inner variables. Defaults to arrivals, graph, queue_size, n_steps. Subclasses can add other variables.
- Type:
- logs#
Monitored variables (traffic on edges, queue size distribution, number of steps achieved).
- Type:
Examples
>>> import stochastic_matching as sm >>> sim = sm.FCFM(sm.CycleChain(rates=[2, 2.1, 1.1, 1]), seed=42, n_steps=1000, max_queue=8) >>> sim Simulator of type fcfm.
Use
run()
to make the simulation.
>>> sim.run()
Raw results are stored in logs. The plogs property gives a pretty print of the logs.
>>> sim.plogs Arrivals: [67 80 43 39] Traffic: [43 17 14 23 12] Queues: [[118 47 26 15 14 7 1 1] [188 25 13 3 0 0 0 0] [217 8 3 1 0 0 0 0] [125 50 31 11 9 3 0 0]] Steps done: 229
Several properties are provided to compute various indicators.
>>> sim.avg_queues array([1.08296943, 0.26200873, 0.07423581, 0.8558952 ])
>>> sim.delay np.float64(0.3669530919847866)
>>> sim.ccdf array([[1. , 0.48471616, 0.27947598, 0.16593886, 0.10043668, 0.03930131, 0.00873362, 0.00436681, 0. ], [1. , 0.1790393 , 0.069869 , 0.01310044, 0. , 0. , 0. , 0. , 0. ], [1. , 0.05240175, 0.01746725, 0.00436681, 0. , 0. , 0. , 0. , 0. ], [1. , 0.45414847, 0.23580786, 0.10043668, 0.05240175, 0.01310044, 0. , 0. , 0. ]]) >>> sim.flow array([1.16419214, 0.46026201, 0.3790393 , 0.62270742, 0.32489083])
You can also draw the average queue sizes or the CCDF of the queues.
>>> fig = sim.show_average_queues() >>> fig <Figure size ...x... with 1 Axes>
>>> fig = sim.show_average_queues(indices=[0, 3, 2], sort=True, as_time=True) >>> fig <Figure size ...x... with 1 Axes>
>>> fig = sim.show_ccdf() >>> fig <Figure size ...x... with 1 Axes>
>>> fig = sim.show_ccdf(indices=[0, 3, 2], sort=True, strict=True) >>> fig <Figure size ...x... with 1 Axes>
- name = None#
Name that can be used to list all non-abstract classes.
- property plogs#
Print logs.
- Return type:
None
- show_average_queues(indices=None, sort=False, as_time=False)[source]#
- Parameters:
- Returns:
A figure of the average queue sizes.
- Return type:
- stochastic_matching.simulator.simulator.core_simulator(logs, arrivals, graph, n_steps, queue_size, selector)[source]#
Generic jitted function for queue-size based policies.
- Parameters:
logs (
Logs
) – Monitored variables.arrivals (
Arrivals
) – Item arrival process.graph (
JitHyperGraph
) – Model graph.n_steps (
int
) – Number of arrivals to process.queue_size (
ndarray
) – Number of waiting items of each class.selector (callable) – Jitted function that selects edge (or not) based on graph, queue_size, and arriving node.
- Return type:
None
Extended Class#
Abstract extension designed for reaching polytope vertices.
- class stochastic_matching.simulator.extended.ExtendedSimulator(model, rewards=None, alt_rewards=None, beta=None, forbidden_edges=None, k=None, **kwargs)[source]#
This extended (abstract) class of
Simulator
accepts additional parameters designed to reach vertices of the stability polytope. How exactly these parameters are used is determined by its subclasses.
- Parameters:
model (
Model
) – Model to simulate.rewards (
ndarray
orlist
, optional) – Rewards associated to edges.alt_rewards (
str
, optional) – Possible reward transformation.beta (
float
, optional) – Reward factor: each edge is given a default score 1/beta (small beta means higher reward impact). Setting beta triggers reward-based scoring.forbidden_edges (
list
orndarray
, orbool
, optional) – Edges that should not be used. If set to True, the list of forbidden edges is computed from the rewards.k (
int
, optional) – Limit on queue size to apply edge interdiction (enforce stability on injective-only vertices).
Internal tools#
- class stochastic_matching.simulator.arrivals.Arrivals(*args, **kwargs)[source]#
Examples
>>> arrivals = Arrivals([2 ,2, 3, 1], seed=42) >>> arrivals.prob array([1. , 1. , 1. , 0.5]) >>> arrivals.alias.astype(int) array([0, 0, 0, 2]) >>> from collections import Counter >>> Counter([arrivals.draw() for _ in range(800)]) Counter({2: 291, 1: 210, 0: 208, 3: 91})
- stochastic_matching.simulator.arrivals.create_prob_alias(mu)[source]#
Prepare vectors to draw from a distribution with the alias method.
Based on https://www.keithschwarz.com/darts-dice-coins/.
- Parameters:
- Returns:
Examples
>>> probas, aliases = create_prob_alias([2 ,2, 3, 1]) >>> probas array([1. , 1. , 1. , 0.5]) >>> aliases.astype(int) array([0, 0, 0, 2])
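The tables above can then be used to draw samples; here is a minimal sketch of the alias draw (an illustration only, not the library's jitted code):

import numpy as np

rng = np.random.default_rng(42)
probas = np.array([1., 1., 1., 0.5])
aliases = np.array([0, 0, 0, 2])

def draw():
    i = rng.integers(len(probas))  # pick a column uniformly at random
    # keep the column with probability probas[i], otherwise jump to its alias
    return i if rng.random() < probas[i] else aliases[i]

counts = np.bincount([draw() for _ in range(800)], minlength=4)
# Frequencies should be roughly proportional to the weights [2, 2, 3, 1].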
- class stochastic_matching.simulator.graph.JitHyperGraph(*args, **kwargs)[source]#
Jit compatible view of a (hyper)graph.
- Parameters:
- stochastic_matching.simulator.graph.make_jit_graph(model)[source]#
- Parameters:
model (
Model
) – Model that contains an incidence matrix.- Returns:
Jitted graph.
- Return type:
- class stochastic_matching.simulator.multiqueue.FullMultiQueue(*args, **kwargs)[source]#
A simple multi-class FCFM queue manager that supports adding to the tail and popping from the head over multiple item classes. Each item must be characterized by a UNIQUE non-negative int (typically a timestamp).
- Parameters:
n (
int
) – Number of classes.
Examples
Let’s populate a multiqueue.
>>> queues = FullMultiQueue(4) >>> for cl, stamp in [(0, 2), (0, 5), (1, 4), (3, 0), (3, 1), (3, 6)]: ... queues.add(cl, stamp)
Oldest items per class (infinity indicates an empty queue):
>>> [queues.oldest(i) for i in range(4)] [2, 4, 9223372036854775807, 0]
Let’s remove some items. Note that pop returns nothing.
>>> [queues.pop(i) for i in [0, 1, 3, 3]] [None, None, None, None]
Oldest items per class (infinity indicates an empty queue):
>>> [queues.oldest(i) for i in range(4)] [5, 9223372036854775807, 9223372036854775807, 6]
Note that trying to pop from an empty queue will raise an error:
>>> queues.pop(1) Traceback (most recent call last): ... KeyError: 9223372036854775807
- class stochastic_matching.simulator.multiqueue.MultiQueue(*args, **kwargs)[source]#
A cruder, faster implementation of FullMultiQueue. Queues must be explicitly bounded on initialisation.
- Parameters:
n (
int
) – Number of classes.
Examples
Let’s populate a multiqueue and play with it.
>>> queues = MultiQueue(4, max_queue=4) >>> _ = [queues.add(i, s) for i, s in [(0, 2), (0, 5), (1, 4), (3, 0), (3, 1), (3, 6)]] >>> [int(queues.size(i)) for i in range(4)] [2, 1, 0, 3]
The age of the oldest element of an empty queue is infinity.
>>> [int(queues.oldest(i)) for i in range(4)] [2, 4, 9223372036854775807, 0] >>> queues.add(3, 10) >>> [int(queues.size(i)) for i in range(4)] [2, 1, 0, 4]
Trying to go above the max queue raises an error (item is not added, queue stays viable):
>>> queues.add(3, 10) Traceback (most recent call last): ... OverflowError: Max queue size reached.
So does trying to pop from an empty queue (and the queue stays viable).
>>> queues.pop(2) Traceback (most recent call last): ... ValueError: Try to pop an empty queue
Let’s play a bit more.
>>> _ = [queues.pop(i) for i in [0, 1, 3, 3]] >>> [int(queues.size(i)) for i in range(4)] [1, 0, 0, 2] >>> [int(queues.oldest(i)) for i in range(4)] [5, 9223372036854775807, 9223372036854775807, 6] >>> for a in range(22, 30): ... print(f"Add {a} to class 3.") ... queues.add(3, a) ... print(f"Oldest of class 3 is {queues.oldest(3)}, we pop it.") ... queues.pop(3) Add 22 to class 3. Oldest of class 3 is 6, we pop it. Add 23 to class 3. Oldest of class 3 is 10, we pop it. Add 24 to class 3. Oldest of class 3 is 22, we pop it. Add 25 to class 3. Oldest of class 3 is 23, we pop it. Add 26 to class 3. Oldest of class 3 is 24, we pop it. Add 27 to class 3. Oldest of class 3 is 25, we pop it. Add 28 to class 3. Oldest of class 3 is 26, we pop it. Add 29 to class 3. Oldest of class 3 is 27, we pop it. >>> [int(queues.size(i)) for i in range(4)] [1, 0, 0, 2] >>> [int(queues.oldest(i)) for i in range(4)] [5, 9223372036854775807, 9223372036854775807, 28]