Model#
Main module of the package.
- class stochastic_matching.model.Kernel(incidence, tol=1e-10)[source]#
- Parameters:
Examples
>>> import stochastic_matching.graphs as sm
>>> paw = sm.Tadpole()
>>> kernel = Kernel(paw.incidence)
The inverse is:
>>> kernel.inverse
array([[ 0.5,  0.5, -0.5,  0.5],
       [ 0.5, -0.5,  0.5, -0.5],
       [-0.5,  0.5,  0.5, -0.5],
       [ 0. ,  0. ,  0. ,  1. ]])
We can check that it is indeed the inverse.
>>> i = paw.incidence @ kernel.inverse
>>> clean_zeros(i)
>>> i
array([[1., 0., 0., 0.],
       [0., 1., 0., 0.],
       [0., 0., 1., 0.],
       [0., 0., 0., 1.]])
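`Kernel.inverse` here appears to be the Moore-Penrose pseudo-inverse of the incidence matrix; for the paw, whose incidence matrix is square and invertible, plain numpy reproduces the values above. The incidence matrix below is an assumption matching the paw layout (triangle on nodes 0-2 plus the pendant edge (2, 3)):

```python
import numpy as np

# Assumed incidence of the paw: edges (0,1), (0,2), (1,2), (2,3)
incidence = np.array([[1, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 1, 1],
                      [0, 0, 0, 1]])

# The matrix is square and invertible, so pinv coincides with the inverse
inverse = np.linalg.pinv(incidence)
print(inverse.round(2))
```

The printed matrix matches `kernel.inverse` above.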
Right kernel is trivial:
>>> kernel.right.shape[0]
0
Left kernel is trivial:
>>> kernel.left.shape[1]
0
Graph is bijective:
>>> kernel.type
'Bijective'
As the graph is simple and connected, there is a more accurate description of the status:
>>> status_names_simple_connected[kernel.status]
'Non-bipartite monocyclic'
A summary:
>>> kernel
Kernels of a graph with 4 nodes and 4 edges.
Node dimension is 0. Edge dimension is 0
Type: Bijective
Node kernel: []
Edge kernel: []
Now consider a bipartite version, the banner graph:
>>> banner = sm.Tadpole(m=4)
>>> kernel = Kernel(banner.incidence)
The pseudo-inverse is:
>>> kernel.inverse
array([[ 0.35,  0.4 , -0.15, -0.1 ,  0.1 ],
       [ 0.45, -0.2 , -0.05,  0.3 , -0.3 ],
       [-0.15,  0.4 ,  0.35, -0.1 ,  0.1 ],
       [-0.05, -0.2 ,  0.45,  0.3 , -0.3 ],
       [-0.2 ,  0.2 , -0.2 ,  0.2 ,  0.8 ]])
We can check that it is indeed not exactly the inverse.
>>> i = banner.incidence @ kernel.inverse
>>> i
array([[ 0.8,  0.2, -0.2,  0.2, -0.2],
       [ 0.2,  0.8,  0.2, -0.2,  0.2],
       [-0.2,  0.2,  0.8,  0.2, -0.2],
       [ 0.2, -0.2,  0.2,  0.8,  0.2],
       [-0.2,  0.2, -0.2,  0.2,  0.8]])
Right kernel is not trivial because of the even cycle:
>>> kernel.right.shape[0]
1
>>> kernel.right
array([[ 0.5, -0.5, -0.5,  0.5,  0. ]])
Left kernel is not trivial because of the bipartite degeneracy:
>>> kernel.left.shape[1]
1
>>> kernel.left
array([[ 0.4472136],
       [-0.4472136],
       [ 0.4472136],
       [-0.4472136],
       [ 0.4472136]])
Status is nonjective (neither injective nor surjective):
>>> kernel.type
'Nonjective'
As the graph is simple and connected, there is a more accurate description of the status:
>>> status_names_simple_connected[kernel.status]
'Bipartite with cycle(s)'
Consider now the diamond graph, which is surjective (n < m, non-bipartite).
>>> diamond = sm.CycleChain()
>>> kernel = Kernel(diamond.incidence)
The inverse is:
>>> kernel.inverse
array([[ 0.5 ,  0.25, -0.25,  0.  ],
       [ 0.5 , -0.25,  0.25,  0.  ],
       [-0.5 ,  0.5 ,  0.5 , -0.5 ],
       [ 0.  ,  0.25, -0.25,  0.5 ],
       [ 0.  , -0.25,  0.25,  0.5 ]])
We can check that it is indeed the inverse.
>>> i = diamond.incidence @ kernel.inverse
>>> clean_zeros(i)
>>> i
array([[1., 0., 0., 0.],
       [0., 1., 0., 0.],
       [0., 0., 1., 0.],
       [0., 0., 0., 1.]])
There is a right kernel:
>>> kernel.right.shape[0]
1
>>> kernel.right
array([[ 0.5, -0.5,  0. , -0.5,  0.5]])
Normalized version:
>>> kernel.alt_right
array([[ 1., -1.,  0., -1.,  1.]])
The left kernel is trivial:
>>> kernel.left.shape[1]
0
The diamond is surjective-only:
>>> kernel.type
'Surjective-only'
As the graph is simple and connected, there is a more accurate description of the status:
>>> status_names_simple_connected[kernel.status]
'Non-bipartite polycyclic'
Consider now a star graph, injective (tree).
>>> star = sm.Star()
>>> kernel = Kernel(star.incidence)
The inverse is:
>>> kernel.inverse
array([[ 0.25,  0.75, -0.25, -0.25],
       [ 0.25, -0.25,  0.75, -0.25],
       [ 0.25, -0.25, -0.25,  0.75]])
We can check that it is indeed the left inverse.
>>> i = kernel.inverse @ star.incidence
>>> clean_zeros(i)
>>> i
array([[1., 0., 0.],
       [0., 1., 0.],
       [0., 0., 1.]])
The right kernel is trivial:
>>> kernel.right.shape[0]
0
The left kernel shows the bipartite behavior:
>>> kernel.left.shape[1]
1
>>> kernel.left
array([[-0.5],
       [ 0.5],
       [ 0.5],
       [ 0.5]])
The star is injective-only:
>>> kernel.type
'Injective-only'
As the graph is simple and connected, there is a more accurate description of the status:
>>> status_names_simple_connected[kernel.status]
'Tree'
Next, a surjective hypergraph:
>>> clover = sm.Fan()
>>> kernel = Kernel(clover.incidence)
Incidence matrix dimensions:
>>> clover.incidence.shape
(9, 10)
The inverse dimensions:
>>> kernel.inverse.shape
(10, 9)
We can check that it is exactly the inverse, because there was no dimensionality loss.
>>> i = clover.incidence @ kernel.inverse
>>> clean_zeros(i)
>>> i
array([[1., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 1., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 1., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 1., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 1., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 1., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 1., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 1., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 1.]])
Right kernel is 1-dimensional:
>>> kernel.right.shape[0]
1
Left kernel is trivial.
>>> kernel.left.shape[1]
0
Status:
>>> kernel.type
'Surjective-only'
Lastly, observe a bipartite hypergraph (in the sense that its left kernel is non-trivial).
>>> clover = sm.Fan(cycle_size=4)
>>> kernel = Kernel(clover.incidence)
Incidence matrix dimensions:
>>> clover.incidence.shape
(12, 13)
The inverse dimensions:
>>> kernel.inverse.shape
(13, 12)
We can check that it is not exactly the inverse.
>>> (clover.incidence @ kernel.inverse)[:4, :4]
array([[ 0.83333333,  0.16666667, -0.16666667,  0.16666667],
       [ 0.16666667,  0.83333333,  0.16666667, -0.16666667],
       [-0.16666667,  0.16666667,  0.83333333,  0.16666667],
       [ 0.16666667, -0.16666667,  0.16666667,  0.83333333]])
Right kernel is 3-dimensional:
>>> kernel.right.shape[0]
3
Left kernel is 2-dimensional (this is a change compared to simple graphs, where the left kernel dimension of a connected component is at most 1).
>>> kernel.left.shape[1]
2
Status:
>>> kernel.type
'Nonjective'
- class stochastic_matching.model.Model(incidence=None, adjacency=None, rates=None, names=None, tol=1e-07)[source]#
Main class to manipulate stochastic matching models.
- Parameters:
  incidence (ndarray or list, optional) – Incidence matrix. If used, the graph will be considered as a hypergraph.
  adjacency (ndarray or list, optional) – Adjacency matrix. If used, the graph will be considered as a simple graph.
  rates (ndarray or list or str, optional) – Arrival rates. You can use a specific rate vector or list, or 'uniform' / 'proportional' for uniform or degree-proportional allocation. Defaults to 'proportional', which makes the problem stabilizable if the graph is surjective.
  names (list of str or 'alpha', optional) – List of node names (e.g. for display).
  tol (float, optional) – Values of absolute value lower than tol are set to 0.
Examples
The following examples are about stability:
Is a triangle that checks triangular inequality stable?
>>> import stochastic_matching.graphs as sm
>>> triangle = sm.Cycle(rates="uniform")
>>> triangle.stabilizable
True
>>> triangle.kernel.type
'Bijective'
We can look at the base flow (based on Moore-Penrose inverse by default).
>>> triangle.base_flow
array([0.5, 0.5, 0.5])
As the graph is bijective, all optimizations will yield the same flow.
>>> triangle.incompressible_flow()
array([0.5, 0.5, 0.5])
>>> triangle.optimize_edge(0)
array([0.5, 0.5, 0.5])
What if the triangular inequality does not hold?
>>> triangle.rates = [1, 3, 1]
>>> triangle.stabilizable
False
We can look at the base flow (based on Moore-Penrose inverse).
>>> triangle.base_flow
array([ 1.5, -0.5,  1.5])
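The base flow is a solution of the conservation law: the flows on the edges around each node must sum to that node's arrival rate. For the triangle this is a square linear system, and solving it by hand with numpy reproduces the negative entry above (the incidence matrix below is an assumption matching the triangle layout, edges (0,1), (0,2), (1,2)):

```python
import numpy as np

# Assumed incidence of the triangle: edges (0,1), (0,2), (1,2)
incidence = np.array([[1, 1, 0],
                      [1, 0, 1],
                      [0, 1, 1]])

# Conservation law: incidence @ flow = rates
flow = np.linalg.solve(incidence, np.array([1, 3, 1]))
print(flow)  # the middle edge must carry a negative flow
```

A negative entry means no non-negative solution exists with these rates, which is why the model is not stabilizable.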
Now a bipartite example.
>>> banner = sm.Tadpole(m=4, rates='proportional')
Notice that we have a perfectly working solution with respect to conservation law.
>>> banner.base_flow
array([1., 1., 1., 1., 1.])
However, the left kernel is not trivial.
>>> banner.kernel.left
array([[ 1],
       [-1],
       [ 1],
       [-1],
       [ 1]])
As a consequence, stability is False.
>>> banner.stabilizable
False
>>> banner.kernel.type
'Nonjective'
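To stress the point: the conservation law itself is perfectly balanced here, yet the non-trivial left kernel alone forbids stability. A small numpy check (the degree-proportional rates below are an assumption consistent with the all-ones base flow above):

```python
import numpy as np

# Assumed degree-proportional rates for the banner (4-cycle plus a pendant)
rates = np.array([2, 2, 2, 3, 1])
left = np.array([1, -1, 1, -1, 1])  # left kernel vector from above

# The two bipartite classes receive the same total rate...
print(left @ rates)  # 0
# ...yet the bipartite structure (non-trivial left kernel) rules out stability.
```

So a zero inner product between left kernel and rates is necessary for the conservation law to have a solution, but not sufficient for stabilizability.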
Note that the base flow can be negative even if there is a positive solution.
>>> diamond = sm.CycleChain(rates=[5, 6, 2, 1])
>>> diamond.base_flow
array([ 3.5,  1.5,  1. ,  1.5, -0.5])
>>> diamond.stabilizable
True
>>> diamond.maximin.round(decimals=6)
array([4.5, 0.5, 1. , 0.5, 0.5])
>>> diamond.incompressible_flow()
array([4., 0., 1., 0., 0.])
>>> diamond.kernel.type
'Surjective-only'
- property adjacency#
Adjacency matrix of the graph.
Setting adjacency treats the graph as simple.
- Type:
- edge_to_kernel(edge)[source]#
- Parameters:
  edge (ndarray) – A flow vector in edge coordinates.
- Returns:
  The same flow vector in kernel coordinates, based on the current base flow and right kernel.
- Return type:
Examples
Consider the codomino graph, whose kernel contains a kayak paddle.
>>> import stochastic_matching.graphs as sm
>>> codomino = sm.Codomino(rates=[3, 12, 3, 3, 12, 3])
Default seeds of the codomino:
>>> codomino.seeds
[5, 7]
Let us use other seeds.
>>> codomino.seeds = [0, 4]
>>> codomino.kernel.right
array([[ 1, -1,  1, -2,  0,  1, -1,  1],
       [ 0,  0, -1,  1,  1, -1,  0,  0]])
Consider the Moore-Penrose flow and the maximin flow.
>>> codomino.moore_penrose
array([3., 0., 3., 6., 0., 3., 0., 3.])
>>> codomino.maximin
array([2., 1., 1., 9., 1., 1., 1., 2.])
As the Moore-Penrose flow is the default base flow, its kernel coordinates are null.
>>> codomino.edge_to_kernel(codomino.moore_penrose)
array([0., 0.])
As for maximin, one can check that the following kernel coordinates transform Moore-Penrose into it:
>>> codomino.edge_to_kernel(codomino.maximin)
array([-1.,  1.])
If we change the base flow to maximin, we will see the coordinates shifted by (1, -1):
>>> codomino.base_flow = codomino.maximin
>>> codomino.edge_to_kernel(codomino.moore_penrose)
array([ 1., -1.])
>>> codomino.edge_to_kernel(codomino.maximin)
array([0., 0.])
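Behind the scenes, `edge_to_kernel` plausibly projects the difference between the flow and the base flow onto the right kernel, then rectifies with the inverse Gram matrix (cf. `kernel_inverse` below). This sketch, using the kernel basis for seeds [0, 4] and the two flows copied from the doctests, reproduces the coordinates (-1, 1):

```python
import numpy as np

# Right kernel for seeds [0, 4] and the two flows, copied from the doctests
K = np.array([[ 1, -1,  1, -2,  0,  1, -1,  1],
              [ 0,  0, -1,  1,  1, -1,  0,  0]])
base = np.array([3., 0., 3., 6., 0., 3., 0., 3.])   # Moore-Penrose flow
flow = np.array([2., 1., 1., 9., 1., 1., 1., 2.])   # maximin flow

# Project the difference onto the kernel, then rectify with the Gram inverse
G = K @ K.T
alpha = (K @ (flow - base)) @ np.linalg.inv(G)
print(alpha)
```

The printed coordinates match `edge_to_kernel(codomino.maximin)` above; this is a sketch of a plausible implementation, not the package's actual code.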
- gentle_rewards(rewards)[source]#
- Parameters:
  rewards (ndarray or list) – Rewards associated to each edge.
- Returns:
  Rewards with negative values on taboo edges, positive values everywhere else. Absolute values are degree-proportional.
- Return type:
Examples
>>> import stochastic_matching as sm
>>> diamond = sm.CycleChain(rates=[4, 5, 2, 1])
First edge provides no reward (bijective vertex).
>>> diamond.gentle_rewards([0, 1, 1, 1, 1])
array([ 2,  2,  2,  2, -2])
First edge provides more reward (injective-only vertex).
>>> diamond.gentle_rewards([2, 1, 1, 1, 1])
array([ 2, -2,  2, -2,  2])
On bijective graphs, all edges are OK.
>>> paw = sm.Tadpole()
>>> paw.gentle_rewards([6, 3, 1, 2])
array([2, 2, 2, 2])
Last example: the hypergraph from Nazari & Stolyar.
>>> ns = sm.NS19(rates=[1.2, 1.5, 2, 0.8])
>>> ns.gentle_rewards([-1, -1, 1, 2, 5, 4, 7])
array([-1, -1,  1,  1,  2, -2,  3])
- property incidence#
Incidence matrix of the graph.
Setting incidence treats the graph as hypergraph.
- Type:
- property incidence_csc#
Incidence matrix of the graph in csc format.
- Type:
- property incidence_csr#
Incidence matrix of the graph in csr format.
- Type:
- incompressible_flow()[source]#
Finds the minimal flow that must pass through each edge. This is currently done in a brute-force way by minimizing every edge.
- Returns:
Unavoidable flow.
- Return type:
- kernel_to_edge(alpha)[source]#
- Parameters:
  alpha (ndarray or list) – A flow vector in kernel coordinates.
- Returns:
  The same flow vector in edge coordinates, based on the current base flow and right kernel.
- Return type:
Examples
Consider the codomino graph, whose kernel contains a kayak paddle.
>>> import stochastic_matching.graphs as sm
>>> codomino = sm.Codomino(rates=[3, 12, 3, 3, 12, 3])
>>> codomino.kernel.right
array([[ 0,  0,  1, -1, -1,  1,  0,  0],
       [ 1, -1,  0, -1,  1,  0, -1,  1]])
Consider the Moore-Penrose and the maximin flows.
>>> codomino.moore_penrose
array([3., 0., 3., 6., 0., 3., 0., 3.])
>>> codomino.maximin
array([2., 1., 1., 9., 1., 1., 1., 2.])
As the Moore-Penrose flow is the base flow, it is (0, 0) in kernel coordinates.
>>> codomino.kernel_to_edge([0, 0])
array([3., 0., 3., 6., 0., 3., 0., 3.])
As for maximin, (-2, -1) seems to be its kernel coordinates.
>>> codomino.kernel_to_edge([-2, -1])
array([2., 1., 1., 9., 1., 1., 1., 2.])
>>> np.allclose(codomino.kernel_to_edge([-2, -1]), codomino.maximin)
True
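`kernel_to_edge` plausibly amounts to adding a linear combination of the right kernel vectors to the base flow. A self-contained numpy check, with the default right kernel and Moore-Penrose flow copied from the doctests above:

```python
import numpy as np

# Default right kernel of the codomino and the Moore-Penrose base flow
K = np.array([[ 0,  0,  1, -1, -1,  1,  0,  0],
              [ 1, -1,  0, -1,  1,  0, -1,  1]])
base = np.array([3., 0., 3., 6., 0., 3., 0., 3.])

# Sketch of kernel_to_edge: base flow plus a kernel combination
flow = base + np.array([-2, -1]) @ K
print(flow)  # the maximin flow
```

This is a sketch of a plausible implementation, not the package's actual code.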
If we change the kernel space, the kernel coordinates change as well.
>>> codomino.seeds = [0, 4]
>>> codomino.kernel.right
array([[ 1, -1,  1, -2,  0,  1, -1,  1],
       [ 0,  0, -1,  1,  1, -1,  0,  0]])
>>> codomino.kernel_to_edge([0, 0])
array([3., 0., 3., 6., 0., 3., 0., 3.])
>>> codomino.kernel_to_edge([-2, -1])
array([ 1.,  2.,  2.,  9., -1.,  2.,  2.,  1.])
>>> np.allclose(codomino.kernel_to_edge([-2, -1]), codomino.maximin)
False
- property maximin#
solution of the conservation equation that maximizes the minimal flow over all edges.
- Type:
- property moore_penrose#
Solution of the conservation equation obtained using the Moore-Penrose inverse.
- Type:
- property names#
list of node names (e.g. for display).
If set to “alpha”, automatic alphabetic labeling is used. If set to None, numeric labeling is used.
- normalize_rewards(rewards)[source]#
- Parameters:
  rewards (ndarray or list) – Rewards associated to each edge.
- Returns:
  For bijective cases, rewards with negative values on taboo edges, null values everywhere else.
- Return type:
Examples
>>> import stochastic_matching as sm
>>> diamond = sm.CycleChain(rates=[4, 5, 2, 1])
First edge provides no reward (bijective vertex).
>>> diamond.normalize_rewards([0, 1, 1, 1, 1])
array([ 0.,  0.,  0.,  0., -1.])
First edge provides more reward (injective-only vertex).
>>> diamond.normalize_rewards([2, 1, 1, 1, 1])
array([ 0.,  0.,  0., -1.,  0.])
Note
Reward normalization can miss a taboo edge in the injective-only case.
On bijective graphs, all edges are OK.
>>> paw = sm.Tadpole()
>>> paw.normalize_rewards([6, 3, 1, 2])
array([0., 0., 0., 0.])
Last example: the hypergraph from Nazari & Stolyar.
>>> ns = sm.NS19(rates=[1.2, 1.5, 2, 0.8])
>>> ns.normalize_rewards([-1, -1, 1, 2, 5, 4, 7])
array([-2., -5.,  0.,  0.,  0., -1.,  0.])
- optimize_edge(edge, sign=1)[source]#
Tries to find a positive solution that minimizes/maximizes a given edge.
- optimize_rates(weights)[source]#
Tries to find a positive solution that maximizes the total weighted reward.
- Parameters:
  weights (ndarray or list) – Rewards associated to each edge.
- Returns:
  Optimized flow that maximizes the total reward.
- Return type:
Examples
>>> import stochastic_matching as sm
>>> diamond = sm.CycleChain(rates=[4, 5, 2, 1])
Optimize with the first edge that provides no reward.
>>> diamond.optimize_rates([0, 1, 1, 1, 1])
array([3., 1., 1., 1., 0.])
Optimize with the first edge that provides more reward.
>>> diamond.optimize_rates([2, 1, 1, 1, 1])
array([4., 0., 1., 0., 1.])
On bijective graphs, the method directly returns the unique solution.
>>> paw = sm.Tadpole()
>>> paw.optimize_rates([6, 3, 1, 2])
array([1., 1., 1., 1.])
Last example: the hypergraph from Nazari & Stolyar.
>>> ns = sm.NS19(rates=[1.2, 1.5, 2, 0.8])
>>> ns.optimize_rates([-1, -1, 1, 2, 5, 4, 7])
array([0. , 0. , 1.7, 0.5, 1.2, 0. , 0.3])
- property rates#
vector of arrival rates.
You can use 'uniform' or 'proportional' for uniform or degree-proportional allocation. Defaults to 'proportional', which makes the problem stabilizable if the graph is bijective.
- Type:
- run(simulator, n_steps=1000000, seed=None, max_queue=1000, **kwargs)[source]#
All-in-one instantiate and run simulation.
- Parameters:
  simulator (str or Simulator) – Type of simulator to instantiate.
  n_steps (int, optional) – Number of arrivals to simulate.
  seed (int, optional) – Seed of the random generator.
  max_queue (int) – Max queue size. Necessary for speed and detection of instability. For stable systems very close to the instability border, max_queue may be reached.
  **kwargs – Arguments to pass to the simulator.
- Returns:
Success of simulation.
- Return type:
Examples
Let us start with a working triangle and a greedy simulator.
>>> import stochastic_matching.graphs as sm
>>> triangle = sm.Cycle(rates=[3, 4, 5])
>>> triangle.base_flow
array([1., 2., 3.])
>>> triangle.run('random_edge', seed=42, n_steps=20000)
True
>>> triangle.simulation
array([1.0716, 1.9524, 2.976 ])
An ill diamond graph (simulation ends before completion due to drift).
Note that the drift is slow, so if the number of steps is low the simulation may complete without overflowing.
>>> diamond = sm.CycleChain(rates='uniform')
>>> diamond.base_flow
array([0.5, 0.5, 0. , 0.5, 0.5])
>>> diamond.run('longest', seed=42, n_steps=20000)
True
>>> diamond.simulation
array([0.501 , 0.4914, 0.0018, 0.478 , 0.5014])
A working candy. While candies are not good for greedy policies, the virtual queue is designed to deal with it.
>>> candy = sm.HyperPaddle(rates=[1, 1, 1.1, 1, 1.1, 1, 1])
>>> candy.base_flow
array([0.95, 0.05, 0.05, 0.05, 0.05, 0.95, 1.  ])
The above states that the target flow for the hyperedge of the candy (last entry) is 1.
>>> candy.run('longest', seed=42, n_steps=20000)
False
>>> candy.simulator.logs.steps_done
10458
>>> candy.simulation
array([0.64234079, 0.37590361, 0.38760757, 0.40757315, 0.40895009,
       0.59208262, 0.2939759 ])
A greedy simulator performs poorly on the hyperedge.
>>> candy.run('virtual_queue', seed=42, n_steps=20000)
True
>>> candy.simulation
array([0.96048, 0.04104, 0.04428, 0.06084, 0.06084, 0.94464, 0.9846 ])
The virtual queue simulator manages to cope with the target flow on the hyperedge.
- property seeds#
edges that induce a description of the (right) kernel as cycles and paddles.
Changing the seeds modifies the right kernel accordingly. Only available on simple graphs.
- shadow_prices(rewards)[source]#
Return the shadow prices of each item type for a given reward. Cf https://doi.org/10.1145/3578338.3593532
- show(**kwargs)[source]#
Shows the model. See show() for details.
Examples
>>> import stochastic_matching.graphs as sm
>>> pyramid = sm.Pyramid()
>>> pyramid.show()
<IPython.core.display.HTML object>
- show_flow(**kwargs)[source]#
Shows the model (with check on conservation law and edge positivity).
Examples
>>> import stochastic_matching.graphs as sm
>>> pyramid = sm.Pyramid()
>>> pyramid.show_flow()
<IPython.core.display.HTML object>
- show_graph(**kwargs)[source]#
Shows the graph of the model (with node names and no value on edges by default).
Examples
>>> import stochastic_matching.graphs as sm
>>> pyramid = sm.Pyramid()
>>> pyramid.show_graph()
<IPython.core.display.HTML object>
- show_kernel(**kwargs)[source]#
Shows the kernel basis.
Examples
>>> import stochastic_matching.graphs as sm
>>> pyramid = sm.Pyramid()
>>> pyramid.show_kernel()
<IPython.core.display.HTML object>
- show_vertex(i, **kwargs)[source]#
Shows the vertex of index i. See show() for details.
Examples
>>> import stochastic_matching.graphs as sm
>>> pyramid = sm.Pyramid()
>>> pyramid.show_vertex(2)
<IPython.core.display.HTML object>
- property stabilizable#
Whether the model is stabilizable, i.e. bijective with a positive solution of the conservation law.
- Type:
- property vertices#
list of vertices
Examples
>>> import stochastic_matching.graphs as sm
>>> paw = sm.Tadpole()
>>> paw.vertices
[{'kernel_coordinates': None, 'edge_coordinates': array([1., 1., 1., 1.]), 'null_edges': [], 'bijective': True}]
Note that vertices is soft-cached: as long as the graph and rates do not change, they are not re-computed.
>>> paw.vertices
[{'kernel_coordinates': None, 'edge_coordinates': array([1., 1., 1., 1.]), 'null_edges': [], 'bijective': True}]
>>> star = sm.Star()
>>> star.vertices
[{'kernel_coordinates': None, 'edge_coordinates': array([1., 1., 1.]), 'null_edges': [], 'bijective': False}]
>>> cycle = sm.Cycle(4, rates=[3, 3, 3, 1])
>>> cycle.vertices
[{'kernel_coordinates': array([-0.75]), 'edge_coordinates': array([1. , 1.5, 2.5, 0. ]), 'null_edges': [3], 'bijective': False}, {'kernel_coordinates': array([0.75]), 'edge_coordinates': array([2.5, 0. , 1. , 1.5]), 'null_edges': [1], 'bijective': False}]
>>> diamond = sm.CycleChain(rates=[3, 3, 3, 1])
>>> diamond.vertices
[{'kernel_coordinates': array([-0.5]), 'edge_coordinates': array([1., 2., 1., 1., 0.]), 'null_edges': [4], 'bijective': True}, {'kernel_coordinates': array([0.5]), 'edge_coordinates': array([2., 1., 1., 0., 1.]), 'null_edges': [3], 'bijective': True}]
>>> diamond.rates = [2, 3, 2, 1]
>>> diamond.vertices
[{'kernel_coordinates': array([-0.25]), 'edge_coordinates': array([1., 1., 1., 1., 0.]), 'null_edges': [4], 'bijective': True}, {'kernel_coordinates': array([0.75]), 'edge_coordinates': array([2., 0., 1., 0., 1.]), 'null_edges': [1, 3], 'bijective': False}]
>>> diamond.rates = [1, 3, 1, 1]
>>> diamond.vertices
Traceback (most recent call last):
...
ValueError: The matching model admits no positive solution.
>>> diamond.rates = None
>>> diamond.vertices
[{'kernel_coordinates': array([-1.]), 'edge_coordinates': array([0., 2., 1., 2., 0.]), 'null_edges': [0, 4], 'bijective': False}, {'kernel_coordinates': array([1.]), 'edge_coordinates': array([2., 0., 1., 0., 2.]), 'null_edges': [1, 3], 'bijective': False}]
>>> codomino = sm.Codomino()
>>> codomino.rates = [4, 5, 5, 3, 3, 2]
>>> codomino.seeds = [1, 2]
>>> codomino.base_flow = codomino.maximin
>>> sorted(codomino.vertices, key=str)
[{'kernel_coordinates': array([ 1., -1.]), 'edge_coordinates': array([1., 3., 1., 3., 1., 0., 2., 0.]), 'null_edges': [5, 7], 'bijective': True}, {'kernel_coordinates': array([-1., 0.]), 'edge_coordinates': array([3., 1., 2., 0., 2., 1., 0., 2.]), 'null_edges': [3, 6], 'bijective': True}, {'kernel_coordinates': array([-1., -1.]), 'edge_coordinates': array([3., 1., 1., 1., 3., 0., 0., 2.]), 'null_edges': [5, 6], 'bijective': True}, {'kernel_coordinates': array([0., 1.]), 'edge_coordinates': array([2., 2., 3., 0., 0., 2., 1., 1.]), 'null_edges': [3, 4], 'bijective': True}, {'kernel_coordinates': array([1., 0.]), 'edge_coordinates': array([1., 3., 2., 2., 0., 1., 2., 0.]), 'null_edges': [4, 7], 'bijective': True}]
>>> codomino.rates = [2, 4, 2, 2, 4, 2]
>>> codomino.base_flow = np.array([1.0, 1.0, 1.0, 2.0, 0.0, 1.0, 1.0, 1.0])
>>> sorted(codomino.vertices, key=str)
[{'kernel_coordinates': array([ 1., -1.]), 'edge_coordinates': array([0., 2., 0., 4., 0., 0., 2., 0.]), 'null_edges': [0, 2, 4, 5, 7], 'bijective': False}, {'kernel_coordinates': array([-1., 1.]), 'edge_coordinates': array([2., 0., 2., 0., 0., 2., 0., 2.]), 'null_edges': [1, 3, 4, 6], 'bijective': False}, {'kernel_coordinates': array([-1., -1.]), 'edge_coordinates': array([2., 0., 0., 2., 2., 0., 0., 2.]), 'null_edges': [1, 2, 5, 6], 'bijective': False}]
>>> pyramid = sm.Pyramid(rates=[4, 3, 3, 3, 6, 6, 3, 4, 4, 4])
>>> pyramid.seeds = [0, 12, 2]
>>> pyramid.base_flow = pyramid.kernel_to_edge([1/6, 1/6, 1/6])
>>> sorted([(v['kernel_coordinates'], v['bijective']) for v in pyramid.vertices], key=str)
[(array([ 1., -1.,  0.]), True), (array([-1.,  1.,  0.]), True), (array([-1., -1.,  0.]), True), (array([0., 0., 1.]), False), (array([1., 1., 0.]), True)]
- Type:
- stochastic_matching.model.adjacency_to_incidence(adjacency)[source]#
Converts adjacency matrix to incidence matrix.
- Parameters:
  adjacency (ndarray) – Adjacency matrix of a simple graph (symmetric matrix with 0s and 1s, null diagonal).
- Returns:
  Incidence matrix between nodes and edges.
- Return type:
Examples
Convert a diamond adjacency to incidence.
>>> diamond = np.array([[0, 1, 1, 0],
...                     [1, 0, 1, 1],
...                     [1, 1, 0, 1],
...                     [0, 1, 1, 0]])
>>> adjacency_to_incidence(diamond)
array([[1, 1, 0, 0, 0],
       [1, 0, 1, 1, 0],
       [0, 1, 1, 0, 1],
       [0, 0, 0, 1, 1]])
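The conversion boils down to enumerating the edges of the upper triangle and emitting one incidence column per edge. A hypothetical re-implementation sketch (the row-major edge ordering is an assumption, consistent with the diamond example above):

```python
import numpy as np

def adjacency_to_incidence_sketch(adjacency):
    """Hypothetical re-implementation: one incidence column per edge,
    edges enumerated from the upper triangle in row-major order."""
    n = adjacency.shape[0]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if adjacency[i, j]]
    incidence = np.zeros((n, len(edges)), dtype=int)
    for k, (i, j) in enumerate(edges):
        incidence[i, k] = incidence[j, k] = 1
    return incidence

diamond = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 1],
                    [1, 1, 0, 1],
                    [0, 1, 1, 0]])
print(adjacency_to_incidence_sketch(diamond))
```

On the diamond this reproduces the incidence matrix shown in the doctest.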
- stochastic_matching.model.incidence_to_adjacency(incidence)[source]#
Converts incidence matrix to adjacency matrix. If the incidence matrix does not correspond to a simple graph, an error is thrown.
- Parameters:
  incidence (ndarray) – Incidence matrix of a simple graph (matrix with 0s and 1s, two 1s per column).
- Returns:
  Adjacency matrix.
- Return type:
Examples
Convert a diamond graph from incidence to adjacency.
>>> import stochastic_matching as sm
>>> diamond = sm.CycleChain()
>>> diamond.incidence
array([[1, 1, 0, 0, 0],
       [1, 0, 1, 1, 0],
       [0, 1, 1, 0, 1],
       [0, 0, 0, 1, 1]])
>>> incidence_to_adjacency(diamond.incidence)
array([[0, 1, 1, 0],
       [1, 0, 1, 1],
       [1, 1, 0, 1],
       [0, 1, 1, 0]])
An error occurs if one tries to convert a hypergraph.
>>> candy = sm.HyperPaddle()
>>> candy.incidence
array([[1, 1, 0, 0, 0, 0, 0],
       [1, 0, 1, 0, 0, 0, 0],
       [0, 1, 1, 0, 0, 0, 1],
       [0, 0, 0, 0, 0, 0, 1],
       [0, 0, 0, 1, 1, 0, 1],
       [0, 0, 0, 1, 0, 1, 0],
       [0, 0, 0, 0, 1, 1, 0]])
>>> incidence_to_adjacency(candy.incidence)
Traceback (most recent call last):
...
ValueError: The incidence matrix does not seem to correspond to a simple graph.
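The reverse conversion can be sketched with a Gram product: for a simple graph, incidence @ incidence.T has the degrees on the diagonal and the adjacency off it, and any column without exactly two 1s betrays a hyperedge. A hypothetical re-implementation, not the package's actual code:

```python
import numpy as np

def incidence_to_adjacency_sketch(incidence):
    """Hypothetical re-implementation for simple graphs."""
    # A column with anything other than exactly two 1s is not a simple edge
    if np.any(incidence.sum(axis=0) != 2) or np.any(incidence > 1):
        raise ValueError("The incidence matrix does not seem to correspond "
                         "to a simple graph.")
    adjacency = incidence @ incidence.T
    np.fill_diagonal(adjacency, 0)  # drop the degrees on the diagonal
    return adjacency
```

On the diamond incidence above this returns the diamond adjacency, and on the candy it raises the same kind of ValueError as the doctest.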
- stochastic_matching.model.kernel_inverse(kernel)[source]#
- Parameters:
  kernel (numpy.ndarray) – Matrix of kernel vectors (not necessarily orthogonal) of shape d×m.
- Returns:
  The d×d inverse matrix that transforms inner products with the kernel into kernel coordinates.
- Return type:
Examples
When the kernel basis is orthogonal, it returns the diagonal matrix with the inverse of the squared norm of the vectors. For example:
>>> edge_kernel = np.array([[ 0,  0,  1, -1, -1,  1,  0,  0],
...                         [ 1, -1,  0, -1,  1,  0, -1,  1]])
>>> inverse = kernel_inverse(edge_kernel)
>>> inverse
array([[0.25      , 0.        ],
       [0.        , 0.16666667]])
As announced, the basis is orthogonal, so the diagonal entries 1/4 and 1/6 are the inverse squared norms of the two kernel vectors.
Consider [-1, 3] in kernel coordinates. In edge coordinates, it corresponds to:
>>> edges = np.array([-1, 3]) @ edge_kernel
>>> edges
array([ 3, -3, -1, -2,  4, -1, -3,  3])
One can project the edges on the kernel.
>>> projection = edge_kernel @ edges
>>> projection
array([-4, 18])
The kernel inverse rectifies the projection back into kernel coordinates.
>>> projection @ inverse
array([-1.,  3.])
If the kernel basis is not orthogonal, it returns something more complex.
>>> edge_kernel = np.array([[ 1, -1,  1, -2,  0,  1, -1,  1],
...                         [ 0,  0, -1,  1,  1, -1,  0,  0]])
>>> inverse = kernel_inverse(edge_kernel)
>>> inverse
array([[0.16666667, 0.16666667],
       [0.16666667, 0.41666667]])
The inverse is more complex because the basis is not orthogonal.
>>> edges = np.array([2, -1]) @ edge_kernel
>>> edges
array([ 2, -2,  3, -5, -1,  3, -2,  2])
>>> projection = edge_kernel @ edges
>>> projection
array([ 24, -12])
>>> projection @ inverse
array([ 2., -1.])
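The values above suggest that kernel_inverse is simply the inverse of the Gram matrix of the basis (K Kᵀ), which reduces to the diagonal of inverse squared norms in the orthogonal case. A plausible numpy sketch, checked against the non-orthogonal basis above:

```python
import numpy as np

K = np.array([[ 1, -1,  1, -2,  0,  1, -1,  1],
              [ 0,  0, -1,  1,  1, -1,  0,  0]])

# Plausible sketch of kernel_inverse: the inverse Gram matrix of the basis
inverse = np.linalg.inv(K @ K.T)
print(inverse.round(8))
```

The printed matrix matches the kernel_inverse doctest output for this basis; this is an assumption about the implementation, not the package's actual code.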
- stochastic_matching.model.simple_left_kernel(left)[source]#
- Parameters:
  left (ndarray) – Left kernel (i.e. node kernel) of a simple graph, corresponding to bipartite components.
- Returns:
  The kernel with infinite-norm renormalization.
- Return type:
Examples
By default, the kernel vectors are 2-normalized.
>>> import stochastic_matching.graphs as sm
>>> sample = sm.concatenate([sm.Cycle(4), sm.Star(5)], 0)
>>> raw_kernel = Kernel(sample.incidence)
>>> raw_kernel.left
array([[ 0.5      ,  0.       ],
       [-0.5      ,  0.       ],
       [ 0.5      ,  0.       ],
       [-0.5      ,  0.       ],
       [ 0.       , -0.4472136],
       [ 0.       ,  0.4472136],
       [ 0.       ,  0.4472136],
       [ 0.       ,  0.4472136],
       [ 0.       ,  0.4472136]])
simple_left_kernel adjusts the values to {-1, 0, 1}:
>>> simple_left_kernel(raw_kernel.left)
array([[ 1,  0],
       [-1,  0],
       [ 1,  0],
       [-1,  0],
       [ 0, -1],
       [ 0,  1],
       [ 0,  1],
       [ 0,  1],
       [ 0,  1]])
- stochastic_matching.model.simple_right_kernel(edge_kernel, seeds)[source]#
- Parameters:
  edge_kernel (ndarray) – Right kernel (i.e. edge kernel) of a simple graph.
  seeds (list of int) – Seed edges of the kernel space. Valid seeds can be obtained from connected_components(). Cf https://hal.archives-ouvertes.fr/hal-03502084.
- Returns:
  The kernel expressed as elements from the cycle space (even cycles and kayak paddles).
- Return type:
Examples
Start with the co-domino.
>>> import stochastic_matching.graphs as sm
Default decomposition is a square and a hexagon.
>>> right = sm.Codomino().kernel.right
>>> right
array([[ 0,  0,  1, -1, -1,  1,  0,  0],
       [ 1, -1,  0, -1,  1,  0, -1,  1]])
A second possible decomposition with a kayak paddle and a square.
>>> simple_right_kernel(right, [0, 4])
array([[ 1, -1,  1, -2,  0,  1, -1,  1],
       [ 0,  0, -1,  1,  1, -1,  0,  0]])
Another example with the pyramid.
>>> right = sm.Pyramid().kernel.right
Default decomposition: cycles of length 8, 10, and 6.
>>> right
array([[ 0,  0,  1, -1, -1,  1,  0, -1,  1, -1,  1,  0,  0],
       [-1,  1,  0,  1, -1,  1,  0, -1, -1,  1,  0, -1,  1],
       [ 1, -1,  0, -1,  1, -1,  1,  0,  0,  0,  0,  0,  0]])
Second decomposition: two cycles of length 6, one of length 8.
>>> simple_right_kernel(right, [0, 12, 2])
array([[ 1, -1,  0, -1,  1, -1,  1,  0,  0,  0,  0,  0,  0],
       [ 0,  0,  0,  0,  0,  0,  1, -1, -1,  1,  0, -1,  1],
       [ 0,  0,  1, -1, -1,  1,  0, -1,  1, -1,  1,  0,  0]])
Another decomposition: two cycles of length 6 and a kayak paddle \(KP_{3, 3, 3}\).
>>> simple_right_kernel(right, [5, 7, 2])
array([[-1,  1,  0,  1, -1,  1, -1,  0,  0,  0,  0,  0,  0],
       [ 0,  0,  0,  0,  0,  0, -1,  1,  1, -1,  0,  1, -1],
       [ 1, -1,  1, -2,  0,  0,  0,  0,  2, -2,  1,  1, -1]])
- stochastic_matching.model.status_names = {(False, False): 'Nonjective', (False, True): 'Surjective-only', (True, False): 'Injective-only', (True, True): 'Bijective'}#
Name associated to a (injective, surjective) tuple.
- stochastic_matching.model.status_names_simple_connected = {(False, False): 'Bipartite with cycle(s)', (False, True): 'Non-bipartite polycyclic', (True, False): 'Tree', (True, True): 'Non-bipartite monocyclic'}#
Name associated to a (injective, surjective) tuple when the graph is simple and connected.
- stochastic_matching.model.traversal(model)[source]#
Using graph traversal, splits the graph into its connected components as a list of sets of nodes and edges. If the graph is simple, additional information obtained by the traversal is provided.
- Parameters:
  model (Model) – Model with graph to decompose.
- Returns:
  The list of connected components. If the graph is not simple, each connected component contains its sets of nodes and edges. If the graph is simple, each component also contains a set of spanning edges, a pivot edge that makes the spanner bijective (if any), a set of edges that can seed the kernel space, and the type of the connected component.
- Return type:
Examples
For simple graphs, the method provides a lot of information on each connected component.
>>> import stochastic_matching.graphs as sm
>>> tutti = sm.concatenate([sm.Cycle(4), sm.Complete(4), sm.CycleChain(), sm.Tadpole(), sm.Star()], 0)
>>> traversal(tutti)
[{'nodes': {0, 1, 2, 3}, 'edges': {0, 1, 2, 3}, 'spanner': {0, 1, 2}, 'pivot': False, 'seeds': {3}, 'type': 'Bipartite with cycle(s)'}, {'nodes': {4, 5, 6, 7}, 'edges': {4, 5, 6, 7, 8, 9}, 'spanner': {4, 5, 6}, 'pivot': 8, 'seeds': {9, 7}, 'type': 'Non-bipartite polycyclic'}, {'nodes': {8, 9, 10, 11}, 'edges': {10, 11, 12, 13, 14}, 'spanner': {10, 11, 13}, 'pivot': 12, 'seeds': {14}, 'type': 'Non-bipartite polycyclic'}, {'nodes': {12, 13, 14, 15}, 'edges': {16, 17, 18, 15}, 'spanner': {16, 18, 15}, 'pivot': 17, 'seeds': set(), 'type': 'Non-bipartite monocyclic'}, {'nodes': {16, 17, 18, 19}, 'edges': {19, 20, 21}, 'spanner': {19, 20, 21}, 'pivot': False, 'seeds': set(), 'type': 'Tree'}]
This information makes the analysis worthwhile even when the graph is connected.
>>> traversal(sm.Pyramid())
[{'nodes': {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}, 'edges': {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}, 'spanner': {0, 1, 3, 4, 5, 7, 8, 9, 11}, 'pivot': 2, 'seeds': {10, 12, 6}, 'type': 'Non-bipartite polycyclic'}]
If the graph is treated as a hypergraph, much less information is available.
>>> tutti.adjacency = None
>>> traversal(tutti)
[{'nodes': {0, 1, 2, 3}, 'edges': {0, 1, 2, 3}}, {'nodes': {4, 5, 6, 7}, 'edges': {4, 5, 6, 7, 8, 9}}, {'nodes': {8, 9, 10, 11}, 'edges': {10, 11, 12, 13, 14}}, {'nodes': {12, 13, 14, 15}, 'edges': {16, 17, 18, 15}}, {'nodes': {16, 17, 18, 19}, 'edges': {19, 20, 21}}]