Landmarks Tutorial#
In this notebook, we will use the landmarks submodule of Gismo to give an interactive description of ACM topics and researchers of the NPA laboratory (https://www.lip6.fr/recherche/team.php?acronyme=NPA).
This notebook can be used as a blueprint to analyze other groups of people under the scope of a topic classification.
Before starting this tutorial, it is recommended to have looked at the ACM and DBLP tutorials.
Note
Most, if not all, of the features presented here are scheduled to be integrated into Gismap, so the content of this notebook is tagged as deprecated and will not be maintained in future Gismo releases.
NPA Researchers#
In this section, we bind the NPA researchers with their DBLP id.
[1]:
from pathlib import Path
path = Path.home() / "temp"
We assume that Gismap is installed (pip install gismap).
LDB is a local copy of DBLP provided by Gismap.
[2]:
from gismap import LDB
LDB.search_author("Céline Comte")
[2]:
[LDBAuthor(name='Céline Comte', key='179/2173')]
LDB stores publications and authors as lists of tuples that one can convert to dicts or more advanced objects.
[3]:
i = 1234567
print(LDB.publis[i])
print(LDB.publication_by_index(i))
('conf/hci/TamuraOC07', 'NIRS Trajectories in Oxy-Deoxy Hb Plane and the Trajectory Map to Understand Brain Activities Related to Human Interface.', 'conference', [581229, 1114584, 1252267], 'https://doi.org/10.1007/978-3-540-73345-4_112', ['conf/hci'], '994-1003', 'HCI (8)', 2007)
{'key': 'conf/hci/TamuraOC07', 'title': 'NIRS Trajectories in Oxy-Deoxy Hb Plane and the Trajectory Map to Understand Brain Activities Related to Human Interface.', 'type': 'conference', 'authors': [581229, 1114584, 1252267], 'url': 'https://doi.org/10.1007/978-3-540-73345-4_112', 'streams': ['conf/hci'], 'pages': '994-1003', 'venue': 'HCI (8)', 'year': 2007}
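The tuple-to-dict conversion shown above can be reproduced by hand; a minimal sketch, assuming the field order displayed in the output (the record here is a made-up toy, not a real DBLP entry):

```python
# Field names in the order used by LDB's compact publication tuples (as shown above).
fields = ["key", "title", "type", "authors", "url", "streams", "pages", "venue", "year"]

# A toy tuple in the same layout as an entry of LDB.publis.
row = (
    "conf/xyz/Doe21",
    "A Toy Paper.",
    "conference",
    [42],
    "https://doi.org/10.0/0",
    ["conf/xyz"],
    "1-10",
    "XYZ",
    2021,
)

# Pair each field name with its value to obtain the dict form.
record = dict(zip(fields, row))
print(record["title"])
```

This is essentially what `publication_by_index` does for you, on top of resolving the author indices.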
Gismap can fetch members of any LIP6 team.
[4]:
from gismap.lab_examples.lip6 import Lip6Map
npa = Lip6Map("npa", dbs="ldb")
npa.update_authors()
INFO:GisMap:Multiple entries for Liu Qiong in ldb
INFO:GisMap:Barré Capucine not found in ldb
INFO:GisMap:Bombar Ufuk not found in ldb
INFO:GisMap:Multiple entries for Pan Bo in ldb
INFO:GisMap:Zaarouri Firas not found in ldb
INFO:GisMap:Mespoulhes Émilie not found in ldb
INFO:GisMap:Rahich Hassane not found in ldb
INFO:GisMap:Vaissade Frédéric not found in ldb
INFO:GisMap:Zaidi Taha Mohsen not found in ldb
INFO:GisMap:Multiple entries for Nguyen Thi-Mai-Trang in ldb
INFO:GisMap:Kurose James not found in ldb
INFO:GisMap:Kazemi Saied not found in ldb
INFO:GisMap:Vacheresse Sabrina not found in ldb
NPA researchers:
[5]:
print(", ".join(a.name for a in npa.authors.values()))
Baey Sébastien, Baynat Bruno, Bui-Xuan Binh-Minh, Dias de Amorim Marcelo, Fdida Serge, Fladenmuller Anne, Fossati Francesca, Fourmaux Olivier, Friedman Timur, Kervella Brigitte, Liu Qiong, Malouch Naceur, Potop-Butucaru Maria, Pujolle Guy, Thai Kim Loan, Tixeuil Sébastien, Urbain Xavier, Zaghdoudi Bilel, An Tengfei, Canizio Lopes Diego, Da Silva Gilbert Mateus, Di Filippo Lorenzo, Galkiewicz Stefan, Hamroun Cherifa, Kefalas Dimitrios, Legheraba Mohamed Amine, Nibert Guillaume, Ohri Elif Ebru, Pan Bo, Pham Alexandre, Prestes Fittipaldi Giuliano, Rimlinger Hugo, Stoltidis Alexandros, Tighilt Massinissa, Koukoulis Ippokratis, Nardi Elena, Peugnet Nicolas, Badreddine Wafa, Giovanidis Anastasios, Mathieu Fabien, Nguyen Thi-Mai-Trang, Scognamiglio Ciro, Montpetit Marie-José, Blin Lélia, Vermeulen Kévin, De Souza e Silva Edmundo, Varela de Medeiros Dianne Scherly, Costa Luis Henrique, Masuzawa Toshimitsu, Campista Miguel Elias Mitre, Herlihy Maurice, Shrira Liuba, Nogueira José-Marcos, Richa Andrea, Ayoubi Solayman, Tsourdinis Theodoros, Beltrando Lionel, Bramas Quentin, Leonelli Caterina, Taniou Nadine
DBLP Gismo#
In this section, we use Landmarks to construct a small XGismo focused on the NPA researchers. In detail:

- We construct a large Gismo between articles and researchers, exactly like in the DBLP tutorial;
- We use landmarks to extract a (much smaller) list of articles based on collaboration proximity;
- We build an XGismo between researchers and keywords from this smaller source.
Using landmarks to shrink a source#
To reduce the size of the dataset, we make landmarks out of the researchers, and we credit each entry with a budget of 2,000 articles.
[12]:
from gismo.landmarks import Landmarks
npa_landmarks_full = Landmarks(
    source=npa.authors.keys(), to_text=lambda x: [LDB.keys[x]], x_density=2000
)
We launch the computation of the source. This takes a couple of minutes, as a ranking diffusion needs to be performed for all researchers.
[13]:
import logging
logging.basicConfig()
log = logging.getLogger("Gismo")
log.setLevel(level=logging.INFO)
[14]:
reduced_source = npa_landmarks_full.get_reduced_source(gismo)
INFO:Gismo:Start computation of 60 landmarks.
INFO:Gismo:All landmarks are built.
[15]:
print(f"Source length went down from {len(LDB.publis)} to {len(reduced_source)}.")
Source length went down from 8236135 to 64484.
Instead of 8,200,000 general-purpose articles, we now have 64,500 articles lying in the neighborhood of the considered researchers. We can now close the file descriptor, as we won’t need further access to the original source.
Building a small XGismo#
Author Embedding#
Author embedding takes a couple of seconds instead of a couple of minutes.
[16]:
reduced_corpus = Corpus(reduced_source, to_text=to_authors_text)
reduced_author_embedding = Embedding(vectorizer=vectorizer_author)
reduced_author_embedding.fit_transform(reduced_corpus)
C:\Users\loufa\AppData\Local\Programs\Python\Python312\Lib\site-packages\sklearn\feature_extraction\text.py:517: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None'
warnings.warn(
Sanity Check#
We can rebuild a small author Gismo. This part is merely a sanity check to verify that the reduction didn’t change things too much in the vicinity of NPA.
[17]:
reduced_gismo = Gismo(reduced_corpus, reduced_author_embedding)
Ranking is nearly instant.
[18]:
reduced_gismo.rank(name_to_index("Fabien Mathieu"))
[18]:
True
The results are almost identical to what was returned by the full Gismo.
[19]:
from gismo.post_processing import post_features_cluster_print
reduced_gismo.post_features_cluster = post_features_cluster_print
reduced_gismo.post_features_item = lambda g, i: LDB.author_by_index(
    g.embedding.features[i]
).name
reduced_gismo.get_features_by_cluster()
F: 0.02. R: 0.25. S: 0.71.
- F: 0.03. R: 0.25. S: 0.70.
-- F: 0.04. R: 0.24. S: 0.69.
--- F: 0.08. R: 0.22. S: 0.66.
---- F: 0.09. R: 0.19. S: 0.61.
----- F: 0.35. R: 0.18. S: 0.58.
------ Fabien Mathieu (R: 0.12; S: 1.00)
------ F: 0.61. R: 0.03. S: 0.36.
------- François Durand (R: 0.02; S: 0.42)
------- Ludovic Noirie (R: 0.01; S: 0.31)
------- Emma Caizergues (R: 0.01; S: 0.25)
------ F: 0.60. R: 0.03. S: 0.38.
------- Laurent Viennot (R: 0.01; S: 0.43)
------- F: 0.69. R: 0.02. S: 0.33.
-------- Diego Perino (R: 0.01; S: 0.39)
-------- Yacine Boufkhad (R: 0.01; S: 0.27)
----- The Dang Huynh (R: 0.01; S: 0.27)
---- F: 0.24. R: 0.03. S: 0.35.
----- F: 0.77. R: 0.02. S: 0.34.
------ Julien Reynier (R: 0.01; S: 0.34)
------ Fabien de Montgolfier (R: 0.01; S: 0.35)
------ Anh-Tuan Gai (R: 0.01; S: 0.27)
----- Gheorghe Postelnicu (R: 0.00; S: 0.17)
--- F: 0.58. R: 0.02. S: 0.31.
---- Céline Comte (R: 0.01; S: 0.30)
---- Thomas Bonald (R: 0.00; S: 0.23)
--- Nidhi Hegde (R: 0.00; S: 0.19)
-- F: 1.00. R: 0.01. S: 0.22.
--- Ilkka Norros (R: 0.01; S: 0.22)
--- François Baccelli (R: 0.00; S: 0.22)
- Mohamed Bouklit (R: 0.01; S: 0.18)
Word Embedding#
Now we build the word embedding, with the spaCy add-on. This takes a couple of minutes instead of a couple of hours.
[20]:
import spacy
# Initialize spacy 'en' model, keeping only tagger component needed for lemmatization
nlp = spacy.load("en_core_web_sm", disable=["parser", "ner"])
# Who cares about DET and such?
keep = {"ADJ", "NOUN", "NUM", "PROPN", "SYM", "VERB"}
preprocessor = lambda txt: " ".join(
    [
        token.lemma_.lower()
        for token in nlp(txt)
        if token.pos_ in keep and not token.is_stop
    ]
)
vectorizer_text = CountVectorizer(
    dtype=float, min_df=5, max_df=0.02, ngram_range=(1, 3), preprocessor=preprocessor
)
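The preprocessor keeps lowercase lemmas of content words and drops stop words. Its filtering logic can be illustrated without spaCy; a minimal stand-in where tokens are hand-annotated `(lemma, pos, is_stop)` tuples rather than real spaCy tokens:

```python
# Hand-annotated tokens mimicking spaCy's (lemma_, pos_, is_stop) attributes.
tokens = [
    ("the", "DET", True),
    ("network", "NOUN", False),
    ("calculus", "NOUN", False),
    ("be", "VERB", True),
    ("compute", "VERB", False),
    ("bounds", "NOUN", False),
]

# Same keep set as in the notebook: content-word parts of speech only.
keep = {"ADJ", "NOUN", "NUM", "PROPN", "SYM", "VERB"}

# Keep lowercase lemmas of non-stop content words, as the preprocessor does.
cleaned = " ".join(
    lemma.lower() for lemma, pos, stop in tokens if pos in keep and not stop
)
print(cleaned)  # network calculus compute bounds
```

Note that "be" is discarded despite being a VERB, because spaCy flags it as a stop word.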
[21]:
reduced_corpus.to_text = lambda e: e[1]
reduced_word_embedding = Embedding(vectorizer=vectorizer_text)
reduced_word_embedding.fit_transform(reduced_corpus)
Gathering pieces together#
We can combine the reduced embeddings to build a XGismo between authors and words.
[22]:
from gismo.gismo import XGismo
xgismo = XGismo(
    x_embedding=reduced_author_embedding, y_embedding=reduced_word_embedding
)
We can save this for later use.
[23]:
xgismo.dump(filename="reduced_npa_xgismo", path=path, overwrite=True)
The file should be less than 40 MB, whereas a full-size DBLP XGismo is about 4 GB. What about speed and quality of results?
[24]:
xgismo.rank(name_to_index("Anne Bouillard"), y=False)
xgismo.post_documents_item = lambda g, i: LDB.author_by_index(g.corpus[i]).name
[25]:
xgismo.get_documents_by_rank()
[25]:
['Anne Bouillard',
'Bruno Gaujal',
'François Baccelli',
'Ana Busic',
'Paul Nikolaus',
'Jens B. Schmitt',
'Eric Thierry',
'Zhen Liu',
'Christelle Rovetta',
'Ke Feng',
'Aurore Junier',
'Giovanni Stea',
'José Meseguer',
'Yves Dallery',
'Seyed Mohammadhossein Tabatabaee',
'Jean-Yves Le Boudec',
'Jean Mairesse',
'Élie de Panafieu',
'Leandros Tassiulas',
'Laurent George',
'Don Towsley',
'Bruno Baynat']
Let’s try a more elaborate display.
[26]:
from gismo.post_processing import (
    post_documents_cluster_print,
    post_features_cluster_print,
)
xgismo.parameters.distortion = 1.0
xgismo.post_documents_cluster = post_documents_cluster_print
xgismo.post_features_cluster = post_features_cluster_print
xgismo.get_documents_by_cluster()
F: 0.06. R: 0.09. S: 0.81.
- F: 0.67. R: 0.06. S: 0.76.
-- F: 0.82. R: 0.05. S: 0.75.
--- F: 0.91. R: 0.05. S: 0.74.
---- Anne Bouillard (R: 0.03; S: 0.79)
---- Paul Nikolaus (R: 0.00; S: 0.73)
---- Jens B. Schmitt (R: 0.00; S: 0.73)
---- Eric Thierry (R: 0.00; S: 0.75)
---- Ke Feng (R: 0.00; S: 0.70)
---- Élie de Panafieu (R: 0.00; S: 0.69)
--- Bruno Gaujal (R: 0.00; S: 0.75)
-- François Baccelli (R: 0.00; S: 0.65)
-- Aurore Junier (R: 0.00; S: 0.64)
- F: 0.06. R: 0.03. S: 0.45.
-- F: 0.12. R: 0.02. S: 0.35.
--- F: 0.33. R: 0.02. S: 0.26.
---- F: 0.55. R: 0.01. S: 0.23.
----- Ana Busic (R: 0.00; S: 0.31)
----- Christelle Rovetta (R: 0.00; S: 0.20)
----- Yves Dallery (R: 0.00; S: 0.22)
----- Bruno Baynat (R: 0.00; S: 0.18)
---- F: 0.60. R: 0.01. S: 0.36.
----- F: 0.72. R: 0.00. S: 0.34.
------ Zhen Liu (R: 0.00; S: 0.31)
------ Don Towsley (R: 0.00; S: 0.32)
----- Leandros Tassiulas (R: 0.00; S: 0.30)
--- F: 0.31. R: 0.00. S: 0.28.
---- José Meseguer (R: 0.00; S: 0.21)
---- Jean Mairesse (R: 0.00; S: 0.28)
-- F: 0.47. R: 0.01. S: 0.32.
--- F: 0.68. R: 0.00. S: 0.31.
---- Giovanni Stea (R: 0.00; S: 0.30)
---- Laurent George (R: 0.00; S: 0.27)
--- F: 1.00. R: 0.00. S: 0.25.
---- Seyed Mohammadhossein Tabatabaee (R: 0.00; S: 0.25)
---- Jean-Yves Le Boudec (R: 0.00; S: 0.25)
[27]:
xgismo.get_features_by_cluster(target_k=1.4, resolution=0.9, distortion=0.5)
F: 0.29. R: 0.20. S: 0.94.
- F: 0.74. R: 0.19. S: 0.93.
-- F: 0.83. R: 0.18. S: 0.93.
--- F: 0.93. R: 0.14. S: 0.93.
---- F: 0.96. R: 0.13. S: 0.92.
----- F: 0.97. R: 0.12. S: 0.92.
------ F: 1.00. R: 0.10. S: 0.92.
------- network calculus (R: 0.05; S: 0.92)
------- calculus (R: 0.05; S: 0.92)
------ stochastic network (R: 0.01; S: 0.91)
------ case delay (R: 0.01; S: 0.91)
----- worst case (R: 0.01; S: 0.91)
---- multiplexing (R: 0.01; S: 0.89)
--- F: 0.94. R: 0.04. S: 0.84.
---- F: 0.97. R: 0.02. S: 0.84.
----- free choice (R: 0.01; S: 0.84)
----- choice (R: 0.01; S: 0.83)
---- net (R: 0.01; S: 0.86)
-- delay (R: 0.01; S: 0.77)
- monotonicity (R: 0.02; S: 0.35)
Rebuild landmarks#
NPA landmarks#
We can rebuild NPA landmarks on the XGismo.
[28]:
npa_landmarks = Landmarks(
    source=list(npa.authors),
    to_text=lambda x: [LDB.keys[x]],
    rank=lambda g, q: g.rank(q, y=False),
)
npa_landmarks.fit(xgismo)
INFO:Gismo:Start computation of 60 landmarks.
INFO:Gismo:All landmarks are built.
We can extract the NPA researchers that are the most similar to a given researcher (not necessarily from NPA).
[29]:
xgismo.rank(name_to_index("Anne Bouillard"), y=False)
npa_landmarks.post_item = lambda l, i: LDB.author_by_key(l[i]).name
npa_landmarks.get_landmarks_by_rank(xgismo)
[29]:
['Fabien Mathieu',
'Elif Ebru Ohri',
'Bruno Baynat',
'Luis Henrique Costa Neto',
'Nicolas Peugnet',
'Edmundo de Souza e Silva',
'Guy Pujolle',
'M. Timur Friedman',
'Anastasios Giovanidis',
'Andréa W. Richa',
'Marcelo Dias de Amorim',
'Quentin Bramas',
'José Marcos S. Nogueira',
'Massinissa Tighilt',
'Binh-Minh Bui-Xuan',
'Sébastien Baey',
'Mohamed Amine Legheraba',
'Maurice Herlihy',
'Marie-José Montpetit',
'Sébastien Tixeuil',
'Naceur Malouch',
'Kim Loan Thai',
'Theodoros Tsourdinis',
'Serge Fdida',
'Maria Potop-Butucaru',
'Toshimitsu Masuzawa',
'Anne Fladenmuller',
'Lélia Blin',
'Wafa Badreddine']
We can also use a keyword query, and organize the results in clusters.
[30]:
xgismo.rank("stochastic matching")
from gismo.post_processing import post_landmarks_cluster_print
npa_landmarks.post_cluster = post_landmarks_cluster_print
npa_landmarks.get_landmarks_by_cluster(xgismo, balance=0.5, target_k=1.2)
F: 0.18.
- Binh-Minh Bui-Xuan
- F: 0.29.
-- F: 0.45.
--- F: 0.70.
---- Anastasios Giovanidis
---- Fabien Mathieu
---- F: 0.84.
----- Bruno Baynat
----- Edmundo de Souza e Silva
----- Guy Pujolle
---- Massinissa Tighilt
--- Andréa W. Richa
-- F: 0.90.
--- Sébastien Tixeuil
--- Toshimitsu Masuzawa
--- Lélia Blin
--- Quentin Bramas
--- Maria Potop-Butucaru
ACM landmarks#
We can build other landmarks using the ACM categories. This will enable us to describe things in terms of categories.
[31]:
from gismo.datasets.acm import get_acm, flatten_acm
acm = flatten_acm(get_acm(), min_size=10)
[32]:
acm_landmarks = Landmarks(acm, to_text=lambda e: e["query"])
[33]:
acm_landmarks.fit(xgismo)
INFO:Gismo:Start computation of 113 landmarks.
INFO:Gismo:All landmarks are built.
[34]:
xgismo.rank(name_to_index("Fabien Mathieu"), y=False)
acm_landmarks.post_item = lambda l, i: l[i]["name"]
acm_landmarks.get_landmarks_by_rank(xgismo, balance=0.5, target_k=1.2)
[34]:
['Discrete mathematics',
'Graph theory',
'Machine learning algorithms',
'Theory of computation',
'Models of computation',
'Computational complexity and cryptography',
'Mathematics of computing',
'Architectures',
'Design and analysis of algorithms',
'Software system structures',
'Algorithmic game theory and mechanism design']
[35]:
xgismo.rank("combinatorics")
acm_landmarks.post_cluster = post_landmarks_cluster_print
acm_landmarks.get_landmarks_by_cluster(xgismo, balance=0.5, target_k=1.5)
F: 0.48.
- F: 0.98.
-- Discrete mathematics
-- Mathematics of computing
-- Graph theory
-- Visualization
-- Simulation types and techniques
- F: 0.77.
-- F: 1.00.
--- Symbolic and algebraic algorithms
--- Symbolic and algebraic manipulation
-- F: 0.95.
--- Cryptography
--- Hardware validation
- F: 0.72.
-- Models of computation
-- Computational complexity and cryptography
Note that we fully ignore the original ACM category hierarchy. Instead, Gismo builds its own hierarchy tailored to the query.
Combining landmarks#
Through the post_processing methods, we can combine multiple landmarks. For example, the following method associates NPA researchers and keywords to a tree of ACM categories.
[36]:
from gismo.common import auto_k
import numpy as np
def post_cluster_acm(l, cluster, depth=0, kw_size=0.3, mts_size=0.5):
    tk_kw = 1 / kw_size
    tk_mts = 1 / mts_size
    n = l.x_direction.shape[0]
    kws_view = cluster.vector[0, n:]
    k = auto_k(data=kws_view.data, max_k=100, target=tk_kw)
    keywords = [
        xgismo.embedding.features[i]
        for i in kws_view.indices[np.argsort(-kws_view.data)[:k]]
    ]
    if len(cluster.children) > 0:
        print(f"|{'-' * depth}")
        for c in cluster.children:
            post_cluster_acm(l, c, depth=depth + 1)
    else:
        domain = l[cluster.indice]["name"]
        researchers = ", ".join(
            npa_landmarks.get_landmarks_by_rank(
                cluster, target_k=tk_mts, distortion=0.5
            )
        )
        print(f"|{'-' * depth} {domain} ({researchers}) ({', '.join(keywords)})")
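The depth-indented display produced by this function boils down to a simple recursion over a tree: internal nodes print a bare indentation marker and recurse, leaves print their payload. A minimal standalone sketch with a toy node class (not the actual gismo cluster objects):

```python
# Toy tree node: internal nodes carry children, leaves carry a label.
class Node:
    def __init__(self, label=None, children=None):
        self.label = label
        self.children = children or []

def show(node, depth=0, out=None):
    """Collect the '|---'-style lines, one per node, recursing into children."""
    if out is None:
        out = []
    if node.children:
        out.append(f"|{'-' * depth}")  # internal node: marker only
        for child in node.children:
            show(child, depth=depth + 1, out=out)
    else:
        out.append(f"|{'-' * depth} {node.label}")  # leaf: marker plus payload
    return out

tree = Node(children=[Node(label="Graph theory"), Node(children=[Node(label="Cryptography")])])
print("\n".join(show(tree)))
```

`post_cluster_acm` follows the same pattern, with the leaf payload computed from the cluster (top keywords and closest NPA landmarks) instead of a stored label.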
[37]:
xgismo.rank("combinatorics")
acm_landmarks.post_cluster = post_cluster_acm
acm_landmarks.get_landmarks_by_cluster(xgismo, target_k=1.5)
|
|-
|-- Discrete mathematics (Fabien Mathieu, Sébastien Tixeuil, Quentin Bramas, Edmundo de Souza e Silva, Binh-Minh Bui-Xuan, Elena Nardi, Toshimitsu Masuzawa) (complexity, combinatoric, combinatorics, guard, polygon, nondeterministic, edge, transmitter, walk, point)
|-- Mathematics of computing (Fabien Mathieu, Sébastien Tixeuil, Quentin Bramas, Edmundo de Souza e Silva) (complexity, point, edge, combinatoric, combinatorics)
|-- Graph theory (Fabien Mathieu, Sébastien Tixeuil, Francesca Fossati) (complexity, edge, point)
|-- Visualization (Fabien Mathieu, Sébastien Tixeuil, Elena Nardi, Quentin Bramas) (analytic, edge, complexity, point, datum, evaluation)
|-- Simulation types and techniques (Fabien Mathieu, Elena Nardi, Sébastien Tixeuil) (analytic, edge, complexity, point, datum, fault)
|-
|--
|--- Symbolic and algebraic algorithms (Fabien Mathieu, Bruno Baynat, Xavier Urbain) (complexity, calculus, edge, point, functions, algebraic)
|--- Symbolic and algebraic manipulation (Fabien Mathieu, Bruno Baynat, Xavier Urbain) (complexity, edge, calculus, point, functions)
|--
|--- Cryptography (Diego Canizio Lopes, Fabien Mathieu, Elena Nardi) (complexity, key, attack, cryptanalysis, encryption, edge, point)
|--- Hardware validation (Fabien Mathieu, Xavier Urbain, Sébastien Tixeuil) (sat, complexity, fault, point, edge, sign)
|-
|-- Models of computation (Fabien Mathieu, Elena Nardi, Maurice Herlihy, Xavier Urbain, Maria Potop-Butucaru, Liuba Shrira, Binh-Minh Bui-Xuan) (complexity, calculus, edge)
|-- Computational complexity and cryptography (Elena Nardi, Xavier Urbain, Liuba Shrira, Maria Potop-Butucaru, Maurice Herlihy, Binh-Minh Bui-Xuan, Andréa W. Richa, Quentin Bramas, Sébastien Tixeuil) (complexity)
Conversely, one can associate ACM categories and keywords to a tree of NPA researchers.
[38]:
def post_cluster_npa(l, cluster, depth=0, kw_size=0.3, acm_size=0.5):
    tk_kw = 1 / kw_size
    tk_acm = 1 / acm_size
    n = l.x_direction.shape[0]
    kws_view = cluster.vector[0, n:]
    k = auto_k(data=kws_view.data, max_k=100, target=tk_kw)
    keywords = [
        xgismo.embedding.features[i]
        for i in kws_view.indices[np.argsort(-kws_view.data)[:k]]
    ]
    if len(cluster.children) > 0:
        print(f"|{'-' * depth}")
        for c in cluster.children:
            post_cluster_npa(l, c, depth=depth + 1)
    else:
        researcher = LDB.author_by_key(l[cluster.indice]).name
        domains = ", ".join(
            acm_landmarks.get_landmarks_by_rank(
                cluster, target_k=tk_acm, distortion=0.5
            )
        )
        print(f"|{'-' * depth} {researcher} ({domains}) ({', '.join(keywords)})")
[39]:
xgismo.rank(name_to_index("Anne Bouillard"), y=False)
npa_landmarks.post_cluster = post_cluster_npa
npa_landmarks.get_landmarks_by_cluster(xgismo, target_k=1.4)
|
|-
|--
|--- Fabien Mathieu (Symbolic and algebraic algorithms, Discrete mathematics, Symbolic and algebraic manipulation, Models of computation, Graph theory, Mathematics of computing, Mathematical analysis) (network calculus, calculus, space, aggregate)
|---
|----
|----- Bruno Baynat (Symbolic and algebraic algorithms, Symbolic and algebraic manipulation) (closed, queue network, queueing, queue, markovian, burstiness, closed queueing, modeling, trade, routing)
|----- Edmundo de Souza e Silva (Symbolic and algebraic algorithms) (markovian, queueing, end)
|---- Guy Pujolle (Network types) (architecture, net, routing, policy, end, issue)
|--- M. Timur Friedman (Networks) (delay, routing, end, case, pattern, space)
|--
|--- Luis Henrique Costa Neto (Logic, Formal languages and automata theory, Symbolic and algebraic algorithms, Symbolic and algebraic manipulation, Knowledge representation and reasoning, Language features) (case, petri, nets, modeling)
|--- Nicolas Peugnet (Software verification and validation) (case, architecture)
|- Elif Ebru Ohri (Network protocols, Numerical analysis, Models of computation) (synchronization, computation, communication)
That’s all for this tutorial!