# research

I started a PhD in Computer Science and Mathematics under the supervision of Prof. Floriana Esposito and Dr. Nicola Di Mauro in November 2014.

Machine Learning is my main research area; I am part of the ML group of the LACAM laboratory.

## research interests

* *Probabilistic Graphical Models* (PGMs), in particular *Tractable Probabilistic Graphical Models* (TPGMs) and *structure learning* algorithms
* **Representation and Deep Learning**

## publications

### Fast and Accurate Density Estimation with Extremely Randomized Cutset Networks

Cutset Networks can be effectively learned by totally random conditioning, delivering state-of-the-art density estimation performance in a fraction of the time of previous Cutset Network and other tractable model structure learners. [ pdf | code ]
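The totally random conditioning step is simple enough to sketch. Below is a minimal, hypothetical Python rendition (not the paper's implementation): each internal OR node conditions on a uniformly random variable and splits the data on its value, recursion stops at a size threshold, and leaves are fully factorized Bernoulli distributions, a simplification of the tree-structured leaf distributions used in practice.

```python
import numpy as np

def learn_xcnet(data, min_instances=10, alpha=1.0, rng=None):
    # Sketch of "extremely randomized" cutset-network learning:
    # recursively condition on a uniformly random variable and split
    # the binary data on its value, until too few instances remain.
    # Leaves are fully factorized Bernoullis (a simplification).
    rng = np.random.default_rng() if rng is None else rng
    n, d = data.shape
    if n <= min_instances or d == 1:
        probs = (data.sum(axis=0) + alpha) / (n + 2 * alpha)  # Laplace-smoothed
        return ("leaf", probs)
    var = int(rng.integers(d))                 # random conditioning variable
    on = data[:, var] == 1
    w = (on.sum() + alpha) / (n + 2 * alpha)   # smoothed branch weight P(var = 1)
    # children are learned on the data slices, with `var` removed from scope
    left = learn_xcnet(np.delete(data[~on], var, axis=1), min_instances, alpha, rng)
    right = learn_xcnet(np.delete(data[on], var, axis=1), min_instances, alpha, rng)
    return ("or", var, w, left, right)

def log_prob(node, x):
    # Evaluate the learned density at a complete binary assignment x.
    if node[0] == "leaf":
        p = node[1]
        return float(np.sum(np.where(x == 1, np.log(p), np.log1p(-p))))
    _, var, w, left, right = node
    rest = np.delete(x, var)
    if x[var] == 1:
        return float(np.log(w)) + log_prob(right, rest)
    return float(np.log(1 - w)) + log_prob(left, rest)
```

Because no scoring is involved in picking the conditioning variable, structure learning costs only the data splits, which is where the speed-up over likelihood-guided Cutset Network learners comes from.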

### Generative Probabilistic Models for Positive-Unlabeled Learning

Probabilistic generative models provide an effective way to elicit reliable negative samples from unlabeled ones for Positive-Unlabeled learning schemes. [ pdf ]

### Encoding and Decoding Representations with Sum- and Max-Product Networks

Sum- and Max-Product Networks are exploited as autoencoders without the need to train them to reconstruct their input. These embeddings are cheap and surprisingly effective alternatives for structured output prediction tasks. [ pdf | bibtex | code ]

```bibtex
@InProceedings{Vergari2017,
  author    = {Antonio Vergari and Robert Peharz and Nicola Di Mauro and Floriana Esposito},
  title     = {Encoding and Decoding Representations with Sum- and Max-Product Networks},
  booktitle = {ICLR 2017: Proceedings of the Workshop Track of the 5th International Conference on Learning Representations},
  year      = {2017}
}
```

### Towards Representation Learning with Tractable Probabilistic Models

Generating embeddings from generative models as black boxes by concatenating the probability values of queries generated at random. [ pdf | bibtex | code ]

```bibtex
@article{Vergari2016b,
  author    = {Antonio Vergari and Nicola Di Mauro and Floriana Esposito},
  title     = {Towards Representation Learning with Tractable Probabilistic Models},
  journal   = {CoRR},
  volume    = {abs/1608.02341},
  year      = {2016},
  url       = {http://arxiv.org/abs/1608.02341},
  timestamp = {Wed, 07 Jun 2017 14:41:08 +0200},
  biburl    = {http://dblp.uni-trier.de/rec/bib/journals/corr/VergariME16},
  bibsource = {dblp computer science bibliography, http://dblp.org}
}
```
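The idea of the paper above can be sketched in a few lines, assuming a hypothetical `marginal_log_prob(x, mask)` interface standing in for the black-box generative model: each embedding feature is the log-probability the model assigns to a random marginal query built from a random subset of the instance's components.

```python
import numpy as np

def random_query_embedding(marginal_log_prob, x, n_queries=50, q=0.5, rng=None):
    # Sketch of black-box embedding extraction: each feature is the
    # log-probability of a random marginal query over a random subset
    # of x's variables. `marginal_log_prob` is a hypothetical interface.
    rng = np.random.default_rng() if rng is None else rng
    d = len(x)
    feats = []
    for _ in range(n_queries):
        mask = rng.random(d) < q              # random subset of variables
        if not mask.any():
            mask[int(rng.integers(d))] = True  # ensure a non-empty query
        feats.append(marginal_log_prob(x, mask))
    return np.array(feats)

# Toy black box for illustration: a product of independent Bernoullis,
# whose marginal queries are available in closed form.
def make_bernoulli_marginal(probs):
    def marginal_log_prob(x, mask):
        p, xm = probs[mask], x[mask]
        return float(np.sum(np.where(xm == 1, np.log(p), np.log1p(-p))))
    return marginal_log_prob
```

Any model supporting tractable marginal queries (e.g. an SPN or Cutset Network) could be plugged in behind the same interface; the embedding never inspects the model's internals.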

### Visualizing and Understanding Sum-Product Networks

By interpreting Sum-Product Networks as neural networks they are exploited for representation learning and visualization techniques leveraging tractable inference are employed for better model interpretability. [ pdf | bibtex | code ]

```bibtex
@article{Vergari2016a,
  author    = {Antonio Vergari and Nicola Di Mauro and Floriana Esposito},
  title     = {Visualizing and Understanding Sum-Product Networks},
  journal   = {CoRR},
  volume    = {abs/1608.08266},
  year      = {2016},
  url       = {http://arxiv.org/abs/1608.08266},
  timestamp = {Wed, 07 Jun 2017 14:40:26 +0200},
  biburl    = {http://dblp.uni-trier.de/rec/bib/journals/corr/VergariME16a},
  bibsource = {dblp computer science bibliography, http://dblp.org}
}
```

### Multi-Label Classification with Cutset Networks

Learning the structure of Cutset Networks discriminatively and using efficient MPE inference, delivering state-of-the-art predictions on multi-label classification problems. [ pdf | bibtex | code ]

```bibtex
@InProceedings{Dimauro2016,
  title     = {Multi-Label Classification with Cutset Networks},
  author    = {Nicola {Di Mauro} and Antonio Vergari and Floriana Esposito},
  booktitle = {PGM 2016: Proceedings of the Eighth International Conference on Probabilistic Graphical Models},
  editor    = {A. Antonucci and G. Corani and C.P. de Campos},
  year      = {2016},
  pages     = {147-158},
  publisher = {JMLR Workshop and Conference Proceedings},
  volume    = {52}
}
```

### Learning Bayesian Random Cutset Forests

Cutset Networks are extended into an ensemble framework using random projections and bagging. Bayesian Random Cutset Forests are shown to be fairly competitive density estimators in an extensive empirical comparison. [ pdf | bibtex | code ]

```bibtex
@InProceedings{dimauro15ismis,
  title     = {Learning Bayesian Random Cutset Forests},
  author    = {Nicola {Di Mauro} and Antonio Vergari and Teresa M.A. Basile},
  booktitle = {ISMIS},
  editor    = {F. Esposito et al.},
  year      = {2015},
  pages     = {1-11},
  publisher = {Springer},
  series    = {LNAI},
  volume    = {9384}
}
```

### Learning Accurate Cutset Networks by Exploiting Decomposability

The likelihood function of a Cutset Network is shown to be decomposable, thus yielding efficient evaluation routines. A Bayesian score function is used for principled structure learning, leading to more accurate networks. [ pdf | bibtex | code ]

```bibtex
@InProceedings{dimauro15aixia,
  title     = {Learning Accurate Cutset Networks by Exploiting Decomposability},
  author    = {Nicola {Di Mauro} and Antonio Vergari and Floriana Esposito},
  booktitle = {AI*IA 2015: Advances in Artificial Intelligence},
  editor    = {Marco Gavanelli and Evelina Lamma and Fabrizio Riguzzi},
  year      = {2015},
  pages     = {1-12},
  publisher = {Springer},
  series    = {LNCS},
  volume    = {9336}
}
```

### Simplifying, Regularizing and Strengthening Sum-Product Network Structure Learning

Tweaking LearnSPN by limiting the number of child nodes when splitting, by modeling leaves with Chow-Liu trees, and by blending bagging into sum nodes, in order to improve both network structure quality and likelihood score. [ pdf | bibtex | code | supplemental ]

```bibtex
@Inbook{Vergari2015,
  author    = {Vergari, Antonio and {Di Mauro}, Nicola and Esposito, Floriana},
  editor    = {Appice, Annalisa and Rodrigues, Pedro Pereira and Santos Costa, Vitor and Gama, Jo{\~a}o and Jorge, Alipio and Soares, Carlos},
  chapter   = {Simplifying, Regularizing and Strengthening Sum-Product Network Structure Learning},
  title     = {Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2015, Porto, Portugal, September 7-11, 2015, Proceedings, Part II},
  year      = {2015},
  publisher = {Springer International Publishing},
  address   = {Cham},
  pages     = {343--358},
  isbn      = {978-3-319-23525-7},
  doi       = {10.1007/978-3-319-23525-7_21},
  url       = {http://dx.doi.org/10.1007/978-3-319-23525-7_21}
}
```