Unige Instructors

| Course | Instructor(s) | Dates |
| --- | --- | --- |
| An Introduction to Prolog | Viviana Mascardi | 18-19 Jan 2024 |
| Computational models of visual perception | Fabio Solari | 19-23 Feb 2024 |
| Strategic Choices: Games and Team Optimization | Lucia Pusillo, Marcello Sanguineti | 8-12 Apr 2024 |
| Robust control of agent teams: application to vehicle platoons | Enrico Zero | 9-12 Apr 2024 |
| Introduction to Type Theory: from foundations to practice | Francesco Dagnino, Jacopo Emmenegger | 15-19 Apr 2024 |
| High Performance Computing for heterogeneous accelerator architectures | Daniele D'Agostino | 6-10 May 2024 |
| Affective Computing | Radoslaw Niewiadomski | 13-17 May 2024 |
| Theory and Practice of Runtime Monitoring | Davide Ancona, Angelo Ferrando | 3-7 Jun 2024 |
| Computer Vision Crash Course | Francesca Odone, Nicoletta Noceti | 10-14 Jun 2024 |
| Deep Learning: a hands-on introduction | Francesca Odone, Nicoletta Noceti | 10-14 Jun 2024 |
| Verification of Neural Networks | Stefano Demarchi, Armando Tacchella | 10-14 Jun 2024 |
| Introduction to discrete differential geometry | Claudio Mancinelli | 17-21 Jun 2024 |
| Effective habits and skills for successful young scientists | Fabio Roli | 24-28 Jun 2024 |
| Machine Learning Crash Course (MLCC) 2024 | Lorenzo Rosasco, Silvia Villa, Giovanni Alberti, Simone Di Marino, Matteo Santacesaria | 25-28 Jun 2024 |
| Theory and Practice of Learning from Data | Luca Oneto | TBD |
| Optimization of Electric-Vehicle Charging: scheduling and planning problems | Michela Robba, Giulio Ferro | July 2024 (TBD) |
| Theory and Practice of Virtual Reality Systems | Manuela Chessa | 15-19 Jul 2024 |
External Instructors (CNR/IMT Lucca)

| Course | Instructor(s) | Dates |
| --- | --- | --- |
| An Introduction to Model Predictive Control and Rolling Horizon Optimization | Mauro Gaggero | 18-22 Mar 2024 |
| Accelerated Parallel Systems: the GPU and FPGA cases | Antonella Galizia, Christian Pilato | 8-12 Apr 2024 |
| Information Hiding from Attack to Defense | Luca Caviglione | 1-5 Jul 2024 |
Detailed information
An Introduction to Prolog
Duration: 12 hours
Instructor: Viviana Mascardi, viviana.mascardi@unige.it
When: 18th-19th January 2024, 9.30-13.00 and 14.00-15.30
Where: DIBRIS, Via Dodecaneso 35
Abstract: "In the summer of 1972, Alain Colmerauer and his team in Marseille developed and implemented the first version of the logic programming language Prolog. Together with both earlier and later collaborations with Robert Kowalski and his colleagues in Edinburgh, this work laid the practical and theoretical foundations for the Prolog and logic programming of today. Prolog and its related technologies soon became key tools of symbolic programming and Artificial Intelligence."
This statement is taken from the "The Year of Prolog" web page, http://prologyear.logicprogramming.org/: the 50th anniversary of Prolog was celebrated in 2022 with scientific and dissemination initiatives all over the world. Keeping the momentum going, we offer a compact introductory course on Prolog, with practical exercises and an overview of existing and future Prolog applications. In particular, we highlight the potential of Prolog for implementing cognitive intelligent agents and for supporting eXplainable Artificial Intelligence (XAI), thanks to its declarative flavor.
Program:
Prolog syntax
Prolog operational semantics
Extra-logical and meta-logical predicates
Examples
Applications
Future perspectives
References:
- L. Sterling, E. Shapiro. The Art of Prolog: Advanced Programming Techniques, second edition. MIT Press, 1994.
Interdisciplinarity: The course will also be offered in the teaching offer of the PhD-AI National Program, https://www.phd-ai.it.
Computational models of visual perception
Duration: 20 hours (+ final project)
Instructor(s): Fabio Solari – DIBRIS, University of Genoa – fabio.solari@unige.it
When: 19th - 23rd February 2024
Where: via Dodecaneso 35
Abstract
This course introduces paradigms and methods that allow students to develop computational models of visual perception, which are based on hierarchical networks of interacting neural units, mimicking biological processing stages.
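As a taste of the kind of model discussed in the course, the sketch below (a rough illustration only; it assumes NumPy and SciPy are installed, and its parameters are illustrative rather than taken from the course material) builds a small bank of oriented Gabor filters, a common stand-in for V1-like functional neural units, and applies it to a toy image.

```python
# Minimal V1-like processing stage: a bank of oriented Gabor filters.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=21, wavelength=6.0, orientation=0.0, sigma=4.0):
    """Real part of a Gabor filter tuned to the given orientation (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(orientation) + y * np.sin(orientation)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_rot / wavelength)
    return envelope * carrier

# Toy "image": random noise standing in for a real stimulus.
image = np.random.rand(64, 64)

# Responses of four orientation-tuned units (0, 45, 90, 135 degrees).
orientations = np.deg2rad([0, 45, 90, 135])
responses = [convolve2d(image, gabor_kernel(orientation=t), mode="same")
             for t in orientations]
print("response maps:", [r.shape for r in responses])
```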
Program
- Introduction to visual perception and to the cortical dorsal and ventral streams for action and recognition tasks.
- Hierarchical networks of functional neural units. Computational models of the visual features estimation for action and recognition. Comparison among computational models and computer vision algorithms. Benchmark Datasets. How to use computational models to improve virtual and augmented reality systems to allow natural perception and interaction.
- Case studies: models and algorithms of the literature.
References
- R. Hussain, M. Chessa, F. Solari, “Mitigating Cybersickness in Virtual Reality Systems through Foveated Depth-of-Field Blur”. Sensors, 21(12), p.4006, 2021
- G. Maiello, M. Chessa, P.J. Bex, F. Solari. Near-optimal combination of disparity across a log-polar scaled visual field. PLoS Computational Biology 16(4): e1007699, 2020
- W.S. Grant, J. Tanner, L. Itti. "Biologically plausible learning in neural networks with modulatory feedback." Neural Networks 88: 32-48, 2017
- F. Solari, M. Chessa, NK Medathati, P. Kornprobst. “What can we expect from a V1-MT feedforward architecture for optical flow estimation?”. Signal Processing: Image Communication, 39:342-354, 2015
- G. Maiello, M. Chessa, F. Solari, P.J. Bex. The (In) Effectiveness of Simulated Blur for Depth Perception in Naturalistic Images. PLoS one, 10(10), pp. e0140230, 2015
- A.F. Russell, S. Mihalaş, R. von der Heydt, E. Niebur, R. Etienne-Cummings. "A model of proto-object based saliency." Vision research 94: 1-15, 2014
- P. Bayerl, H. Neumann. “Disambiguating visual motion by form-motion interaction—a computational model”. International Journal of Computer Vision. 72(1):27-45, 2007
- R.S. Zemel, P. Dayan, A. Pouget. “Probabilistic interpretation of population codes”. Neural Computation, 10(2), pp.403-430, 1998
Interdisciplinarity: PhD program in Bioengineering and Robotics.
An Introduction to Model Predictive Control and Rolling Horizon Optimization
Duration: 20 hours
Instructor: Mauro Gaggero, National Research Council of Italy (CNR), Genova - mauro.gaggero@cnr.it
When: 18th - 22nd March 2024
Where: University of Genoa (classroom to be announced in Via Opera Pia) or Microsoft Teams platform
Abstract
Model predictive control (MPC) and rolling-horizon optimization are optimization and control paradigms that have been widely employed in the literature owing to their ability to exploit information on the future behavior of the system at hand, their capability of dealing with constraints, and the many theoretical results available about their properties. For several decades, MPC has been used for process control in chemical plants, and nowadays it is employed for the optimization of many other complex setups such as, for instance, power plants, mechatronic systems, logistics operations, and cloud computing applications. It still attracts ongoing interest from researchers in both industry and academia. In the academic world, MPC is attractive to researchers working in both Control Systems and Operations Research, since it combines several aspects of the two disciplines. The course will start from the basic theoretical notions of MPC and rolling-horizon optimization, together with recent developments in design and implementation. Special attention will be devoted to the computational aspects of MPC and to the existing techniques to reduce the overall required effort. An overview of receding-horizon state estimation, a topic strictly related to MPC, will be given as well. Finally, recent applications of MPC and rolling-horizon optimization will be presented, together with details of their software implementation.
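To fix ideas before the program below, here is a minimal receding-horizon sketch (assuming the CVXPY package; the double-integrator model, horizon, and weights are illustrative placeholders, not course material): at every step a finite-horizon constrained problem is solved and only the first control move is applied.

```python
# Receding-horizon (MPC) loop for a toy discrete-time double integrator.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 1.0], [0.0, 1.0]])    # state dynamics x+ = A x + B u
B = np.array([[0.5], [1.0]])
Q, R, N = np.eye(2), 0.1 * np.eye(1), 10   # stage cost weights and horizon

x_current = np.array([5.0, 0.0])           # initial state
for k in range(15):                        # closed-loop simulation
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, constraints = 0, [x[:, 0] == x_current]
    for t in range(N):
        cost += cp.quad_form(x[:, t], Q) + cp.quad_form(u[:, t], R)
        constraints += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t],
                        cp.abs(u[:, t]) <= 1.0]       # input constraint
    cp.Problem(cp.Minimize(cost), constraints).solve()
    # Apply only the first optimal input, then shift the horizon.
    x_current = A @ x_current + B @ np.array(u[:, 0].value)
print("final state:", x_current)
```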
Program
- Introduction to discrete-time model predictive control and rolling-horizon optimization
- Model predictive control with constraints
- Model predictive control and stability analysis
- Moving-horizon state estimation
- Real-time implementations of model predictive control
- Examples of applications of model predictive control and rolling-horizon optimization
Interdisciplinarity
The course is suitable for students of the PhD program in Computer Science and Systems Engineering
References
- M. Morari, J.H. Lee, “Model predictive control: past, present and future”, Computers and Chemical Engineering, vol. 23 pp. 667-682, 1999.
- D. Mayne, J. Rawlings, C. Rao, and P. Scokaert, “Constrained model predictive control: stability and optimality,” Automatica, vol. 36, no. 6, pp. 789–814, 2000.
- E.F. Camacho, C. Bordons, “Model Predictive Control”, Series Advanced Textbooks in Control and Signal Processing, Springer, 2004.
- L. Wang, “Model Predictive Control System Design and Implementation Using MATLAB”, Series Advances in Industrial Control, Springer, 2009.
- A. Alessandri, M. Baglietto, G. Battistelli, M. Gaggero, "Moving-horizon state estimation for nonlinear systems using neural networks," IEEE Trans. on Neural Networks, vol. 22, no. 5, pp. 768-780, 2011.
- A. Alessandri, M. Gaggero, F. Tonelli, "Min-max and predictive control for the management of distribution in supply chains," IEEE Trans. on Control Systems Technology, vol. 19, no. 5, pp. 1075-1089, 2011.
- A. Alessandri, C. Cervellera, M. Gaggero, "Nonlinear predictive control of container flows in maritime intermodal terminals," IEEE Trans. on Control Systems Technology, vol. 21, no. 4, pp. 1423-1431, 2013.
- M. Gaggero, L. Caviglione, "Predictive control for energy-aware consolidation in cloud datacenters", IEEE Trans. on Control Systems Technology, vol. 24, no. 2, pp. 461-474, 2016.
- A. Alessandri, M. Gaggero, "Fast moving horizon state estimation for discrete-time systems using single and multi iteration descent methods", IEEE Trans. on Automatic Control, vol. 62, no. 9, pp. 4499-4511, 2017.
- M. Gaggero, L. Caviglione, "Model predictive control for energy-efficient, quality-aware, and secure virtual machine placement", IEEE Trans. on Automation Science and Engineering, vol. 16, no. 1, pp. 420-432, 2019, DOI:10.1109/TASE.2018.2826723.
- M. Gaggero, D. Di Paola, A. Petitti, L. Caviglione, "When time matters: predictive mission planning in cyber-physical scenarios", IEEE Access, vol. 7, no. 1, pp. 11246-11257, 2019, DOI:10.1109/ACCESS.2019.2892310.
Strategic Choices: Games and Team Optimization
Duration: 20 hours
Instructors: Lucia Pusillo - University of Genoa (DIMA) - pusillo@dima.unige.it
Marcello Sanguineti - University of Genoa (DIBRIS) - marcello.sanguineti@unige.it
When: 8th - 12th April 2024
Where:
Abstract: Game and Team Theory study strategic interactions among two or more agents, who must make decisions in order to optimize their objectives. They have various links to disciplines such as Economics, Engineering, Computer Science, Political and Social Sciences, Biology, and Medicine. These links provide incentives for interdisciplinary research and make the role of Game and Team Theory invaluable in a variety of applications. The main goal of this course is to provide students with the basic mathematical tools to deal with interactive decision problems and to illustrate them via case studies.
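As a first taste of the solution concepts listed in the program, the following sketch (NumPy only; the Prisoner's Dilemma payoffs are a textbook example, not course material) enumerates the pure-strategy Nash equilibria of a 2x2 bimatrix game.

```python
# Enumerate pure-strategy Nash equilibria of a bimatrix game (Prisoner's Dilemma).
import numpy as np

# Payoff matrices: rows = strategies of player 1, columns = strategies of player 2.
A = np.array([[-1, -3],    # player 1's payoffs (Cooperate, Defect)
              [ 0, -2]])
B = np.array([[-1,  0],    # player 2's payoffs
              [-3, -2]])

equilibria = []
for i in range(A.shape[0]):
    for j in range(A.shape[1]):
        best_for_1 = A[i, j] >= A[:, j].max()   # no profitable unilateral deviation
        best_for_2 = B[i, j] >= B[i, :].max()
        if best_for_1 and best_for_2:
            equilibria.append((i, j))

print("pure-strategy Nash equilibria (row, column):", equilibria)  # [(1, 1)]
```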
Program
- Non-cooperative games
- Strategic games and extensive-form games
- Incomplete-information games
- Well-posedness problems for Nash equilibria
- Repeated games
- Evolutionary stable strategies
- Multiobjective games and solution concepts
- Cooperative TU-games
- Solutions for cooperative games
- Partial cooperative games
- Team optimization with stochastic information structure.
- Examples of applications in contexts such as:
- environment models;
- nonverbal communication & social interactions;
- medicine and biology;
- optimal production;
- telecommunication networks;
- transportation networks.
Interdisciplinarity. It is a methodological course, of interest to a large-spectrum audience. In particular, it can be offered in the other DIBRIS PhD Courses (“Security Risk and Vulnerability” - it is already included in the offer of this PhD - “Bioengineering and Robotics”, and “Robotics and Intelligent Machines”), in the DITEN PhD Course “Sciences and Technologies for Electrical Engineering and Complex Systems for Mobility”, in the DIME PhD Courses “Mechanical, Energy and Management Engineering” and “Engineering of modeling, machines and systems for energy, the environment and transportation”, and in the DIEC PhD Courses “Economics and Quantitative Methods”, “Strategic Engineering and Decision Methods”, and “Management and Security”. In previous years, we already had students from some of the above-mentioned PhD Courses.
References
- Course notes/slides.
- A. Dontchev, T. Zolezzi. ''Well-Posed Optimization Problems''. Lecture Notes in Math., vol. 1543. Springer, 1993.
- D. Fudenberg, J. Tirole. ''Game Theory''. MIT Press, 1991.
- G. Gnecco, M. Sanguineti. “Team Optimization Problems with Lipschitz Continuous Strategies”, Optimization Letters, vol. 5, pp. 333-346, 2011.
- G. Gnecco, M. Sanguineti. “New Insights into Witsenhausen’s Counterexample”, Optim. Let. 6:1425-1446, 2012.
- G. Gnecco, Y. Hadas, M. Sanguineti. “Some Properties of Transportation Network Cooperative Games". Networks 74:161–173, 2019.
- G. Gnecco, M. Sanguineti, G. Gaggero. “Suboptimal Solutions to Team Optimization Problems with Stochastic Information Structure”. SIAM J. on Optimization 22:212-243, 2012.
- Y. Hadas, G. Gnecco, M. Sanguineti. "An Approach to Transportation Network Analysis Via Transferable Utility Games". Transportation Res. Part B: Methodological, vol. 105, pp. 120-143, 2017.
- K. Kolykhalova, G. Gnecco, M. Sanguineti, G. Volpe, A. Camurri, “Automated Analysis of the Origin of Movement: An Approach Based on Cooperative Games on Graphs". IEEE Trans. on Human-Machine Systems 50:550-560, 2020.
- H. Peters. ''Game Theory- A Multileveled Approach''. Springer, 2008.
- L. Pusillo. "Evolutionary Stable Strategies and Well Posedness Property". Appl. Math. Sc. 7:363-376, 2013.
- L. Pusillo, S. Tijs. ''E-equilibria for Multicriteria Games''. In: R. Cressman and P. Cardaliaguet, The Annals of the Int. Society of Dynamic Games (ISDG), vol. 12, pp. 217-228, Birkhauser, 2012.
- R. Zoppoli, M. Sanguineti, G. Gnecco, T. Parisini. “Neural Approximations for Optimal Control and Decision". Springer, Communications and Control Engineering Series. London, 2020
Robust control of agent teams: application to vehicle platoons
Duration: 18 hours
Instructor: Enrico Zero – University of Genoa (DIBRIS) – enrico.zero@dibris.unige.it
When: 9th-12th April 2024
Where: via Opera Pia 13, Genova
Abstract
This short course considers linear quadratic team decision problems. It shows that linear decisions are optimal and can be found by solving a linear matrix inequality (LMI). One example is presented in detail: robust distance control in a platoon of vehicles. The course will give participants the opportunity to get hands-on experience with this technique using a proper Matlab toolbox (the Linear Matrix Inequalities, LMI, Toolbox), through practical examples. The trainees will be required to develop a specific problem formulation, implement it in Matlab, and present the results in a final report, whose discussion will serve as the final examination.
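The course itself relies on Matlab's LMI Toolbox; purely as an analogous warm-up, the sketch below (assuming CVXPY with an SDP solver such as SCS) solves a standard discrete-time Lyapunov LMI, not the team-decision LMI developed in the course.

```python
# Standard LMI warm-up: find P > 0 with A' P A - P < 0, certifying stability of x+ = A x.
import numpy as np
import cvxpy as cp

A = np.array([[0.9, 0.2],
              [0.0, 0.8]])                        # a stable test matrix
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6                                        # strictness margin
constraints = [P >> eps * np.eye(n),
               A.T @ P @ A - P << -eps * np.eye(n)]
problem = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
problem.solve(solver=cp.SCS)
print(problem.status)
print("P =\n", P.value)
```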
Program
- 3 hours: Introduction to robust control; introduction to new findings in robust control of agent teams; examples.
- 3 hours: Introduction to the formulation of optimization problems in Matlab; introduction to the LMI toolbox in Matlab.
- 3 hours: Introduction to train signalling; robust control of the distance in a platoon of vehicles; Matlab implementation of the problem.
- 9 hours: Project work.
References
- Gattami, A., Bernhardsson, B. (2007). Minimax team decision problems. In Proceedings of the American control conference. New York, USA pp.766–771.
- Gattami, A., Bernhardsson, B., & Rantzer, A. (2012). Robust team decision theory. IEEE Transactions on Automatic Control, 57(3), 794–798.
- C. Bersani, S. Qiu, R. Sacile, M. Sallak, W. Schön, “Rapid, robust, distributed evaluation and control of train scheduling on a single line track”, Control Engineering Practice, 35, pp. 12–21, 2015.
Accelerated Parallel Systems: the GPU and FPGA cases
Duration: 20 hours (five half-days)
Instructor(s): Antonella Galizia – IMATI-CNR – antonella.galizia@ge.imati.cnr.it; Christian Pilato – Politecnico di Milano – christian.pilato@polimi.it
When: 8th - 12th April 2024
Where: DIBRIS-UNIGE Via Dodecaneso 35, Genova (online attendance will be possible)
Abstract
With the end of Moore's law for sequential computing architectures and the advent of the multi- and many-core era, managing parallelism is no longer the concern of a restricted community: it has become a need for everybody interested in exploiting an adequate fraction of the performance offered by widespread modern computing architectures, including desktop and mobile devices. A computer is nowadays a complex system with heterogeneous computational units, including multi-core CPUs and many-core accelerators such as Graphics Processing Units (GPUs), Field-Programmable Gate Arrays (FPGAs), and others.
The aim of the course is to present the state of the practice on computing systems equipped with accelerator technology. The main focus is on high efficiency, which is of utmost importance but can have different meanings: in the high-performance computing and data center domains, high efficiency mostly relates to performance, while in the mobile and IoT space research communities think about accelerators more from a power/energy perspective. The course considers the programming of Complex Heterogeneous Parallel Systems (CHPS), and in particular accelerators such as GPUs and FPGAs; the overall goal and challenge is software portability and performance, to ensure effectiveness and efficiency of target applications.
This edition of the course will discuss two different approaches: GPGPU-based solutions and the hardware specialization of the application on FPGA. In particular, it will be shown, with practical cases, how to design and implement applications able to exploit the available computational resources through a suitable selection of programming tools, communication and domain-oriented libraries, and design and implementation strategies. In this regard, the course includes a hands-on part that the student may dedicate to a general case study or to a personalized case, depending on specific interests.
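As a flavour of the hands-on part, the following sketch (a stand-in only: it assumes the Numba package and a CUDA-capable GPU, whereas the course covers CUDA/OpenACC/OpenCL and FPGA flows in depth) launches an element-wise vector-addition kernel from Python.

```python
# Element-wise vector addition on the GPU via Numba's CUDA JIT.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)              # global thread index
    if i < out.shape[0]:
        out[i] = a[i] + b[i]

n = 1 << 20
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)   # Numba transfers the arrays for us

assert np.allclose(out, a + b)
print("GPU vector add OK")
```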
Program
• Introduction to complex heterogeneous parallel systems (CHPS): from personal computers to High Performance clusters and GPUs.
• A coarse-grained analysis of performance and programming issues for CHPS, including: memory hierarchies and data movement; computational units and different levels/types of parallelism; communication issues.
• Overview of GPGPU-oriented parallel processing libraries, languages and tools. This will include CUDA, OpenACC and OpenCL insights, and GPU-accelerated libraries.
• Overview of CUDA advanced programming features, as well as examples of the higher-level software ecosystem, such as HPC solutions for Python, accelerated libraries, tools for profiling and debugging, and high-level synthesis tools for automatic hardware customization.
• Designing parallel applications and practical experiences: hands-on case studies selected from linear algebra, computational geometry, Monte Carlo simulation, and data science applications. Individual case studies on topics proposed by students will be encouraged (8 hours).
Interdisciplinarity: Besides CSSE, over the years the course has been attended by PhD students from other programs interested in speeding up their codes, mainly in mechanical engineering, telecommunications, robotics, and biology/bioinformatics.
References: Slides of the course will be provided to students.
Introduction to Type Theory: from foundations to practice
Duration: ~24 hours
Instructor(s):
Francesco Dagnino – DIBRIS, Università di Genova – francesco.dagnino@dibris.unige.it
Jacopo Emmenegger – DIMA, Università di Genova – emmenegger@dima.unige.it
When: 15th-19th April 2024
Where: DIBRIS/DIMA, Valletta Puggia, Università di Genova
Abstract
Proof assistants are tools designed to write formal proofs and automatically check their correctness. They are increasingly used in many different domains, from software verification to formalized mathematics. Most popular proof assistants, such as Agda, Coq or Lean, implement a constructive logic based on a (dependent) type theory. This means that they are strongly typed functional programming languages where types and programs are seen as logical formulas and proofs, respectively, and then the correctness of a proof is ensured just by typechecking a program.
In the course, we will study fundamental notions and results on type theories, explaining their connection with logic, and we will experiment with formal reasoning in a type theory, using Agda as a concrete system.
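The course uses Agda; purely as an illustration of the proofs-as-programs idea in a similar proof assistant, the following Lean 4 snippet shows two propositional proofs whose correctness is checked by the typechecker.

```lean
-- Proofs as programs: a term of this function type is a proof of the
-- propositional tautology A → (B → A); the typechecker checks the proof.
example (A B : Prop) : A → B → A :=
  fun a _b => a

-- Conjunction corresponds to a pair (product type): from a proof of A ∧ B
-- we can project out a proof of B.
example (A B : Prop) (h : A ∧ B) : B :=
  h.right
```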
Program
Below we report a tentative program. It will be adapted depending on the audience.
- Introduction, Constructive reasoning
- Untyped Lambda-Calculus: terms, reduction, confluence, normalisation
- Typing a la Curry vs Typing a la Church
- Simple Types and Intuitionistic Propositional Logic
- Strong Normalisation and Consistency
- Dependent Types and Quantifiers, Identity Types and Equality
- Advanced Agda Features (Inductive Types, Universes, Record Types, …)
Interdisciplinary: PhD in Mathematics, PhD in Security, Risk and Vulnerability, PhD in Philosophy
References
[1] J.Y. Girard, Y. Lafont, P. Taylor. Proofs and Types. Cambridge University Press, 1989.
[2] M.H.B. Sorensen, P. Urzyczyn. Lectures on the Curry-Howard Isomorphism. Elsevier, 2006.
[3] B. Nordstrom, K. Petersson, J.M. Smith. Programming in Martin-Löf's Type Theory: An Introduction. Clarendon Press, 1990.
[4] M. Hofmann. Syntax and Semantics of Dependent Types. Cambridge University Press, 1997
[5] The Univalent Foundation Program. Homotopy Type Theory. Institute for Advanced Study, Princeton, 2013.
[6] Agda (https://agda.readthedocs.io/en/v2.6.4/)
High Performance Computing for heterogeneous accelerator architectures
Duration: 20 hours
Instructor: Daniele D’Agostino – DIBRIS Unige
When: 6th - 10th May 2024
Where: Via Dodecaneso
Abstract
For most scientists, the abstract fact that an algorithm solving a problem exists is enough, while its efficient implementation, in terms of exploiting the available computational capabilities, is mostly disregarded. But with the end of Moore's law for sequential computing architectures and the advent of the multi- and many-core era, managing parallelism is no longer the concern of a restricted ICT community: it has become a need for everybody interested in exploiting an adequate fraction of the performance offered by widespread modern computing architectures. The aim of the course is to provide a glance at the different aspects involved in the efficient and effective programming of current heterogeneous computing systems equipped with many-core x86 architectures and accelerators, in particular graphics cards (GPUs). It therefore conveys the knowledge required to develop a thorough understanding of the interactions between software and hardware at the core, socket, node and cluster level. In particular, it will be shown, with practical cases, how the design and implementation of programs can exploit the available computational resources through a suitable selection of programming paradigms and of compiling and profiling tools. The course includes a hands-on part that the student may dedicate to a general case study or to a personalized case, depending on specific interests.
With respect to past editions, this course will focus on oneAPI, an open, cross-industry, standards-based, unified, multi-architecture, multi-vendor programming model adopted by Intel, which provides a unified application programming interface (API) intended to be used across different computing accelerator architectures, including GPUs and field-programmable gate arrays (FPGAs).
The programming languages will be C/C++/Data Parallel C++
Program
- Introduction to complex heterogeneous parallel systems: from workstations to High Performance clusters and supercomputers.
- The von Neumann architecture then versus now, features and bottlenecks.
- Introduction to parallel architectures.
- Single Instruction Multiple Data (SIMD)
- Single Program Multiple Data (SPMD)
- The roofline performance model (see the formula after this list).
- Profiling and performance analysis
- The compiler, one of the most important software tools for HPC.
- Intel oneAPI
- Nvidia HPC SDK
- Optimal use of parallel resources – on the basis of students’ interests one or more of the following topics
- SYCL Programming for Accelerated Computing (CPUs, GPUs and FPGAs)
- Parallel programming for x86 nodes: OpenMP and MPI
- Parallel programming for GPUs: openACC and CUDA
- Parallel programming for HPC systems: MPI+X
- Designing parallel applications and practical experiences.
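For reference, the roofline model mentioned in the program bounds the performance attainable by a kernel with arithmetic intensity $I$ (flop/byte) on a machine with peak compute throughput $P_{\text{peak}}$ and memory bandwidth $B$:

$$P_{\text{attainable}} = \min\left(P_{\text{peak}},\; I \cdot B\right)$$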
References
- Slides and references will be provided to students
Affective Computing
Duration: 16 hours
Instructor(s): Radoslaw Niewiadomski – University of Genova – radoslaw.niewiadomski@unige.it
When: 13th - 17th May 2024
Where: flexible
Abstract
The main goal of Affective Computing is to develop models and systems that can recognize, interpret, process, and simulate human affective states. This emerging field has several applications, such as the creation of artificial agents, entertainment (e.g., video games), serious games, virtual training environments, positive computing and systems to improve well-being, marketing, and so on.
The course will offer foundational knowledge on the design and development of affective computing systems, encompassing theoretical foundations (a brief introduction to the psychology of emotions) as well as practical skills (such as designing data collection protocols and constructing computational models). The focus will be on nonverbal behaviors: facial expressions, body movements, gaze, and touch gestures that can be captured with RGB or depth cameras, motion capture systems, accelerometers, etc. The course will introduce fundamental concepts in the psychology of emotions and social psychology, demonstrating how these concepts can be modeled using AI techniques. Such models are utilized to recognize and classify humans' internal states (e.g., emotion recognition from facial expressions), to reason about human emotions (e.g., in constructing empathic artificial companions), to simulate emotions in artificial agents (such as social robots), and to enable these agents to communicate emotions to their human interaction partners.
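As a minimal illustration of the feature-based pipeline described above (assuming scikit-learn is installed; the features and labels below are synthetic placeholders standing in for real hand-crafted nonverbal features, e.g. distances between facial landmarks extracted with OpenFace or OpenPose), a classifier is trained to predict a toy affective label.

```python
# Toy feature-based recognition model on synthetic "nonverbal feature" vectors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_features = 200, 6
X = rng.normal(size=(n_samples, n_features))        # hand-crafted feature vectors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)       # toy binary affective label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)        # feature-based recognition model
print("toy recognition accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```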
Program
The following concepts will be discussed:
- psychological background: emotion theories and emotion regulation, appraisal theories, interpersonal stances/attitudes;
- relation between nonverbal behavior (e.g., facial expression) and internal state;
- multimodal expressions and (in)-congruent modalities;
- computational models of emotions;
- techniques, protocols and devices for multimodal data collection;
- an overview of freely available datasets in affective computing;
- design of data collection protocols;
- the data assessment: methods and tools, design of questionnaires, manual data annotation and validation, and inter-rater agreement;
- features extraction techniques: extracting features using freely available software (e.g., OpenFace, OpenPose), designing hand-crafted features;
- design, development and validation of feature-based computational models;
- examples of emotion recognition models and their applications to artificial agents.
Interdisciplinarity: Bioengineering and Robotics
References
Calvo, R., D'Mello, S., Gratch, J., Kappas, A. (Eds.), The Oxford Handbook of Affective Computing, Oxford University Press, 2015.
Scherer, K.R., Bänziger, T., Roesch, E. (Eds.), A Blueprint for Affective Computing: A sourcebook and manual, Oxford University Press, 2010.
Theory and Practice of Runtime Monitoring
Duration: about 20 hours
Instructor(s): Davide Ancona - University of Genoa (DIBRIS) - davide.ancona@unige.it, Angelo Ferrando - University of Genoa (DIBRIS) - angelo.ferrando@unige.it
When: 3rd - 7th June 2024
Where: via Dodecaneso 35, Valletta Puggia, DIBRIS
Abstract
The course provides a general introduction to Runtime Monitoring and Verification (RM&V) and to the theoretical and practical aspects of RML (Runtime Monitoring Language), a system-agnostic domain-specific language for RM&V. Use cases will be considered in the context of distributed systems, the Internet of Things, and robotic systems.
- An introduction to RM&V.
- Theory and practice of RML, a domain specific language for RM&V.
- RM&V of IoT applications based on Node.js.
- RM&V of Robotic systems based on ROS.
- Hands-on labs with RML.
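As a language-agnostic warm-up to the topics above (plain Python, no dependencies; RML itself, covered in the course, is far more expressive), here is a toy runtime monitor that checks the property "a resource is never used after being closed" over an event trace.

```python
# Toy runtime monitor: flag any use of a resource that is not currently open.
def monitor(trace):
    open_resources = set()
    for step, (event, res) in enumerate(trace):
        if event == "open":
            open_resources.add(res)
        elif event == "close":
            open_resources.discard(res)
        elif event == "use" and res not in open_resources:
            return f"violation at step {step}: use of closed resource {res!r}"
    return "trace accepted"

trace = [("open", "f"), ("use", "f"), ("close", "f"), ("use", "f")]
print(monitor(trace))   # violation at step 3: use of closed resource 'f'
```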
Interdisciplinarity: PhD Program on Security, Risk and Vulnerability
References
- Davide Ancona, Angelo Ferrando, Viviana Mascardi. Runtime Verification of Hash Code in Mutable Classes. FTfJP@ECOOP 2023: 25-31
- Davide Ancona, Luca Franceschini, Angelo Ferrando, Viviana Mascardi. RML: Theory and practice of a domain specific language for runtime verification. Science of Computer Programming, 205:102610 (2021).
- Angelo Ferrando, Louise A. Dennis, Rafael C. Cardoso, Michael Fisher, Davide Ancona, Viviana Mascardi. Toward a Holistic Approach to Verification and Validation of Autonomous Cognitive Systems. ACM Trans. Softw. Eng. Methodol. 30(4): 43:1-43:43 (2021)
- Angelo Ferrando, Rafael C. Cardoso, Michael Fisher, Davide Ancona, Luca Franceschini, Viviana Mascardi. ROSMonitoring: A Runtime Verification Framework for ROS. TAROS 2020: 387-399
- Luca Franceschini, RML: Runtime Monitoring Language, Ph.D. thesis, DIBRIS - University of Genova, URL http://hdl.handle.net/11567/1001856, March 2020.
- Davide Ancona, Francesco Dagnino, Luca Franceschini. A formalism for specification of Java API interfaces. ISSTA/ECOOP Workshops 2018: 24-26
- Davide Ancona, Luca Franceschini, Giorgio Delzanno, Maurizio Leotta, Marina Ribaudo, Filippo Ricca. Towards Runtime Monitoring of Node.js and Its Application to the Internet of Things. ALP4IoT@iFM 2017: 27-42
- Davide Ancona, Angelo Ferrando, Viviana Mascardi. Comparing Trace Expressions and Linear Temporal Logic for Runtime Verification. Theory and Practice of Formal Methods 2016: 47-64
- Angelo Ferrando, Davide Ancona, Viviana Mascardi. Decentralizing MAS Monitoring with DecAMon. AAMAS 2017: 239-248
- Davide Ancona, Angelo Ferrando, Viviana Mascardi. Parametric Runtime Verification of Multiagent Systems. AAMAS 2017: 1457-1459
- Y. Falcone, S. Krstic, G. Reger, D. Traytel, A taxonomy for classifying runtime verification tools, in: Runtime Verification – 18th International Conference, Proceedings, RV 2018, pp. 241–262.
- E. Bartocci, Y. Falcone, A. Francalanza, G. Reger, Introduction to runtime verification, in: Lectures on Runtime Verification – Introductory and Advanced Topics, 2018, pp. 1–33.
- Yliès Falcone, Klaus Havelund, Giles Reger. A Tutorial on Runtime Verification. Engineering Dependable Software Systems 2013: 141-175
- Martin Leucker, Christian Schallhart. A brief account of runtime verification. J. Log. Algebr. Program. 78(5): 293-303 (2009)
- RML: https://rmlatdibris.github.io
- Node.js: https://nodejs.org/en
Computer Vision Crash Course
Duration: 20 hours
Instructor(s):
Francesca Odone and Nicoletta Noceti
MaLGa DIBRIS, Università degli Studi di Genova
{francesca.odone, nicoletta.noceti}@unige.it
When: Summer 10th-14th June 2024
Where: DIBRIS-UNIGE Via Dodecaneso 35, Genova
Abstract
Visual perception, as a key element of Artificial Intelligence, allows us to build smart systems sensitive to surrounding environments, interactive robots, and video cameras with real-time algorithms running on board. With similar algorithms, our smartphones can log us in by recognizing our faces, read text automatically, and improve the quality of the photos we shoot. At the core of these applications are computer vision models, often boosted by machine learning algorithms.
This crash course is conceived as a complement to the “Deep Learning: a hands-on introduction” course (henceforth DL), although it can be taken independently.
It covers the basic principles of computer vision and visual perception in artificial agents, including theoretical classes, application examples, and hands-on activities.
Within CVCC, we present elements of classical computer vision (introduction to image processing, feature detection, depth estimation, motion analysis).
At the same time, by borrowing from DL, we also present deep learning approaches to computer vision problems such as image classification, detection, and semantic segmentation.
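As a small taste of the classical part (assuming the opencv-python and NumPy packages; a synthetic image stands in for a real photograph), the sketch below runs two of the building blocks mentioned above, edge detection and corner detection.

```python
# Classical computer vision warm-up: edges and corners on a synthetic image.
import numpy as np
import cv2

# Synthetic grayscale image: a bright square on a dark background.
image = np.zeros((128, 128), dtype=np.uint8)
cv2.rectangle(image, (32, 32), (96, 96), color=255, thickness=-1)

edges = cv2.Canny(image, threshold1=50, threshold2=150)             # edge map
corners = cv2.goodFeaturesToTrack(image, maxCorners=10,
                                  qualityLevel=0.1, minDistance=5)  # Shi-Tomasi corners

print("edge pixels:", int((edges > 0).sum()))
if corners is not None:
    print("detected corners:\n", corners.reshape(-1, 2))
```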
Core CVCC Program (for those attending the CVCC course only)
Integrated DL and CVCC program
References
Slides and readings will be provided.
Some reference books:
- E. Trucco, A. Verri. Introductory Techniques for 3-D Computer Vision. Prentice Hall, 1998
- R. Szeliski. Computer Vision: Algorithms and Applications. https://szeliski.org/Book/
- I. Goodfellow, Y. Bengio, A. Courville. Deep Learning. https://www.deeplearningbook.org/
Deep Learning: a hands-on introduction
Duration: 20 hours
Instructor(s):
Nicoletta Noceti and Francesca Odone
MaLGa-DIBRIS, Università degli Studi di Genova
{nicoletta.noceti, francesca.odone}@unige.it
When: Summer 10-14th June 2024
Where: DIBRIS-UNIGE Via Dodecaneso 35, Genova
Abstract
Deep Learning (DL) is a branch of Machine Learning that has recently achieved astonishing results in several different domains. This course will provide a hands-on introduction to DL, starting from its foundations and discussing the various types of deep architectures and tools currently available. The theoretical classes will be coupled with hands-on activities in the lab (in Python using Keras), which will constitute an integral part of the course, giving the possibility of practising deep learning with examples from real-world applications, with a particular focus on visual data. Besides well-established approaches, the course will also highlight current trends, open problems, and potential future lines of research.
Although the DL course can be taken independently, for the second year it will be held in synergy with the “Computer Vision Crash Course” (CVCC). Computer Vision is indeed one of the most classical and effective applications of DL in the real world. Contributions from the CVCC course will provide a complementary deepening of the basic principles of computer vision and visual perception in artificial agents, as well as a guided tour of deep learning for computer vision problems.
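As a flavour of the lab sessions (assuming TensorFlow/Keras is installed; the data below are random placeholders standing in for real images, and the architecture is illustrative only), here is a minimal Keras model definition and training loop.

```python
# Minimal Keras convolutional classifier trained on toy data.
import numpy as np
from tensorflow import keras

x_train = np.random.rand(256, 28, 28, 1).astype("float32")   # toy "images"
y_train = np.random.randint(0, 10, size=(256,))              # toy class labels

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(8, kernel_size=3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=32, verbose=1)
```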
Core DL Program (for those attending the DL course only)
Integrated DL and CVCC program
References
- I. Goodfellow, Y. Bengio and A. Courville, Deep Learning, MIT Press, 2016.
- Francois Chollet. Deep Learning with Python, Manning Pub., 2017
- Slides, notebooks, and a list of bibliographical references and additional material will be provided to attendants. All the course material is in English.
Verification of Neural Networks
Duration: 20 hours
Instructor(s):
Stefano Demarchi – UniGe – stefano.demarchi@edu.unige.it
Armando Tacchella – UniGe – armando.tacchella@unige.it
When: 10th - 14th June 2024
Where: TBD
Abstract
In this course we present the tool NeVer2, an integrated environment for designing, learning and verifying (deep) neural networks. NeVer2 borrows its design philosophy from NeVer, the first package proposed in 2010 that integrated learning, automated verification and repair of (shallow) neural networks in a single tool. The goal of NeVer2 is to provide a similar integration for deep networks by leveraging a selection of state-of-the-art learning frameworks and integrating them with verification algorithms to ease the scalability challenge and make repair of faulty networks possible.
Program
- Verification of Neural Networks: problem definition, state of the art and current challenges
- Computing output images of ReLU-based neural networks with star-sets to obtain sound and complete verification algorithms
- Abstraction supported by star-sets to obtain sound verification algorithms (an interval-based simplification is sketched after this list)
- Refinement and counter-example finding techniques
- An introduction to NeVer2 by example
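As a simplified illustration of the abstraction idea referenced in the program (plain NumPy; interval bound propagation is a coarser abstraction than the star sets used in NeVer2), the sketch below bounds the output of a tiny ReLU network over a box of inputs.

```python
# Interval bound propagation through a tiny ReLU network.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through the affine map x -> W x + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

# Tiny network: 2 inputs -> 3 hidden ReLU units -> 1 output.
W1 = np.array([[1.0, -1.0], [0.5, 2.0], [-1.0, 1.0]]); b1 = np.zeros(3)
W2 = np.array([[1.0, 1.0, -1.0]]);                     b2 = np.array([0.0])

lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])   # input box
lo, hi = interval_affine(lo, hi, W1, b1)
lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)            # ReLU is monotone
lo, hi = interval_affine(lo, hi, W2, b2)
print(f"output guaranteed to lie in [{lo[0]:.2f}, {hi[0]:.2f}]")
# If the property is, e.g., "output <= 10", then hi[0] <= 10 proves it (soundness);
# a bound violation does not by itself disprove it (incompleteness).
```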
Interdisciplinarity: Course offered for the Cybersecurity and Reliable AI curriculum of PhD in Security, Risk and Vulnerability
References
- Luca Pulina, Armando Tacchella: An Abstraction-Refinement Approach to Verification of Artificial Neural Networks. CAV 2010: 243-257
- Luca Pulina, Armando Tacchella: Challenging SMT solvers to verify neural networks. AI Commun. 25(2): 117-135 (2012)
- Francesco Leofante, Nina Narodytska, Luca Pulina, Armando Tacchella: Automated Verification of Neural Networks: Advances, Challenges and Perspectives. CoRR abs/1805.09938 (2018)
- Dario Guidotti, Francesco Leofante, Luca Pulina, Armando Tacchella: Verification of Neural Networks: Enhancing Scalability Through Pruning. ECAI 2020: 2505-2512
- Dario Guidotti, Luca Pulina, Armando Tacchella: pyNeVer a Framework for Learning and Verification of Neural Networks. ATVA 2021.
- Stefano Demarchi, Dario Guidotti, Andrea Pitto, Armando Tacchella: Formal Verification Of Neural Networks: A Case Study About Adaptive Cruise Control. ECMS 2022: 310-316
- Stefano Demarchi, Dario Guidotti, Luca Pulina, Armando Tacchella: Supporting Standardization of Neural Networks Verification with VNN-LIB and CoCoNet. FoMLAS 2023.
Introduction to discrete differential geometry
Duration: about 20 hours
Instructor(s):
Claudio Mancinelli – DIBRIS, University of Genoa – claudio.mancinelli@unige.it
When: 17th -21st June 2024
Where: DIBRIS, Via Dodecaneso. Attendance via Teams is also possible.
Abstract
The course introduces how basic concepts of differential geometry can be brought into the discrete setting, focusing on triangle meshes. Several applications in geometry processing in which these concepts play a pivotal role are presented as well.
Program
- Continuous setting: differentiable manifolds, Riemannian metric, affine connection, geodesics and the exponential map.
- Discrete setting: tangent space, metric, parallel transport, differential operators, geodesic paths and distances.
- Applications to geometry processing: smoothing, the vector heat method, vector graphics on discrete surfaces.
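As a small taste of the discrete setting (NumPy only; the mesh is a toy triangle fan, not course material), the sketch below computes the angle defect at a vertex, a standard discrete analogue of Gaussian curvature.

```python
# Angle defect 2*pi - (sum of incident triangle angles) at a mesh vertex.
import numpy as np

def corner_angle(p, q, r):
    """Angle at vertex p of triangle (p, q, r)."""
    u, v = q - p, r - p
    cosine = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosine, -1.0, 1.0))

# Toy fan: apex slightly lifted above a square base -> positive curvature.
apex = np.array([0.0, 0.0, 0.3])
ring = [np.array(p, dtype=float) for p in
        [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)]]

angle_sum = sum(corner_angle(apex, ring[i], ring[(i + 1) % len(ring)])
                for i in range(len(ring)))
print("angle defect at apex:", 2 * np.pi - angle_sum)   # > 0 for a convex cone
```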
Interdisciplinarity: This course could be offered to PhD students in both Computer Science and Mathematics.
References
[1] do Carmo M. P., Riemannian Geometry, 1992
[2] Botsch M., Kobbelt L, Pauly M., Alliez P, Lévy B., Polygon Mesh Processing, 2010
[3] Crane K., Livesu M., Puppo E., Qin Y., A Survey of Algorithms for Geodesic Paths and Distances, 2020
[4] Sharp N., Crane K., The Vector Heat Method, 2019
[5] Mancinelli C., Nazzaro G., Pellacini F., Puppo E., B/Surf: Interactive Bézier Splines on Surfaces, 2022
Effective habits and skills for successful young scientists
Teacher: Fabio Roli
Duration: 20 hours (5 half-days)
Credits: 5 CFU
When: June 24-28 2024, 9.00-13.00
Where: online on MS Teams
Curriculum: Cross-curricula course
Exam: written assessments with open-ended questions
Abstract:
Although tons of books on effective habits and soft skills have been published, they have not been written for scientists, and therefore the issues that are relevant to them are not easily covered. This short course aims to collect scattered ideas and place them in a coherent framework useful for young scientists, providing a small tactical guide for scientists at the first stages of their career. First, I review the main concepts of Stephen Covey's personal and time management paradigm, the inspirational speeches of the late Professor Randy Pausch, and James Clear's paradigm of atomic habits, and discuss their utility for the daily activity of a young scientist. Then, I focus on a few practical skills, namely how to write a great paper and how to give a great talk. I try to convey the message that succeeding in science and technology requires skills and habits beyond pure intelligence and intellectual ability, and that good habits and skills of personal and time management are extremely important for young scientists.
Program:
- Basic concepts of theory of habits. Effective habits for young scientists.
- Basic concepts of personal and time management. Effective personal and time management for young scientists.
- Survival skills in the game of science. Know yourself: match your goals to your character and talents.
- How to write a great paper.
- How to give a great talk.
References:
- S. Covey, The 7 Habits of Highly Effective People, 2020
- J. Clear, Atomic habits, 2018
- F. Rosei, T. Johnston, Survival skills for scientists, 2006
- F. Roli, Personal and time management for young scientists, tutorial at the International Conference on Machine Learning and Cybernetics, 2013
- R. Hamming, You and your research, 1986
- U. Alon, How to choose a good scientific problem, Molecular Cell, 2009.
- D. A. Patterson, How to have a bad career in research, Talks at Google, 2016
Machine Learning Crash Course (MLCC) 2024
Teachers: Lorenzo Rosasco, DIBRIS (lorenzo.rosasco@unige.it), Silvia Villa, DIMA (silvia.villa@unige.it), Giovanni Alberti, DIMA (giovanni.alberti@unige.it), Simone Di Marino, DIMA (simone.dimarino@unige.it), Matteo Santacesaria, DIMA (matteo.santacesaria@unige.it)
Duration: 20 hours
Credits: 6 CFU
When: 25th - 28th June 2024
Where: DIBRIS, Via Dodecaneso 35
Exam:
The exam will consist in remotely completing the notebooks that the class will work on during the labs, and writing a report commenting on the numerical results obtained. The school will take place exclusively in person; it will not be streamed online. Active attendance will be part of the evaluation.
Abstract:
Machine Learning is key to developing intelligent systems and analyzing data in science and engineering. Machine Learning engines enable intelligent technologies such as Siri, Kinect or Google's self-driving car, to name a few. At the same time, Machine Learning methods help decipher the information in our DNA and make sense of the flood of information gathered on the web, forming the basis of a new “Science of Data”. This course introduces the fundamental methods at the core of modern Machine Learning. It covers theoretical foundations as well as essential algorithms. Classes on theoretical and algorithmic aspects are complemented by practical lab sessions.
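As a flavour of the lab sessions (assuming scikit-learn is installed; the real labs use notebooks provided by the instructors), the sketch below applies a local method, k-nearest neighbours, with a simple hold-out selection of k on a synthetic dataset.

```python
# Local method (kNN) with hold-out model selection of the number of neighbours.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

best_k, best_acc = None, -1.0
for k in (1, 3, 5, 9, 15):
    acc = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train).score(X_val, y_val)
    if acc > best_acc:
        best_k, best_acc = k, acc
print(f"selected k = {best_k}, validation accuracy = {best_acc:.2f}")
```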
Program:
Tue 9.30-11.00 - Class 1: Introduction to Statistical Machine Learning
Tue 11.30-13.00 - Class 2: Local Methods and Model Selection
Tue 14.30-16.30 - Lab 1: Local Methods for Classification
Wed 9.30-11.00 - Class 3: Empirical Risk Minimization with Linear Models
Wed 11.30-13.00 - Class 4: Optimization and SGD
Wed 14.30-16.30 - Lab 2: ERM with Linear Models
Thu 9.30-11.00 - Class 5: Kernel Methods
Thu 11.30-13.00 - Class 6: Neural Networks
Thu 14.30-16.30 - Lab 3: Kernel Methods and Neural Networks
Fri 9.30-11.00 - Class 7: Sparsity and Variable Selection
Fri 11.30-13.00 - Class 8: Dimensionality Reduction and PCA
Fri 14.30-16.30 - Lab 4: Sparsity and PCA
References:
TBA
Title: Optimization of Electric-Vehicle Charging: scheduling and planning problems
CFU: 6
Instructors: Michela Robba – University of Genova – michela.robba@unige.it; Luca Parodi – University of Genova – luca.parodi@edu.unige.it; Giulio Ferro – University of Genova – giulio.ferro@unige.it
When: July 2024, 1 week
Where: Teams and in presence
Abstract
The concept of a dynamic and highly distributed Smart Grid which can intelligently integrate all connected users in an efficient, sustainable, economic and secure way, has opened new challenges in the application of Energy Management Systems (EMSs) and optimization techniques in the various research areas related to planning, management and control of power generation, distribution systems, and demand response. In fact, the electrical grid is characterized by different components (distributed generation and production plants from renewables, storage systems, buildings, microgrids, distributed electric vehicles (EVs), etc.) and actors (microgrids’ owners, distribution systems operators, aggregators, owners of charging stations and islands of recharge, etc.) that must be coordinated in order to respect technical requirements, minimize costs and environmental impacts.
In this course, attention is focused on the application of control and optimization methods and approaches to energy systems in which EVs are present. In particular, the aim is to provide models and methods for the optimal management of EVs through an interdisciplinary approach that brings together knowledge from the transportation, manufacturing, and smart grid sectors. After a brief introduction to the state of the art and to the relevant technologies, the scheduling of EVs in a smart grid is first presented through the formalization of a discrete-time optimization problem in which fossil-fuel production plants, storage systems, and renewables are also considered to satisfy the electrical load of the grid. Then, a discrete-event formalization is presented. Finally, optimal planning of charging stations over a territory will be shown, as well as energy demand assessment based on traffic user equilibrium conditions. Some basic concepts on routing and charging approaches will be presented too.
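To fix ideas on the discrete-time scheduling formulation, here is a minimal sketch (assuming CVXPY; prices, power limits, and energy demand are illustrative placeholders, not course data) in which a single EV must receive a required amount of energy over T time slots at minimum energy cost.

```python
# Discrete-time charging schedule for one EV under a power limit and an energy demand.
import numpy as np
import cvxpy as cp

T = 12                                        # number of time slots (e.g., hours)
price = np.array([0.30, 0.28, 0.25, 0.20, 0.15, 0.12,
                  0.12, 0.15, 0.22, 0.28, 0.32, 0.35])   # EUR/kWh per slot
p_max, energy_required = 7.0, 30.0            # kW charger limit, kWh to deliver

p = cp.Variable(T, nonneg=True)               # charging power in each slot
constraints = [p <= p_max,
               cp.sum(p) == energy_required]  # 1-hour slots: kWh delivered == kW * 1h
problem = cp.Problem(cp.Minimize(price @ p), constraints)
problem.solve()
print("optimal cost (EUR):", round(problem.value, 2))
print("charging profile (kW):", np.round(p.value, 2))
```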
Program
- Introduction to energy management systems and electric vehicles
- Optimal scheduling of electric vehicles in a discrete time framework
- Optimal scheduling of electric vehicles in a discrete event framework
- Traffic user equilibrium conditions for transportation networks with electric vehicles
- Optimal planning of charging stations over a territory
- Basic concepts on routing and charging
References
G. Ferro, R. Minciardi, L. Parodi, M. Robba. A user equilibrium model for electric vehicles: Joint traffic and energy demand assignment, Energy 198, 2020.
G. Ferro, R. Minciardi, L. Parodi, M. Robba. Optimal Planning of Charging Stations in Coupled Transportation and Power Networks Based on User Equilibrium Conditions. IEEE Transactions on Automation Science and Engineering, 2022.
G. Ferro, M. Robba, M. Paolucci. Optimal charging and routing of electric vehicles with power constraints and time-of-use energy prices. IEEE Transactions on Vehicular Technology, 69 (12), 14436-14447, 2020.
G. Ferro, R. Minciardi, L. Parodi, M. Robba. Discrete event optimization of a vehicle charging station with multiple sockets. Discrete Event Dynamic Systems: Theory and Applications, 31 (2)
G. Ferro, F. Laureri, R. Minciardi, M. Robba. An optimization model for electrical vehicles scheduling in a smart grid. Sustainable Energy, Grids and Networks 14, pp. 62-70, 2018
Information Hiding from Attack to Defense
Duration: 20 hours
Instructor: Luca Caviglione
When: 1st - 5th July 2024
Where: preferred venue is via Skype or Microsoft Teams to reach a wide audience. Otherwise, University of Genova – DIBRIS, Via Dodecaneso, Genova.
Abstract: Information hiding techniques are increasingly used in investigative journalism to protect the identity of sources, or by malware to hide its existence and communication attempts. Therefore, understanding how information hiding can be used to empower the privacy of users or to endow malicious software with the ability to stay "under the radar" is essential to fully assess the modern cybersecurity panorama. In this perspective, the course introduces the use of information hiding in modern threats and privacy-enhancing architectures, with emphasis on two different research areas: i) techniques for creating network covert channels to communicate with a remote command & control facility, exfiltrate sensitive information, or enforce privacy; ii) how to create and detect a covert channel implementing an abusive local path between two colluding applications to bypass the security framework of mobile devices.
To give a comprehensive overview of information hiding and steganography, the course will also cover the use of information hiding and steganographic techniques for watermarking purposes. For instance, it will showcase the main mechanisms for watermarking images, sounds and network flows for management, retrieval, metadating, authentication and copyright enforcement. The course will also discuss possible countermeasures and mitigation methodologies for facing the risks of the increasing amount of steganographic threats observed in the wild.
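As a toy illustration of the hiding principle (NumPy only, no real image I/O; actual steganographic threats and watermarking schemes are far more sophisticated), the sketch below embeds a short message in the least significant bits of a synthetic cover image.

```python
# Toy LSB steganography: hide a text message in the least significant bits.
import numpy as np

def embed(cover, message):
    bits = np.unpackbits(np.frombuffer(message.encode(), dtype=np.uint8))
    stego = cover.copy().ravel()
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits   # overwrite LSBs
    return stego.reshape(cover.shape), bits.size

def extract(stego, n_bits):
    bits = (stego.ravel()[:n_bits] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes().decode()

cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # toy cover "image"
stego, n_bits = embed(cover, "covert channel")
print(extract(stego, n_bits))                 # -> "covert channel"
print("pixels changed:", int((stego != cover).sum()))
```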
Program:
- Course introduction and a general view on information hiding.
- Information hiding as a cybersecurity threat: malware and colluding applications.
- Network covert channels (including air-gapped covert channels).
- Information hiding for watermarking, privacy enhancing, and metadating.
- Countermeasures (e.g., detecting obfuscated malware or removing ambiguities in protocols).
Interdisciplinarity: the course is delivered within the SRV framework. The focus is on security, but the course could be of interest to a broad audience; it requires basic knowledge of networking, security and computer science.
References:
- W. Mazurczyk, L. Caviglione, “Steganography in Modern Smartphones and Mitigation Techniques”, IEEE Communications Surveys & Tutorials, IEEE, Vol. 17, No.1, First Quarter 2015, pp. 334 - 357.
- L. Caviglione, W. Mazurczyk, “Never Mind the Malware, Here’s the Stegomalware”, IEEE Security & Privacy, Vol. 20, No. 5, pp. 101-106, Sept.-Oct. 2022.
- W. Mazurczyk, L. Caviglione, Information Hiding as a Challenge for Malware Detection, IEEE Security & Privacy, Vol. 13, No. 2, pp. 89-93, Mar.-Apr. 2015.
- L. Caviglione, M. Podolski, W. Mazurczyk, M. Ianigro, “Covert Channels in Personal Cloud Storage Services: the case of Dropbox”, IEEE Transactions on Industrial Informatics, IEEE, Vol. 13, No. 4, pp. 1921 - 1931, August 2017.
- L. Caviglione, M. Gaggero, J.-F. Lalande, W. Mazurczyk, M. Urbanski, “Seeing the Unseen: Revealing Mobile Malware Hidden Communications via Energy Consumption and Artificial Intelligence”, IEEE Transactions on Information Forensics & Security, IEEE, Vol. 11, No. 4, pp. 799 – 810, April 2016.
- W. Mazurczyk, L. Caviglione, “Cyber Reconnaissance Techniques”, Communications of the ACM, Vol. 64, No. 3, pp. 86-95, March 2021.
- Steg-in-the-wild (https://github.com/lucacav/steg-in-the-wild): a curated list of attacks observed in the wild taking advantage of steganographic or information-hiding-capable techniques.
- Steg-tools (https://github.com/lucacav/steg-tools): a list of software tools and resources for learning and experimenting with steganography and information hiding.
Theory and Practice of Learning from Data
Duration: 20 hours
Instructor(s): Luca Oneto, UNIGE, luca.oneto@unige.it
When: see https://www.lucaoneto.it/teaching/tpld-phd
Where: see https://www.lucaoneto.it/teaching/tpld-phd
Abstract
This course aims at providing an introductory and unifying view of information extraction and model building from data, as addressed by many research fields such as Data Mining, Statistics, Computational Intelligence, Machine Learning, and Pattern Recognition. The course will present an overview of the theoretical background of learning from data, including the most used algorithms in the field, as well as practical applications.
Program
- Inference: induction, deduction, and abduction
- Statistical inference
- Machine Learning
- Deep Learning
- Model selection and error estimation (see the sketch after this list)
- Implementation and Applications
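As a minimal sketch of the model selection and error estimation topic (assuming scikit-learn is installed; the data and parameter grid are illustrative only), a regularization parameter is chosen by cross-validation and the error of the selected model is estimated on held-out data.

```python
# Model selection by k-fold cross-validation, error estimation on a held-out test set.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

search = GridSearchCV(SVC(kernel="rbf"),
                      param_grid={"C": [0.1, 1.0, 10.0]}, cv=5)   # model selection
search.fit(X_train, y_train)
print("selected C:", search.best_params_["C"])
print("held-out error estimate:", 1.0 - search.score(X_test, y_test))
```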
Interdisciplinarity: Yes, any engineering/science PhD.
References
- C. C. Aggarwal "Data Mining - The textbook" 2015
- T. Hastie, R.Tibshirani, J.Friedman "The Elements of Statistical Learning: Data Mining, Inference, and Prediction" 2009.
- S. Shalev-Shwartz, S. Ben-David "Understanding machine learning: From theory to algorithms" 2014
- I. Goodfellow, Y. Bengio, A. Courville "Deep learning" 2016
- L. Oneto "Model Selection and Error Estimation in a Nutshell" 2020
Theory and Practice of Virtual Reality Systems
Duration: 20 hours
Instructor(s): Manuela Chessa - University of Genoa (DIBRIS) - manuela.chessa@unige.it
When: 15th -19th July 2024
Where: via Dodecaneso 35, Valletta Puggia, DIBRIS (online if necessary to reach people outside Unige)
Interdisciplinarity: Yes, any engineering/science PhD.
Abstract
The course provides a general introduction to the theory and the development of Virtual Reality systems. It will start from some basic aspects of Virtual Reality and move towards the recent achievements in Mixed Reality. The course will cover the following topics.
- Introduction to Virtual Reality.
- Applications of Virtual Reality: opportunities and issues.
- Devices for Virtual Reality.
- Interaction Techniques in Virtual Reality (hand interaction, walking, …)
- Introduction to Unity.
- Unity, Unreal, and Godot with practical examples
- How to build a VR application
References
TBA