
Seminars of the Master in Computer Science and Technology (Máster en Ciencia y Tecnología Informática)

Acceleration of machine learning algorithms from an architectural perspective (Francisco D. Igual-Peña)

Title: Acceleration of machine learning algorithms from an architectural perspective

Speaker: Francisco D. Igual-Peña

Dates: May 12, 13 and 14, 2021

Schedule: 

Organizer: Francisco Javier García Blas (ARCOS group)

Venue: Online

Credits: 2 ECTS

Abstract:

The seminar aims to introduce Machine Learning (ML) concepts from the standpoint of computational complexity and its architectural implications, laying out the processing requirements and the main solutions to them, both at the software level and at the architectural level.

The seminar will present the evolution of accelerator architectures and special-purpose architectures for ML, together with the corresponding software support, case studies, and current and future trends. Part of the sessions will pose practical problems, to be solved using general-purpose or ML-specific accelerators.

 

Session 1, May 12, 9:00-13:00

Architectures for AI. From sensors to cloud

Learning objectives: The session will cover the fundamentals and the motivation behind the evolution of special-purpose architectures for Artificial Intelligence, diving into the implications for architectural designs targeting performance and energy efficiency.

Contents:

  • Introduction and course overview.
  • Motivation. Artificial Intelligence: from sensors to cloud.
  • AI basics. Training vs. inferencing: Architectural implications.
  • Break.
  • Introduction to DNNs.
  • AI frameworks: TensorFlow and TensorFlow Lite.
  • Building DNNs with TensorFlow (see the sketch after this list).
  • Managing devices. TPUs vs. GPUs. Performance/energy efficiency evaluation.
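
To make the last two items concrete, here is a minimal, illustrative TensorFlow/Keras sketch of building and training a small DNN and listing the available devices; the actual session exercises may of course differ.

    # Minimal TensorFlow/Keras sketch (illustrative; not the official course code).
    import tensorflow as tf

    # Small standard dataset: MNIST digits (28x28 grayscale images).
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

    # A tiny dense network for 10-class classification.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # List visible devices to check whether a GPU (or TPU) is available.
    print(tf.config.list_physical_devices())

    model.fit(x_train, y_train, epochs=1, batch_size=128,
              validation_data=(x_test, y_test))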

 

Session 2, May 13, 9:00-13:00

Domain-specific architectures (DSAs) for AI

Learning objectives: The session will cover the specifics of the design and implementation of modern (present and future) domain-specific architectures for AI, with special emphasis on the differences between training-specific and inference-specific architectures. From a practical perspective, we will cover a range of real-world applications and evaluate the performance/energy implications on different state-of-the-art architectures.

Contents:

  • DNN accelerator architectures. Motivation for Domain-Specific Architectures (DSAs).
  • Advanced technology opportunities.
  • Network and hardware co-design for DSAs.
  • Benchmarking metrics.
  • Market survey on DSAs (I).
  • Break
  • Building ad-hoc models for TFLite. Use cases (image classification, segmentation and NLP).
  • Quantization and inferencing. Performance/model size/energy efficiency evaluation (see the conversion sketch after this list).
  • TFLite delegates.
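
As an illustration of the quantization item above, the following sketch converts a placeholder (untrained) Keras model to TensorFlow Lite with default post-training quantization; the models and use cases in the session itself may differ.

    # Sketch: convert a Keras model to TFLite with post-training quantization.
    import tensorflow as tf

    # Placeholder model; in practice this would be a trained network.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation="softmax", input_shape=(784,)),
    ])

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # default quantization
    tflite_model = converter.convert()

    with open("model.tflite", "wb") as f:
        f.write(tflite_model)
    print("Quantized model size:", len(tflite_model), "bytes")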

 

Session 3, May 14, 9:00-13:00

DSAs and APIs for AI. Present and future trends

Learning objectives: The session will cover current and future trends that are already available, or will soon be available, in DSAs for AI. We will cover both architectures for AI and AI techniques to improve architectural design, use, and system management. We will provide details (case studies) on real architectures and infrastructures, and hands-on exercises on a wide range of real DSAs.

Contents:

  • Architectures for AI and AI for architectures and systems.
  • DSAs: present and future trends.
  • Case study: Nvidia NVDLA.
  • Market survey on DSAs (II)
  • Frameworks, libraries and APIs for AI and DSAs.
  • Break
  • Hands-on with domain-specific APIs: Intel OpenVINO, Nvidia TensorRT.
  • Hands-on with domain-specific architectures: Google Coral, Intel NCS, Nvidia Xavier (see the sketch after this list).
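
As a flavour of the hands-on part, the sketch below runs a quantized TFLite model through the Edge TPU delegate on a Google Coral device. The model file name is a placeholder, and the delegate library name (libedgetpu.so.1) assumes a Linux host with the Edge TPU runtime installed; the course exercises may use different tooling.

    # Sketch: TFLite inference via an Edge TPU delegate (Google Coral).
    import numpy as np
    from tflite_runtime.interpreter import Interpreter, load_delegate

    interpreter = Interpreter(
        model_path="model_edgetpu.tflite",  # placeholder model file
        experimental_delegates=[load_delegate("libedgetpu.so.1")],
    )
    interpreter.allocate_tensors()

    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Run inference on a dummy input of the right shape and dtype.
    dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()
    print(interpreter.get_tensor(out["index"]))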

 

Short biography:

Francisco D. Igual-Peña was born in Castellón, Spain, in 1983. He received his M.S. in Computer Science (2006) from the Universitat Jaume I de Castelló, and his Ph.D. degree from the same university (2011, with highest honors).

In March 2012 he joined the Department of Computer Architecture and Automation at UCM as a postdoc researcher under the Juan de la Cierva program, and he became Assistant Professor in 2013. He became Associate Professor (Profesor Contratado Doctor) in 2018, and Profesor Titular de Universidad in 2020.

His research interests include high performance and parallel computing, low-power architectures, image and video processing on novel architectures and general-purpose computing on graphics processors (GPGPU).
 

Speaker's personal web page

Automatic Text Simplification and Summarization (Horacio Saggion)

Title: Automatic Text Simplification and Summarization

Speaker: Horacio Saggion

Dates: January 18 to 22, 2021

Schedule: 10:00 to 12:30

Organizers: Paloma Martínez and Lourdes Moreno, Computer Science Department, UC3M

Venue: Online

Credits: 2 ECTS

Abstract:

Text Simplification: Automatic text simplification as an NLP task arose from the need to make electronic textual content equally accessible to everyone. Automatic text simplification is a complex task which encompasses a number of operations applied to a text at different linguistic levels. The aim is to turn a complex text into a simplified variant, taking into consideration the specific needs of a particular target user. Automatic text simplification has traditionally had a double purpose: it can serve as a preprocessing tool for other NLP applications, and it can fulfil a social function, making content accessible to different users such as foreign language learners, readers with aphasia, low-literacy individuals, etc. The first attempts at text simplification were rule-based syntactic simplification systems; nowadays, however, with the availability of large parallel corpora, such as the English Wikipedia and the Simple English Wikipedia, approaches to automatic text simplification have become more data-driven.

Text simplification is a very active research topic where progress is still needed. In this seminar I will provide the audience with a panorama of more than a decade of work in the area, also emphasizing the relevant social contribution that content simplification can make to the information society.
 

Text Summarization: A summary is a text with a very specific purpose: to give the reader a concise idea of the contents of another text. The idea of automatically producing summaries has a long history in the field of natural language processing; however, nowadays, with the ever-growing amount of texts and messages available on-line in public or private networks, this research field has become, more than ever before, key for the information society.

The generation by computers of summaries or abstracts has been addressed from different angles, starting with seminal work in the late fifties. The applied techniques were first focused on the generation of sentence extracts, and several methods grounded in statistical techniques were proposed to assess the relevance of sentences in a document. In the eighties, symbolic Artificial Intelligence techniques, which considered summarization an instance of text understanding, focused on the production of abstracts. Hybrid techniques combining symbolic and statistical approaches, sometimes relying on machine learning, became popular with a renewed interest in summarization in the late nineties. Nowadays, with the availability of huge volumes of text for training machine learning systems, several methods have emerged in the area of deep learning. In particular, neural networks today deliver state-of-the-art performance.
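
As a concrete illustration of the graph-based extractive methods mentioned above, here is a minimal TextRank-style sketch (a toy example, not the speaker's own system): sentences are ranked by PageRank over a TF-IDF similarity graph and the top ones are extracted.

    # Toy graph-based extractive summarizer in the spirit of TextRank.
    import networkx as nx
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def summarize(sentences, n=2):
        """Return the n sentences ranked highest by PageRank over a similarity graph."""
        tfidf = TfidfVectorizer().fit_transform(sentences)
        sim = cosine_similarity(tfidf)      # sentence-to-sentence similarity
        graph = nx.from_numpy_array(sim)    # weighted similarity graph
        scores = nx.pagerank(graph)         # centrality of each sentence
        ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
        return [sentences[i] for i in sorted(ranked[:n])]  # keep original order

    doc = [
        "Automatic summarization condenses a document into a shorter text.",
        "Extractive methods select the most relevant sentences from the source.",
        "Graph-based methods rank sentences by their centrality in a similarity graph.",
        "The weather in Barcelona was sunny that day.",
    ]
    print(summarize(doc, n=2))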

Offering a historical perspective, I will go through relevant solutions in the area of text summarization, emphasizing the role of current machine learning systems. Likewise, I will describe evaluation methods, challenges, and resources available for system development.

 

Dates: from Monday 18/01/2021 to Friday 22/01/2021, from 10:00 to 12:30.
 

Bb Collaborate | January 18th - 22nd

 


18/01/2021

Session title: Text Simplification (I)

Learning objectives: 

  • Motivation and definition of the text simplification problem
  • Readability and simplification
  • Lexical simplification methods

19/01/2021

Session title: Text Simplification (II)

  • Syntactic simplification methods
  • Text simplification resources
  • Text simplification evaluation
  • Overview of projects in the area

20/01/2021

Session title: Text Simplification (III)

  • Overview of current architectures for text simplification
  • Neural approaches to simplification
  • Current methods on context-aware simplification
  • Challenges ahead in the area

21/01/2021

Session title: Text Summarization (I)

Learning objectives: 

  • The text summarization problem
  • Historical account of summarization
  • Extractive summarization: empirical approaches, machine learning, and graph-based methods.

22/01/2021

Session title: Text Summarization (II)

Learning objectives: 

  • Abstractive summarization: knowledge-based approaches and current sequence to sequence models.
  • Scientific text summarization approaches

 

Short biography:

Horacio Saggion is an Associate Professor at the Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona. He is the head of the Large Scale Text Understanding Systems Lab, associated with the Natural Language Processing group (TALN), where he works on automatic text summarization, text simplification, information extraction, sentiment analysis and related topics. Horacio obtained his PhD in Computer Science from Université de Montréal, Canada, in 2000. He obtained his BSc in Computer Science from Universidad de Buenos Aires in Argentina, and his MSc in Computer Science from UNICAMP in Brazil. He was the Principal Investigator of the EU projects Dr Inventor and Able-to-Include and is currently principal investigator of the national project TUNER and the Maria de Maeztu project Mining the Knowledge of Scientific Publications. Horacio has published over 150 works in leading scientific journals, conferences, and books in the field of human language technology. He organized four international workshops in the areas of text summarization and information extraction, and was scientific Co-chair of STIL 2009 and scientific Chair of SEPLN 2014. He is a regular programme committee member for international conferences such as ACL, EACL, COLING, EMNLP, IJCNLP and IJCAI, and is an active reviewer for international journals in computer science, information processing, and human language technology. Horacio has given courses, tutorials, and invited talks at a number of international events including LREC, ESSLLI, IJCNLP, NLDB, and RuSSIR. He has received awards from the Ministerio de Educación de la Nación (Argentina), Fundación Antorchas (Argentina), Université de Montréal (Canada), the Canadian Association for International Development (Canada), Fundación Vodafone Innovación (Spain), and Cátedra Telefónica/Universidad de Alicante (Spain).

Speaker's personal web page

Advances in Artificial Intelligence

PART ONE

Title: Artificial Intelligence in the Financial Sector

Speakers: Sumitra Ganesh, Svitlana Vyetrenko and Roberto Maestre

Dates: November 20 and 27

Schedule: 16:00 to 19:30

Organizer: Scalab (Fernando Fernández)

Credits: 1 ECTS

Abstract:

The goal of this seminar series is to review some of the main Artificial Intelligence technologies that are being used, or studied for their potential applicability, in the financial sector. To that end, we have a line-up of speakers who currently work in AI-related divisions at two financial/technology companies: JPMorgan and BBVA Data & Analytics.

November 20th, 16:00 | Sumitra Ganesh

Talk Title: Agent-based modeling of markets using multi-agent reinforcement learning

Talk Summary: Market makers play an important role in providing liquidity to markets by continuously quoting prices at which they are willing to buy and sell, and by managing inventory risk. In this talk, we discuss the construction of a multi-agent simulation of a dealer market and analyze how it can be used to understand the behavior of a reinforcement learning (RL) based market maker agent. We use the simulator to train RL-based agents under different competitive scenarios, reward formulations and market price trends.

Bio: Sumitra Ganesh is a Research Director at J.P. Morgan AI Research. Her current research is focused on agent-based modeling of complex systems (e.g. markets), and on developing learning algorithms that can work efficiently and safely in the real world and in the presence of other strategic agents. Sumitra previously led the Cross-asset Client Intelligence team in the Corporate and Investment Bank at J.P. Morgan, where she was instrumental in improving client experience through several machine learning products – her team built the first personalization engine for the J.P. Morgan Markets digital platform, a virtual assistant platform for client support and several predictive tools to empower sales. Prior to joining J.P. Morgan, Sumitra worked at Goldman Sachs and as a researcher at the Kellogg School of Management. Sumitra holds a Ph.D. in EECS from U.C. Berkeley, where her research focused on modeling and recognition of human actions from 3D visual data.

 

November 20th, 17:30 | Svitlana Vyetrenko

Talk Title: Realism of multi-agent limit order book market simulations

Talk summary: Market simulation is an increasingly important method for evaluating and training trading strategies and testing “what if” scenarios. The extent to which results from these simulations can be trusted depends on how realistic the environment is for the strategies being tested. In this talk, we will discuss stylized facts as metrics of simulated market realism. We will also look at the problem of how the simulation of a financial market should be configured so that it most accurately emulates the behavior of a real market. For that, we will discuss an application of generative adversarial networks to multi-agent simulator calibration.

Bio: Svitlana Vyetrenko is a Vice President and Artificial Intelligence Research Lead at JP Morgan Chase. She holds a PhD in Applied and Computational Mathematics from California Institute of Technology. She was previously a Vice President in Macro Linear Quantitative Research at JP Morgan Chase; and an Associate in Equity Strategies at Goldman Sachs. Her research interests broadly span applications of artificial intelligence and machine learning methods to trading; with current focus on using multi-agent simulations to model realistic markets for trading strategy and policy research.

 

November 27th, 16:00 | Roberto Maestre

Talk Title: Introduction to BBVA's Artificial Intelligence Factory.

Talk summary: An introduction to BBVA's AI Factory (AIF), how it operates and its overall structure.

Bio: Roberto Maestre works at the intersection of technology and the humanities, always applying a multidisciplinary and integrative approach across domains such as astrophysics, machine learning, history, strategy and legaltech. He has more than 18 years of experience in the data industry, working simultaneously in academia and in industry.

 

November 27th, 16:15 | José Miguel Leiva y Felipe Alonso Atienza

Talk Title: Advanced Analytics at BBVA Asset Management: some success stories.

Talk summary: We will present several use cases of advanced analytics applied to asset management: automatic portfolio optimization based on expert opinions, automatic document processing, large-scale automated portfolio analysis, and advanced dashboards for financial applications.

Bio: Jose M. Leiva is a Senior Expert Data Scientist at BBVA, where he develops advanced analytics solutions in the global Client Solutions department. He was previously a lecturer and researcher at Universidad Carlos III, where he took part in projects on clinical record analytics, satellite image classification, signal processing for brain-computer interfaces and computational biology, among others. He has published papers on self-organizing networks, maximum-margin techniques, genetic algorithms, transfer learning, information-theoretic learning, and related topics. Before joining BBVA he worked at ETS Asset Management Factory developing machine-learning-based quantitative investment techniques.

Bio: Felipe Alonso is a Senior Expert Data Scientist at BBVA and a part-time associate professor at Rey Juan Carlos University (Signal Processing and Communications Department). At BBVA he develops analytical solutions in the area of asset management. In academia, he has focused on the areas of statistical signal processing, machine learning, and computer simulation and modeling applied to cardiac electrophysiology, where he has co-authored more than 50 journal and conference papers. He is co-inventor of a patent, and has led and worked on several publicly and privately funded projects.

 

November 27th, 17:30 | Clara Higuera Cabañes y Manuel Martín Gómez

Talk Title: Natural Language Processing (NLP)

Talk summary: Natural language processing, better known nowadays by its English acronym NLP, allows organizations to exploit the value of information recorded in text form. In this talk we will explain how we apply NLP techniques in the AI Factory, from classic methods based on word frequencies and classifiers to more sophisticated ones such as embeddings or deep-learning-based methods.
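
For illustration only (this is not BBVA's actual pipeline), the "classic" word-frequency approach mentioned above can be sketched in a few lines with scikit-learn: TF-IDF features plus a linear classifier over a tiny, hypothetical set of customer messages.

    # Toy "classic NLP" pipeline: word-frequency features + linear classifier.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny hypothetical training set: customer messages labelled by intent.
    texts = ["I lost my credit card", "How do I open an account?",
             "My card was stolen", "I want to open a savings account"]
    labels = ["card_issue", "account_opening", "card_issue", "account_opening"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)
    print(clf.predict(["I cannot find my card"]))  # expected: ['card_issue']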

Bio: Clara Higuera Cabañes graduated in Computer Science from Complutense University in Madrid; afterwards she carried out her PhD in Artificial Intelligence and Bioinformatics. Her thesis focused on machine learning based methods for the study of metabolism. After her PhD she became interested in how to use data and AI in industry and moved to London, where she worked for over two years as a data scientist for the BBC. At the BBC, among other projects, she worked on building audience segmentations and recommender systems for BBC News. Currently she works as a data scientist at BBVA within the smart assistance program, using Natural Language Processing techniques to help customer assistants better manage the queries of BBVA customers. Clara enjoys presenting her work at meet-ups and other public events and is actively involved in activities that encourage women and girls to pursue a career in technology and science, helping to bridge the gender gap in these disciplines.

Bio: Manuel Martín Gómez graduated in Industrial Engineering from Universidad de Castilla-La Mancha in 2016. His BSc thesis consisted of the mathematical modeling of a pneumatic suspension's equation of motion. After finishing his degree, he undertook a one-year internship at Airbus as a data analyst and afterwards shifted his career path to consulting at EY's AI Wavespace CoE. At EY he worked as a Machine Learning Engineer for two and a half years. During that time he took part mostly in international projects for financial and retail clients. He built systems such as text recognition models for scanned documents, customer e-mail classification systems and data lakes. He is currently working at BBVA Next Technologies within BBVA's AI Factory and is also enrolled in a Mathematical and Computational Engineering MSc. Manuel's professional motivation is to build valuable and impactful AI products. He likes to tackle complex computational problems and always advocates for knowledge sharing.

 

November 27th, 18:45 | David Muelas Ruenco

Talk Title: Time series prediction and detection of unexpected events

Talk summary: Analyzing how the balance of our customers' accounts evolves provides very valuable information for estimating and improving their financial health. In this talk we will discuss the analyses we carry out using these series, focusing on applications to predicting future values and detecting unexpected events. We will comment on the main characteristics of the models and analytical engines we use, and present empirical results of their performance in real use cases.
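
As a rough, purely illustrative sketch of the two tasks described above (not the models actually used at BBVA), one can forecast a balance series with a rolling mean and flag unexpected events as large deviations of the daily change from its rolling baseline:

    # Toy balance forecasting and unexpected-event detection on synthetic data.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    dates = pd.date_range("2021-01-01", periods=120, freq="D")
    balance = pd.Series(1000 + np.cumsum(rng.normal(0, 5, 120)), index=dates)
    balance.iloc[80] -= 400  # inject an "unexpected event" (e.g. a large charge)

    # Naive forecast: tomorrow's balance ~ mean of the last 7 days.
    forecast_next = balance.tail(7).mean()

    # Event detection: z-score of the daily change against a rolling baseline.
    delta = balance.diff()
    z = (delta - delta.rolling(30).mean()) / delta.rolling(30).std()
    events = balance.index[z.abs() > 4]

    print(f"Next-day forecast: {forecast_next:.2f}")
    print("Unexpected events:", list(events.date))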

Bio: David Muelas received double degrees in Mathematics and Computer Science (2013), MSc degrees in ICT Research and Innovation and in Mathematics and Applications (2015), and a PhD in Computer Science and Telecommunications (2019). He gained experience in data analysis as a researcher at Universidad Autónoma de Madrid (2011-2019) and Medialab-Prado (2017-2019), and as a visiting researcher at Harvard University (summer 2016). During those experiences, David had the opportunity to handle diverse data sources, including data from computer networks, medical images and social media. Currently, he is with the Advice Program at BBVA D&A in the BBVA AI Factory.

 

PART TWO

Title: Adaptive Decision Making in Games

Speaker: Diego Perez Liebana

Dates: December 4 and 11

Schedule: 17:00 to 20:00

Organizer: Scalab (Yago Saez)

Credits: 1 ECTS

Abstract:

Games have always been excellent benchmarks for the advancement of AI. From Machine Learning to Search and Decision-Making algorithms, from Deep Blue and Alpha Zero to StarCraft 2, researchers across the globe agree on frameworks to test their methods. This course gives an overview of the Game AI field, focusing on the different research opportunities that it offers and some of its applications. In more detail, we will cover a family of flexible and adaptive AI techniques: Statistical Forward Planning methods. We will study how algorithms such as Monte Carlo Tree Search and Rolling Horizon Evolution can be used for reactive decision making in games, illustrated with a few examples in modern tabletop and strategy games. The seminar will conclude with a forward-looking session in which we'll discuss the next research steps in the application of these methods to complex and commercial games.
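
To give a flavour of Statistical Forward Planning, here is a minimal rolling-horizon sketch on a toy one-dimensional game (random sampling of action sequences rather than full Rolling Horizon Evolution, and not the course's own framework): sample short action sequences, roll them through a forward model, and play the first action of the best sequence.

    # Toy rolling-horizon planner over a forward model of a 1-D game.
    import random

    ACTIONS = [-1, 0, 1]   # move left, stay, move right
    TARGET = 7

    def forward_model(state, action):
        """Advance the toy game one step and return (next_state, reward)."""
        nxt = state + action
        return nxt, -abs(TARGET - nxt)   # reward: closeness to the target

    def rolling_horizon(state, horizon=5, samples=50):
        """Sample random action sequences; return the first action of the best one."""
        best_value, best_first = float("-inf"), random.choice(ACTIONS)
        for _ in range(samples):
            seq = [random.choice(ACTIONS) for _ in range(horizon)]
            s, value = state, 0.0
            for a in seq:
                s, r = forward_model(s, a)
                value += r
            if value > best_value:
                best_value, best_first = value, seq[0]
        return best_first

    state = 0
    for _ in range(10):
        state, _ = forward_model(state, rolling_horizon(state))
    print("Final state:", state)   # should end at (or near) the target, 7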

Biography:

I am a Senior Lecturer in Computer Games and Artificial Intelligence at Queen Mary University of London (UK). I hold a Ph.D. in Computer Science from the University of Essex (2015) and a Master's degree in Computer Science from University Carlos III (Madrid, Spain; 2007). My research is centred on the application of Artificial Intelligence to games, Tree Search and Evolutionary Computation. I am especially interested in the application of Statistical Forward Planning methods (such as Monte Carlo Tree Search and Rolling Horizon Evolutionary Algorithms) to modern games, and also in General Video Game Playing, which involves the creation of content and of agents that can play any real-time game given to them. I have experience in the videogames industry as a game programmer (Revistronic; Madrid, Spain), with titles published for both PC and consoles. I also worked as a software engineer (Game Brains; Dublin, Ireland), where I was in charge of developing AI tools that could be applied to the latest industry videogames. As a lecturer, I teach modules related to game development and the application of Artificial Intelligence to games.

 

Bb Collaborate session | December 4th