Seminars: Máster en Ciencia y Tecnología Informática

Technology Innovation and R&D (Javier Busquets Carretero)

Title: Technology Innovation and R&D

Presenter: Javier Busquets Carretero, PhD in Management Sciences, Copenhagen Business School

Dates: October 20, 27 and November 17, 2017

Time: 10am - 2pm

Organizer: María Isabel Sánchez Segura and Fuensanta Medina Domínguez. Grupo Sintonia, Departamento de Informática

Place: 3.1.S08 Library Building. Leganés Campus

ECTS: 2

Abstract:
The world economy is moving towards a knowledge- and service-based economy in which innovation management and R&D are becoming one of the most important challenges for firms. Moreover, digitalization is pushing the boundaries of what is possible when sourcing and managing innovation. Local resources for innovation are no longer enough; instead, the modern management of knowledge and technology creates new systems and ecosystems that require new models of collaborative exploration and discovery. However, such developments also make innovation management and R&D more complex. Innovation can be cultivated from many sources, though, as technology evolves, it influences how ideas are nurtured and harvested. Knowledge does not spring only from centralized experts; rather, it is the product of collaborative networks and user communities enabled by advances in digital platforms.

 

Session 1, October the 20th, 10 am-2 pm

Technology and Innovation

Learning objectives: In this session we will explore basic concepts of innovation, technology and knowledge management, stressing the difference between individual creativity and the systematic processes used to manage corporate knowledge through new organizational capabilities in high technology and R&D. We will discuss how high tech and R&D follow evolutionary trajectories and how those connect with sources of competitive advantage. We will analyse some key components of high tech and R&D: (1) sources of innovation; (2) outcomes, i.e. types of innovation in products, services and business models; and (3) patterns of innovation: continuous or disruptive, open or closed.

 

Session 2, October the 27th, 10 am-2 pm

Ecosystems of innovation: Competitive Architectures and Corporate Capabilities  

Learning objectives: A significant number of innovations are designed within, and emerge from, digital models based on new ecosystems. In this session we will explore how new digital firms create technology to change the rules of the game, examining the concepts of digital platforms and ecosystems and illustrating the notions of (1) architectures and dominant designs and (2) timing of entry and learning curves. We will reflect on these topics through a business case.

 

Session 3, November the 17th, 10 am-2 pm

Managing the creation of new digital competences in ecosystems

Learning objectives: In this session students will give group presentations on a real case in order to cover and discuss the key learning points of the seminar, essentially reflecting on how to transform an ecosystem through the new function of technology and R&D and its relationship with new capabilities, organizational architectures and digital business models.

 

Short Bio:

Dr. Javier Busquets holds a PhD in Management Sciences from the Copenhagen Business School. He is a professor of strategic management of innovation, technology and digital business. He launched and currently serves as director of the Executive Master in Digital Business, run in cooperation with Santa Clara University in Silicon Valley, and of the CIO Advanced Program.

He served as Chair of the Department of Management of Information Systems (2003-2011) and teaches in the Global Executive MBA (in cooperation with Georgetown University) and in the Executive MBA and MBA programs. He has been a keynote speaker at a number of events, was a member of the Euro-India research project, and sat on the organizing board of the Smart Business Network (SBNi) Conference in Beijing in 2008. He also served as co-chair of the International Conference on Mobile Business (ICMB) in 2008 and as co-chair of the European Conference on Information Systems (ECIS) held in Barcelona in 2012. Javier Busquets has extensive professional and executive experience in the information technology (IT) and telecommunications industry, where he worked for 17 years.

His research on IT strategies in the banking sector was recognized with IBM Faculty Awards in 2007 and 2011.

Personal web page: http://www.esade.edu/faculty/xavier.busquets

An introduction to Deep Learning for Image and Video Interpretation (Mr. Huy-Hiem PHAM)

Title: An introduction to Deep Learning for Image and Video Interpretation

Presenter: Mr. Huy-Hiem PHAM, PhD researcher, Cerema & Institut de Recherche en Informatique de Toulouse, Université Paul Sabatier

Dates and times:

  • Session 1:  November 3,  10 am - 2 pm
  • Session 2:  December 1, 3 pm - 7 pm
  • Session 3:  December 15, 3 pm - 7 pm

Organizer: Sergio Velastin, German Gutierrez and Jose Manuel Molina. Grupo SCALAB, Departamento de Informática

Place: 3.1.S08 Library Building. Leganés Campus

Credits: 2 ECTS

Abstract:
We will start by giving an overview of what computer vision aims to do, looking at a number of application domains and highlighting its main processes: image processing, segmentation, object detection, motion tracking, etc. In the second session we will present some basic image processing operations, leading to the newer approaches based on "deep learning" for image interpretation.

 

Session 1 (3rd November, 10-14): Introduction to Deep Learning
  • Artificial Neural Networks (ANNs)
  • Forward Propagation in an ANN
  • Gradient Descent method
  • Training an ANN with Gradient Descent
  • Building an ANN with Python and Keras.
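
The topics above fit together in a few lines of code. As a minimal sketch (in plain NumPy, rather than the Keras code used in the session), the loop below performs forward propagation through a one-hidden-layer ANN and trains it on the XOR problem with gradient descent:

```python
import numpy as np

# Minimal sketch: a 2-4-1 network trained on XOR with plain gradient
# descent (NumPy only; the session itself builds networks with Keras).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, losses = 1.0, []
for step in range(5000):
    # Forward propagation: input -> hidden -> output
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(np.mean((out - y) ** 2))
    # Backpropagate the squared-error gradient
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent step on every parameter
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In Keras the same network would be a `Sequential` model with two `Dense` layers, with the gradient computation and update loop handled by the framework.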

 

Session 2 (1st December, 15-19): Convolutional Neural Networks (CNNs) for Image Recognition
  • Limitations of Artificial Neural Networks
  • Convolutional Neural Networks: the key idea.
  • Implementing a CNN for Image Classification in Python and Keras.
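
The key idea behind CNNs, a small kernel whose weights are shared across every position of the image so that each output depends only on a local patch, can be shown without any framework. A NumPy sketch (the session's actual implementation uses Keras):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation, the operation used in CNN layers.

    The same kernel weights are reused at every (i, j): weight sharing
    and local receptive fields are the CNN 'key idea'.
    """
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((5, 5))
image[:, 2] = 1.0                # a vertical line in the image
edge = np.array([[1.0, -1.0]])   # 1x2 horizontal-gradient kernel
response = conv2d(image, edge)   # strong response at the line's edges
print(response)
```

A convolutional layer simply learns the kernel values instead of fixing them by hand, and stacks many such kernels per layer.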

 

Session 3 (15th December, 15-19): Some State-of-the-art CNN Architectures
  • Representation of depth in CNNs
  • Deep Residual Networks
  • Project: Deep Residual Network for Human Action Recognition with Skeleton Sequences
  • Recommended reading (books, courses, tutorials, etc.)
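
The core of a deep residual network is the residual block: the layers learn a residual F(x), and the input is added back through an identity shortcut, y = x + F(x). A NumPy toy that keeps only the shortcut (real blocks, following He et al., use convolutions and batch normalization in Keras/TensorFlow):

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, W1, W2):
    f = relu(x @ W1) @ W2   # the learned residual F(x)
    return x + f            # identity shortcut: y = x + F(x)

d = 8
x = rng.normal(size=(1, d))

# With zero weights F(x) = 0, so the block is exactly the identity:
# stacking many such blocks cannot make things worse, which is part of
# why very deep residual networks remain easy to optimize.
W_zero = np.zeros((d, d))
y = residual_block(x, W_zero, W_zero)
print(np.allclose(y, x))  # True
```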

 

Short Bio:
Mr. Huy-Hiem PHAM is a researcher at the Université Paul Sabatier and at the CEREMA research institute, both in Toulouse, France. He is currently carrying out research on using deep learning techniques to recognise human actions, especially with Kinect sensors. It is hoped that his research will lead to practical applications in the field of safety and security in public transport networks.

Advanced User Interfaces (Aaron Quigley / Ana Tajadura-Jiménez)

Title: Advanced User Interfaces

Presenter: Prof. Aaron Quigley, Dr. Ana Tajadura-Jiménez

Dates: March 20-23, 2018

Time: 9:30 am - 12:45 pm

Organizer: Paloma Díaz. DEI Interactive Systems Group, Departamento de Informática

Place: 3.S1.08 Library Building Rey Pastor. Leganés Campus

Credits: 2 ECTS

Abstract:
The seminar is divided into two parts.

 

Part I: Ubiquitous User Interfaces. Prof. Aaron Quigley

Abstract
Displays are all around us, on and around our body, fixed and mobile, bleeding into the very fabric of our day-to-day lives. They come in many forms, such as smart watches, head-mounted displays and tablets, as well as fixed, mobile, ambient and public displays. However, we know more about the displays connected to our devices than they know about us. Displays, and the devices they are connected to, are largely ignorant of the context in which they sit, including their physiological, environmental and computational state. They don't know about the physiological differences between people, the environments they are being used in, or whether they are being used by one person or several.

In this talk we review a number of aspects of displays in terms of how we can model, measure, predict and adapt how people use displays in a myriad of settings. With modeling we seek to represent the physiological differences between people and use the models to adapt and personalize designs and user interfaces. With measurement and prediction we seek to employ various computer vision and depth-sensing techniques to better understand how displays are used. And with adaptation we aim to explore subtle techniques and means to support diverging input and output fidelities of display devices. This talk draws on a number of studies from work published in UIST, CHI, MobileHCI, IUI, AVI and UMAP.

Short Bio
Professor Aaron Quigley is the Chair of Human Computer Interaction and deputy head of school in the School of Computer Science at the University of St Andrews, UK. Aaron’s research interests include surface and multi-display computing, human computer interaction, pervasive and ubiquitous computing and information visualisation. He has published over 135 internationally peer-reviewed publications including edited volumes, journal papers, book chapters, conference and workshop papers and holds 3 patents. In addition he has served on over 80 program committees and has been involved in chairing roles of over 20 international conferences and workshops including UIST, ITS, CHI, Pervasive, UbiComp, Tabletop, LoCA, UM, I-HCI, BCS HCI and MobileHCI. He is also the ACM SIGCHI Vice President for Conferences. 

 

Part II: Body-centred application design: A short introduction to affective computing, wearable computing and virtual reality. Ana Tajadura-Jiménez

Abstract
The aim of these sessions is to introduce students to the emerging fields of affective computing, wearable computing and virtual reality, which bring together research and methodologies from cognitive psychology and neuroscience, human-computer interaction (HCI), affective science and machine learning. In the first session I will exemplify some of the concepts by introducing my own research, in which body-tracking, sensory feedback and wearable technologies are used. The aim of this research is to advance the understanding of the mechanisms underlying people's perception of their own body, emotion and action, by investigating how to use sound to alter these, and to open opportunities to design audio-based body-centred applications to support wellbeing. The second session will provide a basic knowledge of emotion theories and of emotion-sensing technologies and applications. We will also discuss how such technology can be ported to wearable devices, and the ethical issues around its use. This session will be followed by two practical sessions: in the third session we will demonstrate the process of emotion sensing and emotion recognition using wearable bio-sensing technology; the fourth session will be a hands-on wearable exercise in which students will be asked to design their own experiment using wearable bio-sensing technology to recognize emotional states, and to think about possible applications to the design of technology.

Short Bio
Ana Tajadura-Jiménez is a Ramón y Cajal research fellow at the DEI Interactive Systems Group, Universidad Carlos III de Madrid, and an Honorary Research Associate at the Interaction Centre of University College London. She studied Telecommunications Engineering at Universidad Politécnica de Madrid. She received her MSc degree in Digicom (2003) and PhD degree in Applied Acoustics (2008) from Chalmers University of Technology, Gothenburg, Sweden. During 2009-2012 she was a post-doctoral researcher at the Lab of Action and Body at Royal Holloway, University of London. In 2012 she moved to the University College London Interaction Centre (UCLIC) as an ESRC Future Research Leader and Principal Investigator of the project The Hearing Body. During 2016-2017 she was a Ramón y Cajal research fellow at Universidad Loyola Andalucía. Her research is empirical and multidisciplinary, combining perspectives from psychoacoustics, neuroscience and HCI. She focuses on the use of body-sensory feedback to change the mental representation of one's own body, and her studies are pioneering in using sound to produce these changes. She currently coordinates the research line "Multisensory stimulation to alter the perception of body and space, emotion and motor behavior" and is Principal Investigator of the project MagicShoes (www.magicshoes.es), which is developing wearable technology that integrates body sensing and sensory feedback. Dr Tajadura-Jiménez has published more than 50 papers and book chapters. Her work has attracted public interest, having been featured in public media worldwide and presented at events at the London Science Museum, the Wellcome Collection and the Being Human Festival, among others. A highlight is the article on her project The Hearing Body that appeared in 2015 in the magazine New Scientist.