Computational Intelligence

2019-2020

Course Objective

• Knowledge and understanding of:
◦ Complex optimisation problems (graph-based discrete and continuous
optimisation problems)
◦ Exact optimisation algorithms
◦ Heuristic optimisation algorithms
• Applying knowledge and understanding
◦ Applying heuristic optimisation algorithms to combinatorial (discrete)
optimisation problems
◦ Applying optimisation algorithms to learn a controller for a robot
• Making judgements
◦ About which algorithms to apply for complex optimisation problems
• Communication
◦ Writing a report comparing algorithms, in the style of the
experiments section of a scientific paper.

Course Content

In the course Computational Intelligence, we will focus on a part of
artificial intelligence beyond simple supervised (machine) learning.
Specifically, we will tackle problems that are extremely hard to solve,
and that therefore require extraordinary algorithmic measures to have any
chance of finding a good solution. In the first part, we will focus
on graph-based (combinatorial optimisation) problems with discrete input
domains. These problems pop up everywhere in computer science and AI,
from task scheduling on chips, to coordination in robot soccer, to
parking and scheduling maintenance tasks for trains at the NS. We will
first show how exact algorithms for such problems (such as variable
elimination and AND/OR tree search) can find optimal solutions for small
problem instances but do not scale up to larger problems in terms of
either memory or computational requirements, or both. Therefore, we will
need to open our approximate-algorithm toolbox. We will introduce
different classes of algorithms that can be used to tackle these
problems: specifically, (multi-start, iterative, genetic) local search,
evolutionary algorithms, and swarm optimisation.

In the second part, we will move to continuous optimisation problems,
such as finding
a good controller for a morphologically evolved robot. Such continuous
optimisation problems are even harder, and require new algorithmic tools
that enable the use of the previously encountered ideas for continuous
input domains. For example, similar ideas underpin stochastic gradient
descent and local search algorithms, and CMA-ES is an evolution strategy
that can be used effectively in continuous domains. We will further look
at the use of neural networks and Bayesian optimisation for such
problems.
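
As a small illustration of the local search idea discussed above, the
sketch below shows a multi-start hill climber on a toy knapsack-style
problem. The problem instance, the single-bit-flip neighbourhood, and the
penalty term are illustrative assumptions only, not material from the
course.

# A minimal sketch of multi-start hill-climbing local search on a toy
# knapsack-style problem. Instance, neighbourhood, and penalty are
# illustrative assumptions, not course material.
import random

def objective(bits, values, weights, capacity):
    """Total value of selected items, penalised when over capacity."""
    value = sum(v for b, v in zip(bits, values) if b)
    weight = sum(w for b, w in zip(bits, weights) if b)
    return value if weight <= capacity else value - 10 * (weight - capacity)

def neighbours(bits):
    """All solutions reachable by flipping exactly one bit."""
    for i in range(len(bits)):
        flipped = list(bits)
        flipped[i] = 1 - flipped[i]
        yield flipped

def hill_climb(values, weights, capacity, restarts=20, seed=0):
    """Multi-start local search: hill climb from several random points."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(restarts):
        current = [rng.randint(0, 1) for _ in values]
        improved = True
        while improved:
            improved = False
            for cand in neighbours(current):
                if (objective(cand, values, weights, capacity) >
                        objective(current, values, weights, capacity)):
                    current, improved = cand, True
                    break
        score = objective(current, values, weights, capacity)
        if score > best_score:
            best, best_score = current, score
    return best, best_score

if __name__ == "__main__":
    values = [10, 5, 15, 7, 6, 18, 3]
    weights = [2, 3, 5, 7, 1, 4, 1]
    print(hill_climb(values, weights, capacity=10))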

Teaching Methods

Lectures and practical assignments.

Method of Assessment

Final exam and practical assignments.

Literature

The literature will be made available on Canvas.

Target Audience

BSc Artificial Intelligence (track Intelligent Systems)

General Information

Course Code XB_0025
Credits 6 EC
Period P5
Course Level 200
Language of Tuition English
Faculty Faculty of Science
Course Coordinator dr. M. Hoogendoorn
Examiner dr. M. Hoogendoorn
Teaching Staff

Practical Information

You need to register for this course yourself

Last-minute registration is available for this course.

Teaching Methods Lecture, Practical