Understanding Artificial Intelligence through Algorithmic Information Theory (edX)


MOOC List is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

Can we characterize intelligent behavior? Are there theoretical foundations on which Artificial Intelligence can be grounded? This course on Algorithmic Information will offer you such a theoretical framework. You will be able to see machine learning, reasoning, mathematics, and even human intelligence as abstract computations aiming at compressing information. This new power of yours will not only help you understand what AI does (or can’t do!) but also serve as a guide to design AI systems.


Artificial Intelligence is more than just a collection of brilliant, innovative methods to solve problems.

- If you are interested in machine learning or are planning to explore it, this course will make you see artificial learning in an entirely new way. You will know how to formulate optimal hypotheses for a learning task, and you will be able to analyze learning techniques such as clustering or neural networks as just ways of compressing information.

- If you are interested in reasoning, you will understand that reasoning by analogy, reasoning by induction, explaining, proving, etc., are all alike: they all amount to providing more compact descriptions of situations.

- If you are interested in mathematics, you will be amazed at the fact that crucial notions such as probability and randomness can be redefined in terms of algorithmic information. You will also understand that there are theoretical limits to what artificial intelligence can do.

- If you are interested in human intelligence, you will find some intriguing results in this course. Thanks to algorithmic information, notions such as unexpectedness, interest and, to a certain extent, aesthetics, can be formally defined and computed, and this may change your views on what artificial intelligence can achieve in the future.

Half a century ago, three mathematicians (Ray Solomonoff, Andrei Kolmogorov, and Gregory Chaitin) made the same discovery independently. They understood that the concept of information belonged to computer science; that computer science could say what information means. Algorithmic Information Theory was born.

Algorithmic Information is what is left when all redundancy has been removed. This makes sense, as redundant content cannot add any useful information. Removing redundancy to extract meaningful information is something computer scientists are good at doing.

Algorithmic information is a great conceptual tool. It describes what artificial intelligence actually does, and what it should do to make optimal choices. It also says what artificial intelligence can’t do. Algorithmic information is an essential component in the theoretical foundations of AI.
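As an illustration (not part of the course materials), a real-world compressor can serve as a rough, computable proxy for algorithmic information; true algorithmic (Kolmogorov) complexity is uncomputable, but `zlib` already makes the "information = what survives redundancy removal" idea concrete:

```python
import os
import zlib

def approx_information(data: bytes) -> int:
    """Compressed length in bytes: a crude, computable stand-in for
    algorithmic information (which is itself uncomputable)."""
    return len(zlib.compress(data, 9))

redundant = b"ab" * 500          # 1000 highly redundant bytes
random_bytes = os.urandom(1000)  # 1000 bytes with (almost surely) no pattern

# The redundant string shrinks dramatically; the random one barely at all.
print(approx_information(redundant))
print(approx_information(random_bytes))
```

The gap between the two numbers is exactly the point: the repeated string carries very little information, while the random string has essentially none of its redundancy left to remove.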


What you'll learn

- How to measure information through compression

- How to compare algorithmic information with Shannon’s information

- How to detect languages through joint compression

- How to use the Web to compute meaning similarity

- How probability and randomness can be defined in purely algorithmic terms

- How algorithmic information sets limits to the power of AI (Gödel’s theorem)

- A criterion to make optimal hypotheses in learning tasks

- A method to solve analogies and detect anomalies

- A new understanding of machine learning as a way to achieve compression

- Why "unexpected" means "abnormally simple"

- Why coincidences are unexpected

- Why subjective information and interest are due to complexity drop and why relevance, aesthetics, emotional intensity and humour rely on coding.
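Two of the points above, detecting languages and computing similarity through joint compression, are commonly formalized as the Normalized Compression Distance (NCD). Here is a minimal sketch using `zlib`; the sample texts are invented for illustration and are not from the course:

```python
import zlib

def C(s: bytes) -> int:
    """Compressed length: a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: small when x and y share structure,
    because compressing them together exploits their common redundancy."""
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)

english_a = (b"the quick brown fox jumps over the lazy dog and then runs "
             b"across the field toward the old farm house where the farmer "
             b"keeps his sheep")
english_b = (b"a lazy dog sleeps near the farm house while the farmer "
             b"watches the quick brown fox run across the field toward "
             b"the sheep")
french = (b"le rapide renard brun saute par-dessus le chien paresseux puis "
          b"court a travers le champ vers la vieille ferme ou le fermier "
          b"garde ses moutons")

# Same-language texts share vocabulary, so they joint-compress better.
print(ncd(english_a, english_b))
print(ncd(english_a, french))
```

The same distance, with a web search engine's index counts standing in for the compressor, underlies the "use the Web to compute meaning similarity" topic.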


Prerequisites

Examples of what you should know before embarking on this course:

- what a convex curve looks like,

- that log(7^n) is n times log(7)

- that rational numbers have a terminating or eventually periodic decimal expansion,

- that rational numbers are countable, but that real numbers are not,

- that the probability of "A and B" is the probability of "A given B" times the probability of B,

- that 65 is 1000001 in base 2 and 41 in base 16,

- how to compute the sum of a finite geometric series,

- that {'a':1, 'i':0} is a Python dictionary and why list('ab'*4)[::2] yields ['a','a','a','a'],

- that k-means is a clustering method,

- what Bayes’ theorem tells us,

- how Shannon’s information is related to probability,

- that what is called a Turing machine is NOT the machine that Alan Turing (Benedict Cumberbatch) is using in the movie The Imitation Game.
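Several of these prerequisites can be checked directly in Python. This quick self-test (my own sketch, not course material) passes silently if you are comfortable with the facts above:

```python
import math

# log(7**n) is n times log(7)
n = 5
assert math.isclose(math.log(7 ** n), n * math.log(7))

# 65 is 1000001 in base 2 and 41 in base 16
assert format(65, "b") == "1000001"
assert format(65, "x") == "41"

# Sum of a finite geometric series: 1 + r + ... + r**(k-1) = (r**k - 1)/(r - 1)
r, k = 3, 6
assert sum(r ** i for i in range(k)) == (r ** k - 1) // (r - 1)

# {'a': 1, 'i': 0} is a Python dictionary, and [::2] keeps every second element
d = {'a': 1, 'i': 0}
assert isinstance(d, dict)
assert list('ab' * 4)[::2] == ['a', 'a', 'a', 'a']

print("all prerequisite checks passed")
```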


Syllabus


Chapter 1. Describing data

- Complexity as code length

- Conditional Complexity


Chapter 2. Measuring Information

- Complexity and frequency

- Meaning distance


Chapter 3. Algorithmic information & mathematics

- Algorithmic probability, randomness

- Gödel’s theorem


Chapter 4. Machine Learning and Algorithmic Information

- Universal induction - MDL

- Analogy & Machine Learning as complexity minimization


Chapter 5. Subjective information

- Simplicity & coincidences

- Subjective probability

- Relevance




Course Auditing
41.00 EUR
