We're very happy to have Zach Sunberg (http://asl.stanford.edu/people/zachary-sunberg/) from the Stanford Intelligent Systems Lab (http://sisl.stanford.edu/) (SISL) speak to us on partially observable Markov decision processes in Julia. Before Zach's talk, Chris Peel (https://twitter.com/christianpeel?lang=en) will briefly present the LLLplus (https://github.com/christianpeel/LLLplus.jl) package for lattice reduction and ask for feedback, and we will briefly review the roadmap to Julia 1.0.



Zach's abstract: Safe and flexible autonomy, especially in systems like self-driving cars, is one of the most immediately important goals in artificial intelligence. The partially observable Markov decision process (https://en.wikipedia.org/wiki/Partially_observable_Markov_decision_process) (POMDP) is a tool for modeling decision-making problems under uncertainty. In a POMDP, an agent must make a series of decisions based on stochastic observations to try to maximize a reward function. Though a POMDP is a very expressive tool for formalizing a real-world problem, the approach is rarely used in practice because of its extreme computational demands. Julia is an ideal tool for studying and solving POMDPs because it simultaneously provides the computational power to tackle large problems and the expressiveness to easily implement a wide range of problems and solution techniques. However, it also lacks some features (e.g., interfaces) that would make communication between programmers easier. This talk will focus on SISL's POMDPs.jl (https://github.com/JuliaPOMDP/POMDPs.jl) package and the challenges and successes we have had in building it and other related packages.
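To give a flavor of the "decisions based on stochastic observations" idea in the abstract, here is a minimal, self-contained Julia sketch of a Bayesian belief update over a two-state toy problem. This is a hypothetical illustration of the underlying math, not the POMDPs.jl API; all names (`obs_prob`, `update`, the `:left`/`:right` states) are invented for this example.

```julia
# Toy POMDP-style belief update (hypothetical example, plain Julia).
# Two hidden states; the agent never sees the state directly,
# only a noisy observation, and maintains a belief (a probability
# distribution over states) that it updates with Bayes' rule.

states = [:left, :right]

# P(observation | state) for one particular observation the agent received.
obs_prob = Dict(:left => 0.85, :right => 0.15)

# Start with a uniform belief over the two states.
belief = Dict(:left => 0.5, :right => 0.5)

# Bayes' rule: multiply prior belief by the observation likelihood,
# then renormalize so the belief sums to one.
function update(belief, obs_prob)
    unnorm = Dict(s => belief[s] * obs_prob[s] for s in keys(belief))
    z = sum(values(unnorm))
    return Dict(s => p / z for (s, p) in unnorm)
end

b1 = update(belief, obs_prob)
println(b1)  # belief shifts toward :left after the observation
```

In a full POMDP solver this update runs inside the planning loop, and the policy maps beliefs (rather than states) to actions, which is where the computational demands mentioned in the abstract come from.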



Come 15 min early for pizza!