Bayes Comp is a biennial conference sponsored by the ISBA section of the same name. The conference and the section both aim to promote original research into computational methods for inference and decision making, to encourage the use of frontier computational tools among practitioners and the development of adapted software, languages, platforms, and dedicated machines, and to translate and disseminate methods developed in other disciplines among statisticians.
Bayes Comp is the current incarnation of the
popular
MCMSki series of conferences, and Bayes Comp 2020 is the second
edition of this new conference series. The first edition
was
Bayes Comp 2018, which was held in Barcelona in March of 2018.
Bayes Comp 2020 will take place in the Reitz Union at the University of Florida. It will start in the afternoon on Tuesday, January 7 (2020) and finish in the afternoon on Friday, January 10.
Provide the name and affiliation
of the speaker, as well as a title and an
abstract for the poster. If the poster is
associated with a technical report or
publication, please also provide that
information. Acceptance is conditional on
registration, and decisions will be made
on the fly, usually within a week of
submission. Email your proposal
to Christian
Robert.
Registration fees (USD):

                                Early               Regular             Late
                                (through Aug 14)    (Aug 15 - Oct 14)   (starting Oct 15)
  Student Member of ISBA        125                 150                 175
  Student Non-member of ISBA    165                 190                 215
  Regular Member of ISBA        250                 300                 350
  Regular Non-member of ISBA    350                 400                 450
Please note that:
There are funds available for junior travel support. These funds are earmarked for people who are either currently enrolled in a PhD program, or have earned a PhD within the last three years (no earlier than January 1, 2017). To be eligible for funding, you must be presenting (talk or poster), and be registered for the conference.
Applicants should email the following two items to Jim Hobert: (1) an up-to-date CV, and (2) proof of current enrollment in a PhD program (in the form of a short letter from the PhD advisor), or a PhD certificate showing the graduation date. The application deadline is September 20, 2019.
Blocks of rooms have been reserved at three different hotels:
Trainer: Robert Grant is a medical statistician with 21 years' experience, and a professional trainer and coach for people working in data analysis. He developed and maintains the Stata interface for Stan and frequently teaches introductory courses on Bayesian statistics and data visualization. His personal website is robertgrantstats.co.uk and his company's is bayescamp.com.
Prerequisites: Participants should know the basics of model fitting by MCMC simulation. No experience with Hamiltonian Monte Carlo or Stan is needed, but we will assume an understanding of Bayesian analysis, model comparison, and diagnosing MCMC problems such as non-convergence. Please bring a laptop with one of the Stan interfaces installed; it doesn't matter which one, as we will focus on the Stan code, which is common to all.
Learning outcomes: (1) Know how to get started with Stan via the various interfaces, including the common functionality of checking your model code for errors, translating it to C++, compiling it, sampling from the posterior, summarising the output, and exporting chains. (2) Understand the basics of coding regression models up to multilevel models. (3) Be aware of tricks for more efficient parameterisation. (4) Know how to obtain statistical and graphical diagnostic outputs, recognise problems, and set about debugging. (5) Know how to add a new distribution as a Stan function, expose it to R/Python/Julia for debugging, and use it in the log-likelihood and posterior predictive checks.
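To give a flavour of the model code the course builds on (outcome 2 above), a minimal Stan program for a simple linear regression might look like the following. This is an illustrative sketch, not part of the course materials; the variable names and priors are assumptions.

```stan
// Simple linear regression: y = alpha + beta * x + noise
data {
  int<lower=0> N;       // number of observations
  vector[N] x;          // predictor
  vector[N] y;          // outcome
}
parameters {
  real alpha;           // intercept
  real beta;            // slope
  real<lower=0> sigma;  // residual standard deviation
}
model {
  alpha ~ normal(0, 5);
  beta ~ normal(0, 5);
  sigma ~ normal(0, 2); // half-normal, given the lower bound
  y ~ normal(alpha + beta * x, sigma);
}
```

The same .stan file can be compiled and sampled from by any of the interfaces (RStan, PyStan, CmdStanR/CmdStanPy, Stan.jl), which is why the course concentrates on the shared Stan code.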
NIMBLE is a platform built on top of R that allows methodologists to write algorithms (and modify existing algorithms) in R-like syntax with automatic compilation for fast runtimes via C++ that is auto-generated by the system. NIMBLE gives you access to a variety of tools for ease of implementation: querying of model graphical structure (e.g., parent and child nodes in the model graph), a wide range of mathematical functionality including linear algebra through the Eigen package, calculation of probability density values for nodes in the model graph, simulation of node values, automatic differentiation for gradients, optimization, and storage objects for samples from the model.
This tutorial will introduce you to how to develop algorithms in NIMBLE, including new MCMC samplers and entire new algorithms. We will discuss how developers can build upon NIMBLE's existing algorithms (including a variety of MCMC, Bayesian nonparametric, and SMC methods) to avoid having to reimplement standard methods. Users of methods developed in NIMBLE write their model code in syntax almost identical to BUGS and JAGS but can then apply a variety of algorithms (various MCMC samplers, choosing between samplers, parameter blocking, user-defined samplers, various SMC algorithms, etc.) to the same model. The tutorial will demonstrate how algorithms that you write using NIMBLE are then easily available to users, who can try them out at low cost and compare them to other algorithms available in NIMBLE.
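For orientation, the model-specification side of NIMBLE looks like this: BUGS-like code embedded in R, from which a model object and a default MCMC are built. This is a hedged sketch with illustrative names and priors, not an excerpt from the tutorial.

```r
library(nimble)

# BUGS-like model code: a normal model with unknown mean and sd
code <- nimbleCode({
  for (i in 1:N) {
    y[i] ~ dnorm(mu, sd = sigma)
  }
  mu ~ dnorm(0, sd = 10)
  sigma ~ dunif(0, 10)
})

# Build the model graph and configure NIMBLE's default MCMC for it
model <- nimbleModel(code,
                     constants = list(N = 10),
                     data = list(y = rnorm(10)))
mcmc <- buildMCMC(model)
```

Algorithms written as nimbleFunctions operate on model objects like the one above, which is what makes new samplers immediately usable on any user's model.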
Learning outcomes: The workshop will focus on live demos and hands-on coding. After the workshop, participants will understand (1) how to use NIMBLE to apply algorithms such as MCMC and SMC to fit hierarchical models, (2) how NIMBLE's built-in algorithms are implemented using nimbleFunctions, (3) how to use nimbleFunctions to extend NIMBLE's algorithms, and (4) how to develop algorithms in NIMBLE.
Prerequisites: Participants should have a basic understanding of Bayesian/hierarchical models and of one or more algorithms such as MCMC or SMC. Some experience with R is also expected. Please bring a laptop; we'll give instructions in advance for installing NIMBLE.
Instructor: Chris Paciorek is one of the core developers of NIMBLE (code repository) and an adjunct professor of Statistics at UC Berkeley. He has presented a variety of workshops and courses on NIMBLE and more generally on statistical computing and Bayesian statistics.
PROC MCMC is a general procedure that provides Bayesian inference for a wide range of models. Users are given full control to specify the details of any statistical model. Built-in features enable you to work with non-standard prior or likelihood functions, incorporate your own sampling algorithms, fit multilevel hierarchical models with arbitrary depth and nested or non-nested structures, handle missing data by using a cohesive Bayesian approach, and much more.
PROC BGLIMM, on the other hand, is a specialized procedure for generalized linear hierarchical models. Its simplified syntax greatly reduces the programming burden on users (for example, the CLASS statement handles categorical variables; the REPEATED statement models balanced or unbalanced longitudinal data with repeated measurements). The procedure deploys optimal sampling algorithms that are parallelized for performance and provides convenient access to Bayesian analysis of complex mixed models.
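As an illustration of the level of control PROC MCMC offers, a simple linear regression might be specified along these lines. This is a hedged sketch under assumed names: the dataset (mydata), variables (x, y), and prior choices are hypothetical, not from the tutorial.

```sas
proc mcmc data=mydata nmc=10000 seed=27513 outpost=post;
   parms beta0 0 beta1 0 s2 1;              /* parameters and initial values */
   prior beta0 beta1 ~ normal(0, var=1e4);  /* vague normal priors */
   prior s2 ~ igamma(0.01, scale=0.01);     /* inverse-gamma prior on variance */
   mu = beta0 + beta1 * x;                  /* regression mean */
   model y ~ normal(mu, var=s2);            /* likelihood */
run;
```

The same model in PROC BGLIMM would need only a MODEL statement, which is the trade-off between the two procedures that the tutorial explores.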
This tutorial introduces you to these procedures and illustrates how you can use them to perform a variety of tasks, such as fitting multilevel hierarchical models, modeling missing data, assessing models, simulating large-scale clinical trials, and making predictions.
Learning outcomes: Attendees will learn how to use SAS Bayesian procedures, PROC MCMC and PROC BGLIMM, to conduct Bayesian analysis. This tutorial focuses on the practical use of Bayesian computational methods, and the objective is to equip attendees with computational tools through a series of worked-out examples that demonstrate sound practices for a variety of statistical models and Bayesian concepts.
Prerequisites: A basic understanding of the Bayesian paradigm will be useful. Some knowledge of the SAS language will be helpful but not necessary. You do not need to bring a computer.
Instructor: Fang Chen designed and developed the MCMC procedure. He is a Director of Advanced Statistical Methods at SAS Institute Inc., where he oversees software development in several statistical areas, including Bayesian computation and mixed models. Fang has presented numerous tutorials and workshops on SAS software and various Bayesian applications.
A brief overview of the use of the AutoStat® software will cover the following features:
A key component of using AutoStat® for teaching statistical thinking is in alleviating the need for coding, which allows the instructors to focus on key concepts, questions and outcomes. In this course we will briefly touch on key features of AutoStat®, such as its parallel approach to Bayesian and classical statistics on the GUI, which encourages educators to teach both paradigms within the same course. We will illustrate the project sharing facilities, the calculator tool for “on the fly” demonstrations, tutorial builders and bespoke output creation.
Presenters: Dr Chris Strickland & Dr Clair AlstonKnox
Chris and Clair both work at the AutoStat® Institute (Melbourne, Australia). They have previously worked together in Professor Kerrie Mengersen’s Bayesian Research Group (QUT, Australia).
Their combined work experience involves research positions in both academia and industry, having worked at NSW Agriculture, Bank of Queensland, Monash University, Queensland University of Technology, University of Queensland, Griffith University, University of NSW, Newcastle University, Predictive Analytics Group, Soil Conservation Service and NSW Sport and Recreation.
ISBA takes very seriously any form of misconduct, including but not limited to sexual harassment and bullying. All meeting participants are expected to adhere strictly to the official ISBA Code of Conduct. Following the safeISBA motto, we want ISBA meetings to be safe and to be fun. We encourage participants to report any concerns or perceived misconduct to the meeting organizers, Jim Hobert and Christian Robert. Further suggestions can be sent to safeisba@bayesian.org.