Life of Brian

R is a language and environment for statistical computing and graphics. It is a GNU project similar to the S language and environment, which was developed at Bell Laboratories (formerly AT&T, now Lucent Technologies) by John Chambers and colleagues. R provides a wide variety of statistical techniques (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, …) and graphical techniques, and is highly extensible.
Extending R in the direction of collecting data from the web is the wider scope of this workshop. We will learn how to write a simple web crawler for a specific site (it can be your favorite online forum that you frequently visit). A web crawler is a program that traverses HTML or XML source code and searches for and filters out the data we consider relevant. We will then present the collected data graphically and draw conclusions from the visible trends and relationships.
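To give a feel for what such a crawler looks like, here is a minimal sketch in R using the rvest package. The URL and the CSS selectors (`.thread-title`, `.thread-author`, `a.next`) are placeholders, not taken from any real forum; you would adapt them to the markup of the site you actually crawl.

```r
library(rvest)

# Scrape one forum page: extract the data we care about and the link to
# the next page. The selectors below are illustrative assumptions.
crawl_page <- function(url) {
  page <- read_html(url)
  list(
    posts = data.frame(
      title  = html_text2(html_elements(page, ".thread-title")),
      author = html_text2(html_elements(page, ".thread-author")),
      stringsAsFactors = FALSE
    ),
    next_url = html_attr(html_element(page, "a.next"), "href")
  )
}

# Follow "next page" links up to a small limit, pausing between requests
# so we do not hammer the server.
crawl_forum <- function(start_url, max_pages = 5) {
  results <- list()
  url <- start_url
  for (i in seq_len(max_pages)) {
    page <- crawl_page(url)
    results[[i]] <- page$posts
    if (is.na(page$next_url)) break
    url <- page$next_url
    Sys.sleep(1)
  }
  do.call(rbind, results)
}
```

The result is a single data frame of posts, ready for filtering and plotting with R's usual tools.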
In the example of your favorite online forum or community web site, you might eventually be able to visually present your online life and activities.
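As a small taste of that kind of visualization, the sketch below counts posts per month and draws a bar chart with base R graphics. The `posts` data frame and its `date` column are made-up sample data standing in for whatever your crawler collects.

```r
# Illustrative sample data; in the workshop this would come from the crawler.
posts <- data.frame(
  date = as.Date(c("2012-01-10", "2012-01-25", "2012-02-03", "2012-03-14"))
)

# Count posts per year-month and plot the activity over time.
per_month <- table(format(posts$date, "%Y-%m"))
barplot(per_month,
        main = "My forum posts per month",
        xlab = "Month", ylab = "Posts")
```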
(Prerequisites for this workshop are some basic programming knowledge, not necessarily in R, and a fascination with the power of data.)