how google and facebook are using R

(March 26th Update: Video now available) 

Last night, I moderated our Bay Area R Users Group kick-off event with a panel discussion entitled “The R and Science of Predictive Analytics”, co-located with the Predictive Analytics World conference here in SF.

The panel comprised four recognized R users from industry:

  • Bo Cowgill, Google
  • Itamar Rosenn, Facebook
  • David Smith, Revolution Computing
  • Jim Porzak, The Generations Network (and Co-Chair of our R Users Group)

The panelists were asked to explain how they use R for predictive analytics within their firms, to describe its strengths and weaknesses as a tool, and to provide a case study. What follows is my summary with comments.

Panel Introduction

I began by describing R as a programming language with strengths in three areas: (i) data manipulation, (ii) statistics, and (iii) data visualization.

What sets it apart from other data analysis tools? It was developed by statisticians, it’s free software, and it is extensible via user-developed packages — there are nearly 2000 of them as of today at the Comprehensive R Archive Network or CRAN.

Many of these packages can be used for predictive analytics. Jim highlighted Max Kuhn’s caret package, which provides a wrapper for accessing dozens of classification and regression models, from neural networks to naive Bayes.
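To give a flavor of this, here is a minimal sketch of driving two different model types through caret's single train() interface (the customers data frame and churn outcome are invented for illustration):

    # install.packages("caret")   # pulls in modeling packages as needed
    library(caret)

    # Hypothetical data: predict customer churn (a factor) from usage features.
    # train() exposes dozens of model types through its 'method' argument.
    fit_nnet <- train(churn ~ ., data = customers, method = "nnet")  # neural network
    fit_nb   <- train(churn ~ ., data = customers, method = "nb")    # naive Bayes

    # Compare resampled performance of the two models
    summary(resamples(list(nnet = fit_nnet, nb = fit_nb)))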

Bo Cowgill, Google

R is the most popular statistical package at Google, according to Bo Cowgill, and indeed Google is a donor to the R Foundation. He remarked that “The best thing about R is that it was developed by statisticians. The worst thing about R is that… it was developed by statisticians.” Nonetheless, he is optimistic: as the R developer community has expanded, R’s documentation and performance have both improved.

One theme that Bo raised first, and which others echoed, was that while Google uses R for data exploration and model prototyping, it is not typically used in production: in Bo’s group, R runs in a desktop environment.

The typical workflow Bo described for using R was: (i) pulling data with some external tool, (ii) loading it into R, (iii) performing analysis and modeling within R, and (iv) re-implementing the resulting model in Python or C++ for a production environment.
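In miniature, that loop might look something like this (the file and variable names are invented, and the hand-off format would vary by team):

    # (i)-(ii) data pulled by an external tool lands in a flat file; load it into R
    events <- read.csv("events_extract.csv")

    # (iii) explore and prototype a model interactively
    summary(events)
    model <- glm(clicked ~ position + query_length,
                 data = events, family = binomial)

    # (iv) export the fitted coefficients, to be re-implemented
    # in Python or C++ for production
    write.csv(coef(model), file = "model_coefficients.csv")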

Itamar Rosenn, Facebook

Itamar conveyed how Facebook’s Data Team used R in 2007 to answer two questions about new users: (i) which data points predict whether a user will stay? and (ii) if they stay, which data points predict how active they’ll be after three months?

For the first question, Itamar’s team used recursive partitioning (via the rpart package) to infer that just two data points are significantly predictive of whether a user remains on Facebook: (i) having more than one session as a new user, and (ii) entering basic profile information.
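Itamar didn't share the actual features, but the shape of such an analysis with rpart is roughly this (the data frame and column names are invented):

    library(rpart)

    # Hypothetical new-user data: did the user stay (a yes/no factor),
    # given signals observed during their first days on the site?
    tree <- rpart(stayed ~ num_sessions + profile_fields_filled + invites_sent,
                  data = new_users, method = "class")

    printcp(tree)            # cross-validated error at each tree size
    plot(tree); text(tree)   # draw the fitted partitioning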

For the second question, they fit the data to a logistic model using a least angle regression approach (via the lars package), and found that activity at three months was predicted by variables related to three classes of behavior: (i) how often a user was reached out to by others, (ii) frequency of third party application use, and (iii) what Itamar termed “receptiveness” — related to how forthcoming a user was on the site.
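The lars package itself fits the least angle regression path for a linear model; how the team coupled it to a logistic model wasn't detailed, so this sketch shows only the basic lars call (all variable names invented):

    library(lars)

    # Predictors as a numeric matrix, outcome as a vector (hypothetical names)
    x <- as.matrix(new_users[, c("msgs_received", "app_uses", "profile_answers")])
    y <- new_users$activity_3mo

    fit <- lars(x, y, type = "lar")  # least angle regression
    plot(fit)                        # coefficient paths as variables enter the model
    cv.lars(x, y)                    # cross-validated error along the path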

David Smith, Revolution Computing

David’s firm, Revolution Computing, not only uses R, but R is their core business. David said that “we are to R what Red Hat is to Linux”. His firm addresses some of the pain points of using R, such as (i) supporting older versions of the software and (ii) providing parallel computing in R through their ParallelR suite.

David showcased how one of their life sciences clients used R to classify genomic data using the randomForest package, and how growing the forest’s classification trees could be easily parallelized using their ‘foreach’ package.
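Since a random forest is an ensemble of independently grown trees, the parallelization is embarrassingly simple: grow sub-forests on separate workers and merge them. A hedged sketch (this uses the doParallel backend; Revolution's suite shipped its own backends, and the genomic data names here are stand-ins):

    library(randomForest)
    library(foreach)
    library(doParallel)          # one of several parallel backends for foreach
    registerDoParallel(cores = 4)

    # Grow a 1000-tree forest as four 250-tree forests in parallel, then merge
    # them with randomForest's combine(). expr_matrix and tumor_class are
    # invented stand-ins for the (unshared) genomic data.
    rf <- foreach(ntree = rep(250, 4), .combine = combine,
                  .packages = "randomForest") %dopar%
      randomForest(x = expr_matrix, y = tumor_class, ntree = ntree)

    print(rf)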

He also mentioned that several firms they have worked with do use R in production environments, whereby a particular script is exposed on a server, and a client calls it with some data to return a result (several ways exist to set up R in a client-server manner, such as Rserve, rapache, and Biocep).
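With Rserve, for instance, the server exposes a live R session and clients connect over a socket. A rough sketch (the hostname, model file, and data are invented; the RSclient package is one way to call in from R, though production clients are more often Java or PHP):

    # Server side: start Rserve (also launchable as `R CMD Rserve`)
    library(Rserve)
    Rserve()   # listens on port 6311 by default

    # Client side, from another R session
    library(RSclient)
    conn <- RS.connect(host = "analytics-box")              # hypothetical host
    RS.eval(conn, model <- readRDS("model.rds"))            # load model remotely
    RS.assign(conn, "newdata", incoming_row)                # ship client data over
    score <- RS.eval(conn, predict(model, newdata))         # score and return
    RS.close(conn)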

David evangelizes and educates about R at the Revolutions blog.

Jim Porzak, The Generations Network

Jim, who also co-chairs our R Users Group, gave a brief overview of his PAW talk on using R for marketing analytics. In particular, Jim has used the flexclust package to cluster customer survey data for Sun Microsystems, applying the resulting profiles to identify high-value sales leads.
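The flexclust workflow for that kind of segmentation looks roughly like this (the survey data and segment count are invented; the actual analysis was Jim's):

    library(flexclust)

    # Cluster hypothetical numeric survey responses into 5 segments
    set.seed(42)   # k-means-style initialization is random
    segments <- kcca(survey_responses, k = 5, family = kccaFamily("kmeans"))

    summary(segments)
    barchart(segments)                 # profile plot of each segment
    predict(segments, new_leads)       # assign fresh leads to a segment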

During the Q & A session, the panelists were asked several questions.

How do you work around R’s memory limitations? (R workspaces are stored in RAM, and thus their size is limited)

Three responses were given (including one from the audience):

(i) use R’s database connectivity (e.g. RMySQL) and pull in only slices of your data, (ii) downsample your data (do you really need a billion data points to test your model?), or (iii) run your scripts on a RAM-obsessed colleague’s machine, or fire up a virtual server on Amazon’s compute cloud, which offers instances with up to 15 GB of RAM.
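Option (i) in miniature, using RMySQL (the connection details, table, and columns are all invented):

    library(RMySQL)

    con <- dbConnect(MySQL(), dbname = "warehouse", host = "db-host",
                     user = "analyst", password = "...")

    # Pull only a manageable slice rather than the whole table
    slice <- dbGetQuery(con, "
      SELECT user_id, feature_a, feature_b
      FROM events
      WHERE event_date >= '2009-01-01'
      LIMIT 100000")

    dbDisconnect(con)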

What’s the general ramp-up process for groups wanting to use R?

Itamar and Bo both indicated that within their groups, almost everyone arrived having already learned R during their university studies. Jim Porzak led an R tutorial at his previous firm using an internal slide deck.

How easy is it for developers who are not statisticians to learn R?

The consensus seemed to be that R is a difficult language to achieve competency in, vis-a-vis Python, Perl, or other high-level scripting languages. Jim emphasized, however, that he is not a statistician – nor were any of our panelists. (As a non-statistician R user myself, I will say this: a consequence of learning R is an improved grasp of statistics. Knowing statistics is a necessary prerequisite for understanding R’s features, from its data types to its modeling syntax.)

How well does R interface with other tools and languages?

There are several packages on CRAN for importing and exporting data to and from Matlab (RMatlab), S-PLUS, SAS, Excel, and other tools. In addition, there are interfaces for running R within Python (RPy) and Java (rJava).
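For the common case of data saved by other statistics packages, the foreign package (which ships with R) is often all you need; a small sketch with invented file names:

    library(foreign)

    # Import datasets saved by other statistics packages
    survey <- read.spss("survey.sav", to.data.frame = TRUE)   # SPSS
    panel  <- read.dta("panel.dta")                           # Stata

    # Export a data frame plus a code file that SAS can use to read it
    write.foreign(panel, datafile = "panel.txt",
                  codefile = "panel.sas", package = "SAS")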

The panelists mentioned that they typically run R within a GUI, either R Commander or Rattle. (Aside: I run R exclusively in emacs using ESS — incidentally, one of its authors was panelist David Smith).

A video of the event is now available courtesy of Ron Fredericks and LectureMaker.

COMMENTS

  1. timothy vogel on April 30th, 2009

Jim emphasized, however, that he is not a statistician – nor were any of our panelists. (As a non-statistician R user myself, I will say this: a consequence of learning R is an improved grasp of statistics. Knowing statistics is a necessary prerequisite for understanding R’s features, from its data types to its modeling syntax.)

R is like any other statistical package; it can help you gain inference if you know what you’re doing. Otherwise, it will produce output much like any poorly written program that doesn’t actually accomplish the task at hand.

As a 30-year statistician from a top-10 graduate program, I am increasingly distressed by the dominance computer scientists are gaining in the “analytics field” merely for their increased access to the platforms involved and higher-than-average keyboard skills.

flexclust in R is like fastclus in SAS and Minitab’s old cluster procedure. SPSS, Matlab, Pstat, and Mathematica all have analogs. The truly disturbing aspect of this dynamic is that good statisticians are quite likely to give comp sci types their due, but the comp sci types are trying to corner the analytics market via their sheer advantage in platform expertise and access!

  2. Michael Wexler on February 22nd, 2009

    Great post! The all-in-memory problem will continue to hold back R’s utility, but there are some great efforts afoot to fix this, everything from parallelization to new ways to store the data in a memory-mapped-to-disk approach (ala S, SPSS, SAS, etc.) For example, see http://www.r-project.org/conferences/useR-2007/program/posters/adler.pdf as a promising approach.
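    That poster describes the ff package; the memory-mapped idea in miniature (assuming ff's basic API, with invented names):

        library(ff)

        # A disk-backed vector of 100 million doubles; only the chunks
        # actually touched are pulled into RAM
        x <- ff(vmode = "double", length = 1e8)

        x[1:5] <- c(1, 2, 3, 4, 5)   # writes go to the memory-mapped file
        mean(x[1:1e6])               # read a slice, compute on it in RAM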

    I don’t believe sampling is always the right answer; unless you understand the underlying distribution, your sampling could cause you to miss some subtle effects. I look forward to these new approaches which can handle all the data at once…

    For those interested in more hints and tips around using R that I’ve collected in my journey from novice to dangerous novice, please see http://www.nettakeaway.com/tp/?s=R where I review some GUIs, collected links, tips about coming from an SPSS environment, etc.

the case for open source data visualization

When I was in graduate school, the most closely studied part of the scientific publications we read was not the results, but the methods sections. (It was also, incidentally, often the hardest section to write for one’s own publications.) Methods sections are wonderful because they allow you to verify that someone else’s work is correct — by reproducing it yourself. But more importantly, methods sections allow you to build upon the work of others. They are the open source code of science.

Unfortunately, for all but a small fraction of data visualizations on the web, there are no methods sections being published. This is a shame, because it slows the free flow of ideas and prevents the creative extension of other people’s work.

Three conditions must be met for a data visualization to be considered open and reproducible:

  • Open Tools — The software tool used for the visualization must be freely available. Thankfully, many of the most powerful visualization software tools, languages, and frameworks are now open source, such as Processing, Prefuse, Actionscript, and R.
  • Open Code (or Methods) — The actual code, script, and/or series of steps taken to generate the visualization must be published. (For example, Lee Byron released his code for a walkability heatmap of San Francisco; see the minimal R sketch after this list.)
  • Open Data — The data which is visualized should also be available, in the same washed-and-scrubbed format that was used for the visualization. Ideally, any code used to clean up the data would also be shared.
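Meeting all three conditions can be as simple as publishing a single self-contained script alongside the data; in R, for instance (the URL and column names are placeholders):

    # Open code: everything needed to redraw the figure, in one script
    # Open data: the cleaned CSV is published at a stable URL
    box_office <- read.csv("http://example.com/box_office.csv")

    png("box_office.png", width = 800, height = 500)
    plot(box_office$year, box_office$receipts, type = "l",
         xlab = "Year", ylab = "Receipts ($M)",
         main = "Box office receipts by year")
    dev.off()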

I grade some of the web’s existing data visualization sites using these criteria.

  • The New York Times routinely creates stunning graphics (like a visualization of 22 years of box office receipts), but we are left to guess how they were created. Grade: D
  • VisualComplexity, a graphics gallery of mostly complex networks (like genome networks), has pretty images but neither data nor visualization code. Grade: D
  • IBM’s ManyEyes has gorgeous visualizations (some of which are made with the Prefuse toolkit), and while the data is made available, the source code for the visualization is not. Grade: C
  • Processing’s exhibition page highlights several extraordinary visualizations created with its open-source framework. But unfortunately, no source code is available from the visual artists. Grade: C
  • The R Graphics Gallery does make source code for graphics available, but in more than half of the cases, no data is available. Grade: B

a simple lesson on public speaking

via NetApp Founder Dave Hitz, in his memoir How to Castrate a Bull:

“In public speaking, most people focus too much on the data that they want to present to their audience. Whenever I asked Tom [Mendoza] for advice, he would always ask how I wanted the audience to feel after my talk… ‘Disappointed – in our performance. Proud – of our new release.’

Next, Tom would ask what action I wanted people to take after my presentation.  ‘If you don’t want them to do anything different, why are you wasting your time talking with them?’  If you’ve reached an important milestone in a project, you might want people to feel proud of what they’ve accomplished so far, but to keep working hard until they’re done.  If a project is way off track, the feeling of disappointment could motivate people to accept and engage a new approach.

When you are clear about the feelings and actions that you hope to inspire, then – and only then – should you start to worry about the content, about what data to share to inspire those feelings.”