Approaches to live music synthesis for multivariate data analysis in R

SatRday #1

With modern advances in computing and the increasing abundance of digital data, it is becoming both feasible and necessary to expand our data analysis methods beyond conventional mathematical modeling and data visualization. Music and other emerging data analysis paradigms present the opportunity to represent high-dimensional data in intuitive and accessible forms. In this talk I will introduce the concept of plotting data as music and demonstrate some approaches to live music synthesis that are available within the R ecosystem.

  1. Introduction to data music
  2. Approaches to live music in R
  3. Relevance of data music

1. Introduction to data music

Why we plot data in the first place

Let's talk about why we plot data.

As I see it, data are abstract numbers with no fundamental representation in the real world, and the goal of data analysis is to get these numbers into our brains.

In order to perceive the data, we must convert them to a metaphor that we understand.

We typically use the metaphors of scatterplots

and tables, but anything could represent our data; for example, perhaps we could even use

kebabs as a metaphor.

In practice, we usually map the data to graphs, and not to kebabs, and we call this process "data visualization".

Similarly, we could map the data to music. We'd have to call that something else--maybe "musicification".

Synthesizing audio with R

Let's start with an example.

We can compose this sort of music with vector operations in R.

Sound is a series of air pressures

Your ear recognizes the changes in air pressure as sound.

Ear

Digitally, we can represent sound as a series of air pressure numbers, and we can create sound by moving a speaker back and forth.

Speaker in

If the speaker is all the way in, let's say that the number is negative one,

Speaker out

and if it's all the way out, let's say the number is positive one.

png('sine.png', width = 800, height = 450)
curve(sin, 0, 4 * pi, bty = 'l',
      xlab = 'Time', ylab = 'Air pressure')
dev.off()

When you see sound represented as a sine wave, you're seeing a plot of air pressure over time.

Sine wave

Our ear pays attention mostly to how the air pressure changes, not to the absolute pressure value. For example, if the air pressure goes up and down at a higher frequency, we perceive the sound as higher pitched.
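As a small sketch of this idea (the function name here is my own, not from the talk): a pure tone is just a sampled sine wave of air pressures, and doubling the frequency raises the pitch by one octave.

```r
sample.rate <- 48000 # samples per second

# One second of a pure tone at the given frequency, as air
# pressures between -1 (speaker in) and 1 (speaker out).
tone <- function(freq, seconds = 1, rate = sample.rate) {
  t <- seq_len(seconds * rate) / rate
  sin(2 * pi * freq * t)
}

a4 <- tone(440) # concert A
a5 <- tone(880) # one octave higher: same shape, twice the frequency
```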

"Pulse-code modulation"
(PCM)

Converting Western musical notes to frequencies

One way to think about pitch is as a Western musical scale.

There are twelve base notes in most Western music: A, A-sharp, B, and so on. We can represent these as the numbers 1 to 12, for example, where A is 1.

A <- 1
A.sharp <- 2
B <- 3
# ...

Note that you can create music without this Western concept of notes; this is just an example that I think will be familiar to some people.

Once we have notes represented as numbers, we can determine the frequencies that we need to play through our speakers to produce that note.

# Frequency of note n, relative to reference note a at P.a Hz
P.n <- function(n, P.a=440, a=0)
  P.a * (2^(1/12))^(n - a)

P.n(7, 440, 1) # six half steps above A440: ~622.25 Hz

# From data to frequencies
major.scale <- c(0, 2, 4, 5, 7, 9, 11) # half steps
x <- round(ChickWeight$weight/100) + 1 # 1-based index within the scale
notes <- major.scale[x]
freqs <- P.n(notes, 440, 0)

Here we have converted our data into frequencies in a particular scale, or key, but we still haven't generated the sound. Let's do that.

Musical form

If we plot data in the form of music, we are mapping changes in the data to musical metaphors.

  • Rhythm
  • Pitch
  • Tempo
  • Volume

For example, we might use data to vary the rhythm, the pitch, the tempo, or the volume of a phrase of music.

SECOND <- 48000
beat <- SECOND/3
P.n <- function(n, P.a=440, a=0)
  P.a * (2^(1/12))^(n - a)

# Rhythm
set.seed(1677724)
press <- seq(1, 0, length.out=beat)
silence <- rep(0, beat)
rhythm <- unlist(list(press, silence)[floor(runif(8, 1, 2.5))])

# Pitch
frequencies <- P.n(c(0, 4, 7, 11))

# Combine
unit <- rhythm * c(sapply(frequencies,
        function(f) sin(2*pi*f*(1:(beat*4))/SECOND)))

# Volume
wave <- c(unit, unit*3/5, unit*4/5, unit*3/5)

2. Approaches to live music in R

When I program music in R, I have usually used very slow and complex programs. This can create a delay of more than a minute between when I program a change and when I hear how it affects the music. I have been making data music in R for some time now, so I can cope with having very little feedback while I edit, but feedback is very important for beginners.

Send PCM to a standard blocking sound server

My first thought was to use a typical sound server.

  • sndio
  • OSS
  • PulseAudio
  • JACK
  • (Or directly to a sound card)

The sound server has drivers for various hardware sound cards, and can do things like convert audio formats, mix different sound sources together, and route audio in interesting ways.

Spreadsheet → R Data → R PCM → Sound server → Audio hardware

We can interface with these servers by sending them raw pulse-code modulation (PCM) audio.

As an example, I implemented an OSS driver. You can indeed start playing music very quickly with this driver, but we have a new problem: You cannot do anything else while the music is playing.
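A minimal sketch of what such a blocking driver might look like, assuming an OSS device at /dev/dsp that accepts signed 16-bit little-endian samples (the device path, sample format, and function names are my assumptions, not the talk's actual implementation):

```r
# Convert air pressures in [-1, 1] to signed 16-bit integers.
to.s16 <- function(x) as.integer(round(x * 32767))

play.oss <- function(x, device = '/dev/dsp') {
  dsp <- file(device, open = 'wb')
  # writeBin blocks until the device has accepted every sample,
  # so nothing else can run in R while the sound plays.
  writeBin(to.s16(x), dsp, size = 2, endian = 'little')
  close(dsp)
}
```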

     Situation with the blocking sound server

Main ├── Play first sound. ── Play second sound. ──→

     ┌─────┬─────┬─────┬─────┬─────┬─────┬──→
     0     1     2     3     4     5     6
                                      Time

The issue is that I did not implement any sort of concurrency for the sound-playing, as that is annoying in R. In a language with better threading or multiprocessing, I would have done something like this.

Main ├─┬────┬────────┬─ Do something else. ─→
       │    │        └─ Play third sound. ──┤
       │    └─ Play second sound. ──┤
       └─ Play first sound. ─────────────┤
     ┌─────┬─────┬─────┬─────┬─────┬─────┬──→
     0     1     2     3     4     5     6
                                      Time

Send PCM to another program

R      ├────┬────┬─ Do something else. ─→
            ↓    ↓
Player ├────┬────┬─ Wait for more audio from R. ─→
            │    └─ Play second sound. ──┤
            └─ Play first sound. ─────────────┤

       ┌─────┬─────┬─────┬─────┬─────┬─────┬──→
       0     1     2     3     4     5     6
                                        Time

This may seem messy, but it is typical of R; we make R interfaces to everything else so that we may use other software while pretending to be using the R we are comfortable with.

I implemented something like this with web audio in a browser. I think sending PCM to another program is a good idea, but I would prefer a much simpler implementation than the one I came up with.
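A much simpler version of the same idea, as a sketch: R's pipe() can stream raw PCM to any command-line player, and the player, rather than R, then worries about the sound card. Here I assume ALSA's aplay is installed; the flags are standard aplay options for mono 16-bit little-endian audio at 48 kHz.

```r
x <- sin(2 * pi * 440 * (1:48000) / 48000) # one second of A440

# Stream 16-bit little-endian PCM to aplay through a pipe;
# only attempt it if aplay is actually installed.
if (nzchar(Sys.which('aplay'))) {
  player <- pipe('aplay -f S16_LE -r 48000 -c 1', open = 'wb')
  writeBin(as.integer(round(x * 32767)), player,
           size = 2, endian = 'little')
  close(player)
}
```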

Send high-level commands to another program

This is the same idea as sending raw PCM except that we are moving more of the music logic to the other program.

  • Timidity, &c. (MIDI)
  • Supercollider
  • AudiolyzR
  • Javascript

Shiny and Javascript audio

My dream for a while has been an htmlwidgets widget that alters a parametrized Javascript song based on the data. But I have not managed to understand how to create one.

Exercise for you: Finish this htmlwidgets widget.

Generate a file (not quite live)

I mostly use this last approach. I generate small chunks of audio and have a small R function that converts them to files and plays them with a command-line player.

  1. Create audio as a numeric vector in R.
  2. Save to a file.
  3. Play the file with a command-line player.

I think the most popular approach at present is to generate wave files with tuneR and then play them with some command-line audio player; lately I use raw PCM files rather than wave files, but it is the same idea.

  • Wave (tuneR, seewave)
  • Raw PCM (Tom)
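For the wave-file route, tuneR's Wave class and writeWave handle the file format; a sketch, assuming tuneR is installed and mplayer is the player of choice:

```r
library(tuneR)

x <- sin(2 * pi * 440 * (1:48000) / 48000)     # one second of A440
w <- Wave(left = as.integer(round(x * 32767)), # 16-bit integer samples
          samp.rate = 48000, bit = 16)
writeWave(w, '/tmp/tone.wav')
system2('mplayer', '/tmp/tone.wav')            # or any other player
```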

I use the following function to render the PCM audio.

#' IEEE 754 double-precision floats (f64le)
write.pcm <- function(x, filename) {
  if (any(is.na(x)) || min(x) < -1 || max(x) > 1)
    stop('You must normalize and remove NAs.')

  dsp <- file(filename, open='w+b')
  writeBin(as.double(x), dsp, endian='little')
  close(dsp)
}

I can convert it to other audio formats with ffmpeg.

SAMPLE.RATE <- 48000

x <- rnorm(SAMPLE.RATE) # One second of white noise
write.pcm(x/max(abs(x)), '/tmp/whitenoise.pcm')

system2('ffmpeg', c('-f', 'f64le', '-acodec', 'pcm_f64le',
                    '-ar', SAMPLE.RATE,
                    '-i', '/tmp/whitenoise.pcm',
                    '-y', '/tmp/whitenoise.ogg'))

system2('mplayer', '/tmp/whitenoise.ogg')

A path to better live audio

Two promising approaches, with different use cases

  1. Operating system sound server
  2. Shiny htmlwidgets with sound synthesis in Javascript

R's convenient matrix arithmetic makes it an ideal language for manipulating PCM audio, so I would prefer that we keep constructing the PCM audio in R and use other programs only to get around R's concurrency limitations.

R      ├────┬────┬─ Do something else. ─→
            ↓    ↓
Player ├────┬────┬─ Wait for more audio from R. ─→
            │    └─ Play second sound. ──┤
            └─ Play first sound. ─────────────┤

Shiny

Inputs from browser → R → Music parameters → Audio synthesis in browser
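A minimal sketch of that pipeline, assuming the actual synthesis lives in browser-side Javascript (the message name 'music-params' and the synth handler are my own placeholders, not an existing package):

```r
library(shiny)

ui <- fluidPage(
  sliderInput('tempo', 'Tempo (beats/min)', min = 40, max = 200, value = 120),
  # Placeholder: browser-side code registers a handler for our messages
  # and would feed p.tempo into a Web Audio synthesizer.
  tags$script("Shiny.addCustomMessageHandler('music-params',
                 function(p) { /* update the synth with p.tempo */ });")
)

server <- function(input, output, session) {
  observe({
    # Push the current music parameters to the browser.
    session$sendCustomMessage('music-params', list(tempo = input$tempo))
  })
}

# shinyApp(ui, server)  # uncomment to run the app
```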

3. Relevance of data music

Now I'll take a moment to comment on the broader relevance of data music.

People who can't see

I think the most obvious reason to plot data as music is to present them to people who can't see. I haven't done any work in this direction, however.

Teach an intuition about data collection

Platonic data do not exist in our real world. Exposing ourselves to diverse data representations may help us get a better intuition about the collection and analysis of quantitative information.

Relevance to data science: Pretend that you are doing something useful

A lot of times, data are practically random noise that cannot be taken to be indicative of anything that we should care about. In data science we are usually in the business of pretending that such data are useful.

We should never forget the crucial distinction between the salesman and the scientist. It is the salesman's duty to please his customers or, if that is too difficult, to fool them.

Edsger W. Dijkstra

This next song is a great example of that.

I made this song, but I don't really know what "activity generated" is, or what any of the other underlying variables are; it's just what the column was called in the spreadsheet. Nobody cares whether this music of mine is informative; people just think "music is cool, and data are cool, so data music is really cool".

Data science is about supporting the illusion that computers will save us by turning random data into gold and eternal youth. Data music can help you:

  • Convince your clients that your product/service is high-tech and visionary.
  • Convince your workers that data science methods are interesting enough to make up for terrible jobs and dismal career prospects.
  • Convince yourself that you are doing something interesting with your life.

In data science, our main focus is to maintain the illusion that we are doing something high-tech and visionary so that our clients will continue to buy our snake oil. And data music is also very useful for that.

Relevance to data analysis: Escape Flatland

While good data analysis has never been an important part of my work, it is still valuable in other parts of life.

Multivariate representations of data help us notice trends that we did not expect, especially trends among several variables. With an understanding of these trends, we can better develop simple models that capture much of the variation in our data.

Hopping Cities

The main potential I see in data music videos is thus in presenting high-dimensional data.

For our visualizations to reveal unexpected patterns, it is important that we present many dimensions at once. Edward Tufte says this a lot.

As I said earlier, we have historically represented our abstract data as visuals and then looked at them with our eyes. This used to work, but the approach is reaching its limits in the age of big data.

As you can see, today we have big data. Data visualization does not provide enough sensory bandwidth to represent the high-variety data that are so common today.

I have been trying to use our non-visual senses to increase our sensory bandwidth.

Music video is one way of adding the sense of sound. But why stop there?

We should look for ways of using more of our senses to increase our sensory bandwidth, so I have also been exploring the use of food for plotting. I call this process "data gastronomification".

As I mentioned earlier, we need to convert abstract data into something that we can perceive.

Usually this means graphs or tables, but could we plot our data as kebabs?

Well, in fact, we can, and we can do it in ggplot.

library(ggplot2)
library(geomdoner)

mpg$truck <- factor(mpg$class) # convert to a factor so we can remap levels
levels(mpg$truck) <- c(TRUE,FALSE,FALSE,FALSE,TRUE,FALSE,TRUE)

mpg$y2008 <- mpg$year == 2008 # Alternative is 1999
mpg$id <- row.names(mpg)

set.seed(693)
ggplot(mpg[sample.int(nrow(mpg), 8),]) +
  aes(label = paste0('Make #', id, ' (', manufacturer, ' ', model, ')'),
      border = drv,
      knoblauch = truck,
      scharf = grepl('auto', trans),
      zwiebeln = y2008,
      tomaten = TRUE, salat = TRUE,
      x = hwy, y = cty) +
  xlab('Highway miles per gallon') +
  ylab('City miles per gallon') +
  ggtitle('Mileage of eight automobile makes.\n(Each döner is a make.)') +
  geom_text() + geom_doner()

We can use the geomdoner package to plot our data as kebabs. This ggplot code produces a text graph

and a bunch of orders for döner kebabs.

Make #142 (nissan altima): döner box

* ohne knoblauch
* ohne kräuter
* ohne scharf
* ohne zwiebeln
* mit tomaten
* mit salat

Make #13 (audi a4 quattro): döner

* ohne knoblauch
* ohne kräuter
* ohne scharf
* ohne zwiebeln
* mit tomaten
* mit salat

Then we can order the kebabs and put them on top of the graph, which is what you see here.

The x-axis is the highway mileage, and the y-axis is the city mileage.

These two were spicy, which meant automatic transmission and worse mileage.

Plotting data is about converting abstract data into concrete metaphors. We have to find meaningful representations that leverage our existing intuitions, and there's nothing specifically visual about that.