We’re going to load in the shock breakout data from Stripling, McClarren, et al., “A calibration and data assimilation method using the Bayesian MARS emulator,” Annals of Nuclear Energy, 52(C), 103–112.
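Throughout, the code assumes a few packages are attached. This is my best guess at the minimal set, based on the functions used below (createDataPartition from caret, bgp and sens from tgp, melt from reshape2, and ggplot2 for plotting):

library(caret)     #createDataPartition
library(tgp)       #bgp, sens, and GP prediction
library(ggplot2)   #plotting
library(reshape2)  #melt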

fullData <- read.csv('CRASHBreakout.csv')
simulations <- fullData[,c("thickness","laser.energy","Be.gamma","wall.opacity","flux.limiter","breakout.time")]
measurements <- fullData[1:8,c("measure.1", "measure.2", "measure.3", "thickness", "laser.energy")]

The simulation has two physical parameters (the \(x\)’s): the Be disc thickness (\(\mu\)m) and the laser energy (J). It also has three model parameters (the \(\theta\)’s): the Be \(\gamma\) in a \(\gamma\)-law equation of state, a wall opacity multiplier, and a flux-limiter coefficient. There are 104 simulations in our data set. Looking at a pairs plot of the simulations, we can see that the breakout time in picoseconds (our quantity of interest) is strongly influenced by the disc thickness and the Be \(\gamma\).

[Figure: pairs plot of the simulation inputs and the breakout time]
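The pairs plot itself comes from a chunk that isn’t shown; a minimal sketch that reproduces something similar with base R graphics is

#quick look at pairwise relationships among the inputs and the breakout time
pairs(simulations, pch = 19, cex = 0.5)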

The first thing we will do is break the simulation data set into a training and test set. We’ll use 80% of the data in the training set. This can be done simply using the createDataPartition function in the caret package:

inTrain <- createDataPartition(y = simulations$breakout.time,times = 1, p = 0.8,list = F)

The next step is to use the bgp function (Bayesian Gaussian Process) from the tgp package.

emulator <- bgp(X = simulations[inTrain,1:5], Z = simulations[inTrain,"breakout.time"], XX = simulations[-inTrain,1:5])
## 
## burn in:
## r=1000 d=[1.06667 1.17384 0.536062 0.99874 1.48963]; n=84
## 
## Sampling @ nn=20 pred locs:
## r=1000 d=[1.17701 0.973611 0.210686 1.05141 1.87371]; mh=1 n=84
## r=2000 d=[1.56964 1.12832 0.050244 0.255278 0.839138]; mh=1 n=84
## r=3000 d=[0.749212 0.982461 0.0592442 0.79599 0.584764]; mh=1 n=84
predDF <- data.frame(actual = simulations[inTrain,"breakout.time"], predicted = emulator$Zp.mean, q1 = emulator$Zp.q1, q2 = emulator$Zp.q2, type = "Training" )
predDF <- rbind(predDF, data.frame(actual = simulations[-inTrain,"breakout.time"], predicted = emulator$ZZ.mean, q1 = emulator$ZZ.q1, q2 = emulator$ZZ.q2, type = "Test" ))

Below is a comparison of the predicted and actual breakout times for the training and test data.

ggplot(predDF, aes(x=predicted, y=actual, shape = type, color = type)) + geom_point() + geom_errorbarh(aes(xmin = q1, xmax = q2)) + geom_smooth(method="lm", se=FALSE)

[Figure: predicted vs. actual breakout times for the training and test sets]
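The training-set numbers quoted below can be reproduced directly from predDF; a minimal sketch using the columns constructed above:

#training-set root-mean-square error and average breakout time (ps)
trainPred <- subset(predDF, type == "Training")
sqrt(mean((trainPred$actual - trainPred$predicted)^2))
mean(trainPred$actual)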

It seems like we can build a decent emulator for our simulations. The root-mean-square error for the training data is 4.1257 ps, out of an average breakout time of 398.3132 ps. Before we proceed, let’s check density plots of the errors:

ggplot(predDF,aes(x=actual-predicted, color=type)) + geom_density()

[Figure: density plots of the prediction errors for the training and test sets]

With this real data, there are some anomalies, but nothing invalidating the model.

Sensitivity analysis with Gaussian Processes

The sens function in the tgp package will perform a sensitivity analysis based on the fitted Gaussian process model. This function evaluates the Gaussian process model at many different points chosen by Latin-hypercube sampling and then estimates the main effect of each parameter (recall that the main effect is the mean response at a given value of that parameter, averaging over all other parameters).
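In symbols, the main effect of the \(i^\mathrm{th}\) input is (a sketch of the standard definition) \[m_i(x_i) = E\left[f(x)\,\middle|\,x_i\right],\] where the expectation averages over all of the inputs other than \(x_i\).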

First, let’s perform the sensitivity analysis.

sensitivity <- sens(X = simulations[inTrain,1:5], Z = simulations[inTrain,"breakout.time"], model=bgp, nn.lhs = 300)
## 
## burn in:
## r=1000 d=[1.3254 1.40153 0.0752399 0.614514 1.04961]; n=84
## r=2000 d=[1.38609 0.553448 0.0639716 1.07825 0.817388]; n=84
## r=3000 d=[1.11494 1.38491 0.0435532 0.546938 0.653676]; n=84
## 
## Sampling @ nn=2100 pred locs:
## r=1000 d=[1.10592 1.30052 0.137543 0.742193 1.02597]; mh=1 n=84
## r=2000 d=[1.12615 0.621989 0.0334875 1.26782 0.693002]; mh=1 n=84
## r=3000 d=[0.786582 1.1348 0.0937611 1.36745 0.433559]; mh=1 n=84
## r=4000 d=[0.887651 1.37885 0.0522783 1.55296 0.869931]; mh=1 n=84
## r=5000 d=[0.926791 0.789038 0.081929 1.68885 0.574913]; mh=1 n=84

Now let’s plot the main effects.

#Extract main effects
mainEffects <- data.frame(sensitivity$sens$ZZ.mean)
colnames(mainEffects) <- colnames(sensitivity$X)
#Extract 5th percentile
q1 <- data.frame(sensitivity$sens$ZZ.q1)
colnames(q1) <- colnames(sensitivity$X)
#Extract 95th percentile
q2 <- data.frame(sensitivity$sens$ZZ.q2)
colnames(q2) <- colnames(sensitivity$X)
#Add in scaled X's
mainEffects$scaledInput <- seq(0,1,length.out = nrow(mainEffects))
q1$scaledInput <- seq(0,1,length.out = nrow(mainEffects))
q2$scaledInput <- seq(0,1,length.out = nrow(mainEffects))
#Melt the data frames, then merge them together
meltDF <- melt(mainEffects, id.vars = "scaledInput")
meltDFq1 <- melt(q1, id.vars = "scaledInput", value.name = "q1")
meltDFq2 <- melt(q2, id.vars = "scaledInput", value.name = "q2")
meltDF <- merge(meltDF, meltDFq2, by=c("scaledInput","variable"))
meltDF <- merge(meltDF, meltDFq1, by=c("scaledInput","variable"))
#Now plot it
ggplot(meltDF, aes(x=scaledInput, y = value, color=variable)) + geom_line(size=1) + geom_line(aes(y=q1), alpha=0.5) + geom_line(aes(y=q2), alpha=0.5) + scale_x_continuous("Scaled Input") + scale_y_continuous("Main effect")

[Figure: main effect of each scaled input, with 5th/95th percentile bands]

As we suspected, the disc thickness and the Be \(\gamma\) are the most impactful parameters. We can also plot the Sobol sensitivity indices, \[S_i = \frac{\mathrm{var}\left(E[f(x_i)]\right)}{\mathrm{var}(f)},\] where \(E[f(x_i)]\) is the expected value of \(f\) with the \(i^\mathrm{th}\) input held fixed (averaging over the other inputs). Using the Gaussian process model for the function \(f\), these sensitivity indices can be estimated; in particular, at each MCMC sample for the model we can get a sample of the sensitivity index for each variable. For our data we find,

sobol <- data.frame(sensitivity$sens$S)
colnames(sobol) <- colnames(sensitivity$X)
meltDF <- melt(sobol)
## Using  as id variables
ggplot(meltDF,aes(x=variable, y = value)) + geom_boxplot(outlier.colour = "blue")

[Figure: boxplots of the posterior samples of the Sobol index for each input]

This has the same flavor as before in terms of indicating which variables are important.
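If a single number per input is useful, the posterior median of each index can be pulled from the sobol data frame built above; a minimal sketch:

#posterior median of the first-order Sobol index for each input
sapply(sobol, median)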

Calibration

Now let’s calibrate the values of \(\theta\). For each one we only have a range and no distributional information, so we use a uniform prior over that range. The measurements have an uncertainty with a standard deviation of 10 ps. This makes the (unnormalized) likelihood of a given set of \(\theta\)’s \[\hat{p}(\vec{\theta}) = \prod\limits_{i=1}^{8}\exp\left(-\frac{(T_\mathrm{break,obs}(\vec{x}_i) - T_\mathrm{break,sim}(\vec{x}_i,\vec{\theta}))^2}{200}\right),\] for \(\theta\)’s in their nominal range (the 200 in the denominator is \(2\sigma^2\) with \(\sigma = 10\) ps). We’ll use the mean value from the emulator in our likelihood. We could use the known 95% confidence intervals to do a better job, but in this case we’ll just use the mean value.
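Numerically, the code below sums the exponents rather than multiplying the exponentials, which is just working with the log-likelihood, \[\log \hat{p}(\vec{\theta}) = -\sum\limits_{i=1}^{8}\frac{(T_\mathrm{break,obs}(\vec{x}_i) - T_\mathrm{break,sim}(\vec{x}_i,\vec{\theta}))^2}{200},\] and only exponentiating at the end.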

We define our likelihood function as

minThetas <- sapply(simulations[,3:5],min)
maxThetas <- sapply(simulations[,3:5],max)
phat <- function(theta){
  #evaluate the emulator at each measured (thickness, laser energy) paired with the proposed theta's
  XX <- data.frame(measurements[,c("thickness","laser.energy")])
  XX <- cbind(XX, theta, row.names=NULL)
  breakouts <- predict(emulator, XX=XX)
  #exponent for each measurement; summing the exponents is the logarithm of the product
  exponent <- -(measurements$measure.1 - breakouts$ZZ.mean)^2/(200)
  phatS <- exp(sum(exponent))
  #uniform prior: zero likelihood outside the nominal parameter ranges
  if (any(unlist(theta) < minThetas) || any(unlist(theta) > maxThetas)){ phatS <- 0 }
  phatS
}
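The sampler below is a random-walk Metropolis-Hastings algorithm: at each step a Gaussian perturbation of the current \(\vec{\theta}\) is proposed and accepted with probability \[\alpha = \min\left(1, \frac{\hat{p}(\vec{\theta}')\,q(\vec{\theta}^{(i)} \mid \vec{\theta}')}{\hat{p}(\vec{\theta}^{(i)})\,q(\vec{\theta}' \mid \vec{\theta}^{(i)})}\right),\] where \(q\) is the Gaussian proposal density. For this symmetric proposal the \(q\) terms cancel, but the code keeps them written out explicitly.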

Our samples are generated by

#draw 10^4 samples; start the chain at nominal parameter values
Nsamps <- 10^4
thetaSamps <- data.frame(i = 1:Nsamps, Be.gamma = rep(1.75, Nsamps), wall.opacity = rep(1.3, Nsamps), flux.limiter = rep(0.075, Nsamps))
for (i in 1:(Nsamps-1)){
  #Gaussian random-walk proposal for each parameter
  proposedBe <- rnorm(1, mean=thetaSamps$Be.gamma[i], sd = 0.05)
  proposedWall <- rnorm(1, mean=thetaSamps$wall.opacity[i], sd = 0.05)
  proposedFlux <- rnorm(1, mean=thetaSamps$flux.limiter[i], sd = 0.05)
  u <- runif(1)
  proposed <- data.frame(Be.gamma = proposedBe, wall.opacity = proposedWall, flux.limiter = proposedFlux)
  #Metropolis-Hastings ratio (the symmetric proposal densities cancel, but are kept explicit)
  numerator <- phat(proposed)*dnorm(thetaSamps$Be.gamma[i], mean=proposedBe, sd = 0.05)*dnorm(thetaSamps$wall.opacity[i], mean=proposedWall, sd = 0.05)*dnorm(thetaSamps$flux.limiter[i], mean=proposedFlux, sd = 0.05)
  denominator <- phat(thetaSamps[i,c("Be.gamma","wall.opacity","flux.limiter")])*dnorm(proposedBe, mean=thetaSamps$Be.gamma[i], sd = 0.05)*dnorm(proposedWall, mean=thetaSamps$wall.opacity[i], sd = 0.05)*dnorm(proposedFlux, mean=thetaSamps$flux.limiter[i], sd = 0.05)
  alpha <- min(c(1, numerator/denominator))
  #accept the proposal with probability alpha, otherwise keep the current state
  thetaSamps[i+1,c("Be.gamma","wall.opacity","flux.limiter")] <- proposed*(u <= alpha) + thetaSamps[i,c("Be.gamma","wall.opacity","flux.limiter")]*(u > alpha)
}
ggplot(thetaSamps, aes(x=i, y = Be.gamma)) + geom_line() + scale_x_continuous("sample") + scale_y_continuous(expression("Be gamma")) + annotate("segment", x = 1000, xend = 1000, y = minThetas["Be.gamma"], yend = maxThetas["Be.gamma"], color="red", size=1)

[Figure: trace plot of the Be gamma samples; the red segment marks the burn-in cutoff at sample 1000]

ggplot(thetaSamps, aes(x=i, y = wall.opacity)) + geom_line() + scale_x_continuous("sample") + scale_y_continuous(expression("Wall opacity multiplier")) + annotate("segment", x = 1000, xend = 1000, y = minThetas["wall.opacity"], yend = maxThetas["wall.opacity"], color="red", size=1)

[Figure: trace plot of the wall opacity multiplier samples]

ggplot(thetaSamps, aes(x=i, y = flux.limiter)) + geom_line() + scale_x_continuous("sample") + scale_y_continuous(expression("flux limiter parameter")) + annotate("segment", x = 1000, xend = 1000, y = minThetas["flux.limiter"], yend = maxThetas["flux.limiter"], color="red", size=1)

[Figure: trace plot of the flux limiter parameter samples]
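Before discarding burn-in, a quick check on mixing is the empirical acceptance rate of the chain; a sketch, counting how often the sampled Be \(\gamma\) actually changes from one iteration to the next:

#approximate acceptance rate of the random-walk sampler
mean(diff(thetaSamps$Be.gamma) != 0)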

Here are histograms for the tuning parameters:

postBurnIn <- thetaSamps[1000:Nsamps,]
ggplot(postBurnIn, aes(x=Be.gamma)) + geom_histogram()
## stat_bin: binwidth defaulted to range/30. Use 'binwidth = x' to adjust this.
## Warning: position_stack requires constant width: output may be incorrect

[Figure: posterior histogram of Be gamma]

ggplot(postBurnIn, aes(x=wall.opacity)) + geom_histogram()
## stat_bin: binwidth defaulted to range/30. Use 'binwidth = x' to adjust this.
## Warning: position_stack requires constant width: output may be incorrect

[Figure: posterior histogram of the wall opacity multiplier]

ggplot(postBurnIn, aes(x=flux.limiter)) + geom_histogram()
## stat_bin: binwidth defaulted to range/30. Use 'binwidth = x' to adjust this.

[Figure: posterior histogram of the flux limiter parameter]

It appears that the wall opacity multiplier does not matter that much. Let’s look at a 2-D plot of the posteriors for Be \(\gamma\) and the flux limiter parameter:

ggplot(postBurnIn, aes(x=Be.gamma, y = flux.limiter)) + stat_density2d(colour = "#00274c") + geom_point(colour="#FFCB05")

[Figure: 2-D posterior density of Be gamma vs. the flux limiter parameter]

Building a discrepancy function

For this we will need to take samples from the posterior distribution of the tuning parameters, average over them, and compute the mean discrepancy as a function of \(\vec{x}\). The mean response for each experiment is computed as

emulator.response = function(thetas){
  response = data.frame(breakout.time = 0, thickness = 0, laser.energy = 0)
  for (i in 1:8){
    #pair the ith experimental conditions with every posterior sample of the theta's
    XX = cbind(measurements[i,c("thickness","laser.energy")], thetas[,c("Be.gamma","wall.opacity","flux.limiter")], row.names=NULL)
    preds <- predict(emulator, XX=XX)
    #average the emulator mean over the posterior samples
    response[i,] = data.frame(breakout.time=mean(preds$ZZ.mean), measurements[i,c("thickness","laser.energy")])
  }
  response
}
emResponse <- emulator.response(postBurnIn)
print(emResponse)
##   breakout.time thickness laser.energy
## 1         425.5     18.44         3780
## 2         428.0     18.32         3640
## 3         447.3     19.59         3830
## 4         424.9     18.76         3950
## 5         492.7     21.56         3650
## 6         470.5     20.41         3660
## 7         483.4     21.43         3830
## 8         468.0     20.98         3950

Now that we have the mean responses at each \(\vec{x}\), we compute the discrepancy at each point

delta = merge(emResponse,measurements,by=c("thickness","laser.energy"))
delta$discrepancy = with(delta, measure.1 - breakout.time )
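In symbols, this is (a sketch of the definition being used) \[\delta(\vec{x}_i) = T_\mathrm{break,obs}(\vec{x}_i) - E_{\vec{\theta}}\left[T_\mathrm{break,sim}(\vec{x}_i,\vec{\theta})\right],\] with the expectation taken over the posterior samples of \(\vec{\theta}\).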

Finally, we build a GP model for the discrepancy. To start, we will use all the data

discrepancy.model <- bgp(X = delta[,1:2], Z = delta[,"discrepancy"])
## 
## burn in:
## r=1000 d=[1.53994 0.895599]; n=8
## 
## Sampling @ nn=0 pred locs:
## r=1000 d=[0.782531 1.21227]; mh=1 n=8
## r=2000 d=[1.1029 0.717478]; mh=1 n=8
## r=3000 d=[0.785011 0.983861]; mh=1 n=8

Let’s look at our discrepancy model as a 2-D function of the \(\vec{x}\)’s. First, we’ll make a grid

xGrids <- expand.grid(thickness = seq(min(delta$thickness), max(delta$thickness), length.out = 10), laser.energy = seq(min(delta$laser.energy), max(delta$laser.energy), length.out = 10))
discrepancy.preds <- predict(discrepancy.model, XX=xGrids)
discrepancy.surface = cbind(xGrids, data.frame(discrepancy = discrepancy.preds$ZZ.mean),row.names=NULL)
ggplot(discrepancy.surface,aes(x=thickness, y= discrepancy, color = laser.energy)) + geom_point() 

[Figure: predicted discrepancy vs. thickness, colored by laser energy]

ggplot(discrepancy.surface,aes(x=laser.energy, y= discrepancy, color = thickness)) + geom_point() 

[Figure: predicted discrepancy vs. laser energy, colored by thickness]

ggplot(discrepancy.surface,aes(x=laser.energy, y=thickness, z= discrepancy)) + stat_contour(aes(color=..level..))  

[Figure: contour plot of the discrepancy surface over laser energy and thickness]

Our predictions with discrepancy are now the sum of the emulator prediction and the discrepancy function.
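That is, in the notation above, \[\hat{T}_\mathrm{break}(\vec{x}) = E_{\vec{\theta}}\left[T_\mathrm{break,sim}(\vec{x},\vec{\theta})\right] + \hat{\delta}(\vec{x}),\] where \(\hat{\delta}\) is the mean of the discrepancy GP.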

prediction.response = function(thetas){
  response = data.frame(breakout.time = 0, thickness = 0, laser.energy = 0, q1 = 0, q2 = 0)
  for (i in 1:8){
    XX = cbind(measurements[i,c("thickness","laser.energy")], thetas[,c("Be.gamma","wall.opacity","flux.limiter")], row.names=NULL)
    preds <- predict(emulator, XX=XX)
    disp.pred <- predict(discrepancy.model, XX=measurements[i,c("thickness","laser.energy")])
    #combine the posterior-averaged emulator prediction with the discrepancy prediction;
    #rough bounds come from the 5th/95th quantiles plus the spread over the theta posterior
    response[i,] = data.frame(breakout.time = mean(preds$ZZ.mean) + disp.pred$ZZ.mean, measurements[i,c("thickness","laser.energy")], q1 = mean(preds$ZZ.q1) - 2*sd(preds$ZZ.mean) + disp.pred$ZZ.q1, q2 = mean(preds$ZZ.q2) + 2*sd(preds$ZZ.mean) + disp.pred$ZZ.q2)
  }
  response
}
predictions <- cbind(prediction.response(postBurnIn), data.frame(actual = measurements$measure.1), row.names=NULL)
ggplot(predictions, aes(x=breakout.time, y = actual)) + geom_point(color="#00274c", size=3) + geom_errorbarh(color="#00274c", aes(xmin = q1, xmax = q2)) + geom_abline(intercept = 0, slope = 1, color="#00274c") + geom_errorbar(color="#00274c", aes(ymax = actual + 20, ymin = actual - 20))

[Figure: calibrated predictions (with uncertainty bars) vs. measured breakout times; the diagonal line is perfect agreement]
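As a numeric companion to the figure, the root-mean-square difference between the calibrated predictions and the measurements can be computed directly from the predictions data frame built above; a minimal sketch:

#RMS difference between calibrated predictions and measured breakout times (ps)
sqrt(mean((predictions$breakout.time - predictions$actual)^2))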

So we can predict the experiments given the data. How about using the data from seven experiments to predict the eighth? We will now perform this leave-one-out analysis.

predictions.loo = data.frame(breakout.time = 0, thickness = 0, laser.energy = 0, q1 = 0, q2 = 0)
for (i in 1:8){
  #leave out the ith measurement
  train.measure <- measurements[-i,]
  train.simulation <- emResponse[-i,]
  delta = merge(train.simulation, train.measure, by=c("thickness","laser.energy"))
  delta$discrepancy = with(delta, measure.1 - breakout.time)
  #build the discrepancy function from the remaining seven experiments
  discrepancy.model <- bgp(X = delta[,1:2], Z = delta[,"discrepancy"])
  #now predict the held-out experiment
  XX = cbind(measurements[i,c("thickness","laser.energy")], postBurnIn[,c("Be.gamma","wall.opacity","flux.limiter")], row.names=NULL)
  preds <- predict(emulator, XX=XX)
  disp.pred <- predict(discrepancy.model, XX=measurements[i,c("thickness","laser.energy")])
  predictions.loo[i,] = data.frame(breakout.time = mean(preds$ZZ.mean) + disp.pred$ZZ.mean, measurements[i,c("thickness","laser.energy")], q1 = mean(preds$ZZ.q1) - 2*sd(preds$ZZ.mean) + disp.pred$ZZ.q1, q2 = mean(preds$ZZ.q2) + 2*sd(preds$ZZ.mean) + disp.pred$ZZ.q2)
}
## 
## burn in:
## r=1000 d=[1.21024 0.0799383]; n=7
## 
## Sampling @ nn=0 pred locs:
## r=1000 d=[0.58909 0.017471]; mh=1 n=7
## r=2000 d=[0.758435 0.0354978]; mh=1 n=7
## r=3000 d=[1.21849 0.0521906]; mh=1 n=7
## 
## 
## burn in:
## r=1000 d=[0.750689 1.07854]; n=7
## 
## Sampling @ nn=0 pred locs:
## r=1000 d=[1.08816 0.0330091]; mh=1 n=7
## r=2000 d=[1.03155 0.0523101]; mh=1 n=7
## r=3000 d=[1.12401 0.0144226]; mh=1 n=7
## 
## 
## burn in:
## r=1000 d=[0.560101 0.978784]; n=7
## 
## Sampling @ nn=0 pred locs:
## r=1000 d=[1.21421 0.0353952]; mh=1 n=7
## r=2000 d=[1.07528 0.0102809]; mh=1 n=7
## r=3000 d=[1.31809 0.0256388]; mh=1 n=7
## 
## 
## burn in:
## r=1000 d=[1.55218 1.26637]; n=7
## 
## Sampling @ nn=0 pred locs:
## r=1000 d=[0.57932 0.829527]; mh=1 n=7
## r=2000 d=[0.696575 1.72134]; mh=1 n=7
## r=3000 d=[1.2539 0.750986]; mh=1 n=7
## 
## 
## burn in:
## r=1000 d=[1.63045 0.790019]; n=7
## 
## Sampling @ nn=0 pred locs:
## r=1000 d=[1.00965 0.0594733]; mh=1 n=7
## r=2000 d=[0.947343 0.0154586]; mh=1 n=7
## r=3000 d=[1.09927 0.0332947]; mh=1 n=7
## 
## 
## burn in:
## r=1000 d=[1.00259 0.826401]; n=7
## 
## Sampling @ nn=0 pred locs:
## r=1000 d=[1.26338 0.792216]; mh=1 n=7
## r=2000 d=[0.472157 0.797833]; mh=1 n=7
## r=3000 d=[0.0305209 0.50037]; mh=1 n=7
## 
## 
## burn in:
## r=1000 d=[0.011544 0.807445]; n=7
## 
## Sampling @ nn=0 pred locs:
## r=1000 d=[0.0121996 1.33412]; mh=1 n=7
## r=2000 d=[0.0236482 1.10057]; mh=1 n=7
## r=3000 d=[0.0529323 0.65255]; mh=1 n=7
## 
## 
## burn in:
## r=1000 d=[1.52819 0.0419122]; n=7
## 
## Sampling @ nn=0 pred locs:
## r=1000 d=[0.647752 0.0319954]; mh=1 n=7
## r=2000 d=[0.923482 0.00517531]; mh=1 n=7
## r=3000 d=[0.679956 0.0259269]; mh=1 n=7
predictions.loo <- cbind(predictions.loo, data.frame(actual = measurements$measure.1), row.names=NULL)
ggplot(predictions.loo, aes(x=breakout.time, y = actual)) + geom_point(color="#00274c", size=3) + geom_errorbarh(color="#00274c", aes(xmin = q1, xmax = q2)) + geom_abline(intercept = 0, slope = 1, color="#00274c") + geom_errorbar(color="#00274c", aes(ymax = actual + 20, ymin = actual - 20))

[Figure: leave-one-out predictions (with uncertainty bars) vs. measured breakout times]
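The same summary for the leave-one-out predictions (a minimal sketch):

#RMS leave-one-out prediction error (ps)
sqrt(mean((predictions.loo$breakout.time - predictions.loo$actual)^2))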