Connecting TOAD For MySQL, MySQL Workbench, and R to Amazon AWS EC2 Using SSH Tunneling

I often use Amazon EC2 to store and retrieve data when I need either additional storage or higher computing capacity.  In this tutorial I’ll share how to connect to a MySQL database running on EC2 so that you can retrieve the data and do the analysis.  I tend to use either TOAD for MySQL or MySQL Workbench to run and test queries against a MySQL database: generally MySQL Workbench when I’m sitting on a Linux-based operating system and TOAD when I’m on Windows.  Connecting to EC2 is not terribly difficult, but it is also not as simple as typing localhost, as it requires a few additional steps.

If you’re reading this article then I’ll assume that you already know about Amazon EC2 and are familiar with the basics.  Make sure you have the security groups (i.e. allowed IP addresses) set up so that a database connection (for MySQL, usually through port 3306) can be made from wherever you are located.

All that is needed is the Public DNS, available once you start your Amazon instance, and the key pair file (.pem) used for the instance (the one Amazon tells you not to lose).  The Public DNS can simply be copied and pasted from the specific AWS instance you want to connect to.  The key pair .pem file should already be saved to your local hard drive.  I generally use an Ubuntu operating system on my Amazon instances, so some of the connection information is specific to that instance type (e.g. the username ubuntu).  One additional tip: turn off any locally running MySQL database on your desktop, since it will otherwise occupy the port you want to forward.

MySQL Workbench

MySQL Workbench is probably the easiest option, as everything can be done right from within MySQL Workbench itself.  Add the Public DNS from the instance to the SSH Hostname line.  This particular Amazon instance uses the username ubuntu.  The SSH Key File is the .pem file generated by Amazon.  Next is the MySQL Hostname.  The database host is relative to the SSH connection: once you have connected to the remote machine, the MySQL Hostname is relative to that machine, so it will be localhost (though you can create more complex connections).  Then you can use the database username and password you created (or were provided by the database administrator).  Because the connection process is entirely self-contained, this is the simplest approach.  However, it only works for MySQL Workbench, and if you want to use other software (e.g. R) to connect to the database then this approach alone will not work.


TOAD for MySQL

TOAD for MySQL requires an additional step, as it does not have a built-in SSH tunneling option, so the tunneling process requires separate software.  There are several ways to do this, but two freely available options are PuTTY and Bitvise SSH Client (Bitvise is free for individual use, but otherwise there is a fee).  PuTTY is useful and completely free; however, it requires that the .pem file be converted to a PuTTY-specific .ppk file using the puttygen.exe application.


First, take the Public DNS and add it to the PuTTY session.  Though not required, I would suggest going to the Connection category on the left, changing the keepalives to something like 120, and checking Enable TCP keepalives.  Otherwise, after a few minutes of inactivity your connection will drop.



Second, you need to set up SSH authentication (located within the Connection category).  This uses the .ppk file created with the puttygen.exe application.  Select the .ppk file and add it to the Private key file for authentication field.

PuTTY Auth


Third, you need to enable SSH port forwarding, so that connections to your local desktop are forwarded on to Amazon.  This way, when connecting to your localhost at port 3306 you will actually be connecting to your Amazon Public DNS location.

PuTTY Tunnels


Bitvise SSH Client

Like PuTTY, Bitvise needs your Public DNS and username.  Then click on the User keypair manager and, from the keypair manager, import the .pem file into Bitvise.  Take note of the slot column, as that is how you will select the key pair you want to use.  Once imported, use the Initial method dropdown located directly below the username and select the keypair slot (e.g. “public-key slot 1”).



Bitvise Key Pair


Then, to do the SSH port forwarding, use the C2S tab across the top.  Simply set the listening interface and port ( and the destination port (3306).  Once completed you can save the profile and then log in.

Bitvise C2S


Connecting Using TOAD

Finally, once you have opened an SSH connection using PuTTY or Bitvise, you can open up TOAD and create a new database connection.  Because everything on the local port is being forwarded, you will want to connect using localhost ( as your Host.  Then enter your database username and password.

TOAD Connect

Using R

A while back I wrote up an article on how to connect to a database using R.  To connect to an Amazon EC2 MySQL database from R the same process is used.  PuTTY or Bitvise will need to be running, and since SSH port forwarding is enabled, anything sent to the local port 3306 will be forwarded on to the Amazon Public DNS.  So all that is needed is to create an ODBC connection pointing to localhost port 3306, and that connection will be forwarded on to Amazon.  Consequently, if you ever change the Public DNS by stopping the instance, you don’t need to change anything in the ODBC configuration; only the PuTTY or Bitvise host will need to be updated.
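As a minimal sketch of the R side, here is a connection through the tunnel using the RMySQL/DBI interface (an alternative to ODBC; the same idea applies to RODBC with a DSN pointed at port 3306).  The database name, user, password, and table below are placeholders, not values from this setup, and the SSH tunnel must already be running.

```r
# Hedged sketch: assumes a live SSH tunnel forwarding local port 3306 to EC2.
# dbname, user, password, and the table name are hypothetical placeholders.
library(DBI)
library(RMySQL)

con = dbConnect(RMySQL::MySQL(),
                host     = "",  # the local end of the SSH tunnel
                port     = 3306,
                dbname   = "mydb",
                user     = "dbuser",
                password = "dbpass")

# Everything sent here is forwarded on to the Amazon instance
head(dbGetQuery(con, "SELECT * FROM some_table LIMIT 10"))
dbDisconnect(con)
```

The advantage is the same as with ODBC: the R code only ever references localhost, so a changed Public DNS requires no change here.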


Probabilities and P-Values

P-values seem to be the bane of a statistician’s existence.  I’ve seen situations where entire narratives are written without p-values and only the effects are provided.  The p-value can be used as a data reduction tool, but ultimately it reduces the world into a binary system: yes/no, accept/reject.  Not only that, but the binary threshold is determined by a roughly subjective criterion.  It reminds me of Jon Stewart’s recent “good thing/bad thing” discussion: taking the most complex of issues and reducing them down to two outcomes.

Below is a simple graph (nothing elaborate) that shows how p-values alone don’t tell the whole story.  Sometimes data is reduced so much that solid decisions are difficult to make.  The graph on the left shows a simulated situation where there are identical p-values but very different effects.  The graph on the right shows p-values that are very different, one of them quite low, even though the simulated effects are the same.

P-values and confidence intervals have quite a bit in common, and when interpreted incorrectly both can be misleading.  Simply put, a p-value is the probability of the observed data (e.g. \bar{x}), or more extreme data, given that the null hypothesis is true [1, 2, 3; see also Moore’s book The Basic Practice of Statistics, 2nd ed., pp. 321-322].
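As a small illustration of that definition (a sketch on simulated data), the two-sided p-value can be computed directly from the t statistic and checked against the built-in one-sample t-test:

```r
# P(data this extreme or more extreme | H0: mu = 0), computed two ways
set.seed(42)
x = rnorm(10, 0, 1)

# by hand from the t statistic
t.stat   = ( mean(x) - 0 ) / ( sd(x) / sqrt(length(x)) )
p.manual = 2 * pt(abs(t.stat), df = length(x) - 1, lower.tail = FALSE)

# via the built-in test
p.builtin = t.test(x, mu = 0)$p.value

all.equal(p.manual, p.builtin)  # the two agree
```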

Ultimately, I could write a fairly lengthy discussion on the topic, and several others (e.g. Gelman) have.  However, for this blog post I’ll highlight one idea: with p-values it’s possible to get differing effects with the same p-value, and differing p-values with the same effect.  In the example below I opted for extreme data to highlight the point.  Here’s a quick little matplot of the example…


In conclusion, don’t get me wrong: p-values are useful and I actually use them quite often.  But we sometimes need to look beyond the test statistic and p-value to understand the data.  Below is some code to simulate some extreme datasets.

set.seed(1234) # fixes the first sample; the replicate() draws remain random,
               # so exact replication of these data is not guaranteed. A very
               # large number of simulations is needed, and sometimes the
               # search fails and must be rerun.
x1 = rnorm(10, 0, 1)
x2 = replicate(500000, rnorm(10, 0, 5))
set.seed(1234) # same as the previous seed
x3 = rnorm(50, 0, 1)
x4 = replicate(500000, rnorm(10, 0, 4))

get.ttest = function(x){
  ## This is equivalent to the one-sample t-test from t.test(),
  ## just explicitly showing the formula
  t.x1 = abs( ( mean(x) - 0 ) / ( sd(x)/sqrt(length(x)) ) )
  p.value = pt(t.x1, length(x)-1, lower.tail=FALSE)*2 # two-sided
  return(p.value)
}

get.ci = function(x){
  ## 95% confidence interval for the mean, again showing the formula explicitly
  me = qt(1-.05/2, length(x)-1) * sd(x)/sqrt(length(x))
  ll = mean(x)-me
  ul = mean(x)+me
  return(rbind(ll, ul))
}

### Find data with the greatest difference in effect but the same p-value.
## No real reason for this approach; it just helps in finding extreme sets of data.
## A very high number of simulations is needed to ensure the constraints on effect.match are met.
sim.p  = apply(x2, 2, get.ttest) = apply(x2, 2, get.ci)
sim.dif =[2,] -[1,]
effect.match = x2[, round(get.ttest(x1),3) == round(sim.p,3) & sim.dif==min(sim.dif)]
sim.max.effect  = apply(effect.match, 2, mean) - mean(x1)
pick.max.effect = which( sim.max.effect == max(sim.max.effect) ) = effect.match[, pick.max.effect]
ci.matrix = cbind( get.ci(x1), get.ci( )

### Find data with the same effect but the greatest difference in p-value
sim.mean = apply(x4, 2, mean)
effect.match.mean = x4[, round(mean(x3),1) == round(sim.mean, 1)]
sim.max.p  = apply(effect.match.mean, 2, get.ttest) - get.ttest(x3)
pick.max.p = which( sim.max.p == max(sim.max.p) )
pick.small.effect = effect.match.mean[, pick.max.p]
ci.matrix.effect  = cbind( get.ci(x3), get.ci(pick.small.effect) )
### Plot the first graph: same p-value, different effects
ind = 1:ncol( ci.matrix )
ind.odd  = seq(1, ncol( ci.matrix ), by=2)
ind.even = seq(2, ncol( ci.matrix ), by=2)
matplot(rbind(ind,ind), ci.matrix, type="l", lty=1, lwd=1, col=1,
  xlab="Group", ylab="Response Variable, y",
  main=paste("Comparison of data with the same p-value of ", round(get.ttest(x1),2),
             "\nbut different effects", sep=""),
  xaxt='n')
axis(side=1, at=ind.odd,  tcl = -1.0, lty = 1, lwd = 0.5, labels=ind.odd, cex.axis=.75)
axis(side=1, at=ind.even, tcl = -0.7, lty = 1, lwd = 0.5, labels=rep("",length(ind.even)), cex.axis=.75)
points(ind, c(mean(x1), mean(, pch=19, cex=1, col='red')

### Plot the second graph: same effect, different p-values
ind = 1:ncol( ci.matrix.effect )
ind.odd  = seq(1, ncol( ci.matrix.effect ), by=2)
ind.even = seq(2, ncol( ci.matrix.effect ), by=2)
matplot(rbind(ind,ind), ci.matrix.effect, type="l", lty=1, lwd=1, col=1,
  xlab="Group", ylab="Response Variable, y",
  main=paste("Comparison of data with the same effect of ", round(mean(x3),1),
             "\nbut different p-values ", sprintf("%.3f", get.ttest(x3)),
             " and ", sprintf("%.3f", get.ttest(pick.small.effect)), sep=""),
  xaxt='n')
axis(side=1, at=ind.odd,  tcl = -1.0, lty = 1, lwd = 0.5, labels=ind.odd, cex.axis=.75)
axis(side=1, at=ind.even, tcl = -0.7, lty = 1, lwd = 0.5, labels=rep("",length(ind.even)), cex.axis=.75)
points(ind, c(mean(x3), mean(pick.small.effect)), pch=19, cex=1, col='red')

Some Options for Testing Tables

Contingency tables are a very good way to summarize discrete data.  They are quite easy to construct and reasonably easy to understand.  However, there are many nuances with tables, and care should be taken when drawing conclusions from them.  Here are just a few thoughts on the topic.

Table Plot

Dealing with sparse data

On one end of the spectrum there is the struggle to deal with massive amounts of available data and to make sense of data in the petabyte (and larger) range.  At the other end of the spectrum, lacking sufficient data brings its own problems.

Collapsing row or column levels

Sometimes, when working with discrete data, if certain factor levels lack adequate data it may be possible to combine factor levels together.  This may be done fairly easily with 4- and 5-point Likert scales and, in fact, happens quite frequently.  Taking this approach may provide sufficient data to draw conclusions without violating the underlying assumptions.  The following tables of some random data show how cells can be collapsed so that the basic assumptions are met.

                        18-29  30-45  46-59  60+
Very Strongly Agree         1      1     12   10
Strongly Agree             17     13     16   18
Undecided                  13      6     15    2
Strongly Disagree          10      7      8   19
Very Strongly Disagree      0     11     10    2

            18-45  46+
Agree          32   56
Undecided      19   17
Disagree       28   39
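The collapsing above can be done directly in R with rowsum(), applied once to the rows and once to the transposed table.  This is just one way to do it; the matrix below re-enters the random data from the first table.

```r
# The 5 x 4 table of random data from above (entered column by column)
x = matrix(c( 1, 17, 13, 10,  0,
              1, 13,  6,  7, 11,
             12, 16, 15,  8, 10,
             10, 18,  2, 19,  2),
           nrow = 5,
           dimnames = list(c("Very Strongly Agree", "Strongly Agree", "Undecided",
                             "Strongly Disagree", "Very Strongly Disagree"),
                           c("18-29", "30-45", "46-59", "60+")))

# Map each original level to its collapsed level
row.groups = c("Agree", "Agree", "Undecided", "Disagree", "Disagree")
col.groups = c("18-45", "18-45", "46+", "46+")

# rowsum() sums rows by group; transposing lets it collapse the columns too
collapsed = t(rowsum(t(rowsum(x, row.groups)), col.groups))
collapsed
```

This yields the 3 x 2 table shown above (rowsum() orders the collapsed levels alphabetically), and every original observation is preserved in some collapsed cell.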

Sanity Checks

Though it’s not a formal test, doing some sanity checks and other sensitivity analyses is a good way to check the stability of a test.  If one can take a single observation, move it to a different cell, and thereby change the decision, then one should reevaluate the criteria for drawing a conclusion.

Using a basic chi square test (though other statistical tests would help with this problem, including the chi square correction for continuity) gives a p-value of .0363 for the following table of some made-up data, which would be considered significant at \alpha=.05.  However, by simply moving one observation from the Cold/Slow group to the Cold/Fast group, the p-value becomes .1575 and is no longer significant at \alpha=.05.  Such volatility in the data is suspect, and decisions should be made with caution.

        Fast  Slow  Total
Hot        7     2      9
Cold       1     4      5
Total      8     6     14

        Fast  Slow  Total
Hot        7     2      9
Cold       2     3      5
Total      9     5     14


Sparse Data Tests

There are many tests to handle many different categorical data situations. Listed here are a few of the common approaches.

Chi Square With Yates Correction
Sometimes decreasing the chi square statistic (and thereby increasing the p-value) is sufficient for the specific case of a 2 x 2 table.  In R, for example, this correction is applied by default in chisq.test().

\chi^2 = \sum_{i=1}^N \frac{(|O_i - E_i|-0.5)^2}{E_i}
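To see the effect of the correction, here is a small sketch using the Hot/Cold table from the sanity-check section above; chisq.test() applies the Yates correction by default on a 2 x 2 table, and turning it off recovers the uncorrected statistic (chisq.test() will also warn that the approximation may be poor with cells this small, which is rather the point of this section):

```r
# Same Hot/Cold table as in the sanity-check example above
x = matrix(c(7, 1, 2, 4), nrow = 2,
           dimnames = list(c("Hot", "Cold"), c("Fast", "Slow")))

p.uncorrected = chisq.test(x, correct = FALSE)$p.value  # about .036, significant at .05
p.corrected   = chisq.test(x, correct = TRUE)$p.value   # larger; no longer significant at .05
```

So the correction alone flips the accept/reject decision on these data.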


Fisher’s Exact Test

This is often the immediate fallback for 2 x 2 tables.  The test is based on the hypergeometric distribution.  However, one important rule for this test is that it is conditioned on the marginal totals being fixed.  As an example counter to this rule, take a random sample of 15 people and suppose 5 are male and 10 are female.  With cells this small the chi square approximation starts to break down, but the other problem is that Fisher’s Exact Test calls for the marginals to be fixed, and that is not the case here: if another random sample of 15 people were selected we could get a different number of males and females.

Fisher’s Exact Test was developed in a time (1934) when a computer just wasn’t available to work with really complex hypergeometric distributions, so a 2 x 2 table was really the only feasible size.  Consequently, Fisher’s Exact Test was designed for 2 x 2 tables, but it can be used on any m x n table.

So why not always use Fisher’s Exact Test?  At some point the two tests begin to converge, and using the exact test may just be too exact.  Alan Agresti and Brent Coull discuss this topic in the context of interval estimation.


Barnard’s Test
Barnard’s test is similar to Fisher’s test, but it overcomes the problem of conditioning on the marginals.  Like many of these tests, it applies only to a 2 x 2 table.


McNemar’s Exact Test
McNemar’s test is used when the data are correlated, for example in a matched pairs design where there is a before and after treatment, or when a question is asked on a repeat survey.  Like Fisher’s test, it provides options for smaller sample sizes.


These are only a few of the options available, and great care should be taken in any analysis, but smaller sample sizes add a bit more complexity.  There are many other options, including logistic regression as well as other non-parametric tests that are available for small tables.
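As a sketch of the logistic regression option on the same made-up Hot/Cold data, the 2 x 2 table can be recast as a binomial GLM, with the Fast/Slow counts as the outcome and temperature as the predictor.  (With cells this small the Wald inference here is shaky too, which is exactly the caution this section urges.)

```r
# 2 x 2 table recast as a binomial GLM: does temperature predict speed?
tab = matrix(c(7, 1, 2, 4), nrow = 2,
             dimnames = list(c("Hot", "Cold"), c("Fast", "Slow")))

dat = data.frame(temp = factor(rownames(tab)),
                 fast = tab[, "Fast"],
                 slow = tab[, "Slow"])

# cbind(successes, failures) on the left-hand side gives grouped binomial data
fit = glm(cbind(fast, slow) ~ temp, family = binomial, data = dat)
summary(fit)$coefficients
```

The coefficient on temp is the log odds ratio of Fast vs. Slow between the two temperature groups, so this reproduces the table comparison in regression form.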


Here is some R code that shows some of the tests I described: the chi square test, Fisher’s Exact Test, and McNemar’s test, plus the table plot shown earlier.  (Barnard’s test is not in base R.)

x1 = matrix(c(7, 1, 2, 4),
            nrow = 2,
            dimnames = list(c("Hot", "Cold"),
                            c("Fast", "Slow")))

x2 = matrix(c(7, 2, 2, 3),
            nrow = 2,
            dimnames = list(c("Hot", "Cold"),
                            c("Fast", "Slow")))

chisq.test(x1, correct=FALSE)
chisq.test(x2, correct=FALSE)

fisher.test(x1, alternative="two.sided",, conf.level=0.95)
fisher.test(x2, alternative="two.sided",, conf.level=0.95)

x3 = matrix(c(50, 5, 3, 15),
            nrow = 2,
            dimnames = list("Replication 1" = c("Hot", "Cold"),
                            "Replication 2" = c("Hot", "Cold")))

mcnemar.test(x3, correct=TRUE)

library(ade4) # table.cont() comes from the ade4 package
table.cont(x1, csi = 2,
           col.labels = colnames(x1),
           clabel.r = 1.5, clabel.c = 1.5)
