Using Magento on Amazon EC2
This wiki page is used to share some information about using Magento on Amazon’s EC2 cloud hosting environment.
We have tested with m1.small (1.7GB RAM) and m1.large (7.5GB RAM) instances, first with Apache and later also with the Nginx webserver, which at the time held only a 1-4% market share.
An m1.small Amazon Machine Image (AMI) has been registered in the US region and can be launched, e.g., with the AWS Management Console or with the EC2 API command-line utilities. You can easily find the AMI by typing magento into the management console’s search bar.
We’ve also created an m1.large AMI in the EU region which is also registered and publicly available (see below for exact AMI names).
I used the Virtualmin GPL Debian Etch AMI as a basis for the m1.small image and launched it from AWS Management Console.
The AMI is based on the Linux kernel 2.6.16-xenU and has (among others) the following packages preinstalled:
- Apache 2.2.3
- PHP 5.2.0-8+etch13
- MySQL 5.0.32
I had to install a couple of PHP-related packages that are required by Magento in order to walk through the installation wizard and complete the installation successfully.
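The exact package set is not recorded here; as a sketch, the extensions Magento typically needs could be installed on Debian Etch roughly like this (the package names are assumptions to verify against your release):

```shell
# Install PHP extensions commonly required by Magento
# (package names assume Debian Etch; adjust for your release)
apt-get update
apt-get install -y php5-mysql php5-gd php5-curl php5-mcrypt
# Restart Apache so mod_php picks up the new extensions
/etc/init.d/apache2 restart
```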
The default PHP memory_limit was set to 16M, which triggered memory errors on the product listing page. Raising it to 512M solved the problem; I purposely chose such a large value because this whole setup is directed towards running a single Magento store on one EC2 instance. After finding out that reducing the value to 64M yielded the same result, I preferred to keep it at 64M.
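A minimal sketch of that change, assuming the Debian default php.ini path for mod_php:

```shell
# Raise memory_limit from the 16M default; 64M was enough for this store
# (path assumes Debian's mod_php layout; adjust if PHP runs elsewhere)
sed -i 's/^memory_limit *=.*/memory_limit = 64M/' /etc/php5/apache2/php.ini
# Reload Apache so the new limit takes effect
/etc/init.d/apache2 reload
```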
I continued by following the do it yourself performance enhancements outlined in Performance is Key! - Notes on Magento’s Performance
Modifying the configuration of MySQL server to take better advantage of the server’s RAM.
Most Linux distributions provide a conservative MySQL package out of the box to ensure it will run on a wide array of hardware configurations. If you have ample RAM (eg, 1gb or more), then you may want to try tweaking the configuration. An example my.cnf is below, though you will want to consult the MySQL documentation for a complete list of configuration directives and recommended settings.
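As an illustrative sketch only, a my.cnf fragment for a machine with ample RAM might look like the following; every value here is an assumption to be tuned against the MySQL 5.0 documentation, not a measured optimum:

```ini
# Illustrative my.cnf fragment for ~1.7GB+ RAM (all values are assumptions)
[mysqld]
query_cache_type    = 1
query_cache_size    = 32M
query_cache_limit   = 1M
key_buffer          = 64M
table_cache         = 256
thread_cache_size   = 8
tmp_table_size      = 32M
max_heap_table_size = 32M
```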
I double-checked all query_cache_ variables and raised query_cache_limit from 1MB to 16MB. Result: no further improvement, so I reset the value to 1MB.
I also checked the have_query_cache and query_cache_size variable values to make sure query caching is really enabled (see the MySQL docs: http://dev.mysql.com/doc/refman/5.0/en/query-cache-configuration.html).
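That check boils down to two queries from the mysql client (credentials are whatever your setup uses):

```shell
# Verify the server was built with query cache support and that a cache
# size is actually allocated (0 means caching is effectively off)
mysql -u root -p -e "SHOW VARIABLES LIKE 'have_query_cache'; \
                     SHOW VARIABLES LIKE 'query_cache%';"
```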
- Parse time home page: 1.1-1.5s
- Parse time product listing: 1.4-1.6s
- Parse time product detail: 1.6-2s
- Parse time add item to cart: 2.7-2.9s
Although others have reported huge performance improvements after tweaking the MySQL config, it does not seem to make a big difference on the demo store with only a couple of products. This might be different for stores that have more than 1000 or 10000 products and many product attributes.
Although I have not done any precise benchmarking so far, the improvement based on the Magento Profiler parse times was no more than 100 milliseconds.
Finally, I ran the MySQL Performance Tuning Primer Script and got a couple of warnings, but I think the configuration is still valid because there has not been much traffic so far. The recommended 48 hours of uptime have not yet passed, but I think the results are already representative:
Making sure the Apache configuration has KeepAlives enabled.
- Has already been enabled in the AMI used as a basis for this setup
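To verify this on your own instance, assuming the Debian Etch configuration layout:

```shell
# KeepAlive lives in apache2.conf on Debian; expect "KeepAlive On"
grep -i '^KeepAlive' /etc/apache2/apache2.conf
# If it reads "KeepAlive Off", edit the file and reload Apache
/etc/init.d/apache2 reload
```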
This can deliver significant improvements to PHP‘s responsiveness by caching PHP code in an intermediate bytecode format, which saves the interpreter from recompiling the PHP code for each and every request.
I installed PHP opcode cache XCache v1.2.2 as a Debian package via etch-backports.
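Assuming etch-backports is already listed in your sources.list, the installation is a sketch along these lines:

```shell
# Pull XCache from backports (requires an etch-backports entry
# in /etc/apt/sources.list)
apt-get update
apt-get -t etch-backports install php5-xcache
# Restart Apache so the opcode cache is loaded
/etc/init.d/apache2 restart
```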
- Parse time home page: 1.0s
- Parse time product listing: 1.2-1.5s
- Parse time product detail: 1.3-1.6s
- Parse time add item to cart: 0.9-2.3s
It seems that XCache has more effect on the current demo store than the MySQL query caching optimization. Again, we have to consider the fact that the demo store only has very few products in it!
(This has not yet been implemented on the EC2 instance.)
Use a memory-based filesystem for Magento’s var directory. Magento makes extensive use of file-based storage for caching and session storage. The slowest component in a server is the hard drive, so if you use a memory-based filesystem such as tmpfs, you can save all those extra disk IO cycles by storing these temporary files in memory instead of storing them on your slow hard drive.
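A sketch of such a mount, assuming the Magento path used elsewhere on this AMI; note that tmpfs contents vanish on reboot, so cached data and file-based sessions are lost whenever the instance restarts:

```shell
# Mount a memory-backed filesystem over Magento's var directory
# (path is the one used on this AMI; size is an illustrative assumption)
mount -t tmpfs -o size=64m tmpfs /var/www/apache2-default/magento/var
# Optionally persist the mount across reboots
echo 'tmpfs /var/www/apache2-default/magento/var tmpfs size=64m 0 0' >> /etc/fstab
```

Magento recreates its cache and session subdirectories on the empty mount at the next request.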
The US Region EC2 demo store (m1.small) has been disabled. You can start your own instance with this publicly available AMI: magento-etch-virtualmin-gpl-3.63
After launching a new instance you can access the store in the browser by appending /apache2-default/magento/ to your instance URL. In case you want to log in to the admin control panel, please use the username admin and password 4KKEzgn9zZ.
I’ve also been working on a second EC2 demo store in the EU region, using a minimal 64-bit Debian Etch AMI. Even without MySQL query cache optimization and XCache installed, the Magento Profiler parse times are already very fast, i.e. between 0.4 and 1 seconds.
The EU region demo store that I had up for a couple of days has now been disabled. You can launch an instance yourself using the AMI name below.
The m1.large specs are:
- 7.5 GB of memory
- 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each)
- 850 GB of instance storage
- 64-bit platform
After optimizing MySQL configuration and installing XCache on the m1.large instance, I got the following parse times:
- Parse time home page: 0.2-0.3s
- Parse time product listing: 0.4-0.6s
- Parse time product detail: 0.6-0.8s
- Parse time add item to cart: 0.6-1.2s
- magento-etch-virtualmin-gpl-3.63 (US region m1.small)
- debian-4.0-etch-64-magento-2009-03-10 (EU region m1.large)
Important note: When you launch your own instance, you have to make two small modifications in order for the demo store to run:
- Change the first two entries in the core_config_data table of the MySQL database to reflect the new URL that you have been assigned
- Clear the file cache under /var/www/apache2-default/magento/var/cache/*
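A sketch of both steps, assuming the database is named magento (adjust to your setup) and using a placeholder for your assigned instance URL:

```shell
# Point Magento's base URLs at the new instance hostname
# (database name and URL are placeholders; substitute your own values)
NEW_URL="http://ec2-xx-xx-xx-xx.compute-1.amazonaws.com/apache2-default/magento/"
mysql -u root -p magento -e "UPDATE core_config_data SET value='${NEW_URL}' \
  WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url');"
# Clear the file cache so the new URLs take effect
rm -rf /var/www/apache2-default/magento/var/cache/*
```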
Now the stylesheet will be read correctly and the demo store should be available at your instance URL with /apache2-default/magento/ appended.
I’ve been testing with 1000 requests and a concurrency of 10 on the m1.large instance, first without any performance optimization and then with MySQL query cache optimization and XCache installed:
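The test command was presumably along these lines (URL adjusted to wherever your store answers):

```shell
# 1000 requests at a concurrency of 10 against the store home page
ab -n 1000 -c 10 http://localhost/apache2-default/magento/
```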
I have no experience with the ab (ApacheBench) utility and wonder why 960 out of 1000 requests are reported as failing. One likely explanation: ab counts any response whose byte length differs from the first response as a "Length" failure, which is common on dynamic pages even when every response is valid. Maybe someone with more experience can shed some light on this so we can get representative benchmarking results.
I’m not yet sure what to think about Pingdom, because the total page load times it reports are often a lot longer than how the pages load on my machine. Still, it seems to be a good indicator of overall performance, as parse time is not everything!
m1.large instances have very fast parse times, but the total page load time as measured by Pingdom for the store home page is over 6 seconds, and on an m1.small instance even around 10 seconds, let alone the product detail or product listing pages.
After testing and tweaking with Apache Prefork (mod_php), we also did some testing and tweaking with Nginx, an open-source, high-performance HTTP server, together with php_fastcgi and the Varnish HTTP accelerator, and as a result got total page load times of under 3.6 seconds on an m1.small instance!
We have not yet tested Apache together with php_fastcgi, but it seems that Nginx is pretty impressive.
Performance is a key focus of the Magento core team for 2009. Version 1.3, due out soon, should bring some performance improvements using a flat catalog database.
- MySQL query cache configuration optimization for EU region AMI
- Opcode cache installation for EU region AMI
- Register EU region AMI and make publicly available
- Add more products to the demo store to see how performance is with e.g. 1000 or 10000 products on both m1.small and m1.large instances
- Do some benchmarking tests with ApacheBench to see how performance is with increased traffic on both m1.small and m1.large instances