How does an SSD write?

An SSD is built around a controller chip and NAND dies – slabs of silicon consisting of cells.  Each cell stores a voltage level, giving an on or off state (1 or 0) and allowing the storage of data in binary form.

For a Single Level Cell (SLC) based SSD, each cell is simply on or off.


Writing to this type of Flash is quick and easy.  The controller has less to do.

For MLC the cell's voltage range is split (it's called Multi Level Cell – with two bits per cell it could really be called a dual level cell, and Triple Level Cell technology is on the roadmap).  With this technology, the cell's range is divided into four bands – say 0–25, 26–50, 51–75 and 76–100 (as an example – in reality it's more complex than that).  This gives the cell 4 states – On On, On Off, Off On, Off Off – doubling the amount of data the cell can store.
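As a rough sketch of the idea (the thresholds here are invented for illustration – real NAND uses calibrated reference voltages and far more complex sensing), the mapping from a cell's voltage to bits might look like this:

```python
# Illustrative only: map an MLC cell's 0-100 voltage reading to one of
# four two-bit states. The band boundaries are made-up example figures.

def read_mlc_cell(voltage_percent):
    """Return the 2 bits encoded by one MLC cell."""
    if voltage_percent <= 25:
        return (0, 0)
    elif voltage_percent <= 50:
        return (0, 1)
    elif voltage_percent <= 75:
        return (1, 0)
    else:
        return (1, 1)

# 4 states per cell = 2 bits, double the 1 bit an SLC cell stores
print(read_mlc_cell(10))   # (0, 0)
print(read_mlc_cell(90))   # (1, 1)
```

An SLC cell would be the same function with a single threshold returning a single bit – which is also why SLC is faster and simpler for the controller to read and write.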


The above is a very basic view of how data is stored.  There is a bit more to it, and understanding it helps you make sense of the myriad of different scores and specifications.

Writing to an SSD

Reading from an SSD is simple; writing is a lot more complex.  This next section assumes that the SSD does not hold the OS but is installed as an additional drive – brand new, out of the box, direct from the manufacturer.

The storage is divided into Blocks, which contain Pages.  Pages are made up of multiple adjacent cells on the NAND Flash, and the number of Blocks is determined by the size of the SSD.  Pages are actually 4 kilobytes in size and writable Blocks are made up of 64 Pages, so data is erased in 256KB sized blocks.

For illustration purposes I have shrunk the blocks to 12 pages each, and each Page is shown as 1 byte.
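Before moving to the shrunken illustration, the real-world geometry described above works out like this (a small sketch using the article's own figures):

```python
import math

# Real geometry from the article: 4 KB pages, 64 pages per writable block.
PAGE_SIZE_KB = 4
PAGES_PER_BLOCK = 64

block_size_kb = PAGE_SIZE_KB * PAGES_PER_BLOCK
print(block_size_kb)  # 256 -- which is why data is erased in 256 KB blocks

def pages_for_file(size_kb):
    """Writes happen in whole pages, so round the file size up."""
    return math.ceil(size_kb / PAGE_SIZE_KB)

print(pages_for_file(10))  # 3 -- a 10 KB file still occupies 3 full pages
```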

Blocks of Storage on an SSD

Keeping it simple, let’s open Notepad


We enter 1234 into Notepad and save it as 1234.txt

Notepad 1234 as text

4 byte 1234.txt

This is stored on the new, fresh SSD.  See above, the file is 4 bytes in size.  Let's say each byte of our file takes up 1 page (just to keep it simple).  We therefore need to write 4 pages onto the SSD to store our 4 byte 1234.txt file containing the text 1234.

4 pages of data

These 4 pages in the 1st block make up the file.

1234.txt stored on the SSD

Let’s edit the file – by adding 5678 to the start of the file.

Notepad containing 56781234

The file has now grown to 8 bytes

8 byte 1234.txt

Because we saved over the original file on the disk, there is a little work to do to write the new version of the file.

Original Pages are marked for erase – new pages written to same block

The new version of the text file is now 8 bytes in size

1234.txt 8 bytes in size

Above we see that the old data is no longer relevant – it really needs to be deleted. 

Solid State Disks only erase whole blocks of data – they cannot erase individual Pages.  So to erase the old data, the SSD must write the good data from Block A to Block B and then erase the whole of Block A.

1234.txt written from Block A to Block B

Once it has written to Block B – it can erase Block A

Block A Erased
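The copy-then-erase cycle illustrated above can be sketched as a toy model.  The 12-page blocks and 1-byte pages are the article's simplified illustration, not real NAND geometry:

```python
# Toy garbage-collection model: valid pages are copied from Block A to
# Block B, then Block A is erased whole. None marks an empty or stale page.

def garbage_collect(block_a, block_b):
    """Copy valid pages from block_a into block_b, then erase block_a."""
    for page in block_a:
        if page is not None:            # skip pages marked for erase
            free = block_b.index(None)  # first free page in destination
            block_b[free] = page
    return [None] * len(block_a), block_b  # erased A, updated B

# Block A: the stale 4-byte file is marked for erase (None); the new
# 8-byte version "56781234" occupies the remaining 8 pages.
block_a = [None, None, None, None,
           "5", "6", "7", "8", "1", "2", "3", "4"]
block_b = [None] * 12

block_a, block_b = garbage_collect(block_a, block_b)
print(block_b[:8])  # ['5', '6', '7', '8', '1', '2', '3', '4']
```

Notice the valid data had to be written a second time just to free Block A – this is exactly the extra write activity discussed below.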

The number of times a cell can be written to is limited before it becomes unreliable.  In SLC this number is between 50,000 and 100,000 writes.  In MLC it is typically between 3,000 and 5,000.

You can see in the simplified illustration above that the number of writes is more than you might expect.  We used the example of changing a file, but the same applies when writing additional files to the disk.

New SSD with data written across it

The above graphic shows that Block D has had a clean up and that the WORD.DOC file is written across Blocks C and E.  Files don't need to be sequential, and it makes no difference to the performance of the SSD that the data spans blocks that are not contiguous.  The important thing for the SSD is to erase in complete blocks.  Block D would have been erased after WORD.DOC had been written to the disk.  Block F is yet to be written to.

If we were to delete XYZ.dll from the disk, the SSD would need to move 1234.txt out of Block B and Pages E-L out of Block C before Blocks B and C could be erased.

SSD cleaned up after XYZ.dll erased

Hopefully I have demonstrated in a simple way how an SSD writes data.  It does so in 4KB sized Pages (we showed 1 byte sized chunks in the example for simplicity), with 64 writable Pages per Block (rather than 12 in our illustrations), and Blocks are erased whole, in 256KB chunks.

This cleaning of the data is referred to as Garbage Collection and keeps the performance of the SSD at its peak.  It also means there is no need to defragment an SSD.

This cleaning of data before new data is written is a downside of SSDs, as it creates additional write activity – and the SSD has a limit to the number of writes it can sustain.  The ratio of these extra writes is called Write Amplification and can be calculated as

write amplification = amount of data written to the flash memory / amount of data written by the host
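The formula above as a quick sketch (the byte counts are made-up illustrative figures, not measurements from any particular drive):

```python
# Write amplification: total bytes the SSD physically wrote to flash
# divided by the bytes the host asked it to write.

def write_amplification(flash_bytes_written, host_bytes_written):
    return flash_bytes_written / host_bytes_written

# In our 1234.txt example the host saved 8 bytes of file data, but the
# SSD also re-wrote the surviving pages during clean-up -- say 16 bytes
# hit the flash in total (illustrative figure):
print(write_amplification(16, 8))  # 2.0
```

A write amplification of 1.0 would mean no extra writes at all; anything above that eats into the cell write limits described earlier.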

The Solid State Disk market talks of Sustained Read and Sustained Write as well as Random Read and Random write scores. 

Sustained Write is writing pages into entire blocks without the need to clean up the block first – this layout (determined by the OS) makes the later cleaning up of the data much faster.

Random write writes data wherever it finds free space on the disk.  The maximum speed is determined by the number of channels on the SSD, the efficiency of the firmware and the speed of the NAND Flash Memory.

Of course, files come in a myriad of sizes that don't all fit neatly into 4KB chunks.  The controller within the SSD handles all of this logic, and improvements in controller design have been key to increasing SSD performance.

Next time we'll explore Over Provisioning, TRIM and Wear Levelling.

SSD price £1.03 per GB

Ok – so the world talks of Solid State Disks costing $1 per GB.  We have our own brand of Future Storage 120GB SATA III A-Sync MLC Solid State Disks available for £1.03 per GB (Ex VAT and shipping).  This is virtually a pound per GB (roughly $1.59) – with Sequential Read and Write performance of 500 MB/s, 4k Random Read/Write IOPS up to 60,000, and TRIM support.

These drives are affordable and very fast, and 120GB is plenty big enough for most systems.  The use of Asynchronous rather than Synchronous flash makes these drives affordable without losing out on much performance.

Sure, Asynchronous Flash has had some negative press, but in reality it's much better than a spinning HDD in every way!  It's just slower than a Synchronous SSD when dealing with compressed data (MP3s, JPEGs etc.).

If you can afford it – go for the Synchronous version, with Read/Write speeds at 550/530 MB/s and 4k IOPS at 80,000, for an extra £34 (Ex VAT and shipping).

If you are unsure of what SSD to buy – get in touch we can help!

GreenBytes HA-3000 meets the VMWare I/O Analyzer

The GreenBytes HA-3000 SAN appliance is perfectly suited to hosting IOPS-hungry VMware VDI environments.

Using VMware's new IO Analyzer tool, available from the community site, we ran a mixed read/write test with only 32 outstanding IOs.  This means the 'pressure' on the SAN was not as great, resulting in lower IOPS and latency than would be measured with 64 outstanding IOs.

The results of this tool demonstrate that the GreenBytes HA-3000 is easily able to deliver over 40K write IOPS and 10K read IOPS when testing a typical 80/20 write/read VDI workload.

The testing was done with two ESXi 4.1 hosts attached to a single HA-3000 250GB iSCSI volume presented over 10GbE.  Each host/VM was pulling on average ~22K write and ~5.5K read IOPS.  See the following screenshots.


We’d love to hear from you to compare your own system’s scores – send us your screen shots and we will post them on this blog.

Benchmarking the GreenBytes HA-3000

I visited IP Expo 2011 held at Earls Court on the 20th October 2011.  Chris Bowles of Consolidate IT had a Remote Desktop session running to connect back to their office in the Netherlands.  Here he was able to demonstrate the GreenBytes HA-3000 in action.  For more information on this amazing iSCSI SAN appliance, check out  our blog The Virtualisation Journey.

Chris demonstrated IOMeter running on two hosts, each working on 2 iSCSI volumes.  The iSCSI volumes are connected over 10GbE with an IO queue length of 64.


Here you can see the IOPS Score at an amazing 160,000 IOPS.  This, of course, is with the hosts running without load.

Chris went on to demonstrate cloning of a 25GB Windows 7 Virtual Machine.  This took only 2 minutes.

As more Virtual Machines (VMs) are loaded, the IOPS meter drops, as is expected.


Here you can see the IOPS score has dropped to 152,725.70 IOPS as we loaded up the VMs.

Below are the IOMeter test results.


Get in touch if you have any questions on the test or post a comment.

Virtualised Disaster Recovery

The Old Way

The world is embracing Virtualisation – and so it should.  Hardware technology is becoming more powerful, and we have had an era of excessive power usage.  The world is running out of resources, and we need to use the power of the new hardware to consume less of them.  Running more on less means we need less equipment to do more, and therefore use less energy producing and running it.

We now have Hypervisor software from companies like VMware and Citrix that allows us to run multiple guests on powerful server and SAN hardware.  Data centres still need a host per application so that software titles do not conflict, and each title needs set amounts of hardware resource (disk space, RAM, processing power), as specified by the software creator, in order to be fully supported.  An Enterprise anti-virus server application cannot run on the same host as another title, from another vendor, or it won't be supported.  It's not to say that two titles won't work together – they just don't get built that way.

Until recently (approx. 3 years ago), many companies ran their infrastructure designs using physical server farms and large SAN storage arrays, to accommodate each of the varying software titles that are required to complete the application suite.


With VMware or Citrix Hypervisor-based platforms the number of hosts is reduced, making an energy and cost saving on host server hardware.


This is achievable due to more powerful single servers being available on the market.  More processing power means each server can do more computing.

The above simple illustrations show a pattern: server farms have reduced in size, but storage arguably has not.  Storage has been increasing in order to accommodate the expanding virtual farms – previously, the host servers ran the OS on their own internal disk drives.  As the host server runs multiple guest Operating Systems, its storage requirement grows.  Effectively we have moved the storage from the host server into the storage device.  So although the server stack has shrunk, without storage technology improving, the SAN Storage stack grows.


This makes little or no energy or cost saving.  Storage products tend to use more power and cost more.

The New Way

GreenBytes HA-3000 High Availability (HA) Inline Deduplicating iSCSI SAN

New iSCSI SAN products like the GreenBytes HA-3000 High Availability (HA) Inline Deduplicating iSCSI SAN require less storage through technology improvements like inline deduplication and compression, where duplicate copies of each file are trimmed down to a simple pointer.  This creates a smart storage device that stores a single copy of a 100GB OS VM and 1000x 10GB copies of it, all on the same device.


Now we are starting to see a picture of real power, heat, cooling and hardware cost savings.  This technology allows a 26TB SAN appliance to host multiple instances of VMs, negating the need for large disk arrays.  See The Virtualisation Journey previous post on this technology.



Backup, Restore and Disaster Recovery

So far, we have roughly explored the transition in infrastructure design and the problems facing the IT industry in making reductions in power and cost.  The problems are not simply solved by adding storage.  We have briefly explored the new storage appliance from GreenBytes that can drastically reduce the amount of storage hardware using smart compression and deduplication technology.  However, storage is always a growing entity for a growing business.

Each time a disk is added, the question arises of what to do should it fail.  Failure and subsequent loss of data is not the only pain felt in a disaster: there is downtime for the end user, the labour cost of recovering the data and, of course, a loss to the business.

There is an entire industry around Backup and Disaster Recovery (DR), offering ways of seamlessly resolving failures of hardware and software running in a virtual environment while minimising disruption to the business.  Most have evolved from the one-application-per-host environments we saw at the start of this article.  In that scenario, the important data was stored on the SAN; the OS on the host could be quickly rebuilt from an image, so backing up the SAN-stored data took priority.  In most installations this would be achieved using a duplicate SAN and a fast, expensive data connection between the local and remote sites.


This is still true today of virtually all medium to large IT infrastructure designs.  There is too much data to go to tape – it takes too long to write to and read back in.  Those days are long gone for most installations.

Virtualised Backup and DR

As environments have moved into the virtualised field, where the SAN stores the actual OS as well as the data, backup technology has centred on backing up the SAN.  This has meant that engineering teams have split into those who look after the SAN and those who look after the hosts.

The SAN is backed up to its remote SAN partner, a task performed by Storage Engineers.  They perform LUN to LUN or Volume to Volume backups, controlled within the storage device software.

The Hypervisor engineer, when experiencing a failure, can shuffle his/her VMs around on the fly through technologies such as vMotion, but to do this on a large scale requires manpower.  The VM and its VMDK can be moved between hosts, but this is labour intensive.

Introducing Zerto

“Business Continuity and Disaster Recovery for the Cloud Era”

Zerto is a hypervisor-based replication and recovery technology that integrates with VMware vSphere and is managed from within vCenter.  Zerto software performs continuous, near-synchronous replication at the Hypervisor level rather than at the Storage level.

This gives control back to the Virtual host engineer, who can replicate and recover whole virtual environments using continuous replication with block-level, application-consistent data protection across both the host and the storage.

Zerto replication separates the application from its physical constructs at the data-protection level. It works on any storage device containing a datastore within your virtual datacenter and works with NetApp, EMC or any other storage vendor’s appliance.

To enable full application recovery, Zerto has developed Virtual Protection Groups. VPGs are a collection of VMs and their related VMDKs, which have dependencies and must be recovered with write-order fidelity. Zerto VPGs ensure applications are replicated and recovered with consistency, regardless of the underlying infrastructure.

Single click protection

Any VM in vSphere vCenter can be protected with a single click, without any extra configuration.  No client or agent software needs to be installed on the guest OS.  Zerto protects at the VM and VMDK level, allowing the Virtual Host engineer to protect the virtualised environment without storage constraints.

Tasks available include:

  • Replicate single or multiple VMs
  • Create VPGs – Virtual Protection Groups with write-order fidelity
  • Protect multiple VMDKs connected to the same VM
  • Support replication of RDM devices to a remote VMDK or RDM
  • Use an intuitive GUI embedded in vSphere vCenter
  • View and manage local and remote sites from the same vCenter client
  • Remote installation, takes under an hour
  • Seamless support for vMotion, DRS and HA while replicating
  • WAN optimization – built-in WAN compression and application policies
  • Replication, Testing, CDP, Reporting and Migration

Zerto removes the need to use multiple solutions for Disaster Recovery. All tasks are accomplished using built-in, easy to follow wizards within the tool.

These include:

  • Failover one or more VPGs, including automatic reverse replication
  • Recover to a historic point-in-time with journal-based CDP
  • Recover volumes instantly, in read-write format
  • Test failover, including full remote recovery in a sandboxed zone
  • Migrate workloads to a remote data center
  • Get comprehensive reporting on all system-wide activities

Cloud-Ready Disaster Recovery

Disaster recovery to the cloud is considered by many enterprises as a natural first step in the journey to the cloud, being an effective alternative to in-house DR sites.

Zerto’s technology fits perfectly with the specific needs of both enterprises and cloud providers.

  • Replication from any storage to any storage
  • Management API
  • Built-in WAN optimization
  • Native multi-tenancy support
  • Software only, highly scalable
  • Tier-one protection level

Supported Environments

  • ESX/ESXi 4 (vSphere 5 support scheduled for 1/2012)
  • Storage: SAN, NAS, FC, iSCSI, DAS, and all external and internal storage supported by VMware, including replication between different storage types
  • Volumes: VMFS, Virtual RDM, Physical RDM, including replication between volume types
  • Guest OS: all OS supported by VMware
  • Virtualization Features: vAPP, HA, DRS, vMotion, Storage vMotion.

Get in touch here or call 08452990793 for a free trial or for more information.

The Virtualisation journey

I started out in IT straight from school as a trainee ICL VME operator.  VME was a mainframe system consisting of large cabinets of hardware produced by ICL.  I joined a company that had just got rid of its punch card process – it still sat in the corner but I never actually saw it in action.  The terminals were green screens with light green text and looked like the CRT monitors we knew before flat screens became the norm.  Each had an on/off switch and a keyboard attached.  You turned it on and it loaded instantly.  There was no mouse, and everything you did involved command line text, the syntax of which you had to learn.  The job involved running jobs or scripts to produce printed output on green and white ruled paper, cheques or other pre-printed paper.  The jobs/scripts would run for hours to generate the output, and involved loading large 24 inch reel to reel tapes to load in the data or to make backups.

Picture courtesy of IBM

Loading the tape took a bit of skill to line it up.  We later moved to the smaller cartridge (as in the above shot).  This held more data and required no skill (other than putting it in the right way round – though it would only go in the correct way).  They were even housed in a huge StorageTek robot which loaded them with a robotic arm.

The mainframe computer took up the size of a 5-a-side football (soccer) pitch and cost millions of pounds.

The users connected to the mainframe via their “dumb terminals” to interact with the hardware to get the information necessary to complete the tasks required of them.

Big, blue-chip companies ran their entire business process like this – finance, manufacturing, personnel; the mainframe even had its own email system.

Then the PC came along, with its own operating system in Windows 3.1, and terminal emulation software was used to connect to the mainframe.  This replaced the dumb terminals.  Now the employees used a mouse and could play Solitaire or create pictures with MS Paint while the mainframe jobs ran their course.  Of course the PC had word processing and spreadsheets, using Lotus 1-2-3 software, so more could be achieved while sitting at the system.  However, the lowly 286 PC could not process and create 10,000 cheques a night or do anything close to what the mainframe was capable of.  It was simply a means to connect to it.

However, I could see that the PC was the future and managed to get out of mainframe operations into a PC support role.

The PC has been through quite a lot since then; though the common OS has gone from Windows 3 to 7, and shortly Windows 8, its role has not really changed all that much.  It is the PC server that has grown up in a more significant way, to the extent that groups of servers are now capable of replicating the mainframe environment that was prevalent from the 1970s through to the 1990s.  So much so that they are now powerful enough to host the operating systems, and the PC becomes unnecessary.  Thin client systems, which have only Keyboard, Video and Mouse (KVM) connectors along with a network port, are now becoming commonplace in the SME market.

We have returned to a central system running on the server, with a "dumb terminal" on the desk of the user.  However, the options available are far greater than in those golden days of the green screen.  Businesses can now deploy entire systems at the click of a mouse – they can spend less money and achieve a whole lot more.

Traditional Rollout process

Take a typical rollout process for a company of 10,000 people.  Following the infrastructure build, the actual rollout would involve a lot of people developing a Common Operating Environment (COE), installing it onto new hardware, backing up current systems' data, physically shifting the equipment to each and every desk, then working through the gremlins that inevitably occur following such a process.  Three to five years later, you go through it all again.

Once the systems are rolled out there is the need for a call center, 2nd and 3rd line support, field services and hardware replacement costs through failure.

Cheaper solution through virtualisation

The new way is cheaper not only in hardware costs, but in personnel.  Thin client systems that sit on a desk are available for as little as £100 (even less in some cases).  They require no maintenance or upgrades – they are "dumb terminals".  The Operating System runs on the server: hypervisors such as VMware ESX or Citrix XenServer run on powerful servers and make the server's hardware available to the many guest Operating Systems running on them.

Installation is just a matter of unbox and plug in.


Above is a very basic server hosting the Hypervisor software.  This can host multiple guest Operating Systems; the number is dependent on the host's resources (RAM, Storage, Processors).

Most Host systems will have more than a single network connection, as well as faster options such as Fibre Optic connections through Host Bus Adapters (HBAs).  These hardware network connections are available to the guest Operating Systems, and virtual connections can also be created that allow guests to communicate with each other.  Virtual Guest operating systems can even be moved from server to server while they run, over a dedicated network (this functionality requires its own network connection and VLAN).

Virtual Storage Solutions

In the above basic example, the host Hypervisor server is shown with only an internal disk.  Considering the largest disk available is 4TB (4000GB), and you need at least 1GB for the Host OS, installing guest Operating Systems at 100GB each would give approximately 39 guests on the single server, limited by the space on the disk.  This would push the server to its limit on storage, and of course, if the single server failed it would take all guests down in one go.

A robust virtualised solution would therefore have a mirrored system (either Active/Backup or Active/Active) as well as storage external to the host server.  The host system's storage would hold the local Host Hypervisor OS (ESXi or XenServer etc.) and the guests would be held on a SAN.

The Storage Area Network device contains either purely spinning hard disks, Solid State Disks or a combination of the two (more on that later).  The disks within the SAN can be configured in a RAID configuration or as Just a Bunch of Drives (JBOD).  They are presented to the Hypervisor server as a Logical Unit (LUN) in Fibre Optic connected systems or as connected drives in a JBOD setup.  The connection between the server and SAN is achieved using either 2Gb Fibre Optic cabling (Fibre Channel) or 1Gb Ethernet copper cabling (iSCSI).  I won't go into the argument between the two technologies here.

Once configured, the host Hypervisor server or servers can store multiple guests up to the maximum extremes of their hardware capability.  Carve up the resources of the host to allocate to the guests – it's a simple mathematical equation of RAM and storage.  How much resource does each virtual guest require to run successfully?  Divide the amount available on the host by this figure and you have the number of guests available on each host server.
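The "simple mathematical equation" above can be sketched as follows (the resource figures are invented examples, not sizing guidance):

```python
# Guests-per-host sketch: each guest needs a fixed slice of RAM and
# storage, and the scarcer resource sets the ceiling.

def max_guests(host_ram_gb, host_storage_gb,
               guest_ram_gb, guest_storage_gb):
    by_ram = host_ram_gb // guest_ram_gb
    by_storage = host_storage_gb // guest_storage_gb
    return min(by_ram, by_storage)

# A host with 256 GB RAM and 4 TB storage, guests needing 4 GB / 100 GB:
print(max_guests(256, 4000, 4, 100))  # 40 -- storage is the limit here
```

In this made-up case the host could satisfy 64 guests by RAM but only 40 by storage – which is exactly why the discussion now turns to external storage and IOPS.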

This brings us back to the need for the most powerful host servers.  Today's servers are small in physical footprint in comparison to the old mainframes.  Link them together in racks and even clusters (linked servers where failures are seamlessly accommodated without disruption) and the power and performance once available only in football pitch sized computer rooms is available in a single rack.  This technology is faster and cheaper than ever before.  Moore's law has allowed chips to get more powerful in smaller and smaller physical sizes.


So now we have the fast host server, and the bottleneck is IOPS (Input/Output Operations Per Second).  In short, this is getting data off a disk and writing data to it.  The host now has multiple core processors and high bandwidth networking; the problem lies with the hard disk.  In order to achieve ultra fast IOPS with spinning disks, multiple disks in large RAID arrays are required.  A typical 15k SAS spinning hard disk can achieve 200 IOPS, while a typical Microsoft Operating System (Windows 2003 Server or Windows 7) requires at least 40 IOPS in order to run comfortably.  A good basic blog on this is available at IOPS: Performance Capacity Planning Explained and the pertinent text from that page is:

  • Microsoft Exchange 2010 Server
    Assuming 5000 users that send/receive 500 emails a day, an estimated total of 3000 IOPS is needed
  • Microsoft Exchange 2003 Server
    Assuming 5000 users are sending 60, receiving 150 emails a day, an estimated total of 7500 IOPS is needed
  • Microsoft SQL 2008 Server, cited by VMware
    3557 SQL TPS generates 29,000 IOPS
  • Various Windows Servers
    Community Discussion: between 10-40 IOPS per Server
  • Oracle Database Server, cited by VMware
    100 Oracle TPS generates 1,200 IOPS

Therefore, to achieve 7,500 IOPS (taking the MS Exchange 2003 Server example listed above), a RAID array containing 38 disk drives would be required (based on 200 IOPS per 15k SAS disk).  The amount of storage is almost irrelevant: 38 x 72GB drives give 2.7TB of storage, which is possibly more than the Exchange software needs, but that number of disks is required to deliver the IOPS performance.  For the SQL 2008 Server example, 145 drives are required using the same figures.
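The drive-count arithmetic above is simply a ceiling division – a small sketch using the article's 200-IOPS-per-disk figure:

```python
import math

def drives_required(required_iops, iops_per_drive=200):
    """200 IOPS is the article's figure for one 15k SAS spindle."""
    return math.ceil(required_iops / iops_per_drive)

print(drives_required(7500))   # 38  -- the Exchange 2003 example
print(drives_required(29000))  # 145 -- the SQL 2008 example
```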

Here we can see why storage costs have risen: so many drives are required in order to provide the performance demanded by the software running on the hardware.

Solid State Disks to the rescue

We have explored the problem of the need for high IOPS from storage technology, and here is where the Solid State Disk (SSD) solves it.  SSDs are achieving as much as 80,000 IOPS in 4k Random Read and Write measurements (latest SATA III MLC drives).

This technology means fewer drives are required to achieve the performance.  Though SSDs are currently more expensive than their spinning disk cousins, solutions can be built using fewer disks.


Single Level Cell (SLC) based SSDs have always been labelled Enterprise drives: writing 500GB of data to such a drive every day would only cause a write wear failure after around 10 years.  Multi Level Cell (MLC) based SSDs have been labelled Consumer drives: writing 200GB of data every day would cause the drive to fail after around 6 years.

SLC drives are generally limited to 128GB (240GB SLC drives are shortly to come on to the market).  MLC drives are now commonly available at 480GB and even 960GB sizes.  Using the above arithmetic, an application writing to a 480GB drive in its entirety every day would cause the drive to fail after approximately 3 years.  In reality though, most software applications will not behave in such a way – there would be a mixture of Reads and Writes.  Analysis of an application's storage behaviour is required before selecting which technology is suitable, which can make choosing a storage solution a complex process.
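The lifetime arithmetic behind these figures can be sketched as below.  This is a back-of-envelope model that folds write amplification and wear levelling into a single "effective full-drive cycles" figure – the ~1,100 cycles used in the example is simply the value implied by the article's ~3-year estimate, well below the raw 3,000–5,000 cell rating quoted earlier:

```python
# Back-of-envelope endurance model: total writable data divided by the
# daily write volume gives the days until wear-out.

def years_to_wear_out(capacity_gb, full_drive_write_cycles,
                      gb_written_per_day):
    total_writable_gb = capacity_gb * full_drive_write_cycles
    return total_writable_gb / gb_written_per_day / 365

# A 480 GB MLC drive rewritten in its entirety every day, assuming
# ~1,100 effective cycles (illustrative, implied by the ~3-year figure):
print(round(years_to_wear_out(480, 1100, 480), 1))  # 3.0
```

Halve the daily write volume and the estimated lifetime doubles – which is why the mix of reads and writes matters so much when choosing between SLC and MLC.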

New Breed of Storage Devices from GreenBytes

All the major storage companies in the market have recognised this new technology in Solid State Disks.  EMC, NetApp, HP and Dell all offer SSD based storage devices, either fully Solid State or a new hybrid.  The hybrid uses traditional spinning disks as the medium to store the data and an SSD layer as a read/write cache.  To the host server environment, the SSD is presented as the front end, giving fantastic IOPS performance.  The SSD reads and writes at amazing speeds, then trickles the reads and writes down to the storage behind it.  This allows a cost effective solution without using a fully populated Solid State device.

GreenBytes HA-3000 High Availability (HA) Inline Deduplicating iSCSI SAN

GreenBytes have produced a revolutionary storage device called the HA-3000, a High Availability (HA) Inline Deduplicating iSCSI SAN.  This device uses 2x Solid State Disks to hold an intelligent cache as well as the metadata and deduplication tables.  The data is stored on either 2TB or 3TB 2.5 inch SAS drives, achieving between 26TB and 39TB of storage and as much as 150,000 IOPS – all from a single 3U device.  A single expansion shelf can be added to each HA-3000, doubling its capacity.

Greenbytes H-3000 High Availability (HA) Inline Deduplicating iSCSI SAN

Features include:

Dual Controllers

These Xeon-based controllers can operate independently in an Active/Active configuration, giving the high availability features that today's Virtual Desktop and Virtual Server environments demand from an iSCSI SAN.

Redundant Networking Capability

Equipped with 4x 1GbE and 2x 10GbE network ports, the HA-3000 has fully flexible connectivity and fully redundant capability.

True Hybrid based performance

The GreenBytes HA-3000 has an intelligent dual-SSD cache layer, giving low latency performance without the current high cost of an all-SSD device.  With its flexible virtual pool and thin provisioning features, the system can be configured to use additional SSD in the cache layer to accelerate performance.

Inline Deduplication

Deduplication is a method of storing a single instance of a file that is commonly used across the guest operating systems hosted on a Hypervisor platform.  This allows a huge saving in the amount of data stored on a storage platform, and applies to operating system files as well as software applications.

For example, a 1MB email attachment sent to all recipients in the address book would normally mean that the Mail Server software stores multiple copies of that attachment.  Deduplication stores a single instance of the file, with pointers to it stored within the emails that contained the original.

The above is a very simplified example of where deduplication savings can be made: storing a single version of a file (notepad.exe) once, and storing only a pointer to the actual file in subsequent instances held on the device.
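The single-instance idea can be sketched in a few lines of Python. This is a toy illustration only, not GreenBytes’ actual implementation: identical content is detected by hashing it, stored once, and every subsequent write records only a pointer (the hash).

```python
import hashlib

class DedupStore:
    """Toy single-instance store: identical content is kept once;
    later writes of the same bytes record only a pointer (the hash)."""
    def __init__(self):
        self.blocks = {}   # content hash -> actual bytes (stored once)
        self.files = {}    # filename -> pointer (content hash)

    def write(self, name, data):
        key = hashlib.sha256(data).hexdigest()
        if key not in self.blocks:       # first copy: store the data itself
            self.blocks[key] = data
        self.files[name] = key           # every copy: store only a pointer

    def read(self, name):
        return self.blocks[self.files[name]]

    def physical_bytes(self):
        return sum(len(d) for d in self.blocks.values())

store = DedupStore()
attachment = b"x" * 1024 * 1024          # a 1MB attachment...
for i in range(50):                      # ...mailed to 50 recipients
    store.write(f"inbox_{i}/attachment", attachment)

print(store.physical_bytes())            # 1048576 - stored once, not 50 times
```

Fifty logical copies occupy the physical space of one; the per-file cost is just the pointer lookup, which is exactly the overhead the HA-3000 keeps on fast SSD.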

The lookup and processing involved in this translation of data – read pointer, look up the single file instance location, open the file – adds overhead on the storage. However, as this data is held on the Solid State Disks within the HA-3000, access to it is very fast. The GreenBytes HA-3000 is specifically designed and optimised for this function, which is key to how the amazing performance is achieved.

Inline Compression

This functionality allows the GreenBytes HA-3000 to store more data than its physical size.  A typical compression ratio of 10x theoretically expands the device to store 10 times its physical storage.  A 26TB SAN can therefore store 260TB (Yes! 260TB from a 3U device).  Inline compression and deduplication together expand the storage beyond its physical footprint.
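The capacity arithmetic is straightforward; here it is as a trivial Python check, using the figures from the text (the 10x ratio is the theoretical example given, not a guaranteed result):

```python
def effective_capacity_tb(raw_tb, compression_ratio):
    """Effective (logical) capacity after inline compression:
    raw physical capacity multiplied by the achieved ratio."""
    return raw_tb * compression_ratio

print(effective_capacity_tb(26, 10))   # 260 (TB from a 26TB SAN at 10x)
print(effective_capacity_tb(39, 10))   # 390 (TB from a 39TB SAN at 10x)
```

The real ratio of course depends on the data: already-compressed media barely shrinks, while VDI images and office documents compress and deduplicate extremely well.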

Enterprise Management and Features

The GreenBytes HA-3000 includes a built-in snapshot and replication management software package which requires no additional licencing costs.  This allows the system to be configured with Active/Passive or Active/Active access to the storage pool(s).  When configured as Active/Active with the optional HA-3000 storage expansion unit, the controller design accelerates achievable IOPS, which broadens the range of target storage applications within the single product family.  The HA-3000 can easily be paired with other GreenBytes SAN systems (including the GB-X Series), all from a single management screen.

The HA-3000’s performance capabilities are significantly greater than other iSCSI SANs in the same price range. With GreenBytes’ Hybrid Storage Architecture (HSA), an intelligent SSD-enabled cache layer accelerates read and write IOPS for the most demanding of virtual infrastructure projects, including the largest enterprise VDI installations.

The HA-3000 series offers a wide range of capacities aimed directly at the challenges of virtual infrastructure projects and other business-critical applications requiring highly available (HA) SAN storage.  The HA-3000 has raw capacities ranging from 26 TB to 78 TB. With additional optimizations of deduplication and compression, the actual capacities of the systems are typically much greater, especially in the area of desktop virtualization.

Buy GreenBytes High Availability (HA) Inline Deduplicating iSCSI SAN from Future Storage

Pro Tools SSD vs HDD

OK, so my first post on this topic (Comparing Pro Tools on an SSD) was not a particularly valid test.  Everyone knows that Windows 7 is faster than Windows XP – heck, Windows XP was released in 2001, so is 10 years old.  What was needed was to compare Pro Tools on an SSD vs an HDD and see whether it makes a difference.

Well, I decided to take the plunge and rid myself of Windows XP – I’d built Windows 7 Ultimate on a Future Storage A-Synchronous 240GB SSD and had been running it for a week. So I used Clonezilla to create an exact copy of the SSD onto the Seagate 500GB Hard Disk Drive (ST3500620AS).  This involved burning a CD, booting from it and carefully following the instructions.  It’s a bit scary, as of course if you select the wrong drives you end up wiping Windows 7 and copying the contents of the HDD onto the SSD (I’d wiped the HDD beforehand, so if I made a mistake I would end up with 2 blank disks).  Luckily Clonezilla names the drives during the selection process and it’s impossible to get it wrong (if you follow the instructions carefully).

I did intend to use GAG 4.10 to allow me to boot from either disk, but this didn’t work and I ended up getting into a bit of a mess with the MBR (Master Boot Record), which involved booting off the Windows 7 disk and restoring the Windows 7 MBR.  In the end I gave up with GAG and now just disable the disk I don’t need in the BIOS.

The results – well, I created another video to compare the 2 (see below).  I also added the track that I was loading on each system (a shameless bit of self-indulgence – though I know I won’t win any singing contests).  You will see that the Windows XP clip doesn’t even load before the end of the song.

So the conclusion is quite significant – from boot to loading a track takes 1 min 41 s on the SSD vs 3 min 26 s on the HDD, running on the same Dell XPS 420 with 3GB RAM.
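For the record, the rough speedup from those timings works out as follows (a quick Python check of the numbers above):

```python
def to_seconds(minutes, seconds):
    """Convert a min:sec timing to plain seconds."""
    return minutes * 60 + seconds

ssd = to_seconds(1, 41)   # 101 s from boot to loaded track on the SSD
hdd = to_seconds(3, 26)   # 206 s on the HDD
print(f"SSD is {hdd / ssd:.2f}x faster")   # roughly 2x end to end
```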

Pro Tools SSD vs HDD – SSD wins hands down!

Secure Solid State Disk Drives Coming Soon

While at CeBIT 2011 I came across a stall by Securedrives.  We exchanged details and we will be looking to resell their products when they are available.  What are they, and what makes them so secure, you may ask?

To give you an idea – I want to tell you a story:

ABC Designers Ltd has been given a brief to design the packaging for a set of top secret new products for the MP3 player market, from a leading manufacturer of electronic devices. They sign a Non Disclosure Agreement with the manufacturer and promise to keep it all a secret.

After 2 months of hard work, the project is ready to show the client.

They decide to store the whole project on a Securedrives 2.5 inch SATA Solid State Disk.

Securedrives portable SSD

Before making the trip to the client, they store all of the files required for the project (JPEGs, the NDA agreement, PSD files, quotes and anything else) on the Securedrives SSD. They set the password on the keypad of the device, which also has a GSM mobile phone SIM; the details are kept by Maureen, the office administrator.

They activate the GSM signal loss option, to self destruct the disk if it fails to pick up a signal for 5 minutes. Of course Steve can override this if they get stuck in traffic in a blackspot.

The battery is also fully charged, and the option to self-destruct the disk should the battery run down is applied.

They also switch on the random keypad option, which jumbles the numerical keypad each time the device is locked, so that the pattern of the password typed in by the user cannot be replicated by prying eyes.

The disk is put into a briefcase and given to Steve to take to the client. He also has his trusty laptop. Steve goes out to hail a cab to make the trip across London.

Steve hailing a cab

He puts the case on the back seat next to him and chats to the cab driver as they move through the traffic. An hour later, Steve reaches his destination, pays for the cab from his wallet and jumps out of the cab while the lights are red at a junction just 20 meters from the client’s office.

Steve Lost the Briefcase

OH NO! He forgot the briefcase – ok, he found an umbrella – but that is not much good to him – the cab is long gone. Not only has he got to go and apologise to the customer, but he’s also got to explain that the data is lost and could fall into the wrong hands. Luckily they saved the data onto a Securedrives PSD64MG1 drive. He calls Maureen in the office to explain.

Maureen taking the call

Maureen initiates the remote data wipe facility by sending a text message to the drive using her mobile phone.

While sitting in the back of the taxi, still in the briefcase, the Securedrives PSD64MG1 drive receives the SMS from Maureen at which point the drive

  • …changes its encryption key, making it even more difficult to find the original encryption key (should anyone ever work out how to decode AES 256-bit Cipher Block Chaining (CBC) from a destroyed disk at any point in the future).
  • …overwrites the drive’s partition table with “white noise”, making it impossible to piece together any of the data within the NAND chips. The NAND chips are then physically destroyed:

Physically destroyed NAND

This prevents anyone from removing the chips from the board – mounting them on another board, and somehow rebuilding the disk. All of these things are done in 300ms.
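To see why changing the encryption key alone already renders the data unrecoverable, here is a toy stream-cipher sketch in Python. This is emphatically not the AES-256 CBC the drive actually uses – just an illustration of the principle that ciphertext is worthless without the original key:

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (NOT AES-256-CBC; illustration only):
    XOR the data with a keystream derived from the key via SHA-256.
    Encrypting twice with the same key round-trips the data."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        stream += block
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

secret = b"Top secret MP3 packaging designs"
key = b"original-drive-key"

ciphertext = keystream_xor(key, secret)
# With the original key, the data round-trips perfectly:
assert keystream_xor(key, ciphertext) == secret
# After a key change, the same ciphertext decrypts to garbage:
assert keystream_xor(b"replacement-key", ciphertext) != secret
```

The drive’s real scheme is far stronger, but the principle is the same: once the original key is gone, the ciphertext on the NAND is noise – and the partition-table overwrite and physical destruction are belt-and-braces on top of that.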

The taxi driver then drives through Blackwall tunnel, a notorious traffic blackspot, and sits in a traffic jam, where the GSM signal is lost for 10 minutes – this would normally destroy the drive – but that has already happened via the SMS from Maureen.

There are more security options on this amazing drive. Let’s say the taxi driver hadn’t gone through the tunnel. Perhaps he had picked up another passenger who decided to have a nose in the briefcase. That passenger would have had to guess a PIN of up to 20 digits (length options are 4 to 20) within 8 attempts (options are 2 to 8).
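The odds of brute-forcing that PIN are easy to work out (a sketch assuming a uniformly random numeric PIN, which is the best case for the owner):

```python
from fractions import Fraction

def guess_probability(pin_length, attempts):
    """Chance of guessing a numeric PIN of the given length within
    the allowed number of attempts, assuming a uniformly random PIN."""
    return Fraction(attempts, 10 ** pin_length)

# A 20-digit PIN with 8 attempts allowed before self-destruct:
p = guess_probability(20, 8)
print(p)   # 8 chances in 10^20 - astronomically small
```

Even at the minimum settings (a 4-digit PIN, 8 attempts) the snooper only gets 8 chances in 10,000 before the drive destroys itself.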

If he had broken the case open to get to the NAND Flash itself – the device would also self destruct.

The GSM interface also allows the device’s location to be tracked, should they have wished to try to recover the device before destroying it. However, they didn’t have the tools to do that.

So ABC Designers were safe in the knowledge that the data could not fall into the hands of the customer’s competition. All was good in the end.

Steve, however was fired – poor chap!

Like us on Facebook to register your interest and for more info.

Comparing Pro Tools on SSD to HDD

Pro Tools on an SSD

My SSD upgrade on my music/programming desktop took a while, due to my mistrust of Microsoft OS upgrades where complex software such as Pro Tools might not work. Indeed, if you Google “Windows 7 Pro Tools” you will find 767 million results, many of which state that the 2 do not mix. However, on reading some of them, there is a community of Pro Tools users out there stating that Windows 7 and Pro Tools do work together, even in the 64-bit guise.

As I do more and more work on this desktop system now (as opposed to my Ubuntu, SSD marriage on my trusty laptop) I went for the upgrade.

The SSD is a 240GB A-Synchronous SATA III drive which I have discussed before (see SATA III on a SATA II PC), on which I have installed Windows 7 64-bit Ultimate. I went for the top version of the OS, as it was the same price as the Professional version, though I don’t think I will use the extra BitLocker function or additional languages (my wife uses Mandarin on her laptop – so if she ever uses this desktop, we have that extra option).

My main use for the Dell XPS 420 (3GB RAM), with an ATI Radeon HD 3870 graphics card (with dual DVI outputs) is to run as my Digital Audio Workstation (DAW). I’ve been a bedroom musician for about 22 years now, and like to blast away – I say blast – it’s probably more mellow than that these days (I turned 40 this year).

I learned about MIDI on an Atari 1040 STFM (with monochrome monitor) running Cubase back in the early 90s and built up a list of equipment:

Korg M1 Synth
Roland RD-300 88-key Keyboard
Roland S-760 Sampler (500GB SCSI Drive and CDROM)
Roland Super JD990 synth module
EMU Morpheus synth module
Cheetah MS6 Analogue Synth module
Casio VZ-10M Synth
Korg Poly 800 – Analogue Synth
Midjay Midi workstation
Korg 168RC Mixer
Behringer Eurorack 16chn rackmount mixer
Casio HZ600 Synth
Alesis AI3 – ADAT Optical interface (8 analog inputs/outputs to 1x ADAT 8chn in/out)

A few rack mount compressors, sound processors etc, Acoustic and Electric Guitars, Hohner Bass, King Alto Sax and an Indian made Trumpet I bought on Ebay for £1 (£60 shipping) – you get the idea.

A few years back I took the plunge and decided to migrate from Cubase to Pro Tools. My experience of DAW on a PC had never been very satisfactory running Cubase. On the Atari 1040 STFM it was amazing – never ever crashed and was a joy. However, recording audio on it was not an option.

So I moved to Cubase on the PC, but was never that happy with it. I even spent £600 on a SONORUS Pro PCI ADAT card to go with the Korg 168RC mixer, which worked fine until I upgraded to Windows XP and new motherboards no longer supported the old PCI interface.

My Pro Tools setup was one up from the cheapest option: the Mbox 2 Factory bundle, which came with Pro Tools LE 7.4:

Mbox2 Pro Tools

This gives a stereo in and out via USB connection to the PC. On later versions of Pro Tools the Mbox 2 could also be used as the main audio output for the PC. This was very handy as it negated the need for an extra pair of channels on the mixer (I always struggle with channels, as the Korg 168RC only has 8 analog inputs – even using the AI3 I was still short, so I ended up buying the Behringer rackmount mixer). Now the Mbox 2 is also my PC’s soundcard. It turns out that for Windows 7, an upgrade to Pro Tools 8.05 was required to get this to work.

I’m getting ahead of myself. My PC has had a Windows XP Pro build for a couple of years (since its last rebuild), running on a Seagate 500GB Hard Disk Drive (ST3500620AS). I also have a 1TB Samsung HD103UJ on which I store all downloads and data. It really needs another wipe and rebuild, which I will do in due course. I always leave it until I’ve been using the new OS for a while, in case I need to copy something over that I have missed.

The PC now has 3 drives, 2x with an OS. After looking for a boot loader to install, I found GAG 4.10, which allowed me to boot from the CD that you create from it. You then add your Operating Systems to it from a very simple menu system (screenshots available at ).

This allows me to boot to Windows XP or Windows 7 without having to unplug anything. However, Windows XP was built in Native PCI mode (RAID turned off) and I converted the Windows 7 Ultimate install to AHCI mode (RAID turned on), as per my blog at SATA III on a SATA II Motherboard update, so this needs changing before the GAG menu pops up. For those worried about doing it: you boot from the CD and install it on the Hard Disk (it stores itself in the MBR for you) – if you don’t want it, just uninstall it (there is an option in the menu to do it). I have installed Windows 7 under key 2 and Windows XP under key 3 (though this swaps when I change the BIOS over).

I recorded the 2 boot times. Note: I have taken the GAG menu out of the clips – will do a separate clip to show that. This is one of the best reasons to upgrade your PC to both the new version of Windows as well as upgrade to an SSD. I may be an advocate of Solid State Disks – but I’m now also a fan of Windows 7. The 2 are perfect partners.

So now to Pro Tools.  The difference is not so much apparent when playing a track that you have created, but more in the boot times.  I’ve loaded up both OS versions with plenty of plugins.  The following track has 15 separate tracks (11 Audio, stereo Acoustic Guitar, 5 tracks of electric guitar, bass, vocal, backing vocal, a couple of instrument tracks for drums (EZDrummer) and a Synth plugin).  It was originally recorded in Windows XP (this version is 8.04) – Windows 7 Ultimate has been upgraded to 8.05 to enable the MBOX2 to be used as the windows soundcard (as discussed above).  The Camstudio screen capture software failed to work on Windows XP while Pro Tools was loading so I had to record it with the video camera.
Note: For those who may not use Pro Tools – the software effectively loads every plugin that you have installed on your PC, whether you use it or not, which is why it takes so long to load.

Compare load times with Pro Tools on an SSD (Win 7) vs Windows XP HDD

I haven’t had a chance to do any long sessions with the music gear yet – but when I do I’ll post an update to report any findings.

Of course this is not a very good comparison – it’s not supposed to be competitive.  It’s to show how far the technology has moved on.  For anyone still holding onto Windows XP running Pro Tools – ditch it and upgrade!

Next up is to clone the SSD onto the HDD (wiping XP off of it) and then I can compare the SSD with the HDD.

What is a solid state disk?

The computer world, and the users of computers, demand faster and faster systems. Waiting for a PC to boot or for a software package to load has become less acceptable to the demanding user. 30 years ago software was loaded via audio cassette tape, yet today we all expect everything to happen at the touch of a button – whether we are listening to music, working on a spreadsheet, or (hey!) even doing the 2 things at the same time, which all PCs are now capable of.

However, nobody wants to wait for the next song to load, or for Excel to launch. I upgraded my Windows XP system recently to Windows 7 64-bit Ultimate with a SATA Solid State Disk. The difference is at times subtle, but in other ways staggering. As I type this my Winamp plays my shuffled MP3 collection seamlessly, but then it did on Windows XP. This is written using Thingamablog (that is what it’s called – rather than having forgotten what the blog software is called, cause that would be just dumb!), and so while I am typing, the SSD is whirring away – actually it doesn’t whir at all – it is silent. For me, today, that doesn’t make a lot of difference to my world – my amp connected to my speakers that the sound card is plugged into is louder than the PC whir (I stupidly bought a DJ amp which has a fan in it and have not changed it yet).

However, I can see the benefit of a silent, or at least quiet, PC. At one point I tried it in our living room as part of our entertainment system, and it was just too darn loud. Recording studios, broadcasting and recording suites would benefit from a quiet PC – or indeed if you had your PC in your bedroom and left it on when you went to bed, waiting for that download to complete.

The main thing is the boot time and loading all the software onto the system. With Windows XP I used to turn the PC on before going out to walk the dog in the morning, happy that on my return the login would be acceptable. On the times I forgot, a long wait ensued, and like a watched kettle that never boils, it was a painful experience.

My main system is a Ubuntu 11.04 laptop with an SSD – which loads in seconds and programs are immediately available after the completed login. Both systems enjoy this exact same experience now, though I accept Windows 7 probably does help to perform this wonder.

What is an SSD? OK, OK, sorry, I went off on a tangent. A solid state disk drive has no moving parts; it uses high speed circuits and NAND Flash memory to store data. Therefore it has no speed measurement like a 5400 rpm or 7200 rpm (revolutions per minute) disk, which has a rotating magnetic platter on which the data is written and read using a mechanical arm. A solid state disk has no mechanical parts and uses technology similar (but more complex and robust) to the solid state media used in camera cards, mobile phone memory cards and USB pen drives.

With more and more drives appearing on the market, and more systems being shipped with an SSD as standard, it won’t be long before we see the end of the spinning disk in the market.