Broadcast Beat Magazine September, 2015 - Page 52

In the days of nitrocellulose, the film editing process required the skillful use of a glue brush. Media asset management was accomplished with a card catalog pointing to a rack in the studio vault, where one could find a can of film with a label matching a handwritten leader. Today, the process has gone through several complete evolutions, from editing done on custom workstations to non-linear editing software running on commodity hardware. Storage has evolved through multiple generations as well, and we have supported this entire process.

DDN entered the Fibre Channel storage market specifically for media with an arbitrated loop product over 20 years ago. Those were the first days of the Storage Area Network (SAN), and we functioned behind managed hubs and switches to enable the first collaborative environments. In 2000, we introduced a storage product with a custom ASIC that presented a virtual Fibre Channel environment. This was the first product to introduce a guaranteed Quality of Service (QoS) and was, in effect, a “perfect” disk drive that never varied in latency or availability. We even offered a management API that allowed post production facilities to guarantee privacy for users through scripted, managed permissions to common archives that could scale to unprecedented sizes.

In 2006, we were the first to offer an InfiniBand (IB) host interface, which allowed SANs to be deployed with a lower-latency interconnect. By 2008, we had developed an architecture that enabled complete virtual file systems to reside in a common memory space with the storage system, completely eliminating the transmission latencies normally associated with a serial Small Computer System Interface (SCSI) transfer.

I had the opportunity to tour the Library of Congress film archive in Culpeper, Virginia. There, in an underground bunker, the Library has stored 140,000,000 feet of nitrocellulose film and 180,000,000 feet of safety film. As I walked deep into the bunker bored into the side of a hill, I could not help but think that the entire archive could be stored on just two racks of our very dense DDN storage. We are proud to be a part of the Library’s endeavor to digitize these precious creative assets for future generations.

Initially, we focused on dedicated hardware with field-programmable gate array (FPGA) state machines to guarantee consistency and high performance. The problem with that approach was that we were relying on file systems that were not designed with the same attributes.

A typical file system handles the writing of data in a largely serial process. A V-node entry is created for the file name and associated with an I-node entry in a file allocation table; the I-node contains extent lists indicating the placement of data on devices such as disk drives. Deleting a file releases the blocks recorded in its I-node so that they can be gathered for another write operation. As data is written, locks are placed on either block-based segments or entire files to preserve file integrity when two or more users try to write to the same file at the same time. Since blocks are gathered into a new I-node for each file creation, and since those blocks may have been released by earlier delete operations, data placement efficiency on disk drives is not a priority. Files can be modified by the inclusion of additional blocks of data, but this operation, again, does not guarantee efficient data placement. The end result is fragmented data placement.
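The allocation behavior described above can be sketched with a toy model. This is a minimal illustration, not any real file system's implementation: the `ToyFS` and `Inode` classes and their block-list bookkeeping are invented here purely to show how reusing blocks released by earlier deletes leads to fragmented extent lists.

```python
# Toy model of the serial write path: an allocation table of I-nodes
# holding extent lists. All names are illustrative, not a real FS API.

class Inode:
    def __init__(self, name):
        self.name = name      # the V-node / file-name association
        self.extents = []     # list of (start_block, length) extents


class ToyFS:
    def __init__(self, total_blocks):
        self.free = list(range(total_blocks))  # free blocks, initially in order
        self.table = {}                        # file name -> Inode

    def write(self, name, nblocks):
        # Gather whatever blocks are free; placement is not optimized,
        # so blocks released by earlier deletes are reused as-is.
        inode = Inode(name)
        blocks = [self.free.pop(0) for _ in range(nblocks)]
        for b in blocks:
            # Coalesce consecutive blocks into a single extent
            if inode.extents and sum(inode.extents[-1]) == b:
                start, length = inode.extents[-1]
                inode.extents[-1] = (start, length + 1)
            else:
                inode.extents.append((b, 1))
        self.table[name] = inode
        return inode

    def delete(self, name):
        # Release the I-node's blocks back to the free list
        inode = self.table.pop(name)
        for start, length in inode.extents:
            self.free.extend(range(start, start + length))


fs = ToyFS(12)
fs.write("a", 4)          # blocks 0-3: one contiguous extent
fs.write("b", 4)          # blocks 4-7: one contiguous extent
fs.delete("a")            # blocks 0-3 return to the end of the free list
frag = fs.write("c", 6)   # reuses blocks 8-11 then 0-1: two extents
print(frag.extents)       # [(8, 4), (0, 2)] -- fragmented placement
```

A real file system adds locking around these steps, as the article notes, but the core point survives the simplification: because freed blocks are simply regathered, a new file's extents can end up scattered across the device.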



IBC Issue September 2015