Fujitsu ETERNUS DX Software Features & Functionality
The Fujitsu ETERNUS DX supports a broad range of software features:
- Automated Storage Tiering
- Thin Provisioning
- Advanced Copy Functions
- Disaster Recovery
- Remote Copy
- Data Integrity
- Data Encryption
- Offline Storage Migration
- Storage Cluster
A prerequisite for any storage consolidation strategy is the ability to host multiple applications on a single storage platform without allowing the actions of one set of users to affect the I/O performance of others.
Potential problem areas for shared storage access include:
■ Workloads with I/O and cache conflicts, such as online transaction processing (OLTP) and data warehousing
■ Tiered storage access restrictions, such as development and production applications
■ Peak processing demands for critical applications versus maintenance activities, such as backup or database reorganization
The ETERNUS DX Quality of Service feature with application I/O prioritization resolves these issues and enables the consolidation of multiple tiers of applications in a single storage system.
It sets performance limits for each connected server according to its priority. By prioritizing data access and dynamically managing any I/O conflict, high performance can be guaranteed for high-priority applications, and at the same time capacity is used more efficiently, thus increasing storage utilization without sacrificing performance. The QoS policies allow the user to specify the expected I/O patterns of each application (random, sequential, read or write-based, and mixed).
An example is shown in the figure below. Two servers are connected to an ETERNUS DX storage system. Server B is granted a higher priority than server A. Accordingly, limits for I/O requests from both servers are set and server B has a higher limit than server A. In the event of increased workloads on the low-priority server A, the system limits the I/O performance at the predefined level and the performance of the high-priority server B is not affected. Thus the required I/O performance is guaranteed regardless of the workloads on other servers with lower priority.
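The two-server example above can be sketched in a few lines of Python. This is an illustrative model only, not Fujitsu code; the class name, the one-second window, and the specific IOPS limits are assumptions chosen for the example.

```python
# Illustrative sketch (not Fujitsu code): per-server I/O limits as in the QoS
# example above. Server B (high priority) gets a higher IOPS cap than server A.
class QosPort:
    """Tracks I/Os issued in the current one-second window against a cap."""
    def __init__(self, name, iops_limit):
        self.name = name
        self.iops_limit = iops_limit
        self.issued = 0

    def try_io(self):
        """Admit the I/O if the server is under its limit, else throttle it."""
        if self.issued < self.iops_limit:
            self.issued += 1
            return True
        return False  # delayed until the next window

    def next_window(self):
        self.issued = 0

server_a = QosPort("A", iops_limit=100)   # low priority
server_b = QosPort("B", iops_limit=500)   # high priority

# A burst of 300 requests from server A: only 100 are admitted this window,
# so server B's larger I/O budget is untouched.
admitted_a = sum(server_a.try_io() for _ in range(300))
admitted_b = sum(server_b.try_io() for _ in range(400))
print(admitted_a, admitted_b)  # 100 400
```

Because the low-priority server is capped rather than merely deprioritized, a runaway workload on it cannot starve the high-priority server.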
The Quality of Service functionality provides a higher degree of automation, ensuring simpler and more intuitive settings. The ETERNUS SF Storage Cruiser Quality of Service management option sets values based on performance requirements and dynamically adjusts them according to performance monitoring results.
This makes initial configuration easier for the user. Furthermore, automatic tuning ensures that the values used are more accurate, resulting in better service-level fulfilment.
■ Mapping application Service Level Agreements (SLA) to storage infrastructure
■ Increased storage utilization by combining different workload profiles
■ Allows service providers to guarantee a specific QoS and charge accordingly
The use of long-term data storage within organizations is increasing due to many laws and government regulations governing data retention, not to mention internal data audit requirements. One problem is the access frequency of older information, which typically decreases over time. But because of difficulties in managing such access, information which is well-suited for low-cost long-term storage is often left to reside in more expensive high-performance storage systems.
Automated Storage Tiering (AST) is a feature that monitors data access frequency in mixed environments that contain different storage classes and disk types. The storage administrator does not need to classify data or define policies. Once the tiers are configured, the ETERNUS DX storage system does all the work, enabling the storage administrator to focus on other storage-related responsibilities. The automation of tiered storage means that multiple storage tiers can be managed as a single entity. It helps ensure that the right data is in the right place at the right time.
The ETERNUS SF Storage Cruiser controls the placement of data, monitors its access frequency, and automatically relocates the data to the most appropriate storage devices. This storage hierarchy control offers significant investment optimization and reduces storage costs by matching storage system capabilities to each application's sensitivity to performance, availability, price and functionality. Infrequently used data and non-essential copies of primary application data, e.g. point-in-time snapshots, replication copies and data-mining extracts, are located on Nearline drives, which offer large capacity at lower cost. For high-priority applications, response times for important information are improved by locating frequently accessed data on high-performance SSDs. The overall arrangement of data on the different drive types is thus cost-optimized. The relocation of data is completely transparent to servers and applications and is carried out without any changes to server settings.
■ Reduces data management time and costs due to automated operations
■ Provides optimal performance while reducing costs
■ Operational data reallocation policies can be flexibly set to meet requirements
■ Reallocations are performed without changes in server settings
Data can be moved in 252 MB chunks, providing high efficiency: less data with low performance requirements is unnecessarily moved to faster, more expensive disk drives, while data demanding high performance is guaranteed to be moved to the fastest drives.
Calendar-based scheduling enables performance data from off-days, such as weekends and public holidays, to be excluded from the tuning process.
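The chunk-based relocation described above can be sketched as follows. This is a simplified illustration, not the ETERNUS tiering algorithm; the IOPS thresholds and tier names used for the decision are invented for the example, while the 252 MB chunk size comes from the text.

```python
# Illustrative sketch (not the ETERNUS algorithm): relocating 252 MB chunks
# between tiers based on measured access frequency.
CHUNK_MB = 252

def target_tier(iops):
    """Pick a tier from a chunk's observed IOPS (thresholds are assumptions)."""
    if iops >= 100:
        return "SSD"
    if iops >= 10:
        return "Online"
    return "Nearline"

# chunk id -> (current tier, IOPS observed during the evaluation period)
chunks = {1: ("Nearline", 250), 2: ("SSD", 2), 3: ("Online", 40)}

# Only chunks whose ideal tier differs from their current tier are moved,
# so cold data never occupies expensive drives and hot data never stays slow.
moves = [(cid, cur, target_tier(iops))
         for cid, (cur, iops) in chunks.items()
         if target_tier(iops) != cur]
for cid, src, dst in moves:
    print(f"relocate chunk {cid}: {src} -> {dst} ({CHUNK_MB} MB)")
```

Chunk 3 already sits on the right tier, so only two relocations are scheduled; this is why moving small chunks is efficient compared with migrating whole volumes.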
Storage system growth continues year on year. Due to concerns about having sufficient storage capacity, users tend to deploy more physical storage than they actually need – “just to be safe.” However, in practice the allocated capacity is often underutilized. Industry research organizations have even stated that in some cases only 20% to 30% of the provided capacity is actually used.
Thin provisioning technology has thus been developed to enable effective use of available storage capacity for better investment utilization. It reduces physical storage deployment by using virtual storage techniques that maximize available capacities.
Thin provisioning only assigns the total overall user capacity as virtual storage. The actual physical disk capacity is allocated as and when needed. All physical disks are managed as a single disk pool and allocated according to the amount of data written to the virtual volumes. This reduces the amount of unused physical disk capacity and supports much more effective storage operations. Furthermore, predefined thresholds avoid storage capacity shortages by issuing a warning that additional physical disks need to be added.
Example: A user requests 10 TB of resource allocation from the server administrator. While 10 TB of physical storage capacity may eventually be needed, current usage suggests that 2 TB is sufficient. The system administrator therefore prepares 2 TB of physical storage, but allocates a 10 TB virtual volume to the server. The server thus starts out using a physical disk pool only around one fifth the size of the virtual volume. This “start small” approach enables more effective use of storage capacity. As more physical capacity is required to support the virtual volume (as shown in the diagram), the existing physical pool is consumed. To avoid a capacity shortage, the pool is monitored against a predefined usage threshold. For example, with 80% of the entire disk pool defined as the threshold, an alarm prompts the administrator to add physical disks once usage reaches that level (8 TB in our example). The new drives can be added without stopping the system, ensuring continuous operation.
■ Lowers initial investment by using storage capacity very efficiently (start small)
■ Does not require any changes to storage capacity settings for changes on demand
■ Reduces operational costs by integrating storage with virtualization
■ Reduces overall power consumption via reductions in over-provisioning
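The "start small" example above can be sketched as a small model. This is illustrative only, not Fujitsu code; the class and method names are invented, and the pool is simplified to a single number of terabytes.

```python
# Illustrative sketch of thin provisioning: a large virtual volume backed by a
# smaller physical pool, with an 80% usage alarm threshold as in the example.
class ThinPool:
    def __init__(self, physical_tb, threshold=0.8):
        self.physical_tb = physical_tb
        self.threshold = threshold
        self.used_tb = 0.0

    def write(self, tb):
        """Consume physical capacity only when data is actually written.

        Returns True when the alarm threshold has been reached, signalling the
        administrator to add physical disks (without stopping the system).
        """
        if self.used_tb + tb > self.physical_tb:
            raise RuntimeError("physical pool exhausted - add disks first")
        self.used_tb += tb
        return self.used_tb >= self.threshold * self.physical_tb

pool = ThinPool(physical_tb=10)   # host sees a 10 TB virtual volume
first = pool.write(2)             # initial 2 TB of real data: no alarm
later = pool.write(6)             # 8 TB used -> 80% threshold reached
print(first, later)               # False True
```

The host's view (10 TB) never changes; only the backing pool grows, which is what removes the need to change storage capacity settings on demand.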
The advanced copy functions allow the disk storage system to carry out high-speed copy operations without any need to draw on server CPU resources. Advanced Copy functions are used to copy a business data volume to a separate copy volume at any point in time, quickly and within the disk storage system. Once the copy is complete, the copy volume can be separated from the business volume in order to ensure that no further updates to the business volume are applied to the copy volume. This allows the copy volume data to be backed up to a tape device as a point-in-time copy of the business data while normal operations continue.
ETERNUS DX systems support two distinct data copy modes: Synchronous high-speed copy and Snapshot high-speed copy.
■ Synchronous high-speed copy maintains the equivalent status for a transaction volume and backup volume. The two copy types available are: EC (Equivalent Copy) and REC (Remote Equivalent Copy)
■ Snapshot high-speed copy creates a snapshot of data. The copy types available with this function are: OPC (One Point Copy), QuickOPC, SnapOPC and SnapOPC+
Equivalent Copy – EC
Equivalent Copy creates and maintains a copy volume (mirror) synchronized to the business data volume, until they are “detached” (mirror suspend or break) so as to enable the start of a backup operation.
As the detached copy volume contains the same data as the business volume up to the time synchronization stops, it can be used as a point-in-time copy for backup to a tape device, while business operations continue on the original business volume. Two methods of breaking the mirror are provided. If a complete break is made, the copy volume is fully detached, and any subsequent use of EC copies all the operational data again to a new copy volume before maintaining the mirror in synchronization. If the Suspend/Resume functions are used (mirror suspend), the same copy can be resumed; in this case only the differences between the business and the copy volume are copied until synchronization is once again reached and then maintained. The best method of re-establishing synchronization depends on how long the copy volume has been detached: if the suspension time is relatively short, Suspend/Resume is the quickest.
■ Ensures significant reductions in time by enabling backup data to be created and maintained while normal business operations are carried out in parallel.
■ Enables copy data to be detached and used for other processes, data discovery, batch enquiries, system testing, at any point in time without impacting operational processes.
One Point Copy – OPC
One Point Copy (OPC) enables the creation of a high-speed copy of an entire business data volume at any specific point in time.
Unlike EC, with its data synchronization (mirroring) capability, the copy volume created by OPC is always separated from the business volume (a point-in-time copy) and never reflects ongoing updates to the business data.
This means the copy is a snapshot of the business data at the time the OPC request is issued, and as such can be backed up to a tape device in parallel with the ongoing business operation. However, for a subsequent backup, OPC requires that all the data is copied once again. QuickOPC is provided to copy only the updated data.
■ High-speed backup and near real-time disk-to-disk-to-tape backup extends business operation hours and availability
Quick One Point Copy – QuickOPC
QuickOPC initially copies all the business data volume to a copy volume. Subsequently, it only copies updates that occur on the business volume (the differences).
This reduces copying time, particularly with large databases, enabling high-speed backup operations. Both the business volume and the copy volume have the same data size. This process is particularly suitable for backup operations on mission critical databases where robust data security is essential.
■ Physical copying after the initial full copy requires significantly less time.
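The differential behavior of QuickOPC can be sketched with a dirty-block bitmap. This is an illustrative model, not Fujitsu's implementation; the volumes are tiny lists and the bitmap is a plain Python list.

```python
# Illustrative sketch of QuickOPC-style differential copy: after the initial
# full copy, only blocks updated on the business volume (tracked in a dirty
# bitmap) are copied on the next snapshot request.
business = ["a", "b", "c", "d"]
copy_vol = [None] * len(business)
dirty = [True] * len(business)      # everything is "dirty" before the first copy

def quick_opc():
    """Copy only dirty blocks to the copy volume, then clear the bitmap."""
    copied = 0
    for i, is_dirty in enumerate(dirty):
        if is_dirty:
            copy_vol[i] = business[i]
            dirty[i] = False
            copied += 1
    return copied

first = quick_opc()       # initial full copy: all 4 blocks
business[2] = "C"         # the host updates one block...
dirty[2] = True           # ...which the bitmap records
second = quick_opc()      # differential copy: only 1 block
print(first, second)      # 4 1
```

The second copy moves one block instead of four, which is exactly why subsequent backups of large, lightly-updated databases become fast.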
Partial copy function – SnapOPC/SnapOPC+
SnapOPC only copies the “before” image of data that is being updated to the copy volume. By only copying data subject to change, the copy volume capacity can be significantly reduced in comparison to the original business data volume.
Furthermore, SnapOPC+ provides generation management of the updated data. The difference between the two is that SnapOPC+ stores updated data only as history information, while SnapOPC stores the data redundantly. Logging as history information can provide disk-based generation backup using a smaller copy volume capacity.
■ Enables efficient copy retention because the overall copy volume capacity is much smaller.
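The "before image" mechanism can be sketched as copy-on-write with per-generation maps. This is a simplified illustration, not Fujitsu code; the function names and the dictionary-based volume are assumptions for the example.

```python
# Illustrative sketch of SnapOPC-style copy-on-write: only the "before" image
# of an updated block is saved to the (small) copy area. SnapOPC+ keeps one
# such map per snapshot generation.
volume = {0: "a", 1: "b", 2: "c"}
generations = []

def take_snapshot():
    generations.append({})            # a new, initially empty generation

def write(block, data):
    """On first update after a snapshot, save the before-image, then write."""
    gen = generations[-1]
    if block not in gen:
        gen[block] = volume[block]    # copy only the data subject to change
    volume[block] = data

def read_snapshot(gen_index, block):
    """A snapshot view: before-images first, falling back to the live volume."""
    for gen in generations[gen_index:]:
        if block in gen:
            return gen[block]
    return volume[block]

take_snapshot()
write(1, "B")
print(read_snapshot(0, 1), volume[1])  # b B
```

Only one block was copied even though the snapshot logically covers the whole volume, which is why the copy volume can be far smaller than the business volume.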
A loss of data due to human error or a natural disaster, such as earthquakes or fire, poses serious risks for IT administrators. Data has to reside at different geographical locations in order to ensure smooth recovery should disaster strike. ETERNUS DX storage systems support a number of features that guarantee reliable operation in disaster scenarios.
Remote Copy using Fibre Channel Interface
REC (Remote Equivalent Copy) provides a server-less remote mirroring function, which ensures fast recovery if the primary disk storage system site is not operational.
Remote Advanced Copy for Storage Area Networks (SAN)
By using Fibre Channel interfaces, Remote Advanced Copy can provide low-cost remote site support between a primary storage device and a secondary device.
Extended Remote Advanced Copy for Wide Area Networks (WAN)
Extended Remote Advanced Copy uses a combination of a Fibre Channel switch and WAN converter to cover very long distances over WAN. Replicated data can be located at a remote site hundreds of miles away from the primary site. This provides high security for the protection of critical data from any kind of disaster.
Furthermore, ETERNUS DX S3 supports replication to existing models as well as N:1 integrated backup. These capabilities enable flexible system configuration matched to customer requirements. Remote copy using iSCSI interface is supported as well.
A consistency mode is provided to support remote copying over low-bandwidth networks. It uses part of the cache memory as a buffer (the REC buffer). Data is then copied to the destination device and compiled on a block basis after accumulating the I/O from multiple REC sessions in the REC buffer for a specific period. Use of this mode maintains transfer integrity even when data arrives out of sequence at the destination device due to transfer delays over the WAN.
Furthermore, disk-buffered REC can be used if the cache memory capacity becomes insufficient due to instabilities in the link or increased traffic. This supports temporary increases in updated data using the larger buffering capacity of hard disks.
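The buffering behavior described above can be sketched as follows. This is an illustrative model only; the buffer entries, sequence numbers, and session names are invented for the example.

```python
# Illustrative sketch of consistency-mode remote copy: writes from several REC
# sessions accumulate in a buffer and are applied at the remote side in whole,
# sequence-ordered batches, so the remote copy never reflects a partial state.
rec_buffer = []          # (sequence number, session, block, data)
remote = {}
seq = 0

def buffered_write(session, block, data):
    """Accumulate an I/O in the REC buffer instead of sending it immediately."""
    global seq
    seq += 1
    rec_buffer.append((seq, session, block, data))

def flush():
    """Transfer the buffer; apply in sequence order as one atomic batch."""
    for _, _, block, data in sorted(rec_buffer):
        remote[block] = data
    rec_buffer.clear()

buffered_write("db", 1, "x")     # two sessions write within the same period
buffered_write("log", 2, "y")
flush()                          # remote holds both updates, or neither
print(remote)                    # {1: 'x', 2: 'y'}
```

Sorting by sequence number before applying is what restores write order even if blocks traveled out of sequence across the WAN.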
As some data is mission-critical, it must always be accessible. In order to ensure data availability, even in the event of a system or site failure, transparent failover will be introduced in the second version of the S3 release based on REC (Remote Equivalent Copy) functionality.
The basic concept is constructed around deploying a secondary storage system and a monitoring server. As long as the primary storage system is running, data is transferred from it to the secondary system via a synchronous REC function. The monitoring server continuously checks the status of the primary storage. If a failure is detected, it runs the failover logic and the primary storage information (e.g. LUN ID) is taken over by the secondary storage, so that the I/O server recognizes the volume transparently. Operations thus continue smoothly, ensuring business continuity.
There are two scenarios that can be implemented:
Primary storage system failure
This scenario is targeted to overcome the situation should the primary storage system fail. The secondary system and the monitoring server are deployed at the same site as the primary storage.
Primary site failure
As the first scenario does not cover a possible failure of the primary site, e.g. due to a natural disaster or man-made error, this scenario is supported in order to provide higher reliability and data availability. The secondary storage system is deployed at a different site, which can be up to 100 km away from the primary site. The monitoring server is located at a different site as well.
Data errors can occur for different reasons. They result in data corruption, which in turn can lead to a loss of important company information. The ETERNUS DX storage systems support the following techniques which ensure data integrity:
Data Block Guard
The Data Block Guard function adds check codes for data stored during write operations. While verifying the codes for the read/write operations, it guarantees data integrity at multiple checkpoints along the data transmission route.
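The principle of block-level check codes can be sketched with a CRC. The actual check-code format used by Data Block Guard is Fujitsu-internal; CRC32 here is a stand-in, and the function names are invented for the example.

```python
# Illustrative sketch of block check codes (CRC32 as a stand-in for the real,
# proprietary Data Block Guard code): append on write, verify on read.
import zlib

def guard_write(data: bytes) -> bytes:
    """Append a 4-byte check code to the data block before storing it."""
    return data + zlib.crc32(data).to_bytes(4, "big")

def guard_read(stored: bytes) -> bytes:
    """Verify the check code; refuse to return a corrupt block."""
    data, code = stored[:-4], stored[-4:]
    if zlib.crc32(data).to_bytes(4, "big") != code:
        raise ValueError("data block corrupt - check code mismatch")
    return data

block = guard_write(b"payload")
assert guard_read(block) == b"payload"      # intact block passes verification

corrupt = b"Xayload" + block[-4:]           # one flipped byte in the data
# guard_read(corrupt) raises ValueError instead of silently returning bad data
```

Verifying the code at multiple checkpoints along the transmission route means corruption is caught as close as possible to where it occurred.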
Oracle Database Data Guard
ETERNUS DX disk storage systems check data integrity using Data Block Guard technology. While this is very important, it still does not cover those situations where data corruption occurs in the interfaces between systems. This is because Data Block Guard only verifies data after it has reached the storage device.
Fujitsu also uses another data protection mechanism called Database Data Guard by Oracle. This combination of data security measures enables ETERNUS DX disk storage systems to provide very robust data integrity.
When data is written to the disk storage system, the database adds check codes. The disk storage system knows the logic of these check codes and where the codes are placed, thus enabling it to verify the data via the check codes. When an ETERNUS DX disk storage system identifies any data corruption, it stops further operations and notifies the administrator, thus preventing the use of data which is known to be corrupt.
Disk Drive Patrol
Data on the ETERNUS DX disk storage systems is protected via a disk drive patrol function. The controller regularly checks the disk drives in order to detect errors and write failures. This process also ensures data consistency within the volume group.
Data on each disk drive is read, and if an error is detected, data is reconstructed via the redundant information contained within the volume group. The corrected data is then written to a new valid area on the disk drive.
■ Higher data reliability as data errors are quickly found and corrected (by reconstruction) and disk write failures are avoided.
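The patrol-and-reconstruct cycle can be sketched for a RAID 5-style parity group. This is a simplified illustration, not Fujitsu firmware; the drives are modeled as single integers and parity as their XOR.

```python
# Illustrative sketch of a disk patrol pass over a parity-protected group:
# each member is read in turn; an unreadable one is reconstructed by XORing
# the remaining members with the parity, then rewritten to a valid area.
drives = [0b0001, 0b0010, 0b0100]          # data drives (one "block" each)
parity = drives[0] ^ drives[1] ^ drives[2]  # redundant information

def patrol(read_error_at=None):
    """One patrol cycle; returns (index, rebuilt value) if a repair was made."""
    for i in range(len(drives)):
        if i == read_error_at:             # simulated unreadable block
            rebuilt = parity
            for j, d in enumerate(drives):
                if j != i:
                    rebuilt ^= d
            drives[i] = rebuilt            # corrected data rewritten
            return i, rebuilt
    return None, None

idx, value = patrol(read_error_at=1)
print(idx, value)   # 1 2
```

Because patrol runs proactively in the background, latent read errors are repaired before a second failure could make reconstruction impossible.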
ETERNUS DX S3 scalable entry-level models guarantee data security even in the event of cache failure because cache is redundantly configured and constantly mirrored.
If the power supply fails, the controller cache is automatically evacuated and data is placed in an internal SSD. A system capacitor unit (SCU) provides sufficient power to always ensure that all data is successfully rescued. The internal SSD protects the data indefinitely.
The use of capacitors has some advantages over batteries; they shrink system size and weight because capacitors are smaller and lighter than batteries. Toxic waste is also reduced by using a permanent SCU instead of periodically replaceable batteries.
The use of a super capacitor as a power supply is suitable for the entry-level models. Due to their larger cache capacities, the mid-range and high-end models require batteries to protect the cached data.
■ Cached data remains secure during any power outage regardless of the duration.
Due to various data protection laws, enterprise information and the security involved has become much more important from a corporate social responsibility standpoint. Laws and internal guidelines require that access to relevant stored data is restricted only to authorized users and that sensitive information is protected against unauthorized or accidental access. ETERNUS DX disk storage systems provide data encryption functions to address such requirements.
Data can be automatically encrypted inside disk storage systems using high-security 128-bit AES technology and Fujitsu Original Encryption. This not only ensures that data is protected during use – it also ensures security during data transfer to off-site archive facilities.
Fujitsu Original Encryption is a unique encryption scheme that encrypts drive data in ETERNUS DX. Encryption is on a LUN basis. It comes at no extra cost and provides some key benefits in comparison with 128-bit AES encryption, such as:
■ Less performance degradation
■ Closed technology ensuring higher security
Robust security using SSL / SSH
The ETERNUS DX S3 series supports SSL (Secure Socket Layer)/SSH (Secure Shell) for encryption and secure transfer of data over a network. Normal data transfer without encryption bears the risk of unauthorized access by malicious web browser or CLI clients that appear authorized but attempt to steal or manipulate data.
SSL enables a secure transfer of important data using SSL server certification (public and private keys) on both the browser and web servers. SSH encrypts data using common key encryption mechanisms (DES, AES) when it is forwarded from one computer to another via a TCP/IP network. SSH achieves high data security by also hiding the common key using public key encryption mechanisms. Encrypted communication between ETERNUS DX systems and user terminals equipped with these technologies prevents the manipulation and theft of important information.
Self-encrypting drives (SED)
In order to ensure full data security, the ETERNUS DX family supports self-encrypting drives (SED). Self-encryption means that all data transferred to the storage medium is automatically encrypted internally before it is written, and automatically decrypted back into plain text when it is read. Data passing the interface between the host controller and the disk drives is always in plain text, so the internal encryption process is completely transparent to the host and all read/write operations proceed as usual. Plain text is encrypted into cipher text when written to the disk, hiding its meaning, and deciphered back to the original text when read. The encryption and decryption engines use the same secret internal data encryption key for this process.
The SED uses two methods for the encryption/decryption process:
■ The internal data encryption key
Each SED generates an internal data encryption key in the factory, which is embedded in the drive and cannot be read out or deleted. The encryption key can be modified to destroy or delete the data.
■ The algorithm of the encryption/decryption engine
The algorithm is a standard known as the Advanced Encryption Standard (AES), which is recommended by the US government. There are two versions of this standard: AES-128 and AES-256. The numbers 128 and 256 refer to the bit size of the encryption key used by the algorithm.
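The host-transparent behavior of an SED can be sketched as follows. This is illustrative only: a trivial XOR "cipher" stands in for AES (real AES needs a crypto library), and the class and method names are invented. The point is the data flow, not the cryptography.

```python
# Illustrative sketch of self-encrypting-drive behavior (XOR stands in for
# AES): the host reads and writes plain text, while the medium only ever
# stores cipher text produced with a key embedded in the drive.
import os

class SelfEncryptingDrive:
    def __init__(self):
        # Stand-in for the factory-embedded data encryption key, which can
        # never be read out of a real SED.
        self._key = os.urandom(16)
        self._medium = {}

    def _xor(self, data: bytes) -> bytes:
        return bytes(b ^ self._key[i % 16] for i, b in enumerate(data))

    def write(self, lba: int, data: bytes):
        self._medium[lba] = self._xor(data)      # stored encrypted

    def read(self, lba: int) -> bytes:
        return self._xor(self._medium[lba])      # decrypted transparently

    def crypto_erase(self):
        """Modifying the key instantly renders all stored data unreadable."""
        self._key = os.urandom(16)

sed = SelfEncryptingDrive()
sed.write(0, b"secret")
assert sed.read(0) == b"secret"   # the host always sees plain text
```

The `crypto_erase` method mirrors the note above that modifying the embedded key is how data on an SED is destroyed: no blocks need to be overwritten.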
When, for example, a storage system is replaced, storage migration allows logical volume data to be moved from one ETERNUS DX storage system to a new system without involving the host. In this process, the new ETERNUS DX storage system (migration destination) connects directly to the existing ETERNUS DX storage system (migration source) in order to copy the data in the logical volume on a block level basis. Access from the host is suspended during data copying. No additional costly software or licenses are needed for the storage migration.
Storage migration can be performed just by changing the operating mode of the migration destination channel adapter (CA) port from normal CA mode to initiator mode. The destination can thus obtain data from the source. The path between the migration destination and source can be direct or via switch. Path redundancy is also supported in order to ensure higher reliability. The progress of data migration can be monitored from the GUI. Functions, such as pause, suspension and resume, are also available.
Compare functions exist in order to verify that the data migration has been completed without any errors:
■ Quick compare:
compares only several data blocks from the top of a volume
■ Full compare:
compares all data blocks in a volume
Having completed the data migration process, the operating mode of the destination CA can be changed back to CA mode and the host is connected to the new storage system.
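The two compare functions can be sketched directly. This is an illustrative model; the block count checked by quick compare and the list-based volumes are assumptions for the example.

```python
# Illustrative sketch of post-migration verification: quick compare checks
# only several blocks from the top of the volume, full compare checks all.
def quick_compare(src, dst, blocks=4):
    """Fast spot check of the first few blocks only."""
    return src[:blocks] == dst[:blocks]

def full_compare(src, dst):
    """Exhaustive check of every block in the volume."""
    return src == dst

source = list(range(1000))
dest = list(source)
dest[500] = -1                       # an error deep inside the migrated copy

print(quick_compare(source, dest))   # True  - the spot check misses it
print(full_compare(source, dest))    # False - the exhaustive check finds it
```

The trade-off is the usual one: quick compare finishes almost immediately but only samples the volume, while full compare guarantees the migration copied every block correctly.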
As some data is mission-critical, it must always be accessible. In order to ensure data availability, even in the event of a system or site failure, Storage Cluster supports application and server transparent failover based on synchronous REC (Remote Equivalent Copy) function.
The basic concept is constructed around deploying a secondary storage system and a Storage Cluster Controller. As long as the primary storage system is running, data is transferred from it to the secondary system via a synchronous replication function. The Storage Cluster Controller continuously checks the status of the primary storage. If a failure is detected, it runs the failover logic and the primary storage information (e.g. LUN ID) is taken over by the secondary storage, so that the I/O server recognizes the volume transparently. Operations thus continue smoothly, ensuring business continuity.
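The controller's failover logic can be sketched as a polling loop with identity takeover. This is an illustrative model, not Fujitsu code; the class, the `monitor_tick` function, and the LUN identifiers are invented for the example.

```python
# Illustrative sketch of Storage Cluster failover: the controller polls the
# primary; on failure, the secondary takes over the primary's identity
# (e.g. its LUN ID) so the host transparently sees the same volume.
class StorageSystem:
    def __init__(self, lun_id, healthy=True):
        self.lun_id = lun_id
        self.healthy = healthy

def monitor_tick(primary, secondary):
    """One polling cycle: return whichever system should serve host I/O."""
    if not primary.healthy:
        secondary.lun_id = primary.lun_id   # identity takeover
        return secondary
    return primary

primary = StorageSystem(lun_id="LUN-42")
secondary = StorageSystem(lun_id="LUN-99")

active = monitor_tick(primary, secondary)   # healthy: primary serves I/O
primary.healthy = False                     # simulated primary failure
active = monitor_tick(primary, secondary)
print(active.lun_id)   # LUN-42 - the host still addresses the same volume
```

Because synchronous replication keeps the secondary identical up to the moment of failure, taking over the identity is enough for the host to continue without reconfiguration.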
Primary storage system failure
When the primary storage system fails due to a natural disaster or man-made error, the second storage system serves as a critical backup.
- Ensure data availability in the event of a system or site failure
- The secondary storage system ensures business continuity in a disaster
|Supported Disk Storage Systems|ETERNUS DX100 S3, DX200 S3, ETERNUS DX500 S3, DX600 S3|
|Required Software|ETERNUS SF Storage Cruiser Standard License, ETERNUS SF Storage Cruiser Storage Cluster Option, ETERNUS SF AdvancedCopy Manager Remote Copy License|