Channel: Symantec Connect - Storage and Clustering

SFHA Solutions 6.0.1: Deploying Oracle RAC with SF Oracle RAC in Solaris Zones environment

I do not need a solution (just sharing information)

You can install and manage your Oracle RAC database inside zone environments using Storage Foundation for Oracle Real Application Clusters (SF Oracle RAC). SF Oracle RAC runs in the global zone forming a cluster of multiple nodes. Oracle RAC runs in the non-global zone forming a cluster of zones across different nodes in the SF Oracle RAC cluster. You can thus have multiple Oracle RAC clusters within an SF Oracle RAC cluster.

Shared storage is configured on the global zone. You can then export storage from the global zone to the non-global zone using loopback data mounts and direct data mounts. This allows for shared storage access to the Oracle RAC database.

You must configure non-global zones as exclusive-IP zones. In an exclusive-IP zone, the network interfaces are assigned exclusively to the non-global zone; they are not shared with the global zone. All private interfaces inside a non-global zone must be configured under Low Latency Transport (LLT) as private interfaces.
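For illustration only, here is a minimal zonecfg sketch of an exclusive-IP non-global zone with a loopback (lofs) data mount exported from the global zone. The zone name (ora-zone1), interface (net1), and mount point (/oradata1) are hypothetical; follow the SF Oracle RAC configuration guide for your release for the supported procedure.

  # zonecfg -z ora-zone1
  zonecfg:ora-zone1> create
  zonecfg:ora-zone1> set zonepath=/zones/ora-zone1
  zonecfg:ora-zone1> set ip-type=exclusive
  zonecfg:ora-zone1> add net
  zonecfg:ora-zone1:net> set physical=net1
  zonecfg:ora-zone1:net> end
  zonecfg:ora-zone1> add fs
  zonecfg:ora-zone1:fs> set dir=/oradata1
  zonecfg:ora-zone1:fs> set special=/oradata1
  zonecfg:ora-zone1:fs> set type=lofs
  zonecfg:ora-zone1:fs> end
  zonecfg:ora-zone1> verify
  zonecfg:ora-zone1> commit
  zonecfg:ora-zone1> exit

Setting ip-type=exclusive dedicates net1 to the zone so it can be configured under LLT, and the fs resource makes the global-zone mount point /oradata1 visible inside the zone as a loopback data mount; a direct data mount would instead export the storage devices to the zone.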

For more information, see the following topics:

About SF Oracle RAC support for Oracle RAC in a zone environment

Setting up an SF Oracle RAC cluster with Oracle RAC on non-global zones

You may also refer to the following document on the SORT web site:

Veritas Storage Foundation and High Availability Solutions Virtualization Guide


SFHA Solutions 6.0.1: Oracle VM Server for SPARC deployment scenarios in SF Oracle RAC environments

I do not need a solution (just sharing information)

Oracle VM Server for SPARC was formerly known as Solaris Logical Domains (LDoms). It is a virtualization technology that lets you create independent virtual machine environments on the same physical system. Oracle VM Server for SPARC provides a virtualized computing environment abstracted from the physical devices, which lets you consolidate and centrally manage your workloads on a system. Logical domains can be assigned roles such as control domain, service domain, I/O domain, and guest domain. Each domain is a full virtual machine whose operating system you can start, stop, and reboot independently.
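As a quick illustration of these roles, the ldm command in the control domain lists the configured domains and their states; the domain names and output below are hypothetical.

  # ldm list
  NAME         STATE    FLAGS   CONS   VCPU  MEMORY  UTIL  UPTIME
  primary      active   -n-cv-  UART   8     8G      0.4%  12d
  ldg-io1      active   -n----  5000   16    32G     1.1%  12d
  ldg-guest1   active   -n----  5001   16    32G     3.2%  5d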

About Oracle VM for SPARC

Terminology for Oracle VM Server for SPARC

Some of the sample deployment scenarios that are supported by SF Oracle RAC are as follows:

Deployment scenarios and their benefits:
Oracle RAC database on I/O domains of two hosts

The scenario offers the following benefits:

  • The computing resources on the hosts are available to other LDoms.
  • Direct access to storage ensures better database performance.
Oracle RAC database on Guest domains of two hosts

The scenario offers the following benefits:

  • The configuration provides a completely virtualized domain.
  • The hardware resources can be effectively utilized with other LDoms.
Oracle RAC database on Guest domains of a single host

The scenario offers the following benefits:

  • The reduction in the number of physical servers used makes it a very cost-effective setup.
  • The setup is easy to create and maintain. It is also flexible and portable.
  • Many guest LDoms from multiple systems can be joined together to form a bigger cluster.
  • If the primary domain reboots, only the attached guest LDom is affected. The guest LDom attached to the secondary service domain continues to be available. Please note that shutting down the primary domain halts all domains.
Oracle RAC database on Guest domain and I/O domain of a single host

The scenario offers the following benefits:

  • Guest LDoms can be added to the host at any time.
  • The setup offers better disk performance.
  • If the primary domain reboots, only the attached guest LDom is affected. The guest LDom attached to the secondary service domain continues to be available. Please note that shutting down the primary domain halts all domains.

 

For information on deploying SF Oracle RAC in Oracle VM Server for SPARC environments,
see the following document on the SORT website:

Veritas Storage Foundation for Oracle RAC Installation and Configuration Guide

Data Insight 3.0.1 scanning cifs.homedir share

I need a solution

I am trying to set up Data Insight 3.0.1 with an artificial share so I can see activity on users' home shares from a NetApp filer. I have created the .pb script as mentioned in the admin guide, but I am struggling to find the user ID to use with the configdb.exe command:

Find the user ID of a Data Insight user assigned Product Administrator Server Administrator role from the latest configuration database table in the $DATADIT/conf folder.

Is this the user ID of the Server Administrator that I see in the Settings -> Data Insight Users screen? If so how do I get the user ID from Windows?

... also ...

Somewhere in the script I am guessing that the line:

value: "/CIFS.HOMEDIR" should contain the actual path to the home directories?

 

Thanks folks.

David

Symantec Partner Accreditation Interview for Storage Foundation 6.0 for Unix

I need a solution

I have successfully passed Symantec Technical Specialist assessment of Veritas Storage Foundation 6.0 for Unix with a very good score. I had already passed both sales certifications (SSE & SSE+) of Storage Foundation 6.0 

While preparing for STS assessment, I thoroughly went through all the training videos and the SF admin guide (almost 60%).

Now Symantec is conducting my interview for technical accreditation of my company as a Specialist partner in Storage Management. Any suggestions regarding how to prepare for the interview, which particular areas the interview will focus on, or which topics are important from an interview perspective?

 

Any suggestions or guidance will be highly appreciated.

Thank You

Atif Mehmood Malik

How to remove reservation keys on VCS disks

I need a solution

Dear All,

 

I would like to remove the reservation keys on VCS disks because I want to reclaim those disks. I have tried vxfenadm with different options but no luck.
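Not an answer from the thread, but a hedged sketch of the usual sequence for inspecting and clearing SCSI-3 registration keys, assuming the cluster and the fencing driver have already been stopped on every node so that no node re-registers its keys (the data disk path is hypothetical). Verify the exact options against the VCS administrator's guide for your release before running anything.

  # hastop -all -force
  (stop the vxfen driver on each node using the platform init or SMF script)

  # vxfenadm -s all -f /etc/vxfentab
  # vxfenadm -s /dev/rdsk/c1t1d0s2

  # vxfenclearpre

The two vxfenadm commands only display the keys currently registered on the coordinator disks and on a specific data disk; vxfenclearpre is the utility intended to remove the registrations and reservations.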

Should I Worry about Data Corruption? - Unfortunately Yes......


Recently one of my customers had a series of outages in the communications between their buildings. The upshot is that, because of the way they had deployed their clusters, they were not protected against their cluster nodes losing communications with each other. I have seen mixed experiences from my customers in terms of split-brain issues (split brain is when all nodes in a cluster begin writing in an uncoordinated fashion to shared storage because they each believe they are the last node in the cluster).

I have seen customers running campus clusters with no split-brain protection who have yet to see any problems. I have also seen other customers who go belt and braces using I/O fencing, which is built into Veritas Cluster Server. Some have issues due to the way they handle the I/O fencing devices, as these are often not understood. So what's it all about then? It's pretty simple.

For a cluster to work there needs to be a communication system between the nodes in the cluster to establish which systems are up and which are down. Veritas Cluster Server uses the concept of heartbeats. These are isolated channels which, once a second, pass a message between nodes saying "I am alive". Normally we specify two dedicated heartbeats plus a low-priority heartbeat, which uses a public interface only when the full-time heartbeats fail. In this way we can prevent the nodes from starting any arbitration behaviour by temporarily using a public interface.

Let's say you have a two-node cluster and I walk into your data centre and yank out the heartbeat cables between your nodes. After a specified interval of checking, each node concludes that it is the last node in the cluster and will then attempt to force-import the storage. Now consider that this could be a genuine failure of one of the nodes in the cluster. In that scenario we want the remaining node to import the storage that was being used before the crash and start our applications (otherwise what's the point of high availability?). In our scenario, where we have actually not lost any nodes but only the communications between systems, both systems will import the storage and begin writing to the filesystems. Time to get your backup tapes out or resync from this morning's hardware replica. This is what we call a split brain.

Symantec does have some good mechanisms to protect you from this. The first is a type of membership arbitration called I/O fencing, which leverages SCSI-3 reservations from the storage subsystem itself. The storage subsystem can forcibly stop a specific system doing I/O to a device. It involves having three coordination points (vote disks): when the cluster starts, each node joining the cluster registers keys on these vote disks. Now, in the scenario above where all communication is lost between cluster nodes, an arbitration race begins. Each node in the cluster races to gain control of the vote disks; whichever node loses the race by getting the minority of the vote disks is fenced out of the cluster and sent a panic request.

So we are forcibly crashing the race loser to stop it writing to the shared disks. I/O fencing is bulletproof and will also block I/O from any third-party hosts mistakenly gaining access to the shared disks. Also, if a system has hung, there is the possibility that when it comes out of its hung state it could flush I/O down to the shared devices, causing corruption. SCSI-3 reservations and I/O fencing stop this. This is the recommended way to configure clusters, but it comes at the price of needing three vote disks for each cluster. Additionally, in virtualised environments SCSI-3 reservations are often not supported, so disk-based fencing can become irrelevant there.

Symantec also has another clever arbitration method known as the Coordination Point Server (CPS). It offers a solution for customers wishing to vastly reduce the possibility of split brain without needing vote disks and SCSI-3. Coordination point servers are used to independently judge which nodes are up in a cluster, so as with the vote disks, three are needed to judge fairly. These are effectively three single-node VCS clusters which sit idle until there is a dispute. The difference here is that these three servers can arbitrate for many hundreds of clusters, as they simply contact the nodes over IP to see if they are alive. In my example above, when both systems believe they are the last remaining node, the following takes place: the three coordination point servers attempt to contact each system in the cluster, and whichever system gets the most votes is the winner and stays up. The losing node is sent a kill command and crashes. This is split-brain protection by taking out the other contenders that might want to write to the storage.
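For illustration, server-based fencing is selected on each cluster node through the /etc/vxfenmode file; below is a hedged sketch with hypothetical CP server host names and the default port, to be checked against the sample files shipped with the product.

  # /etc/vxfenmode - server-based (CP server) fencing
  vxfen_mode=customized
  vxfen_mechanism=cps
  # three coordination point servers, contacted over IP
  cps1=[cps1.example.com]:14250
  cps2=[cps2.example.com]:14250
  cps3=[cps3.example.com]:14250
  # set to 1 if the CP servers run in secure mode
  security=1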

This raises an interesting scenario. Imagine a two-node cluster in which I have a production server and a test server acting as a standby node. If there is a loss of communications between the two and the arbitration process starts using the coordination point servers, what happens if your test server wins the race? You might have a red-faced service manager shouting at you. The good news is that from VCS 6.0 onwards there is the concept of preferred fencing. This simply means you can weight a race to favour either a system or a service group. This way, in the loss-of-communication scenario, you can ensure your test system is taken out of the equation instead of your production server, as sketched below.
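Here is a hedged sketch of system-based preferred fencing, assuming hypothetical node names prod-node and test-node; a group-based policy instead weights the race using the service group Priority attribute.

  # haconf -makerw
  # haclus -modify PreferredFencingPolicy System
  # hasys -modify prod-node FencingWeight 100
  # hasys -modify test-node FencingWeight 10
  # haconf -dump -makero

With these weights the sub-cluster containing the production node gets the advantage in the fencing race, so in a straight network split it is the test node that gets panicked.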

So which is better? It's horses for courses, I'm afraid. SCSI-3 offers bulletproof protection, of that there is no question, but it comes at the price of needing vote disks and SCSI-3 compliant storage. Coordination Point Server offers a best-efforts approach to arbitration, and the cost in terms of hardware and effort is almost negligible. But there will be corner cases, as mentioned, where you could face corruption if a hung system came back before it was killed and was able to flush its data buffers down to disk.

If the data centre was mine, I would risk the second approach with the CPS servers. It's much better than having no arbitration and is a doddle to set up. Of course, if I started seeing data corruption I could change my mind... and job.

Coordination Point Server is available from VCS 5.1SP1 onwards.

IP Configuration

I need a solution

Hi Expert,

Is it possible to configure one VIP with two gateways? The network design is as below.

1- Redundancy between the OLM servers for the VIP.
2- If GW-1 is down, traffic should shift to GW-2 with the same VIP.

As per the network engineer here, it is configurable...

SFHA Solutions 6.0.1: Troubleshooting unprobed resources in Veritas Cluster Server

I do not need a solution (just sharing information)

Veritas Cluster Server (VCS) monitors resources when they are online and offline to ensure that they are not started on systems where they are not supposed to run.

When you configure VCS, you should convey to the VCS engine the definitions of the cluster, service groups, resources, and dependencies among service groups and resources. VCS uses the following two configuration files in a default configuration:

  • main.cf—Defines the cluster, including service groups and resources
  • types.cf—Defines the resource types

For more information about configuring VCS using VCS configuration files, see:

VCS typically fails to probe resources or a service group in the following scenarios (commands to check the probe status appear after the list):

  1. When a new types.cf file is not copied into the /etc/VRTSvcs/conf/config/ directory during an upgrade of VCS. If VCS fails to probe the resources, the service group does not come online and is also auto-disabled in the cluster. This happens because of old types.cf files in the /etc/VRTSvcs/conf/config/ directory.
  2. When the definitions of the cluster, service groups, resources, dependencies, or attributes are missing or incorrect in the main.cf file. This causes configuration errors, due to which the STALE_ADMIN_WAIT message is displayed.
  3. When the installation of an agent for a specific node has failed.
  4. When a resource returns the resource state as "UNKNOWN", which means the agent is unable to monitor the configured resource.
  5. When the resource is disabled.
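As a hedged sketch of checking the probe status mentioned above (the group, resource, and node names sg_app, app_res, and node1 are hypothetical):

  # hastatus -sum
  # hagrp -display sg_app -attribute ProbesPending
  # hares -probe app_res -sys node1

hastatus -sum shows which groups are auto-disabled, ProbesPending shows how many resources in the group are still unprobed on each system, and hares -probe asks the agent to probe a single resource on a given node.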

For more information on probing the resources or service group, or troubleshooting service groups, see

Some of the probing issues can be resolved by copying the latest types.cf file from the /etc/VRTSvcs/conf/ directory to the /etc/VRTSvcs/conf/config/ directory as follows:

  1.  Stop the cluster on all nodes by using the following command:
       # hastop -all -force

       Applications will continue to run, but not fail over.

  2.  Back up the original types.cf file by using the following command:
       # mv /etc/VRTSvcs/conf/config/types.cf /etc/VRTSvcs/conf/config/types.cf.<date>

  3.  Copy the types.cf file by using the following command:
       # cp /etc/VRTSvcs/conf/types.cf /etc/VRTSvcs/conf/config/types.cf

  4.  Verify that the sizes of the two types.cf files are identical by using the following commands:
        # ls -l /etc/VRTSvcs/conf/types.cf
        # ls -l /etc/VRTSvcs/conf/config/types.cf

  5.  Start the cluster on the node by using the following command:
       # hastart 
The hastart command needs to be executed on all nodes in the cluster. Also verify that the types.cf file did not revert to the original version. If it did, repeat the procedure, and this time also shut down LLT and GAB after you execute the hastop command, as sketched below.
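A hedged sketch of shutting GAB and LLT down by hand on a node after hastop, in case the platform init scripts are not used; check the commands and their order against the installation guide for your platform.

       # gabconfig -U
       # lltconfig -U
       (copy the new types.cf file, then restart the stack)
       # lltconfig -c
       # gabconfig -c -n <number_of_cluster_nodes>
       # hastart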

You can also find information about probing resources and troubleshooting service groups in the PDF versions of the following guides:

VCS documentation for other platforms and releases can be found on the SORT website. 

 

 


We are getting error for / ufs FS on solaris 9 OS under /var/adm/messages file

I need a solution

Hi ,

We are getting error messages in the OS log file for the / file system. Kindly check the errors below.

Feb 17 02:50:54 XX021 ufs: [ID 702911 kern.warning] WARNING: Error writing ufs log state
Feb 17 02:50:54 XX021 ufs: [ID 127457 kern.warning] WARNING: ufs log for / changed state to Error
Feb 17 02:50:54 XX021 ufs: [ID 616219 kern.warning] WARNING: Please umount(1M) / and run fsck(1M)
Feb 17 02:50:55 XX021 swapgeneric: [ID 308332 kern.info] root on /pseudo/vxio@0:0 fstype ufs

We logged a case with our vendor, and they suggested the issue is because of the logging option selected for the / file system. They asked us to approach Symantec because we have VxVM installed on the server and the OS file systems are encapsulated under VxVM. We have Veritas version 4.0 installed on the server, but Symantec won't register a case because the end of support (EOS) for VxVM version 4.0 already passed in 2011. Below is the VxVM package installed on our system.

 PKGINST:  VRTSvxvm
      NAME:  VERITAS Volume Manager, Binaries
  CATEGORY:  system
      ARCH:  sparc
   VERSION:  4.0,REV=12.06.2003.01.35
   BASEDIR:  /
    VENDOR:  VERITAS Software
      DESC:  Virtual Disk Subsystem
    PSTAMP:  VERITAS-4.0R_p1.4:14-January-2004
  INSTDATE:  Jun 10 2005 14:28
   HOTLINE:  800-342-0652
     EMAIL:  support@veritas.com
    STATUS:  completely installed
     FILES:      823 installed pathnames
                  20 shared pathnames
                  18 linked files
                  97 directories
                 411 executables
              256745 blocks used (approx)
 

Kindly let us know what is causing these logs to be generated after every reboot, and whether we can ignore such messages or whether it is a critical situation.

 

SFHA Solutions 6.0.1: Understanding single-node VCS clusters and the single-node mode

I do not need a solution (just sharing information)

You can install Veritas Cluster Server (VCS) on a single system to configure a single-node VCS cluster.
 

You can use VCS single-node clusters where you require application restart as a fault management remedy. You can also use single-node clusters where you only want VCS to gracefully start or stop an application.

Note: Symantec supports single-node clusters but recommends that you use multi-node clusters to ensure that your critical applications are highly available.

The single-node mode is a different concept from the single-node cluster concept.
 

The term single-node mode or one-node mode refers to the option “-onenode” that you can either configure while installing VCS, or specify while bringing up the node.
 

You can invoke a VCS node in single-node mode, irrespective of whether the node is part of a single-node cluster or a multi-node cluster. In single-node mode, the VCS policy engine (also known as the high availability daemon, or HAD) does not communicate with the Group Membership Services/Atomic Broadcast (GAB) module. You cannot use single-node mode for application failover.
 

As a result, you cannot add a single-node cluster that is in single-node mode to another node to form a multi-node cluster.
 

You must first unconfigure the single-node mode, configure the Low Latency Transport (LLT)/GAB modules, and then add the single-node cluster to another node or multi-node cluster.
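For illustration, a hedged sketch of starting the engine in single-node mode and of the rough steps for moving away from it; the exact procedure is in the installation guide for your platform.

  # hastart -onenode
  (starts HAD without LLT/GAB communication)

  To convert the node for use in a multi-node cluster later:
  # hastop -local
  (configure LLT and GAB: /etc/llttab, /etc/llthosts, and /etc/gabtab on every node)
  # hastart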
 

For more information on the function of GAB and LLT in a VCS cluster, see the following Symantec Connect article:
SFHA Solutions 6.0.1: About GAB seeding and its role in VCS and other SFHA products

Common use cases of single-node clusters include:

  • Hosting the Coordination Point or CP Server of the VCS fencing (VxFEN) module on a single-node cluster
  • Hosting an application on a single-node cluster at the disaster recovery site (remote site), to economize on hardware in a Global Cluster Option (GCO) setup. In this case you cannot configure local application failover, but you can fail over the application back to the protected site.
  • Creating a single-node cluster as a first step to creating a multi-node cluster. Ensure that you do not configure VCS in single-node mode for such a cluster.

For more information on installing a single-node VCS cluster and adding it to other clusters, see:
 

For more information on configuring a multi-node VCS cluster in single-node mode, see:
Configuring VCS in single node mode

VCS documentation for other releases and platforms can be found on the SORT website

 

 

How to online Service group

I need a solution

Hi,

How can I bring a service group online? I have already added a resource, but when I execute "hagrp -online group -sys node", it shows the following error message:

--> There are no resources in the group to online.

 

When I run "hagrp -state", the group state shows as OFFLINE.

When I run "hares -state", the resource states show as ONLINE.

Kindly assist to resolve the issue.

Thanks

Oracle monitoring agent clustered

I need a solution

DBAs came in and asked to add their Oracle monitoring agent as a resource into the cluster configuration.

They want the Oracle native monitoring agent, with all the functionality Oracle gives them, working as a resource assigned to the Oracle instance service group, so it can be moved together with the instance to another node in case of failover.

I did not find any suitable resource type for this agent. Has anybody configured it here? Can the VCS setup be shared here?

If such a resource is not available, how can the Oracle instance and many of its parameters be monitored from VCS itself?

I run VCS 5.1 on Red Hat Enterprise Linux 5 with Oracle 11/12.

Leonid

How to upgrade from Symantec Veritas Enterprise Administrator 3.2 to current version?

I need a solution

We have multiple servers running Symantec Veritas Enterprise Administrator, versions 3.2 and 4.1, that have Nessus vulnerabilities: Symantec Veritas Enterprise Administrator Service (vxsvc) Multiple Integer Overflows (3CN56239).

 

Would someone please provide suggestions?

SFHA Solutions 6.0.1: About the Veritas Cluster Server (VCS) startup sequence

I do not need a solution (just sharing information)

Communication among VCS components

When you install VCS, user-space components and kernel-space components are installed on the system. The VCS engine, also known as the high availability daemon (HAD), exists in user space. The HA daemon contains the decision logic for the cluster and maintains a view of the cluster. The VCS engine on each system in the cluster maintains a synchronized view of the cluster. For example, when you take a resource offline or bring a system in the cluster online, VCS on each system updates its view of the cluster.

The kernel-space components consist of the Group Membership Services/Atomic Broadcast (GAB) and Low Latency Transport (LLT) modules. Each system that has the VCS engine installed communicates through GAB and LLT. GAB maintains cluster membership and cluster communications. LLT carries the traffic on the private network and communicates the heartbeat information of each system to GAB.

About the VCS startup sequence

The start and stop variables for the Asynchronous Monitoring Framework (AMF), LLT, GAB, I/O fencing (VxFEN), and VCS engine modules define the default behavior of these modules during a system restart or a system shutdown. For a clean VCS startup or shutdown, you must either enable or disable the startup and shutdown modes for all these modules.

VCS startup depends on the kernel-space modules and other user-space modules starting in a specific order. The VCS startup sequence is as follows:

  1. LLT
  2. GAB
  3. I/O fencing
  4. AMF
  5. VCS

For more information on setting the start and stop environment variables, VCS modules, and starting and stopping VCS, see:

In a single-node cluster, you can disable the start and stop environment variables for LLT, GAB, and VxFEN if you have not configured these kernel modules. If you disable LLT and GAB, set the ONENODE variable to yes in the /etc/default/vcs file.
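As an illustration of these variables, each module has a defaults file; a hedged sketch of disabling LLT, GAB, and VxFEN on a single-node cluster that does not use them (file locations and variable names should be confirmed against the installation guide for your platform):

  /etc/default/llt:     LLT_START=0     LLT_STOP=0
  /etc/default/gab:     GAB_START=0     GAB_STOP=0
  /etc/default/vxfen:   VXFEN_START=0   VXFEN_STOP=0
  /etc/default/vcs:     ONENODE=yes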

The following topics provide information on troubleshooting startup issues:

 VCS documentation for other releases and platforms can be found on the SORT website.

 

SFHA Solutions 6.0.1: Installing the Solaris 11 operating system using the Automated Installer

I do not need a solution (just sharing information)

You can use the Oracle Solaris Automated Installer (AI) to install the Solaris 11 operating system on multiple client systems in a network. The AI enables you to install both x86 and SPARC systems “hands free” without any manual interaction. You can also use the AI media to install the Oracle Solaris operating system (OS) on a single SPARC or x86 platform. All cases require access to a package repository on the network to complete the installation.

You can download the AI bootable image from the Oracle website.

You can install the Oracle Solaris OS on many different types of clients. The clients can differ in architecture, memory characteristics, MAC address, IP address, and CPU. The installations can differ depending on specifications including the network configuration and the packages installed.
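A hedged sketch of setting up an AI install service on the AI server, with a hypothetical service name, image path, and client MAC address; confirm the installadm syntax against the Oracle Solaris 11 documentation.

  # installadm create-service -n s11-sparc -s /export/iso/sol-11-ai-sparc.iso -d /export/auto_install/s11-sparc
  # installadm create-client -n s11-sparc -e 00:14:4f:aa:bb:cc
  # installadm list -c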

For information about the Automated Installer, see:

About Automated Installer

For information on using the Automated Installer, see:

Using Automated Installer

For detailed instructions on installing Solaris 11 using the Automated Installer, see:

Using AI to install the Solaris 11 operating system and SFHA products

Note: This feature has been introduced in the 6.0.1 release and is not applicable to the previous releases.

Veritas Storage Foundation and High Availability documentation for other releases can be found on the SORT website.


Clarification on what happens during a rolling SFHA upgrade

I need a solution

Prior to 5.1SP1, the way I upgraded SFHA was to:

  1. Force stop VCS leaving applications running, unload GAB and LLT and upgrade VCS on ALL nodes
  2. Upgrade SF on inactive node
  3. Switch SGs from the live node to the upgraded node, and then upgrade the node that the SGs were moved from

The problems with this are:

  1. If there was an issue with the VCS upgrade, then because all nodes were upgraded you might have to back out, whereas if you could upgrade VCS one node at a time, you could switch services to a non-upgraded node if there was an issue with the new VCS version.
  2. This procedure didn't work with CVM because you can't unload GAB and LLT.

The way rolling upgrades were explained to me by Symantec Product Management when 5.1SP1 came out was that VCS now had the ability to communicate across different versions, so, for instance, a VCS node on 5.1SP1 could co-exist in a cluster with a node at 5.1SP1RP1, meaning you could upgrade the whole stack including VCS one node at a time.

However, I have a customer who applied RP3 to 5.1SP1RP1 on Solaris one node at a time, and he got this error:

Local node leaving cluster before joining as engine versions differ. Local engine version: 0x5010a1e; Current cluster version: 0x5010a00

So I am now wondering whether only LLT and GAB support communicating across different versions and VCS does not, and therefore the rolling upgrade procedure is:

Upgrade LLT, GAB, VxFS and VxVM using "installer -upgrade_kernelpkgs inactive_node_name" one node at a time, while the node is inactive

Upgrade VCS on all nodes using "installer -upgrade_nonkernelpkgs node1 node2 ..." at the same time, where I am guessing VCS is force-stopped to leave applications running.

Can anyone clarify?

Thanks

Mike

Cannot Open a Technical Case

I need a solution

Hi Guys,

I got this error while trying to open a technical support case. I am filling in all the required data with a valid support contract.

CaseOwner_AutoFollowUnFollowCase: execution of AfterInsert caused by: System.DmlException: Insert failed. First exception on row 0; first error: INVALID_CROSS_REFERENCE_KEY, This user cannot follow any other users or records: [] Trigger.CaseOwner_AutoFollowUnFollowCase: line 103, column 1

 

BSOD from vxio.sys

I need a solution

We have a Windows 2008 Microsoft cluster with Storage Foundation 5.1 (upgraded to SP2) and the Cluster Option.

The cluster nodes restart regularly with a blue screen.

 

My debugger said:

Probably caused by : vxio.sys

I've attached my VxExplorer analysis

How to configure a service group

I need a solution

 

hello:
 
Does anyone have an idea how to configure the following?

I have two nodes (node 1: Production, node 2: Quality) running VCS 6.0.

Because of hardware capacity, I need to configure the production service group so that when it switches over or fails over to node 2, the Quality service group is taken offline on node 2 before the production service group is brought online on that node.

SFHA Solutions 6.0.1: HP IVM support for Veritas Storage Foundation and High Availability solutions

I do not need a solution (just sharing information)

Veritas Storage Foundation and High Availability (SFHA) Solutions are supported in an HP Integrity Virtual Machine (IVM) environment. An HP IVM is a hosted hypervisor virtualization technology within the HP Virtual Server Environment. This environment enables you to create multiple virtual servers with shared resources within a single HP Integrity server or nPartition.

For more information about HP IVMs, see:

About HP Integrity Virtual Machines

Before you configure HP IVMs, you may find it helpful to review the following terminology:

HP Integrity Virtual Machines terminology

Symantec supports Veritas Storage Foundation (SF) and Veritas Cluster Server (VCS) in an HP IVM environment. Symantec does not support Veritas Storage Foundation for Oracle RAC in an HP IVM environment.

For more information, see:

Supported Storage Foundation and HP IVM versions

Supported VCS and IVM versions

SF supports the following configurations using IVM:

  • SF on VMGuest only
  • SF on VMHost only
  • SF on both VMGuest and VMHost

For more information about supported Storage Foundation configurations using IVM, see:

Storage Foundation supported configurations using IVM

SFHA supports the following configurations using IVM:

  • Cluster among VMGuests (VM-VM) (VM: Virtual Machine)
  • Cluster among VMGuests and physical machines (VM-PM)
  • Cluster among VMHosts (PM-PM) (PM: Physical Machine)

For more information about supported SFHA configurations using IVM, see:

Storage Foundation High Availability supported configurations using IVM
 

Storage Foundation Cluster File System High Availability (SFCFSHA) is supported only on the VMGuest.

For more information about supported SFCFSHA configurations using IVM, see:

Storage Foundation Cluster File System High Availability supported configurations using IVM

For a table of all supported configurations, see:

Supported configurations using IVM

Veritas Storage Foundation and High Availability documentation for other releases and platforms can be found on the SORT website.

 

 
