bash-2.03# prtvtoc /dev/rdsk/c4t0d58s2
* /dev/rdsk/c4t0d58s2 partition map
*
* Dimensions:
* 512 bytes/sector
* 64 sectors/track
* 60 tracks/cylinder
* 3840 sectors/cylinder
* 36828 cylinders
* 36826 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
* Unallocated space:
* First Sector Last
* Sector Count Sector
* 0 3840 3839
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
2 5 01 0 141411840 141411839
3 15 01 3840 7680 11519
4 14 01 11520 141400320 141411839
bash-2.03#
bash-2.03# vxprint -htg ictdg
DG NAME NCONFIG NLOG MINORS GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
V NAME RVG KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
DC NAME PARENTVOL LOGVOL
SP NAME SNAPVOL DCO
dg ictdg default default 6000 1074301281.1224.svan1008
dm ictdg01 c4t0d47s2 sliced 3583 17670720 -
dm ictdg02 c4t0d25s2 sliced 3583 17670720 -
dm ictdg03 c4t0d48s2 sliced 5503 70698240 -
dm ictdg04 c4t0d49s2 sliced 5503 70698240 -
dm ictdg05 c4t0d50s2 sliced 5503 70698240 -
dm ictdg06 c4t0d51s2 sliced 5503 70698240 -
dm ictdg07 c4t0d52s2 sliced 5503 70698240 -
dm ictdg08 c4t0d53s2 sliced 5503 70698240 -
dm ictdg09 c4t0d54s2 sliced 5503 70698240 -
dm ictdg10 c4t0d55s2 sliced 7423 141400320 -
dm ictdg11 c4t0d56s2 sliced 7423 141400320 -
dm ictdg12 c4t0d57s2 sliced 7423 141400320 -
dm ictdg13 c4t0d58s2 sliced 7423 141400320 FAILING
dm ictdg14 c4t0d59s2 sliced 7423 141400320 -
dm ictdg15 c4t0d60s2 sliced 7423 141400320 -
dm ictdg16 c4t0d61s2 sliced 7423 141400320 -
dm ictdg17 c4t0d62s2 sliced 7423 141400320 -
dm ictdg18 c4t0d63s2 sliced 7423 141400320 -
dm ictdg19 c4t0d64s2 sliced 7423 141400320 -
dm ictdg20 c4t0d65s2 sliced 7423 141400320 -
dm ictdg21 c4t0d66s2 sliced 7423 141400320 -
dm ictdg24 c4t0d69s2 sliced 5503 70698240 -
I am running Windows Server 2008 R2, and the system event log shows this event being reported every second.
In addition, the VCS "Veritas High Availability Engine" service keeps stopping on its own (it stops every day), as shown below, and I have to start it manually.
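For now I recover by checking the engine state and starting it again by hand from a command prompt on the node; the two commands below are the standard VCS CLI ones I know of, and I have not found the root cause yet:
hastatus -sum
hastart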
The Storage Foundation High Availability Product Management team is pleased to announce the upcoming SFHA 6.1 Beta Program. SFHA 6.1 is focused on I/O Optimization, reduction of infrastructure, and unparalleled availability of mission critical applications while allowing customers to take advantage of new and continuing trends in the Data Center.
Over the next few weeks and months, we will post articles, feedback polls, online previews, demo videos, and whitepapers. If you would like to participate, or you would like more details on timelines or components, please keep an eye on this page. We will be opening up a private group allowing you to get a sense of what we will introduce in the next release and have the opportunity to get your feedback directly to the product management team.
If you are interested in direct download of the beta binaries to test out at your site, please let us know or contact your local sales team.
Thanks,
Ryan Jancaitis (ryanjancaitis) and Anthony Herr (aherr)
The Veritas High Availability agents provide out-of-the-box support for many enterprise applications as well as a wide range of replication technologies. For a complete list of supported agents, please visit https://sort.symantec.com/agents. The agents monitor specific resources within an enterprise application, determine the status of those resources, and start or stop them according to external events. Agents are released every quarter to extend support for newer versions of operating system platforms and VCS/ApplicationHA releases.
Here is a list of links to recently available Veritas Cluster Server high availability Agents:
High Availability Agents - AgentPack-4Q2012 release
High Availability Agents - AgentPack-3Q2012 release
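As a quick sanity check on any cluster node, the standard VCS commands below can be used to see which agents and resource types are present locally (exact output varies by VCS version and platform):
# List the agents currently known to the local VCS engine
haagent -list
# List the resource types and the configured resources they manage
hatype -list
hares -list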
Hi all!
We have installed a set of 3 CPS (in three different sites) to configure IO fencing. We wonder how many clusters can use this set of 3 CPS. In the doc "Veritas™ Cluster Server Administrator's Guide 6.0", I read:
- 128 clusters... but clusters of how many nodes each? 2, 3, ...?
In this post http://www.symantec.com/connect/forums/single-cps-use-multiple-clusters , AHerr says that "in testing we have had over 1000 2-node clusters using the same set of 3 CPS nodes". Why the limit of 128 clusters?
Another question is about the database file /etc/VRTScps/db/current/cps_db: what happens if this file is deleted by accident? Is it correct to use the cps_db file from one of the other CPS servers? (I tried it and it worked fine.) Should we back it up daily?
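In case it helps frame that last question, this is the sort of daily copy I was considering running on each CP server (the schedule and the /var/backups target directory are only examples):
# Example root crontab entry: keep a dated copy of the CPS database once a day
0 2 * * * /bin/cp /etc/VRTScps/db/current/cps_db /var/backups/cps_db.`date +\%Y\%m\%d`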
Regards,
Joaquín
Some business environments may require a cluster setup that supports application high availability with differentiated configurations in the same cluster. For example, some applications in the cluster may need high availability with or without dependency on databases. And, some applications may require high availability with support for clustered databases. In such scenarios, you may want to consider a mixed configuration of Storage Foundation for Oracle RAC (SF Oracle RAC) and Storage Foundation Cluster File System High Availability (SFCFSHA) in your environment.
In mixed configurations, nodes with applications that require high availability can be configured to run SFCFSHA while nodes with applications that require clustered databases can be configured to run SF Oracle RAC.
This article introduces the configuration scenarios and the advantage of having mixed configurations.
The deployment scenarios are as follows:
Both configuration scenarios provide the following advantages:
For more information and instructions:
Setting up a mixed configuration cluster running SF Oracle RAC and SFCFSHA
Additional information can be obtained from the following documents on SORT:
Veritas Storage Foundation for Oracle RAC Installation and Configuration Guide
Veritas Storage Foundation Cluster File System High Availability Installation Guide
Hello folks ,
Is there any way to disable I/O fencing in a VCS 5.1 cluster without bringing the cluster down? I have a cluster consisting of 8 nodes; when I tried to add another node, problems occurred and the fencing keys got corrupted. Therefore, I need to disable I/O fencing, as it is not needed since I do not have CVM or CFS.
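For reference, this is roughly what I have looked at so far on one node, without changing anything (vxfenadm is the fencing admin utility shipped with VCS 5.1):
# Current fencing mode, membership, and coordinator disk count
vxfenadm -d
# Registration keys on the coordinator disks listed in /etc/vxfentab
vxfenadm -s all -f /etc/vxfentab
# The mode file I expect I would eventually have to switch to vxfen_mode=disabled
grep vxfen_mode /etc/vxfenmode
What I am not sure about is whether changing /etc/vxfenmode to disabled and removing UseFence from main.cf can be done one node at a time, or whether it needs a full cluster stop.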
Thanks
Hi, we have problems with a VCS (version 5.1) cluster running MS Exchange 2003 after we updated AD to 2008 R2.
The situation is very odd, because we have another VCS cluster running MS Exchange 2003 (all versions are the same) that works fine.
Here is the Lanman log:
On a fresh install of SFHA 6.0.3 on Solaris 11.1 SPARC (per the instructions, i.e. install 6.0.1 without configuring it, install 6.0.3, then configure), the following service shows up in the maintenance state:
# svcs -xv svc:/system/VRTSperl-runonce:default
svc:/system/VRTSperl-runonce:default (?)
 State: maintenance since Mon Feb 11 14:11:28 2013
Reason: Start method failed repeatedly, last exited with status 127.
   See: http://support.oracle.com/msg/SMF-8000-KS
   See: /var/svc/log/system-VRTSperl-runonce:default.log
Impact: This service is not running.
Looking at the log, the start method fails as the file it's trying to run ( /opt/VRTSperl/bin/runonce ) is missing / does not exist:
# tail /var/svc/log/system-VRTSperl-runonce:default.log
[ Feb 11 14:11:28 Executing start method ("/opt/VRTSperl/bin/runonce"). ]
/usr/sbin/sh[1]: exec: /opt/VRTSperl/bin/runonce: not found
[ Feb 11 14:11:28 Method "start" exited with status 127. ]
[ Feb 11 14:11:28 Executing start method ("/opt/VRTSperl/bin/runonce"). ]
/usr/sbin/sh[1]: exec: /opt/VRTSperl/bin/runonce: not found
[ Feb 11 14:11:28 Method "start" exited with status 127. ]
[ Feb 11 14:11:28 Executing start method ("/opt/VRTSperl/bin/runonce"). ]
/usr/sbin/sh[1]: exec: /opt/VRTSperl/bin/runonce: not found
[ Feb 11 14:11:28 Method "start" exited with status 127. ]
Has anyone else seen this in SF 6.0.3?
The file name implies it's something that only needs to be run once (and was possibly deleted after it was run) - should this service be disabled/removed as part of the installation so it doesn't come up in maintenance every time?
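For now I am considering just disabling the one-shot service so it stops landing in maintenance, something along these lines (assuming it genuinely is only needed at install time):
# Disable the one-shot service so SMF stops retrying the missing start method
svcadm disable svc:/system/VRTSperl-runonce:default
# Clear the maintenance flag if the instance still reports it
svcadm clear svc:/system/VRTSperl-runonce:default
# Confirm nothing is left in maintenance
svcs -x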
Please advise if further details are required.
thanks,
Grace
The Symantec Operations Readiness Tools mobile application (SORT Mobile) enables you to look up Symantec error codes on your iPhone, iPad, or iPod Touch. You can now look up an error code anywhere: in a customer meeting, on the road, or at lunch with a client.
In addition to Error Code Lookup, SORT Mobile supports these key features from the SORT website:
SORT Mobile runs natively on iPhones, iPads, and iPod Touches running iOS 5.0 or later.
To learn more about SORT Mobile and to download it to your iPhone, iPad, or iPod Touch, go to:
Storage Foundation Cluster File System (CFS) provides several advantages over a traditional file system when running in a VMware guest OS. The primary benefit is that multiple virtual servers (up to 64) using CFS can simultaneously access data residing on the same physical LUN or virtual disk. A secondary advantage is the ability to provide faster application failover within virtual servers than a VM restart allows. And by using the VMware hypervisor to manage the virtual servers and their resources, applications gain all the advantages of running in a virtualized environment.
Utilizing CFS, an application can be recovered in a few seconds when running in a virtual machine (VM). The CFS High Availability Module will detect outages instantly, and CFS will provide immediate access to the application data in another VM. Since the application is already running in another VM with access to the same data, no additional time is required to map the storage to a new VM, and the application can immediately begin accessing the data needed to recover the service. Because several VMs can have simultaneous access to the data, other activities such as reporting, testing, and backup can be implemented without incurring any CPU overhead on the primary application VM.
Business Intelligence applications are another area where a robust cluster file system can provide scale-out capabilities. Sharing the same data across all the virtual servers minimizes storage consumption by avoiding duplicate copies of data. By making the data immediately accessible for processing across all the servers, data processing and transformation cycles can be reduced.
Finally, CFS can provide a framework for an adaptable grid infrastructure based on virtual machines. With the CFS cluster capability to add/remove nodes dynamically without bringing the cluster down, administrators can tailor the cluster size to the changing dynamics of the workload. The cost of managing such a grid can be reduced using CFS storage optimization features such as compression, de-duplication, snapshots, and thin provisioning.
Cluster File System connection to storage can be implemented in two ways on VMware, depending on the application requirements:
Requirement                          | Solution
Best performance, SCSI-3 PGR fencing | Raw Device Mapping - Physical (RDM-P)
vMotion and other VMware HA features | VMFS virtual disk with the multi-writer flag enabled
Support for both options is documented in Using Veritas Cluster File System (CFS) in VMware virtualized environments.
Each approach has its own pros and cons. RDM-P provides a direct connection between the virtual machine file system and the underlying physical LUN. Because of this, applications will likely achieve higher performance than with a VMFS connection. Additionally, disk-based SCSI-3 data fencing can be implemented for data protection, and it is possible to create a cluster of both physical and virtual machines. The downside of using RDM-P is that it does not allow the hypervisor to perform VMware management activities such as vSphere vMotion.
A VMFS virtual disk (VMDK) architecture can provide a more flexible model for managing storage: cluster nodes can be dynamically moved across ESX servers, allowing server maintenance while keeping all cluster nodes attached to the shared data. Normally, in order to prevent data corruption, VMFS prevents access to a VMDK by more than one VM at a time. In order to allow CFS to provide simultaneous access to the virtual disk(s) by all the nodes in the cluster, the VMFS multi-writer flag must be enabled on the VMDK. The HOWTO8299 document mentioned above provides detailed instructions on this. It should be noted that applications can expect slightly lower performance when using VMFS vs. RDM-P, due to VMFS overhead. Additionally, VMware virtual disks do not emulate the SCSI-3 PGR data fencing command set at this time, so extra precaution should be taken to prevent inadvertent mapping of cluster VMDKs to non-cluster virtual machines.
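As an illustration only (the deployment guide below and HOWTO8299 are the authoritative references), enabling the multi-writer flag ultimately results in entries like the following in each cluster VM's .vmx file, shown here for a shared disk on virtual SCSI controller 1, target 0, with an example datastore path:
scsi1:0.fileName = "/vmfs/volumes/datastore1/cfsshared/cfs_data.vmdk"
scsi1:0.sharing = "multi-writer"
In practice the setting is typically added through the VM's advanced configuration parameters in the vSphere client rather than by editing the file directly.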
A detailed explanation on how to install and configure CFS with VMDK files can be found in the Storage Foundation Cluster File System HA on VMware VMDK Deployment Guide, which is attached to this article. This deployment guide presents a very specific example using the following infrastructure:
A four-node cluster will be created using two different ESX servers, as shown in this diagram:
This guide is not a replacement for any other guide (VMware or otherwise), nor does it contain an explicit supportability matrix. It is intended to document a particular implementation that may help users in their first implementation of CFS in a VMware environment using VMFS virtual disks as the backend storage. Please refer to product release notes, admin, and install guides for further information.
Carlos Carrero
Hi,
I would just like to know if the following solutions can be implemented in the cloud: SF, VCS, VVR, VOM, and APPHA. A client has their own private cloud and wants to set up their DR in the cloud as well. They are looking at our solution to help them implement their DR.
Many Thanks!
Dan
We have two Sun 3510 arrays, each containing three disks. VxVM, on Solaris 10 for SPARC, has been set up to mirror each disk, i.e. Array 1 Disk 1 is mirrored to Array 2 Disk 1, and so on. We had a problem with the power to the controller of array 2, and the disks in that array ended up in a disabled and removed state.
The power has now been restored and the disks are visible to the operating system again. However, whatever we try, we cannot get VxVM to accept the disks.
It can see them but will not re-add them. We have followed the replace disk procedure which fails with the message 'no device available to replace'.
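Would a sequence along these lines be the right direction now that the devices are visible again? (The disk group and device names below are just placeholders for ours.)
# Rescan so VxVM notices the returned paths
vxdctl enable
vxdisk -o alldgs list
# Check whether the disks can simply be reattached to their old disk media records
vxreattach -c c2t1d0
vxreattach
# Resynchronize the mirrors once the disks are back in the disk group
vxrecover -g mydg -sb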
Any suggestions would be most gratefully received.
What is the use of a service group in a cluster?
What if I don't create it?
I suspect that defragmenting a VxFS file system on a VVR volume would result in a lot of traffic on the link and a lot of blocks requiring synchronization.
This would risk filling up the SRL and generating a lot of sync traffic. Has anyone tried this, or does anyone have any insight on it?
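To make it concrete, the kind of run I have in mind is below (mount point, disk group, and RLINK name are placeholders, and the fsadm invocation varies slightly by platform):
# Report extent fragmentation first, to see whether a defrag is even needed
fsadm -F vxfs -E /data01
# The actual extent reorganization - the step I expect to hammer the SRL
fsadm -F vxfs -e /data01
# Watch how far the RLINK falls behind while it runs
vxrlink -g datadg status rlk_site2_datarvg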
The Cluster Manager (Java Console) enables you to administer your cluster. You can use the different views in the Cluster Manager (Java Console) to monitor and manage clusters and Veritas Cluster Server (VCS) objects, including service groups, systems, resources, and resource types.
If you want to manage clusters using the Cluster Manager (Java Console), the latest version is available for download from http://go.symantec.com/vcsm_download.
You need a free SymAccount to download the Cluster Manager (Java Console). You can obtain a SymAccount at: https://symaccount.symantec.com/SymAccount/index.jsp.
You cannot use the Cluster Manager (Java Console) to manage new features in Veritas Storage Foundation and High Availability releases 6.0 and later, such as the Virtual Business Service (VBS) feature. The Veritas Cluster Server Management Console is deprecated. Symantec recommends using Veritas Operations Manager instead to manage Storage Foundation and Cluster Server environments.
Veritas Operations Manager provides a centralized management console for Veritas Storage Foundation and High Availability products. You can use Veritas Operations Manager to monitor, visualize, and manage storage resources and generate reports.
You can download Veritas Operations Manager at http://go.symantec.com/vom.
For information on Veritas Operations Manager, see:
About Veritas Operations Manager
For information on VCS, see:
VCS documentation for other releases and platforms can be found on the SORT website.
Hi,
I need some help with VxVM volumes
I have been asked to restore some data from disks in a disk group.
This is how it looked before:
v lvol4a - ENABLED ACTIVE 2537355264 ROUND - gen
pl lvol4a-01 lvol4a ENABLED ACTIVE 2537355264 CONCAT - RW
sd disk32-01 lvol4a-01 disk32 0 1006534400 0 xiv0_38 ENA
sd disk33-01 lvol4a-01 disk33 0 1006534400 1006534400 xiv0_39 ENA
sd disk35-01 lvol4a-01 disk35 0 524286464 2013068800 xiv0_42 ENA
pl lvol4a-02 lvol4a ENABLED ACTIVE 2537355264 CONCAT - RW
sd TMP1-01 lvol4a-02 TMP1 0 1019148544 0 xiv0_40 ENA
sd TMP2-01 lvol4a-02 TMP2 0 1308491520 1019148544 xiv0_41 ENA
sd TMP1-02 lvol4a-02 TMP1 1019148544 209715200 2327640064 xiv0_40 ENA
Then the volume was deleted as below
vxedit -g DISKGROUP001 -rf rm lvol4a
And recreated with the same name and plex:
vxmake -g DISKGROUP001 vol lvol4a plex=lvol4a-01
Now it looks like this:
v lvol4a - ENABLED ACTIVE 2537355264 ROUND - gen
pl lvol4a-01 lvol4a ENABLED ACTIVE 2537355264 CONCAT - RW
sd disk32-01 lvol4a-01 disk32 0 1006534400 0 xiv0_38 ENA
sd disk33-01 lvol4a-01 disk33 0 1006534400 1006534400 xiv0_39 ENA
sd disk35-01 lvol4a-01 disk35 0 524286464 2013068800 xiv0_42 ENA
Now I need to mount the data from these disks, which are sitting in the disk group and are not associated with anything:
disk01 auto:cdsdisk TMP1 DISKGROUP001 online
disk02 auto:cdsdisk TMP2 DISKGROUP001 online
So
1) Has there been any data loss as a result of deleting and recreating volume lvol4a?
2) If there is no data loss, can I turn the subdisks below into a separate volume (vol_tmp) and mount it?
pl lvol4a-02 lvol4a ENABLED ACTIVE 2537355264 CONCAT - RW
sd TMP1-01 lvol4a-02 TMP1 0 1019148544 0 xiv0_40 ENA
sd TMP2-01 lvol4a-02 TMP2 0 1308491520 1019148544 xiv0_41 ENA
sd TMP1-02 lvol4a-02 TMP1 1019148544 209715200 2327640064 xiv0_40 ENA
They no longer exist in the disk group; only the dm records below exist:
dm TMP1 disk01 auto 65536 1274937088 -
dm TMP2 disk02 auto 65536 1308491520 -
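To make question 2 concrete, this is roughly the sequence I had in mind for the old lvol4a-02 subdisks; the offsets and lengths are taken from the vxprint output above, while the gen usage type, the vxfs file system check, and the new names are my assumptions:
# Recreate the subdisks at their original disk offsets and lengths
vxmake -g DISKGROUP001 sd TMP1-01 TMP1,0,1019148544
vxmake -g DISKGROUP001 sd TMP2-01 TMP2,0,1308491520
vxmake -g DISKGROUP001 sd TMP1-02 TMP1,1019148544,209715200
# Rebuild the concatenated plex in the original order and wrap it in a new volume
vxmake -g DISKGROUP001 plex vol_tmp-01 sd=TMP1-01,TMP2-01,TMP1-02
vxmake -g DISKGROUP001 -U gen vol vol_tmp plex=vol_tmp-01
# Bring the volume up without resynchronization, then check it read-only before mounting
vxvol -g DISKGROUP001 init active vol_tmp
fsck -F vxfs -n /dev/vx/rdsk/DISKGROUP001/vol_tmp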
Hi experts,
I am facing a problem with a VCS 6.0 installation on the RHEL 6.3 64-bit platform: the GAB service fails to start. Please let me know if you need any logs to help get past this issue.
Starting VCS: 87% _______________________
Estimated time remaining: (mm:ss) 0:05 7 of 8
Performing VCS configuration ........................................................................................................................................................... Done
Starting llt ........................................................................................................................................................................... Done
Starting gab ......................................................................................................................................................................... Failed
Starting amf ........................................................................................................................................................................... Done
Starting had ........................................................................................................................................................................ Aborted
Starting CmdServer .................................................................................................................................................................. Aborted
Starting sfmh-discovery ................................................................................................................................................................ Done
Veritas Cluster Server Startup did not complete successfully
gab failed to start on grafspree
had aborted to start on grafspree
CmdServer aborted to start on grafspree
gab failed to start on warspite
had aborted to start on warspite
CmdServer aborted to start on warspite
I/O fencing configuration is not done. You have two ways to configure it:
1. Run the command 'installvcs -fencing'.
2. Select the I/O fencing configuration task while running the webinstaller.
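If it helps, these are the basic checks I can run on both nodes (grafspree and warspite) and post back the output of:
# LLT should see both nodes before GAB can seed
lltstat -nvv | head -20
# GAB port membership / seeding state
gabconfig -a
# The interconnect and seed settings the installer wrote
cat /etc/llttab /etc/gabtab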
Thanks,
Jag