Channel: Symantec Connect - Storage and Clustering
Viewing all 2014 articles
Browse latest View live

VxVM vxdctl ERROR V-5-1-1589 enable failed: License has expired or is not available for operation


Can't locate object method "QEMU" Error During Upgrade to 6.0.x

I need a solution

Hi all,

 

I have a two-node VCS cluster running on two KVM virtual machines with RHEL 5.5.

VCS version is currently 5.1 SP1 RP2 and I'm trying to upgrade it to version 6.0.1.

Right after I run the installer script I get the following error:

 

[root@Hostname rhel5_x86_64]# pwd

/shared_data/VCS/dvd1-redhatlinux/rhel5_x86_64

[root@Hostname rhel5_x86_64]# ./installer

Can't locate object method "QEMU" via package "Padv::RHEL5x8664" at /shared_data/VCS/dvd1-redhatlinux/rhel5_x86_64/scripts/EDR/Padv/Linux.pm line 1061.

[root@Hostname rhel5_x86_64]#

 

Has anyone encountered such an error before?

I've contacted support and so far they have only instructed me to approach RedHat and open a ticket with them.
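While waiting on support, a diagnostic sketch to narrow this down (paths copied from the session above; `virt-what` may not be installed on RHEL 5.5, hence the /proc/cpuinfo fallback):

```shell
# Find where the installer's platform module dispatches on the hypervisor
# name, and see what virtualization string this guest actually reports.
grep -n 'QEMU' /shared_data/VCS/dvd1-redhatlinux/rhel5_x86_64/scripts/EDR/Padv/Linux.pm
virt-what 2>/dev/null || grep -i 'qemu\|hypervisor' /proc/cpuinfo
```

If the grep shows the installer calling a "QEMU" method that the RHEL5 platform module never defines, that output is worth attaching to the support case.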

 

Any help will be much appreciated.

 

Thanks,

Yair

 

 

cluster directory

I need a solution

 

 -bash-3.2$ cd /opt/VRTS/bin
  cd /opt/VRTSvcs/bin
  /opt/VRTSnbu/scripts 
 
 
  1. Why are these cluster scripts spread across three directories?
  2. What is the default directory that holds all the cluster commands like hagrp, hares, etc. (on both Windows and UNIX)?
  3. Which of the three directories are the default ones?
-bash-3.2$ cd /opt/VRTS/bin
-bash-3.2$ ls |grep -i hares
hares
-bash-3.2$ ls |grep -i hagrp
hagrp
-bash-3.2$ cd /opt/VRTSvcs/bin
-bash-3.2$ ls |grep -i hagrp
hagrp
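For what it's worth, on most installs /opt/VRTS/bin is a convenience directory of links to commands shipped by the individual packages (VRTSvcs provides the VCS commands such as hagrp and hares; VRTSnbu holds NetBackup scripts), which is why the same command shows up in more than one place. A quick way to check on your own system, sketched below:

```shell
# If the copies under /opt/VRTS/bin are symlinks, ls -l reveals their
# real home in the owning package's directory:
ls -l /opt/VRTS/bin/hagrp /opt/VRTS/bin/hares
# Compare with the VCS package's own directory:
ls -l /opt/VRTSvcs/bin/hagrp
# Putting the aggregate directory on PATH covers commands from all packages:
export PATH=$PATH:/opt/VRTS/bin
```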
 
 
 

Beyond Luck & Guesses: Overcoming the High Cost of Worthless Op Risk Models


The modern organization is highly dependent on information technology; simultaneously, and quite unintentionally, information technology has introduced new exposures that have deceptively seeped into every layer of the financial organization. The likelihood that an organization will experience a catastrophic loss from an IT-service interruption caused by an IT issue is far greater than that of an interruption caused by a disaster or ‘black swan’ event. Still, the key to survival is allocating the appropriate amount of resources to the “right” risks; while that may include planning contingencies for a worst-case scenario, being rational about risk requires more guidance regarding the investment tradeoffs that mitigate it.

The “Big Question” is how to optimize scarce resources today to achieve the greatest reduction in future losses. The Big Question has two components: (1) which risks are the serious ones, and (2) what are the optimal risk-reduction actions. The real problem with ‘traditional’ approaches like the Business Impact Analysis (BIA) and qualitative High-Medium-Low risk analysis is not that they are wrong, but that they offer no guidance on how to improve the situation. These traditional methods offer little advice for answering the Big Question. In fact, they can be dysfunctional. The unintended consequence of these outdated methods has been that the operational aspects of IT have been systematically neglected: this might be the biggest blunder in business today.

The value of operational risk management lies not in identifying risks and exposures; it lies in determining the optimal investment to mitigate the most serious risks. The cost-of-downtime figure and the BIA neither help identify causes nor help prioritize preventative actions. The BIA provides little value for controlling operational risks because its primary purpose is to respond and recover, not prevent. It overlooks the causal relationships of risk because it was never intended to treat a cause or a symptom. It is an after-the-fact approach for producing contingencies for worst-case circumstances, not a preemptive, proactive approach to strengthening operations.

While traditional methods have inherent disconnects and do not answer the Big Question, there are things that can be done today to keep the odds in our favor. A loss-expectancy risk model that economically quantifies operational risk will not only identify the serious risks but will also provide the important cause-and-effect correlation needed to rationally evaluate risk-reduction tradeoffs through cost-benefit balancing. Visit the link below to read the details in Beyond Luck & Guesses: Overcoming the High Cost of Worthless Op Risk Models.
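To make "economically quantifies" concrete, the simplest loss-expectancy arithmetic multiplies the cost of a single event by its expected annual frequency. The figures below are purely illustrative assumptions:

```shell
# Annualized Loss Expectancy (ALE) = Single Loss Expectancy (SLE)
#                                  x Annualized Rate of Occurrence (ARO)
awk 'BEGIN {
    sle = 250000   # assumed cost of one IT-service interruption, in dollars
    aro = 2        # assumed interruptions per year
    printf "ALE: $%d per year\n", sle * aro
}'
```

Against a $500,000-per-year expected loss, any risk-reduction action that halves the ARO and costs less than $250,000 a year pays for itself; that is the cost-benefit balancing the model enables.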

Demo: Using Veritas Cluster Server 6.0.2 and Apache


Demo showing how to configure and test Apache with VCS 6.0.2 on Linux.


SFS fs ERROR V-288-2160 Volume tag set failed

I need a solution

Hi experts

I run Symantec FileStore 5.6P3 and I'm getting the following error:

 

sfs> storage fs create simple eniqsts-admin2 2g eniqsts blksize=8192
SFS fs ERROR V-288-2160 Volume tag set failed for eniqsts-admin2.
 
If I run the command again, it fails with a different error:
 
omprsfs> storage fs create simple eniqsts-admin2 2g eniqsts blksize=8192
SFS fs ERROR V-288-170 File system eniqsts-admin2 already exists
 
However, the file system is not there when using:
 
sfs> nfs show fs 
 
or
 
sfs> storage fs list
 
How can I move on?
 
Thanks in advance for your valuable help.
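A common cause of this "create fails, then already exists" pattern is a half-created volume left behind in the underlying VxVM disk group. A sketch of how to check from the node's OS shell (not the sfs> CLI); the disk group name sfsdg is an assumption (substitute your pool's), and any removal on FileStore should be coordinated with support:

```shell
# Look for a leftover volume carrying the file system's name:
vxprint -g sfsdg -vh | grep eniqsts-admin2
# If a stale volume is found and support agrees, remove it recursively:
# vxedit -g sfsdg -rf rm eniqsts-admin2
```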

 

Service group Configure HELP!!!

I need a solution

 

hello:
 
I have 2 nodes (node1 Production) (Node 2 Quality)
 
How do I configure the production service group so that, when it switches over or fails over to node 2, the Quality service group is taken offline on node 2 before the production service group is brought online there?
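What you describe matches a VCS "offline local" group dependency: the parent group can only be online on a node where the child group is offline, and on failover VCS takes the child offline first. A sketch with placeholder group names SG_prod and SG_quality:

```shell
haconf -makerw
# Parent SG_prod may only be online where child SG_quality is offline;
# when SG_prod fails over to node 2, VCS offlines SG_quality there first.
hagrp -link SG_prod SG_quality offline local
haconf -dump -makero
```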

Notifier Cluster service group type

I need a solution

Hi,

What is the recommended way of configuring the notifier cluster service group? Should the Parallel attribute be 1 or 0?

example below:

 

[root@node1 ~]# hagrp -display ClusterService |grep -i parallel
ClusterService Parallel              global     0        <--------------------------------------------------0 or 1
[root@node1 ~]#

 

From what I understand, if we have application resources running across distributed nodes, then to get notifications from all nodes the ClusterService group should be parallel, i.e., Parallel = 1.
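For what it's worth, the shipped ClusterService group is a failover group (Parallel = 0), and that is normally sufficient: the notifier receives cluster-wide events from HAD on whichever node it runs, so a single instance sees faults from every node and a parallel group is not required. A quick check (the resource name ntfr is the conventional one, not guaranteed):

```shell
hagrp -value ClusterService Parallel   # expect 0 (failover group)
hares -state ntfr                      # notifier should be ONLINE on exactly one node
```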

Appreciate your help.

 

Thanks and Regards,

Rajeev


New Release: Symantec Operations Readiness Tool 3.8.1


 

On February 20, 2013, Symantec completed another release of Symantec Operations Readiness Tool (SORT)! With SORT’s focus on improving the total customer experience for Storage Foundation and NetBackup customers, we’ve added the following Storage Foundation High Availability Solutions features and improvements to the website:

  • Support of the Storage Foundation 6.0.2 release

Visit SORT at http://sort.symantec.com to see why thousands of Symantec customers continue to gain value from the site.

 

Re-installation required of OS on one node in two nodes SQL Cluster

I need a solution

Environment

OS = Windows2008R2

SQL Server = 2008

sfha =

Background

One of our clients has a two-node cluster for SQL Server. The passive node's OS is misbehaving at reboot, and the client does not seem able to get it fixed. The client is willing to re-install the OS on the passive node. Below are the steps that seem necessary for this activity.

 

1.) Run the Cluster Configuration Wizard.

2.) Delete the Passive Node.

3.) Install OS, sfha (same patch level as Active Node).

4.) Install SQL Server (same patch level as Active Node).             # at this step we additionally perform the "My idea" steps below

5.) Run the Cluster Configuration Wizard on Active Node.

6.) Add the Passive Node.

My Question

When we perform step 4 in a brand-new environment, the SQL installation asks where to place the data files (.mdf and .ldf), we point it at the central storage/SAN, and then click NEXT to continue the installation. In my situation the data files already exist on the SAN drive, and those drives are mounted on the active node. How can we point to the existing .mdf and .ldf files during the SQL installation step where the data files are defined? Recommendation required, please.

My idea

Define the data files (.mdf and .ldf) anywhere else for now. When the SQL installation completes, the steps below seem to be required.

Offline Service Group on Active Node.

Import the DiskGroup on Passive Node from VEA (Veritas Enterprise Administrator).

Call SQL Server DBA to define the path setting under SQL Server which locate SQL Server to see data files under DiskGroup Volumes.

Deport the DiskGroup.

Online Service Group on Active Node again.

Run step # 5 and # 6.
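The "My idea" sequence looks reasonable; command-line equivalents are sketched below. Service group and disk group names are placeholders, and SFW's vxdg syntax on Windows may differ slightly from the UNIX-style form shown, so doing the import/deport through VEA as described above is equally fine:

```shell
hagrp -offline SQL_SG -sys NODE_ACTIVE   # offline the SQL service group
vxdg -g SQL_DG deport                    # release the disk group from the active node
vxdg -g SQL_DG import                    # on the rebuilt passive node: import it
#   ...DBA repoints SQL Server at the .mdf/.ldf files on the DG volumes...
vxdg -g SQL_DG deport                    # give the disk group back
hagrp -online SQL_SG -sys NODE_ACTIVE    # bring the group online again
```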

cluster behavior needed, which cfg vars to modify

I need a solution

Hallo,

 

I wish to have the following behavior from a Veritas cluster, monitoring a resource (app):

resource failed, first attempt to restart it on the same node, if not, migrate it to the second node.

However, is there another monitoring setting that forces the resource to migrate directly if it fails too many times in a given timeframe, instead of being restarted on the same node?

When testing, I get different behaviors depending on how long I wait between manually killing the app, and I do not know exactly which configuration attributes I have to edit. Basically, the question is: how much time do I have between manual failures of the resource such that the cluster restarts it on the _same_ node?

 

Config so far: ToleranceLimit = 0, RestartLimit = 1, OnlineTimeout = 300.
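The window you are seeing is governed by ConfInterval: if the resource stays online for ConfInterval seconds (default 600), the agent forgets previous faults and restart attempts, so a later kill gets another same-node restart; kills closer together than ConfInterval exhaust RestartLimit and the group fails over to the other node. Shrinking ConfInterval makes "too many failures in a timeframe" trip sooner. A sketch, assuming an Application-type resource named app_res:

```shell
hatype -value Application ConfInterval   # show the current window (default 600 s)
haconf -makerw
hares -override app_res ConfInterval     # make the attribute settable per resource
hares -modify  app_res ConfInterval 300  # e.g. a 5-minute window
haconf -dump -makero
```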

 

Implementing Solaris Zones with Cluster File System HA 6.0


Provided within this document are the steps necessary to create a highly available Solaris zone environment using Cluster File System HA 6.0.  Additional content includes advanced provisioning techniques using Storage Foundation Space Optimized Snapshots, instant roll-back procedures using File System Checkpoints, as well as patching techniques designed to help alleviate some of the operational challenges inherent to local zone administration. For those environments where native ZFS is used for managing the zone root file systems, the required steps for incorporating that configuration into VCS are also addressed in this guide.

SFHA Solutions 6.0.1: About the vxconfigd daemon

I do not need a solution (just sharing information)

The Veritas Volume Manager (VxVM) vxconfigd daemon handles all configuration management tasks for VxVM objects. It maintains disk and disk group configuration details, communicates configuration changes to the kernel, and modifies the persistent configuration information stored on disks.

Operations that view or change VxVM configuration objects use the vxconfigd daemon.

To check the current operating mode of the vxconfigd daemon, use the vxdctl mode command. The vxdctl mode command reports which of the following operating modes the vxconfigd daemon is in:

  • enabled
  • disabled
  • booted
  • not-running

Ensure that the vxdctl mode is enabled.

Verify that the vxconfigd daemon is running by checking its presence in the ps -ef | grep vxconfigd command output.

If the vxconfigd daemon is not running, you can usually restart it by running the vxconfigd -k command. If a vxconfigd daemon is already running, -k kills it before starting another daemon. This is useful for recovering from a hung vxconfigd daemon. Killing the old vxconfigd daemon and starting a new one usually does not cause problems for volume devices that are being used by applications, or that contain mounted file systems.
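The checks described above, as a shell sketch:

```shell
vxdctl mode                   # expect "mode: enabled"
ps -ef | grep '[v]xconfigd'   # bracketed pattern keeps grep from matching itself
vxconfigd -k                  # kill any existing vxconfigd and start a fresh one
vxdctl enable                 # if the daemon comes up disabled, request a rescan
```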

For information on the vxconfigd daemon, see:

About the configuration daemon in Veritas Volume Manager

In a cluster configuration, a separate instance of the vxconfigd daemon runs on each cluster node, and these instances communicate with each other. The vxconfigd daemon plays an important role in establishing the cluster setup.

For information on running the vxconfigd daemon in a cluster, see:

For more information on troubleshooting the vxconfigd daemon, see:
Veritas Storage Foundation and High Availability Solutions Troubleshooting Guide

vxconfigd (1M) 6.0.1 manual pages:

vxdctl (1M) 6.0.1 manual pages:

Veritas Storage Foundation and High Availability documentation for other platforms and releases can be found on the SORT website.

VxFS: Understanding the V-3-20837 warning message

I do not need a solution (just sharing information)

The Veritas File System (VxFS) fsck command examines VxFS file systems for consistency. Because VxFS file systems record pending file system updates in an intent log, the fsck command typically replays the intent log instead of doing a full structural file system check.

When you run the fsck command to repair a file system, the following warning message may appear:

UX:vxfs fsck: WARNING: V-3-20837: file system had I/O error(s) on user data.

The message suggests that there were one or more I/O errors on user data.

You do not have to take action on this message. However, you should note that the fsck command found data errors. Frequent data errors are often the first sign that a spinning disk is nearing the end of its life. Most modern disk drives include Self-Monitoring, Analysis and Reporting Technology (SMART) logic for anticipating failures.
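For reference, the log-replay default and an explicit full structural check look like this (the device path is a placeholder; smartctl comes from the smartmontools package, if installed):

```shell
fsck -F vxfs /dev/vx/rdsk/mydg/myvol                 # default: replay the intent log
fsck -F vxfs -o full,nolog /dev/vx/rdsk/mydg/myvol   # force a full structural check
# On Linux the file system type is selected with -t rather than -F.
smartctl -H /dev/sdb                                 # SMART health of the underlying disk
```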

The fsck command can also be used for:

fsck_vxfs (1M) manual pages:

fsck command use cases and fsck_vxfs (1M) manual pages for other releases can be found on the SORT website.

SFHA Solutions 6.0.1: About managing Virtual Business Services (VBS) using VOM and the VBS command line interface

I do not need a solution (just sharing information)

VBS is a feature that represents a multi-tier application as a single consolidated entity in Veritas Operations Manager (VOM). It builds on the high availability and disaster recovery features provided by Symantec products such as Veritas Cluster Server (VCS) and Symantec ApplicationHA. VBS enables administrators to improve the operational efficiency of managing a heterogeneous multi-tier application.

You can control VBS from the VOM graphical user interface and the VBS command line interface (CLI).

When you install SFHA, the VBS installation packages, VRTSvbs and VRTSsfmh, are automatically installed on the nodes. From the VOM interface, you can define a VBS that consists of service groups from multiple clusters. You can also use the VBS CLI to perform back-end operations on that VBS.

The clustering solutions offered today can only manage applications running on the same operating system, so a clustered multi-tier, cross-platform setup can be difficult to manage.

VBS can work across a heterogeneous environment to enable IT organizations to ensure that applications across tiers can be made highly available. A typical multi-tier environment comprises a database on a UNIX server, applications running in a Kernel-based Virtual Machine (KVM) on a Linux server, and a Web server on a VMware virtual machine.

VBS works across the heterogeneous environment to communicate between local operating systems to see the end-to-end state of multi-tier applications and to control start and stop ordering of the applications. With VBS there are relationships between tiers that you can customize to fit your environment. You can set up policies for the events that result in a failure or for the specific events that happen on tiers. For example, you can set up a policy that restarts the application service groups when the database service group fails over to another node.

For more information about VBS features, components, and workflow, see:

You can configure and manage a VBS created in VOM by using the VOM VBS Availability Add-on utility. You can also control a VBS from the VBS CLI, but you cannot create a VBS from the VBS CLI.

The VBS Availability Add-on utility enables you to:

  • Start or stop service groups associated to a VBS.
  • Establish service group relationships that decide the order in which service groups are brought online or taken offline.
  • Decide the reaction of application components in each tier when an event fault occurs on a tier.
  • Recover a VBS from a remote site when a disaster occurs.

For more information about installing the VBS add-on, packages, and configuring a VBS using VOM, see:

For more information on VBS commands, troubleshooting issues, and recovery operations, see:

For more information on managing VBS using VOM and the VBS command line, see:

Virtual Business Service-Availability User's Guide

Virtual Business Services documentation for other SFHA releases can be found on the SORT website.

 

 


need a solution for SRL volume in VVR

I need a solution

 

we have VCS Global cluster  and replication by VVR.

The SRL volume looks as shown below. I need to correct it.

I need to remove plex %63 and then relocate the subdisk of plex oss_srl_vol-02 to another disk.

Can I remove and add plexes to an SRL volume online?

 

 

v  oss_srl_vol  ossrvg       ENABLED  ACTIVE   629145600 SELECT   -        SRL
pl %63          oss_srl_vol  ENABLED  TEMPRM   629145600 CONCAT   -        WO
sd osssrldk1m-UR-021 %63     osssrldk1m 6176   629145600 0        c2t40d0  ENA
pl oss_srl_vol-01 oss_srl_vol ENABLED ACTIVE   629145600 CONCAT   -        RW
sd osssrldk1-01 oss_srl_vol-01 osssrldk1 0     629145600 0        c1t40d0  ENA
pl oss_srl_vol-02 oss_srl_vol ENABLED ACTIVE   629145600 CONCAT   -        RW
sd disk2-42     oss_srl_vol-02 disk2  343291808 629145600 0       c0t40d0  ENA
 

 

lomas9o{root} #: vradmin -g ossdg printrvg ossrvg
Replicated Data Set: ossrvg
Primary:
        HostName: barnsley_lomas9o-ossrvg       <localhost>
        RvgName: ossrvg
        DgName: ossdg
Secondary:
        HostName: beckton_lomas8o-ossrvg
        RvgName: ossrvg
        DgName: ossdg

lomas9o{root} #: vradmin -g ossdg repstatus ossrvg
Replicated Data Set: ossrvg
Primary:
  Host name:                  barnsley_lomas9o-ossrvg
  RVG name:                   ossrvg
  DG name:                    ossdg
  RVG state:                  enabled for I/O
  Data volumes:               20
  VSets:                      0
  SRL name:                   oss_srl_vol
  SRL size:                   300.00 G
  Total secondaries:          1

Secondary:
  Host name:                  beckton_lomas8o-ossrvg
  RVG name:                   ossrvg
  DG name:                    ossdg
  Data status:                consistent, up-to-date
  Replication status:         replicating (connected)
  Current mode:               asynchronous
  Logging to:                 SRL
  Timestamp Information:      behind by 0h 0m 0s
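A sketch of the sequence, assuming support confirms the SRL can be modified this way while the RVG is in use; it is safest at a quiet replication moment, with replication paused first. The target subdisk for the move must be created beforehand (vxmake sd), and <new_sd> is a placeholder:

```shell
vradmin -g ossdg pauserep ossrvg     # pause replication to the secondary
vxplex -g ossdg -o rm dis %63        # dissociate and remove the TEMPRM/WO plex
vxsd -g ossdg mv disk2-42 <new_sd>   # move oss_srl_vol-02's subdisk off disk2
vradmin -g ossdg resumerep ossrvg    # resume replication
```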
 

New set of articles about troubleshooting the "failed" disk status, as reported by vxdisk

I do not need a solution (just sharing information)

Hi, all.

We have released a set of articles that contain information about troubleshooting the "failed" disk status, as reported by vxdisk.

Here is the link:

"Failed" or "failed was" is reported by vxdisk
http://www.symantec.com/docs/TECH200618

Since this is a broad topic, the "technote" is actually a set of about a dozen articles that have been organized into a logical tree structure, with TECH200618 at its "root."
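For readers landing here mid-incident, the usual first-response commands when vxdisk reports "failed" or "failed was" are along these lines (the device name is an example):

```shell
vxdisk -o alldgs list     # confirm the status, including deported disk groups
vxreattach -c c1t40d0s2   # check whether the disk can simply be reattached
vxreattach c1t40d0s2      # reattach if the check is clean
vxrecover -s              # start volumes and resynchronize stale plexes
```

The articles under TECH200618 cover when each of these is and is not appropriate.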

Let us know what you think!
 

Regards,

Mike

Business continuity for virtualized applications


I speak with a lot of our customers, and a frequent topic of conversation is around virtualization. Just a few years ago, most I spoke with didn't have more than 20 to 30% of their applications virtualized. Now that figure is typically around 70% or higher, and most customers say their goal is to hit as close to 100% as they can. The common blocker to virtualizing, though, is concern over application availability.

It's for this reason that we developed our ApplicationHA solution a few years back to manage in-guest monitoring of applications and restart of the application in the event of a failure. More recently, we announced a new release of Veritas Cluster Server which allows users to create clusters of virtual machines, and failover an application to an already-running standby virtual machine.

Taneja Group has been watching the disaster recovery/business continuity space in the industry, and just recently put up a blog post about this very topic. I particularly liked this passage:

Now the final interesting angle I'll call out here is that Veritas Cluster Server actually speaks to the number one reason why customers pursue server virtualization too. If you remember what I said, I said this was TCO savings. The biggest factor in achieving TCO savings with virtualization is efficiency, achieved through the consolidation of workloads and higher VM density. Symantec has a message here too. The Symantec ApplicationHA and Veritas Cluster Server solutions deliver enhanced availability while sharing resources and enabling low utilization standby systems with less overhead than full clones running as standby servers, and other potential approaches.

Be sure to read the full reports linked at the bottom of the Taneja Group blog post. It's interesting stuff.

SF HA SambaServer Agent is not zone-aware


The SambaServer agent provided as a bundled agent with SF HA is not zone-aware. That means that when I configure the Samba server process in a Solaris zone, the SambaServer agent is not able to monitor it.

I experimented with setting RunInContainer = 1 for SambaServer, which however can't work, as the SambaServer agent is not a script-based agent but has its monitoring built into the binary. So you can consider this an RFE to make the SambaServer and corresponding Samba agents zone-aware and honour ContainerInfo.
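For contrast, this is how zone awareness is normally expressed for agents that do honour it (group and zone names are placeholders); the point of the RFE is that SambaServer currently ignores this:

```shell
haconf -makerw
hagrp -modify samba_sg ContainerInfo Name myzone Type Zone Enabled 1
haconf -dump -makero
```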

Global Conference on Disaster Management

Location: 
Los Angeles, CA
Time: 
Thu, 24 October, 2013 - 7:00 - 18:00 PDT

Within just six years of existence, beginning in 2006, the Global Conference on Disaster Management has held over 45 conferences across more than 30 cities in the United States. With an impressive past-attendee list and a collection of respectable exhibitors, our conferences offer essential knowledge and education while also creating opportunities for personal interaction. The Global Conference on Disaster Management provides connection and support to local, regional, state, and national entities.
