Channel: Symantec Connect - Storage and Clustering

vxesd is taking 99% of CPU

I need a solution

Environment

RHEL = 5.3

SFHA/DR = 5.0 MP3RP3

Primary Site = Two-Node Cluster

DR Site = One-Node Cluster

 

Problem

vxesd is consuming 99% of my CPU. I found the TechNote below, which says this is fixed in 5.0 MP3 RP2. However, when I run "rpm -qa | grep VRTS" I see a mix of version numbers. So my question is: which package relates to this issue, so that I can confirm it is patched to 5.0 MP3 RP2?

 

TechNote

http://www.symantec.com/business/support/index?pag...

 

rpm -qa | grep VRTS

VRTSicsco-1.3.28.0-0
VRTScscm-5.0.30.00-MP3_GENERIC
VRTSobgui-3.3.1202.0-0
VRTSdsa-5.0.00.00-GA_RHEL4
VRTSddlpr-5.0.30.00-GA_MP3_RHEL5
VRTSmapro-common-5.0.3.0-RHEL4
VRTSatServer-4.3.34.4-4
VRTScutil-5.0-MP3_GENERIC
VRTSaa-5.0.602.0-0
VRTSlvmconv-5.0.30.00-MP3_RHEL5
VRTSvrw-5.0.30.00-MP3_RHEL5
VRTSgab-5.0.30.20-MP3RP2_RHEL5
VRTSvlic-3.02.33.5500-0
VRTSatClient-4.3.34.4-4
VRTSacclib-5.0.30.00-MP3_GENERIC
VRTSjre15-1.5.3.5-5
VRTSweb-5.0.1-GA4_GENERIC
VRTSobc33-3.3.1202.0-0
VRTSmh-5.0.527.0-0
VRTSvxvm-common-5.0.30.00-MP3_RHEL5
VRTSvmman-5.0.30.00-MP3_GENERIC
VRTSalloc-5.0.30.00-MP3_RHEL5
VRTSvcsvr-5.0.30.00-MP3_GENERIC
VRTSfsmnd-5.0.30.00-MP3_GENERIC
VRTSllt-5.0.30.20-MP3RP2_RHEL5
VRTSvcs-5.0.30.20-MP3RP2_RHEL5
VRTSpbx-1.3.28.0-0
VRTSvcsmg-5.0.30.00-MP3_GENERIC
VRTSvcsmn-5.0.30.00-MP3_GENERIC
VRTScssim-5.0.30.00-MP3_RHEL5
VRTSccg-5.0.602.0-0
VRTSvxfs-platform-5.0.30.00-MP3_RHEL5
VRTSfspro-5.0.30.00-MP3_RHEL5
VRTSdcli-5.0.30.00-GA_MP3_RHEL
VRTSvrpro-5.0.30.00-MP3_RHEL5
VRTSfssdk-5.0.30.00-MP3_RHEL5
VRTScmcs-5.0.30.20-MP3RP2_RHEL5
VRTScmccc-5.0.30.20-MP3RP2_RHEL5
VRTSspt-5.5.00.0-GA
VRTSvcsdr-5.0.30.00-MP3_RHEL5
VRTSvxfs-common-5.0.30.00-MP3_RHEL5
VRTSvmpro-5.0.30.00-MP3_RHEL5
VRTSfsman-5.0.30.00-MP3_GENERIC
VRTSvcsag-5.0.30.20-MP3RP2_RHEL5
VRTSperl-5.8.8.0-RHEL5
VRTScscw-5.0.30.00-MP3_GENERIC
VRTSob-3.3.1202.0-0
VRTSvxvm-platform-5.0.30.00-MP3_RHEL5
VRTSvdid-1.2.206.0-206
VRTSvxmsa-4.4.028-RHEL4
VRTSvxfen-5.0.30.20-MP3RP2_RHEL5
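 

For what it's worth, vxesd is the VxVM event source daemon, so the packages to check are the VxVM ones. A hedged sketch (the binary's path may vary by release):

# Confirm which package owns the vxesd binary, then check that package's version
rpm -qf $(which vxesd)
rpm -qa | grep VRTSvxvm

In the listing above, VRTSvxvm-common and VRTSvxvm-platform are still at 5.0.30.00-MP3 rather than 5.0.30.20-MP3RP2, which suggests the VxVM fix has not been applied.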


New LUN not visible to Veritas Cluster Nodes

I need a solution

Hi,

I have a two-node Veritas cluster, and I have requested 3 more LUNs for the existing cluster.

All 3 are attached, but 1 is already assigned to a disk group (I am not sure how that group was created).

Now I need to remove the disk from that group so that it is available like the other 2. The problem is that the group shows as online on one node, but when I check the other node, the group does not exist there.

Can anyone suggest a solution?
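 

A hedged diagnostic sketch (the disk group name is a placeholder, and destroying a group is irreversible, so confirm it is unwanted first):

# Show all disks, including ones claimed by deported disk groups (names appear in parentheses)
vxdisk -o alldgs list
# On the node where the group is imported, destroying it frees its disks
vxdg destroy unwanted_dg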

 

Patch missing for some components

I need a solution

My query.

I was updating SFHA 5.0 MP3RP3 with the rolling patch of MP4RP1.

The patch I downloaded did not contain any installmp or installrp script. There were only product directories containing RPMs, so I had to run rpm -Uvh *.rpm to install them, which left me confused: should I install all the RPMs in all the directories, or only those for the products I am using?

My setup is Global Cluster Server and Replication: a 2-node primary and a 1-node DR.

In the end, I installed only the RPMs in the Storage Foundation and Veritas Cluster Server directories, though when I tried to install the RPMs from the Cluster Server directory after those from the Storage Foundation directory, I got a message that the RPMs were already installed.

Please clarify the above points where I am confusing myself.

Further, I maintain a similar replica environment with another client who is already on SFHA 5.0 MP4RP1. I compared the output of rpm -qa | grep VRTS to see whether I had missed any RPMs and found 4 that were at an older version in my new environment. I searched the entire patch directories with the find command but was unable to locate them. The RPMs are:

VRTSmapro-common-5.0.3.0-RHEL4
VRTSvcsmg-5.0.40.00-MP4_GENERIC
VRTSvcsmn-5.0.40.00-MP4_GENERIC
VRTSvcsvr-5.0.40.00-MP4_GENERIC

I have similar packages in my environment, but they are at different versions. Where can I find these RPMs, and why were they not included in the 5.0 MP4RP1 rolling patch that I downloaded from SORT?
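 

As a sanity check, a hedged sketch for diffing the installed VRTS package sets between the two environments (the hostname is a placeholder):

# Collect and compare the sorted VRTS package lists from both environments
rpm -qa | grep '^VRTS' | sort > /tmp/vrts_new.txt
ssh otherhost "rpm -qa | grep '^VRTS' | sort" > /tmp/vrts_ref.txt
diff /tmp/vrts_new.txt /tmp/vrts_ref.txt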

Thanks

Group vxfen keeps offline in VCS 5.1

I need a solution

 

Hi all!
 
We have started setting up I/O fencing in our VCS clusters in customized mode (using 3 CP servers). In VCS 6.0 everything works just fine, but in VCS 5.1 (SP1RP2) the vxfen group stays offline, although fencing seems to be working properly:
 
Under Linux SLES 11:
 
 B  ClusterNOS      NODE1            Y          N               ONLINE
 B  ClusterNOS      NODE2            Y          N               OFFLINE
 B  vxfen           NODE1            Y          N               OFFLINE
 B  vxfen           NODE2            Y          N               OFFLINE
 
In the CPS, I can see both servers registered:
 
 CPS01TOR:/etc/VRTScps # cpsadm -s localhost -a list_nodes -c NOSTRUM
 Local node is CP Server, assuming nodeid as 0
 
 ClusterName      UUID                               Hostname(Node ID) Registered
 ===========   ===================================   ================  ===========
 NOS           {6176c10c-1dd2-11b2-8d00-a3e0b94e27bc}  NODE1(0)         1
 NOS           {6176c10c-1dd2-11b2-8d00-a3e0b94e27bc}  NODE2(1)         1
 
Log vxfend_A.log shows the proper registrations:
 
 End: join_local_node.sh returning SUCCESS (110)
 
There are no errors in any logs, apart from the following notice:
 
 2013/01/26 12:44:03 VCS NOTICE V-16-1-10435 Group vxfen will not start automatically on System NODE2 as the system is not a part of AutoStartList attribute of the group.
 
The vxfen group has only one resource, called coordpoint, which is online on both servers:
 
 NODE1:/var/VRTSvcs/log # hares -state coordpoint
 #Resource    Attribute             System     Value
 coordpoint   State                 NODE1      ONLINE
 coordpoint   State                 NODE2      ONLINE
 
The config we have is exactly the same as in the other VCS (6.0) cluster, where the vxfen group is online.
 
Any idea why the vxfen group stays offline while it seems to be working properly?
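 
One detail worth noting: the V-16-1-10435 notice above says NODE2 is not in the group's AutoStartList, and VCS only auto-starts a group on systems in that list. A hedged sketch of the change, assuming that is the only blocker:

# Add both nodes to the vxfen group's AutoStartList so it auto-starts after boot
haconf -makerw
hagrp -modify vxfen AutoStartList NODE1 NODE2
haconf -dump -makero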
 
Regards.

SFHA Solutions 6.0.1: Use case for setting up mixed configuration clusters of SF Oracle RAC and SFCFSHA

I do not need a solution (just sharing information)

In Oracle Real Application Clusters (RAC) 11g Release 2, Oracle introduced policy-managed databases wherein database resources are dynamically allocated based on policies defined for the cluster. Earlier versions of Oracle RAC required administrators to specifically define the nodes on which the database could run and the services that could run within the database. These databases are called administrator-managed databases.

You can configure both types of databases in the same Storage Foundation for Oracle RAC (SF Oracle RAC) cluster. However, you cannot have a policy-managed database running on the same node as an administrator-managed database. If you want to create administrator-managed databases on the same cluster that runs policy-managed databases, make sure to choose the nodes carefully for administrator-managed databases. This is because the node may be moved into the generic server pool if it is part of the policy-managed server pool.
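 

To make the distinction concrete, a hedged sketch using Oracle 11gR2 srvctl syntax (the pool and database names are placeholders, not from the original):

# A policy-managed database draws its nodes from a server pool rather than a fixed node list
srvctl add srvpool -g mypool -l 1 -u 2 -i 5   # min 1 / max 2 servers, importance 5
srvctl config database -d mydb                # reports whether a database is administrator or policy managed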

Upgrades may present a similar coexistence opportunity especially if your existing environment hosts administrator-managed databases and you want to upgrade some nodes in the environment to run policy-managed databases.

Note that coexistence of policy and administrator-managed databases requires that Oracle Clusterware be upgraded to use Oracle Grid Infrastructure on all nodes in the cluster. All nodes must belong to the same SF Oracle RAC cluster and Oracle Grid Infrastructure.

For more information and instructions on configuring policy-managed databases and administrator-managed databases in the same SF Oracle RAC cluster:

http://www.symantec.com/docs/HOWTO73184

Additional information on SF Oracle RAC can be found in the following guide on the SORT Website:

Veritas Storage Foundation for Oracle RAC Installation and Configuration Guide

SFHA Solutions 6.0.1: Reattaching failed disks using the Veritas Volume Manager (VxVM) vxreattach command

I do not need a solution (just sharing information)

 

You can perform a reattach operation if a disk could not be found at system startup, or if VxVM is started with some disk drivers unloaded and unloadable (causing disks to enter the failed state). If the underlying problem has been fixed (such as a cable or controller fault), use the vxreattach command to reattach the disks without plexes being flagged as STALE.  The reattach must occur before any volumes on the disk are started.
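 

For reference, a minimal usage sketch (the device name is a placeholder):

# Check whether a reattach is possible for the device, then reattach it
vxreattach -c c1t2d0s2   # -c: check only; reports the disk group the disk would rejoin
vxreattach c1t2d0s2      # reattach the disk to its original disk group
vxreattach -r c1t2d0s2   # -r: also attempt recovery of stale plexes on the disk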

 

For more information on the reattach operation, see:

 

vxreattach (1M) 6.0.1 manual pages:
AIX
HP-UX
Linux
Solaris

 

Veritas Storage Foundation and High Availability documentation for other platforms and releases can be found on the SORT Website.

SFHA Solutions 6.0.1: Use cases for configuring Veritas Volume Manager disk groups using vxinstall

I do not need a solution (just sharing information)


The vxinstall utility allows you to configure Veritas Volume Manager (VxVM) disk groups. It provides the following functions:

  • Licensing VxVM
  • Setting up a system-wide default disk group
  • Starting VxVM daemons when Storage Foundation (SF) has been installed manually

For use cases for configuring VxVM disk groups using vxinstall, see the vxinstall (1M) 6.0.1 manual pages.

You can also find information about the vxinstall utility in the PDF version of the following guide:

Veritas Storage Foundation Cluster File System High Availability Installation Guide

vxinstall (1M) manual pages for other platforms and releases can be found on the SORT website.
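 

As a point of comparison, a couple of the functions vxinstall performs can also be done directly; a hedged sketch (the disk group name is a placeholder):

# License reporting and system-wide default disk group, done by hand
vxlicrep                       # report installed Veritas license keys
vxdctl defaultdg mydefaultdg   # set the system-wide default disk group
vxdctl enable                  # refresh vxconfigd so it rescans its configuration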

 

Storage Foundation and High Availability Solutions (SFHA) 6.0.3 is now available


Veritas Storage Foundation and High Availability Solutions (SFHA) 6.0.3 is now available

For AIX, Solaris, Red Hat Enterprise Linux, SUSE Linux, and HP-UX

SORT links are below:

 

Use SORT notifications to receive updates on new patches and documentation.

 

Cheers

Tony

 

 

 

 

 

Rank  Product                              Release type         Patch name                Release date
1     Veritas Storage Foundation HA 6.0.1  Maintenance Release  sfha-sol11_x64-6.0.3      2013-02-01
2     Veritas Storage Foundation HA 6.0.1  Maintenance Release  sfha-sol10_x64-6.0.3      2013-02-01
3     Veritas Storage Foundation HA 6.0.1  Maintenance Release  sfha-sol11_sparc-6.0.3    2013-02-01
4     Veritas Storage Foundation HA 6.0.1  Maintenance Release  sfha-sol10_sparc-6.0.3    2013-02-01
5     Veritas Storage Foundation HA 6.0.1  Maintenance Release  sfha-hpux1131-6.0.3       2013-02-01
6     Veritas Storage Foundation HA 6.0.1  Maintenance Release  sfha-sles11_x86_64-6.0.3  2013-02-01
7     Veritas Storage Foundation HA 6.0.1  Maintenance Release  sfha-sles10_x86_64-6.0.3  2013-02-01
8     Veritas Storage Foundation HA 6.0.1  Maintenance Release  sfha-rhel6_x86_64-6.0.3   2013-02-01
9     Veritas Storage Foundation HA 6.0.1  Maintenance Release  sfha-rhel5_x86_64-6.0.3   2013-02-01
10    Veritas Storage Foundation HA 6.0.1  Maintenance Release  sfha-aix-6.0.3            2013-01-31

 


Unable to Snapback in Windows 2008r2 VCS

I need a solution

I am running SFW HA 5.1 with VCS. I have a disk group that is monitored by VCS. In my disk group I have my main drive plus 3 snapshot drives, with a scheduled task that runs a snapback and then takes a new snapshot daily. This process was running fine until I added the snap drives as resources in VCS. I want the snap drives to fail over to the second node and reattach their drive letters (basically what VCS normally does), but now that I have done this I get error V-76-58627-1162.

How do I configure VCS to allow Snapback?


Can I manually change drive status information

I need a solution

Server 2008 R2, SFW HA 5.1. A while back my main drive became corrupt. I renamed one of my snapshot drives, changed the drive letter, and was back up and running. My question: the volume status is "Healthy (Shadow Copy - read/write), NTFS".

Is there a way to remove the "(Shadow Copy - read/write)" part so it goes back to the original "Healthy, NTFS"?

solution needed for vxfen issue

I need a solution

 

There is a two-node cluster, which we split for an upgrade. The isolated node is not coming up because vxfen is not starting:

/02/01 11:52:05 VCS CRITICAL V-16-1-10037 VxFEN driver not configured. Retrying...
2013/02/01 11:52:20 VCS CRITICAL V-16-1-10037 VxFEN driver not configured. Retrying...
2013/02/01 11:52:35 VCS CRITICAL V-16-1-10037 VxFEN driver not configured. Retrying...
2013/02/01 11:52:50 VCS CRITICAL V-16-1-10037 VxFEN driver not configured. Retrying...
2013/02/01 11:53:05 VCS CRITICAL V-16-1-10037 VxFEN driver not configured. Retrying...
2013/02/01 11:53:20 VCS CRITICAL V-16-1-10031 VxFEN driver not configured. VCS Stopping. Manually restart VCS after configuring fencing
^C

The I/O fencing configuration seems okay on node 2: /etc/vxfentab has entries for the coordinator disks, /etc/vxfendg contains the disk group entry vxfendg2, and these disks and the disk group are visible too.

DEVICE       TYPE            DISK         GROUP        STATUS
c0t5006016047201339d0s2 auto:sliced     -            (ossdg)      online
c0t5006016047201339d1s2 auto:sliced     -            (sybasedg)   online
c0t5006016047201339d2s2 auto:sliced     -            (vxfendg1)   online
c0t5006016047201339d3s2 auto:sliced     -            (vxfendg1)   online
c0t5006016047201339d4s2 auto:sliced     -            (vxfendg1)   online
c0t5006016047201339d5s2 auto:sliced     -            (vxfendg2)   online
c0t5006016047201339d6s2 auto:sliced     -            (vxfendg2)   online
c0t5006016047201339d7s2 auto:sliced     -            -            online
c2t0d0s2     auto:SVM        -            -            SVM
c2t1d0s2     auto:SVM        -            -            SVM

On checking vxfen.log

 

nvoked S97vxfen. Starting
Fri Feb  1 11:50:37 CET 2013 starting vxfen.. 
Fri Feb  1 11:50:37 CET 2013 calling start_fun
Fri Feb  1 11:50:38 CET 2013 found vxfenmode file
Fri Feb  1 11:50:38 CET 2013 calling generate_vxfentab
Fri Feb  1 11:50:38 CET 2013 checking for /etc/vxfendg
Fri Feb  1 11:50:38 CET 2013 found  /etc/vxfendg.
Fri Feb  1 11:50:38 CET 2013 calling generate_disklist
Fri Feb  1 11:50:38 CET 2013 Starting vxfen.. Done. 
Fri Feb  1 11:50:38 CET 2013 starting in vxfen-startup
Fri Feb  1 11:50:38 CET 2013 calling regular vxfenconfig
Fri Feb  1 11:50:38 CET 2013 return value from above operation is 1
Fri Feb  1 11:50:38 CET 2013 output was VXFEN vxfenconfig ERROR V-11-2-1003 At least three coordinator disks must be defined
Log Buffer: 0xfffffffff4041090

refadm2-oss1{root} # cat /etc/vxfendg
vxfendg2

and vxfendg2 contains the two disks below:

c0t5006016047201339d5s2 auto:sliced     -            (vxfendg2)   online
c0t5006016047201339d6s2 auto:sliced     -            (vxfendg2)   online

 

Is it because there are only two disks in the coordinator disk group? Is this a known issue?
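 

The V-11-2-1003 error above is literal: fencing requires at least three coordinator disks, and vxfendg2 only has two. A hedged sketch for adding a third (the disk must already be initialized for VxVM, and VCS/fencing should be stopped on all nodes; the disk media name is a placeholder):

# Temporarily import the coordinator disk group so it can be modified
vxdg -tfC import vxfendg2
vxdg -g vxfendg2 set coordinator=off
vxdg -g vxfendg2 adddisk fendisk3=c0t5006016047201339d7s2   # the unassigned disk in the listing above
vxdg -g vxfendg2 set coordinator=on
vxdg deport vxfendg2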

 

 


Solaris 11.1 uninstall VXVM 6.0.1 in liveboot from CD

I need a solution

I have a Solaris 11.1 x86 box with VxVM 6.0.1.

My last action was an attempt to install the VxVM 6.0.3 patch on top of VxVM 6.0.1.

During the preparation stage, the installer could not stop all modules:

    vxio failed to stop on smc
    vxdmp failed to stop on smc

The installer then suggested rebooting the system. After rebooting, the system entered a panic cycle, and single-user mode does not help.

Now I am exploring the possibility of removing VxVM while booted from a live CD.

Steps I have taken so far:

1) Boot from sol-11_1-live-x86.iso

2) As root, zpool import -f rpool

3) ....

At step 3 I don't know what to do next. I tried to chroot into the BE, which I mounted with "beadm mount solaris /a", but I cannot remove VxVM by the standard methods.
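 

A hedged idea for step 3 (an assumption, not a verified procedure): rather than fully removing VxVM from the mounted BE, it may be enough to stop it from configuring at boot, which is what the install-db marker file does, together with removing any VxVM forceload lines:

# From the live CD, with the BE mounted at /a:
touch /a/etc/vx/reconfig.d/state.d/install-db   # VxVM startup scripts skip configuration when this file exists
vi /a/etc/system                                # comment out any forceload: drv/vxio, drv/vxdmp, drv/vxspec lines
beadm umount solaris                            # then try a normal boot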

 

Solaris 10u10 Zones and Storage Foundation Version 6.0

I need a solution

I have a Solaris 10u10 system with Veritas Storage Foundation version 6.0.100 and I'm trying to create a non-global zone.

The non-global zone doesn't come up completely because the VRTSvlic package creates a dependency on the svc:/milestone/multi-user milestone, but the vxfsldlic service is set to disabled.

Does anyone know how to get around this problem?
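 

A hedged workaround sketch (the zone name is a placeholder and the exact FMRI may differ by release; verify with svcs inside the zone):

# From the global zone, enable the disabled service so the multi-user milestone can be reached
zlogin myzone svcadm enable -r svc:/system/vxfsldlic:default
zlogin myzone svcs -x   # confirm nothing else is holding multi-user back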

SORT mobile is now available on the App Store!


 SORT functionality is now available as a mobile application on the Apple App Store!

The SORT mobile app supports five key features from the SORT website:

  • SPVU Calculator (includes Server Tier mapping)
  • Error Code Lookup
  • Patch Lookup
  • Installation and Upgrade Checklist (Storage Foundation and NetBackup)
  • News and Alerts (Storage Foundation and NetBackup)

Go to http://sort.symantec.com/mobile to get more information and to download the app to your iPhone or iPad.

Be sure to mention the availability of this app to your coworkers!

SFHA/DR + VVR running on physical and virtual server

I need a solution

Hi,

I am setting up 3-node clusters across a production site and a DR site using SFHA/DR + VVR. At production there are 2 pairs of clusters running on physical servers: the first pair is an Exchange server cluster, the second pair is an MS SQL server cluster. At the DR site, the third nodes run as VMs on Hyper-V.

I would like to confirm: can SFHA/DR + VVR work in this environment, with a mixture of physical and virtual (Hyper-V) machines?

 

Thank you


Unable to failover service group in 2 node cluster, for error V-16-10001-5506

I need a solution

Hi All

 

I was trying to fail over my Sybase group onto the secondary node, and it fails for the following reasons:

 

2013/02/06 10:58:34 VCS INFO V-16-1-10298 Resource sybasedg (Owner: Unspecified, Group: Sybase1) is online on node2 (VCS initiated)
2013/02/06 10:58:35 VCS WARNING V-16-10001-5506 (node2) Mount:sybmaster_mount:online:BlockDevice </dev/vx/dsk/sybasedg/sybmaster> does not exist. Resource <sybmaster_mount> will not go online
2013/02/06 10:58:35 VCS WARNING V-16-10001-5506 (node2) Mount:syblog_mount:online:BlockDevice </dev/vx/dsk/sybasedg/syblog> does not exist. Resource <syblog_mount> will not go online
2013/02/06 10:58:35 VCS WARNING V-16-10001-5506 (node2) Mount:sybdata_mount:online:BlockDevice </dev/vx/dsk/sybasedg/sybdata> does not exist. Resource <sybdata_mount> will not go online
2013/02/06 10:58:35 VCS WARNING V-16-10001-5506 (node2) Mount:pmsyblog_mount:online:BlockDevice </dev/vx/dsk/sybasedg/pmsyblog> does not exist. Resource <pmsyblog_mount> will not go online
2013/02/06 10:58:35 VCS WARNING V-16-10001-5506 (node2) Mount:pmsybdata_mount:online:BlockDevice </dev/vx/dsk/sybasedg/pmsybdata> does not exist. Resource <pmsybdata_mount> will not go online
2013/02/06 10:58:35 VCS WARNING V-16-10001-5506 (node2) Mount:fmsyblog_mount:online:BlockDevice </dev/vx/dsk/sybasedg/fmsyblog> does not exist. Resource <fmsyblog_mount> will not go online
2013/02/06 10:58:35 VCS WARNING V-16-10001-5506 (node2) Mount:fmsybdata_mount:online:BlockDevice </dev/vx/dsk/sybasedg/fmsybdata> does not exist. Resource <fmsybdata_mount> will not go online
2013/02/06 10:58:35 VCS WARNING V-16-10001-5506 (node2) Mount:dbdumps_mount:online:BlockDevice </dev/vx/dsk/sybasedg/dbdumps> does not exist. Resource <dbdumps_mount> will not go online

 

Is there a standard recovery method for this?
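 

For what it's worth, the /dev/vx/dsk/sybasedg/* paths only exist on a node once the disk group is imported there and its volumes are started. A hedged diagnostic sketch:

# On node2, check that the disk group imported and its volumes started
vxdg list                    # is sybasedg listed as imported?
vxprint -g sybasedg -v       # volume states; DISABLED volumes have no block device node
vxvol -g sybasedg startall   # start all volumes in the group
ls /dev/vx/dsk/sybasedg/     # the Mount resources need these device nodes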

 

Regards

Vivek

upgrading from SF 5.1 to 6.0.2

I need a solution

Using the installer script to upgrade from SF 5.1 to SFHA 6.1. There is no VCS on this server, and the installer exits:

Logs are being written to /var/tmp/installer-201302061759ZoK while installer is in progress

    Verifying systems: 25%                         _____________________________________________________________________

    Estimated time remaining: (mm:ss) 0:10                                                                        2 of 8

    Checking system communication ................................................................................. Done
    Checking release compatibility ................................................................................ Done
    Checking installed product CPI ERROR V-9-40-1083 Cannot upgrade  product because it is not installed on your system.
 

I can no longer see an option to install or upgrade just the SF components without HA; there is just a choice of SFHA or VCS, so I am not sure of the best approach.

DRA 6.1.1: Documentation available

I do not need a solution (just sharing information)

Documentation for Veritas Disaster Recovery Advisor (DRA) 6.1.1 is now available at the following locations:

The DRA 6.1.1 documentation set includes the following manuals:

  • Veritas Disaster Recovery Advisor Release Notes
  • Veritas Disaster Recovery Advisor Getting Started
  • Veritas Disaster Recovery Advisor Support Requirements
  • Veritas Disaster Recovery Advisor Deployment Requirements
  • Veritas Disaster Recovery Advisor User's Guide
  • Third-party Legal Notices

Backing up VCS Cluster Config

I need a solution

Can anyone advise what command or process is involved in backing up the VCS configuration of SFW HA/DR 5.1 onwards?

The old command was, for example:

 hasnap -backup -f clusterbackup.zip -n -m "Backup from March 25th 2007"

Apparently this is no longer supported on later versions of the software.
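 

In the absence of hasnap, a hedged fallback (the paths are the usual defaults, an assumption): the on-disk configuration itself can be archived once it has been flushed.

# If the configuration is open read-write, flush it to main.cf first
haconf -dump -makero
# Then archive the config directory
# Unix: /etc/VRTSvcs/conf/config ; Windows SFW HA: %VCS_HOME%\conf\config
tar cvf /tmp/vcsconf_backup.tar /etc/VRTSvcs/conf/config

On SFW HA, the hagetcf utility can also gather the cluster configuration, though it is aimed mainly at support data collection.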

Get VCS WARNING V-16-10031-9509 every 5 seconds

I need a solution

I get the following error every 5 seconds in the /var/VRTSvcs/log/ProcessOnOnly_A.log:

VCS WARNING V-16-10031-9509 ProcessOnOnly:vxatd:monitor:Process(4447) is running with priority:0, requested priority:10

I am running VCS 5.1SP1 on a RHEL 5.5 Dell server.

When I issue the "hares -display vxatd", I see the following regarding Priority:

vxatd        ArgListValues         anp_gmp1   PathName 1 /opt/VRTSat/bin/vxatd   Arguments 1 ""   UserName 1 root   Priority 1 10   PidFile 1 ""   IgnoreArgs 1 0
 

vxatd        Priority              global     10
 

The original contents of main.cf for this resource are the following:

group VxSS (
        SystemList = { anp_gmp1 = 0, anp_gmp2 = 1, anp_gmp3 = 2 }
        PrintTree = 0
        Parallel = 1
        AutoStartList = { anp_gmp1, anp_gmp2, anp_gmp3 }
        OnlineRetryLimit = 3
        OnlineRetryInterval = 30
        )

        Phantom phantom_vxss (
                )

        ProcessOnOnly vxatd (
                PathName = "/opt/VRTSat/bin/vxatd"
                IgnoreArgs = 1
                )

 

I have tried the following, to no avail:

1). Removed the "IgnoreArgs = 1" setting in the main.cf

2). Added a "Priority = 10" setting in the main.cf

 

Any ideas on how to correct this? And what is "Priority" in the context of VCS (it doesn't look like a Linux priority/nice value)?
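 

A hedged reading of the message: the ProcessOnOnly agent's monitor appears to compare the resource's Priority attribute (10, per the ArgListValues above) with the priority the vxatd process is actually running at (0). One sketch, assuming that reading is right, is to make the attribute match reality:

# Align the resource's Priority attribute with the priority vxatd actually runs at
haconf -makerw
hares -modify vxatd Priority 0
haconf -dump -makero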
