VMware Communities : Popular Discussions - Backup & Recovery

VDP Upgrade - stalls at 45% - Installation of the package stalled


Hi,

I just tried the upgrade from VDP 5.5 to 5.6, and it stopped with this error message:

 

Installation of the package stalled

The installation stalled due to an error. Please use your snapshot to rollback the appliance.

 

1.) The error message is not really helpful.

2.) I rolled back and tried a second time; no luck.

 

Does anybody have a clue where to look for detailed error messages?
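So far the only idea I have is tailing the installer logs over SSH while the upgrade runs, along these lines (the paths are guesses based on the Avamar layout underneath VDP):

ssh root@<vdp-appliance>
tail -f /usr/local/avamar/var/avi/server_log/avinstaller.log.0   # installer/upgrade activity (assumed path)
tail -f /usr/local/avamar/var/vdr/server_logs/vdr-server.log     # VDP server log (assumed path)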

 

Thanks


VDP: [004] The VDP appliance datastore is approaching maximum capacity


Hi,

 

I've got the alarm "VDP: [004] The VDP appliance datastore is approaching maximum capacity" because the datastore holding the VDP virtual disks is almost exactly the same size as the virtual disks themselves.

 

I've disabled the alarm, but every time I reboot the VDP appliance the alarm is recreated and re-enabled.

 

Is there any way to make VDP not recreate this alarm definition?

Backup job failing - Error 3 (One of the parameters was invalid)


Hello,

As stated above, my backup job is failing and I don't know why.

From the vSphere Web Client I get the following message on the task console:

 

VDP server execution error code: 10055

 

 

From the logs I deduced that the important part is the section below:

 

/usr/local/avamarclient/var/Backup_test....log

2015-04-17T18:55:18.478-05:-30 avvcbimage Info <9666>: Available transport modes are file:san:hotadd:nbdssl:nbd

2015-04-17T18:55:18.478-05:-30 avvcbimage Info <9667>: Calling ConnectEx with servername=10.64.208.104 vmxspec=moref=vm-13 on port 443 snapshot(snapshot-26)

2015-04-17T18:55:18.478-05:-30 avvcbimage Info <9668>: virtual machine will be connected readonly

2015-04-17T18:55:18.478-05:-30 avvcbimage Info <16041>: VDDK:VixDiskLib: VixDiskLib_ConnectEx: Establish connection using (null).

2015-04-17T18:55:18.478-05:-30 avvcbimage Info <16041>: VDDK:VixDiskLib: VixDiskLib_Connect: Establish connection.

2015-04-17T18:55:18.478-05:-30 avvcbimage Info <16041>: VDDK:VixDiskLib: A thumbprint is required for SSL certificate validation. vixDiskLib.c line 2446

2015-04-17T18:55:18.478-05:-30 avvcbimage Info <16041>: VDDK:VixDiskLib: VixDiskLib_Connect: Failed to allocate connection. Error 3 (One of the parameters was invalid) at 3914.

 

All my software is version 6.0.

If there is any more information I can provide, please tell me.
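In case it is relevant: the log says a thumbprint is required for SSL certificate validation, and the SHA-1 thumbprint of the vCenter certificate can be read directly for comparison (the hostname is a placeholder):

# print the SHA-1 fingerprint of the certificate presented on port 443
openssl s_client -connect <vcenter-host>:443 </dev/null 2>/dev/null | openssl x509 -fingerprint -sha1 -noout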

Cannot create a quiesced snapshot because the create snapshot operation exceeded the time limit for holding off I/O in the frozen virtual machine.


 

When I try to back up a VM using VDR, the following error occurs:

Create virtual machine snapshot

Cannot create a quiesced snapshot because the create snapshot operation exceeded the time limit for holding off I/O in the frozen virtual machine.

 

 

How can I fix this issue?

 

 

Please help...thanks.
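In case it helps to reproduce this outside VDR, a quiesced snapshot can be requested by hand from the ESXi shell (the VM id is a placeholder):

vim-cmd vmsvc/getallvms                         # find the VM id (first column)
vim-cmd vmsvc/snapshot.create <vmid> qtest "quiesce test" 0 1
                                                # args: name, description, includeMemory=0, quiesced=1
vim-cmd vmsvc/snapshot.removeall <vmid>         # clean up afterwards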

 

 

VMware Data Protection 5.5.6.56 - VDDK:VixDiskLibVim: ESX/ESXi host is not licensed to use this feature.


Hello,

 

after I managed to upgrade to the latest VDP version 5.5.6.56, I can no longer run a backup job. When I trigger the start of any job, I get the following error:

 

Error 10055:  VixDiskLib_Open attempt to connect to virtual disk failed

 

Taking a closer look at the log files, I see the following:

 

VDDK:VixDiskLibVim: ESX/ESXi host is not licensed to use this feature.


Before the upgrade it worked, and I do have licenses for all ESXi hosts and vCenters. Is there any workaround? Is anybody else experiencing the same?
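For what it's worth, the license each host is actually presenting can be checked from the ESXi shell (output format varies by version):

vim-cmd vimsvc/license --show      # lists the assigned license and the features it enables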

 

Cheers

vdp-configure error message "cannot proceed because the appliance already has attached storage"


I am trying to deploy the vSphere Data Protection 5.5 OVA appliance.

I am on my third attempt at installation; I deleted the previous two installed VMs (or at least what I could find of them) after hacking them to death trying to get this thing to work.

 

I am at the vdp-configure stage, but I am now stuck at the Create Storage page.

If I try to either "create" storage or "attach" storage I get the same error box: "cannot proceed because the appliance already has attached storage".

 

So now I am stuck.

Searching the KB has come up with nothing.

I have checked the VM/OVA settings (but not changed anything).

I have checked the datastore; nothing unusual there.

 

So: where do I look next?

Are there remnants of the first two installs hidden someplace in the guts of vCenter?

Can I SSH in to the VM to check a setting or log file?
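If SSH can be enabled on the appliance, I assume I could at least look at what disks it thinks it has (generic Linux commands, nothing VDP-specific):

ssh root@<vdp-appliance>         # placeholder address
fdisk -l | grep '^Disk /dev'     # how many virtual disks does the appliance see?
df -h                            # is a data partition already formatted and mounted?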

 

Any suggestions?


VMDK file is locked..


Hello,

 

From my Symantec recovery backup, I ran the one-time VM conversion to create a VMDK file for attaching to a VM.

 

I attached this VMDK as a new hard disk in a VM, but now when I try to power on the VM I get the error "unable to access file vmdk since it is locked".

 

Please help me resolve this issue.
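From the ESXi shell, the lock can be investigated along these lines (datastore, folder, and disk names are placeholders):

touch /vmfs/volumes/<datastore>/<vm>/<disk>.vmdk       # can any host update the descriptor at all?
vmkfstools -D /vmfs/volumes/<datastore>/<vm>/<disk>-flat.vmdk
# the "owner" MAC address in the output identifies the host holding the lock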


Thanks,

Umakanth

VDP 5.8 - Datastore does not have enough free space for snapshot. Mk II


Hi,

 

I have a backup that worked fine in 5.6 but fails in 5.8 with the 'not enough free space for snapshot' error. The DS in question is indeed full, but the disk image on it is independent persistent and therefore not included in snapshots or backups (I can take a standard snapshot of the host without issue).

 

From the logs:

 

Prior Disk '2000': file(base):'[DS1] NAS-B/NAS-B.vmdk', backItUp=1

               snapshot file:'[DS1] NAS-B/NAS-B.vmdk'

               prior size(KB):0, current size(KB):16777216, match=0

               prior change block ID:''

               Datastore:'DS1' Directly Accessible=1

  Prior Disk '2001': file(base):'[DS1] NAS-B/NAS-B_1.vmdk', backItUp=1

               snapshot file:'[DS1] NAS-B/NAS-B_1.vmdk'

               prior size(KB):0, current size(KB):104857600, match=0

               prior change block ID:''

               Datastore:'DS1' Directly Accessible=1

  Prior Disk '2002': file(base):'[DS1] NAS-B/NAS-B_2.vmdk', backItUp=1

               snapshot file:'[DS1] NAS-B/NAS-B_2.vmdk'

               prior size(KB):0, current size(KB):10485760, match=0

               prior change block ID:''

               Datastore:'DS1' Directly Accessible=1

  Prior Disk '2003': file(base):'[NAS-B] NAS-B/NAS-B.vmdk', backItUp=0

               snapshot file:'[NAS-B] NAS-B/NAS-B.vmdk'

               prior size(KB):0, current size(KB):975175680, match=0

               prior change block ID:''

               Datastore:'NAS-B' Directly Accessible=1

  Prior Disk '2004': file(base):'[DS1] NAS-B/NAS-B_3.vmdk', backItUp=1

               snapshot file:'[DS1] NAS-B/NAS-B_3.vmdk'

               prior size(KB):0, current size(KB):20971520, match=0

               prior change block ID:''

               Datastore:'DS1' Directly Accessible=1

  Prior Disk '2005': file(base):'[DS1] NAS-B/NAS-B_4.vmdk', backItUp=1

               snapshot file:'[DS1] NAS-B/NAS-B_4.vmdk'

               prior size(KB):0, current size(KB):104857600, match=0

               prior change block ID:''

               Datastore:'DS1' Directly Accessible=1

 

Disk 2003 is the one causing the failure; note that 'backItUp' is set to 0 (false).

 

Again, evidence that the data on the NAS-B volume is not to be backed up (and therefore no snapshot is required, or indeed requested):

 

2014-10-10T08:04:30.293-08:00 avvcbimage Info <19660>: targetlist contains <path backup="false" name="[NAS-B] NAS-B/NAS-B.vmdk" diskCapacity="998579896320" />

 

Data store usage

 

datastore:'DS1                           '  capacity=1999575711744   free=968086257664

datastore:'NAS-B                         '  capacity=999922073600    free=311427072

 

So we can agree the NAS-B DS is full to capacity (but as noted above that's OK, as we don't want to back it up).

 

However, the entire backup then fails based on this:

 

2014-10-10T08:04:32.430-08:00 avvcbimage Info <19717>: DS(NAS-B) does not have enough free space (311427072     ) for disks used (49928994816).

2014-10-10T08:04:32.430-08:00 avvcbimage Error <19661>: Datastore does not have enough free space for snapshot

2014-10-10T08:04:32.430-08:00 avvcbimage Info <9772>: Starting graceful (staged) termination, failed to create snapshot (wrap-up stage)
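(For scale: the check wants 49,928,994,816 bytes, roughly 46.5 GB, free on NAS-B, which has only 311,427,072 bytes, roughly 297 MB, available.)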

 

Which seems like madness, seeing as I don't want this backed up anyway (the volume is independent persistent)!

 

I have also tried backing up the individual disks rather than taking an image of the whole VM; exactly the same failure.

 

Again, this worked fine under 5.6; the only thing that has changed is the version of VDP.

 

This has got to be a bug, right?


Recovering Accidentally Deleted VM Folder in DataStore


I accidentally deleted a folder containing a VM while browsing the datastore through the vSphere Client on ESXi 5.5. Can I recover it? I am new to this; can someone please help me?

VDP custom IPTABLES RULES


Hello, I would like to change the default VDP iptables rules.

I need to block all incoming traffic except from my network xxx.xxx.xxx.xxx.

 

Can someone show me the best way to do this hardening?

 

I found this:

 

less /etc/firewall.default

#!/bin/sh

 

 

# This is to be installed/run on each of the Avamar nodes on

# the customer network.

 

 

# In the case that something goes terribly wrong invoke the command:

# "service avfirewall stop" for SLES or "iptables stop" for RHEL.

# To see if the parameters are loaded run "service avfirewall status"

# on SLES or "iptables -L" on RHEL.

 

 

#-- OP_MODE should be set in the /etc/firewall.conf file

if [ -z "$OP_MODE" ]; then

  #-- OP_MODE wasn't set ... just default to FULL

  OP_MODE="FULL"

fi

 

 

# 1. Path to the iptables command

IPT=`which iptables`

#sleep 10

MYIP=`hostname -i`

# 2. Flush old rules, old custom tables

$IPT --flush

$IPT --delete-chain

 

 

# 3. Set default policies for all three default chains, drop all incoming and

# forwarded packets, allow outgoing packets

# NOTE: Since the "default" policy of the outbound connections is "ACCEPT",

# we do not need any further "OUTPUT" rules (except for the loopback interface)

$IPT -P INPUT DROP

$IPT -P FORWARD DROP

$IPT -P OUTPUT ACCEPT

 

 

# 4. Enable free use of loopback interfaces

$IPT -A INPUT -s 127.0.0.1 -j ACCEPT

$IPT -A OUTPUT -s 127.0.0.1 -j ACCEPT

 

 

# 5. Allow returning packets

$IPT -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

 

 

# 6. Allow ICMP traffic - for network debugging

$IPT -A INPUT -p icmp -j ACCEPT

 

 

# 7. Allow DNS and NTP access from any servers

# NOTE: add a "-s <ip address>" before the "-j" to specify which

# DNS and NTP servers may be allowed

#$IPT -A INPUT -p udp --dport 53 -j ACCEPT

#$IPT -A INPUT -p tcp --dport 53 -j ACCEPT

#$IPT -A INPUT -p udp --dport 123 -j ACCEPT

#$IPT -A INPUT -p tcp --dport 123 -j ACCEPT

 

 

# 8. Allow everyone to communicate on required ports

#$IPT -A INPUT -p tcp -m multiport --dport 22,80,443,7778,7779,7780,7781,8443,28001 -j ACCEPT

# allow port for MC Web services

#$IPT -A INPUT -p tcp --dport 9443 -j ACCEPT

# allow LDAP and LoginManager connections

#$IPT -A INPUT -p udp -m multiport --dport 389,700 -j ACCEPT

#$IPT -A INPUT -p tcp -m multiport --dport 389,700 -j ACCEPT

#$IPT -A OUTPUT -p tcp -m multiport --sport 389,700 -j ACCEPT

#$IPT -A OUTPUT -p udp -m multiport --sport 389,700 -j ACCEPT

 

 

#

# appliance can talk to itself

#

$IPT -A INPUT -p tcp -s $MYIP -d $MYIP -j ACCEPT

#

# Necessary for VDP to operate

$IPT -A OUTPUT -p tcp -m multiport --sport 22,80,902,7444,7778,8543,8580,9443 -j ACCEPT

$IPT -A OUTPUT -p udp -m multiport --sport 53,137,138 -j ACCEPT

#

$IPT -A INPUT -p tcp -m multiport --dport 22,80,902,7444,7778,8543,8580,9443 -j ACCEPT

$IPT -A INPUT -p udp -m multiport --dport 53,137,138 -j ACCEPT

#

$IPT -A INPUT -p tcp -m multiport --sport 7444 -j ACCEPT

#

# open communication on these encrypted ports

#

$IPT -A INPUT -p tcp --sport 443 -j ACCEPT

$IPT -A INPUT -p tcp --sport 9443 -j ACCEPT

#

# gsan ports

#

$IPT -A INPUT -p tcp -s $MYIP -d $MYIP --sport 27000 -j ACCEPT

$IPT -A INPUT -p tcp -s $MYIP -d $MYIP --sport 29000 -j ACCEPT

$IPT -A INPUT -p tcp -s $MYIP -d $MYIP --dport 27000 -j ACCEPT

$IPT -A INPUT -p tcp -s $MYIP -d $MYIP --dport 29000 -j ACCEPT

$IPT -A OUTPUT -p tcp -s $MYIP -d $MYIP --sport 27000 -j ACCEPT

$IPT -A OUTPUT -p tcp -s $MYIP -d $MYIP --sport 29000 -j ACCEPT

$IPT -A OUTPUT -p tcp -s $MYIP -d $MYIP --dport 27000 -j ACCEPT

$IPT -A OUTPUT -p tcp -s $MYIP -d $MYIP --dport 29000 -j ACCEPT

 

 

# New filter to stop UDP flooding

#$IPT -I INPUT -p tcp --dport 26000 -m state --state NEW -m recent --set

#$IPT -I INPUT -p tcp --dport 26000 -m state --state NEW -m recent --update --seconds 60 --hitcount 20 -j DROP

 

 

# 9. Allow everyone to communicate on GSAN required port ranges

#$IPT -A INPUT -p tcp -m multiport --dport 19000:19500 -j ACCEPT

#$IPT -A INPUT -p udp -m multiport --dport 19000:19500 -j ACCEPT

#$IPT -A INPUT -p tcp -m multiport --dport 20000:20500 -j ACCEPT

#$IPT -A INPUT -p udp -m multiport --dport 20000:20500 -j ACCEPT

#$IPT -A INPUT -p tcp -m multiport --dport 25000:25500 -j ACCEPT

#$IPT -A INPUT -p udp -m multiport --dport 25000:25500 -j ACCEPT

#$IPT -A INPUT -p tcp -m multiport --dport 26000:26500 -j ACCEPT

#$IPT -A INPUT -p udp -m multiport --dport 26000:26500 -j ACCEPT

#$IPT -A INPUT -p tcp -m multiport --dport 27000:27500 -j ACCEPT

#$IPT -A INPUT -p tcp -m multiport --dport 40000:45000 -j ACCEPT

# possible ports for apache tomcat mod_jk proxy tool

#$IPT -A INPUT -p tcp -m multiport --dport 8543,8580 -j ACCEPT

 

 

# 10. Allow SNMP traffic

# management console traffic

#$IPT -A INPUT -p udp --dport 161 -j ACCEPT

# data domain traps traffic

#$IPT -A INPUT -p udp --dport 162 -j ACCEPT

#

# Allow everyone communication on ports 27000/27001/27002

# NOTE: should this ONLY be for localhost and would be covered by rule 4

#$IPT -A INPUT -p tcp -m multiport --destination-port 27000,27001,27002 -j ACCEPT

 

 

# 11. Allow everyone to communicate in on ports 29000/29100 for stunnel

#$IPT -A INPUT -p tcp -m multiport --destination-port 29000,29100 -j ACCEPT

 

 

# 12. Allow everyone to communicate on ports range from 8778 to 8781

#$IPT -A INPUT -p tcp -m multiport --dport 8778:8781 -j ACCEPT

 

 

# 13. Allow DTLT default ports to be open

#$IPT -A INPUT -p tcp -m multiport --destination-port 8080,8181,8444 -j ACCEPT

 

 

#  DROP all other traffic and log it

# 14. Create a LOGDROP chain to log and drop packets

LOGLIMIT="2/s"

LOGLIMITBURST="10"

 

 

$IPT -N LOGDROP

$IPT -A LOGDROP -p tcp -m limit --limit $LOGLIMIT --limit-burst $LOGLIMITBURST -j LOG --log-level 7 --log-prefix "TCP LOGDROP: "

$IPT -A LOGDROP -p udp -m limit --limit $LOGLIMIT --limit-burst $LOGLIMITBURST -j LOG --log-level 7 --log-prefix "UDP LOGDROP: "

$IPT -A LOGDROP -p icmp -m limit --limit $LOGLIMIT --limit-burst $LOGLIMITBURST -j LOG --log-level 7 --log-prefix "ICMP LOGDROP: "

$IPT -A LOGDROP -f -m limit --limit $LOGLIMIT --limit-burst $LOGLIMITBURST -j LOG --log-level 7 --log-prefix "FRAGMENT LOGDROP: "

$IPT -A LOGDROP -j DROP

 

 

$IPT -A INPUT -p icmp -j LOGDROP -m pkttype ! --pkt-type broadcast

$IPT -A INPUT -p tcp -j LOGDROP -m pkttype ! --pkt-type broadcast

$IPT -A INPUT -p udp -j LOGDROP -m pkttype ! --pkt-type broadcast

 

 

$IPT -A INPUT -p tcp -j REJECT --reject-with tcp-reset

root@nastoosquare:/etc/init.d/rc3.d/#: service avfirewall status

Chain INPUT (policy DROP)

target     prot opt source               destination        

ACCEPT     all  --  localhost.localdomain  anywhere           

ACCEPT     all  --  anywhere             anywhere            state RELATED,ESTABLISHED

ACCEPT     icmp --  anywhere             anywhere           

ACCEPT     tcp  --  nastoosquare.virtualsolution.net  nastoosquare.virtualsolution.net

ACCEPT     tcp  --  anywhere             anywhere            multiport dports ssh,http,ideafarm-door,7444,interwise,8543,8580,tungsten-https

ACCEPT     udp  --  anywhere             anywhere            multiport dports domain,netbios-ns,netbios-dgm

ACCEPT     tcp  --  anywhere             anywhere            multiport sports 7444

ACCEPT     tcp  --  anywhere             anywhere            tcp spt:https

ACCEPT     tcp  --  anywhere             anywhere            tcp spt:tungsten-https

ACCEPT     tcp  --  nastoosquare.virtualsolution.net  nastoosquare.virtualsolution.net tcp spt:27000

ACCEPT     tcp  --  nastoosquare.virtualsolution.net  nastoosquare.virtualsolution.net tcp spt:29000

ACCEPT     tcp  --  nastoosquare.virtualsolution.net  nastoosquare.virtualsolution.net tcp dpt:27000

ACCEPT     tcp  --  nastoosquare.virtualsolution.net  nastoosquare.virtualsolution.net tcp dpt:29000

LOGDROP    icmp --  anywhere             anywhere            PKTTYPE != broadcast

LOGDROP    tcp  --  anywhere             anywhere            PKTTYPE != broadcast

LOGDROP    udp  --  anywhere             anywhere            PKTTYPE != broadcast

REJECT     tcp  --  anywhere             anywhere            reject-with tcp-reset

 

 

Chain FORWARD (policy DROP)

target     prot opt source               destination        

 

 

Chain OUTPUT (policy ACCEPT)

target     prot opt source               destination        

ACCEPT     all  --  localhost.localdomain  anywhere           

ACCEPT     tcp  --  anywhere             anywhere            multiport sports ssh,http,ideafarm-door,7444,interwise,8543,8580,tungsten-https

ACCEPT     udp  --  anywhere             anywhere            multiport sports domain,netbios-ns,netbios-dgm

ACCEPT     tcp  --  nastoosquare.virtualsolution.net  nastoosquare.virtualsolution.net tcp spt:27000

ACCEPT     tcp  --  nastoosquare.virtualsolution.net  nastoosquare.virtualsolution.net tcp spt:29000

ACCEPT     tcp  --  nastoosquare.virtualsolution.net  nastoosquare.virtualsolution.net tcp dpt:27000

ACCEPT     tcp  --  nastoosquare.virtualsolution.net  nastoosquare.virtualsolution.net tcp dpt:29000

 

 

Chain LOGDROP (3 references)

target     prot opt source               destination        

LOG        tcp  --  anywhere             anywhere            limit: avg 2/sec burst 10 LOG level debug prefix `TCP LOGDROP: '

LOG        udp  --  anywhere             anywhere            limit: avg 2/sec burst 10 LOG level debug prefix `UDP LOGDROP: '

LOG        icmp --  anywhere             anywhere            limit: avg 2/sec burst 10 LOG level debug prefix `ICMP LOGDROP: '

LOG        all  -f  anywhere             anywhere            limit: avg 2/sec burst 10 LOG level debug prefix `FRAGMENT LOGDROP: '

DROP       all  --  anywhere             anywhere           

-------------------------------------------------------------------------------------------------------------------------------------------------------

 

What do you think about these changes?

 

Original

$IPT -A OUTPUT -p tcp -m multiport --sport 22,80,902,7444,7778,8543,8580,9443 -j ACCEPT

$IPT -A OUTPUT -p udp -m multiport --sport 53,137,138 -j ACCEPT

 

 

Modified

$IPT -A OUTPUT -p tcp -m multiport -s xx.57.10.0/24  -d xx.57.10.0/24 --sport 22,80,902,7444,7778,8543,8580,9443 -j ACCEPT

$IPT -A OUTPUT -p udp -m multiport -s xx.57.10.0/24  -d xx.57.10.0/24 --sport 53,137,138 -j ACCEPT

 

 

--------------------------------------------------------------------------------------------------

 

 

Original

$IPT -A INPUT -p tcp -m multiport --dport 22,80,902,7444,7778,8543,8580,9443 -j ACCEPT

$IPT -A INPUT -p udp -m multiport --dport 53,137,138 -j ACCEPT

 

 

Modified

$IPT -A INPUT -p tcp -m multiport -s xx.57.10.0/24  -d xx.57.10.0/24 --dport 22,80,902,7444,7778,8543,8580,9443 -j ACCEPT

$IPT -A INPUT -p udp -m multiport -s xx.57.10.0/24  -d xx.57.10.0/24 --dport 53,137,138 -j ACCEPT

 

 

--------------------------------------------------------------------------------------------------

 

 

$IPT -A INPUT -p tcp -m multiport -s xx.57.10.0/24  -d xx.57.10.0/24 --sport 7444 -j ACCEPT

 

 

$IPT -A INPUT -p tcp -s xx.57.10.0/24  -d xx.57.10.0/24 --sport 443 -j ACCEPT

$IPT -A INPUT -p tcp -s xx.57.10.0/24  -d xx.57.10.0/24 --sport 9443 -j ACCEPT
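Alternatively, if the goal is simply "only my subnet may initiate inbound connections", maybe something this simple would do (the subnet is a placeholder, and VDP still has to talk to vCenter/ESXi, so it needs testing before persisting):

MGMT_NET="xx.57.10.0/24"    # placeholder management subnet
# keep the loopback, RELATED/ESTABLISHED and ICMP rules from the stock script,
# then restrict the service ports to the management network only:
$IPT -A INPUT -p tcp -s $MGMT_NET -m multiport --dport 22,80,902,7444,7778,8543,8580,9443 -j ACCEPT
$IPT -A INPUT -p udp -s $MGMT_NET -m multiport --dport 53,137,138 -j ACCEPT
# everything else falls through to the LOGDROP chain / DROP policy

One thing I noticed: the stock --sport 443 and --sport 9443 INPUT rules accept packets based on source port alone, while genuine return traffic is already covered by the RELATED,ESTABLISHED rule, so those look like candidates for tightening as well.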

 

 

 

 

Thanks

 

 

PS: the default rules are these:

 

Chain INPUT (policy DROP 3367 packets, 263K bytes)

pkts bytes target     prot opt in     out     source               destination       

156K   25M ACCEPT     all  --  *      *       127.0.0.1            0.0.0.0/0         

757K  370M ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED

   34  2806 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0         

5115  306K ACCEPT     tcp  --  *      *       xx.57.10.221         xx.57.10.221      

  193 10484 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           multiport dports 22,80,902,7444,7778,8543,8580,9443

  896 92128 ACCEPT     udp  --  *      *       0.0.0.0/0            0.0.0.0/0           multiport dports 53,137,138

    0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           multiport sports 7444

    2    88 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           tcp spt:443

    0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           tcp spt:9443

    0     0 ACCEPT     tcp  --  *      *       xx.57.10.221         xx.57.10.221        tcp spt:27000

    0     0 ACCEPT     tcp  --  *      *       xx.57.10.221         xx.57.10.221        tcp spt:29000

    0     0 ACCEPT     tcp  --  *      *       xx.57.10.221         xx.57.10.221        tcp dpt:27000

    0     0 ACCEPT     tcp  --  *      *       xx.57.10.221         xx.57.10.221        tcp dpt:29000

    0     0 LOGDROP    icmp --  *      *       0.0.0.0/0            0.0.0.0/0           PKTTYPE != broadcast

  125  6805 LOGDROP    tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           PKTTYPE != broadcast

   54  3862 LOGDROP    udp  --  *      *       0.0.0.0/0            0.0.0.0/0           PKTTYPE != broadcast

    0     0 REJECT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           reject-with tcp-reset

 

 

 

 

Chain FORWARD (policy DROP 0 packets, 0 bytes)

pkts bytes target     prot opt in     out     source               destination       

 

 

 

 

Chain OUTPUT (policy ACCEPT 470K packets, 159M bytes)

pkts bytes target     prot opt in     out     source               destination       

156K   25M ACCEPT     all  --  *      *       127.0.0.1            0.0.0.0/0         

207K  120M ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           multiport sports 22,80,902,7444,7778,8543,8580,9443

    0     0 ACCEPT     udp  --  *      *       0.0.0.0/0            0.0.0.0/0           multiport sports 53,137,138

36416   14M ACCEPT     tcp  --  *      *       xx.57.10.221         xx.57.10.221        tcp spt:27000

   33 10198 ACCEPT     tcp  --  *      *       xx.57.10.221         xx.57.10.221        tcp spt:29000

42448 7299K ACCEPT     tcp  --  *      *       xx.57.10.221         xx.57.10.221        tcp dpt:27000

   43  7048 ACCEPT     tcp  --  *      *       xx.57.10.221         xx.57.10.221        tcp dpt:29000

 

 

 

 

Chain LOGDROP (3 references)

pkts bytes target     prot opt in     out     source               destination       

  125  6805 LOG        tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           limit: avg 2/sec burst 10 LOG flags 0 level 7 prefix `TCP LOGDROP: '

   54  3862 LOG        udp  --  *      *       0.0.0.0/0            0.0.0.0/0           limit: avg 2/sec burst 10 LOG flags 0 level 7 prefix `UDP LOGDROP: '

    0     0 LOG        icmp --  *      *       0.0.0.0/0            0.0.0.0/0           limit: avg 2/sec burst 10 LOG flags 0 level 7 prefix `ICMP LOGDROP: '

    0     0 LOG        all  -f  *      *       0.0.0.0/0            0.0.0.0/0           limit: avg 2/sec burst 10 LOG flags 0 level 7 prefix `FRAGMENT LOGDROP: '

  179 10667 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0 

Backing up VMware VMs


We are backing up our VMs with BackupExec 2012. Up until a month ago the VM backups were running fine using the SAN as the transport; now they will only run over the network. The only thing that has changed is that the backup server seems to have initialized the LUNs the VMs are on. Is there a way for me to un-initialize these so we can get back to backing up via SAN?

 

Any help is great.

 

Thanks

Upgrading 5.6 to 5.8


While following the steps for upgrading VDP 5.6 to VDP 5.8, I got the error message "Installation of package stalled" within a few minutes of starting the update. I rolled back the installation to the previously created snapshot, but would like to update the appliance to the latest level.

 

Where do I need to search for a solution?

Automating VDPA backups?


I know that the VDP documentation states that the only way to manage backups is through the web client plugin, but is there really no way to automate backups, for example via API calls or directly through the CLI?

I'm looking for a way to automatically include VMs in a backup policy after a virtual machine gets provisioned by vCAC. I've tried including a whole resource pool in the backup policy, so that each VM migrated there would be backed up, but this does not work.
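The only unsupported idea I have found so far: VDP is Avamar-based and the appliance appears to ship with the Avamar mccli management CLI, so something along these lines might work (the command names and the client/group paths are assumptions I have not verified on VDP):

# on the VDP appliance, assuming the Avamar mccli is present and configured
mccli group show                                            # list backup groups (policies)
mccli client add-group --name=/<domain>/<vm-name> --group-name=/<domain>/<group-name>
                                                            # add a freshly provisioned VM to a policy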

CBT backup size versus guest-OS modified file size


Hi!

 

I administer a vSphere 5.5 environment using IBM Tivoli Storage Manager for Virtual Environments (TSM-VE) as our backup solution. TSM-VE uses CBT to make incremental-forever backups of our VMs, meaning that it backs up the entire VM only once; all subsequent backups are incrementals. We back up our VMs once a day.

 

I'm currently investigating why certain Windows VMs in our vSphere environment generate huge incremental backups. During my investigation I hit on something I don't understand.

 

As a quick-and-dirty test, I did a file scan on a VM, to list all files that had been modified on the VM during the last 24 hours. I then added up the total file size of all those modified files. Then, I compared this combined file size with the size of the incremental TSM-VE backup for that day. I repeated this test on a number of VMs, smaller as well as larger ones.
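(On a Linux guest the scan boils down to something like the following; our VMs are Windows, so this is only to illustrate the method.)

# sum the sizes of all files modified in the last 24 hours
find / -xdev -type f -mtime -1 -printf '%s\n' 2>/dev/null |
  awk '{s+=$1} END {printf "%.1f MB modified in the last 24h\n", s/1048576}'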

 

I had expected the combined file size to be much larger than the size of the incremental backup. After all, the fact that a file is modified does not mean that the entire file has changed (meaning that all CBT blocks need to be backed up). I expected a CBT backup to be more efficient, size-wise, than an incremental file-based backup.

 

Instead, I found out that on all VMs, the incremental TSM-VE backup was consistently 1.5 to twice the size of the combined modified file size, exactly the opposite of the result I expected.

 

I've tried to think of a few things that could cause this discrepancy.

 

1) In-guest disk defrag. This would change the blocks without changing the files, messing up the way CBT works. However, there are no scheduled or unscheduled defrags on our VMs.

2) The files on the VM are smaller than the CBT blocks. That could cause a small file to mark a larger CBT block as changed. However, as I understand it, CBT blocks are usually quite small (and not the same as VMFS blocks).

 

What am I missing here? Is there some other process that changes the VMDK storage blocks of my Windows VMs without changing the actual files? Is my quick-and-dirty file scan too simplistic? I really hope that someone can explain this to me, thanks!

vDP File Level Restore Issue


Hi Guys,

 

I am having an issue within my vSphere environment that has me stuck.

 

I have two VDP appliances across two datacenters. Each appliance does local backups of the VMs within its datacenter, replicated to the other appliance using VDP Replication.

 

I have no problems with the VDP appliance at one location and I can access the /vdp-configure and /flr from anywhere.

 

At the other datacenter, I can only access /vdp-configure, but /flr gives a 404 error from Tomcat that says 'resource not available'.

 

I deployed this VDP the same way as the other, using an OVA file, and the configuration is correct. I even tried redeploying the appliance and I have the same issue.


Configuration works fine and the unit is taking successful backups; I just don't know why FLR doesn't work for this one appliance.
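What I plan to check next, in case it helps (the port and paths are guesses; I will search for the Tomcat logs rather than trusting a fixed path):

# from any machine: does the endpoint answer at all?
curl -k -I https://<vdp-appliance>:8543/flr        # 8543 assumed; use whatever port /vdp-configure answers on
# on the appliance: find Tomcat's logs and look for webapp deployment errors
find / -name 'catalina*.out' -o -name 'catalina*.log' 2>/dev/null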

 

Any thoughts or suggestions for where I can look would be appreciated.

 

Thanks

Omar 


Anyone else having these VDR issues?


I love the fact that VMware is providing VDR as part of the vSphere package. It's definitely a step in the right direction, although I'm still inclined to think this software hasn't been put through the wringer in terms of proper QA. I'm just putting out a feeler to see how many others have experienced some of the same issues I'm having.

 

To start, I'm backing up my VMs via a network share on a standalone Windows 2003 server that has a NAS attached to it.

 

Some of the issues I've noticed:

 

1) Backups take an inordinate amount of time. I can understand the first backup, but my VMs don't change very much from day to day. Most of the data being manipulated is located on RDMs, and these are backed up using Tivoli, not VDR (I use VDR solely for the OS partitions). Each partition is approximately 25 GB, there are 15 VMs, and my backup window (10pm - 6pm) isn't sufficient to complete the process.

 

2) Integrity checks for the backups are taking a crazy amount of time and will usually stop due to my window being closed (see point #1)

 

3) I'm getting inconsistent "failures" for certain VMs (the report will simply state that a VM failed to backup, not much else). It also varies per night and not always the same VMs (not exactly sure if this is related to #1 where the window is closing while VDR is executing)

 

4) I had the most difficult time setting up the remote share from the VDR appliance in vSphere. The username and password would never be accepted (even though if I tried the same share with the same user/pass on a Windows machine, it would work fine). I finally narrowed down the problem to the simple fact that the VDR appliance can't handle passwords that have special characters in them (this password had an "@" and a ","). Looking at the console while attempting to mount the share would spit out a CIFS error -22. Changing the password to include only numbers and letters was sufficient to work around this issue.

 

5) Snapshots not being created for no apparent reason, thus failing the VDR process. I'm fully able to take a manual snapshot with or without the memory state, so I'm not sure why VDR can't do it. This issue is very intermittent. I had it often when I first set up VDR, but now it only happens every so often (without any kind of consistency).

 

I think that's all I can think of for now.

VDR: Trouble writing to destination disk. Error -1115 (disk full)


Hey guys,

 

I am seeing this error in my logs:

7/27/2009 9:49:34 AM: Normal backup using Backup Servers

7/27/2009 9:49:39 AM: Copying dev2

7/27/2009 9:52:34 AM: Performing full back up of disk "[data2] dev2/dev2-flat.vmdk" using "SCSI Hot-Add"

7/27/2009 10:10:30 AM: Trouble writing to destination volume, error -1115 ( disk full)

7/27/2009 10:10:50 AM: Task incomplete

7/27/2009 10:10:50 AM: Remaining: 7 files, 31.6 GB

7/27/2009 10:10:50 AM: Completed: 0 files, 18.5 GB

7/27/2009 10:10:50 AM: Performance: 1033.3 MB/minute

7/27/2009 10:10:50 AM: Duration: 00:21:17 (00:03:00 idle/loading/preparing)

 

 

When I try to back up a VM by selecting "Backup now", I get the message that the backup did not start because the destination is busy. I tried unmounting and mapping it again, but no go.

There is 700 GB out of 1 TB free. Any ideas?

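For what it's worth, here is what I intend to check from the appliance console (generic Linux commands, nothing VDR-specific):

df -h               # how much space does the appliance itself see on the destination mount?
mount | grep cifs   # confirm the network share is actually mounted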

Poor NFS Performance: OpenFiler


Good Day Everyone,

 

I have an issue with Openfiler, a Linux-based operating system that converts a computer into a SAN/NAS appliance. Here is the problem: in my environment we have two NetApp StoreVault 500 appliances to which I normally perform backups over NFS. There are two backup cron jobs that use ghettoVCB to back up two groups of VMs. One group is a pool of 3 VMs; this takes 13 minutes to complete. A second job backs up a pool of 5 VMs to the second StoreVault appliance and takes 2 hours.

 

We then installed Openfiler on an old server with 2-core Xeon processors, using software RAID 5. When performing the same backups to an Openfiler NFS share, the first backup job, which normally takes 13 minutes, takes around 4 hours, and the second, which normally takes 2 hours, takes almost 10 hours to complete. This is unacceptable, especially considering the strain placed on the host ESX server. I assumed the CPU overhead of software RAID 5 explained the long backup times.

 

I then installed Openfiler on a second server, an IBM x306 machine with an Intel P4 processor; this time no software RAID, or any RAID at all, just a single 750 GB hard drive containing the OS, with the rest of the disk used to back up VMs over an NFS share. I performed the first backup job of the pool of 3 VMs; this time it took an hour and a half to complete instead of 13 minutes!

 

Is Openfiler simply poor at being an NFS server? Has anyone else had these issues with Openfiler?
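To separate raw NFS/disk throughput from ghettoVCB itself, I am considering a quick write test against the mounted share (the mount point is a placeholder):

# write 1 GB to the NFS mount, bypassing the page cache, and time it
time dd if=/dev/zero of=/mnt/<openfiler-share>/testfile bs=1M count=1024 oflag=direct
# drop oflag=direct if the local dd build rejects it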

 

Thanks

Steve

Disk D becomes readonly after snapshot


Hi gurus,

 

we are currently encountering problems with one of our Windows 2008 64-bit machines (not R2!).

The server is backed up with VDR 1.2.0.1131. At the beginning the snapshot is taken; everything's fine so far. But after the snapshot completes and the backup is running, disk D inside the guest system becomes read-only!

 

This RO flag is never removed, even after the backup completes, and the server is no longer usable. The following errors are displayed in the logs of the Windows server:

 

System Log:

  • The system failed to flush data to the transaction log. Corruption may occur.
  • Application popup: Windows - Delayed Write Failed : Exception Processing Message 0xc000a082 Parameters 0x000007FEFD1A722C 0x000007FEFD1A722C 0x000007FEFD1A722C 0x000007FEFD1A722C
  • {Delayed Write Failed} Windows was unable to save all the data for the file D:\Notes\data\pid.nbf; the data has been lost.  This error may be caused if the device has been removed or the media is write-protected.

 

Application Log:

  • Volume Shadow Copy Error: VSS waited more than 40 seconds for all volumes to be flushed.  This caused volume \\?\Volume{6e198479-c63d-11df-b53f-806e6f6e6963}\ to timeout while waiting for the release-writes phase of shadow copy creation.  Trying again when disk activity is lower may solve this problem.

    Operation:
       Executing Asynchronous Operation

    Context:
       Current State: flush-and-hold writes
  • Volume Shadow Copy Service error: The I/O writes cannot be held during the shadow copy creation period on volume D:\. The volume index in the shadow copy set is 0. Error details: Open[0x00000000], Flush[0x00000000], Release[0x00000000], OnRun[0x80042314].

    Operation:
       Executing Asynchronous Operation

    Context:
       Current State: DoSnapshotSet
  • The VSS service is shutting down due to idle timeout.

 

 

Has anybody seen this error before and can assist in fixing it?

 

Thanks and best regards

Marco

Bottleneck = source


Dear all,

 

I am using ESXi 3.5 (free version) and I am trialling Veeam Backup & Replication. The most I can get out of it is 5-6 MB/s; my VM is 570 GB and the whole thing is taking 25-odd hours.
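(Sanity check: 570 GB over 25 hours is about 583,680 MB over 90,000 seconds, roughly 6.5 MB/s, so the rate and the duration are at least consistent with each other.)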

 

Is this a limitation of the free version, or have I got something wrong?
