Restore Exchange 2013 DAG

Suppose you have a two-server Exchange 2013 DAG setup (plus an arbitrary witness machine). You would like to physically relocate the servers, so you shut down Machine_A, then Machine_B, then the witness. After relocating the servers, you think you can leave soon. But suddenly, Machine_B has a serious mainboard problem and cannot start again. Because Machine_A was shut down earlier, the failover cluster refuses to start, and the whole Exchange system is down.

Use the command in the link below to force the cluster to start, using the forcequorum option.

https://technet.microsoft.com/en-us/library/dd351049(v=exchg.150).aspx

net start clussvc /forcequorum

Now the failover cluster is up, but Exchange still refuses to start because the mailbox database copy on Machine_A is not the latest version.

So you need to force the copy on Machine_A to become active, using the method in this link.

http://blogs.technet.com/b/timmcmic/archive/2012/05/30/exchange-2010-the-mystery-of-the-9223372036854775766-copy-queue.aspx

You may also discover that the copy queue is 9,223,372,036,854,775,766 entries long.

Move-ActiveMailboxDatabase DB01 -ActivateOnServer YOUR_SERVER_NAME -SkipLagChecks -SkipActiveCopyChecks -MountDialOverride:BESTEFFORT -SkipClientExperienceChecks

But be careful! You may lose some email with this command. Use it at your own risk!

Do this for all mailbox databases. After you have brought up the database containing your administrator mailbox, you can log in to ECP.

Then we can set up a new machine to replace the old one. Install Windows, update it, use the IP of the old machine, and use the same computer name. Install the Exchange Server prerequisites. Consult the documents you used when you installed Exchange Server the first time.

The procedure is in this link:
https://technet.microsoft.com/en-us/library/dd638206(v=exchg.150).aspx

But in step 5, you may need to use
setup /m:RecoverServer /IAcceptExchangeServerLicenseTerms
Instead of
setup /m:RecoverServer

After adding the DAG member back, you need to force it to reseed.

This link will help:
http://msexchangeguru.com/2012/09/24/dag-recovery/

But the
Update-MailboxDatabaseCopy -Identity <DBName>\<DestinationServerName> -SourceServer <SourceMailboxServer> -DeleteExistingFiles
command will fail with an error. You may need to wait about 10 minutes after running the Suspend-MailboxDatabaseCopy command in order to run it successfully.

Also, Update-MailboxDatabaseCopy will block your console. If you have multiple huge mailbox databases, you need to do the first one, eat something, then come back to do the second one.

Good luck. Again, use the steps above at your own risk. To reduce risk, consider using a paid service.

QNAP PostgreSQL scheduled cronjob backup

Recently I needed to configure backups for PostgreSQL on a QNAP NAS. It's much more difficult than I imagined.

You can easily google the sample script from PostgreSQL's wiki.

https://wiki.postgresql.org/wiki/Automated_Backup_on_Linux

However, it cannot be used directly on QNAP.

Firstly, psql and pg_dump are not installed in the /bin folder. You need to use full paths for the commands.

Secondly, you will face a "PQparameterS not found" problem. After some googling, it turns out you need to set an environment variable (LD_LIBRARY_PATH) in order to run the commands.

Thirdly, the command needs to include -E utf8.

Fourthly, the find command in QNAP does not support the -maxdepth option.

Fifthly, the custom backup in QNAP does not support compression, and the find command does not support the -exec option.
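Since a shell glob only expands one directory level, a plain loop can stand in for the missing -maxdepth option. A minimal sketch, using a throwaway directory (the real script below still uses find; this is just the idea):

```shell
# Demo of a -maxdepth-free cleanup; /tmp/pgbk_demo is a scratch path for illustration.
BACKUP_DIR=/tmp/pgbk_demo
mkdir -p "$BACKUP_DIR/2015-01-01-monthly" "$BACKUP_DIR/keep/2015-02-01-monthly"

# The glob matches only top-level entries, so the nested directory survives.
for d in "$BACKUP_DIR"/*-monthly/; do
  [ -d "$d" ] && rm -rf "$d"
done

ls "$BACKUP_DIR"
```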

So finally, I added two variables to the pg_backup.config file. (You need to create the postgresqlbackup shared folder using the GUI first.)

pg_backup.config

##############################
## POSTGRESQL BACKUP CONFIG ##
##############################

##############################
## Added by me ###############
##############################

PSQL_PATH=/share/CACHEDEV1_DATA/.qpkg/PostgreSQL/bin/psql
PG_DUMP_PATH=/share/CACHEDEV1_DATA/.qpkg/PostgreSQL/bin/pg_dump

##### End added by me ########
 
# Optional system user to run backups as.  If the user the script is running as doesn't match this
# the script terminates.  Leave blank to skip check.
BACKUP_USER=
 
# Optional hostname to adhere to pg_hba policies.  Will default to "localhost" if none specified.
HOSTNAME=
 
# Optional username to connect to database as.  Will default to "postgres" if none specified.
USERNAME=
 
# This dir will be created if it doesn't exist.  This must be writable by the user the script is
# running as.
BACKUP_DIR=/share/postgresqlbackup/
 
# List of strings to match against in database name, separated by space or comma, for which we only
# wish to keep a backup of the schema, not the data. Any database names which contain any of these
# values will be considered candidates. (e.g. "system_log" will match "dev_system_log_2010-01")
SCHEMA_ONLY_LIST=""
 
# Will produce a custom-format backup if set to "yes"
ENABLE_CUSTOM_BACKUPS=yes
 
# Will produce a gzipped plain-format backup if set to "yes"
ENABLE_PLAIN_BACKUPS=yes
 
 
#### SETTINGS FOR ROTATED BACKUPS ####
 
# Which day to take the weekly backup from (1-7 = Monday-Sunday)
DAY_OF_WEEK_TO_KEEP=5
 
# Number of days to keep daily backups
DAYS_TO_KEEP=7
 
# How many weeks to keep weekly backups
WEEKS_TO_KEEP=5
 
######################################

Then pg_backup_rotated.sh will look like this:

#!/bin/bash

#########################################
##### QNAP 4.1.3/ POSTGRESQL 9.3.4.1 ####
#########################################

export LD_LIBRARY_PATH=/share/CACHEDEV1_DATA/.qpkg/PostgreSQL/lib/

###########################
####### LOAD CONFIG #######
###########################

while [ $# -gt 0 ]; do
case $1 in
-c)
CONFIG_FILE_PATH="$2"
shift 2
;;
*)
echo "Unknown Option \"$1\"" 1>&2
exit 2
;;
esac
done

if [ -z "$CONFIG_FILE_PATH" ] ; then
SCRIPTPATH=$(cd ${0%/*} && pwd -P)
CONFIG_FILE_PATH="${SCRIPTPATH}/pg_backup.config"
fi

if [ ! -r ${CONFIG_FILE_PATH} ] ; then
echo "Could not load config file from ${CONFIG_FILE_PATH}" 1>&2
exit 1
fi

source "${CONFIG_FILE_PATH}"

###########################
#### PRE-BACKUP CHECKS ####
###########################

# Make sure we're running as the required backup user
if [ "$BACKUP_USER" != "" -a "$(id -un)" != "$BACKUP_USER" ] ; then
echo "This script must be run as $BACKUP_USER. Exiting." 1>&2
exit 1
fi

###########################
### INITIALISE DEFAULTS ###
###########################

if [ ! $HOSTNAME ]; then
HOSTNAME="localhost"
fi;

if [ ! $USERNAME ]; then
USERNAME="postgres"
fi;

###########################
#### START THE BACKUPS ####
###########################

function perform_backups()
{
SUFFIX=$1
FINAL_BACKUP_DIR=$BACKUP_DIR"`date +\%Y-\%m-\%d`$SUFFIX/"

echo "Making backup directory in $FINAL_BACKUP_DIR"

if ! mkdir -p $FINAL_BACKUP_DIR; then
echo "Cannot create backup directory in $FINAL_BACKUP_DIR. Go and fix it!" 1>&2
exit 1;
fi;

###########################
### SCHEMA-ONLY BACKUPS ###
###########################

for SCHEMA_ONLY_DB in ${SCHEMA_ONLY_LIST//,/ }
do
SCHEMA_ONLY_CLAUSE="$SCHEMA_ONLY_CLAUSE or datname ~ '$SCHEMA_ONLY_DB'"
done

SCHEMA_ONLY_QUERY="select datname from pg_database where false $SCHEMA_ONLY_CLAUSE order by datname;"

echo -e "\n\nPerforming schema-only backups"
echo -e "--------------------------------------------\n"

SCHEMA_ONLY_DB_LIST=`"$PSQL_PATH" -h "$HOSTNAME" -U "$USERNAME" -At -c "$SCHEMA_ONLY_QUERY" postgres`

echo -e "The following databases were matched for schema-only backup:\n${SCHEMA_ONLY_DB_LIST}\n"

for DATABASE in $SCHEMA_ONLY_DB_LIST
do
echo "Schema-only backup of $DATABASE"

if ! "$PG_DUMP_PATH" -E utf8 -Fp -s -h "$HOSTNAME" -U "$USERNAME" "$DATABASE" | gzip > $FINAL_BACKUP_DIR"$DATABASE"_SCHEMA.sql.gz.in_progress; then
echo "[!!ERROR!!] Failed to backup database schema of $DATABASE" 1>&2
else
mv $FINAL_BACKUP_DIR"$DATABASE"_SCHEMA.sql.gz.in_progress $FINAL_BACKUP_DIR"$DATABASE"_SCHEMA.sql.gz
fi
done

###########################
###### FULL BACKUPS #######
###########################

for SCHEMA_ONLY_DB in ${SCHEMA_ONLY_LIST//,/ }
do
EXCLUDE_SCHEMA_ONLY_CLAUSE="$EXCLUDE_SCHEMA_ONLY_CLAUSE and datname !~ '$SCHEMA_ONLY_DB'"
done

FULL_BACKUP_QUERY="select datname from pg_database where not datistemplate and datallowconn $EXCLUDE_SCHEMA_ONLY_CLAUSE order by datname;"

echo -e "\n\nPerforming full backups"
echo -e "--------------------------------------------\n"

for DATABASE in `"$PSQL_PATH" -h "$HOSTNAME" -U "$USERNAME" -At -c "$FULL_BACKUP_QUERY" postgres`
do
if [ $ENABLE_PLAIN_BACKUPS = "yes" ]
then
echo "Plain backup of $DATABASE"

if ! "$PG_DUMP_PATH" -E utf8 -Fp -h "$HOSTNAME" -U "$USERNAME" "$DATABASE" | gzip > $FINAL_BACKUP_DIR"$DATABASE".sql.gz.in_progress; then
echo "[!!ERROR!!] Failed to produce plain backup database $DATABASE" 1>&2
else
mv $FINAL_BACKUP_DIR"$DATABASE".sql.gz.in_progress $FINAL_BACKUP_DIR"$DATABASE".sql.gz
fi
fi

if [ $ENABLE_CUSTOM_BACKUPS = "yes" ]
then
echo "Custom backup of $DATABASE"

if ! "$PG_DUMP_PATH" -E utf8 -Z 0 -Fc -h "$HOSTNAME" -U "$USERNAME" "$DATABASE" | gzip > $FINAL_BACKUP_DIR"$DATABASE".custom.gz.in_progress; then
echo "[!!ERROR!!] Failed to produce custom backup database $DATABASE"
else
mv $FINAL_BACKUP_DIR"$DATABASE".custom.gz.in_progress $FINAL_BACKUP_DIR"$DATABASE".custom.gz
fi
fi

done

echo -e "\nAll database backups complete!"
}

# MONTHLY BACKUPS

DAY_OF_MONTH=`date +%d`

if [ $DAY_OF_MONTH -eq 1 ];
then
# Delete all expired monthly directories
find $BACKUP_DIR -maxdepth 1 -name "*-monthly" | xargs /bin/rm -rf

perform_backups “-monthly”

exit 0;
fi

# WEEKLY BACKUPS

DAY_OF_WEEK=`date +%u` #1-7 (Monday-Sunday)
EXPIRED_DAYS=`expr $((($WEEKS_TO_KEEP * 7) + 1))`

if [ $DAY_OF_WEEK = $DAY_OF_WEEK_TO_KEEP ];
then
# Delete all expired weekly directories
find $BACKUP_DIR -maxdepth 1 -mtime +$EXPIRED_DAYS -name "*-weekly" | xargs /bin/rm -rf

perform_backups “-weekly”

exit 0;
fi

# DAILY BACKUPS

# Delete daily backups 7 days old or more
find $BACKUP_DIR -mtime +$DAYS_TO_KEEP -name "*-daily" | xargs /bin/rm -rf

perform_backups “-daily”

The pg_backup.sh file is not used.

Then you need to put the two files somewhere, e.g. /share/postgresqlbackup/scripts/

Then you can run the script to see whether it is working.

After that, you need to use crontab in QNAP to schedule it to run. However, crontab on QNAP is not very easy to set up; you need to see the instructions at the bottom of this page.

http://wiki.qnap.com/wiki/Add_items_to_crontab

1. Edit /etc/config/crontab and add your custom entry.
2. Run 'crontab /etc/config/crontab' to load the changes.
3. Restart cron, i.e. '/etc/init.d/crond.sh restart'

Remember to chmod 755 /share/postgresqlbackup/scripts/pg_backup_rotated.sh
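For reference, the custom entry is just a standard cron line. The schedule below (02:00 every day) is an example; adjust it and the script path to your setup:

```
# minute hour day-of-month month day-of-week command
0 2 * * * /share/postgresqlbackup/scripts/pg_backup_rotated.sh
```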

This works for me on QNAP firmware version 4.1.3 and PostgreSQL 9.3.4.1. Hope this will work for you too, but I do not guarantee it. Use it at your own risk.

Upgrading Fedora 18 to 19 using Fedup with Btrfs RAID1 /boot partition

Before reading, please be reminded that the recommended partition layout is a 250M ext3/4 /boot partition.

Grub2 began to support booting from btrfs last year. I like to try new stuff, so I installed my Fedora 18 with a btrfs RAID1 /boot partition. But after several updates, I began to notice that whenever the kernel was updated, the new kernel would not appear in the grub boot menu.

grubby fatal error: unable to find a suitable template

After some googling, it turns out the problem is related to grubby, and there is a workaround: just run grub2-mkconfig -o /boot/grub2/grub.cfg each time after a kernel update. That is totally fine, just one more command after an update. But the problem becomes more serious when doing a distro upgrade with fedup, because fedup requires writing a grub entry to continue.

After reading these two posts:
https://bugzilla.redhat.com/show_bug.cgi?id=904253
https://bugzilla.redhat.com/show_bug.cgi?id=902498

I made the upgrade by doing the following steps:

  1. Run fedup; it will install a fedup kernel.
  2. Run grub2-mkconfig -o /boot/grub2/grub.cfg ; this will create an entry in grub for booting into the fedup kernel.
  3. Open the file /boot/grub2/grub.cfg and copy a whole menuentry block to /etc/grub.d/40_custom, changing the menuentry name to "System Upgrade". (Actually, you can directly edit grub.cfg.)
  4. Add "upgrade systemd.unit=system-upgrade.target plymouth.splash=fedup enforcing=0" (without the quotes) to the tail of the linux line. If you do not have any old kernel left in the grub menu, you will need to add one more menu entry here. The new entry will use the new fc19 kernel to boot; the actual version is determined by the media from which you install. If you use a network install, it will be the latest kernel.
  5. Reboot and choose "System Upgrade" from the grub menu; the upgrade process will begin.
  6. After fedup finishes its job, the system will reboot. But you can no longer boot into the fedup kernel, because fedup deletes it during the upgrade process. Now you need to boot into an old kernel or the entry you added in step 4. If you still cannot boot, you may need to boot from a rescue CD to edit grub.cfg.
  7. After a successful boot, run grub2-mkconfig -o /boot/grub2/grub.cfg to get the new grub menu entries.

For example, if your grub.cfg menuentry block looks like this:

menuentry 'System Upgrade' --class fedora --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-/dev/sda1
/dev/sdb1' {
load_video
insmod gzio
insmod part_msdos
insmod btrfs
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1' e0b0a9ca-2540-4ef8-87ec-967b372e6ee0
else
search --no-floppy --fs-uuid --set=root e0b0a9ca-2540-4ef8-87ec-967b372e6ee0
fi
echo 'Loading Linux fedup ...'
linux /root/boot/vmlinuz-fedup root=UUID=e0b0a9ca-2540-4ef8-87ec-967b372e6ee0 ro rootflags=subvol=root rd.md=0 rd.lvm=0 rd.dm=0 rd.luks=0 vconsole.keymap=us rhgb
echo 'Loading initial ramdisk ...'
initrd /root/boot/initramfs-fedup.img
}

You will need to add "upgrade systemd.unit=system-upgrade.target plymouth.splash=fedup enforcing=0" to the linux line like this:

menuentry 'System Upgrade' --class fedora --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-/dev/sda1
/dev/sdb1' {
load_video
insmod gzio
insmod part_msdos
insmod btrfs
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1' e0b0a9ca-2540-4ef8-87ec-967b372e6ee0
else
search --no-floppy --fs-uuid --set=root e0b0a9ca-2540-4ef8-87ec-967b372e6ee0
fi
echo 'Loading Linux fedup ...'
linux /root/boot/vmlinuz-fedup root=UUID=e0b0a9ca-2540-4ef8-87ec-967b372e6ee0 ro rootflags=subvol=root rd.md=0 rd.lvm=0 rd.dm=0 rd.luks=0 vconsole.keymap=us rhgb upgrade systemd.unit=system-upgrade.target plymouth.splash=fedup enforcing=0
echo 'Loading initial ramdisk ...'
initrd /root/boot/initramfs-fedup.img
}

Finally, again: using an ext4 /boot partition will save you much time.


Webnode vs Weebly vs Google Sites

Recently, I had a chance to make some quick, low-cost, small websites.

My requirements are:

  1. Fast and easy to use for building a website of a few pages.
  2. Free to edit, with free hosting included.
  3. The free account can use a real domain name.
  4. No forced third-party advertisements.

After some Google research, only three choices met my requirements: Webnode, Weebly and Google Sites. But after I had set up a site on Webnode, I discovered that you have to pay to use a real domain after 30 days. So only Weebly and Google Sites were suitable for me.

All of them provide everything a website needs: adding/dropping pages, navigation, news, a rich text editor, uploading images/video, integrating website statistics/analytics, a web form to collect user comments/enquiries, and so on. All of them impose a forced footer to advertise themselves. There are some tricks to hide Weebly's footer, but as free account users, we have a responsibility to advertise them. I don't mind showing their ads as long as there are no other third-party ads.

The CMS:

Google Sites' CMS is surprisingly the ugliest one. For a big giant's product, it looks like an old-style CMS, but it can still do the job. There are not many good themes to choose from. It looks like Google App Engine provides more advanced functions, but that is out of the scope of fast, small websites. However, Google offers more than a CMS; you will need a Google Analytics account anyway.

Weebly and Webnode offer more intuitive CMSes, but both of them rely on Flash for some functions. I don't know why both of them require Flash to edit the header image. Besides that, I can edit most parts of Webnode without Flash, but in Weebly, much of the UI simply does not work without Flash. Their CMSes are in different styles and both of them are very good.

My choice:

I finally chose Weebly because it is more user-friendly, even though it requires Flash in many areas. To use a real domain, Weebly asks users to create an A record for their domain, but I think a CNAME should be the correct choice. Anyway, it finally works.

If you are willing to pay a little, your choice may be different. There are many alternatives out there; try googling.

Sources:
http://www.webnode.com
http://www.weebly.com
http://sites.google.com


ckeditor 3.6.x Firefox 11 value not saved

Recently, we faced a problem of a form value not being saved. The problem only appeared in fields using ckeditor. And what's weird is that the value was not blank; instead, it kept passing the original value to the server. So the value got reverted to its original value by the time we hit submit. I think this problem does not always occur, because only two people were facing it according to the ckeditor forum.

My settings are pasted below; we are using the jQuery plugin.

$(".newbodytext").ckeditor(function (evt) { }, {
    filebrowserBrowseUrl: 'some/url',
    filebrowserImageBrowseUrl: 'some/url',
    filebrowserFlashBrowseUrl: 'some/url',
    autoGrow_onStartup: true,
    width: '600',
    autoUpdateElement: true,
    contentsCss: ['/cms/Content/stylesheet.css', '/cms/Content/field.css', '/cms/Content/backend.css'],
    bodyClass: 'content_column content',
    toolbar: [
        { name: 'document', items: ['Source', 'Preview'] },
        { name: 'clipboard', items: ['Cut', 'Copy', 'Paste', 'PasteText', 'PasteFromWord', '-', 'Undo', 'Redo'] },
        { name: 'insert', items: ['Image', 'Flash', 'Table', 'HorizontalRule', 'SpecialChar', 'PageBreak', 'Iframe'] },
        { name: 'editing', items: ['Find', 'Replace', '-', 'SelectAll'] },
        { name: 'paragraph', items: ['NumberedList', 'BulletedList', '-', 'Outdent', 'Indent', '-', 'Blockquote', 'CreateDiv', '-', 'JustifyLeft', 'JustifyCenter', 'JustifyRight', 'JustifyBlock'] },
        //'/',
        { name: 'styles', items: ['Styles', 'FontSize', 'Format'] },
        { name: 'basicstyles', items: ['Bold', 'Italic', 'Underline', 'Strike', 'Subscript', 'Superscript', '-', 'RemoveFormat'] },
        { name: 'links', items: ['Link', 'Unlink', 'Anchor'] },
        { name: 'colors', items: ['TextColor', 'BGColor'] }
        // { name: 'tools', items: ['Maximize', '-', 'About'] }
    ],
    extraPlugins: 'stylesheetparser',
    stylesSet: []
});


Solution

The solution is to manually destroy ckeditor when the form submits and tell it not to revert values.

$('form').submit(function (e) {
    // workaround for Firefox 11 plus ckeditor 3.6.2/3.6.3
    if ($.browser.mozilla) {  // sometimes this problem also occurs in IE; just replace this with true.
        for (var instanceName in CKEDITOR.instances) {
            if (CKEDITOR.instances[instanceName]) CKEDITOR.instances[instanceName].destroy(false);
        }
    }
});

For ASP.NET WebForms, you can find some hints in this link.

http://stackoverflow.com/questions/1230573/how-to-capture-submit-event-using-jquery-in-an-asp-net-application


DirectoryInfo.Delete: The directory is not empty

Sometimes, when you call the Delete(true) method of the DirectoryInfo class, you will receive an exception saying "The directory is not empty". Some people say there is some kind of curse on the directory. In our case, building the solution in the debug configuration turned out to be the root cause; the problem was solved by building the solution in the release configuration.

QNAP NAS advanced folder permission

Having waited for 2 years, QNAP NAS finally supports folder-level permissions.

However, after enabling advanced folder permissions, my phone rang again and again, non-stop. Many users complained that they got access denied for some files and some folders, seemingly at random. When you log in to the web interface, you can see that the owner field of the affected file/folder has changed to the complaining user. For folders, you can still use the web interface to change it back one by one, but for files, you can do nothing.

Finally, log in via SSH and run ll on the folders; you will discover that the permissions are 070! The owner got NO permission! Simply doing a chmod u+rwx -R * in the problematic folder can solve this problem, but use this method at your own risk.


Setting up 389 Directory Server for Active Directory Sync

The official installation method is to add the EPEL repository:
http://fedoraproject.org/wiki/EPEL

Then you can yum install 389-ds, run setup-ds-admin.pl, and start the dirsrv and dirsrv-admin services.

Follow this link and you will be able to finish it; viewing the official manual consumes too much time.
http://www.linuxmail.info/category/389-directory-server/

Some notes here:
1) If you only sync from AD (Active Directory) to DS (Directory Server), the sync account in AD does not need to be in the Admins group. It can be an ordinary user with the "Replicate Directory Changes" permission. This permission can be set by using "Delegate Control" in "Active Directory Users and Computers".

2) If you need to further sync from one DS to another DS, you need to choose "Single Master" in the sync agreement. Otherwise, you can only initialize the second DS, but no further replication will occur; it will say "No replication since the server started".

3) Pay attention to the user names. In DS, use uid=xxx,dc=domain,dc=local, but in AD they use cn=xxx,dc=domain,dc=local.

4) If you use your own CA, then you need to import your CA cert to 4 places:
4a) The truststore of DS.
4b) The truststore of DS-admin.
4c) The trusted root certificate of local computer in Domain Controllers.
4d) The trust store in the PassSync program folder in Domain Controller.

5) On Windows Server 2008 R2, you need to open an administrator command prompt to run the PassSync setup program.

6) To configure one-way sync, you need to add an attribute to the sync agreement. You can do it by browsing the DS directory, in the config subtree; you can find your created sync agreement there.
http://directory.fedoraproject.org/wiki/One_Way_Active_Directory_Sync
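For the record, the attribute in question is oneWaySync. An LDIF to add it would look roughly like the sketch below; the agreement name and suffix in the DN are placeholders, so use the DN of your own sync agreement:

```
dn: cn=ExampleSyncAgreement,cn=replica,cn="dc=domain,dc=local",cn=mapping tree,cn=config
changetype: modify
add: oneWaySync
oneWaySync: fromWindows
```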

7) For troubleshooting, there is a very good tool called ldp.exe, released by Microsoft in its Windows Server 2003 Support Tools. Yes, 2003, but it can run on Windows Server 2008 R2. Just download the whole package from the link below and extract only ldp.exe to your server. Life will suddenly become easier.

http://www.microsoft.com/downloads/en/details.aspx?FamilyID=96a35011-fd83-419d-939b-9a772ea2df90&DisplayLang=en

Good luck!

Convert Windows Vmware server 2.0.2 to KVM (Ubuntu)


Be reminded that KVM does not support Windows 98 / Windows ME well. If you have such a guest, you may need to consider something other than KVM.
If this is your first time, you had better have both the Windows host and the Linux host running; don't destroy the Windows host, install Linux on it, and then pray for success.
If your CPU does not have virtualization support, you had better stick with VMware.

In the Windows host, explore to the folder containing the VM guest you want to convert.
Pay attention to the virtual hard disk (vmdk) files. If you have files with names ending in numbers, e.g. winxp-00001.vmdk, winxp-00002.vmdk, then you need to combine the files using vmware-vdiskmanager.exe. By default, it is in the Program Files\Vmware\ folder.
The command looks like
vmware-vdiskmanager -r winxp.vmdk -t 0 winxpbig.vmdk
or
vmware-vdiskmanager -r winxp.vmdk -t 2 winxpbig.vmdk

The -t 0 switch will create a resultant file that consists of only the used space, while -t 2 will produce a file the size of the whole image.
Some say -t 0 fails; in that case you can try -t 2.

For Windows guests, you need to do a few more things; otherwise, you may receive a Stop 0x0000007B error after you move the Windows XP guest.
After combining the files, you can change the guest's disk to the combined one and boot it on the Windows host to do the following steps.
1) In the Windows guest, run mergeide.reg from http://support.microsoft.com/kb/314082 ; you also need to copy some files to the system32\drivers folder. See the link.
2) In the Windows guest, remove VMware Tools.
3) Shut down the guest.

Now you are ready to transfer the vmdk file to your Linux machine.
If you do not have a large enough portable device, installing the FileZilla FTP server is a good way to transfer files. http://filezilla.sf.net
The files that need to be transferred are *.vmdk and *.vmx, and there is no harm in transferring everything if you have enough space. It is good to keep one virtual machine per folder.

In the linux host part:
1) Get Ubuntu and set it up (http://www.ubuntu.com). You can choose the VM host function.
2) install other useful things
sudo apt-get install virt-goodies qemu-kvm kvm libvirt-bin bridge-utils virt-top kvm-pxe
3) setup bridge network
3a) edit (vi) /etc/network/interfaces
3b) The address below should match your internal subnet.
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
address 192.168.11.3
network 192.168.11.0
netmask 255.255.255.0
broadcast 192.168.11.255
gateway 192.168.11.1
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0

3c) Restart the network with sudo /etc/init.d/networking restart; then you should have a bridged network ready to use. You can verify it with the command ifconfig.
4) Convert the vmware config file (*.vmx) to a libvirt XML config file. You need virt-goodies (installed in the previous step) to use vmware2libvirt.
4a) vmware2libvirt -f winxp.vmx > winxp.xml
5) Life is not perfect, and neither is vmware2libvirt; you need to modify the XML to make it work. Otherwise, you will receive a "no bootable device" or "no boot device" error.
5a) You need to add a driver tag inside the disk tag.
<driver name='qemu' type='vmdk'/>

5b) For Windows guests, you need to use the localtime clock.
<clock offset='localtime'/>
5c) Change the type to bridge in the interface tag, and change network='eth0' to bridge='br0' in the source tag.
<interface type='bridge'>

<source bridge='br0'/>

</interface>

Then you can define your virtual machine config in qemu using libvirt and start it.
sudo virsh -c qemu:///system define winxp.xml
sudo virsh start winxp <-- the name comes from the name tag in the XML file.

If it can boot, you can then convert the vmdk hdd to qemu's native qcow2 type.
Remember to shut down your guest first!!!

qemu-img convert winxp.vmdk -O qcow2 winxp.qcow2
* -O is a capital letter O not a zero

After converting the image, you need to tell libvirt to use it.
You can edit the XML file, undefine the VM, and then define it again.
Or directly edit the config file:
sudo virsh edit winxp
Change the driver type from vmdk to qcow2 and change the source file to the converted file.
<driver name='qemu' type='qcow2'/>
<source file='/your_virtual_disk_location/winxp.qcow2'/>

Next, we can change to the virtio driver. The KVM web site says it will increase performance a lot.
But many people find not much difference.
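For completeness, switching the disk to virtio is mostly a matter of the target bus in the domain XML. A sketch of what the disk block might look like (the dev name and file path are examples, and a Windows guest also needs the virtio drivers installed first):

```
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/your_virtual_disk_location/winxp.qcow2'/>
<target dev='vda' bus='virtio'/>
</disk>
```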

Fedora commons installation

For Fedora Commons version 3.4.2 on CentOS 5.5 (and possibly 5.6).

Official Documentation
https://wiki.duraspace.org/display/FCR30/Fedora+Repository+3.4.2+Documentation

/usr/local/fedora/tomcat/logs/catalina.out
is a good place to view error message.

== installation
Follow this guide
https://wiki.duraspace.org/display/FCR30/Installation+and+Configuration+Guide
To install jdk
yum install java-1.6.0-openjdk

To add the environment variables, use the following commands:
export FEDORA_HOME="/usr/local/fedora"
export JAVA_HOME="/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre"
export JAVA_OPTS="-Djavax.net.ssl.trustStore=$FEDORA_HOME/server/truststore -Djavax.net.ssl.trustStorePassword=changeme"

To make the environment variables available at startup, create a file called fedoracommons.sh in /etc/profile.d and put the commands above into that file.
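The step above can be sketched like this (writing to a scratch directory for illustration; the real target is /etc/profile.d/fedoracommons.sh):

```shell
# PROFILE_DIR stands in for /etc/profile.d so this sketch does not touch the system.
PROFILE_DIR=/tmp/profile.d
mkdir -p "$PROFILE_DIR"

# The quoted heredoc keeps $FEDORA_HOME literal in the file; it expands when sourced at login.
cat > "$PROFILE_DIR/fedoracommons.sh" <<'EOF'
export FEDORA_HOME="/usr/local/fedora"
export JAVA_HOME="/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre"
export JAVA_OPTS="-Djavax.net.ssl.trustStore=$FEDORA_HOME/server/truststore -Djavax.net.ssl.trustStorePassword=changeme"
EOF

# Simulate a login shell picking it up.
. "$PROFILE_DIR/fedoracommons.sh"
echo "$FEDORA_HOME"
```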

If it is only for testing purposes, do NOT use SSL; it is complicated. To configure SSL, please see below.

== replication
Please configure journaling first.

== enable journaling
https://wiki.duraspace.org/display/FCR30/Journaling
The page contains a lot of old setting syntax.
If you copy and paste, you will see
"fedora.server.management.ManagementModule" could not be found
"fedora.server.journal.Journaler" could not be found
This problem consumed me 3 days...
For every occurrence of fedora.server.management.Management or similar, change it to org.fcrepo.server.management.Management
In other words, replace fedora with org.fcrepo
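A sed one-liner can do the class-prefix replacement; demonstrated here on a scratch copy (point it at your real config files only after backing them up):

```shell
# Scratch file standing in for a config snippet that uses the old class names.
printf '<param name="journalerClass" value="fedora.server.journal.Journaler"/>\n' > /tmp/journal-demo.xml

# Rewrite every fedora.server.* reference to org.fcrepo.server.*
sed -i 's/fedora\.server\./org.fcrepo.server./g' /tmp/journal-demo.xml

cat /tmp/journal-demo.xml
```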
You also need to create the folders
mkdir /usr/local/ndr-content
mkdir /usr/local/ndr-content/journals
mkdir /usr/local/ndr-content/journals/journalFiles
mkdir /usr/local/ndr-content/journals/archiveFiles

For Fedora Commons version 3.4 running on Linux, the journal receiver can run and everything looks normal at startup, but no files are written to the journal folder. This can cause errors if you mark the follower server as crucial.

Finally, I figured out that you need to add -Djava.rmi.server.hostname=192.168.11.11 to the command that starts the journal receiver.

java -Djava.rmi.server.hostname=192.168.11.11 -jar fcrepo-server-3.4-rmi-journal-recv.jar “/usr/local/ndr-content/journals/journalFiles”