Using pyvenv in CentOS 7

Recently, I needed to use Python 3 in CentOS 7.

In order to use Python 3, you need to enable EPEL:

yum install epel-release

then you can yum install python34 (the EPEL package for Python 3.4).

Everything seems fine until you set up a virtual environment using pyvenv, at which point you will receive this error:

pyvenv-3.4 returned non-zero exit status 1

You need to add the --without-pip option to make this work:

pyvenv-3.4 myvenv --without-pip

But then you will not have pip available, so you need to install it manually. The easiest way is to use get-pip.py.

Go to this page and download get-pip.py:

https://pip.pypa.io/en/stable/installing.html

Then activate your virtual env:

source myvenv/bin/activate

Copy the get-pip.py file to your virtual env folder, then run:

python get-pip.py
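
Putting the steps together, the whole sequence looks roughly like this (myvenv is just an example name, and the get-pip.py URL is the one the pip documentation linked above points to):

yum install epel-release
yum install python34
pyvenv-3.4 myvenv --without-pip      # --without-pip works around the non-zero exit status
source myvenv/bin/activate
curl -O https://bootstrap.pypa.io/get-pip.py   # or download it from the page linked above
python get-pip.py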

Then you will have a working Python 3 environment in CentOS 7.


Developing an ownCloud app

Download and install openSUSE from https://www.opensuse.org/en/

From software management, install git, php5, php5-gd, php5-mbstring, php5-mcrypt, php5-mysql, php5-zip, php5-zlib, php5-curl

Then follow the instructions from https://github.com/owncloud/ocdev/blob/master/README.rst#installation

install ocdev using pip3

sudo pip3 install ocdev

ocdev setup base --branch stable8.1   (the branch depends on your needs)

then you will have a folder ‘core’ in the current working folder

cd core/apps

ocdev startapp MyApp

Then you will have an app called MyApp

cd ..  (go back to the core folder)

edit the php.ini file by

sudo vi /etc/php5/cli/php.ini

set the session.save_path to /tmp (or somewhere the current user account has write access)

start the ocdev server by

ocdev server

Then you can browse to http://localhost:8080
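
For reference, a condensed version of the steps above (assuming the stable8.1 branch and an app named MyApp, as in this walkthrough) looks roughly like this:

sudo pip3 install ocdev
ocdev setup base --branch stable8.1    # the branch depends on your needs
cd core/apps
ocdev startapp MyApp
cd ..                                  # back to the core folder
sudo vi /etc/php5/cli/php.ini          # set session.save_path to /tmp (or another writable path)
ocdev server                           # then browse to http://localhost:8080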


Install Android Studio in Fedora 22 Workstation

  1. Install Fedora 22 Workstation
  2. dnf update
  3. dnf install java-1.8.0-openjdk-devel.x86_64
  4. dnf install compat-libstdc++-296.i686 compat-libstdc++-33.i686 compat-libstdc++-33.x86_64 ncurses-libs.i686 zlib.i686       (Otherwise, you will have the “Unable to run mksdcard SDK tool” problem)
  5. Download Android Studio from https://developer.android.com/sdk/index.html
  6. Extract it.
  7. Open a terminal and cd to the extracted Android studio path
  8. cd to the bin folder inside Android studio folder
  9. JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.45-38.b14.fc22.x86_64/ sh studio.sh
  10. The above command must be run as one line. (You may need to adjust the version; use the "alternatives --display java" command to locate the correct path.)
  11. Follow the on-screen instruction.
  12. You are done.
  13. Use the command in step 9 (or wrap it in a small script like the one sketched below) whenever you want to start Android Studio.
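
To avoid retyping the JAVA_HOME prefix every time, one option is a small wrapper script. This is only a sketch: the JDK path must match whatever "alternatives --display java" reports on your machine, and the Android Studio path below is a placeholder for wherever you extracted it.

#!/bin/sh
# launcher sketch -- adjust both paths to your own setup
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.45-38.b14.fc22.x86_64/
exec sh /path/to/android-studio/bin/studio.sh "$@"

Save it somewhere on your PATH (e.g. ~/bin/android-studio), chmod +x it, and use it instead of the long command.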

Restore Exchange 2013 DAG

Suppose you have a two-server Exchange 2013 DAG setup (plus an arbitrary witness machine). You would like to physically relocate the servers, so you shut down Machine_A, then Machine_B, then the witness. After you have relocated the servers, you think you can leave soon. But suddenly, Machine_B has a serious mainboard problem and cannot start again. Because Machine_A was shut down earlier, the failover cluster refuses to start, and the whole Exchange system is down.

Use the command in the link below to force the cluster to start, using the forcequorum option:

https://technet.microsoft.com/en-us/library/dd351049(v=exchg.150).aspx

net start clussvc /forcequorum

Now the failover cluster is up, but Exchange still refuses to start because the mailbox database copy on Machine_A is not the latest version.

So you need to force the copy on Machine_A to become active, using the method in this link:

http://blogs.technet.com/b/timmcmic/archive/2012/05/30/exchange-2010-the-mystery-of-the-9223372036854775766-copy-queue.aspx

You may also discover that the copy queue length is 9,223,372,036,854,775,766 entries long.

Move-ActiveMailboxDatabase DB01 -ActivateOnServer YOUR_SERVER_NAME -SkipLagChecks -SkipActiveCopyChecks -MountDialOverride:BESTEFFORT -SkipClientExperienceChecks

But be careful: you may lose some email with this command. Use it at your own risk!

Do this for all mailbox databases. After you have brought up the database containing your administrator mailbox, you can log in to the ECP.

Then we can set up a new machine to replace the old one. Install Windows, update it, use the IP of the old machine, and use the same computer name. Install the Exchange Server prerequisites; consult the documents you used when you installed Exchange Server before.

The procedure is in this link:
https://technet.microsoft.com/en-us/library/dd638206(v=exchg.150).aspx

But in step 5, you may need to use
setup /m:RecoverServer /IAcceptExchangeServerLicenseTerms
Instead of
setup /m:RecoverServer

After adding back the DAG members, you need to force them to reseed.

This link will help:
http://msexchangeguru.com/2012/09/24/dag-recovery/

But the
Update-MailboxDatabaseCopy -Identity <DBName>\<DestinationServerName> -SourceServer <SourceMailboxServer> -DeleteExistingFiles
command may fail with an error; you may need to wait about 10 minutes after running the Suspend-MailboxDatabaseCopy command before it will run successfully.

Also, Update-MailboxDatabaseCopy will block your console. If you have multiple huge mailbox databases, you need to do the first one, eat something, then come back to do the second one.

Good luck. Again, use the steps above at your own risk. To reduce risk, consider using a paid service.

QNAP PostgreSQL scheduled cron job backup

Recently I needed to configure backups for PostgreSQL on a QNAP NAS. It's much more difficult than I imagined.

You can easily google the sample script from PostgreSQL's wiki:

https://wiki.postgresql.org/wiki/Automated_Backup_on_Linux

However, it cannot be used directly on QNAP.

Firstly, psql and pg_dump are not installed in the /bin folder; you need to use the full paths for the commands.

Secondly, you will face a "PQparameterS not found" problem. After some googling: you need to set an environment variable (LD_LIBRARY_PATH) in order to run the commands.

Thirdly, the pg_dump commands need to include -E utf8.

Fourthly, the find command on QNAP does not support the -maxdepth option.

Fifthly, the custom-format backup on QNAP does not support compression, and the find command does not support the -exec option.
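
To illustrate the first three points, a manual dump on the NAS looks roughly like this (the database name mydb is just an example; the paths match the QPKG install and backup share used below):

# full path to pg_dump, LD_LIBRARY_PATH set, and -E utf8 for the dump encoding
export LD_LIBRARY_PATH=/share/CACHEDEV1_DATA/.qpkg/PostgreSQL/lib/
/share/CACHEDEV1_DATA/.qpkg/PostgreSQL/bin/pg_dump -E utf8 -Fp -U postgres mydb | gzip > /share/postgresqlbackup/mydb.sql.gz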

So finally, I added two variables to the pg_backup.config file. (You need to create the postgresqlbackup shared folder using the GUI first.)

pg_backup.config

##############################
## POSTGRESQL BACKUP CONFIG ##
##############################

##############################
## Added by me ###############
##############################

PSQL_PATH=/share/CACHEDEV1_DATA/.qpkg/PostgreSQL/bin/psql
PG_DUMP_PATH=/share/CACHEDEV1_DATA/.qpkg/PostgreSQL/bin/pg_dump

##### End added by me ########
 
# Optional system user to run backups as.  If the user the script is running as doesn't match this
# the script terminates.  Leave blank to skip check.
BACKUP_USER=
 
# Optional hostname to adhere to pg_hba policies.  Will default to "localhost" if none specified.
HOSTNAME=
 
# Optional username to connect to database as.  Will default to "postgres" if none specified.
USERNAME=
 
# This dir will be created if it doesn't exist.  This must be writable by the user the script is
# running as.
BACKUP_DIR=/share/postgresqlbackup/
 
# List of strings to match against in database name, separated by space or comma, for which we only
# wish to keep a backup of the schema, not the data. Any database names which contain any of these
# values will be considered candidates. (e.g. "system_log" will match "dev_system_log_2010-01")
SCHEMA_ONLY_LIST=""
 
# Will produce a custom-format backup if set to "yes"
ENABLE_CUSTOM_BACKUPS=yes
 
# Will produce a gzipped plain-format backup if set to "yes"
ENABLE_PLAIN_BACKUPS=yes
 
 
#### SETTINGS FOR ROTATED BACKUPS ####
 
# Which day to take the weekly backup from (1-7 = Monday-Sunday)
DAY_OF_WEEK_TO_KEEP=5
 
# Number of days to keep daily backups
DAYS_TO_KEEP=7
 
# How many weeks to keep weekly backups
WEEKS_TO_KEEP=5
 
######################################

Then pg_backup_rotated.sh will look like this:

#!/bin/bash

#########################################
##### QNAP 4.1.3 / POSTGRESQL 9.3.4.1 ###
#########################################

export LD_LIBRARY_PATH=/share/CACHEDEV1_DATA/.qpkg/PostgreSQL/lib/

###########################
####### LOAD CONFIG #######
###########################

while [ $# -gt 0 ]; do
    case $1 in
        -c)
            CONFIG_FILE_PATH="$2"
            shift 2
            ;;
        *)
            echo "Unknown Option \"$1\"" 1>&2
            exit 2
            ;;
    esac
done

if [ -z $CONFIG_FILE_PATH ] ; then
    SCRIPTPATH=$(cd ${0%/*} && pwd -P)
    CONFIG_FILE_PATH="${SCRIPTPATH}/pg_backup.config"
fi

if [ ! -r ${CONFIG_FILE_PATH} ] ; then
    echo "Could not load config file from ${CONFIG_FILE_PATH}" 1>&2
    exit 1
fi

source "${CONFIG_FILE_PATH}"

###########################
#### PRE-BACKUP CHECKS ####
###########################

# Make sure we're running as the required backup user
if [ "$BACKUP_USER" != "" -a "$(id -un)" != "$BACKUP_USER" ] ; then
    echo "This script must be run as $BACKUP_USER. Exiting." 1>&2
    exit 1
fi

###########################
### INITIALISE DEFAULTS ###
###########################

if [ ! $HOSTNAME ]; then
    HOSTNAME="localhost"
fi;

if [ ! $USERNAME ]; then
    USERNAME="postgres"
fi;

###########################
#### START THE BACKUPS ####
###########################

function perform_backups()
{
    SUFFIX=$1
    FINAL_BACKUP_DIR=$BACKUP_DIR"`date +\%Y-\%m-\%d`$SUFFIX/"

    echo "Making backup directory in $FINAL_BACKUP_DIR"

    if ! mkdir -p $FINAL_BACKUP_DIR; then
        echo "Cannot create backup directory in $FINAL_BACKUP_DIR. Go and fix it!" 1>&2
        exit 1;
    fi;

    ###########################
    ### SCHEMA-ONLY BACKUPS ###
    ###########################

    for SCHEMA_ONLY_DB in ${SCHEMA_ONLY_LIST//,/ }
    do
        SCHEMA_ONLY_CLAUSE="$SCHEMA_ONLY_CLAUSE or datname ~ '$SCHEMA_ONLY_DB'"
    done

    SCHEMA_ONLY_QUERY="select datname from pg_database where false $SCHEMA_ONLY_CLAUSE order by datname;"

    echo -e "\n\nPerforming schema-only backups"
    echo -e "--------------------------------------------\n"

    SCHEMA_ONLY_DB_LIST=`"$PSQL_PATH" -h "$HOSTNAME" -U "$USERNAME" -At -c "$SCHEMA_ONLY_QUERY" postgres`

    echo -e "The following databases were matched for schema-only backup:\n${SCHEMA_ONLY_DB_LIST}\n"

    for DATABASE in $SCHEMA_ONLY_DB_LIST
    do
        echo "Schema-only backup of $DATABASE"

        if ! "$PG_DUMP_PATH" -E utf8 -Fp -s -h "$HOSTNAME" -U "$USERNAME" "$DATABASE" | gzip > $FINAL_BACKUP_DIR"$DATABASE"_SCHEMA.sql.gz.in_progress; then
            echo "[!!ERROR!!] Failed to backup database schema of $DATABASE" 1>&2
        else
            mv $FINAL_BACKUP_DIR"$DATABASE"_SCHEMA.sql.gz.in_progress $FINAL_BACKUP_DIR"$DATABASE"_SCHEMA.sql.gz
        fi
    done

    ###########################
    ###### FULL BACKUPS #######
    ###########################

    for SCHEMA_ONLY_DB in ${SCHEMA_ONLY_LIST//,/ }
    do
        EXCLUDE_SCHEMA_ONLY_CLAUSE="$EXCLUDE_SCHEMA_ONLY_CLAUSE and datname !~ '$SCHEMA_ONLY_DB'"
    done

    FULL_BACKUP_QUERY="select datname from pg_database where not datistemplate and datallowconn $EXCLUDE_SCHEMA_ONLY_CLAUSE order by datname;"

    echo -e "\n\nPerforming full backups"
    echo -e "--------------------------------------------\n"

    for DATABASE in `"$PSQL_PATH" -h "$HOSTNAME" -U "$USERNAME" -At -c "$FULL_BACKUP_QUERY" postgres`
    do
        if [ $ENABLE_PLAIN_BACKUPS = "yes" ]
        then
            echo "Plain backup of $DATABASE"

            if ! "$PG_DUMP_PATH" -E utf8 -Fp -h "$HOSTNAME" -U "$USERNAME" "$DATABASE" | gzip > $FINAL_BACKUP_DIR"$DATABASE".sql.gz.in_progress; then
                echo "[!!ERROR!!] Failed to produce plain backup database $DATABASE" 1>&2
            else
                mv $FINAL_BACKUP_DIR"$DATABASE".sql.gz.in_progress $FINAL_BACKUP_DIR"$DATABASE".sql.gz
            fi
        fi

        if [ $ENABLE_CUSTOM_BACKUPS = "yes" ]
        then
            echo "Custom backup of $DATABASE"

            if ! "$PG_DUMP_PATH" -E utf8 -Z 0 -Fc -h "$HOSTNAME" -U "$USERNAME" "$DATABASE" | gzip > $FINAL_BACKUP_DIR"$DATABASE".custom.gz.in_progress; then
                echo "[!!ERROR!!] Failed to produce custom backup database $DATABASE"
            else
                mv $FINAL_BACKUP_DIR"$DATABASE".custom.gz.in_progress $FINAL_BACKUP_DIR"$DATABASE".custom.gz
            fi
        fi

    done

    echo -e "\nAll database backups complete!"
}

# MONTHLY BACKUPS

DAY_OF_MONTH=`date +%d`

if [ $DAY_OF_MONTH -eq 1 ];
then
    # Delete all expired monthly directories
    find $BACKUP_DIR -maxdepth 1 -name "*-monthly" | xargs /bin/rm -rf

    perform_backups "-monthly"

    exit 0;
fi

# WEEKLY BACKUPS

DAY_OF_WEEK=`date +%u` #1-7 (Monday-Sunday)
EXPIRED_DAYS=`expr $((($WEEKS_TO_KEEP * 7) + 1))`

if [ $DAY_OF_WEEK = $DAY_OF_WEEK_TO_KEEP ];
then
    # Delete all expired weekly directories
    find $BACKUP_DIR -maxdepth 1 -mtime +$EXPIRED_DAYS -name "*-weekly" | xargs /bin/rm -rf

    perform_backups "-weekly"

    exit 0;
fi

# DAILY BACKUPS

# Delete daily backups 7 days old or more
find $BACKUP_DIR -mtime +$DAYS_TO_KEEP -name "*-daily" | xargs /bin/rm -rf

perform_backups "-daily"

The pg_backup.sh file is not used.

Then you need to put the two files somewhere, e.g. /share/postgresqlbackup/scripts/.

Then you can run the script to see whether it is working.

After that, you need to use crontab on QNAP to schedule it to run. However, crontab on QNAP is not very easy to set up; see the instructions at the bottom of this page:

http://wiki.qnap.com/wiki/Add_items_to_crontab

1. Edit /etc/config/crontab and add your custom entry (see the example below).
2. Run 'crontab /etc/config/crontab' to load the changes.
3. Restart cron, i.e. '/etc/init.d/crond.sh restart'
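
For example, the entry added in step 1 could look like the line below. The 02:00 daily schedule is just an assumption; the path matches the scripts folder used above.

# run the rotated PostgreSQL backup every day at 02:00
0 2 * * * /share/postgresqlbackup/scripts/pg_backup_rotated.sh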

Remember to chmod 755 /share/postgresqlbackup/scripts/pg_backup_rotated.sh

This works for me on QNAP firmware 4.1.3 and PostgreSQL 9.3.4.1. I hope it will work for you too, but I do not guarantee it. Use it at your own risk.

Upgrading Fedora 18 to 19 using Fedup with Btrfs RAID1 /boot partition

Before reading, please be reminded that the recommended partition layout is a 250 MB ext3/4 /boot partition.

Grub2 began to support booting from Btrfs last year. I like to try new stuff, so I installed my Fedora 18 with a Btrfs RAID1 /boot partition. But after several updates, I began to notice that whenever the kernel was updated, the new kernel would not appear in the grub boot menu:

grubby fatal error: unable to find a suitable template

After some googling, the problem is related to grubby, and there is a workaround: just run grub2-mkconfig -o /boot/grub2/grub.cfg each time after a kernel update. It is totally fine with just one more command after each update, but the problem becomes more serious when doing a distro upgrade with fedup, because fedup requires writing a grub entry to continue.

After reading these two posts:
https://bugzilla.redhat.com/show_bug.cgi?id=904253
https://bugzilla.redhat.com/show_bug.cgi?id=902498

I made the upgrade by doing the following steps:

  1. Run fedup; it will install a fedup kernel (see the sketch after this list).
  2. Run grub2-mkconfig -o /boot/grub2/grub.cfg; this will create an entry in grub for booting into the fedup kernel.
  3. Open the file /boot/grub2/grub.cfg, copy a whole menuentry block to /etc/grub.d/40_custom, and change the menuentry name to "System Upgrade". (Actually, you can also edit grub.cfg directly.)
  4. Add "upgrade systemd.unit=system-upgrade.target plymouth.splash=fedup enforcing=0" (without quotes) to the tail of the linux line. If you do not have any old kernel left in the grub menu, you will need to add one more menu entry here. The new entry will use the new fc19 kernel to boot; the actual version is determined by the media from which you install. If you use a network install, it will be the latest kernel.
  5. Reboot and choose "System Upgrade" from the grub menu; the upgrade process will begin.
  6. After fedup finishes its job, the system will reboot. But you can no longer boot into the fedup kernel, because fedup deletes it during the upgrade process. Now you need to boot into some old kernel or the entry you added in step 4. If you still cannot boot, you may need to boot from a rescue CD to edit grub.cfg.
  7. After a successful boot, run grub2-mkconfig -o /boot/grub2/grub.cfg to get the new grub menu entries.

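For steps 1 and 2, the commands look roughly like this. The --network 19 form is just one way to invoke fedup; use whichever source (network, ISO or device) applies to your upgrade.

fedup --network 19                          # step 1: installs the fedup kernel and downloads the upgrade
grub2-mkconfig -o /boot/grub2/grub.cfg      # step 2: regenerate the grub menu so the fedup entry appears
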
For example, if your grub.cfg menuentry block looks like this:

menuentry 'System Upgrade' --class fedora --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-/dev/sda1 /dev/sdb1' {
load_video
insmod gzio
insmod part_msdos
insmod btrfs
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1' e0b0a9ca-2540-4ef8-87ec-967b372e6ee0
else
search --no-floppy --fs-uuid --set=root e0b0a9ca-2540-4ef8-87ec-967b372e6ee0
fi
echo 'Loading Linux fedup ...'
linux /root/boot/vmlinuz-fedup root=UUID=e0b0a9ca-2540-4ef8-87ec-967b372e6ee0 ro rootflags=subvol=root rd.md=0 rd.lvm=0 rd.dm=0 rd.luks=0 vconsole.keymap=us rhgb
echo 'Loading initial ramdisk ...'
initrd /root/boot/initramfs-fedup.img
}

You will need to add “upgrade systemd.unit=system-upgrade.target plymouth.splash=fedup enforcing=0” to the linux line like this:

menuentry 'System Upgrade' --class fedora --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-/dev/sda1 /dev/sdb1' {
load_video
insmod gzio
insmod part_msdos
insmod btrfs
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1' e0b0a9ca-2540-4ef8-87ec-967b372e6ee0
else
search --no-floppy --fs-uuid --set=root e0b0a9ca-2540-4ef8-87ec-967b372e6ee0
fi
echo 'Loading Linux fedup ...'
linux /root/boot/vmlinuz-fedup root=UUID=e0b0a9ca-2540-4ef8-87ec-967b372e6ee0 ro rootflags=subvol=root rd.md=0 rd.lvm=0 rd.dm=0 rd.luks=0 vconsole.keymap=us rhgb upgrade systemd.unit=system-upgrade.target plymouth.splash=fedup enforcing=0
echo 'Loading initial ramdisk ...'
initrd /root/boot/initramfs-fedup.img
}

Finally, again: using an ext4 /boot partition will save you much time.



Webnode vs Weebly vs Google Sites

Recently, I had a chance to make quick, low-cost, small websites.

My requirements are:

  1. Fast and easy to use for building a website of a few pages.
  2. Free to edit, and includes free hosting
  3. The free account can use a real domain name
  4. No forced 3rd party advertisement

After some Google research, only 3 choices met my requirements: Webnode, Weebly and Google Sites. But after I had set up a site in Webnode, I discovered that you have to pay to use a real domain after 30 days, so only Weebly and Google Sites are suitable for me.

All of them provide everything a website needs: adding/dropping pages, navigation, news, a rich text editor, uploading images/video, integrating website statistics/analytics, a web form to collect user comments/enquiries, and so on. All of them add a forced footer to advertise themselves. There are some tricks to hide Weebly's footer, but as free account users we have the responsibility to advertise them. I don't mind showing their ads as long as there are no other third-party ads.

The CMS:

Google Sites' CMS is surprisingly the ugliest one. For a big giant's product, it looks like an old-style CMS, but it can still do the job. There are not many good themes to choose from. It looks like Google App Engine can provide more advanced functions, but that is out of the scope of fast, small websites. However, Google offers more than a CMS; you will need a Google Analytics account anyway.

Weebly and Webnode offer more intuitive CMSes, but both of them rely on Flash for some functions. I don't know why both of them require Flash to edit the header image. Beyond that, I can edit most parts of Webnode without Flash, but in Weebly much of the UI simply does not work without Flash. Their CMSes have different styles, and both of them are very good.

My choice:

I finally chose Weebly because it is more user friendly, even though it requires Flash in many areas. To use a real domain, Weebly asks users to create an A record for their domain, but I think a CNAME should be the correct choice. Anyway, it finally works.

If you are willing to pay a little bit, your choice may be different. There are many alternatives out there; try googling.

Sources:
http://www.webnode.com
http://www.weebly.com
http://sites.google.com


CKEditor 3.6.x + Firefox 11: value not saved

Recently, we faced a problem of form values not being saved. The problem only appears in fields using CKEditor, and what is weird is that the value is not blank; instead it keeps passing the original value to the server. So the problem is that the value gets reverted to its original value by the time we hit submit. I think this problem does not always occur, because there are only two people facing it according to the CKEditor forum.

My settings are pasted below; we are using the jQuery plugin.

$(".newbodytext").ckeditor(function (evt) { }, {
    filebrowserBrowseUrl: 'some/url',
    filebrowserImageBrowseUrl: 'some/url',
    filebrowserFlashBrowseUrl: 'some/url',
    autoGrow_onStartup: true,
    width: '600',
    autoUpdateElement: true,
    contentsCss: ['/cms/Content/stylesheet.css', '/cms/Content/field.css', '/cms/Content/backend.css'],
    bodyClass: 'content_column content',
    toolbar: [
        { name: 'document', items: ['Source', 'Preview'] },
        { name: 'clipboard', items: ['Cut', 'Copy', 'Paste', 'PasteText', 'PasteFromWord', '-', 'Undo', 'Redo'] },
        { name: 'insert', items: ['Image', 'Flash', 'Table', 'HorizontalRule', 'SpecialChar', 'PageBreak', 'Iframe'] },
        { name: 'editing', items: ['Find', 'Replace', '-', 'SelectAll'] },
        { name: 'paragraph', items: ['NumberedList', 'BulletedList', '-', 'Outdent', 'Indent', '-', 'Blockquote', 'CreateDiv', '-', 'JustifyLeft', 'JustifyCenter', 'JustifyRight', 'JustifyBlock'] },
        //'/',
        { name: 'styles', items: ['Styles', 'FontSize', 'Format'] },
        { name: 'basicstyles', items: ['Bold', 'Italic', 'Underline', 'Strike', 'Subscript', 'Superscript', '-', 'RemoveFormat'] },
        { name: 'links', items: ['Link', 'Unlink', 'Anchor'] },
        { name: 'colors', items: ['TextColor', 'BGColor'] }
        // { name: 'tools', items: ['Maximize', '-', 'About'] }
    ],
    extraPlugins: 'stylesheetparser',
    stylesSet: []
});


Solution

The solution is to manually destroy CKEditor when the form is submitted, and tell it not to revert values.

$('form').submit(function (e) {
    // workaround for Firefox 11 plus CKEditor 3.6.2/3.6.3
    if ($.browser.mozilla) {  // sometimes this problem also occurs in IE; just replace this condition with true
        for (var instanceName in CKEDITOR.instances) {
            if (CKEDITOR.instances[instanceName]) CKEDITOR.instances[instanceName].destroy(false);
        }
    }
});

For ASP.NET WebForms, you can find some hints in this link:

http://stackoverflow.com/questions/1230573/how-to-capture-submit-event-using-jquery-in-an-asp-net-application


DirectoryInfo.Delete: The directory is not empty

Sometimes, when you call the Delete(true) method of the DirectoryInfo class, you will receive an exception saying "The directory is not empty". Some people say that there is some kind of curse in the directory. In the end, building the solution in the Debug configuration seemed to be the root cause; the problem was solved by building the solution in the Release configuration.

QNAP NAS advanced folder permission

Having waited for 2 years, QNAP NAS finally supports folder-level permissions.

However, after enabling advanced folder permissions, my phone rang again and again, non-stop. Many users complained that they got access denied for some files and some folders, seemingly at random. When you log in to the web interface, you can see that the owner field of each such file/folder has changed to the complaining user. For folders, you can still use the web interface to change it back one by one, but for files you can do nothing.

Finally, log in via SSH and run ls -l on the folders; you will discover that the permissions are 070! The owner got NO permission! Simply doing a chmod u+rwx -R * in the problematic folder solves this problem, but use this method at your own risk.
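
As a concrete sketch (the share path below is an assumption; point it at whichever shared folder is affected):

# log in over SSH first, then restore the owner's permissions recursively
cd /share/CACHEDEV1_DATA/Public      # the affected shared folder -- adjust the path
chmod -R u+rwx .                     # give the owner back read/write/execute on everything below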