Friday, June 21, 2013

Google 2 factor authentication with Zimbra Desktop

I would like to thank the user "wdurham" for pointing me in the right direction.

For Google account security reasons, I decided to enable 2-factor authentication earlier this year. I have been using the Zimbra Desktop email client to consolidate my various email accounts. There are many good features implemented in Zimbra Desktop. The (now) sad reality is that VMware has stopped any further development, so I will be looking for a new client :(


  1. Enable Google's 2-factor/2-step authentication
  2. Create an Application Specific Password (ASP) and keep it handy
  3. Temporarily unlock the account
  4. Use the newly created ASP in the password field of the Zimbra Desktop client

Monday, October 29, 2012

Zimbra 8.0 upgrade from Zimbra 7.2 on CentOS 6.x quickie

Woot Woot. The rather well done Zimbra Collaboration Server ("enterprise-class email, calendar and collaboration solution") has a major version change from the 7.2 series to version 8.0.0! We chose to run the Zimbra Collaboration Server Open Source Edition (OSE). We have previously upgraded through the 7.1 and 7.2 series with no issues. This is a brief overview of what needed to happen to get from 7.2.0 -> 8.0.0 on my CentOS 64-bit test server. I will take the production system from 7.2.1 -> 8.0.0 after a couple of weeks of random testing. I'm not crazy enough to just put a dot0 version in production. I may even wait a month to make sure enough bugs are worked through. I do have a job I want to keep ;)

Naturally, one should start by reading the very informative Zimbra 8.0.0 GA Release Notes. I am glad to see the "Inline reply" addition and the "Autocomplete matches now work correctly for Contact Groups" bug fix! The "Per folder message retention policy creation" feature is long overdue IMHO. Don't shortchange your upgrade by skipping the release notes. You *should* take the time to go through them!


As root:
screen  # run inside screen so a dropped SSH session can't kill the upgrade mid-flight
tar -xzvf zcs-8.0.0_GA_5434.RHEL6_64.20120907144639.tgz
cd zcs-8.0.0_GA_5434.RHEL6_64.20120907144639
./install.sh --platform-override  # override is needed since CentOS is not an officially supported platform
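
Once the installer finishes, I like to do a quick sanity check on the version and services as the zimbra user:
su - zimbra -c "zmcontrol -v"      # should now report the 8.0.0 release
su - zimbra -c "zmcontrol status"  # all services should show as Running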

To get the new 5-year default SSL cert, you will want to run, as user zimbra:
sudo /opt/zimbra/bin/zmcertmgr createca -new   # generate a new self-signed CA
sudo /opt/zimbra/bin/zmcertmgr deployca        # deploy that CA
sudo /opt/zimbra/bin/zmcertmgr deploycrt self  # create and deploy the self-signed server cert
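
To double-check what actually got deployed (just a sanity step, not required), zmcertmgr can show the live certificate:
sudo /opt/zimbra/bin/zmcertmgr viewdeployedcrt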

Production server upgrade note (10/29/2012):
The Zimbra 8.0.0 upgrade/install seems to have reset several of my custom settings back to their default values! I have made most of my custom changes based on the very handy Zimbra wiki's Performance Tuning Guidelines for Large Deployments. The main changed items I've spotted so far are zimbraImapMaxConnections and zimbraImapNumThreads, which both reverted back to 200. I hope not to find too many more production system oddities! The reason I noticed the issue is that /opt/zimbra/log/mailbox.log had MANY messages like:
WARN  [ImapServer-76] [] imap - Dropping connection (max connections exceeded)
WARN  [ImapServer-80] [] imap - Dropping connection (max connections exceeded)
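
Checking and restoring the values is quick with zmprov as the zimbra user. The 1000/500 numbers below are placeholders only; plug in whatever your own tuning notes call for:
zmprov gs `zmhostname` zimbraImapMaxConnections zimbraImapNumThreads  # show the current (reset) values
zmprov ms `zmhostname` zimbraImapMaxConnections 1000  # example value, not a recommendation
zmprov ms `zmhostname` zimbraImapNumThreads 500       # example value, not a recommendation
zmmailboxdctl restart  # mailboxd needs a restart to pick up the new IMAP limits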

Wednesday, September 26, 2012

Adobe Acrobat Reader 64-bit CentOS (RHEL or SL) linux LDAP problem

Let me start by saying, "Shame on you, Adobe!" for not recognizing an opportunity to continue in the Linux OS realm in a meaningful way... I already use Evince for most of my PDF files now anyway. Dropping real Flash support is just another nail in the coffin.

Here's the deal: Adobe doesn't have a 64-bit Acrobat Reader (acroread). So, when you install the 32-bit version, only some of the required 32-bit dependencies get installed. More may actually be needed. This is especially true if you have LDAP (openldap) at the center of your authentication realm. You will end up with a message like:
GLib-WARNING **: getpwuid_r(): failed due to unknown user id
As user root, you will need to run:
yum -y install nss-pam-ldapd.i686
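
A quick way to confirm the 32-bit NSS LDAP module actually landed (just a sanity check on my part):
rpm -q nss-pam-ldapd.i686
With that package in place, acroread should resolve your LDAP user id without the GLib warning.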

Tuesday, September 11, 2012

Eye-Fi Connect X2 mini review/recommendation

Summary:

Eye-Fi makes a nice little SD card with built-in wifi called the Eye-Fi Connect X2, among others. This little gem is great! The title should really be, "Amazing geek toy and CYA device"...
I find the Eye-Fi Connect X2 card and Android Eye-Fi App to be very useful and mostly reliable on my Samsung Galaxy Tab Plus GT-P6210, running both the original Android 3.2 Honeycomb and the recently updated Ice Cream Sandwich. There is a great deal of comfort in knowing I have an immediate backup copy of every picture I take.

Background:


Pictures are of major importance to me, so safeguarding my photos with multiple copies is a priority. Under normal circumstances, I would take some random number of pictures, head to my laptop or home system, copy the pictures, rotate out the SD card, take more pictures, etc.

I had an original Eye-Fi card from a couple of years ago. I tried a couple of the Linux options, like the Python-powered eyefiserver, with some luck. Sadly, at the time, this option was not very portable since it needed an access point plus a laptop or similar. Besides, reading the original Eye-Fi card was a pain for me under Linux.

Skip ahead a couple of years to the X2 versions... add a tablet or smartphone and Ad-hoc network support for wireless uploads... welcome to a whole new world.

Now, with my Android-powered phone or my great little Samsung Galaxy Tab Plus GT-P6210 and the Android Eye-Fi App (or iOS App), I just start the App, shoot a pic, and have an original-sized copy on my phone or tablet!

OK, there's a bit more to the process, of course. Some will not like the *requirement* of an Eye-Fi account to use the app. Others will note that I needed to register the SD card with a Windows system. I wish that were different, but it's not.

Just as a side note, the Ad-hoc network range was about 40-45 feet as a general rule for quick transfers. As the distance went just past this range, the time to upload increased by a minute or more. So, proximity does matter, as you would expect for such a tiny device. Also, there were a couple of times when the Ad-hoc network would not connect. Restarting the app seemed to provide relief.

Friday, July 27, 2012

OMSA 7.0 firmware update issue or holdover public key problem from 2010? 

Updated Dell's OMSA from 6.5.1 to 7.0 via the standard yum process. Checked for any new goodies via "yum install $(bootstrap_firmware)". Tried to update the firmware, but it failed:
# update_firmware  --yes
Running system inventory...
Searching storage directory for available BIOS updates...
Checking BIOS - 6.1.0
Available: dell_dup_componentid_00159 - 6.1.0
Did not find a newer package to install that meets all installation checks.
Checking SAS/SATA Backplane 0:0 Backplane Firmware - 1.07
Available: dell_dup_componentid_11204 - 1.07
Did not find a newer package to install that meets all installation checks.
Checking PERC 6/i Integrated Controller 0 Firmware - 6.3.1-0003
Available: pci_firmware(ven_0x1000_dev_0x0060_subven_0x1028_subdev_0x1f0c) - 6.3.1-0003
Did not find a newer package to install that meets all installation checks.
Checking OS Drivers - 0
Available: dell_dup_componentid_18981 - 7.0.0.4
Found Update: dell_dup_componentid_18981 - 7.0.0.4
Checking Dell Lifecycle Controller - 1.5.1.57
Available: dell_dup_componentid_18980 - 1.5.2.32
Found Update: dell_dup_componentid_18980 - 1.5.2.32
Checking NetXtreme II BCM5709 Gigabit Ethernet rev 20 (eth1) - 6.2.16
Available: pci_firmware(ven_0x14e4_dev_0x1639) - 6.2.16
Available: pci_firmware(ven_0x14e4_dev_0x1639_subven_0x1028_subdev_0x0235) - 7.0.47
Found Update: pci_firmware(ven_0x14e4_dev_0x1639_subven_0x1028_subdev_0x0235) - 7.0.47
Checking NetXtreme II BCM5709 Gigabit Ethernet rev 20 (eth0) - 6.2.16
Available: pci_firmware(ven_0x14e4_dev_0x1639) - 6.2.16
Available: pci_firmware(ven_0x14e4_dev_0x1639_subven_0x1028_subdev_0x0235) - 7.0.47
Found Update: pci_firmware(ven_0x14e4_dev_0x1639_subven_0x1028_subdev_0x0235) - 7.0.47
Checking ST3450857SS Firmware - es65
Available: dell_dup_componentid_20795 - es65
Did not find a newer package to install that meets all installation checks.
Checking iDRAC6 - 1.80
Available: dell_dup_componentid_20137 - 1.85
Found Update: dell_dup_componentid_20137 - 1.85
Checking NetXtreme II BCM5709 Gigabit Ethernet rev 20 (eth2) - 6.2.16
Available: pci_firmware(ven_0x14e4_dev_0x1639) - 6.2.16
Available: pci_firmware(ven_0x14e4_dev_0x1639_subven_0x1028_subdev_0x0235) - 7.0.47
Found Update: pci_firmware(ven_0x14e4_dev_0x1639_subven_0x1028_subdev_0x0235) - 7.0.47
Checking NetXtreme II BCM5709 Gigabit Ethernet rev 20 (eth3) - 6.2.16
Available: pci_firmware(ven_0x14e4_dev_0x1639) - 6.2.16
Available: pci_firmware(ven_0x14e4_dev_0x1639_subven_0x1028_subdev_0x0235) - 7.0.47
Found Update: pci_firmware(ven_0x14e4_dev_0x1639_subven_0x1028_subdev_0x0235) - 7.0.47
Checking Dell 32 Bit Diagnostics - 5154a0
Available: dell_dup_componentid_00196 - 5154a0
Did not find a newer package to install that meets all installation checks.
Checking System BIOS for PowerEdge R710 - 6.1.0
Available: system_bios(ven_0x1028_dev_0x0235) - 3.0.0
Did not find a newer package to install that meets all installation checks.
Found firmware which needs to be updated.
Running updates...
Installing dell_dup_componentid_18981 - 7.0.0.4
Installation failed for package: dell_dup_componentid_18981 - 7.0.0.4
aborting update...
The error message from the low-level command was:
Update Failure: Partition Failure - The Delete Dynamic Partition has failed
Tried the /etc/redhat-release fix without success. Tried a reboot to flush out any oddities...

After much googling, it seems that I had some kind of public key issue:
rpm --import http://linux.dell.com/files/libsmbios/download/RPM-GPG-KEY-libsmbios
rpm --import http://lists.us.dell.com/linux-security-publickey.txt
Now update_firmware works again as expected.
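
If you want to confirm the keys were actually imported (purely optional), rpm tracks them as gpg-pubkey packages:
rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n'  # the Dell keys should show in the summary column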

Tuesday, July 10, 2012

CentOS 6.3 mdadm won't start older md arrays.

For some of us, the drive setups we create stay with a system for a long time. Keeping the same data disk array untouched even across major revision changes is common (like an OS rebuild from 5.x -> 6.x). Sometimes that long-term usage bites back. Here is my failure case while upgrading from CentOS 6.2 -> CentOS 6.3.

Symptoms:
A simple md RAID 1 extra data drive will not start at boot. The system drops to recovery mode with a missing (md) drive to mount and an fsck request. The extra file system has 2 "linux_raid_member" drives that show up under fdisk and blkid. Even a "cat /proc/mdstat" shows no arrays. If I run, as root:
mdadm --auto-detect
then /proc/mdstat will finally show the array info, however.

Solutions:
  1. Make sure that the /etc/mdadm.conf contains the array info from: 
    mdadm --examine --scan >> /etc/mdadm.conf
  2. There is a somewhat "deprecated" mdadm technical note about older-created arrays, with BZ-788022 implicated, where a "+0.90" needs to be added to the AUTO line in /etc/mdadm.conf. Note that you also need to get rid of the "+1.x -all" options (see the sketch just below this list)!
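As a rough sketch of what that AUTO line might look like afterwards (my reading of the note; adjust to your own setup):
# CentOS 6.3 ships with a line like this, which skips old-style arrays:
# AUTO +imsm +1.x -all
# allow the old 0.90-metadata array to auto-assemble and drop the restrictive options:
AUTO +0.90
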
How do you tell in advance that you will have an issue *before* an upgrade? If you run, as root,
mdadm -E /dev/sdc1|grep Vers
and get output like: "Version : 0.90.00", you will want to make a change *before* you reboot!
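
Here is a rough loop of my own (adjust the device glob to match your disks) to flag every member still carrying 0.90 metadata:
for dev in /dev/sd?[0-9]; do
    ver=$(mdadm -E "$dev" 2>/dev/null | awk '/Version/ {print $3}')
    case "$ver" in 0.90*) echo "$dev uses old 0.90 metadata ($ver)";; esac
done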

It is interesting to note that this md array just worked for me in 6.0, 6.1 and 6.2 because,
In Red Hat Enterprise Linux 6.1 and 6.2, mdadm always assembled version 0.90 RAID arrays automatically due to a bug.

Thursday, June 21, 2012

"DEFROUTE" in RHEL, CentOS or Fedora - a usage case for eth0

Here is the situation: There is a user with a remote office that includes networking equipment like printers, a backup server, etc. He uses a company-provided CentOS laptop as the gateway for his network via a cell card, since there is no cheaper option available to get him connected. He travels with the laptop often, and I don't need access to the equipment at his office unless he's there. The user always connects via either WIFI or a cell card for the VPN connection, depending on what is available. The laptop is running openvpn and connects to our company's openvpn server. No problem... until I also want to connect the laptop's ethernet cable, let NetworkManager manage it, and use a DHCP server for eth0... That's when the VPN drops and the laptop ends up with an incorrect default route. What's odd is that the order of interface startup does not matter... dhcp on eth0 wins no matter what... bummer.


What's happening? Well, normally with DHCP, the client is given a default route as part of the information exchange. That eth0 default route takes over. It's just that simple... unless you give the networking system guidance on what to do, in the form of the "DEFROUTE" option. The option has been around for a long time. The current Red Hat Enterprise Linux 6 Deployment Guide has the DEFROUTE option hidden in the 8.2.4. Dialup Interfaces section. Here is a chunk of the information from the guide:
DEFROUTE=answer
where answer is one of the following:
  • yes — Set this interface as the default route.
  • no — Do not set this interface as the default route.         
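
So in this case, telling the system that eth0 must never supply the default route might look like the following. This is an illustrative /etc/sysconfig/network-scripts/ifcfg-eth0; the values are examples only:
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
DEFROUTE=no  # take the DHCP address, but never let eth0 install the default route
PEERDNS=no   # optional: also keep DHCP from rewriting resolv.conf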

In the past, I did not use the DEFROUTE option. I found that I could just statically assign eth0 and *not* let NetworkManager have access to it (NM_CONTROLLED=no). In fact, with CentOS (and the like), NM seems to get disabled as a general rule.
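
For reference, that older static approach looked roughly like this (the addresses here are made up):
DEVICE=eth0
BOOTPROTO=none
IPADDR=192.168.10.2  # example address only
NETMASK=255.255.255.0
ONBOOT=yes
NM_CONTROLLED=no  # keep NetworkManager's hands off eth0
# note: no GATEWAY line, so eth0 never competes for the default route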

Also, if this were a server, or if I wanted to statically assign the interface, it would not be an issue. Just one of those fringe usage cases.