Saturday, March 22, 2014

Making the Juniper SSL VPN Network Connect applet work on 64-bit Linux (Fedora)

The Network Connect Java applet from older Juniper SSL VPN appliances does not work on 64-bit systems without some additional steps. Here they are.

1. Install a 32-bit version of Java older than "7 update 51".
         You can use 7u51, but then you need to add your SSL VPN endpoint to the exception list:
         http://kb.juniper.net/InfoCenter/index?page=content&id=KB28704
 You can download the 32-bit JRE 7u45 from the Java archive on the Oracle web site.
 I recommend downloading the tar.gz archive instead of the rpm: it lets you keep multiple Javas installed and switch between them using alternatives.
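
If you do stay on 7u51, the exception list can also be populated from the shell; a minimal sketch, assuming your endpoint is https://vpn.example.com (replace with your real SSL VPN address):

$ mkdir -p ~/.java/deployment/security
$ echo "https://vpn.example.com" >> ~/.java/deployment/security/exception.sites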

2. Install Java and configure alternatives to use the downloaded Java:
Install:
$ tar -xvzf ./jre-7u45-linux-i586.tar.gz
# mv /home/myuser/Downloads/jre1.7.0_45/ /usr/java/

Configure alternatives for the 32-bit Java:

# alternatives --install /usr/bin/java java /usr/java/jre1.7.0_45/bin/java 32
Configure alternatives for the Java browser plugin (yes, I suggest using alternatives to manage Java plugins):
# alternatives --install /opt/google/chrome/plugins/libnpjp2.so java_chrome /usr/java/jre1.7.0_45/lib/i386/libnpjp2.so 32

Switch alternatives to the installed 32-bit Java:
# alternatives --config java_chrome
# alternatives --config java
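
For Firefox the alternatives trick is not strictly needed; a plain symlink into the user plugins directory should work (the JRE path matches the install step above):

$ mkdir -p ~/.mozilla/plugins
$ ln -sf /usr/java/jre1.7.0_45/lib/i386/libnpjp2.so ~/.mozilla/plugins/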

3. Test that the Java plugin works and reports the correct version in Chrome or Firefox:
go to http://www.java.com/en/download/installed.jsp and verify the Java version.
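
You can also verify from the shell which Java the alternatives now point to:

$ java -version
$ alternatives --display java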

4. Install xterm and 32-bit versions of some libraries:
    # yum install xterm glibc.i686 zlib.i686
    These components are required to install and run the Network Connect application.

5. Go to your SSL VPN endpoint in a browser, log in and launch Network Connect.
You should now see a new xterm window and a sudo password request for the first-time Network Connect installation.

If not --> check the Network Connect install log for details:
$ less ~/.juniper_networks/network_connect/installnc.log

Normally after this you should see Network Connect start and work.

If not ---> check:
 - presence of the Network Connect application, its file permissions and ownership:
      $ ls -la ~/.juniper_networks/network_connect/
         -rws--s--x. 1 root   root   1281164 Mar 21 22:17 ncsvc
 - whether the Network Connect application can run:
$ ~/.juniper_networks/network_connect/ncsvc --version
Juniper Network Connect Server for Linux.
Version         : 7.1
Release Version : 7.1-16-Build26805
Build Date/time : Aug 21 2013 01:11:08 
Copyright 2001-2010 Juniper Networks
If you see any error, check for missing 32-bit libraries.
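
A quick way to spot what is missing (ldd lists the shared libraries the binary needs):

$ ldd ~/.juniper_networks/network_connect/ncsvc | grep "not found"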
  

Monday, March 17, 2014

Junk Cloud POC project.

Junk Cloud is a quickly deployed CloudStack environment running on the KVM hypervisor and built from outdated equipment.

As a first attempt to build Junk Cloud I tested CloudStack's baremetal support. What do I expect from baremetal support?

  • reduced time to configure and deploy each host
  • computing resources (physical hosts) can be added to the cloud in minutes instead of hours
  • fast deployment of a new private cloud
  • easy utilization of outdated hardware


Unfortunately, CloudStack's built-in baremetal support is limited: http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Installation_Guide/choosing-a-hypervisor.html


  • no support for advanced networking
  • no support for central storage (SAN)
  • no support for live migration (vMotion) and HA

So, I decided to build my own version of baremetal support for CloudStack:


  • network boot (PXE boot; see the sketch below)
  • automated network install of Linux + KVM + networking + CloudStack
  • storage configuration
  • upon completion, automatic self-provisioning into CloudStack as a new host (resource)
The Anaconda install script for building and configuring a CentOS-based KVM host for CloudStack (with advanced networking support) is on GitHub: https://github.com/IhorKravchuk/cloudstack_ingvar/blob/master/Centos_6x64_kvm_cloudstack.cfg

For test purposes this script is missing the automatic self-provisioning into CloudStack; it will be updated after the final tests.
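
For the network-boot step, a minimal pxelinux.cfg/default sketch could look like this (the TFTP file layout and the kickstart URL are assumptions, adjust to your environment):

default centos_kvm
prompt 0
label centos_kvm
  kernel centos6/vmlinuz
  append initrd=centos6/initrd.img ks=http://10.0.0.10/Centos_6x64_kvm_cloudstack.cfg ksdevice=eth0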

Tested and suggested deployment architecture (for small deployments):



From my point of view, the implementation of the proposed architecture comes down to:

  • core components: cloud controller + network storage, installed and configured
  • to add a new host, just connect it to the network and power
  • everything else is done automatically
A private cloud? It's that simple:

Build a Cloud Controller:
Add NFS NAS:
And finally add all your old hardware:





CloudStack-BIND DNS integration.

How do you integrate CloudStack DNS with your organization's DNS environment?
All recommendations below are based on the approach of using one sub-domain per CloudStack network.
 1. The simplest way: conditional forwarding from a Windows DNS server to the CloudStack VR for the corresponding sub-domain, or (for BIND) sub-domain delegation to the VR.
pros: easy to build
cons: the VR runs as a non-authoritative DNS server for the sub-domain; each time a new record is added the dnsmasq service restarts and you get 2-3 seconds of downtime (up to v4.1); all requests go to the VR.
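
On the BIND side, a conditional-forward stanza would look roughly like this (the sub-domain name and VR IP are assumptions):

// named.conf: forward one CloudStack sub-domain to the virtual router
zone "netA.cloud.example.com" {
    type forward;
    forward only;
    forwarders { 10.1.1.1; };   // VR address on that network
};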
  2.  CloudStack --> BIND full integration:
The following program solves the DNS integration issues between the CloudStack VR's DNS service and BIND.
It assumes that you are using a sub-domain per network (each network has its own sub-domain), which is IMHO the best way to name instances in CloudStack.
How it works:
On an event or on a schedule, the program calls the CloudStack API and gets the list of networks and the list of VMs. Using these lists and preconfigured domain settings, it creates the zone file for BIND, pushes it to the server and reloads BIND.
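
The generated records end up as ordinary A records, one per VM, following the name.sub-domain pattern (names and IPs below are illustrative):

vm01.netA.cloud.example.com.		300	IN	A	10.1.1.15
vm02.netB.cloud.example.com.		300	IN	A	10.1.2.23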
The program can be run in two different ways:
  1.  installed on the DNS server, updating DNS records at a scheduled interval (schedule driven);
  2.  installed on the CloudStack management server, watching for new VM deployments in the CloudStack catalina.out log (event driven).
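
For the schedule-driven mode, a crontab entry along these lines is enough (the five-minute interval is an assumption):

*/5 * * * * /usr/bin/dns_builder.py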

The proposed version is running in a heavily used CloudStack environment with very frequent SaltStack-driven deployments, with almost no issues.



The script is under active development and testing and will be updated.
Version 2.0 has been released: all parameters are now loaded from the dns_builder.conf file, and both local and remote DNS servers are supported.
PS:
Sometimes this script fails because of an unexpected CloudStack response:
 
    dns_table = get_dns()
  File "/usr/bin/dns_builder.py", line 134, in get_dns
    output+= vm['name'] + "." + net_dict[vm['nic'][0]['networkname']] + "\t\t\t300\tIN\tA\t" + vm['nic'][0]['ipaddress'] +" \n"
IndexError: list index out of range

Instead of adding exception handling to keep the script running, I used the supervisord daemon.
It starts the script, makes sure it is running, restarts it in case of failure and takes care of the logs.
The part of supervisord.conf related to the script:
[program:dns_builder]
command=/usr/bin/dns_builder.py      ; the program (relative uses PATH, can take args)
priority=100                ; the relative start priority (default 999)
autostart=true              ; start at supervisord start (default: true)
autorestart=true            ; restart at unexpected quit (default: true)
;startsecs=10                ; number of secs prog must stay running (def. 10)
startretries=5              ; max # of serial start failures (default 3)
;exitcodes=0,2               ; 'expected' exit codes for process (default 0,2)
;stopsignal=QUIT             ; signal used to kill process (default TERM)
;stopwaitsecs=10             ; max num secs to wait before SIGKILL (default 10)
;user=chrism                 ; setuid to this UNIX account to run the program
log_stdout=true             ; if true, log program stdout (default true)
log_stderr=true             ; if true, log program stderr (def false)
logfile=/var/log/dns_builder.log    ; child log path, use NONE for none; default AUTO
logfile_maxbytes=1MB        ; max # logfile bytes b4 rotation (default 50MB)
logfile_backups=2          ; # of logfile backups (default 10)

Monday, March 10, 2014

Firewall rule logging

Problem:
     When the firewall is managed by the NOC team and you are part of infosec, it's really hard to maintain a reasonable log level in the firewall rules.
    Even if the log / do-not-log decision for each rule is part of the change request process and made by infosec, you risk having too many logs, having duplicated logs of the same event from different firewalls, or spending more time on each request to avoid the problems mentioned.
     In all other cases you will definitely end up with too many logs, which can even overload and slow down your FW device (for me it happened on an OpenBSD PF firewall when syslog consumed all the memory).
  Sure, you can optimize the FW rules and reduce the amount of logging, but it's a thankless, time-consuming process, especially if you have many FW devices.
 
 Solution:
    Create special FW rules just for logging. They go at the beginning of the rule list and are identical on all your FW devices.

Result:
Logging rules that are:

  • easy to create (based on network topology and security zones)
  • easy to check
  • easy to tune
  • easy to distribute and install
Example:
For a PF firewall these rules would look like:


match log inet proto tcp from 145.23.56.15 to 10.156.25.15 port 80
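
A slightly fuller sketch of zone-based, log-only rules (the interface, tables and networks are assumptions; match rules only log, they do not pass or block anything):

ext_if = "em0"
table <dmz>    { 10.156.25.0/24 }
table <inside> { 10.10.0.0/16 }
match log on $ext_if inet proto tcp from <inside> to <dmz>
match log on $ext_if inet proto tcp from any to <dmz> port 80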