Remote Lab Access and Control

A requirement I quickly came to realize while building my lab is remote access to my lab equipment. This requirement is twofold: I don't feel like always sitting in my basement to build topologies, and I'm not always home when I study. This need naturally led me to acquiring a terminal server, which took care of the first need of not always having to hang out in the basement while studying. Leaving my lab equipment powered on all the time didn't appeal to me either, as I don't like wasting electricity, so I found a Remote Power Control (RPC) unit, also known as a switched PDU.

I enjoyed setting everything up, so I figured I'd share the configuration steps I took to get the two devices communicating with each other and functioning. The two devices I used were an Opengear IM7200 terminal server and an Avocent (Cyclades) PM10. The setup is pretty straightforward with minimal steps.

First, you need to make sure the RPC unit is cabled properly. For the PM10, a serial console connection is made with a UTP straight-through cable from one of the serial ports on the Opengear terminal server to the "In" port on the PM10. You can daisy-chain multiple PM10s together by going from the "Out" port to the "In" port on the next PM10, but I recommend setting up each PM10 as an individual serial port on the terminal server. This gives more flexible control, and you won't lose multiple RPCs if you have a failure "upstream" in the daisy chain. After the cabling is taken care of, it's time to move on to the fun part: configuration!

The first configuration component is the serial port on the IM7200 that connects to the PM10. To configure it, navigate to the Serial Port configuration section:

The next step is to configure the serial port connected to the PM10 by editing that port on the IM7200:

The following settings are specific to the PM10 connection and need to be configured on the serial port of the IM7200 for connectivity (the same parameters are shown as a quick code sketch after the list). Settings include:

  • Label – Port name you would like
  • Baud Rate – 9600
  • Data Bits – 8
  • Parity – None
  • Stop bits – 1
  • Flow control – None
  • Port Pinout – Cisco Straight (X1)
  • Terminal Type – ansi
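
For reference, here is roughly what those same parameters look like in code. This is only a sketch – in this setup the Opengear handles the serial side, so you would only use something like this if you plugged the PM10's "In" port directly into a machine with a serial adapter (the port name below is a made-up example):

    import serial  # pyserial

    # Same console parameters as the IM7200 port settings above
    pm10 = serial.Serial(
        port="/dev/ttyUSB0",            # hypothetical USB-to-serial adapter
        baudrate=9600,                  # Baud Rate - 9600
        bytesize=serial.EIGHTBITS,      # Data Bits - 8
        parity=serial.PARITY_NONE,      # Parity - None
        stopbits=serial.STOPBITS_ONE,   # Stop bits - 1
        xonxoff=False,                  # no software flow control
        rtscts=False,                   # no hardware flow control
        timeout=2,
    )

    pm10.write(b"\r\n")                 # nudge the console for a prompt
    print(pm10.read(128).decode(errors="ignore"))
    pm10.close()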

In addition to the required serial settings, the serial port must be set to a device type of "RPC" so that the terminal server knows how to handle the port:

Next, navigate to the RPC configuration under Serial & Networks:

Next, click on "Add RPC":

Next, set up the RPC configuration on the IM7200 with the following settings:

  • Connected via – Serial Port previously configured
  • RPC Type – Cyclades PM10
  • Name – Whatever you would like to name it
  • Outlets (optional) – set it to 10 or leave it as default for auto-probing
  • Username / Password – Set to admin/password for PM10
  • Log Status – Enabled (Checked)
  • Log Rate – Setting you would like

The next step is to configure the serial ports connected to the console ports of the devices controlled by the RPC, with the Power Menu enabled:

The last step is to set up a Managed Device for each device to be controlled by the RPC. To do so, navigate to "Managed Devices" under Serial & Networks:

Click "Add Device":

Finally, configure the device with a name, assigned console port, and assigned RPC port:

After configuration, the devices can be managed under Devices:

Or right from the console sessions via the terminal server:

Happy labbing!


Cisco ISE REST API & Python

I've been faced with a fun little challenge: how to make sure our ISE deployment has every NAD (Network Access Device) configured appropriately to allow for successful EAP communications. Originally I was planning on utilizing a CSV and the bulk import tool to regularly import new devices into ISE as they were built. This allows a number (small or large) of devices to be imported into ISE without taking too much time. This has worked well in the past, but it creates a reliance on making sure the CSV is proper, and someone (me) still has to manually log in and import the file. With that, I decided to look into other possibilities to remove the "me" from the process flow. At first I was looking into ways to automatically populate the CSV and then script out a way to log in to ISE and force the bulk import. While that option would work, it seemed too complicated to really deploy and rely on. I finally decided to take another whack at using the REST API (I had previously tried years prior with ACS but did not have much luck).

There are two things that need to be done on ISE prior to being able to utilize the REST API. The below screens and settings are based on ISE 2.2 but are similar across all recent releases of ISE:

  1. Create an account that will be utilized for the REST calls. To do this, navigate to: Administration > Admin Access > Administrators > Admin Users and click on "Add":
    Currently there are two different access types you can assign: Read/Write or ReadOnly. For the code about to be run, we need Read/Write.
  2. Enable ERS (External RESTful Services) to allow REST calls. To do this, navigate to: Administration > System > Settings > ERS Settings, then select "Enable ERS for Read/Write" under the Primary Administration Node:
    This setting must be re-enabled after each upgrade, as it's set to disabled during the upgrade. If you plan to utilize the REST API, I recommend adding a step to your upgrade documentation/process to re-enable the REST API at the end of the upgrade.

After a user account is created and ERS is enabled, the REST API can be utilized via HTTPS on port 9060. API documentation can be found at: https://ISE-PAN-IP:9060/ers/sdk
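
A quick way to sanity-check that ERS is reachable and the new account works is a simple GET against the network device endpoint. This is just a rough sketch (endpoint path taken from the ERS SDK page above; update the PAN address and credentials for your environment):

    import requests

    # Ignore self-signed certificate warnings (lab only)
    requests.packages.urllib3.disable_warnings()

    url = "https://ISE-PAN-IP:9060/ers/config/networkdevice"
    headers = {
        "Accept": "application/json",          # XML is also supported via the ERS media types
        "Authorization": "Basic <your-key>",   # base64 of the user:password created above
    }

    resp = requests.get(url, headers=headers, verify=False)
    print(resp.status_code)   # 200 means ERS is enabled and the account works
    print(resp.text)          # the network devices currently defined in ISE

If this returns a 401, the account or credentials are wrong; if the connection is refused, ERS probably isn't enabled yet.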

Now that the API is exposed, it's time for some fun! But first, some cautions/warnings…

  1. This is by no means a tutorial on REST APIs or Python.
  2. You really should have a good understanding of REST APIs before enabling this. I'm still skeptical about the security around access when it comes to REST.
  3. You should never use a production system to develop code that makes changes to it.
  4. Use the code shown at your own risk!

And a few notes about the code…

  1. The below code is not complete, and needs tweaking to be functional. Its intention is simply to show a proof of concept for automating device creation.
  2. The code calls 'nad.xml', which is a separate XML file (it can be found in my GitHub repository). I will not be going over the file in this tutorial, but it can be manipulated for actual use.
  3. The final output is not pretty and may not be complete depending on the number of devices being imported.
  4. The code below is a picture due to me not knowing how to easily paste code that looks nice on WordPress. A copy of the code can be found in my GitHub repository.
  5. The IP address or FQDN of your ISE PAN needs to be updated prior to running the code.
  6. A proper authorization key needs to be added prior to running the code. This will be from the account you created earlier.
  7. It would be a good idea to create a variable for the ISE PAN information to use for multiple URLs.
  8. It would be a good idea to create a variable for the authorization information to use for multiple calls.

Now its really time for the fun part!

The below code is intended to do two things: bulk-create network devices in ISE and verify the status of the bulk job:

(Code screenshot – a copy of the code can be found in my GitHub repository.)
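
Since the screenshot doesn't paste well here, the following is a rough reconstruction of what the script does, based on the breakdown below. The endpoint paths come from the ERS SDK and the HTTP verb for the bulk submit is my assumption – check the SDK for your version – and the line numbers referenced in the breakdown refer to the original copy on GitHub, not this sketch:

    import requests

    # Read in the XML payload (nad.xml) describing the network devices to create
    with open("nad.xml") as xml_file:
        payload = xml_file.read()

    # First API call: bulk submit of the network devices (update with your ISE PAN info)
    url = "https://ISE-PAN-IP:9060/ers/config/networkdevice/bulk/submit"

    headers = {
        "Content-Type": "application/xml",     # generic XML; your version may want the ERS media type
        "Accept": "application/xml",
        "Authorization": "Basic <your-key>",   # authorization info from the account created earlier
    }

    # Submit the bulk job; verify=False ignores the untrusted lab certificate
    response = requests.put(url, data=payload, headers=headers, verify=False)

    # Grab the Location header and parse the bulk ID off the end of the URL
    bulk_id = response.headers["Location"].rsplit("/", 1)[-1]

    # Second API call: check the status of the bulk job (again, update the PAN info)
    status_url = "https://ISE-PAN-IP:9060/ers/config/networkdevice/bulk/" + bulk_id

    status = requests.get(status_url, headers=headers, verify=False)
    print(status.text)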

Code Breakdown:

  • Lines 2 – 8 are simply to deal with importing the XML file. You could just include it in the script and assign it to the payload variable (referenced in line 18) but that doesn’t make this usable in a production environment.
  • Line 11 is the URL used for the first API call. Don’t forget to update with your ISE PAN information.
  • Line 15 is where you should update your authorization information.
  • Line 18 has an extra variable in the request which is “verify” set to “False”. This lets you ignore certificate warnings. In my lab I didn’t bother deploying trusted certs so I needed this.
  • Line 18 is the actual API call being pushed. If you do not care about the status you could simply end here or just print the response.
  • Line 21 grabs just the Location header from the API call's response. The Location header is a URL containing the bulk ID, which is parsed out of the URL for use in the second portion of the script.
  • Line 28 is the URL used for the second API call + the BULK ID. Don’t forget to update with your ISE PAN information.
  • Line 32 is where you should update your authorization information.
  • Line 36 has an extra variable in the request which is “verify” set to “False”. This lets you ignore certificate warnings. In my lab I didn’t bother deploying trusted certs so I needed this.

Let's see the code in action!

First, we will look to see what's configured in ISE for network devices:

Now let's run the script:

As can be seen, the bulk job containing 10 network devices was still in progress when the status check was run. If this were production code, there are multiple options to avoid this: a timer could be implemented, the user could be asked when to run the check, or the script could keep checking until the job completes.
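
If you went with the "keep checking until it completes" option, a minimal (hypothetical) addition to the end of the script could look like the following – it just re-polls the bulk status URL until the job stops reporting an in-progress state (the exact status text to match is an assumption; check the XML your ISE version returns):

    import time

    # status_url and headers are re-used from the sketch above
    while True:
        status = requests.get(status_url, headers=headers, verify=False)
        if "RUNNING" not in status.text.upper():
            break              # job finished (successfully or not)
        time.sleep(5)          # wait a few seconds between checks

    print(status.text)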

Now let's see what we have in ISE:

Ten brand new devices!

The API in Cisco ISE has many different functions that allow for the creation, modification, or deletion of several different objects beyond network devices. This is just one example of the automation power that has been available within ISE for a while now.

 

Hiding (filtering) a specific user from reporting in Cisco ISE

I ran into an interesting problem preparing for an 802.1X deployment – the authentications report in Cisco ISE was full of all the network devices checking to make sure ISE was still available (health checks). As seen below, the load balancer's keepalives fill the logs pretty much on their own – imagine trying to troubleshoot a login issue! YUCK!

Something else I found interesting: my Google-fu (and my knowledge of ACS and how to filter out a certain user there) was no match for this issue. Because of this, I decided a quick how-to would be helpful (I can't be the only person who will want to filter out such an annoying problem).

First, navigate to Administration > System > Logging:

Once in the System Settings for Logging, navigate to "Collection Filters":

At this point, the rest is pretty straightforward. But for completeness, I am going to finish the whole process, so click "Add":

After that, just fill in the type of attribute you want to filter (Username, Policy Set Name, NAS IP Address, Device IP Address, or MAC Address), the Value for the selected attribute, and the Filter Type (Filter All, Filter Passed, Filter Failed, or Bypass Suppression [with time limit]). Finally, click "Submit"!


For me, it made the most sense to filter the username used for the monitors, and to only filter on passed authentications for that username. This allows me to use the fewest filters, and if a health monitor fails for any reason, it will still show up in the reporting.

Final result (don’t mind the old logs, I was too impatient to wait for them to clear):


Happy troubleshooting!

Controlling Traffic to a Virtual Server on F5

There are multiple ways to control what traffic is allowed or not allowed through a BIG-IP or to specific Virtual Servers (VS). The following method uses F5's AFM (Advanced Firewall Manager) to create security policies, which are then applied to a specific VS. For this example, traffic from three specific hosts will be allowed to a specific VS, while all other traffic is blocked. The below diagram illustrates the environment used:

In this example, the VS (192.168.35.100) is set up for TCP port 80 (HTTP) traffic only. This in itself controls some of the allowed traffic flow (assuming there are no other VSs configured to accept all traffic) by only allowing port 80 traffic destined to the VS to be processed. An address list is created with the addresses that are allowed to reach the VS, and then a Network Firewall Policy rule is created and applied to the specified VS with an accept action. To finish this method, a block rule is added to the Network Firewall Policy rule set to drop all other traffic.

The first step is to create an Address List by going to Security > Network Firewall > Address Lists and either clicking the plus sign under Address Lists:

or selecting Create on the next screen:

Enter the following Information (Only Name and Addresses are required):
Click Finish when completed. A side note: the first time I did this, I spent a few moments figuring out that you must press Enter after typing the address in the address field.
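
If you end up creating a lot of these, the same address list can also be built through iControl REST instead of the GUI. This is a rough sketch, assuming the AFM address-list endpoint and placeholder management address, credentials, and host IPs – adjust everything for your environment and verify the paths against your BIG-IP version:

    import requests

    # Ignore self-signed certificate warnings (lab only)
    requests.packages.urllib3.disable_warnings()

    bigip = "https://192.168.35.5"     # hypothetical BIG-IP management address
    auth = ("admin", "admin")          # replace with real credentials

    payload = {
        "name": "Allowed_Hosts",
        "partition": "Common",
        "addresses": [                 # the three allowed hosts (example addresses)
            {"name": "192.168.35.21"},
            {"name": "192.168.35.22"},
            {"name": "192.168.35.23"},
        ],
    }

    resp = requests.post(
        bigip + "/mgmt/tm/security/firewall/address-list",
        json=payload,
        auth=auth,
        verify=False,
    )
    print(resp.status_code, resp.text)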

The next step is to create the Rule Policies by going to Security > Network Firewall > Active Rules and clicking the plus sign under Active Rules:

or selecting Add on the next screen:

Enter the following information:

  • Context: Virtual Server – VS_Name
  • Policy: New – Policy_Name
  • Rule Properties:
    • Name: Rule_Name
    • Description: Rule_Description
    • Source:
      • Address/Region: Specify
      • Type: Address List
      • List: Address_List (From last step)
    • Action: Accept

Select Repeat to move on to the next rule.

For the next rule include the following information:

  • Rule Properties:
    • Name: Rule_Name
    • Description: Rule_Description
    • Source:
      • Address/Region: Any
    • Action: Drop

Click Finished when all set. When adding the block rule, or any other rule for that matter, make sure to add it to the current policy. Otherwise, the current policy will be overwritten along with any rules in it. In this example, creating the blocking rule improperly would overwrite the policy containing the allow rule and block ALL traffic to the VS.

Once completed, it is time to test. Attempting to browse to http://192.168.35.100 from an "allowed" IP address should allow access to the site, while browsing from an IP address that isn't on the list should be blocked.
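
If you would rather test from a script than a browser, something as simple as the following works – run it once from an allowed host and once from anywhere else (the timeout just keeps the blocked case from hanging forever):

    import requests

    try:
        resp = requests.get("http://192.168.35.100", timeout=5)
        print("Allowed - HTTP", resp.status_code)
    except requests.exceptions.RequestException as exc:
        print("Blocked (or unreachable):", exc)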

Sending commands to multiple Terminal Sessions at one time

A few weeks ago I had to hunt down PCs but had no idea where they physically were or what switchports they were connected to. To determine the location of a PC the following steps were taken:

  1. Ping the PC IP address – This refreshes the ARP and MAC-Address tables with the current MAC address and ports
  2. Determine the MAC address from the ARP table on a layer three device < show ip arp | include IP_ADDRESS >
  3. Trace the MAC address to its port < show mac address-table address MAC_ADDRESS >

In a normal environment, the most switches that have to be touched is usually two (the default gateway and the switch the device is connected to). In this instance, however, 9 switches were daisy-chained together in a ring-like fashion… Throw in the fact that roughly 35 devices needed to be traced, and clicking through all of the terminal sessions to trace each MAC address got annoying pretty quickly… Luckily, I remembered something that stuck out to me from one of the wonderful CBT Nuggets videos in the VIRL series by Anthony Sequeira. He had mentioned the Command (Chat) Window in SecureCRT, which can be used to send the same commands to multiple sessions at the same time. After poking around for a few minutes, I figured out how to enable the chat window and have it send commands to every active session. Once set up, tracking down those PCs was a lot less time consuming!
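
(As an aside, the same three lookups can also be scripted if that's more your speed – here is a rough, hypothetical sketch with Netmiko, which is not what I used here; the switch IPs, credentials, and MAC below are placeholders:)

    from netmiko import ConnectHandler

    switches = ["10.0.0.1", "10.0.0.2"]   # hypothetical switch management IPs
    mac = "aaaa.bbbb.cccc"                # MAC learned from the ARP table in step 2

    for ip in switches:
        conn = ConnectHandler(
            device_type="cisco_ios",
            host=ip,
            username="admin",
            password="password",
        )
        print(ip)
        print(conn.send_command("show mac address-table address " + mac))
        conn.disconnect()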

Setting up the Command (Chat) Window is pretty straightforward.

First, enable the Command (Chat) Window under View:

After the Chat Window has been enabled, SecureCRT will have a blank box at the bottom of the screen, as seen below:

The chat window alone will not send commands to all of the active sessions; that feature must be enabled by right-clicking in the chat window and selecting "Send Commands to All Sessions":

After that anything typed into the chat window will be sent to all active sessions!

Here is the command window in action:


As with anything, always be mindful of what you are doing! Having sessions open that you may have forgotten about and then pushing commands to them can be very, very bad!!

A few examples of when this can come in handy are:

  • Tracing MAC addresses
  • Tracing routes across the network
  • Creating the same VLANs across multiple switches
  • Pushing standard configurations for initial configurations in VIRL
  • Many more!