Remote Lab Access and Control

A requirement I quickly came to realize while building my lab is remote access to my lab equipment. This requirement is twofold: I don’t always feel like sitting in my basement to build topologies, and I’m not always home when I want to study. This need naturally led me to acquire a terminal server, which took care of the first part, not having to always hang out in the basement while studying. I also didn’t like the idea of leaving my lab equipment powered on all the time, since I don’t like wasting electricity, so I found a Remote Power Control (RPC) unit, also known as a switched PDU.

I enjoyed setting everything up, so I figured I’d share the configuration steps I took to get the two devices communicating with each other and functioning. The two devices I used were an Opengear IM7200 terminal server and an Avocent (Cyclades) PM10. The setup is pretty straightforward with minimal steps.

First you need to make sure the RPC unit is cabled properly. For the PM10, the serial console connection is a UTP straight-through cable from one of the serial ports on the Opengear terminal server to the “In” port on the PM10. You can daisy chain multiple PM10s together by going from the “Out” port to the “In” port on the next PM10, but I recommend setting up each PM10 on its own serial port on the terminal server. This gives more flexible control, and you won’t lose multiple RPCs if you have a failure “upstream” in the daisy chain. After the cabling is taken care of, it’s time to move on to the fun part, configuration!

The first configuration step is to set up the serial port on the IM7200 that connects to the PM10. To do so, navigate to the Serial Port configuration section.

Then edit the serial port that is connected to the PM10.

The following settings are specific to the PM10 connection and need to be configured on the IM7200’s serial port for connectivity (a quick scripted sanity check follows the list). Settings include:

  • Label – Port name you would like
  • Baud Rate – 9600
  • Data Bits – 8
  • Parity – None
  • Stop bits – 1
  • Flow control – None
  • Port Pinout – Cisco Straight (X1)
  • Terminal Type – ansi
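
Before moving on, you can sanity-check the cabling and serial parameters with a few lines of Python. This is only a rough sketch using the pyserial library, not part of the IM7200 configuration itself, and it assumes you temporarily connect the PM10 to a machine with a local serial adapter (/dev/ttyUSB0 is a placeholder path):

# Minimal sanity check of the PM10 serial settings (9600 8N1, no flow control)
# using pyserial (pip install pyserial). /dev/ttyUSB0 is a placeholder path.
import serial

port = serial.Serial(
    "/dev/ttyUSB0",          # assumed local serial adapter; adjust for your setup
    baudrate=9600,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    xonxoff=False,           # no software flow control
    rtscts=False,            # no hardware flow control
    timeout=2,
)

port.write(b"\r\n")          # nudge the PM10 so it prints a login/command prompt
print(port.read(256).decode(errors="replace"))
port.close()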

In addition to the required serial settings, the serial port must be set to a device type of “RPC” so that the terminal server knows how to handle the port.

Next navigate to the RPC configuration under Serial & Networks.

Click “Add RPC”.

Next, set up the RPC configuration on the IM7200 (a scripted example of controlling the PM10’s outlets follows the list). Settings include:

  • Connected via – Serial Port previously configured
  • RPC Type – Cyclades PM10
  • Name – Whatever you would like to name it
  • Outlets (optional) – set it to 10 or leave it as default for auto-probing
  • Username / Password – Set to admin/password for PM10
  • Log Status – Enabled (Checked)
  • Log Rate – Setting you would like
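
As an aside, once the RPC is defined you can also script outlet control instead of clicking through the web UI. The sketch below is only a rough illustration, not part of the setup above: it drives a telnet session with the pexpect library and assumes the Opengear convention of exposing serial port N on TCP port 2000 + N, the default admin/password credentials from the list, and an on/off/cycle command syntax at the PM10 prompt. Verify the prompts and commands against your own unit before relying on it; the IP address, port, and outlet number are placeholders.

# Rough sketch: power-cycle a PM10 outlet by telnetting through the terminal
# server. Assumptions to verify for your own gear: the IM7200 exposes serial
# port 2 on TCP port 2002 (2000 + port number), the PM10 still answers to
# admin/password, and its CLI accepts "cycle <outlet>".
import pexpect

child = pexpect.spawn("telnet 192.168.1.10 2002", timeout=10)
child.sendline("")                 # nudge the PM10 so it prints its login prompt
child.expect("Username:")          # prompt text is an assumption; adjust as needed
child.sendline("admin")
child.expect("Password:")
child.sendline("password")
child.sendline("cycle 3")          # power-cycle outlet 3 (command syntax assumed)
child.expect(pexpect.TIMEOUT, timeout=3)   # just collect whatever the PM10 prints
print(child.before.decode(errors="replace"))
child.close()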

The next step is to configure the serial ports connected to the console ports of the devices controlled by the RPC, with the Power Menu enabled.

The last step is to set up a Managed Device for each device to be controlled by the RPC. To do so, navigate to “Managed Devices” under Serial & Networks.

Click “Add Device”.

Finally, configure the device with a name, an assigned console port, and an assigned RPC port.

After configuration, the devices can be managed under Devices.

Or right from the console sessions via the terminal server.

Happy labbing!


A short little time lapse I made…

I decided that I wanted to almost double the time it took to re-cable my CCIE lab, so I made a time-lapse video out of it. I think it turned out pretty well!

 

 

The layout is pretty straightforward: I have a 2801 and a 3560 as a “hub,” which acts as a central point of connectivity for 5 other “pods” that each consist of an 1841 and a 3560. A diagram will follow, I’m sure.

 

Enjoy 🙂

CCIE Homelab Tips, Tricks, & Thoughts: CSR1000V Memory Optimization

With the completion of my Master’s degree, I now have more free time to start preparing for my deep dive into CCIE studies. Recently I’ve been working on getting my lab environment together. What I’ve decided is to do a mixture of both physical hardware and virtual instances, which includes a few “pods” of routers and switches along with an ESXi server running both Cisco VIRL and a large number of CSR1000V instances. There are pros and cons to using either hardware or virtual instances, something I will probably cover at a later time, along with a rundown of my lab environment once it’s completed.

Today I want to go over something neat I discovered when researching how to deploy CSR1000Vs for my lab: disabling large pages in ESXi. Disabling large pages lets ESXi’s transparent page sharing deduplicate identical memory pages across VMs, which gives very large memory savings when running multiple instances of similar VMs (such as 20 CSR1000V instances!). For my 10 CSR1000V instances, RAM use went from 33GB down to 12GB, which allows even more instances to run on the same hardware.
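
To put those numbers in perspective, here’s a quick back-of-the-envelope calculation based on the figures above; the 64GB host size in the sketch is only an illustrative number:

# Back-of-the-envelope savings from the numbers above: 10 CSR1000V instances
# went from 33GB to 12GB of host RAM. The 64GB host size is only an example.
instances = 10
before_gb = 33.0
after_gb = 12.0

per_vm_before = before_gb / instances      # ~3.3 GB per CSR1000V
per_vm_after = after_gb / instances        # ~1.2 GB per CSR1000V

host_ram_gb = 64                           # illustrative host size
print(f"Per-VM footprint: {per_vm_before:.1f} GB -> {per_vm_after:.1f} GB")
print(f"Instances that fit in {host_ram_gb} GB: "
      f"{int(host_ram_gb // per_vm_before)} -> {int(host_ram_gb // per_vm_after)}")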

Making this change takes a matter of minutes and gives great memory savings.

*Please note*: I have not done extensive research on the inner workings of what this change does. I have not personally run into any performance issues running it on my CCIE lab ESXi host, which runs mostly CSR instances with a few other VMs. If you are going to deploy this on a host that is used for other things, please take baselines and use caution when deploying. I would also recommend not deploying this in production without first consulting a VMware expert.

Now to the fun part!

First let’s take note of what memory usage is currently at:
(Screenshot: memory usage before)
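
If you’d rather pull the same numbers from a script instead of the client, a short pyVmomi sketch like the one below can report host memory usage. The host name and credentials are placeholders, and this isn’t how the screenshot above was taken, just an alternative:

# Optional: report ESXi host memory usage with pyVmomi (pip install pyvmomi).
# The host name and credentials below are placeholders for your own lab host.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab host with a self-signed certificate
si = SmartConnect(host="esxi.lab.local", user="root", pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        used_mb = host.summary.quickStats.overallMemoryUsage      # MB in use
        total_mb = host.summary.hardware.memorySize // (1024 * 1024)
        print(f"{host.name}: {used_mb} MB used of {total_mb} MB")
finally:
    Disconnect(si)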

For the change to take effect, the VMs have to be powered on after the change is made, so the next step is to power down all of your VMs.

After the VMs are powered down, follow these steps:

1. Navigate to Configuration
2. Under Software: Select Advanced Settings
(Screenshot: steps 1–2)

3. Select “Mem”
4. Locate "Mem.AllocGuestLargePage" and set its value to 0
(Screenshot: steps 3–4)

5. Select Ok
6. Power on VMs

It will take a few minutes (roughly 5–10) for the CSRs to fully boot and for ESXi to deduplicate memory. I recommend turning up as many instances as you can and coming back in 10 minutes to enjoy all of the additional memory you just acquired.

And of course, a screenshot of my 10 CSR1000Vs running with less memory utilization 🙂

(Screenshot: memory usage after)