Remote Lab Access and Control

A requirement I’ve quickly come to realize while building my lab is remote access to my lab equipment. The requirement is twofold: I don’t feel like always sitting in my basement to build topologies, and I’m not always home when I study. This need naturally led me to acquire a terminal server, which took care of the first part — not having to hang out in the basement. I also didn’t like the idea of always leaving my lab equipment on, since I don’t like wasting electricity, so I found a Remote Power Control (RPC) unit, also known as a switched PDU.

I enjoyed setting everything up, so I figured I’d share the configuration steps I took to get the two devices communicating and functioning. The two devices I used were an Opengear IM7200 terminal server and an Avocent (Cyclades) PM10. The setup is pretty straightforward with minimal steps.

First you need to make sure the RPC unit is cabled properly. For the PM10, a serial console connection is made with a straight-through UTP cable from one of the serial ports on the Opengear terminal server to the “In” port on the PM10. You can daisy chain multiple PM10s together by going from the “Out” port to the “In” port on the next PM10, but I recommend setting up each PM10 as an individual serial port on the terminal server. This gives more flexible control, and you won’t lose multiple RPCs if you have a failure “upstream” in the daisy chain. After the cabling is taken care of, it’s time to move on to the fun part: configuration!

The first configuration component is the serial port on the IM7200 that connects to the PM10. To begin, navigate to the Serial Port configuration section.

The next step is to configure the serial port connected to the PM10 by editing that port on the IM7200.

The following settings are specific to the PM10 connection and need to be configured on the IM7200’s serial port for connectivity. Settings include:

  • Label – Port name you would like
  • Baud Rate – 9600
  • Data Bits – 8
  • Parity – None
  • Stop bits – 1
  • Flow control – None
  • Port Pinout – Cisco Straight (X1)
  • Terminal Type – ansi

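The same serial settings can also be applied from the IM7200’s command line with Opengear’s `config` tool instead of the web UI. The snippet below is a sketch only: the exact `config.ports.portN.*` paths and value spellings are assumptions based on Opengear’s config tree, and “port5” is just a hypothetical port number, so verify everything against your firmware (e.g. with `config -g config.ports`) before relying on it.

```shell
# Hypothetical sketch -- config paths/values assumed, confirm on your unit.
# "port5" stands in for whichever serial port is cabled to the PM10.
config -s config.ports.port5.label=PM10-RPC
config -s config.ports.port5.speed=9600
config -s config.ports.port5.charsize=8
config -s config.ports.port5.parity=None
config -s config.ports.port5.stopbits=1
config -s config.ports.port5.flowcontrol=None
```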
In addition to the required serial settings, the serial port must be set to a device type of “RPC” so that the terminal server knows how to handle the port.

Next, navigate to the RPC configuration under Serial & Networks.

Next, click “Add RPC”.

Next, set up the RPC configuration on the IM7200 with the following settings:

  • Connected via – Serial Port previously configured
  • RPC Type – Cyclades PM10
  • Name – Whatever you would like to name it
  • Outlets (optional) – set it to 10 or leave it as default for auto-probing
  • Username / Password – Set to admin/password for PM10
  • Log Status – Enabled (Checked)
  • Log Rate – Setting you would like

The next step is to configure the serial ports connected to the console ports of the devices controlled by the RPC, with the Power Menu enabled.

The last step is to set up a Managed Device for each device to be controlled by the RPC. To do so, navigate to “Managed Devices” under Serial & Networks.

Click “Add Device”.

Finally, configure the device with a name, an assigned console port, and an assigned RPC port.

After configuration, the devices can be managed from the Devices page.

They can also be controlled right from console sessions via the terminal server.

Happy labbing!

A short little time lapse I made…

I decided that I wanted to almost double the time it took to re-cable my CCIE lab, so I made a time-lapse video of it. I think it turned out pretty well!

 

 

The layout is pretty straightforward: I have a 2801 and a 3560 as a “hub,” which acts as a central point of connectivity for five other “pods” that each consist of an 1841 and a 3560. A diagram will follow, I’m sure.

 

Enjoy 🙂

To homelab or not to homelab….? That is a question.. (Part 2) – Diagrams

In my previous post in this series I went over the gradual growth of my lab, from just sitting on my desk to having its very own 42u network rack. Since then, my lab has grown and changed. Previously I was using the virtual switch right on my ESXi hosts (which were running 5.1), each with separate local storage, and managing them individually.

Today my virtual environment is managed by vCenter, has centralized storage, runs ESXi 5.5, and utilizes the Cisco Nexus 1000v for distributed switching. I plan to do another post eventually on my centralized storage setup running FreeNAS, so I won’t talk about it too much here. Instead, while building out my virtualized environment I realized that I don’t actually have any diagrams for my home network / lab.

The first diagram I started to work on is for my “server” environment. This includes my storage servers as well as my ESXi hosts and the guests running on them. It took a bit of time to figure out a good way to represent how the VMs are connected to the network, but I think I found one.

Servers

Going through the diagram:

  • ESXi-1 (C1100)
    • 72 GB ECC RAM
    • iSCSI LUN
    • Dual connected IP and iSCSI connectivity
    • 25 Configured VMs
      • 11 Active
  • ESXi-2 (C1100)
    • 72 GB ECC RAM
    • iSCSI LUN
    • Dual connected IP and iSCSI connectivity
    • 29 Configured VMs
      • 11 Active
  • unRAID (WhiteBox)
    • 8GB RAM
    • Single DataStore (1 Disk Parity) [22TB]
    • Dual connected IP connectivity
  • FreeNAS (C2100)
    • 24GB ECC RAM
    • Single DataStore (1 Disk Parity) [22TB]
    • Dual connected IP connectivity
    • Quad connected iSCSI connectivity
    • Dual PSU
  • Access Switch
    • WS-C2960S-48LPS-L
  • Other
    • APC BR1500G (1500VA/865W)
    • APC BR24BPG (additional four 9Ah batteries for UPS)

 

Next steps will be to add the rest of the equipment, and my CCIE Lab. Can’t forget about that 🙂

To homelab or not to homelab….? That is a question.. (Part 1)

Many people wonder if it is worthwhile to have a home lab to study on and to use to learn new technologies. This post won’t discuss the pros and cons of having a home lab (maybe another time); instead it will show the growth of my home lab over the last few years, and maybe give others some ideas on the possibilities for their own.

 

**WARNING!!!** This will be a very long post… With lots of (sometimes low quality) pictures.

 

 

And there was a big bang!

I really wouldn’t call this a home lab… but it was my very first home network, connecting a few computers (and my server) together.

Let’s jump forward a few years…

At first I had an 1841, a PIX 501, two 2900XLs, and a 2960.

Lots of cable mess….

Shortly after, I acquired a 2800, got to borrow a 3750, and purchased an ASA5505.

Blurry photo… but at least the cables look nice. (Bonus: I made that ceramic guard penguin)

I built a few servers (a couple physical and one ESXi box, as virtualization was just starting to get popular) and suddenly had a need for a better way to contain my equipment… plus I wanted my room to not be a constant 85 degrees…. so enter the need for a rack.

Unfortunately I knew that any rack I got wouldn’t fit in my room, so I had to find a location that would let me run all the house’s Ethernet to it and provide enough power. I considered the garage, as it would be away from the house (noise would no longer be a problem) and I figured it would be cooler than anywhere else. The main drawbacks to the garage were getting cables to it (it wasn’t attached to the house) and our winters, which can be pretty brutal. So I decided to put my gear in the basement. This had some issues of its own. For starters, it’s an unfinished basement, so dust can be plentiful. Another major issue was flooding; since the house is at the bottom of a hill, the basement had a tendency to flood… luckily, over the years measures were taken to help prevent flooding, and they have worked thus far.

With a location decided on, it was time to find a rack… A quick search let me know they were not going to be cheap! So I started watching Craigslist for a hopefully good find. It took a few months of searching every day, but I finally found one (apparently not many people have network racks to sell in my area). It was a great bargain on a 26u music rack that the owner had used for network equipment. It took some work to get it home (I had to take the whole thing apart to fit it in the Geo Prizm), but it was finally back together and being populated.

Another blurry photo, but you can see the servers I had and the patch panels I used to terminate all of the Ethernet runs in the house. You can also make out my ASA5505, 3560-8PC, and a gig Netgear switch.

A better picture… with some cable management.

Over the next few years I acquired hardware to practice on for the many certification exams I took. The look of the rack was never the same week to week, as I was always moving things around to try something new.

A random photo I found of the rack at one point in time…

My old rack’s final form…

A quick rundown (top to bottom)

  • UPS (Provides 35 minutes of uptime for the entire rack)
  • Front of my “Cross-Connect”
  • ASA5505 (VPN)
  • 2811 (CCME & WLC)
  • 2960 (Access Layer)
  • 3550 (Gig and “Core”)
  • FiOS router/modem
  • Unraid Server (22TB of storage)
  • ESXi Server (72GB of RAM running multiple VMs)
  • Test Servers
  • 2x 1142’s (Not pictured)

Finally I started to outgrow the rack I had and began my search once more for a new rack…

The biggest reason I wanted a new rack was that I was planning to purchase another ESXi server, and the servers didn’t fit properly in the rack I had; they were too long for the rack-mount rails. Another reason was that I wanted to begin building my CCIE lab and have a way to keep it racked and organized.

Naturally I started my search on Craigslist. I found a few, but they either didn’t come apart (a requirement, so I could get it into the basement) or they just weren’t what I wanted. I decided I wanted a full 42u rack cabinet that was closed on all four sides and had locks. Because of those requirements, the price (and the actual availability) of a rack changed dramatically.

I ended up picking the Tripp Lite SR42UBKD. This model ships fully broken down (perfect for getting it into my basement) and includes sides and locking front and back doors. The best deal at the time was through Staples. Shipping was included, which was nice; I should also point out that the rack ships via freight and requires a lift gate to get it off the truck.

 

The day it arrived!

Unboxed and waiting to be put together!

 

Installation wasn’t too bad, especially with another set of hands to help out.

All put together and ready for equipment!

Some of the initial equipment installed, with power run to it.

 

CCIE Lab fully cabled and ready for studying!

The CCIE Lab consists of 5 “pods,” each pod having a 2800 and a 3560 switch. All 2800s have three connections: one back to a central 2800 via serial for WAN emulation, one via FastEthernet back to the switch in its pod, and finally a FastEthernet connection to a central 3560 switch. Each pod switch also has a connection back to the central switch to provide for switching labs.

 

Rack fully populated (Rack blanks at bottom are hiding UPS)

Rack rundown (top to bottom after the CCIE lab):

  • ??? (Don’t worry about that little guy)
  • ASA5505
  • 2811
  • “Cross-Connect”
  • 2960 PoE
  • Cisco WLC 2504
  • Unraid Storage Server (22TB Storage)
  • ESXi Host1
  • ESXi Host2
  • Rackmount Keyboard & Monitor
  • UPS (providing 32 minutes of uptime for all “production” devices)

I feel that this is a good place to stop. I spent a few days writing this post and would like to get it posted! Next time I will go into greater detail on my “Cross-Connect” as well as a more in-depth view into my current rack’s layout.