IRP Installation and Configuration Guide


2 Configuration

2.1 Configuration files

IRP is currently configured using a set of textual Unix-style configuration files, located under the /etc/noction/ directory. In-line comments can be added to any line, using the “#” symbol.
Several separate configuration files are presented as follows:
  • /etc/noction/irp.conf - the main IRP configuration file contains configuration parameters for all IRP components, including algorithm parameters, optimization modes definitions and providers settings.
  • /etc/noction/db.global.conf - database configuration file for all IRP components, except the Frontend component.
  • /etc/noction/db.frontend.conf - database configuration file for the Frontend component.
  • /etc/noction/exchanges.conf - Exchanges configuration file (3.12.7.2↓).
  • /etc/noction/inbound.conf - Inbound prefixes configuration file (3.12.12↓).
  • /etc/noction/policies.conf - Routing Policies configuration file (1.2.9↑).
  • /etc/noction/user_directories.conf - User Directories configuration file (3.12.10.2↓).
Additional configuration files can be used for several core, global and explorer preferences, as described in sections 4.1.2.23↓, 4.1.7.17↓ and 4.1.9.5↓.
A comprehensive list of all the IRP parameters along with their description can be found in the Configuration parameters reference↓ chapter.
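For example, a setting with an in-line comment in /etc/noction/irp.conf might look as follows (the parameter shown is only an illustration taken from the Collector section below):
collector.flow.enabled                       = 1   # enable the Flow collector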

2.2 Global and Core Configuration

The default values specified for the Core service are sufficient for a proper system start-up. Several parameters can be adjusted during the initial system deployment. For a comprehensive list please see the Global parameters↓ and Core settings↓ sections.
During the initial setup and configuration stage, one must pay attention to the following configuration parameters:


2.3 Collector Configuration

Depending on the preferred method of traffic data collection, one or both collector components should be configured. As specified in the IRP Components↑ section, IRP can gather traffic data using the Flow collector (from now on: irpflowd) and the SPAN collector (irpspand).
The configuration specific to each collector is described below, along with the required router configuration changes.

2.3.1 Irpflowd Configuration

Irpflowd is a NetFlow/sFlow collector that receives and analyzes network traffic information, generated by your router(s).
NetFlow is an IP network statistics protocol developed by Cisco Systems, Inc. for collecting and exporting IP traffic statistics from network devices such as switches and routers to network analysis applications. Irpflowd currently supports the following NetFlow versions: v1, v5, v9.
sFlow is a protocol designed for monitoring network, wireless and host devices. Developed by the sFlow.org Consortium, it is supported by a wide range of network devices, as well as software routing and network solutions.

Flow collector use is mandatory for Multiple Routing Domain networks since SPAN does not carry attributes to distinguish traffic between different providers.

2.3.1.1 Flow agents

Multi-router networks usually carry traffic towards a prefix over multiple providers simultaneously. The Flow collector needs to know the exact details of such a configuration in order to correctly determine the overall volume and active flows per provider. Each provider configured in IRP can be explicitly set to match Flow statistics from specific Flow agents, helping the IRP Flow collector assign accurate statistics to each provider.
Flow agents were added to support Optimization for multiple Routing Domains↑, but when available they are also used to enhance IRP capabilities in other areas. For example, a correct set of Flow agents for all providers enables IRP to accurately determine a prefix’s current route. IRP components, especially Core during decision making, can look up the latest prefix statistics matching the given Flow agent(s) and infer how much traffic is sent over individual providers. The collected data is later used to make Performance and Bandwidth decisions with knowledge of multiple best routes.

In case Flow agent data is missing, IRP relies on past probing data to determine the current route of a prefix.
Flow agents are specified in the form of:
IPv4/interfaceID
where IPv4 is the source IP address of the packets coming from the agent and interfaceID is a numerical identifier of the interface in the range 1-4294967295. The interface ID is usually the interface’s SNMP index on the router.
A collection of such values is assigned when multiple physical interfaces are used. For example:
peer.X.flow_agents = 8.8.8.8/1 8.8.8.8/2 8.8.8.8/3 8.8.8.8/4
The value is set via parameter peer.X.flow_agents↓ or under Providers and Peers configuration in Frontend. The Frontend will also retrieve and suggest a list of available values that can be matched with the provider.
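If the SNMP interface IDs are not known beforehand, they can usually be obtained with the same ‘snmpwalk‘ queries shown later in the SNMP parameters validation example; the community string and router IP below are placeholders:
root@server ~ $ snmpwalk -v2c -c public 10.0.0.1 ifIndex
root@server ~ $ snmpwalk -v2c -c public 10.0.0.1 ifDescr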

2.3.1.2 Configuration

To use the irpflowd collector, the following steps must be completed:
  1. NetFlow/sFlow/jFlow must be configured on the router(s), which must send traffic flow information to the main IRP server IP (Figure 2.1↓). See (2.3.1.3↓) for specific network device configuration instructions.
    figure diagrams/collector_flow-example.png
    Figure 2.3.1: Flow export configuration

  2. Irpflowd must be enabled (by setting the collector.flow.enabled↓ parameter):
    collector.flow.enabled                       = 1
    
  3. A list of all the networks, advertised by the edge routers that IRP will optimize, should be added to the configuration. This information should be specified in the collector.ournets↓ parameter.
  4. For security reasons, the list of valid Flow sending IP addresses must be configured in the collector.flow.sources↓, to protect irpflowd from unauthorized devices sending Flow data.
    Example:
    collector.flow.sources                       = 10.0.0.0/29 
    
  5. In case the Flow exporters are configured to use port numbers other than the defaults (2055 for NetFlow/jFlow and 6343 for sFlow), collector.flow.listen.nf↓ and collector.flow.listen.sf↓ must be adjusted accordingly:
    collector.flow.listen.nf                     = 2055
    collector.flow.listen.sf                     = 6343
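
Putting the steps above together, the irpflowd-related part of /etc/noction/irp.conf could look like the sketch below; the network values are placeholders and must be replaced with the actual ones:
collector.flow.enabled                       = 1
collector.ournets                            = 192.0.2.0/24
collector.flow.sources                       = 10.0.0.0/29
collector.flow.listen.nf                     = 2055
collector.flow.listen.sf                     = 6343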
    

2.3.1.3 Vendor-specific NetFlow configuration examples

The following sections contain vendor-specific configuration examples, along with possible pitfalls and version-specific issues.

NetFlow configuration on Cisco 7600/6500 series routers

Listing 2.1: Global MLS settings configuration
(config)# mls netflow 
(config)# mls flow ip interface-full 
(config)# mls flow ipv6 interface-full 
(config)# mls sampling packet-based 512 8192 
(config)# mls nde sender version 7
Please replace the IP address and port with the actual IRP host IP and the collector UDP port (2055 by default).
Listing 2.2: Global NetFlow settings and export configuration
(config)# ip flow-cache entries 524288 
(config)# ip flow-cache timeout inactive 60 
(config)# ip flow-cache timeout active 1 
(config)# ip flow-export version 9 
(config)# ip flow-export destination 10.11.12.14 2055
Ingress flow collection must be enabled on all interfaces facing the internal network.
MLS NetFlow sampling must be enabled to preserve router resources.
Listing 2.3: Per-interface NetFlow settings configuration
(config)# int GigabitEthernet 3/6 
(config-if)# mls netflow sampling 
(config-if)# ip flow ingress
Flexible NetFlow configuration on Cisco 6500 series routers running IOS 15.0SY
Listing 2.4: Flexible NetFlow monitor configuration
(config)# flow monitor IRP-FLOW-MONITOR
(config-flow-monitor)# record platform-original ipv4 full
(config-flow-monitor)# exporter IRP-FLOW-EXPORTER
(config-flow-monitor)# cache timeout inactive 60
(config-flow-monitor)# cache timeout active 60
(config-flow-monitor)# cache entries 1048576
Listing 2.5: Flexible NetFlow exporter configuration
(config)# flow exporter IRP-FLOW-EXPORTER
(config-flow-exporter)# destination 10.11.12.14
(config-flow-exporter)# source Loopback0
(config-flow-exporter)# transport udp 2055
(config-flow-exporter)# template data timeout 120
Please replace the IP address and port with the actual IRP host IP and the collector UDP port (2055 by default). Also replace the source interface with the actual one.
Listing 2.6: Flexible NetFlow sampler configuration
(config)# sampler flow-sampler
(config-sampler)# mode random 1 out-of 1024
Listing 2.7: Per-interface Flexible NetFlow settings configuration
(config)# interface FastEthernet0/0
(config-if)# ip flow monitor IRP-FLOW-MONITOR sampler flow-sampler input
(config-if)# ip flow monitor IRP-FLOW-MONITOR sampler flow-sampler output
NetFlow configuration on Cisco 7200/3600 series routers
Please replace the IP address and port with the actual IRP host IP and the collector UDP port (2055 by default).
Do not attempt to configure 7600/6500 series routers according to the 7200/3600 series configuration guide.
Listing 2.8: NetFlow configuration on Cisco 7200/3600 series routers
Router(config)# ip flow-cache entries 524288 
Router(config)# ip flow-cache timeout inactive 60 
Router(config)# ip flow-cache timeout active 1 
Router(config)# ip flow-export version 9 
Router(config)# ip flow-export destination 10.11.12.14 2055
Ingress/egress flow export configuration on peering interfaces
According to Cisco IOS NetFlow Command Reference regarding the "ip flow" command history in IOS releases, this feature was introduced in IOS 12.3(11)T, 12.2(31)SB2, 12.2(18)SXE, 12.2(33)SRA.
NetFlow exporting must be configured on each peering interface which is used to send and/or receive traffic:
Listing 2.9: Ingress/egress flow export configuration on peering interfaces
Router(config)#interface FastEthernet 1/0 
Router(config-if)#ip flow ingress 
Router(config-if)#ip flow egress
Ingress flow export configuration (earlier IOS releases)
Ingress flow export must be enabled on all interfaces facing the internal network. Prior to IOS 12.3(11)T, 12.2(31)SB2, 12.2(18)SXE, 12.2(33)SRA, flow export can be enabled only for ingress traffic, therefore it must be enabled on each interface that transmits/receives traffic from/to networks that must be improved.
Listing 2.10: Ingress flow export configuration
Router(config)#interface FastEthernet 1/0 
Router(config-if)#ip route-cache flow
NetFlow/sFlow configuration examples for Vyatta routers (VC 6.3)
Do not configure both NetFlow and sFlow export for the same router interface. Doing so will distort the traffic statistics.
The sampling interval must be set to at least 2000 for 10G links and 1000 for 1G links in order to conserve router resources.
Listing 2.11: Configuring NetFlow export on Vyatta
vyatta@vyatta# set system flow-accounting netflow server 10.11.12.14 port 2055 
vyatta@vyatta# set system flow-accounting netflow version 5
Listing 2.12: Configuration of an interface for the flow accounting
vyatta@vyatta# set system flow-accounting interface eth0 
vyatta@vyatta# commit
jFlow export configuration for Juniper routers
Routing Engine-based sampling supports up to eight flow servers for both version 5 and version 8 configurations. The total number of collectors is limited to eight, regardless of how many are configured for version 5 or version 8. During the sampling configuration, the export packets are replicated to all the collectors configured to receive them. If two collectors are configured to receive version 5 records, then both collectors will receive records for a given flow.
The default export interval for active/inactive flows on some Juniper routers is 1800 seconds. IRP requires significantly more frequent updates; the export interval recommended for IRP is 60 seconds. Refer to your JunOS documentation on how to set export intervals. These parameters are named flow-active-timeout and flow-inactive-timeout.
Listing 2.13: Juniper flow export configuration
forwarding-options {
	sampling {
		input {
			family inet {
				rate 1000;
			}
		}
	}
	family inet {      
		output {
			flow-server 10.10.3.2 {
				port 2055;
				version 5;
				source-address 10.255.255.1;
			}
		}
	}
}
NetFlow export must be configured on all the interfaces facing the providers. In some cases, it may also be necessary to enable NetFlow forwarding on the interfaces facing the internal network.
Listing 2.14: Per-interface NetFlow sampling configuration
interfaces {
	xe-0/0/0 {
		unit 0 {
			family inet {
				sampling {
					input output;
				}
			}
		}
	}
}

2.3.2 Irpspand Configuration

Irpspand acts like a passive network sniffer, analyzing the traffic that is delivered to one or more dedicated network interfaces on the IRP server (defined in collector.span.interfaces↓) from a mirrored port on your router or switch. Irpspand looks up the IP header in the mirrored traffic. When the link-level traffic is VLAN tagged, for example with IEEE 802.1Q or 802.1ad (Q-in-Q) datagrams, Irpspand advances its IP packet sniffer past the VLAN tags. For better results (higher performance and more analyzed traffic), as specified in the IRP Technical Requirements↑ section, we recommend using Myricom 10Gbps NICs with the Sniffer10G license enabled.
Enable the collector.span.size_from_ip_header↓ configuration parameter if packets are stripped before being forwarded to IRP’s SPAN port.
To use the irpspand collector, the following steps must be completed:
  1. Configure port mirroring on your router or switch, as shown in figures (2.3.2↓) and (2.3.3↓).
    figure diagrams/collector_span-port-router.png
    Figure 2.3.2 Span port configuration (on the router)
    or:
    figure diagrams/collector_span-port-switch.png
    Figure 2.3.3 Span port configuration (on the switch)

  2. Enable the span collector by setting the collector.span.enabled↓ parameter in the configuration file:
     collector.span.enabled                       = 1
    
  3. Define the list of network interfaces that receive mirrored traffic, by setting the collector.span.interfaces↓ parameter (multiple interfaces separated by space can be specified):
    collector.span.interfaces                     = eth1 eth2 eth3
    
  4. A list of all the networks advertised by the edge routers that IRP will optimize must be added to the configuration. This information should be specified in the collector.ournets↓ parameter.
  5. In case blackouts, congestion and excessive delays are to be analyzed by the system, collector.span.min_delay↓ must be turned on as well:
    collector.span.min_delay                      = 1
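
Putting the steps above together, the irpspand-related part of /etc/noction/irp.conf could look like the sketch below; the interface names and networks are placeholders, and collector.span.size_from_ip_header↓ is only needed when packets are stripped before mirroring:
collector.span.enabled                       = 1
collector.span.interfaces                    = eth1 eth2 eth3
collector.ournets                            = 192.0.2.0/24
collector.span.min_delay                     = 1
collector.span.size_from_ip_header           = 1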
    


2.4 Explorer Configuration

As described in the 1.2.2↑ section, Explorer is the service that performs active remote network probing and tracing.
Explorer runs active and passive probes through all the available providers in order to determine the best one for a particular prefix. Metrics such as packet loss, latency, link capacity and link load are taken into consideration. In order to run a probe through a particular provider, Explorer needs the following to be configured:
  1. An additional IP alias for each provider should be assigned and configured on the IRP server (see the command-line sketch after this list). This IP will be used as a source address during the probing process.
    It is recommended to configure reverse DNS records for each IP using the following template:
    performance-check-via-<PROVIDER-NAME>.
    HARMLESS-NOCTION-IRP-PROBING.<YOUR-DOMAIN-NAME>.
  2. Policy-based routing (PBR) has to be configured on the edge router(s), so that traffic originating from each of these probing IP addresses will exit the network via specific provider. See specific PBR configuration in the Specific PBR configuration scenarios↓ section.
If the network has Flowspec capabilities, then Flowspec policies can be used instead of PBR. Refer, for example, to Flowspec policies↑ and global.flowspec.pbr↓.
  3. Policy-based routing has to be configured to drop packets rather than routing them through the default route in case the corresponding Next-Hop does not exist in the routing table.
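
As an illustration of step 1, the probing IP aliases can be added on the IRP server with the standard Linux ‘ip‘ toolset. The sketch below uses the example addressing from the next section (probing IPs 10.0.0.3-10.0.0.5 on interface eth0) and is not persistent across reboots; use the distribution's network configuration files for a permanent setup:
ip addr add 10.0.0.3/24 dev eth0
ip addr add 10.0.0.4/24 dev eth0
ip addr add 10.0.0.5/24 dev eth0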

2.4.1 Specific PBR configuration scenarios

PBRs can be setup in multiple ways, depending on the existing network infrastructure.
We will assume the following IP addresses/networks are used:
10.0.0.0/24 - used on the IRP server as well as the probing VLANs

		10.0.0.2/32 - main IRP server IP address
		10.0.0.3-10.0.0.5 - probing IP addresses
		10.0.0.250-10.0.0.254 - router-side IP addresses for the probing VLANs

10.0.1.0/24 - used for GRE tunnel interfaces, if needed

10.10.0.0/24 - real edge routers IP addresses

10.11.0.0/30 - BGP session with the 1st provider, 10.11.0.1 being the ISP BGP neighbor IP

10.12.0.0/30 - BGP session with the 2nd provider, 10.12.0.1 being the ISP BGP neighbor IP

10.13.0.0/30 - BGP session with the 3rd provider, 10.13.0.1 being the ISP BGP neighbor IP

Vlan 3 - the probing Vlan

eth0 - the probing network interface on the IRP server
In a production network, please change the IP addresses used in these examples to the actual addresses assigned and used in the network. Same goes for the Vlan interface.
Brocade routers use Cisco compliant configuration commands. Unless otherwise noted, the Cisco configuration examples will also work on Brocade devices.
Case 1: Single router, two providers, and separate probing Vlan.
10.0.0.1/32 is configured on the edge router, on the probing vlan interface (ve3).
figure diagrams/explorer_pbr-config.png
Figure 2.4.1 PBR configuration: single router and separate probing Vlan

In this case, a single PBR rule should be enforced on the Edge router for each of the probing IP addresses.
Listing 2.15: Cisco IPv4 PBR configuration: 1 router, 2 providers
access-list 1 permit ip host 10.0.0.3
access-list 2 permit ip host 10.0.0.4
!
route-map irp-peer permit 10
 match ip address 1
 set ip next-hop 10.11.0.1
 set interface Null0
!
route-map irp-peer permit 20
 match ip address 2
 set ip next-hop 10.12.0.1
 set interface Null0
!
interface ve 3
ip policy route-map irp-peer
Route-map entries with the interface set to Null0 prevent packets from flowing through the default route when the corresponding next-hop does not exist in the routing table (typically when a physical interface’s administrative/operational status is down).
Cisco ASR9000 routers running IOS XR use a different PBR syntax.
Listing 2.16: Cisco IPv4 PBR configuration for IOS XR: 1 router, 2 providers
  configure
    ipv4 access-list irp-peer
      10 permit ipv4 host 10.0.0.3 any nexthop1 ipv4 10.11.0.1 nexthop2 ipv4 169.254.0.254
      11 permit ipv4 host 10.0.0.4 any nexthop1 ipv4 10.12.0.1 nexthop2 ipv4 169.254.0.254
    end
​
   router static
    address-family ipv4 unicast
    169.254.0.254 Null0
    end
​
   interface FastEthernet1/1
    ipv4 access-group irp-peer ingress
    end
For Juniper routers, a more complex configuration is required:
Listing 2.17: Juniper IPv4 PBR configuration: 1 router, 2 providers
[edit interfaces]
xe-0/0/0 {
	unit 3 {
	    family inet {
    	        filter {
        	        input IRP-policy;
            	}
        	}
	}
}    
[edit firewall]
family inet {
   filter IRP-policy {
       term irp-peer1 {
           from {
               source-address 10.0.0.3/32;
           }
           then {
               routing-instance irp-isp1-route;
           }
        }
        term irp-peer2 {
            from {
                source-address 10.0.0.4/32;
            }
            then {
                routing-instance irp-isp2-route;
            }
        }
        term default {
            then {
                accept;
            }
        }
    }
}
[edit]
routing-instances {
    irp-isp1-route {
        instance-type forwarding;
        routing-options {
            static {
                route 0.0.0.0/0 next-hop 10.11.0.1;
            }
        }
    }
    irp-isp2-route {
        instance-type forwarding;
        routing-options {
            static {
                route 0.0.0.0/0 next-hop 10.12.0.1;
            }
        }
    }
}
routing-options {
    interface-routes {
        rib-group inet irp-policies;
    }
    rib-groups {
        irp-policies {
            import-rib [ inet.0 irp-isp1-route.inet.0 irp-isp2-route.inet.0 ];
        }
    }
}
PBR configuration on Vyatta routers.
Unfortunately, prior to and including VC6.4, Vyatta does not natively support policy-based routing. Thus, the PBR rules should be configured using the standard Linux ip toolset. To make these rules persistent, they should also be added to /etc/rc.local on the Vyatta system.
Listing 2.18: Vyatta IPv4 PBR configuration example
ip route add default via 10.11.0.1 table 101
ip route add default via 10.12.0.1 table 102
ip rule add from 10.0.0.3 table 101 pref 32001
ip rule add from 10.0.0.4 table 102 pref 32002
Vyatta versions VC6.5 and up natively support source-based routing. The following example can be used:
Listing 2.19: Vyatta (VC6.5 and up) IPv4 PBR configuration example
# Setup the routing policy:
set policy route IRP-ROUTE
set policy route IRP-ROUTE rule 10 destination address 0.0.0.0/0
set policy route IRP-ROUTE rule 10 source address 10.0.0.3/32
set policy route IRP-ROUTE rule 10 set table 103
set policy route IRP-ROUTE rule 20 destination address 0.0.0.0/0
set policy route IRP-ROUTE rule 20 source address 10.0.0.4/32
set policy route IRP-ROUTE rule 20 set table 104
set policy route IRP-ROUTE rule 30 destination address 0.0.0.0/0
set policy route IRP-ROUTE rule 30 source address 0.0.0.0/0
set policy route IRP-ROUTE rule 30 set table main
commit
​
# Create static route tables:
set protocols static table 103 route 0.0.0.0/0 nexthop 10.11.0.1
set protocols static table 104 route 0.0.0.0/0 nexthop 10.12.0.1
commit
​
# Assign policies to specific interfaces, Vlan 3 on eth1 in this example:
set interfaces ethernet eth1.3 policy route IRP-ROUTE
​
# Verify the configuration:
show policy route IRP-ROUTE
show protocols static
show interfaces ethernet eth1.3
Case 2: Two edge routers, two providers, and a separate probing Vlan.
Following IP addresses are configured on the routers:
- 10.0.0.251 is configured on R1, VE3
- 10.0.0.252 is configured on R2, VE3
figure diagrams/explorer_pbr-config_vlan-2R.png
Figure 2.4.2 PBR configuration: two routers and separate probing Vlan

To reduce the number of PBR rules to one per router, additional source-based routing rules must be configured on the IRP server. This can be done either by using the ip command, or by adjusting the standard /etc/sysconfig/network-scripts/route-eth* and /etc/sysconfig/network-scripts/rule-eth* configuration files. Both approaches are presented below.
Listing 2.20: IRP server source-based routing using the ‘ip‘ command
ip route add default via 10.0.0.251 table 201
ip route add default via 10.0.0.252 table 202
ip rule add from 10.0.0.3 table 201 pref 32101
ip rule add from 10.0.0.4 table 202 pref 32102
Listing 2.21: IRP server source-based routing using standard CentOS configuration files
#/etc/sysconfig/network-scripts/route-eth0:
default via 10.0.0.251 table 201
default via 10.0.0.252 table 202
#/etc/sysconfig/network-scripts/rule-eth0:
from 10.0.0.3 table 201 pref 32101
from 10.0.0.4 table 202 pref 32102
Refer to the OS configuration manual for configuration guidelines.
The router configuration is similar to the previous case. A Cisco/Brocade example is provided below.
Some Brocade routers/switches have PBR configuration limitations. Please refer to the "Policy-Based Routing" → "Configuration considerations" section in the Brocade documentation for your router/switch model.
For example, BigIron RX series switches do not support more than 6 instances of a route map, more than 6 ACLs in the matching policy of each route-map instance, or more than 6 next hops in the set policy of each route-map instance.
On the other hand, some Brocade CER/CES routers/switches have these limits raised to up to 200 instances (depending on the package version).
Listing 2.22: Cisco IPv4 PBR configuration: 2 routers, 2 providers, separate Vlan
#Router R1
access-list 1 permit ip host 10.0.0.3
!
route-map irp-peer permit 10
 match ip address 1
 set ip next-hop 10.11.0.1
 set interface Null0
!
interface ve 3
ip policy route-map irp-peer
​
#Router R2
access-list 1 permit ip host 10.0.0.4
!
route-map irp-peer permit 10
 match ip address 1
 set ip next-hop 10.12.0.1
 set interface Null0
!
interface ve 3
ip policy route-map irp-peer
​
Case 3: Complex network infrastructure, multiple routers, no probing VLAN
figure diagrams/explorer_pbr-via-gre-config.png
Figure 2.4.3 PBR configuration via GRE tunnels

In specific complex scenarios, traffic from the IRP server must pass through multiple routers before reaching the provider. If a separate probing Vlan cannot be configured across all routers, GRE tunnels from the IRP server to the edge routers should be configured.
It is also possible to configure the PBRs without GRE tunnels, by setting PBR rules on each transit router on the IRP provider path.
Brocade routers do not support PBR set up on GRE tunnel interfaces. In this case the workaround is to configure PBR on each transit interface towards the exit edge router(s) interface(s).
GRE Tunnels configuration
Listing 2.23: IRP-side IPv4 GRE tunnel CLI configuration
modprobe ip_gre
ip tunnel add tun0 mode gre remote 10.10.0.1 local 10.0.0.2 ttl 64 dev eth0
ip addr add dev tun0 10.0.1.2/32 peer 10.0.1.1/32
ip link set dev tun0 up
Listing 2.24: IRP-side IPv4 GRE tunnel configuration using standard CentOS configs
#/etc/sysconfig/network-scripts/ifcfg-tun0
DEVICE=tun0
TYPE=GRE
ONBOOT=yes
MY_INNER_IPADDR=10.0.1.2
MY_OUTER_IPADDR=10.0.0.2
PEER_INNER_IPADDR=10.0.1.1
PEER_OUTER_IPADDR=10.10.0.1
TTL=64        
Refer to the OS configuration manual for configuration guidelines.
Listing 2.25: Router-side IPv4 GRE tunnel configuration (Vyatta equipment)
set interfaces tunnel tun0
set interfaces tunnel tun0 address 10.0.1.1/30
set interfaces tunnel tun0 description "IRP Tunnel 1"
set interfaces tunnel tun0 encapsulation gre
set interfaces tunnel tun0 local-ip 10.10.0.1
set interfaces tunnel tun0 remote-ip 10.0.0.2
Listing 2.26: Router-side IPv4 GRE tunnel configuration (Cisco equipment)
interface Tunnel0
ip address 10.0.1.1 255.255.255.252 
tunnel mode gre ip 
tunnel source Loopback1 
tunnel destination 10.0.0.2 
Listing 2.27: Router-side IPv4 GRE tunnel configuration (Juniper equipment)
interfaces {
     gr-0/0/0 {
         unit 0 {
             tunnel {
                 source 10.0.0.2;
                 destination 10.10.0.1;
             }
             family inet {
                 address 10.0.1.1/32;
             }
         }
     }
 }
The above configuration is for the first edge router (R1). One more GRE tunnel should be configured on the 2nd router (R2).
As soon as the GRE tunnels are configured, the source-based routing and the PBR rules should be configured, similar to the previous section.
Listing 2.28: IRP server IPv4 source-based routing via GRE using the ‘ip‘ command
ip route add default dev tun0 table 201
ip route add default dev tun1 table 202
ip route add default dev tun2 table 203
ip rule add from 10.0.1.2 table 201 pref 32101
ip rule add from 10.0.1.6 table 202 pref 32102
ip rule add from 10.0.1.10 table 203 pref 32103
Listing 2.29: IRP server IPv4 source-based routing via GRE using standard CentOS configuration files
#/etc/sysconfig/network-scripts/route-tun0:
default dev tun0 table 201
default dev tun1 table 202
default dev tun2 table 203
#/etc/sysconfig/network-scripts/rule-tun0:
from 10.0.1.2 table 201 pref 32101
from 10.0.1.6 table 202 pref 32102
from 10.0.1.10 table 203 pref 32103
Refer to the OS configuration manual for configuration guidelines.
The router configuration is similar to the previous cases. A Cisco/Brocade example is provided below.
Listing 2.30: Cisco IPv4 PBR configuration: 2 routers, 3 providers, GRE tunnels
#Router R1
access-list 1 permit ip host 10.0.1.2
access-list 2 permit ip host 10.0.1.6
!
route-map irp-peer permit 10
 match ip address 1
 set ip next-hop 10.11.0.1
 set interface Null0
!
route-map irp-peer permit 20
 match ip address 2
 set ip next-hop 10.12.0.1
 set interface Null0
!
​
interface Tunnel0
ip policy route-map irp-peer
interface Tunnel1
ip policy route-map irp-peer
​
#Router R2
access-list 1 permit ip host 10.0.1.10
!
route-map irp-peer permit 10
 match ip address 1
 set ip next-hop 10.13.0.1
 set interface Null0
!
interface Tunnel0
ip policy route-map irp-peer
​
Case 4: Internet Exchanges configuration examples
The PBR rules for Cisco routers/switches are generated by IRP, following the template below.
Listing 2.31: Cisco IPv4 PBR configuration template
!--- repeated block for each peering partner
no route-map <ROUTEMAP> permit <ACL>
no ip access-list extended <ROUTEMAP>-<ACL>
​
ip access-list extended <ROUTEMAP>-<ACL>
  permit ip host <PROBING_IP> any dscp <PROBING_DSCP>
​
route-map <ROUTEMAP> permit <ACL>
  match ip address <ROUTEMAP>-<ACL>
  set ip next-hop <NEXT_HOP>
  set interface Null0
​
!--- block at the end of PBR file
interface <INTERFACE>
  ip policy route-map <ROUTEMAP>
​
The “<>” elements represent variables with the following meaning:
  • <ROUTEMAP> represents the name assigned by IRP and equals the value of the Route Map parameter in PBR Generator ("irp-ix" in Figure 4)
  • <ACL> represents a counter that identifies individual ACL rules. This variable’s initial value is taken from the ACL name start field of PBR Generator and is subsequently incremented for each ACL
  • <PROBING_IP> is one of the configured probing IPs that IRP uses to probe link characteristics via different peering partners. One probing IP is sufficient to cover up to 64 peering partners
  • <PROBING_DSCP> is an incremented DSCP value assigned by IRP for probing a specific peering partner. It is used in combination with the probing IP
  • <NEXT_HOP> represents the IP address identifying the peering partner on the exchange. This parameter is retrieved during autoconfiguration and preserved in the Exchange configuration
  • <INTERFACE> represents the interface through which traffic conforming to the rule will exit the Exchange router. It is populated with the Interface value of PBR Generator
The PBR rules for Brocade non-XMR router/switches are generated by IRP, following the template below.
Listing 2.32: Brocade IPv4 PBR configuration template
!--- repeated block for each peering partner
no route-map <ROUTEMAP> permit <ACL>
no ip access-list extended <ROUTEMAP>-<ACL>
​
ip access-list extended <ROUTEMAP>-<ACL>
  permit ip host <PROBING_IP> any dscp-matching <PROBING_DSCP>
​
route-map <ROUTEMAP> permit <ACL>
  match ip address <ROUTEMAP>-<ACL>
  set ip next-hop <NEXT_HOP>
  set interface Null0
​
!--- block at the end of PBR file
interface <INTERFACE>
  ip policy route-map <ROUTEMAP>
The “<>” elements represent variables with the same meaning as in the Cisco example.
Brocade XMR routers use the keyword “dscp-mapping” instead of “dscp-matching”.
The PBR rules for Juniper routers/switches are generated by IRP, following the template below.
It is important to note that the different sections of the PBR rules (load replace/merge relative terminal) should be entered independently and not as the single file output by IRP. Also note that the last group uses a ’load merge’ command rather than the ’load replace’ used by the first three groups.
Listing 2.33: Juniper IPv4 PBR configuration template
load replace relative terminal
[Type ^D at a new line to end input]
​
interfaces {
  <INTERFACE> {
      unit <INTERFACE_UNIT> {
          family inet {
              filter {
                  replace:
                  input <ROUTEMAP>;
              }
          }
      }
   }
}
​
​
load replace relative terminal 
[Type ^D at a new line to end input]
​
firewall {
   family inet {
      filter <ROUTEMAP> {
          replace:
          term <ROUTEMAP><ACL> {
              from {
                  source-address <PROBING_IP>;
                  dscp <PROBING_DSCP>;
              }
              then {
                  routing-instance <ROUTEMAP><ACL>-route;
              }
          }
...
          replace:
          term default {
              then {
                  accept;
              }
          }
      }
   }
}
​
​
load replace relative terminal 
[Type ^D at a new line to end input]
​
routing-instances {
      replace:
      <ROUTEMAP><ACL>-route {
          instance-type forwarding;
          routing-options {
              static {
                  route 0.0.0.0/0 next-hop <NEXT_HOP>;
              }
          }
      }
...
}
​
​
load merge relative terminal 
[Type ^D at a new line to end input]
​
routing-options {
      interface-routes {
          replace:
          rib-group inet <ROUTEMAP>rib;
      }
      rib-groups {
          replace:
          <ROUTEMAP>rib {
              import-rib [ inet.0 <ROUTEMAP><ACL>-route.inet.0 ... ];
          }
      }
} 
The “<>” elements represent variables with the following meaning:
  • <INTERFACE> represents the interface through which traffic conforming to the rule will exit the Exchange router. It is populated with the Interface value of PBR Generator
  • <INTERFACE_UNIT> is the value of the Interface Unit parameter in PBR Generator
  • <ROUTEMAP> represents the name assigned by IRP and equals the value of the Route Map parameter in PBR Generator
  • <ACL> represents a combined counter like "00009" that identifies individual ACL rules. This variable’s initial value is taken from the ACL name start field of PBR Generator and is subsequently incremented for each ACL
  • <PROBING_IP> is one of the configured probing IPs that IRP uses to probe link characteristics via different peering partners. One probing IP is sufficient to cover up to 64 peering partners
  • <PROBING_DSCP> is an incremented DSCP value assigned by IRP for probing a specific peering partner. It is used in combination with the probing IP
  • <NEXT_HOP> represents the IP address identifying the peering partner on the exchange. This parameter is retrieved during autoconfiguration and preserved in the Exchange configuration
Note that Juniper routers/switches need an additional parameter, Interface unit, in order to correctly configure PBRs.
Verifying PBR configuration
To ensure that the PBRs are properly configured on the router(s), the standard *NIX traceroute command can be used, by specifying each probing IP as the source IP address for tracing the route. The route should pass through a different provider (note the ISP BGP neighbor IP).
Listing 2.34: PBR validation using ‘traceroute‘
root@server ~ $ traceroute -m 5 8.8.8.8 -nns 10.0.0.3
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  10.0.0.1       0.696 ms  	0.716 ms  	0.783 ms
 2  10.11.0.1  	0.689 ms  	0.695 ms  	0.714 ms
 3  84.116.132.146 14.384 ms	13.882 ms  	13.891 ms
 4  72.14.219.9    13.926 ms	14.477 ms  	14.473 ms
 5  209.85.240.64  14.397 ms	13.989 ms  	14.462 ms
​
root@server ~ $ traceroute -m 5 8.8.8.8 -nns 10.0.0.4
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  10.0.0.1       0.696 ms  	0.516 ms  	0.723 ms
 2  10.12.0.1  	0.619 ms  	0.625 ms  	0.864 ms
 3  83.16.126.26   13.324 ms	13.812 ms  	13.983 ms
 4  72.14.219.9    15.262 ms	15.347 ms  	15.431 ms
 5  209.85.240.64  16.371 ms	16.991 ms  	16.162 ms
Another useful method for checking that the PBRs are properly configured is to use the Explorer self check option. Make sure that the providers are added to the IRP configuration before executing the Explorer self check.
Listing 2.35: PBR validation using Explorer self check
root@server ~ $ /usr/sbin/explorer -s
Starting PBR check
​
PBR check failed for provider A[2]. Diagnostic hop information: IP=10.11.0.12 TTL=3
PBR check succeeded for provider B[3]. Diagnostic hop information: IP=10.12.0.1 TTL=3
In order to ensure that the PBRs are properly configured for Internet Exchanges, the method below can be used.
Listing 2.36: PBR validation using ‘iptables‘ and ‘traceroute‘
root@server ~ $ iptables -t mangle -I OUTPUT -d 8.8.8.8 -j DSCP --set-dscp <PROBING_DSCP>
​
root@server ~ $ traceroute -m 5 -nns <PROBING_IP> 8.8.8.8
traceroute to 8.8.8.8, 30 hops max, 60 byte packets
 1 ...
 2 ...
 3 <NEXT_HOP>  126.475 ms !X^C
where
  • <NEXT_HOP> is a Peering Partner’s next-hop IP address in IRP configuration
  • <PROBING_DSCP> is a Peering Partner’s DSCP value in IRP configuration
  • <PROBING_IP> is a Peering Partner’s probing IP address in IRP configuration
The first IP of the trace leaving client infrastructure should be on the Exchange and the next-hop should belong to the correct Peering Partner.
iptables rules should be deleted after all tests are done.
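For example, the rule added above can be removed with the matching delete command:
root@server ~ $ iptables -t mangle -D OUTPUT -d 8.8.8.8 -j DSCP --set-dscp <PROBING_DSCP>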

2.4.1.1 Current route detection

The current route detection algorithm requires the explorer.infra_ips↓ parameter to be configured. All the IP addresses and subnets belonging to the infrastructure should be added here. These IP addresses can be easily determined by using the traceroute command.
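For example, using the addressing from the PBR scenarios above, the parameter could be set as follows; the values are placeholders for the actual infrastructure ranges:
explorer.infra_ips                           = 10.0.0.0/24 10.10.0.0/24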

2.4.1.2 Providers configuration

Before Explorer starts probing the prefixes detected by the Collector, all providers should be configured.
For the complete list of provider settings please see the Providers settings↓ section.
Before configuring providers in IRP, the BGP sessions need to be defined; see BGPd Configuration↓.
Please note that some Brocade router models do not support SNMP counters per Vlan, therefore physical interfaces should be used for gathering traffic statistics.
For the sample configuration below, let’s assume the following:
figure diagrams/explorer_pbr-config.png
Figure 2.4.4 PBR configuration: single router and separate probing Vlan

ISP1 - the provider’s name

10.0.0.1 - Router IP configured on the probing Vlan

10.0.0.3 - Probing IP for ISP1, configured on the IRP server

10.11.0.1, 10.11.0.2 - IP addresses used for the EBGP session with the ISP, 10.11.0.2 being configured on the router

400Mbps - the agreed bandwidth

1Gbps - the physical interface throughput

’public’ - read-only SNMP community configured on R1

GigabitEthernet2/1 - the physical interface that connects R1 to ISP1
As presented in Listing 2.37↓, the parameters are largely self-explanatory. Please check the BGP Monitoring↑ section as well.
Listing 2.37: Sample provider configuration
peer.1.95th                                  = 400
peer.1.95th.bill_day                         = 1
peer.1.bgp_peer                              = R1
peer.1.cost                                  = 6
peer.1.description                           = ISP1
peer.1.ipv4.next_hop                         = 10.11.0.1
peer.1.ipv4.probing                          = 10.0.0.3
peer.1.ipv4.diag_hop                         = 10.11.0.1
peer.1.ipv4.mon                              = 10.11.0.1 10.11.0.2
peer.1.limit_load                            = 1000
peer.1.shortname                             = ISP1
peer.1.snmp.interfaces                       = 1:GigabitEthernet2/1
peer.1.mon.ipv4.bgp_peer                     = 10.11.0.1
​
snmp.1.name                                  = Host1
snmp.1.ip                                    = 10.0.0.1
snmp.1.community                             = public
SNMP parameters validation
To make sure that the SNMP parameters are correct, the ‘snmpwalk‘ tool can be run on the IRP server:
Listing 2.38: SNMP parameters validation
root@server ~ $ snmpwalk -v2c -c irp-public 10.0.0.1 ifDescr
IF-MIB::ifDescr.1 = STRING: GigabitEthernet1/1
IF-MIB::ifDescr.2 = STRING: GigabitEthernet1/2
IF-MIB::ifDescr.3 = STRING: GigabitEthernet2/1
IF-MIB::ifDescr.4 = STRING: GigabitEthernet2/2
IF-MIB::ifDescr.5 = STRING: GigabitEthernet2/3
IF-MIB::ifDescr.6 = STRING: GigabitEthernet2/4
​
root@server ~ $ snmpwalk -v2c -c irp-public 10.0.0.1 ifIndex
IF-MIB::ifIndex.1 = INTEGER: 1
IF-MIB::ifIndex.2 = INTEGER: 2
IF-MIB::ifIndex.3 = INTEGER: 3
IF-MIB::ifIndex.4 = INTEGER: 4
IF-MIB::ifIndex.5 = INTEGER: 5
IF-MIB::ifIndex.6 = INTEGER: 6
​

2.4.2 Flowspec PBR

In networks that have Flowspec and Redirect to IP capabilities, PBR can be implemented by means of Flowspec policies. Refer to Flowspec policies↑ for details.
In order to use assigned providers or peers on Internet Exchanges, the same set of source IP addresses and DSCP values (for Internet Exchanges) is assigned to individual providers/peers. These values are used to automatically generate Flowspec policies that redirect IRP probes to the designated next-hops.
In order to use this feature, global.flowspec↓, global.flowspec.pbr↓ and bgpd.peer.X.flowspec↓ must be enabled.
Flowspec PBR cannot be used in non-intrusive (global.nonintrusive_bgp↓) mode.
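A minimal configuration sketch of the relevant parameters is shown below; it assumes these boolean parameters take the value 1 when enabled, as with other IRP toggles, and that R1 is the BGP router name configured in BGPd (as in the iBGP example later in this chapter):
global.flowspec                              = 1
global.flowspec.pbr                          = 1
bgpd.peer.R1.flowspec                        = 1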


2.5 BGPd Configuration

For IRP to inject the improved prefixes into the routing tables, proper iBGP sessions have to be configured between the edge routers and the IRP server. The IRP BGP daemon acts similarly to a route-reflector-client. The following criteria need to be met:
  1. An internal BGP session using the same autonomous system number (ASN) must be configured between each edge router and the IRP. BGP sessions must not be configured with next-hop-self (route reflectors can’t be used to inject routes with modified next-hop) - the next-hop parameter advertised by IRP BGPd should be distributed to other iBGP neighbors.
  2. route-reflector-client must be enabled for the routes advertised by IRP BGPd to be distributed to all non-client neighbors.
  3. Routes advertised by IRP BGPd must have a higher preference over routes received from external BGP neighbors.
    This can be done by different means, on the IRP or on the router side:
    • Local-pref can be set to a reasonably high value in the BGPd configuration
    • Communities can be appended to prefixes advertised by BGPd
    Avoid collisions between the localpref or community values assigned to IRP in its configuration and those already used on the customer’s network.
    • Multi-exit-discriminator (MED) can be changed to affect the best-path selection algorithm
    • Origin of the advertised route can be left unchanged or overridden to a specific value (incomplete, IGP, EGP)
    LocalPref, MED and Origin attribute values are set with the first nonempty value in this order: 1) value from configuration or 2) value taken from incoming aggregate or 3) default value specified in RFC4271.

    The Communities attribute value concatenates the value taken from the incoming aggregate with the configured value. The router should be configured to send no Communities attribute in case IRP must announce a Communities attribute that contains only the configured value.
  4. BGP next-hop must be configured for each provider configured in IRP (please refer to Providers configuration↑ and Providers settings↓)
Special attention is required if edge routers are configured to announce prefixes with an empty AS-Path. In some cases, improvements announced by BGPd may have an empty AS-Path. Thus, when the edge router does not have any prefix-list filtering enforced, all the current improvements can be improperly advertised to other routers, which may lead to policy violations and session resets. Refer to AS-Path behavior in IRP BGPd↓ regarding the way to disallow announcements with an empty AS-Path. None of the improvements advertised by IRP should be advertised to your external peers.
We recommend injecting the routes into the edge router that runs the BGP session with the provider. This ensures that the routes are properly redistributed across the network.
For example, there are two routers: R1 and R2. R1 runs a BGP session with Level3 and R2 with Cogent. The current route is x.x.x.x/24 with the next-hop set to Level3 and all the routers learn this route via R1. The system injects the x.x.x.x/24 route to R2 with the next-hop updated to Cogent. In this case the new route is installed on the routing table and it will be properly propagated to R1 (and other routers) via iBGP.
However if the system injects the new route to R1 instead of R2, the route’s next-hop will point to R2 while R2 will have the next-hop pointing to R1 as long as the injected route is propagated over iBGP to other routers. In this case a routing loop will occur.
In the following BGP session configuration example, the IRP server with IP 10.0.0.2 establishes an iBGP session to the edge router (IP: 10.0.0.1). The local-pref parameter for the prefixes advertised by IRP is set to 190. BGP monitoring (see BGP Monitoring↑, BGPd settings↓) is enabled.
Listing 2.39: iBGP session configuration example
bgpd.peer.R1.as             = 65501
bgpd.peer.R1.our_ip         = 10.0.0.2
bgpd.peer.R1.peer_ip        = 10.0.0.1
bgpd.peer.R1.listen         = 1
bgpd.peer.R1.localpref      = 190
bgpd.peer.R1.shutdown       = 0
bgpd.peer.R1.snmp.ip        = 10.0.0.1
bgpd.peer.R1.snmp.community = public
Vendor-specific router-side iBGP session configuration examples:
Vyatta routers:
Listing 2.40: Vyatta IPv4 iBGP session configuration example
set protocols bgp 65501 neighbor 10.0.0.2 remote-as '65501'
set protocols bgp 65501 neighbor 10.0.0.2 route-reflector-client
set protocols bgp 65501 parameters router-id '10.0.0.1'
Listing 2.41: Vyatta IPv6 iBGP session configuration example
delete system ipv6 disable-forwarding
commit
set protocols bgp 65501 neighbor 2001:db8:2::2 remote-as '65501'
set protocols bgp 65501 neighbor 2001:db8:2::2 route-reflector-client
set protocols bgp 65501 neighbor 2001:db8:2::2 address-family 'ipv6-unicast'
set protocols bgp 65501 parameters router-id '10.0.0.1'
Listing 2.42: Vyatta IPv4 route-map for setting local-pref on the router
set protocols bgp 65501 neighbor 10.0.0.2 route-map import 'RM-IRP-IN'
set policy route-map RM-IRP-IN rule 10 action 'permit'
set policy route-map RM-IRP-IN rule 10 set local-preference '190'
Listing 2.43: Vyatta IPv6 route-map for setting local-pref on the router
set protocols bgp 65501 neighbor 2001:db8:2::2 route-map import 'RM-IRP-IN'
set policy route-map RM-IRP-IN rule 10 action 'permit'
set policy route-map RM-IRP-IN rule 10 set local-preference '190'
Cisco routers:
Listing 2.44: Cisco IPv4 iBGP session configuration example
router bgp 65501
neighbor 10.0.0.2 remote-as 65501
neighbor 10.0.0.2 send-community
neighbor 10.0.0.2 route-reflector-client
Listing 2.45: Cisco IPv6 iBGP session configuration example
router bgp 65501
neighbor 2001:db8:2::2 remote-as 65501
neighbor 2001:db8:2::2 send-community
neighbor 2001:db8:2::2 route-reflector-client
 
or
 
router bgp 65501
neighbor 2001:db8:2::2 remote-as 65501
no neighbor 2001:db8:2::2 activate
address-family ipv6
neighbor 2001:db8:2::2 activate
neighbor 2001:db8:2::2 send-community
neighbor 2001:db8:2::2 route-reflector-client
Listing 2.46: Cisco IPv4 route-map for setting local-pref on the router
router bgp 65501
neighbor 10.0.0.2 route-map RM-IRP-IN in
route-map RM-IRP-IN permit 10
 set local-preference 190
Listing 2.47: Cisco IPv6 route-map for setting local-pref on the router
router bgp 65501
neighbor 2001:db8:2::2 route-map RM-IRP-IN in
route-map RM-IRP-IN permit 10
 set local-preference 190
Listing 2.48: Limiting the number of received prefixes for an IPv4 neighbor on Cisco
router bgp 65501
neighbor 10.0.0.2 maximum-prefix 10000
Listing 2.49: Limiting the number of received prefixes for an IPv6 neighbor on Cisco
router bgp 65501
neighbor 2001:db8:2::2 maximum-prefix 10000
Juniper equipment:
The Cluster ID must be unique in multi-router configuration. Otherwise improvements will not be redistributed properly. Cluster ID is optional for single router networks.
Listing 2.50: Juniper IPv4 iBGP session configuration example
[edit]
routing-options {
    autonomous-system 65501;
    router-id 10.0.0.1;
}
protocols {
    bgp {
        group 65501 {
            type internal;
            cluster 0.0.0.1;
            family inet {
                unicast;
            }
            peer-as 65501;
            neighbor 10.0.0.2;
       }
    }
}
Listing 2.51: Juniper IPv6 iBGP session configuration example
[edit]
routing-options {
    autonomous-system 65501;
    router-id 10.0.0.1;
}
protocols {
    bgp {
        group 65501 {
            type internal;
            cluster 0.0.0.1;
            family inet6 {
                any;
            }
            peer-as 65501;
            neighbor 2001:db8:2::2;
        }
    }
}
Listing 2.52: Juniper IPv4 route-map for setting local-pref on the router
[edit]
routing-options {
    autonomous-system 65501;
    router-id 10.0.0.1;
}
protocols {
    bgp {
        group 65501 {
            type internal;
            peer-as 65501;
            neighbor 10.0.0.2 {
                preference 190;
            }
       }
    }
}
Listing 2.53: Limiting the number of received prefixes for an IPv4 neighbor on Juniper
protocols {
    bgp {
        group 65501 {
            neighbor 10.0.0.2 {
                family inet {
                    any {
                        prefix-limit {
                            maximum 10000;
                            teardown;
                        }
                    }
                }
            }
        }
    }
}

2.5.1 AS-Path behavior in IRP BGPd

Every advertised prefix contains an AS-Path attribute. However, in some cases this attribute can be empty.
For compatibility purposes, the IRP BGPd has a few algorithms for handling the AS-Path attribute:
  1. The advertised prefix will be marked with a recovered AS-Path attribute.
    The recovered AS-Path is composed of consecutive AS numbers that are collected during the exploring process. Please note that the recovered AS-Path may differ from the actual BGP path.
  2. The advertised prefix will be marked with the AS-Path from the aggregate received via BGP.
  3. If the advertised prefix, for whatever reason, has an empty AS-Path, it can be announced or ignored, depending on the BGPd configuration.
For a detailed description refer to bgpd.as_path↓.
In case certain routes should be preferred over decisions made by IRP, use one of the following approaches:
- Routers may be configured with more preferable localpref / weight values for such routes so that the best path algorithm always selects these routes instead of the routes injected by the BGPd daemon.
- Routes may be filtered or assigned a lower localpref / weight using an incoming route-map applied to the BGP session with IRP.
- Networks that must be completely ignored by IRP can be specified in the global.ignored.asn↓ and global.ignorednets↓ parameters or marked with a BGP Community listed in global.ignored_communities↓, so that no probing / improving / announcing will be performed by IRP.
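For instance, entries like the following exclude a specific network and origin AS from IRP processing; the values below are placeholders:
global.ignorednets                           = 192.0.2.0/24
global.ignored.asn                           = 65001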
If split announcements are enabled in IRP (refer to 4.1.4.31↓), the routes announced by IRP will be more specific.

2.5.2 BGPd online reconfiguration

BGPd can be reconfigured without restarting its BGP sessions. This feature increases network stability and reduces the routers’ CPU load during session initiation.
Use the commands below to inform IRP BGPd to reload its configuration:
Listing 2.54: Reload IRP BGPd configuration
root@server ~ $ service bgpd reload

2.6 Failover configuration

Failover relies on many operating system, networking and database components to work together. All these must be planned in advance and require careful implementation.
We recommend that you request Noction’s engineers to assist with failover configuration.
Subsequent sections should be considered in presented order when configuring a fresh IRP failover setup. Refer to them when troubleshooting or restoring services.

2.6.1 Initial failover configuration

Prerequisites

Before proceeding with failover configuration the following prerequisites must be met:
  • One IRP node is configured and fully functional. We will refer to this node as $IRPMASTER.
  • Second IRP node is installed with the same version of IRP as on $IRPMASTER. We will refer to this node as $IRPSLAVE.
Second IRP node MUST run the same operating system as $IRPMASTER.
When troubleshooting problems, besides checking for matching IRP versions, ensure the same versions of the irp MySQL databases are installed on both failover nodes.
  • IRP services, MySQL and HTTP daemons are stopped on the $IRPSLAVE node.
  • The network operator can SSH to both $IRPMASTER and $IRPSLAVE; subsequent commands are assumed to be run from a $IRPMASTER console.
$IRPMASTER and $IRPSLAVE must have different hostnames.
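A quick way to compare the installed IRP package versions on both nodes is a package query; this assumes an RPM-based system, as in the CentOS examples used elsewhere in this guide (adjust the query for your package manager):
root@IRPMASTER ~ # rpm -qa | grep -i irp
root@IRPMASTER ~ # ssh $IRPSLAVE "rpm -qa | grep -i irp"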

Configure communication channel from $IRPMASTER to $IRPSLAVE

This channel is used during failover configuration, and subsequently $IRPMASTER uses it to synchronize IRP configuration changes when they occur. It uses key-based authentication without a passphrase to allow automated logins to $IRPSLAVE by $IRPMASTER processes.
Adjust firewalls if any so that $IRPMASTER node can access $IRPSLAVE via SSH.
Generate SSH keys pair WITHOUT passphrase:
Listing 2.55: Generate keys on $IRPMASTER
root@IRPMASTER ~ # ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa -C "failover@noction"
Default key files are used. In case your system needs additional keys for other purposes, we advise that those keys are assigned a different name. If this is not possible, then the key file name designated for failover use should also be specified in the IRP configuration parameter global.failover_identity_file↓.
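For example, if a dedicated key pair is created for failover, its path could be referenced as follows; the file name is only an illustration:
global.failover_identity_file                = /root/.ssh/id_rsa_failover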
Copy public SSH key from master to slave instance:
Listing 2.56: Install public key on $IRPSLAVE
root@IRPMASTER ~ # cat ~/.ssh/id_rsa.pub | while read key; do ssh $IRPSLAVE "echo $key >> ~/.ssh/authorized_keys"; done
Check if $IRPMASTER can login to $IRPSLAVE without using a password:
Listing 2.57: Check SSH certificate-based authentication works
root@IRPMASTER ~ # ssh $IRPSLAVE

Install certificate and keys for MySQL Multi-Master replication between $IRPMASTER and $IRPSLAVE

MySQL Multi-Master replication uses separate communication channels that also require authentication. IRP failover uses key-based authentication for these channels too.
Adjust firewalls if any so that $IRPMASTER and $IRPSLAVE can communicate with each other bidirectionally.
Create Certificate Authority and server/client certificates and keys. Commands must be run on both $IRPMASTER and $IRPSLAVE nodes:
Listing 2.58: Generate CA and certificates
# cd && rm -rvf irp-certs && mkdir -p irp-certs && cd irp-certs

# openssl genrsa 2048 > `hostname -s`-ca-key.pem

# openssl req -new -x509 -nodes -days 3600 \
  -subj "/C=US/ST=CA/L=Palo Alto/O=Noction/OU=Intelligent Routing Platform/CN=`/bin/hostname` CA/emailAddress=support@noction.com" \
  -key `hostname -s`-ca-key.pem -out `hostname -s`-ca-cert.pem

# openssl req -newkey rsa:2048 -days 3600 \
  -subj "/C=US/ST=CA/L=Palo Alto/O=Noction/OU=Intelligent Routing Platform/CN=`/bin/hostname` server/emailAddress=support@noction.com" \
  -nodes -keyout `hostname -s`-server-key.pem -out `hostname -s`-server-req.pem

# openssl rsa -in `hostname -s`-server-key.pem -out `hostname -s`-server-key.pem

# openssl x509 -req -in `hostname -s`-server-req.pem -days 3600 \
  -CA `hostname -s`-ca-cert.pem -CAkey `hostname -s`-ca-key.pem -set_serial 01 \
  -out `hostname -s`-server-cert.pem

# openssl req -newkey rsa:2048 -days 3600 \
  -subj "/C=US/ST=CA/L=Palo Alto/O=Noction/OU=Intelligent Routing Platform/CN=`/bin/hostname` client/emailAddress=support@noction.com" \
  -nodes -keyout `hostname -s`-client-key.pem -out `hostname -s`-client-req.pem

# openssl rsa -in `hostname -s`-client-key.pem -out `hostname -s`-client-key.pem

# openssl x509 -req -in `hostname -s`-client-req.pem -days 3600 \
  -CA `hostname -s`-ca-cert.pem -CAkey `hostname -s`-ca-key.pem -set_serial 01 \
  -out `hostname -s`-client-cert.pem
Verify certificates. Commands must be run on both $IRPMASTER and $IRPSLAVE nodes:
Listing 2.59: Verify certificates
# openssl verify -CAfile `hostname -s`-ca-cert.pem `hostname -s`-server-cert.pem `hostname -s`-client-cert.pem

server-cert.pem: OK
client-cert.pem: OK
Install certificates in designated directories. Commands must be run on both $IRPMASTER and $IRPSLAVE nodes:
Listing 2.60: Install certificates in designated directories
# mkdir -p /etc/pki/tls/certs/mysql/server/ /etc/pki/tls/certs/mysql/client/ /etc/pki/tls/private/mysql/server/ /etc/pki/tls/private/mysql/client/

# cp `hostname -s`-ca-cert.pem `hostname -s`-server-cert.pem /etc/pki/tls/certs/mysql/server/
# cp `hostname -s`-ca-key.pem `hostname -s`-server-key.pem /etc/pki/tls/private/mysql/server/
# cp `hostname -s`-client-cert.pem /etc/pki/tls/certs/mysql/client/
# cp `hostname -s`-client-key.pem /etc/pki/tls/private/mysql/client/

# cd && rm -rvf irp-certs
Cross copy client key and certificates:
Listing 2.61: Cross copy client key and certificates
root@IRPMASTER ~# scp "/etc/pki/tls/certs/mysql/server/$IRPMASTER-ca-cert.pem" "$IRPSLAVE:/etc/pki/tls/certs/mysql/client/"
root@IRPMASTER ~# scp "/etc/pki/tls/certs/mysql/client/$IRPMASTER-client-cert.pem" "$IRPSLAVE:/etc/pki/tls/certs/mysql/client/"
root@IRPMASTER ~# scp "/etc/pki/tls/private/mysql/client/$IRPMASTER-client-key.pem" "$IRPSLAVE:/etc/pki/tls/private/mysql/client/"

root@IRPMASTER ~# scp "$IRPSLAVE:/etc/pki/tls/certs/mysql/server/$IRPSLAVE-ca-cert.pem" "/etc/pki/tls/certs/mysql/client/"
root@IRPMASTER ~# scp "$IRPSLAVE:/etc/pki/tls/certs/mysql/client/$IRPSLAVE-client-cert.pem" "/etc/pki/tls/certs/mysql/client/"
root@IRPMASTER ~# scp "$IRPSLAVE:/etc/pki/tls/private/mysql/client/$IRPSLAVE-client-key.pem" "/etc/pki/tls/private/mysql/client/"
Adjust file permissions. Commands must be run on both $IRPMASTER and $IRPSLAVE nodes:
Listing 2.62: Set file permissions for keys and certificates
# chown -R mysql:mysql /etc/pki/tls/certs/mysql/ /etc/pki/tls/private/mysql/ 

# chmod 0600 /etc/pki/tls/private/mysql/server/* /etc/pki/tls/private/mysql/client/*

Configure MySQL replication on $IRPSLAVE

Each node of MySQL Multi-Master replication is assigned its own identifier and references previously configured certificates. There are also configuration parameters such as binary/relay log file names and auto-increment and auto-increment-offset values.
IRP includes a template config file /usr/share/doc/irp/irp.my_repl_slave.cnf.template. The template designates $IRPSLAVE as the second server of the Multi-Master replication and includes references to `hostname -s` that need to be replaced with the actual hostname of $IRPSLAVE before installing. Apply the changes and review the configuration file. Alternatively, a command like the example below can be used to create the $IRPSLAVE config file from the template. Make sure to use the actual short hostname instead of the provided variable:
Listing 2.63: Example $IRPSLAVE configuration from template
root@IRPSLAVE ~# sed 's|`hostname -s`|$IRPSLAVE|' < /usr/share/doc/irp/irp.my_repl_slave.cnf.template > /etc/noction/mysql/irp.my_repl_slave.cnf
The config file created above must be included into $IRPSLAVE node’s MySQL config my.cnf. It is recommended that line “!include /etc/noction/mysql/irp.my_repl_slave.cnf” is un-commented.
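To double-check that the include is active, a quick grep can be used (a sketch only; the location of my.cnf may differ on Ubuntu):
# grep '^!include /etc/noction/mysql/irp.my_repl_slave.cnf' /etc/my.cnf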
Check Multi-Master configuration on $IRPSLAVE:
Listing 2.64: Check MySQL on $IRPSLAVE works correctly
root@IRPSLAVE ~# service mysqld start
root@IRPSLAVE ~# tail -f /var/log/mysqld.log
root@IRPSLAVE ~# mysql irp -e "show master status \G"
root@IRPSLAVE ~# service mysqld stop

Configure MySQL replication on $IRPMASTER

Configuration of $IRPMASTER is performed while its services are running.
Configuring MySQL Multi-Master replication on $IRPMASTER should only be done after confirming it works on $IRPSLAVE.
Similarly to $IRPSLAVE above, IRP comes with a template configuration file for $IRPMASTER - /usr/share/doc/irp/irp.my_repl_master.cnf.template. The template designates $IRPMASTER as the first server of the Multi-Master replication and includes references to `hostname -s` that need to be replaced with the actual hostname of $IRPMASTER before installing. Apply the changes and review the resulting configuration file.
Alternatively, a command like the example below can be used to create the $IRPMASTER config file from the template. Make sure to use the actual short hostname instead of the provided variable:
Listing 2.65: Set $IRPMASTER as a first node for Multi-Master replication
root@IRPMASTER ~# sed 's|`hostname -s`|$IRPMASTER|' < /usr/share/doc/irp/irp.my_repl_master.cnf.template > /etc/noction/mysql/irp.my_repl_master.cnf
Again, the config file created above must be included into $IRPMASTER node’s MySQL config my.cnf. It is recommended that line "!include /etc/noction/mysql/irp.my_repl_master.cnf" is un-commented.
Check MySQL Multi-Master configuration on $IRPMASTER:
Listing 2.66: Check MySQL on $IRPMASTER works correctly
root@IRPMASTER ~# service mysqld restart
root@IRPMASTER ~# tail -f /var/log/mysqld.log
root@IRPMASTER ~# mysql irp -e "show master status \G"
If the Multi-Master configuration on $IRPMASTER fails or causes unrecoverable errors, a first troubleshooting step is to re-comment the !include line in /etc/my.cnf on both master and slave and restart the mysqld service to revert to the previous good configuration.

Create replication grants on $IRPMASTER

Replication requires a MySQL user with corresponding access rights to the replicated databases. The user must be assigned a password and is restricted to connecting from $IRPMASTER or $IRPSLAVE respectively. Unfortunately, hostnames cannot be used for this; the exact IP addresses of the corresponding nodes must be specified.
The user is created only once in this procedure: the database on $IRPMASTER is later transferred manually to $IRPSLAVE, and the user is copied as part of that transfer.
Grant replication access to replication user:
Listing 2.67: Replication user and grants
mysql> CREATE USER 'irprepl'@'<IRPMASTER_ip_address>' IDENTIFIED BY '<replication_user_password>';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'irprepl'@'<IRPMASTER_ip_address>' REQUIRE CIPHER 'DHE-RSA-AES256-SHA';
mysql> CREATE USER 'irprepl'@'<IRPSLAVE_ip_address>' IDENTIFIED BY '<replication_user_password>';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'irprepl'@'<IRPSLAVE_ip_address>' REQUIRE CIPHER 'DHE-RSA-AES256-SHA';
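Optionally, the grants can be verified immediately; the host placeholders below follow the same convention as above:
mysql> SHOW GRANTS FOR 'irprepl'@'<IRPMASTER_ip_address>';
mysql> SHOW GRANTS FOR 'irprepl'@'<IRPSLAVE_ip_address>';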

Copy IRP database configuration and database to $IRPSLAVE

IRP failover copies the irp database and the corresponding configuration files from $IRPMASTER to $IRPSLAVE before activating the second server. This avoids the need to manually verify and synchronize identifiers.
Copy root's .my.cnf config file, if it exists:
Listing 2.68: Copy database root user configuration file
root@IRPMASTER ~# scp /root/.my.cnf $IRPSLAVE:/root/
Copy config files:
Listing 2.69: Copy database configuration files
root@IRPMASTER ~# scp /etc/noction/db.global.conf /etc/noction/db.frontend.conf $IRPSLAVE:/etc/noction/
Preliminary copy database files:
Listing 2.70: Copy database data files
root@IRPMASTER ~# rsync -av --progress --delete --delete-after --exclude="master.info" --exclude="relay-log.info" --exclude="*-bin.*" --exclude="*-relay.*" /var/lib/mysql/ $IRPSLAVE:/var/lib/mysql/
The preliminary copy ensures that large files that take a long time to copy are synced to $IRPSLAVE without stopping the MySQL daemon on $IRPMASTER, so only a reduced set of differences will need to be synced while MySQL is stopped. This operation can be rerun one more time to reduce the downtime on $IRPMASTER even further.
Finish copy of database files (OS with Systemd):
Listing 2.71: Copy differences of database files (OS with Systemd)
root@IRPMASTER ~# systemctl start irp-shutdown-except-bgpd.target
root@IRPMASTER ~# systemctl stop httpd24-httpd mariadb  # CentOS
root@IRPMASTER ~# systemctl stop apache2 mysql          # Ubuntu

root@IRPMASTER ~# cd /var/lib/mysql && rm -vf ./master.info ./relay-log.info ./*-bin.* ./*-relay.*
root@IRPMASTER ~# rsync -av --progress --delete --delete-after /var/lib/mysql/ $IRPSLAVE:/var/lib/mysql/
The procedure above minimizes the downtime of the MySQL daemon on $IRPMASTER. During this time BGPd preserves IRP Improvements. Make sure this action takes less than bgpd.db.timeout.withdraw↓.
Finish copy of database files (OS with RC-files):
Listing 2.72">
Listing 2.72: Copy differences of database files (OS with RC-files)
root@IRPMASTER ~# service irp stop nobgpd

root@IRPMASTER ~# service httpd24-httpd stop # CentOS
root@IRPMASTER ~# service mysqld stop        # CentOS
root@IRPMASTER ~# service apache2 stop       # Ubuntu
root@IRPMASTER ~# service mysql stop         # Ubuntu

root@IRPMASTER ~# cd /var/lib/mysql && rm -vf ./master.info ./relay-log.info ./*-bin.* ./*-relay.*
root@IRPMASTER ~# rsync -av --progress --delete --delete-after /var/lib/mysql/ $IRPSLAVE:/var/lib/mysql/
The procedure above minimizes the downtime of the MySQL daemon on $IRPMASTER. During this time BGPd preserves IRP Improvements. Make sure this action takes less than bgpd.db.timeout.withdraw↓.
Start MySQL on $IRPSLAVE and check that there are no errors in /var/log/mysqld.log on CentOS or /var/log/mysql/error.log on Ubuntu.
$IRPSLAVE must be checked first.
Then start MySQL on $IRPMASTER and check the MySQL logs for errors in the same way.
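A minimal sketch of these checks on systemd-based CentOS nodes (service and log names differ on Ubuntu as noted above):
root@IRPSLAVE ~# systemctl start mariadb && tail -n 50 /var/log/mysqld.log
root@IRPMASTER ~# systemctl start mariadb && tail -n 50 /var/log/mysqld.log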

Start replication (Slaves) on both $IRPMASTER and $IRPSLAVE

MySQL Multi-Master replication uses a replication scheme where a replicating master acts as both replication master and replication slave. The steps above configured IRP nodes to be capable of acting as replication masters. Here we ensure that both nodes are capable of acting as replication slaves.
IRP provides a template command that needs to be adjusted for each of the $IRPMASTER and $IRPSLAVE nodes and instructs the MySQL daemon to take the replication slave role. The template is /usr/share/doc/irp/changemasterto.template.
The template produces a different command for each of the $IRPMASTER and $IRPSLAVE nodes and reuses multiple values from the configuration settings described above. The command that is run on one node points to the other node as its master.
Make $IRPMASTER a replication slave for $IRPSLAVE:
Listing 2.73: Set $IRPMASTER as replication slave
$IRPMASTER-mysql>
CHANGE MASTER TO
MASTER_HOST='$IRPSLAVE-ip-address',
MASTER_USER='irprepl',
MASTER_PASSWORD='<replication_user_password>',
MASTER_PORT=3306,
MASTER_LOG_FILE='$IRPSLAVE-bin.000001',
MASTER_LOG_POS=<$IRPSLAVE-bin-log-position>,
MASTER_CONNECT_RETRY=10,
MASTER_SSL=1,
MASTER_SSL_CAPATH='/etc/pki/tls/certs/mysql/client/',
MASTER_SSL_CA='/etc/pki/tls/certs/mysql/client/$IRPSLAVE-ca-cert.pem',
MASTER_SSL_CERT='/etc/pki/tls/certs/mysql/client/$IRPSLAVE-client-cert.pem',
MASTER_SSL_KEY='/etc/pki/tls/private/mysql/client/$IRPSLAVE-client-key.pem',
MASTER_SSL_CIPHER='DHE-RSA-AES256-SHA';
You must manually check what values to assign to $IRPSLAVE-bin.000001 and <$IRPSLAVE-bin-log-position> by running the following MySQL command on $IRPSLAVE:
mysql> show master status
For the initial configuration the values for $IRPSLAVE-bin.000001 and <$IRPSLAVE-bin-log-position> must be as follows:
Binlog file: $IRPSLAVE-bin.000001
Binlog position: 106
Run the commands below to run the replication and check the slave status:
Listing 2.74: Starting replication slave on $IRPMASTER
mysql> START SLAVE \G
mysql> show slave status \G
Check the Slave_IO_State, Last_IO_Errno, Last_IO_Error, Last_SQL_Errno, Last_SQL_Error values for errors. Make sure there are no errors.
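A quick way to extract only these fields from the shell, reusing the same mysql client invocation as elsewhere in this chapter:
# mysql irp -e "show slave status \G" | grep -E "Slave_IO_State|Last_IO_Errno|Last_IO_Error|Last_SQL_Errno|Last_SQL_Error"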
Make $IRPSLAVE a replication slave for $IRPMASTER:
Listing 2.75: Set $IRPSLAVE as replication slave
$IRPSLAVE-mysql>
CHANGE MASTER TO
MASTER_HOST='$IRPMASTER-ip-address',
MASTER_USER='irprepl',
MASTER_PASSWORD='<replication_user_password>',
MASTER_PORT=3306,
MASTER_LOG_FILE='$IRPMASTER-bin.000001',
MASTER_LOG_POS=<$IRPMASTER-bin-log-position>,
MASTER_CONNECT_RETRY=10,
MASTER_SSL=1,
MASTER_SSL_CAPATH='/etc/pki/tls/certs/mysql/client/',
MASTER_SSL_CA='/etc/pki/tls/certs/mysql/client/$IRPMASTER-ca-cert.pem',
MASTER_SSL_CERT='/etc/pki/tls/certs/mysql/client/$IRPMASTER-client-cert.pem',
MASTER_SSL_KEY='/etc/pki/tls/private/mysql/client/$IRPMASTER-client-key.pem',
MASTER_SSL_CIPHER='DHE-RSA-AES256-SHA';
You must manually check what values to assign to $IRPMASTER-bin.000001 and <$IRPMASTER-bin-log-position> by running the following MySQL command on $IRPMASTER:
mysql> show master status
For the initial configuration the values for $IRPMASTER-bin.000001 and <$IRPMASTER-bin-log-position> must be as follows:
Binlog file: $IRPMASTER-bin.000001
Binlog position: 106
Run the commands below to run the replication and check the slave status:
Listing 2.76: Starting replication slave
mysql> START SLAVE \G
mysql> show slave status \G
Run IRP and HTTPD services on $IRPMASTER and $IRPSLAVE (OS with Systemd):
Listing 2.77: Starting IRP services and Frontend (OS with Systemd)
# systemctl start irp.target
Run IRP and HTTPD services on $IRPMASTER and $IRPSLAVE (OS with RC-files):
Listing 2.78: Starting IRP services and Frontend (OS with RC-files)
# service httpd24-httpd start
# service irp start
If the actions above took a long time, start the services on $IRPMASTER first in order to shorten the MySQL downtime.

Configure Failover using Wizard on $IRPMASTER

These steps should only be performed once the SSH communication channel and the MySQL Multi-Master replication of the irp database have been set up and verified to work. Refer to the subsections of Failover configuration↑ above.
Run failover wizard:
Login into IRP’s Frontend on $IRPMASTER node and run Configuration -> Setup wizard -> Setup Failover.
A valid failover license should be acquired before failover configuration settings become available.
Configure IRP failover:
Follow the Setup Failover wizard steps and provide the required details. Refer to Setup Failover wizard↓ for details. Once your configuration changes are submitted, IRP validates the configuration and, if it is valid, syncs it to $IRPSLAVE.
Only after this synchronization step takes place does the $IRPSLAVE node learn its role in this setup.
Apply configuration changes to edge routers:
Ensure that the designated probing IPs and BGP session(s) are also set up on the edge routers. Refer to the corresponding sections of this document for details.
Enable failover:
Use IRP’s Frontend Configuration -> Global -> Failover to configure IRP failover on both nodes. Monitor both IRP nodes and ensure everything is within expectations.
It is recommended that, after finishing the preparatory steps above, both the IRP master and slave nodes run with failover disabled for a short period of time (less than 1 hour) in order to verify that all IRP components and MySQL Multi-Master replication work as expected. Keep this time interval short to avoid a split-brain situation in which the two nodes make contradictory decisions.

Synchronize RRD statistics to $IRPSLAVE

RRD statistics are collected by both IRP failover nodes and need not be synchronized during normal operation. Still, when the IRP failover nodes are set up a long time apart, it is recommended to synchronize the RRD files too, so that past statistics are available on both IRP nodes. During short downtimes of either node, synchronization is less useful, but it can be performed in order to cover the short downtime gaps in RRD-based graphs.
Sample command to synchronize IRP’s RRD statistics:
Listing 2.79: Synchronize RRD
root@IRPMASTER ~ # rsync -av /var/spool/irp/ $IRPSLAVE:/var/spool/irp

2.6.2 Re-purpose operational IRP node into an IRP failover slave

Sometimes an existing, fully configured IRP node is designated to be re-purposed as a failover slave. Take the following actions to re-purpose such a node as an IRP failover slave:
  1. upgrade IRP to the version matching IRP failover master node
  2. create a backup copy of your configuration
  3. delete the following configuration files: /etc/noction/irp.conf, /etc/noction/exchanges.conf, /etc/noction/policies.conf (a sketch of steps 2 and 3 follows this list)
  4. proceed with configuration as detailed in Initial failover configuration↑
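A minimal sketch of steps 2 and 3, assuming the backup is kept in a dated directory under /root (the backup location is an arbitrary choice):
# cp -a /etc/noction /root/noction-config-backup-$(date +%F)
# rm -f /etc/noction/irp.conf /etc/noction/exchanges.conf /etc/noction/policies.conf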

2.6.3 Re-purpose operational IRP failover slave into a new master

If a master node fails catastrophically and cannot be recovered an operational IRP slave can be re-purposed to become the new master. Once re-purposing is finished a new slave node can be configured as detailed in Initial failover configuration↑.
The Failover status bar (3.2.2↓) includes a context menu with functions to switch the IRP failover node role. Simply click the failover status bar’s ’Slave node’ icon and confirm when prompted to switch the role.
If you switched a node’s role by mistake, simply change it one more time to revert to the previous configuration.

2.6.4 Recover prolonged node downtime or corrupted replication

The Multi-Master replication used by IRP failover is able to cope with downtime of less than 24 hours. In case of prolonged downtime or corrupted replication, replication will not be able to continue and must be recovered using the procedure below.

MySQL Multi-Master recovery prerequisites

Before proceeding with recovery of replication the following prerequisites must be met:
  • The currently active IRP node is designated as the MySQL sync ’origin’. This node stores the reference configuration parameters and data, which will be synced to the node being recovered, designated as the ’destination’.
  • Recovery should be scheduled during non-peak hours.
  • Recovery must finish before bgpd.db.timeout.withdraw↓ (default 4h) expires. If recovery cannot be completed in time, MySQL must be started back on the active node. The configured timeout can be checked as shown in the sketch after this list.
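A simple way to check the currently configured value on the active node (if the parameter is not explicitly present in irp.conf, the default applies):
# grep 'bgpd.db.timeout.withdraw' /etc/noction/irp.conf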

MySQL Multi-Master recovery procedure

Note that recovery closely follows the actions described in Copy IRP database configuration and database to $IRPSLAVE↑ and Start replication (Slaves) on both $IRPMASTER and $IRPSLAVE↑, with the clarification that data and files are copied from the origin node to the destination node (instead of from the IRP failover master to the IRP failover slave). Refer to those sections for details about the steps below; a short sketch of the first steps follows the list.
The steps are as follows:
  1. destination: stop httpd, irp, mysqld
  2. origin: sync /etc/noction/db.* to slave:/etc/noction/
  3. origin: sync /root/.my.cnf to slave:/root/.my.cnf
  4. origin: sync /var/lib/mysql/ to slave:/var/lib/mysql/
    exclude files:
    master.info relay-log.info *-bin.* *-relay.*

    wait until sync at (4) succeeds and continue with:
  5. origin: stop httpd, irp (except bgpd), mysqld
  6. origin: delete files master.info relay-log.info *-bin.* *-relay.*
  7. origin: sync /var/lib/mysql/ to slave:/var/lib/mysql/
  8. destination: start mysqld and check /var/log/mysqld.log for errors
  9. origin: start mysqld and check /var/log/mysqld.log for errors
  10. origin: run CHANGE MASTER TO from /usr/share/doc/irp/changemasterto.template
  11. destination: run CHANGE MASTER TO from /usr/share/doc/irp/changemasterto.template
  12. destination: show slave status \G
  13. origin: show slave status \G
  14. origin: start IRP (bgpd should be already running), httpd
  15. destination: start IRP, httpd
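As an illustration of steps 1 and 2 on systemd-based CentOS nodes, reusing the service names from the earlier listings (adjust for Ubuntu; ORIGIN and DESTINATION are placeholders for the actual hostnames):
root@DESTINATION ~# systemctl start irp-shutdown.target
root@DESTINATION ~# systemctl stop httpd24-httpd mariadb
root@ORIGIN ~# rsync -av /etc/noction/db.global.conf /etc/noction/db.frontend.conf DESTINATION:/etc/noction/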

2.7 Frontend Configuration

The system frontend can be accessed over HTTP or HTTPS using the main IP or the FQDN of the IRP server. If the IRP server is firewalled, TCP ports 80 and 443 must be open for each IP/network that should have access to the frontend.
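On a CentOS server running firewalld, for example, opening these ports could look like the sketch below; adapt it to the firewall actually in use and restrict the allowed sources as needed:
# firewall-cmd --permanent --add-service=http --add-service=https
# firewall-cmd --reload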
The default username and password are “admin”/”admin”. It is strongly recommended to change the password as soon as possible. Feel free to use the frontend once it becomes accessible. Alternatively, local account passwords can be set with the irp-passwd.sh utility, run under the root account directly on the IRP server, for example:
# irp-passwd.sh <username> <new-password>
The default “admin” user is created only for the initial IRP launch. Separate user accounts must be created for each user. A separate administrative account must be created for the Noction support team as well.
Do not use the default “admin” user account for regular system access.
For the frontend to function correctly, server’s time zone should be configured in /etc/php.ini. Example: date.timezone=America/Chicago.

2.8 Administrative Components

There are two additional components in the IRP system: dbcron and irpstatd. For detailed configuration parameters description, please see the Administrative settings↓ section.
Dbcron handles periodic database maintenance tasks and statistics processing.
Irpstatd gathers interface statistics from the edge routers. For this component to function properly, the peer.X.snmp.ip↓/peer.X.snmp.ipv6↓, peer.X.snmp.interface↓ and peer.X.snmp.community↓ parameters need to be set for each provider.
For more details, see the Providers configuration↑ and Providers settings↓ sections.
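For illustration only, a hedged irp.conf fragment for a first provider might look like the lines below; the parameter names follow the references above, the values are placeholders, and the exact assignment syntax should mirror the existing entries in your irp.conf:
peer.1.snmp.ip = 192.0.2.1          # router SNMP address (placeholder)
peer.1.snmp.interface = ge-0/0/0    # provider-facing interface (placeholder)
peer.1.snmp.community = <community> # SNMP community string (placeholder)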

2.9 BMP configuration

IRP’s BMP monitoring station passively listens on a designated TCP port for monitoring routers to establish BMP sessions. Further BMP setup is performed on the monitoring router, where:
  • the BMP monitoring station’s IP address and port are set, and
  • filtering rules regarding what BMP data is sent to IRP are applied, if needed.
By default BMP implementations do not filter routing data sent to a monitoring station.
BMP-related features are configured individually; for example, BMP can be designated as the source for current route reconstruction.
It is recommended that BMP is set as the primary source for current route reconstruction only when BMP data is available for all providers.
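As an illustration of the router-side part, a BMP monitoring station could be referenced on an IOS-XE router roughly as sketched below; the exact syntax varies by vendor and software release, and the addresses, port and AS numbers are placeholders, so consult the router vendor’s documentation:
router bgp 65000
 bmp server 1
  address 192.0.2.10 port-number 5000
  activate
 neighbor 198.51.100.1 bmp-activate all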

2.10 IRP Initial Configuration

The platform initial configuration can be performed by using the Initial Setup wizard (Initial Setup↓) which can be accessed via the main platform IP address over HTTP or HTTPS protocol.
Example: https://10.11.12.13

2.11 IRP software management

Software repository

IRP packages (current, new and old versions) are available in Noction’s repositories hosted at repo.noction.com. Coordinate getting access to the repository with Noction’s representatives.

Software installation and upgrade

Once the IRP packages repository is configured, standard package management tools such as yum (CentOS) or apt (Ubuntu Server) are used to install, upgrade or downgrade IRP.
Installation:
CentOS:
yum install irplite
Ubuntu Server:
apt-get update
apt-get install irplite
Optional packages irplite-documentation and irplite-api-samples should be installed separately.
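These can be installed with the same package managers, for example:
yum install irplite-documentation irplite-api-samples         # CentOS
apt-get install irplite-documentation irplite-api-samples     # Ubuntu Server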
Upgrade to latest version:
CentOS:
yum upgrade "irplite*"
Ubuntu:
apt-get update
apt-get upgrade
Downgrade to a specific version:
CentOS:
yum downgrade "irplite*1.0*"
Ubuntu Server:
IRP Lite added support for Ubuntu Server starting with version 2.0. IRP Lite versions prior to 2.0 will not be offered for Ubuntu Server.
Check for available IRP versions:
apt-cache policy irplite
Edit file /etc/apt/preferences.d/irplite (refer to apt_preferences man page for package pinning) to pin desired version:
Package: irplite irplite-*
Pin: version 2.0.0-RELEASE~build11806~trusty
Pin-Priority: 1001 
apt-get update
apt-get upgrade irplite

2.12 Starting, stopping and status of IRP components

In order to determine the status of IRP components, or to start, stop and restart them, standard Linux utilities are used.
Critical errors preventing the startup of a particular component are logged to the console. Details can be obtained from the logs stored in the /var/log/irp/ directory.
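For example, a quick look at recent log messages (assuming the usual .log extension for the files under /var/log/irp/):
# ls /var/log/irp/
# tail -n 100 /var/log/irp/*.log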

Managing software components in OS with systemd

Starting single component:
systemctl start explorer
Multiple components, separated by spaces, can be listed
Stopping single component:
systemctl stop explorer
Multiple components, separated by spaces, can be listed
Starting all components:
systemctl start irp.target
This also starts all prerequisites
Stopping all components:
systemctl start irp-shutdown.target
Stopping all components except bgpd:
systemctl start irp-shutdown-except-bgpd.target
Restarting all components:
systemctl start irp-shutdown.target
systemctl start irp.target
Obtaining overall status of all components:
systemctl list-dependencies irp.target
Individual statuses can be checked by executing systemctl status <component_name>

Managing software components in OS with RC-files

Starting single component:
service explorer start
Stopping single component:
service explorer stop
Starting all components:
service irp start
Stopping all components:
service irp stop
Stopping all components except bgpd:
service irp stop nobgpd
Restarting all components:
service irp restart
Obtaining status of all components:
service irp status