


Messages - Derelict

1
General Questions / Re: STP and network
« on: Yesterday at 09:53:21 pm »
The inside addresses just need to be routed to the CARP VIP and not to one of the interface addresses.
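For example (a sketch with hypothetical addresses - the LAN CARP VIP is 192.168.1.1 and the two nodes are .2 and .3), a downstream router carrying the inside networks would point at the VIP like this:

# downstream router routing an inside network at the shared CARP VIP,
# never at either node's own interface address
# (FreeBSD-style syntax shown; adjust for whatever the router actually is)
route add -net 10.50.0.0/16 192.168.1.1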

3
NAT / Re: Help a newbie with routing
« on: Yesterday at 06:24:36 pm »
You direct inbound connections to internal hosts with port forwards. Firewall > NAT, Port forwards

You direct outbound connections to be source-translated with outbound NAT. Firewall > NAT, Outbound

You can use 1:1 NAT to establish a 1:1 mapping of outside-to-inside addresses for connections in both directions. Firewall > NAT, 1:1

You cannot just make connections to one address go to three different hosts inbound. Not without narrowing it down to specific, unique ports.

You can make all connections from 192.168.1.100, 192.168.1.101, and 192.168.1.102 to the outside "Masquerade" as 100.100.100.2 using specific outbound NAT rules.
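Under the hood those three pages turn into pf translation rules along these lines (just a sketch using the addresses above; em0 and port 443 are made up, and pfSense writes these for you from the GUI):

# Port forward: inbound 100.100.100.2:443 goes to an internal host
rdr on em0 proto tcp from any to 100.100.100.2 port 443 -> 192.168.1.100 port 443

# Outbound NAT: the three internal hosts masquerade as 100.100.100.2
nat on em0 from 192.168.1.100 to any -> 100.100.100.2
nat on em0 from 192.168.1.101 to any -> 100.100.100.2
nat on em0 from 192.168.1.102 to any -> 100.100.100.2

# 1:1 NAT: a bidirectional mapping of one outside address to one inside host
binat on em0 from 192.168.1.100 to any -> 100.100.100.2

Each block illustrates one page; you would not normally combine the binat with the per-host rules for the same address.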

Probably going to need more details to help more than that.

4
General Questions / MOVED: Help a newbie with routing
« on: Yesterday at 06:18:17 pm »

6
If the default topology (subnet) is left selected in the server configuration, every site gets one address out of the pool. You have to manually select a topology of net30 there to get the old behavior of a /30 per client.

The size of the subnet mask really doesn't matter. It is the number of hosts on the broadcast domain that actually matters. Using unnecessarily large subnets:

1. Increases the likelihood of "colliding" with another private site for VPN purposes, forcing someone (or both) to renumber or perform NAT - both undesirable.
2. Increases the likelihood of configuration errors because people and some gear tend to assume /24.

With "just" a half-dozen sites you can consider creating a different Site-to-Site configuration for each one or use a single site with iroutes or a combination of both.

In the latter case you have more flexibility because you can "push" settings to the clients:

Quote
One is 10.0.0.0/16, one is 10.20.0.0/21, and one is 10.6.0.0/24.

Server configuration:
Tunnel Network: Something unused anywhere - probably a /24
Remote Networks: 10.0.0.0/16,10.20.0.0/21,10.6.0.0/24
Local Networks: [Insert Local Subnet/CIDR],10.0.0.0/16,10.20.0.0/21,10.6.0.0/24
Inter-Client Communication: Enabled.
Topology: subnet

Client-specific Overrides:
Site 1 Remote Network: 10.0.0.0/16
Site 2 Remote Network: 10.20.0.0/21
Site 3 Remote Network: 10.6.0.0/24

You might see some (almost always harmless) errors logged in that configuration when OpenVPN at a remote site tries to add the route for its own network, because the routes are pushed globally to everyone. One way to mitigate that would be to take manual control of the client-specific overrides (using the advanced box) like this:

Site 1:
iroute 10.0.0.0 255.255.0.0;
push-reset;
push "route 10.20.0.0 255.255.248.0";
push "route 10.6.0.0 255.255.255.0";

Site 2:
iroute 10.20.0.0 255.255.248.0;
push-reset;
push "route 10.0.0.0 255.255.0.0";
push "route 10.6.0.0 255.255.255.0";

Etc.

You could also do something like this:

Server configuration:
Tunnel Network: Something unused anywhere - probably a /24
Remote Networks: [none]
Local Networks: [Insert Local Subnet/CIDR]
Inter-Client Communication: Enabled.
Topology: subnet
Custom options:
route 10.0.0.0 255.255.0.0;
route 10.20.0.0 255.255.248.0;
route 10.6.0.0 255.255.255.0;

Client-specific Overrides:
Site 1 Remote Network: 10.0.0.0/16
Site 1 Local Network/s: 10.20.0.0/21,10.6.0.0/24

Site 2 Remote Network: 10.20.0.0/21
Site 2 Local Network/s: 10.0.0.0/16,10.6.0.0/24

Site 3 Remote Network: 10.6.0.0/24
Site 3 Local Network/s: 10.0.0.0/16,10.20.0.0/21

A caveat here: by taking manual control of the routes on the server, pfSense will not know what the remote networks are, so you lose some of the things that are automated there, such as the source rules Automatic Outbound NAT creates for the remote networks. Those will have to be added manually as well.

Another consideration is firewalling. In this single-server, point-to-multipoint configuration you are relying on OpenVPN to pass all traffic between the sites. There is no way to firewall it other than the OpenVPN rules at the remote endpoints controlling what traffic is allowed (which might very well be sufficient). If you create a tunnel for each site you have control over what traffic can traverse between endpoints, and even greater control if you assign interfaces for them and use per-instance rules. Each instance can also be scheduled on its own CPU core as needed, whereas point-to-multipoint connections will all use the same core on the server side - at least I am pretty sure that is still true of OpenVPN 2.4 on pfSense.

You could also do a tunnel to each site with a /30 and use OSPF. I don't know if that is worth all of the configuration; it depends on how dynamic the routing table is, I suppose. Probably a call you'll have to make.
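If you did go that way, the routing side is light - the FRR or Quagga OSPF package on each node just needs to speak OSPF across the /30 and advertise the local subnet. A sketch for one site, with a hypothetical 10.250.1.0/30 tunnel and the 10.6.0.0/24 site from the example above:

router ospf
 ospf router-id 10.250.1.1
 network 10.250.1.0/30 area 0
 network 10.6.0.0/24 area 0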

ETA: Moved to OpenVPN

7
Tunnel network is a /21? Why? Expecting thousands of clients on one server?

You cannot arbitrarily route subnets across a network like that. You cannot run OSPF between endpoints on a network like that. If you want to use OpenVPN and OSPF you have to configure a different PtP tunnel process for every endpoint. (Shared-key mode or SSL/TLS with a tunnel network of /30). In PtP mode OpenVPN will accept traffic for any destination routed to it and shove it across the tunnel without looking for an iroute (explained later).
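Each of those PtP instances is tiny. The underlying OpenVPN config is roughly along these lines (a sketch - the key path, the /30 tunnel addresses, and the branch subnet are all hypothetical):

dev tun
proto udp
secret /var/etc/openvpn/site1.secret      # shared key for this one peer
ifconfig 10.250.1.1 10.250.1.2            # local / remote tunnel addresses within the /30
route 10.10.1.0 255.255.255.0             # kernel route for that branch's LAN into this tunnel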

When you configure an OpenVPN server as SSL/TLS with a tunnel network larger than /30 it configures itself in "server" mode.

The networks work like this:

Server Side
Tunnel Network = The tunnel address of the client comes from this network
Remote Network = Server Kernel Route into OpenVPN Server Process
Local Network = Client kernel route pushed to client and installed as a kernel route into OpenVPN on that side.

Server Client-Specific Override
Remote Network = Internal OpenVPN route (iroute) telling the process into which tunnel to send traffic that is not addressed to the client's tunnel address
Local Network = Client kernel route pushed to client and installed as a kernel route into OpenVPN on that side in addition to the Local Network(s) configured in the server (if any)

So, in the case of the example of these remote networks if you want everyone to be able to talk to everyone:
Quote
Corporate HQ uses 10.10.0.0/24
Branch 1 uses 10.10.1.0/24
Branch 2 uses 10.10.2.0/24
Branch 3 uses 10.10.3.0/24

Server Configuration
Local Network(s): 10.10.0.0/22 (pushes this route to the clients, which install it into their routing tables; each site's own 10.10.[123].0/24 is longer, so it remains the best route there)
Remote Network(s): 10.10.0.0/22 (installs this route in the server's routing table; the local 10.10.0.0/24 is longer, so it remains the best route locally)

Client Specific Overrides
Branch 1: Remote Network(s): 10.10.1.0/24 (Installs iroute in OpenVPN telling traffic to go out this tunnel)
Branch 2: Remote Network(s): 10.10.2.0/24 (ditto)
Branch 3: Remote Network(s): 10.10.3.0/24 (ditto)
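In raw OpenVPN terms, that server configuration works out to roughly the following (a sketch - the 10.99.0.0/24 tunnel network, the csc path, and the file names are made up, and the certificate/key directives are omitted):

# server instance
dev tun
topology subnet
server 10.99.0.0 255.255.255.0            # Tunnel Network - client tunnel addresses come from here
client-to-client
client-config-dir /var/etc/openvpn/csc    # where the per-client overrides live
push "route 10.10.0.0 255.255.252.0"      # Local Network(s) - pushed to and installed by the clients
route 10.10.0.0 255.255.252.0             # Remote Network(s) - kernel route into the server process

# csc/branch1 - Client Specific Override for Branch 1
iroute 10.10.1.0 255.255.255.0

# csc/branch2
iroute 10.10.2.0 255.255.255.0

# csc/branch3
iroute 10.10.3.0 255.255.255.0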

Anyway that's what I would try...

8
What type of OpenVPN server is this (SSL/TLS, Remote Access, Point-to-multipoint? Shared key?) What is the tunnel network mask?

9
General Questions / Re: STP and network
« on: Yesterday at 01:23:25 pm »
What switches do you have?

10
General Questions / Re: STP and network
« on: Yesterday at 01:10:15 pm »
So stack them and do that. Your switches have to be truly stackable (or support something like multi-chassis trunking), not some fake manage-all-as-one-switch marketing term stack.

Brocade ICX-6430:

lag Management dynamic id 81
 ports ethernet 1/1/14 ethernet 2/1/14                           
 primary-port 1/1/14
 deploy
 port-name NAS_LAGG0 ethernet 1/1/14
 port-name NAS_LAGG1 ethernet 2/1/14
!



Switch>sh lag id 81
Total number of LAGs:          2
Total number of deployed LAGs: 2
Total number of trunks created:2 (27 available)
LACP System Priority / ID:     1 / cc4e.24b3.68b8
LACP Long timeout:             90, default: 90
LACP Short timeout:            3, default: 3

=== LAG "Management" ID 81 (dynamic Deployed) ===
LAG Configuration:
   Ports:         e 1/1/14 e 2/1/14
   Port Count:    2
   Primary Port:  1/1/14
   Trunk Type:    hash-based
   LACP Key:      20081
Deployment: HW Trunk ID 1
Port    Link    State   Dupl Speed Trunk Tag Pvid Pri MAC             Name
1/1/14  Up      Forward Full 1G    81    No  81   0   cc4e.24b3.68c5  NAS_LAGG0 
2/1/14  Up      Forward Full 1G    81    No  81   0   cc4e.24b3.68c5  NAS_LAGG1 

Port   [Sys P] [Port P] [ Key ] [Act][Tio][Agg][Syn][Col][Dis][Def][Exp][Ope]
1/1/14       1        1   20081   Yes   L   Agg  Syn  Col  Dis  No   No   Ope
2/1/14       1        1   20081   Yes   L   Agg  Syn  Col  Dis  No   No   Ope

                                                                 
 Partner Info and PDU Statistics
Port       Partner         Partner     LACP      LACP     
          System MAC         Key     Rx Count  Tx Count 
1/1/14    0cc4.7a47.7be2      203  2575780   2602883
2/1/14    0cc4.7a47.7be2      203  2575772   2602882



Switch>sh stack
T=905d23h3m21.8: alone: standalone, D: dynamic cfg, S: static
ID   Type          Role    Mac Address    Pri State   Comment                   
1  S ICX6430-24    active  cc4e.24b3.68b8 128 local   Ready
2  S ICX6430-24    standby cc4e.24b3.6978   0 remote  Ready

    active       standby                                                       
     +---+        +---+                                                       
 =2/3| 1 |2/1==2/3| 2 |2/1=                                                   
 |   +---+        +---+   |                                                   
 |                        |                                                   
 |------------------------|                                                   
Standby u2 - protocols ready, can failover
Current stack management MAC is cc4e.24b3.68b8
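For completeness, the host end of a LAG like that (on FreeBSD/pfSense it is a lagg in LACP mode, with one member port cabled to each stack unit) is just something like this - interface names are hypothetical, and pfSense exposes the same thing in the GUI under Interfaces / LAGGs:

ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport igb0 laggport igb1 up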

11
CARP/VIPs / Re: Testing High Availability
« on: November 23, 2017, 06:28:18 am »
Yes. Me.

I have tried to duplicate several of these reports and the only case I can find where there might be a problem is described here:

https://redmine.pfsense.org/issues/8100


12
General Questions / Re: STP and network
« on: November 23, 2017, 06:12:09 am »
The LAN interface is different. You have to make sure that all of your LAN clients are given the LAN CARP address as their default gateway, DNS server (if applicable), etc.

Bottom line is you can't expect HA to just work. It does work fine, but it requires additional configuration for things that are otherwise automatic, such as outbound NAT, DHCP server attributes, etc.

13
Captive Portal / Re: Captive Portal - What is Allowed?
« on: November 23, 2017, 05:41:51 am »
Enabling captive portal adds rules, but they are not in pf. They are in ipfw.

https://doc.pfsense.org/index.php/Captive_Portal_Troubleshooting
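If you want to look at them from a shell, you have to ask ipfw rather than pf. A sketch (an assumption about the exact layout - pfSense keeps the portal rules in per-zone ipfw rule sets, so the plain listing may not show everything):

pfctl -sr | grep -i portal    # the enforcement rules are not in the pf ruleset
ipfw list                     # the captive portal rules live in ipfw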

14
General Questions / Re: STP and network
« on: November 23, 2017, 05:38:56 am »
When you run HA you have to make sure outbound NAT states are created on the CARP VIP, not the interface address. Otherwise you will experience dropped connections on failover, because the WAN address on the primary node is different from the WAN address on the secondary node.

https://doc.pfsense.org/index.php/Configuring_pfSense_Hardware_Redundancy_(CARP)
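The resulting outbound NAT rule ends up translating to the shared VIP, something like this sketch (addresses hypothetical - 203.0.113.5 is the WAN CARP VIP, not either node's own WAN address):

nat on em0 from 192.168.1.0/24 to any -> 203.0.113.5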

15
Virtualization installations and techniques / Re: killed: out of swap space
« on: November 23, 2017, 04:19:55 am »
Your problem isn't disk, it is RAM. If you have enough RAM you don't swap.

The culprit is probably pfBlockerNG.
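A couple of standard FreeBSD commands from the shell will confirm where the memory is going:

swapinfo -h      # how much swap is actually in use
top -b -o res    # processes sorted by resident memory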
