Hi Fabrice,
In pf.conf:
...
[general]
#
# general.domain
#
# Domain name of PacketFence system.
domain=earthcolor.com
#
# general.dhcpservers
#
# Comma-delimited list of DHCP servers. Passthroughs are created to allow DHCP transactions from even "trapped" nodes.
dhcpservers=127.0.0.1,172.16.1.12,192.168.35.12
[alerting]
#
# alerting.emailaddr
#
# Email address to which notifications of rogue DHCP servers, violations with an action of "email", and any other
# PacketFence-related messages are sent.
emailaddr=***@mydomain.com
[database]
#
# database.pass
#
# Password for the mysql database used by PacketFence. Changing this parameter after the initial configuration will *not* change it in the database itself, only in the configuration.
pass=databasepasswd
host=127.0.0.1
[graphite]
db_host=127.0.0.1
[active_active]
galera_replication_username=pfcluster
galera_replication_password=clusterpasswd
[interface eth0.2]
enforcement=vlan
ip=192.168.2.1
type=internal
mask=255.255.255.0
[interface eth0]
ip=172.16.1.50
type=management
mask=255.255.248.0
[interface eth0.3]
enforcement=vlan
ip=192.168.3.1
type=internal
mask=255.255.255.0
[interface eth0.6]
enforcement=inlinel2
ip=192.168.6.1
type=internal
mask=255.255.255.0
...
Thanks
Darryl
From: Durand fabrice [mailto:***@inverse.ca]
Sent: Friday, May 19, 2017 10:22 PM
To: packetfence-***@lists.sourceforge.net
Subject: Re: [PacketFence-users] Creating PF 7 cluster radiusd errors
Hello Darryl,
can you post your pf.conf (without sensitive info)?
Regards
Fabrice
On 2017-05-18 14:40, Sokolowski, Darryl wrote:
Hi Fabrice,
Thanks.
haproxy conf:
# This file is generated from a template at /usr/local/pf/conf/haproxy.conf
# Any changes made to this file will be lost on restart
global
external-check
nbproc 2
cpu-map 1 1
cpu-map 2 2
user haproxy
group haproxy
daemon
pidfile /usr/local/pf/var/run/haproxy.pid
log 172.16.1.16 local0
stats socket /tmp/proxystats level admin process 1
stats bind-process 1
maxconn 4000
#Followup of https://github.com/inverse-inc/packetfence/pull/893
#haproxy 1.6.11 | intermediate profile | OpenSSL 1.0.1e | SRC: https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy-1.6.11&openssl=1.0.1e&hsts=yes&profile=intermediate
#Oldest compatible clients: Firefox 1, Chrome 1, IE 7, Opera 5, Safari 1, Windows XP IE8, Android 2.3, Java 7
tune.ssl.default-dh-param 2048
ssl-default-bind-ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
ssl-default-bind-options no-sslv3 no-tls-tickets
ssl-default-server-ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
ssl-default-server-options no-sslv3 no-tls-tickets
#OLD SSL CONFIGURATION. IF RC4 is required or if you must support clients older than the preceding list, comment out the whole block between this comment and the preceding one and uncomment the following line
#ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
lua-load /usr/local/pf/var/conf/passthrough.lua
listen stats
bind 172.16.1.16:1025
mode http
timeout connect 10s
timeout client 1m
timeout server 1m
stats enable
stats uri /stats
stats realm HAProxy\ Statistics
stats auth admin:NTadmin0
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
errorfile 400 /usr/share/haproxy/400.http
errorfile 403 /usr/share/haproxy/403.http
errorfile 408 /usr/share/haproxy/408.http
errorfile 500 /usr/share/haproxy/500.http
errorfile 502 /usr/share/haproxy/502.http
errorfile 503 /usr/share/haproxy/503.http
errorfile 504 /usr/share/haproxy/504.http
frontend main
bind localhost:3306
mode tcp
option tcplog
default_backend mysql
bind-process 1
backend mysql
mode tcp
option tcplog
# disabled for now since it proved to be useless - ***@inverse.ca
# TODO: remove it after cluster testing
#option external-check
#external-check command /usr/local/pf/var/run/db-check
#external-check path "/usr/bin:/bin"
timeout connect 3s
server MySQL0 172.16.1.19:3306 check
server MySQL1 172.16.1.49:3306 check backup
server MySQL2 172.16.1.50:3306 check backup
backend proxy
option httpclose
option http_proxy
option forwardfor
# Need to have a proxy listening on localhost port 8888
acl paramsquery query -m found
http-request set-uri http://127.0.0.1:8888%[path]?%[query] if paramsquery
http-request set-uri http://127.0.0.1:8888%[path] unless paramsquery
backend static
option httpclose
option http_proxy
option forwardfor
http-request set-uri http://127.0.0.1:8889%[path]?%[query]
frontend portal-http-192.168.2.5
bind 192.168.2.5:80
stick-table type ip size 1m expire 10s store gpc0,http_req_rate(10s)
tcp-request connection track-sc1 src
http-request lua.change_host
acl host_exist var(req.host) -m found
http-request set-header Host %[var(req.host)] if host_exist
http-request lua.select
acl action var(req.action) -m found
acl unflag_abuser src_clr_gpc0 --
http-request allow if action unflag_abuser
http-request deny if { src_get_gpc0 gt 0 }
reqadd X-Forwarded-Proto:\ http
use_backend %[var(req.action)]
default_backend 192.168.2.5-backend
bind-process 2
frontend portal-https-192.168.2.5
bind 192.168.2.5:443 ssl no-sslv3 crt /usr/local/pf/conf/ssl/server.pem
stick-table type ip size 1m expire 10s store gpc0,http_req_rate(10s)
tcp-request connection track-sc1 src
http-request lua.change_host
acl host_exist var(req.host) -m found
http-request set-header Host %[var(req.host)] if host_exist
http-request lua.select
acl action var(req.action) -m found
acl unflag_abuser src_clr_gpc0 --
http-request allow if action unflag_abuser
http-request deny if { src_get_gpc0 gt 0 }
reqadd X-Forwarded-Proto:\ https
use_backend %[var(req.action)]
default_backend 192.168.2.5-backend
bind-process 2
backend 192.168.2.5-backend
balance source
option httpclose
option forwardfor
acl status_501 status 501
acl abuse src_http_req_rate(portal-http-192.168.2.5) ge 20
acl flag_abuser src_inc_gpc0(portal-http-192.168.2.5) --
acl abuse src_http_req_rate(portal-https-192.168.2.5) ge 20
acl flag_abuser src_inc_gpc0(portal-https-192.168.2.5) --
http-response deny if abuse status_501 flag_abuser
server 127.0.0.1 127.0.0.1:80 check
frontend portal-http-192.168.3.5
bind 192.168.3.5:80
stick-table type ip size 1m expire 10s store gpc0,http_req_rate(10s)
tcp-request connection track-sc1 src
http-request lua.change_host
acl host_exist var(req.host) -m found
http-request set-header Host %[var(req.host)] if host_exist
http-request lua.select
acl action var(req.action) -m found
acl unflag_abuser src_clr_gpc0 --
http-request allow if action unflag_abuser
http-request deny if { src_get_gpc0 gt 0 }
reqadd X-Forwarded-Proto:\ http
use_backend %[var(req.action)]
default_backend 192.168.3.5-backend
bind-process 2
frontend portal-https-192.168.3.5
bind 192.168.3.5:443 ssl no-sslv3 crt /usr/local/pf/conf/ssl/server.pem
stick-table type ip size 1m expire 10s store gpc0,http_req_rate(10s)
tcp-request connection track-sc1 src
http-request lua.change_host
acl host_exist var(req.host) -m found
http-request set-header Host %[var(req.host)] if host_exist
http-request lua.select
acl action var(req.action) -m found
acl unflag_abuser src_clr_gpc0 --
http-request allow if action unflag_abuser
http-request deny if { src_get_gpc0 gt 0 }
reqadd X-Forwarded-Proto:\ https
use_backend %[var(req.action)]
default_backend 192.168.3.5-backend
bind-process 2
backend 192.168.3.5-backend
balance source
option httpclose
option forwardfor
acl status_501 status 501
acl abuse src_http_req_rate(portal-http-192.168.3.5) ge 20
acl flag_abuser src_inc_gpc0(portal-http-192.168.3.5) --
acl abuse src_http_req_rate(portal-https-192.168.3.5) ge 20
acl flag_abuser src_inc_gpc0(portal-https-192.168.3.5) --
http-response deny if abuse status_501 flag_abuser
server 127.0.0.1 127.0.0.1:80 check
frontend portal-http-192.168.6.5
bind 192.168.6.5:80
stick-table type ip size 1m expire 10s store gpc0,http_req_rate(10s)
tcp-request connection track-sc1 src
http-request lua.change_host
acl host_exist var(req.host) -m found
http-request set-header Host %[var(req.host)] if host_exist
http-request lua.select
acl action var(req.action) -m found
acl unflag_abuser src_clr_gpc0 --
http-request allow if action unflag_abuser
http-request deny if { src_get_gpc0 gt 0 }
reqadd X-Forwarded-Proto:\ http
use_backend %[var(req.action)]
default_backend 192.168.6.5-backend
bind-process 2
frontend portal-https-192.168.6.5
bind 192.168.6.5:443 ssl no-sslv3 crt /usr/local/pf/conf/ssl/server.pem
stick-table type ip size 1m expire 10s store gpc0,http_req_rate(10s)
tcp-request connection track-sc1 src
http-request lua.change_host
acl host_exist var(req.host) -m found
http-request set-header Host %[var(req.host)] if host_exist
http-request lua.select
acl action var(req.action) -m found
acl unflag_abuser src_clr_gpc0 --
http-request allow if action unflag_abuser
http-request deny if { src_get_gpc0 gt 0 }
reqadd X-Forwarded-Proto:\ https
use_backend %[var(req.action)]
default_backend 192.168.6.5-backend
bind-process 2
backend 192.168.6.5-backend
balance source
option httpclose
option forwardfor
acl status_501 status 501
acl abuse src_http_req_rate(portal-http-192.168.6.5) ge 20
acl flag_abuser src_inc_gpc0(portal-http-192.168.6.5) --
acl abuse src_http_req_rate(portal-https-192.168.6.5) ge 20
acl flag_abuser src_inc_gpc0(portal-https-192.168.6.5) --
http-response deny if abuse status_501 flag_abuser
server 127.0.0.1 127.0.0.1:80 check
haproxyctl "show health":
# pxname svname status weight
stats FRONTEND OPEN
stats BACKEND UP 0
main FRONTEND OPEN
mysql MySQL0 UP 1
mysql MySQL1 UP 1
mysql MySQL2 DOWN 1
mysql BACKEND UP 1
proxy BACKEND UP 0
static BACKEND UP 0
192.168.2.5-backend 127.0.0.1 DOWN 1
192.168.2.5-backend BACKEND DOWN 0
192.168.3.5-backend 127.0.0.1 DOWN 1
192.168.3.5-backend BACKEND DOWN 0
192.168.6.5-backend 127.0.0.1 DOWN 1
192.168.6.5-backend BACKEND DOWN 0
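For reference, the same health view can also be read directly from the admin stats socket declared in the global section above ("stats socket /tmp/proxystats"); a minimal sketch, assuming socat is installed:

# field 18 of haproxy's CSV stats output is the status column
echo "show stat" | socat unix-connect:/tmp/proxystats stdio | cut -d, -f1,2,18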
From: Fabrice Durand [mailto:***@inverse.ca]
Sent: Thursday, May 18, 2017 8:14 AM
To: packetfence-***@lists.sourceforge.net
Subject: Re: [PacketFence-users] Creating PF 7 cluster radiusd errors
Hello Darryl,
it looks to be related to haproxy.
Can you paste the haproxy.conf, and also run the following:
gem install haproxyctl
export HAPROXY_CONFIG=/usr/local/pf/var/conf/haproxy.conf
haproxyctl "show health"
and paste the result.
Regards
Fabrice
On 2017-05-17 17:54, Sokolowski, Darryl wrote:
Hi,
Very confused at this point.
I tried using the new documentation, with the same result: no connection.
If I try to connect to mysql from command line to localhost, it works:
[***@PacketFence ~]# mysql -h localhost -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 9
Server version: 10.1.21-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
If I try to connect to 127.0.0.1, it errors:
[***@PacketFence ~]# mysql -h 127.0.0.1 -p
Enter password:
ERROR 1130 (HY000): Host '172.16.1.50' is not allowed to connect to this MariaDB server
However, I'm on the console of the packetfence server in both cases.
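ERROR 1130 comes from the MariaDB grant tables: there is no user entry matching the host the connection arrives from (172.16.1.50 over TCP here). The allowed user/host pairs can be listed over the socket connection that does work; a minimal check, assuming root credentials:

# list which user@host combinations the server will accept
mysql -h localhost -p -e "SELECT user, host FROM mysql.user ORDER BY user, host;"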
The other thing I don't understand is why it is listening on the IP address of one of the other servers defined in the cluster, rather than the server I'm starting the database on, when I haven't even gotten to configuring those other cluster members yet. The server I'm on is 172.16.1.50.
[***@PacketFence ~]# netstat -nlp|grep 3306
tcp 0 0 172.16.1.19:3306 0.0.0.0:* LISTEN 15484/mysqld
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 2596/haproxy
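That 172.16.1.19 listener suggests mysqld picked up a bind-address meant for another node from a generated config. One way to hunt for it (the /etc paths are standard MariaDB locations on CentOS/RHEL; searching the PacketFence var/conf tree is an assumption about where generated files land):

# standard MariaDB config locations
grep -rn bind-address /etc/my.cnf /etc/my.cnf.d/ 2>/dev/null
# PacketFence-generated configs (location assumed)
grep -rn bind-address /usr/local/pf/var/conf/ 2>/dev/null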
The haproxy stats, of course, say everything is disconnected.
I can't find a haproxy log.
In packetfence.log I see over and over:
May 17 21:13:53 PacketFence packetfence: INFO pf-mariadb(2590): There is an alive quorum but no db available on any server (main::startup_clean_shutdown)
May 17 21:13:53 PacketFence packetfence: INFO pf-mariadb(2590): This node is safe to bootstrap from. Starting in bootstrap mode. (main::startup_clean_shutdown)
May 17 21:14:46 PacketFence pfqueue: pfqueue(4095) ERROR: [mac:00:50:56:95:fe:b1] Can't bind : IO::Socket::INET: connect: Connection refused
(pf::ip4log::_get_lease_from_omapi)
May 17 21:14:46 PacketFence pfqueue: pfqueue(4095) WARN: [mac:00:50:56:95:fe:b1] Unable to perform a Fingerbank lookup for device with MAC address '00:50:56:95:fe:b1' (pf::fingerbank::__ANON__)
May 17 21:15:01 PacketFence pfqueue: pfqueue(4098) FATAL: [mac:unknown] unable to connect to database: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2 "No such file or directory") at /usr/share/perl5/vendor_perl/CHI/Driver.pm line 546.
(pf::db::db_connect)
But I thought it needs to connect over TCP, not a socket. I also thought I saw somewhere that if the cluster is defined, mysql binds using TCP on an IP instead of a socket, but I can't find that now.
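For what it's worth, the mysql CLI makes the distinction visible: -h localhost goes through the Unix socket, while -h 127.0.0.1 (or --protocol=TCP) forces TCP, so a socket error like the one above suggests that code path resolved the host as 'localhost'. A quick way to test both paths:

# Unix socket path (what the failing connection appears to use)
mysql -h localhost -p -e "SELECT 1;"
# TCP path through the haproxy listener on 127.0.0.1:3306
mysql -h 127.0.0.1 -P 3306 --protocol=TCP -p -e "SELECT 1;"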
Any help? My eyes are crossing at this point.
Thanks
Darryl
From: Sokolowski, Darryl [mailto:***@earthcolor.com]
Sent: Thursday, May 11, 2017 11:31 AM
To: packetfence-***@lists.sourceforge.net
Subject: Re: [PacketFence-users] Creating PF 7 cluster radiusd errors
Hi Thierry,
No, I'm using the PDF from here: https://packetfence.org/downloads/PacketFence/doc/PacketFence_Clustering_Guide-7.0.0.pdf.
Glancing at the doc in the link you sent, I already see differences.
I don't remember seeing anything alarming in the logs, but I will re-try with this new documentation, check haproxy again and report back.
Thanks
Darryl
From: Thierry Laurion [mailto:***@inverse.ca]
Sent: Thursday, May 11, 2017 10:06 AM
To: packetfence-***@lists.sourceforge.net
Subject: Re: [PacketFence-users] Creating PF 7 cluster radiusd errors
Hi Darryl,
Are you using this version of the guide? https://github.com/inverse-inc/packetfence/blob/devel/docs/PacketFence_Clustering_Guide.asciidoc
There is a problem with haproxy/mariadb configuration.
You should have a look at haproxy logs/status.
Thierry
On 05/09/2017 01:22 PM, Sokolowski, Darryl wrote:
Anyone have any help for clustering?
I've gone through the Clustering Guide over and over, and can't see where I'm going wrong.
I've tried 4 or 5 times now, starting with my own server builds, and with the PF-ZEN appliances as a starting point, and always have errors when it comes to starting the pf services.
When I get to the end of step 3 in the guide and restart the pf services (/usr/local/pf/bin/pfcmd service pf restart), I get errors writing to the L2 cache, and it hangs after restarting haproxy.
...
[***@PacketFence ~]# /usr/local/pf/bin/pfcmd service pf restart
service|command
carbon-cache|already stopped
carbon-relay|already stopped
collectd|already stopped
dhcpd|already stopped
haproxy|stop
httpd.aaa|already stopped
httpd.admin|stop
httpd.collector|already stopped
httpd.dispatcher|already stopped
httpd.graphite|already stopped
httpd.parking|already stopped
httpd.portal|already stopped
httpd.proxy|already stopped
httpd.webservices|already stopped
iptables|stop
keepalived|already stopped
p0f|already stopped
pfbandwidthd|already stopped
pfdetect|already stopped
pfdhcplistener|already stopped
pfdns|already stopped
pffilter|already stopped
pfmon|already stopped
pfqueue|already stopped
pfsetvlan|already stopped
pfsso|already stopped
radiusd-acct|already stopped
radiusd-auth|already stopped
radiusd-load_balancer|already stopped
radsniff|already stopped
redis_ntlm_cache|already stopped
redis_queue|already stopped
routes|stop
snmptrapd|already stopped
statsd|already stopped
winbindd|already stopped
Could not write namespace config::PfDefault to L2 cache !
Could not write namespace config::Documentation to L2 cache !
Could not write namespace config::Cluster to L2 cache !
Could not write namespace config::Pf to L2 cache !
Could not write namespace interfaces to L2 cache !
Could not write namespace interfaces::management_network to L2 cache !
Could not write namespace resource::local_secret to L2 cache !
Could not write namespace config::Network to L2 cache !
Could not write namespace config::Switch to L2 cache !
Could not write namespace config::Authentication to L2 cache !
Could not write namespace config::Domain to L2 cache !
Could not write namespace resource::authentication_sources to L2 cache !
Could not write namespace resource::fqdn to L2 cache !
Could not write namespace config::AdminRoles to L2 cache !
Could not write namespace config::ApacheFilters to L2 cache !
Could not write namespace config::Authentication to L2 cache !
Could not write namespace resource::authentication_config_hash to L2 cache !
Could not write namespace resource::authentication_lookup to L2 cache !
Could not write namespace resource::authentication_sources to L2 cache !
Could not write namespace resource::passthroughs to L2 cache !
Could not write namespace config::Profiles to L2 cache !
Could not write namespace resource::guest_self_registration to L2 cache !
Could not write namespace config::BillingTiers to L2 cache !
Could not write namespace config::Cluster to L2 cache !
Could not write namespace config::Pf to L2 cache !
Could not write namespace resource::CaptivePortal to L2 cache !
Could not write namespace resource::Database to L2 cache !
Could not write namespace resource::fqdn to L2 cache !
Could not write namespace config::Pfdetect to L2 cache !
Could not write namespace resource::trapping_range to L2 cache !
Could not write namespace resource::stats_levels to L2 cache !
Could not write namespace resource::passthroughs to L2 cache !
Could not write namespace interfaces to L2 cache !
Could not write namespace interfaces::listen_ints to L2 cache !
Could not write namespace interfaces::dhcplistener_ints to L2 cache !
Could not write namespace interfaces::ha_ints to L2 cache !
Could not write namespace interfaces::internal_nets to L2 cache !
Could not write namespace interfaces(PacketFence) to L2 cache !
Could not write namespace interfaces::internal_nets(PacketFence) to L2 cache !
Could not write namespace interfaces::inline_enforcement_nets to L2 cache !
Could not write namespace interfaces::vlan_enforcement_nets to L2 cache !
Could not write namespace interfaces::monitor_int to L2 cache !
Could not write namespace interfaces::management_network to L2 cache !
Could not write namespace interfaces::management_network(PacketFence) to L2 cache !
Could not write namespace interfaces::portal_ints to L2 cache !
Could not write namespace interfaces::inline_nets to L2 cache !
Could not write namespace interfaces::routed_isolation_nets to L2 cache !
Could not write namespace interfaces::routed_registration_nets to L2 cache !
Could not write namespace interfaces::radius_ints to L2 cache !
Could not write namespace interfaces(PacketFence) to L2 cache !
Could not write namespace interfaces::internal_nets(PacketFence) to L2 cache !
Could not write namespace interfaces::management_network(PacketFence) to L2 cache !
Could not write namespace config::Pf(PacketFence) to L2 cache !
Could not write namespace resource::cluster_servers to L2 cache !
Could not write namespace resource::cluster_hosts to L2 cache !
Could not write namespace config::DNS_Filters to L2 cache !
Could not write namespace FilterEngine::DNS_Scopes to L2 cache !
Could not write namespace config::DhcpFilters to L2 cache !
Could not write namespace FilterEngine::DhcpScopes to L2 cache !
Could not write namespace config::Documentation to L2 cache !
Could not write namespace config::Domain to L2 cache !
Could not write namespace resource::domain_dns_servers to L2 cache !
Could not write namespace config::Domain(PacketFence) to L2 cache !
Could not write namespace config::Firewall_SSO to L2 cache !
Could not write namespace config::FloatingDevices to L2 cache !
Could not write namespace config::Network to L2 cache !
Could not write namespace interfaces to L2 cache !
Could not write namespace interfaces::listen_ints to L2 cache !
Could not write namespace interfaces::dhcplistener_ints to L2 cache !
Could not write namespace interfaces::ha_ints to L2 cache !
Could not write namespace interfaces::internal_nets to L2 cache !
Could not write namespace interfaces::internal_nets(PacketFence) to L2 cache !
Could not write namespace interfaces::inline_enforcement_nets to L2 cache !
Could not write namespace interfaces::vlan_enforcement_nets to L2 cache !
Could not write namespace interfaces::monitor_int to L2 cache !
Could not write namespace interfaces::management_network to L2 cache !
Could not write namespace interfaces::management_network(PacketFence) to L2 cache !
Could not write namespace interfaces::portal_ints to L2 cache !
Could not write namespace interfaces::inline_nets to L2 cache !
Could not write namespace interfaces::routed_isolation_nets to L2 cache !
Could not write namespace interfaces::routed_registration_nets to L2 cache !
Could not write namespace interfaces::radius_ints to L2 cache !
Could not write namespace interfaces(PacketFence) to L2 cache !
Could not write namespace interfaces::internal_nets(PacketFence) to L2 cache !
Could not write namespace interfaces::management_network(PacketFence) to L2 cache !
Could not write namespace config::PKI_Provider to L2 cache !
Could not write namespace config::PfDefault to L2 cache !
Could not write namespace config::Pf to L2 cache !
Could not write namespace resource::CaptivePortal to L2 cache !
Could not write namespace resource::Database to L2 cache !
Could not write namespace resource::fqdn to L2 cache !
Could not write namespace config::Pfdetect to L2 cache !
Could not write namespace resource::trapping_range to L2 cache !
Could not write namespace resource::stats_levels to L2 cache !
Could not write namespace resource::passthroughs to L2 cache !
Could not write namespace interfaces to L2 cache !
Could not write namespace interfaces::listen_ints to L2 cache !
Could not write namespace interfaces::dhcplistener_ints to L2 cache !
Could not write namespace interfaces::ha_ints to L2 cache !
Could not write namespace interfaces::internal_nets to L2 cache !
Could not write namespace interfaces::internal_nets(PacketFence) to L2 cache !
Could not write namespace interfaces::inline_enforcement_nets to L2 cache !
Could not write namespace interfaces::vlan_enforcement_nets to L2 cache !
Could not write namespace interfaces::monitor_int to L2 cache !
Could not write namespace interfaces::management_network to L2 cache !
Could not write namespace interfaces::management_network(PacketFence) to L2 cache !
Could not write namespace interfaces::portal_ints to L2 cache !
Could not write namespace interfaces::inline_nets to L2 cache !
Could not write namespace interfaces::routed_isolation_nets to L2 cache !
Could not write namespace interfaces::routed_registration_nets to L2 cache !
Could not write namespace interfaces::radius_ints to L2 cache !
Could not write namespace interfaces(PacketFence) to L2 cache !
Could not write namespace interfaces::internal_nets(PacketFence) to L2 cache !
Could not write namespace interfaces::management_network(PacketFence) to L2 cache !
Could not write namespace config::Pf(PacketFence) to L2 cache !
Could not write namespace config::Pfmon to L2 cache !
Could not write namespace config::PfmonDefault to L2 cache !
Could not write namespace config::Pfqueue to L2 cache !
Could not write namespace config::PortalModules to L2 cache !
Could not write namespace config::Profiles to L2 cache !
Could not write namespace FilterEngine::Profile to L2 cache !
Could not write namespace resource::URI_Filters to L2 cache !
Could not write namespace config::Provisioning to L2 cache !
Could not write namespace config::RadiusFilters to L2 cache !
Could not write namespace FilterEngine::RadiusScopes to L2 cache !
Could not write namespace config::Realm to L2 cache !
Could not write namespace config::Report to L2 cache !
Could not write namespace config::Roles to L2 cache !
Could not write namespace config::Scan to L2 cache !
Could not write namespace config::Violations to L2 cache !
Could not write namespace FilterEngine::Violation to L2 cache !
Could not write namespace resource::accounting_triggers to L2 cache !
Could not write namespace resource::bandwidth_expired_violations to L2 cache !
Could not write namespace config::VlanFilters to L2 cache !
Could not write namespace FilterEngine::VlanScopes to L2 cache !
Could not write namespace config::Wmi to L2 cache !
Could not write namespace resource::array_test to L2 cache !
Could not write namespace resource::local_secret to L2 cache !
Could not write namespace config::Switch to L2 cache !
Could not write namespace resource::default_switch to L2 cache !
Could not write namespace resource::switches_group to L2 cache !
Could not write namespace resource::switches_ranges to L2 cache !
Could not write namespace interfaces::management_network to L2 cache !
Could not write namespace interfaces::management_network(PacketFence) to L2 cache !
Could not write namespace resource::SwitchTypesConfigured to L2 cache !
Could not write namespace resource::cli_switches to L2 cache !
Could not write namespace resource::reverse_fqdn to L2 cache !
Could not write namespace resource::switches_list to L2 cache !
Created symlink from /etc/systemd/system/packetfence-base.target.wants/packetfence-haproxy.service to /usr/lib/systemd/system/packetfence-haproxy.service.
haproxy|start
Created symlink from /etc/systemd/system/packetfence.target.wants/packetfence-httpd.aaa.service to /usr/lib/systemd/system/packetfence-httpd.aaa.service.
<HANGS HERE>
...
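When the restart hangs like this, the per-service systemd journals are usually more telling than the pfcmd output; for example, using the unit names from the symlink messages above:

journalctl -u packetfence-haproxy.service -n 50 --no-pager
journalctl -u packetfence-httpd.aaa.service -n 50 --no-pager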
My cluster.conf is as follows:
# Cluster configuration file for active/active
# Clustering is deactivated by default in this file
# To activate the active/active mode, set a management IP in the cluster section
# Before doing any changes to this file, read the documentation
[CLUSTER]
management_ip=172.16.1.16
[CLUSTER interface eth0]
ip=172.16.1.16
type=management,high-availability
[CLUSTER interface eth0.2]
ip=192.168.2.5
type=internal
[CLUSTER interface eth0.3]
ip=192.168.3.5
type=internal
[packetfence.earthcolor.com]
management_ip=172.16.1.50
[packetfence.earthcolor.com interface eth0]
ip=172.16.1.50
type=management,high-availability
mask=255.255.248.0
[packetfence.earthcolor.com interface eth0.2]
enforcement=vlan
ip=192.168.2.1
type=internal
mask=255.255.255.0
[packetfence.earthcolor.com interface eth0.3]
enforcement=vlan
ip=192.168.3.1
type=internal
mask=255.255.255.0
[packetfence2.earthcolor.com]
management_ip=172.16.1.49
[packetfence2.earthcolor.com interface eth0]
ip=172.16.1.49
type=management,high-availability
mask=255.255.248.0
[packetfence2.earthcolor.com interface eth0.2]
enforcement=vlan
ip=192.168.2.2
type=internal
mask=255.255.255.0
[packetfence2.earthcolor.com interface eth0.3]
enforcement=vlan
ip=192.168.3.2
type=internal
mask=255.255.255.0
[packetfence3.earthcolor.com]
management_ip=172.16.1.19
[packetfence3.earthcolor.com interface eth0]
ip=172.16.1.19
type=management,high-availability
mask=255.255.248.0
[packetfence3.earthcolor.com interface eth0.2]
enforcement=vlan
ip=192.168.2.3
type=internal
mask=255.255.255.0
[packetfence3.earthcolor.com interface eth0.3]
enforcement=vlan
ip=192.168.3.3
type=internal
mask=255.255.255.0
My pf.conf follows:
[general]
#
# general.domain
#
# Domain name of PacketFence system.
domain=earthcolor.com
#
# general.dhcpservers
#
# Comma-delimited list of DHCP servers. Passthroughs are created to allow DHCP transactions from even "trapped" nodes.
dhcpservers=127.0.0.1,172.16.1.12,172.17.1.12,192.168.35.12
[alerting]
#
# alerting.emailaddr
#
# Email address to which notifications of rogue DHCP servers, violations with an action of "email", and any other
# PacketFence-related messages are sent.
emailaddr=***@earthcolor.com
[database]
#
# database.pass
#
# Password for the mysql database used by PacketFence. Changing this parameter after the initial configuration will *not* change it in the database itself, only in the configuration.
pass=<mypassword>
host=127.0.0.1
[monitoring]
db_host=127.0.0.1
[active_active]
galera_replication_username=pfcluster
galera_replication_password=<mypassword>
[interface eth0]
ip=172.16.1.50
type=management
mask=255.255.248.0
[interface eth0.2]
enforcement=vlan
ip=192.168.2.1
type=internal
mask=255.255.255.0
[interface eth0.3]
enforcement=vlan
ip=192.168.3.1
type=internal
mask=255.255.255.0
At first I had no error when checking the config with pfcmd checkup, but after restarting the database with "/usr/local/pf/sbin/pf-mariadb --force-new-cluster", I get an error connecting to the database:
# /usr/local/pf/bin/pfcmd checkup
Checking configuration sanity...
Could not write namespace config::PfDefault to L2 cache !
Could not write namespace config::Documentation to L2 cache !
Could not write namespace config::Cluster to L2 cache !
Could not write namespace config::PfDefault to L2 cache !
Could not write namespace config::Documentation to L2 cache !
Could not write namespace config::Cluster to L2 cache !
Radius configuration is missing from raddb directory. Assuming this is a first run.
FATAL - Unable to connect to your database. Please verify your connection settings in conf/pf.conf and make sure that it is started.
WARNING - unknown configuration parameter monitoring.db_host if you added the parameter yourself make sure it is present in conf/documentation.conf
FATAL - Cannot connect to database to check schema version: unable to connect to database: Lost connection to MySQL server at 'reading initial communication packet', system error: 25 "Inappropriate ioctl for device" at /usr/local/pf/lib/pf/version.pm line 42.
In /var/lib/mysql/PacketFence.err:
2017-05-09 16:39:49 140032874637056 [Note] WSREP: Flow-control interval: [16, 16]
2017-05-09 16:39:49 140032874637056 [Note] WSREP: Restored state OPEN -> JOINED (0)
2017-05-09 16:39:49 140032874637056 [Note] WSREP: Member 0.0 (PacketFence) synced with group.
2017-05-09 16:39:49 140032874637056 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 0)
2017-05-09 16:39:49 140033174883072 [Note] WSREP: New cluster view: global state: 1e95c0b6-34d6-11e7-a3de-ae380f6439d6:0, view# 1: Primary, number of nodes: 1, my index: 0, protocol version 3
2017-05-09 16:39:49 140033175202048 [Note] WSREP: SST complete, seqno: 0
2017-05-09 16:39:49 7f5c03aa2900 InnoDB: Warning: Using innodb_additional_mem_pool_size is DEPRECATED. This option may be removed in future releases, together with the option innodb_use_sys_malloc and with the InnoDB's internal memory allocator.
2017-05-09 16:39:50 140033175202048 [Note] InnoDB: Using mutexes to ref count buffer pool pages
2017-05-09 16:39:50 140033175202048 [Note] InnoDB: The InnoDB memory heap is disabled
2017-05-09 16:39:50 140033175202048 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2017-05-09 16:39:50 140033175202048 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2017-05-09 16:39:50 140033175202048 [Note] InnoDB: Compressed tables use zlib 1.2.7
2017-05-09 16:39:50 140033175202048 [Note] InnoDB: Using Linux native AIO
2017-05-09 16:39:50 140033175202048 [Note] InnoDB: Using SSE crc32 instructions
2017-05-09 16:39:50 140033175202048 [Note] InnoDB: Initializing buffer pool, size = 500.0M
2017-05-09 16:39:50 140033175202048 [Note] InnoDB: Completed initialization of buffer pool
2017-05-09 16:39:50 140033175202048 [Note] InnoDB: Highest supported file format is Barracuda.
2017-05-09 16:39:50 140033175202048 [Note] InnoDB: 128 rollback segment(s) are active.
2017-05-09 16:39:50 140033175202048 [Note] InnoDB: Waiting for purge to start
2017-05-09 16:39:50 140033175202048 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.34-79.1 started; log sequence number 1724734259
2017-05-09 16:39:50 140031821842176 [Note] InnoDB: Dumping buffer pool(s) not yet started
2017-05-09 16:39:50 140033175202048 [Note] Plugin 'FEEDBACK' is disabled.
2017-05-09 16:39:50 140033175202048 [Note] Server socket created on IP: '172.16.1.19'.
2017-05-09 16:39:50 140033175202048 [Note] /usr/sbin/mysqld: ready for connections.
Version: '10.1.21-MariaDB' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server
2017-05-09 16:39:50 140033174883072 [Note] WSREP: REPL Protocols: 7 (3, 2)
2017-05-09 16:39:50 140033174883072 [Note] WSREP: Assign initial position for certification: 0, protocol version: 3
2017-05-09 16:39:50 140032932943616 [Note] WSREP: Service thread queue flushed.
2017-05-09 16:39:50 140033174883072 [Note] WSREP: GCache history reset: old(00000000-0000-0000-0000-000000000000:0) -> new(1e95c0b6-34d6-11e7-a3de-ae380f6439d6:0)
2017-05-09 16:39:50 140033174883072 [Note] WSREP: Synchronized with group, ready for connections
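After bootstrapping with --force-new-cluster, the Galera state is easy to confirm over the working socket connection; a single bootstrapped node should report wsrep_cluster_size 1 and wsrep_cluster_status Primary. A minimal check:

mysql -h localhost -p -e "SHOW STATUS LIKE 'wsrep_cluster%';"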
Any help greatly appreciated
Thanks
From: Sokolowski, Darryl [mailto:***@earthcolor.com]
Sent: Wednesday, May 3, 2017 2:58 PM
To: packetfence-***@lists.sourceforge.net
Subject: [PacketFence-users] Creating PF 7 cluster radiusd errors
Hi all,
I am trying to create a PF 7 cluster following the Clustering Guide, and get the following error when performing the checkup at the bottom of page 8:
...
/usr/local/pf/bin/pfcmd checkup
Checking configuration sanity...
WARNING - unknown configuration parameter monitoring.db_host if you added the parameter yourself make sure it is present in conf/documentation.conf
...
I did have to add a [monitoring] section to pf.conf, as it didn't exist.
Is there something I need to add to documentation.conf?
If I attempt to start the pf services, radiusd-acct, radiusd-auth, and radiusd-load_balancer all fail to start. All show a similar message: "start request repeated too quickly, refusing to start".
...
# systemctl -l status packetfence-radiusd-acct.service
● packetfence-radiusd-acct.service - PacketFence FreeRADIUS multi-protocol accounting server
Loaded: loaded (/lib/systemd/system/packetfence-radiusd-acct.service; enabled)
Active: failed (Result: start-limit) since Wed 2017-05-03 14:10:06 EDT; 3min 57s ago
Docs: man:radiusd(8)
man:radiusd.conf(5)
http://wiki.freeradius.org/
http://networkradius.com/doc/
Process: 5628 ExecStartPre=/usr/sbin/freeradius -d /usr/local/pf/raddb -n acct -Cxm -lstdout (code=exited, status=1/FAILURE)
Process: 5581 ExecStartPre=/usr/local/pf/bin/pfcmd service radiusd generateconfig (code=exited, status=0/SUCCESS)
Main PID: 1876 (code=exited, status=0/SUCCESS)
May 03 14:10:06 packetfence systemd[1]: Starting PacketFence FreeRADIUS multi-protocol accounting server...
May 03 14:10:06 packetfence systemd[1]: packetfence-radiusd-acct.service start request repeated too quickly, refusing to start.
May 03 14:10:06 packetfence systemd[1]: Failed to start PacketFence FreeRADIUS multi-protocol accounting server.
May 03 14:10:06 packetfence systemd[1]: Unit packetfence-radiusd-acct.service entered failed state.
May 03 14:10:11 packetfence systemd[1]: Starting PacketFence FreeRADIUS multi-protocol accounting server...
May 03 14:10:11 packetfence systemd[1]: packetfence-radiusd-acct.service start request repeated too quickly, refusing to start.
May 03 14:10:11 packetfence systemd[1]: Failed to start PacketFence FreeRADIUS multi-protocol accounting server.
May 03 14:10:15 packetfence systemd[1]: Starting PacketFence FreeRADIUS multi-protocol accounting server...
May 03 14:10:15 packetfence systemd[1]: packetfence-radiusd-acct.service start request repeated too quickly, refusing to start.
May 03 14:10:15 packetfence systemd[1]: Failed to start PacketFence FreeRADIUS multi-protocol accounting server.
...
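"start request repeated too quickly" only means systemd gave up retrying; the real failure is the ExecStartPre config check that exited with status=1 above. Re-running those same two commands by hand (taken verbatim from the unit status output) should print the actual FreeRADIUS error:

/usr/local/pf/bin/pfcmd service radiusd generateconfig
/usr/sbin/freeradius -d /usr/local/pf/raddb -n acct -Cxm -lstdout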
In packetfence.log I see:
...
May 3 14:10:05 packetfence packetfence: WARN radsniff-wrapper(5613): requesting member ips for an undefined interface... (pf::cluster::members_ips)
May 3 14:10:05 packetfence packetfence: FATAL radsniff-wrapper(5613): Use of uninitialized value $_ in concatenation (.) or string at /usr/local/pf/lib/pf/services/manager/radsniff.pm line 45.
(pf::services::manager::radsniff::make_filter)
May 3 14:10:06 packetfence packetfence: WARN radsniff-wrapper(5635): requesting member ips for an undefined interface... (pf::cluster::members_ips)
May 3 14:10:06 packetfence packetfence: FATAL radsniff-wrapper(5635): Use of uninitialized value $_ in concatenation (.) or string at /usr/local/pf/lib/pf/services/manager/radsniff.pm line 45.
(pf::services::manager::radsniff::make_filter)
May 3 14:10:07 packetfence packetfence: INFO pfcmd.pl(5590): generating /usr/local/pf/var/conf/ssl-certificates.conf (pf::services::manager::httpd::generateCommonConfig)
May 3 14:10:07 packetfence packetfence: INFO pfcmd.pl(5590): generating /usr/local/pf/var/conf/captive-portal-common (pf::services::manager::httpd::generateCommonConfig)
May 3 14:10:07 packetfence packetfence: WARN radsniff-wrapper(5641): requesting member ips for an undefined interface... (pf::cluster::members_ips)
May 3 14:10:07 packetfence packetfence: FATAL radsniff-wrapper(5641): Use of uninitialized value $_ in concatenation (.) or string at /usr/local/pf/lib/pf/services/manager/radsniff.pm line 45.
(pf::services::manager::radsniff::make_filter)
May 3 14:10:10 packetfence packetfence: WARN pfcmd.pl(5633): requesting member ips for an undefined interface... (pf::cluster::m
...
Any help greatly appreciated.
Thanks
Darryl
--
Thierry Laurion
***@inverse.ca :: +1.514.447.4918 *120 :: https://inverse.ca
Inverse inc. :: Leaders behind SOGo (https://sogo.nu) and PacketFence (https://packetfence.org)
--
Fabrice Durand
***@inverse.ca :: +1.514.447.4918 (x135) :: www.inverse.ca
Inverse inc. :: Leaders behind SOGo (http://www.sogo.nu) and PacketFence (http://packetfence.org)