NTPD
NTP servers communicate over UDP port 123, and unlike most UDP protocols the source port is NOT an ephemeral high port: it is 123 as well. The firewall must be configured to allow UDP with both source and destination port 123 between your new NTP server and the Stratum 1 server.
iptables
Using iptables
[bash]# iptables -I INPUT -p udp --dport 123 -j ACCEPT
[bash]# service iptables save
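Since symmetric NTP traffic uses source port 123 as well (as noted above), a tighter rule can match both ports. A sketch, assuming the upstream clock source is 10.1.19.12 as in the configuration examples later in this article:

```shell
# Tighter variant: accept NTP packets only when the source port is also 123,
# and only from the example upstream server 10.1.19.12 (an assumption --
# substitute your own Stratum 1 server's address).
iptables -I INPUT -p udp -s 10.1.19.12 --sport 123 --dport 123 -j ACCEPT
service iptables save
```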
firewalld
Using firewalld, assuming the default zone is public.
[bash]# sudo firewall-cmd --zone=public --permanent --add-service=ntp
[bash]# sudo firewall-cmd --reload
To verify the rules.
[bash]# firewall-cmd --list-all
public (default, active)
  interfaces: eth0
  sources:
  services: dhcpv6-client ntp ssh
  ports:
  masquerade: no
  forward-ports:
  icmp-blocks:
  rich rules:
Now that we have our firewall rules in place to allow NTP synchronization, let's get the service installed and started.
Most modern Linux/UNIX distributions come with NTP already installed. On Red Hat based distros you can install the NTP package with yum.
[bash]# yum install ntp
The main configuration file for NTP on Red Hat based systems is ntp.conf, located in the /etc directory. As a first step, open that file in your favorite editor and add the servers you want to use in the following format. The example below assumes our external clock source is 10.1.19.12; the first two lines are commented out because we are using an external clock source instead of the local clock.
# server 127.127.1.0          # local clock
# fudge 127.127.1.0 stratum 10
server 10.1.19.12
Now we have to restrict the access these time servers will have on our system. In the example below we are telling NTP that this server is not allowed to modify our run-time configuration or query our system. The mask specified below limits the restriction to a single IP, i.e. a single-host subnet.
restrict 10.1.19.12 mask 255.255.255.255 nomodify notrap noquery
Now since we are setting up a server to “serve” time to other clients we have to tell it from which networks to allow NTP requests. We use the same basic restrict statement as above, but this time you will notice the noquery option is removed allowing said network to query this server. The following example allows everyone within the 10.64.64.0/24 network to query the server.
restrict 10.64.64.0 mask 255.255.255.0 nomodify notrap
As with most services localhost gets full access. For this we use the same restrict statement but with no options.
restrict 127.0.0.1
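Putting the pieces together, the relevant part of /etc/ntp.conf would look something like this (a sketch using the example addresses from above; adjust the clock source and client network to your environment):

```
# /etc/ntp.conf (excerpt)

# Local clock disabled in favour of the external clock source
# server 127.127.1.0          # local clock
# fudge 127.127.1.0 stratum 10

# External clock source
server 10.1.19.12

# The upstream server may give us time, but not modify or query us
restrict 10.1.19.12 mask 255.255.255.255 nomodify notrap noquery

# Clients on 10.64.64.0/24 may query us for time
restrict 10.64.64.0 mask 255.255.255.0 nomodify notrap

# localhost gets full access
restrict 127.0.0.1
```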
Restart NTPD
[bash]# service ntpd restart
On RH 7.x, restart NTPD with systemctl.
[bash]# systemctl restart ntpd.service
That's it, we have now configured our NTP server to pull time synchronization from stratum 1 servers and accept time synchronization requests from computers on our network. Now we have to start the service and make sure it starts at boot. Before we go crazy, let's make sure everything is working as expected and also run an initial update.
First, let's query the upstream server to confirm it is serving time.
[bash]# ntpq -p 10.1.19.12
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+padhux31.in.rea 223.255.185.2    2 u   39  128  356    0.120   -0.329   1.510
 tmhhux50.in.rea 10.1.19.12       3 u  399  512  166  114.090   -0.349   8.670
 tmhhux51.in.rea 10.1.19.12       3 u  419 1024  356  113.770   -1.400  15.980
*223.255.185.2   .MRS.            1 u  904 1024  325  121.200   -2.306   3.520
 padhux04.in.rea 133.100.9.2      3 -  61d 1024     0    0.120   23.153 16000.0
You should now set the runlevels required for the ntpd service, then restart it.
[bash]# chkconfig --level 2345 ntpd on
[bash]# /etc/init.d/ntpd restart

Or in RH7.x

[bash]# systemctl enable ntpd.service
You can check which runlevels the service will be active with the following command.
[bash]# chkconfig --list ntpd

Or in RH7.x

[bash]# systemctl list-unit-files | grep ntpd
To see if the service started successfully, you should check the system log file.
[bash]# grep ntpd /var/log/messages
Nov 8 22:52:48 gcc-rhel ntpd[15652]: ntpd 4.2.2p1@1.1570-o Mon May 30 15:43:16 UTC 2011 (1)
Nov 8 22:52:48 gcc-rhel ntpd[15653]: precision = 1.000 usec
Nov 8 22:52:48 gcc-rhel ntpd[15653]: Listening on interface wildcard, 0.0.0.0#123 Disabled
Nov 8 22:52:48 gcc-rhel ntpd[15653]: Listening on interface lo, 127.0.0.1#123 Enabled
Nov 8 22:52:48 gcc-rhel ntpd[15653]: Listening on interface eth0, 10.64.64.6#123 Enabled
Nov 8 22:52:48 gcc-rhel ntpd[15653]: kernel time sync status 0040
Nov 8 22:52:48 gcc-rhel ntpd[15653]: getaddrinfo: "::1" invalid host address, ignored
Nov 8 22:52:48 gcc-rhel ntpd[15653]: frequency initialized 71.419 PPM from /var/lib/ntp/drift
Nov 8 22:57:08 gcc-rhel ntpd[15653]: synchronized to 10.1.19.12, stratum 2
Nov 8 22:57:08 gcc-rhel ntpd[15653]: kernel time sync enabled 0001
You can now query the NTP server with the ntpq (query) tool. The output displayed just after ntpd has been (re)started will be similar to the first table. As ntpd is allowed to run for a while, the table will start to fill with synchronisation details.
[bash]# ntpq -pn
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*10.1.19.12      223.255.185.2    2 u   14  256  377  115.690   -2.822   0.922
<note important>We need to wait until ntpd syncs with the clock source before it will start serving time. This can take up to 5 minutes. Monitor /var/log/messages for the "synchronized" messages.</note>
Offset
If the offset is too large, ntpd will not sync, and we can force a one-time manual (STEP) sync with ntpdate. E.g. the offset below is about 26 seconds (-26321 ms).
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 202.84.191.10   10.100.1.21      6 u    5   64    3    0.313  -26321.   0.827
[bash]# ntpdate -u <ntp IP>
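Since ntpd holds UDP port 123 and should not be running while the clock is stepped, a common approach (a sketch, using the example clock source 10.1.19.12 from above) is to stop the daemon, step once, and start it again:

```shell
# One-time step sync against the example clock source 10.1.19.12 (an assumption --
# use your own upstream server's address).
service ntpd stop
ntpdate -u 10.1.19.12   # -u queries from an unprivileged source port (firewall-friendly)
service ntpd start
```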
Reachability
Start your NTP daemon. Curious to see how well it's syncing up, you monitor the output of ntpq -pn and watch the reachability statistics in the reach field climb towards the mysterious upper bound of 377. At last, the system reaches that exalted state, and all is well with the world.
[bash]# ntpq -pn
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*10.1.19.12      223.255.185.2    2 u   14  256  377  115.690   -2.822   0.922
Each remote server or peer is assigned its own buffer by ntpd. This buffer represents the status of the last eight NTP transactions between the NTP daemon and a given remote time server. Each bit is a boolean value, where a 1 indicates a successful transaction and a 0 indicates a failure. Each time a new packet is sent, the entire eight-bit register is shifted one bit left as the newest bit enters from the right.
The net result is that dropped packets can be tracked over eight poll intervals before falling off the end of the register to make room for new data. This recycling of space in the register is why it's called a circular buffer, but it may make more sense to think of it in linear terms, as a steady leftward march: eight small steps, and then the bit ends up wherever bits go when they die.
For reasons that seemed good to the developers, this register is displayed to the user in octal values instead of binary, decimal or even hex. The maximum value of an eight-bit binary number is 11111111, which is 377 in octal, 255 in decimal and 0xFF in hex.
So why does the value of the reach field drop when packets are being successfully sent and received? For those of you who dream in octals, this next part may seem obvious. For ordinary mortals, it requires closer scrutiny.
The answer is that the lower numerical values are caused by the left-shifting of the reachability register. Remember, the buffer is not a metric, it is a FIFO log. So, if you have received the last eight NTP packets successfully, the log contains all 1s and the reach field contains the octal value of 377.
Let's assume that on the next update, a packet is dropped. Because NTP is a UDP-based protocol with no delivery guarantees, this is not necessarily a cause for alarm. But the NTP daemon dutifully logs the failure in the circular buffer and waits for the next poll period. The log now contains 11111110 and a reach field value of 376.
If the next seven polls are successful, seven 1s are added from the right-hand side of the register, pushing the 0 representing the dropped packet further towards the left (and digital oblivion). The table below shows the progression of a single dropped packet through the reachability register.
Buffer    Octal  Decimal
--------  -----  -------
11111110  376    254
11111101  375    253
11111011  373    251
11110111  367    247
11101111  357    239
11011111  337    223
10111111  277    191
01111111  177    127
11111111  377    255
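The left-shift behaviour described above is easy to reproduce. Here is a minimal shell sketch of the register (our own illustration of the mechanism, not ntpd code):

```shell
# Simulate ntpd's 8-bit reachability register.
# A 1 is shifted in from the right for a successful poll, a 0 for a miss.
reach=0
poll() {
    reach=$(( ((reach << 1) | $1) & 0xFF ))   # shift left, keep 8 bits
    printf 'reach = %03o (octal)\n' "$reach"
}

# Eight successful polls fill the register to 377.
for i in 1 2 3 4 5 6 7 8; do poll 1; done

# One dropped packet drops the value to 376 ...
poll 0

# ... and seven further successes walk the zero leftwards: 375, 373, 367, ...
poll 1; poll 1; poll 1
```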
As you can see, the 0 representing the dropped packet is moved one bit to the left every time the daemon polls its server. The numerical value assigned to each bit is higher on the left than on the right. In the binary system, the leftmost digit holds a value of 128, while the rightmost digit represents a value of 1. So, as the zero moves leftwards, it actually produces a lower numerical value, despite the fact that its distance from the right-hand side of the register represents an increase in the time since the packet was dropped.
If the zero falls off the end of the register, and no other packets have been dropped, the value of the reach field jumps back to 377 with no intervening steps. This can be very confusing if you insist on viewing the numbers as a connection metric rather than as a history log.