Recent Questions - Server Fault most recent 30 from serverfault.tech 2021-01-15T23:03:07Z https://serverfault.tech/feeds https://creativecommons.org/licenses/by-sa/4.0/rdf https://serverfault.tech/q/1050041 0 Create init.d Apache service for multiple instances on Debian Stephane https://serverfault.tech/users/612703 2021-01-15T22:29:02Z 2021-01-15T22:29:02Z <p>I am trying to install a second instance of Apache on Debian. I used the multiple-instance script. During the installation it says:</p> <pre><code>root@nextcloudpi:/usr/share/doc/apache2/examples# sudo sh setup-instance suitecrm2
Setting up /etc/apache2-suitecrm2 ...
systemd is in use, no init script installed
use the 'apache2@suitecrm2.service' service to control your new instance
sample commands:
systemctl start apache2@suitecrm2.service
systemctl enable apache2@suitecrm2.service
Setting up symlinks: a2enmod-suitecrm2 a2dismod-suitecrm2 a2ensite-suitecrm2 a2dissite-suitecrm2 a2enconf-suitecrm2 a2disconf-suitecrm2 apache2ctl-suitecrm2
Setting up /etc/logrotate.d/apache2-suitecrm2 and /var/log/apache2-suitecrm2 ...
Setting up /etc/default/apache-htcacheclean-suitecrm2
root@nextcloudpi:/usr/share/doc/apache2/examples# sudo systemctl edit apache2.service
</code></pre> <p>So systemd is in use and I don't get a service file in init.d. When I try to start the service as mentioned in the output, it says there is no apache2-suitecrm2.service.</p> <p>How do I create a correct init.d file to start the service? Or how do I disable systemd so it is not in use and the second-instance script is allowed to place an init script?</p> <p>I have looked in the documentation and found that there is a script secondary-init-script, also to be found in /usr/share/doc/apache2/examples.
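For context: apache2@suitecrm2.service is an instance of a systemd template unit, apache2@.service, in which %i expands to the text after the @, so the unit name uses an @ rather than a dash. Roughly, such a template looks like the sketch below (an illustrative sketch, not Debian's actual unit file):

```
# Illustrative sketch of a template unit such as /lib/systemd/system/apache2@.service
[Service]
# %i expands to "suitecrm2" when started as apache2@suitecrm2.service,
# selecting the /etc/apache2-suitecrm2 configuration directory
Environment=APACHE_CONFDIR=/etc/apache2-%i
ExecStart=/usr/sbin/apachectl start
```

Starting apache2@suitecrm2.service (with the @) is therefore the systemd equivalent of the per-instance init.d script; no separate service file needs to exist.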
But I don't understand how this works (see <a href="https://alioth-lists-archive.debian.net/pipermail/pkg-apache-commits/2010-February/000296.html" rel="nofollow noreferrer">https://alioth-lists-archive.debian.net/pipermail/pkg-apache-commits/2010-February/000296.html</a>).</p> <p>Thanks</p> https://serverfault.tech/q/1050039 0 How to increase available disk space in thinly provisioned VM? Jake Reece https://serverfault.tech/users/465782 2021-01-15T22:04:23Z 2021-01-15T22:04:23Z <p>I created a thinly provisioned CentOS VM on an ESXi instance. To use nice numbers, let's say I configured a max disk space of 500 GB. After installing the OS and booting, when I open File Manager and navigate to Computer, it shows that ~20 GB are used and ~30 GB are left (50 GB total). So far so good - I expect the size to grow as that gap closes.</p> <p>But I need to install software that requires 40 GB of free space. When I run the installer, it sees that only 30 GB are available and exits. This raises several questions for me:</p> <ol> <li>Why did VMware decide to initialize the VM with 50 GB when the max was 500 GB?</li> <li>Could I have customized the initial size to allow enough space to install the software?</li> <li>How can I force the disk to grow to make room for the software I'm trying to install?</li> </ol> https://serverfault.tech/q/1050035 0 Runtime invalid argument james Pattinson https://serverfault.tech/users/612697 2021-01-15T21:20:35Z 2021-01-15T21:20:35Z <p>I'm following this guide to set up my cloud platform:</p> <p><a href="https://github.com/Jaycar-Electronics/Motherload-Datalogger" rel="nofollow noreferrer">https://github.com/Jaycar-Electronics/Motherload-Datalogger</a></p> <p>I keep getting this error for the Node.js 8 runtime:</p> <p>The request contains invalid arguments build_environment_variables: build environment variables are not supported by this runtime</p> <p>My variables are SHEET and the Google sheet ID.</p> <p>I'm a total newbie and have no idea how to fix it.</p> <p>Not
sure what other info to post as I've followed the instructions to the letter. Thanks</p> https://serverfault.tech/q/1050033 0 Symlink with execution permission and... arguments? Kolay.Ne https://serverfault.tech/users/580517 2021-01-15T21:16:26Z 2021-01-15T21:16:26Z <p>I am using Linux Mint 20.1, which is based on Ubuntu 20.04. My kernel is <code>5.4.0-60-generic</code>. All the commands below were run in <code>GNU bash, version 5.0.17(1)-release (x86_64-pc-linux-gnu)</code>.</p> <p>I have seen the same behaviour with some other commands, but I'll use <code>ping</code> as an example here. Let's see what happens if I run the following commands:</p> <pre class="lang-sh prettyprint-override"><code>nikolay@KoLin:~$ ping4 -c1 google.com
PING google.com (108.177.14.139) 56(84) bytes of data.
64 bytes from lt-in-f139.1e100.net (108.177.14.139): icmp_seq=1 ttl=107 time=41.3 ms

--- google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 41.301/41.301/41.301/0.000 ms
nikolay@KoLin:~$ ping6 -c1 google.com
ping6: connect: Network is unreachable
nikolay@KoLin:~$ </code></pre> <p>The output is reasonable. The error, obviously, occurs in the second run, because I don't have an IPv6 network configured on my laptop. But the output proves that <code>ping4</code> and <code>ping6</code> are two different things in my system. But what are they actually?
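As the directory listings that follow show, both names resolve to one binary; a program can inspect the name it was invoked as (argv[0]) and change behaviour accordingly. A minimal demonstration of that multi-call-binary trick (a toy sketch, not ping's actual code):

```shell
# Toy multi-call binary: behaviour depends on the name used to invoke it.
dir=$(mktemp -d)
cat > "$dir/tool" <<'EOF'
#!/bin/sh
case "$(basename "$0")" in
  tool4) echo "IPv4 mode" ;;
  tool6) echo "IPv6 mode" ;;
  *)     echo "default mode" ;;
esac
EOF
chmod +x "$dir/tool"
ln -s tool "$dir/tool4"   # both links point at the same executable
ln -s tool "$dir/tool6"
"$dir/tool4"   # prints: IPv4 mode
"$dir/tool6"   # prints: IPv6 mode
```

The real ping does the same thing in C: it checks which name it was started under and fixes the address family. The symlink itself carries no arguments.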
They are both located in <code>/usr/bin</code>:</p> <pre class="lang-sh prettyprint-override"><code>nikolay@KoLin:~$ whereis ping{4,6}
ping4: /usr/bin/ping4 /usr/share/man/man8/ping4.8.gz
ping6: /usr/bin/ping6 /usr/share/man/man8/ping6.8.gz
nikolay@KoLin:~$ </code></pre> <p>And what are these files actually?</p> <pre class="lang-sh prettyprint-override"><code>nikolay@KoLin:~$ ls -l /usr/bin/ping*
-rwxr-xr-x 1 root root 72776 Jan 31 2020 /usr/bin/ping
lrwxrwxrwx 1 root root 4 Jan 11 21:00 /usr/bin/ping4 -&gt; ping
lrwxrwxrwx 1 root root 4 Jan 11 21:00 /usr/bin/ping6 -&gt; ping
nikolay@KoLin:~$ </code></pre> <p>Wow! They are both symbolic links to the same executable <code>/usr/bin/ping</code>! But how's that possible? Is there some magical way to make a symbolic link add execution arguments?</p> https://serverfault.tech/q/1050030 0 Implementing Reverse DNS in a Live Environment PermanentBeginner https://serverfault.tech/users/612696 2021-01-15T21:01:52Z 2021-01-15T21:07:11Z <p>I've been tasked with implementing reverse DNS internally in our Windows environment. What are some risks I should watch out for, and what is the best way to set up a test environment for changes like these?</p> <p>Should I just spin up a new domain controller and add a few DNS entries and then go from there?
I'm a new sysadmin and just really want to avoid breaking something.</p> https://serverfault.tech/q/1050027 0 AWS CloudFront, how to request origin using correct URL Ollegn https://serverfault.tech/users/574120 2021-01-15T20:37:19Z 2021-01-15T20:37:19Z <p>I'm trying to cache a website on an EC2 instance using the URL xyz.com, so I created a CloudFront distribution with the origin pointed at xyz.com.</p> <p>But all URLs returned by the server (like button links) are not relative and include the request URL, meaning that if CloudFront accesses the origin via xyz.com, the content returned by the CDN (for any CNAME used) will contain the origin URL, like &lt;a href=&quot;xyz.com/info&quot;&gt;&lt;a&gt;, instead of a &quot;cached&quot; URL that points to the CloudFront distribution, like cdn.xyz.com.</p> <p>tl;dr<br /> CloudFront should return:<br /> <code>&lt;a href=&quot;cached.xyz.com/info&quot;&gt;&lt;a&gt;</code> (the URL that was accessed, pointing to the distribution)<br /> But it returns:<br /> <code>&lt;a href=&quot;uncached.xyz.com/info&quot;&gt;&lt;a&gt;</code> (the origin URL that contains the data to be cached)</p> <p>Is there a way to make the origin server think that the URL used to access it is the URL used to access the CloudFront distribution, so that it returns correct URLs?</p> https://serverfault.tech/q/1050025 1 Create GRUB Entry for Booting into CentOS Installation Media azurepancake https://serverfault.tech/users/339137 2021-01-15T20:17:55Z 2021-01-15T20:17:55Z <p>Normally when installing CentOS 7, you download the ISO, write it to a thumb drive and boot it. However, I have a unique scenario where I'd like to be able to place the CentOS 7 installation files on a system's <code>/boot/</code> partition and create a custom <code>GRUB2</code> entry that can boot into that environment.
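For reference, current CentOS/RHEL installers read Anaconda boot options with the <code>inst.</code> prefix, so such an entry would look roughly like the sketch below. The <code>inst.stage2</code> device/path spec here is an assumption for illustration, not a tested recipe:

```
menuentry "CentOS 7 Installation" {
    set root=(hd0,msdos1)
    # inst.stage2 tells the installer kernel where to find the squashfs runtime;
    # hd:/dev/sda1:/ is illustrative - match it to the actual /boot device
    linux /centos7-install/vmlinuz inst.stage2=hd:/dev/sda1:/centos7-install/squashfs.img
    initrd /centos7-install/initrd.img
}
```

A dracut-initqueue timeout at boot typically means the kernel and initrd started but could not locate the stage2 image with the options they were given.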
This can then be used to install CentOS 7 on the same system.</p> <p>I know this sounds silly, but I have my reasons.</p> <p>After doing some research, I'm trying to piece together how this works. Here is what I'm seeing in my head currently:</p> <ol> <li><code>GRUB2</code> would load the CentOS 7 kernel <code>vmlinuz</code> and <code>initrd.img</code>.</li> <li>After initializing the kernel and loading the <code>initrd.img</code> into memory, the <code>dracut-initqueue</code> service starts.</li> <li>The <code>dracut-initqueue</code> processes will then try to load the <code>stage2</code> image, which holds the installation runtime (Anaconda in this case).</li> <li>Finally, the CentOS installation process kicks off.</li> </ol> <p>So to start, I:</p> <ol> <li>Downloaded the ISO, mounted it and copied the <code>/images/pxeboot/vmlinuz</code>, <code>/images/pxeboot/initrd.img</code> and <code>/LiveOS/squashfs.img</code> (which I believe contains the <code>stage2</code> image) files to my <code>/boot/</code> partition.</li> <li>Added the below entry to <code>/etc/grub.d/40_custom</code>.</li> </ol> <pre><code>menuentry &quot;CentOS 7 Installation&quot; {
  set root=(hd0,msdos1)
  linux /centos7-install/vmlinuz initrd=initrd.img stage2=/centos7-install/squashfs.img
  initrd /centos7-install/initrd.img
}
</code></pre> <p>After doing the above, I would hope that I would at least get up to loading Anaconda, however instead it just hangs at &quot;dracut-initqueue timeout&quot; messages. Sadly, I haven't had any luck finding any logs that hint towards exactly what went wrong.</p> <p>Would anyone happen to have any ideas on how to go about this?</p> https://serverfault.tech/q/1050024 0 How does btrfs distribute chunks of large files when writing cclloyd https://serverfault.tech/users/454227 2021-01-15T20:15:56Z 2021-01-15T20:15:56Z <p>Suppose I have 8 drives in a raid1 configuration for a btrfs volume.
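Background that frames the question: btrfs raid1 allocates data in chunks of roughly 1 GiB, each mirrored on exactly two devices, and the allocator generally prefers the devices with the most unallocated space. A toy simulation of that policy (chunk size and tie-breaking are simplifications, not btrfs source):

```shell
# Simulate btrfs raid1 chunk placement across 8 equal drives: each ~1 GiB
# data chunk goes to the two devices with the most free space at the time.
dist=$(awk 'BEGIN {
  n = 8
  for (i = 0; i < n; i++) free[i] = 1000          # hypothetical 1000 GiB free each
  for (c = 0; c < 50; c++) {                      # a 50 GB file -> ~50 data chunks
    a = -1; b = -1
    for (i = 0; i < n; i++) {                     # the two emptiest devices win
      if (a < 0 || free[i] > free[a]) { b = a; a = i }
      else if (b < 0 || free[i] > free[b]) b = i
    }
    free[a]--; free[b]--; placed[a]++; placed[b]++
  }
  for (i = 0; i < n; i++) printf "disk%d: %d chunks\n", i, placed[i]
}')
echo "$dist"   # the 100 GiB of mirrored chunks spread over all 8 drives
```

On a live filesystem, <code>btrfs filesystem usage</code> and <code>btrfs device usage</code> show the real per-device chunk allocation.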
I know it mirrors the data in chunks, so that any one chunk always has 2 copies.</p> <p>But when I copy a large file, say 50 GB, does it attempt to store it sequentially on 2 disks, or break it up randomly to spread it out across all available disks?</p> <p>I.e., will there be two generally sequential 50 GB sections, or 100 GB of chunks randomly spread across all disks?</p> https://serverfault.tech/q/1050021 0 SSH closes on yum update and breaks update. Need to update multiple machines via ansible while bypassing this issue SB. https://serverfault.tech/users/600123 2021-01-15T20:06:05Z 2021-01-15T20:06:05Z <p>I'm trying to run yum update -y on multiple machines via Ansible. Currently I am in testing mode, where I spin up machines solely to test how it'll work. Here's the code:</p> <pre><code>- name: Run yum updates
  yum:
    name: '*'
    state: latest
</code></pre> <p>Works fine on CentOS and Amazon Linux 2 machines. On the RHEL machine, or at least the one AMI I've tried (will be trying others soon), after a while it times out. I receive an error that Ansible was unable to connect because the connection closed. (The connection is fine, other tasks work without issue.)</p> <p>I thought perhaps the task was timing out, so I ran it in async mode, but that didn't work.</p> <p>I launched a new instance from the same AMI (available on the AWS Marketplace) and ssh'd manually to update. The connection closes similarly:</p> <pre><code>Updating : systemd-219-78.el7_9.2.x86_64 204/555
warning: /etc/systemd/logind.conf created as /etc/systemd/logind.conf.rpmnew
Updating : 1:dbus-1.10.24-15.el7.x86_64 205/555
Installing : elfutils-default-yama-scope-0.176-5.el7.noarch 206/555
Updating : systemd-sysv-219-78.el7_9.2.x86_64 207/555
Updating : policycoreutils-2.5-34.el7.x86_64 208/555
Updating : iputils-20160308-10.el7.x86_64 209/555
Updating : initscripts-9.49.53-1.el7_9.1.x86_64 210/555
Connection to 107.23.117.115 closed by remote host.
Connection to 107.23.117.115 closed. </code></pre> <p>When I reconnect and try to update, I get the following:</p> <blockquote> <p>There are unfinished transactions remaining. You might consider running yum-complete-transaction, or &quot;yum-complete-transaction --cleanup-only&quot; and &quot;yum history redo last&quot;, first to finish them. If those don't work you'll have to try removing/installing packages by hand (maybe package-cleanup can help).</p> </blockquote> <p>So to summarize:</p> <ol> <li>When running yum update, the connection closes randomly. Running update again has issues.</li> <li>My guess is that this is only an issue with this AMI. Since I don't know which AMIs we have built off this one and launched, I can't simply avoid this one.</li> <li>I need to find out why the update is breaking, and how it can be resolved both manually and via Ansible.</li> </ol> <p>Thanks in advance!</p> https://serverfault.tech/q/1050020 0 Using winexe on linux to start notepad on windows server whatismyname123 https://serverfault.tech/users/612679 2021-01-15T19:39:17Z 2021-01-15T19:39:17Z <p>This question is related to running notepad on a remote machine.</p> <p>Server A: Windows Server 2019 (user is administrator)</p> <p>Client B: Calling notepad.exe on server A and running it on server A</p> <p>The Windows solution is psexec, but I want to do it with winexe, which is a similar program for Linux.</p> <p>There was a solution from this post: <a href="https://serverfault.tech/questions/501539/use-winexe-to-start-a-windows-process?answertab=active#tab-top">Use winexe to start a Windows process</a>, saying:</p> <p>Microsoft's official stance is that launching interactive processes remotely is too big of a security risk, and so they inhibit your ability to do it...
but we can still work around it if we're willing to get dirty:</p> <pre><code>schtasks.exe /create /S COMPUTERNAME /RU &quot;NT AUTHORITY\SYSTEM&quot; /RL HIGHEST /SC ONSTART /TN &quot;RemoteProcess&quot; /TR &quot;program.exe \&quot;argument 1\&quot; \&quot;argument 2\&quot;&quot;
schtasks.exe /Run /S COMPUTERNAME /I /TN &quot;RemoteProcess&quot;
schtasks.exe /Delete /S COMPUTERNAME /TN &quot;RemoteProcess&quot;
</code></pre> <p>The commenter said that it works like a charm. Can someone explain how to make this work? I can't seem to grasp it: do I execute those 3 commands and then run some kind of winexe command?</p> <p>So those commands somehow let me run GUI apps; is anyone interested in explaining how this works step by step?</p> <p>I am really stuck on this problem and on the verge of just quitting, but I need to make this happen. I am also open to any other solutions.</p> https://serverfault.tech/q/1050013 0 Pointing My Domain Name to my VPS Şansal Birbaş https://serverfault.tech/users/408014 2021-01-15T19:02:52Z 2021-01-15T21:45:13Z <p>I have purchased a domain name and asked the registrar how to point it to my VPS. They answered that I should either use a control panel or create my own DNS server. So, following guidelines on how to install BIND on Ubuntu 20.04, I did that. The registrar gives two default nameservers for the domain name: cf21.hostingdunyam.net and cf212.hostingdunyam.net. And I created the zone file as:</p> <pre><code>$TTL 1d
@ IN SOA cf21.hostingdunyam.net. shansal.zoho.com. (
        10      ; Serial
        604800  ; Refresh
        86400   ; Retry
        2419200 ; Expire
        604800 ); Negative Cache TTL
;
@   IN A  80.253.246.157
@   IN NS cf21.hostingdunyam.net.
@   IN NS cf22.hostingdunyam.net.
www IN A  80.253.246.157
</code></pre> <p>So far I have not been able to point the domain to the IP address of my VPS, so I need urgent support here.</p> https://serverfault.tech/q/1049996 0 In IoT Core, "Create a registry" section, what does massive-carrier mean?
Steve N https://serverfault.tech/users/612538 2021-01-15T15:54:11Z 2021-01-15T22:36:06Z <p>In IoT Core, &quot;Create a registry&quot; section, in the &quot;Select a Cloud Pub / Sub topic&quot; box, what does massive-carrier mean?</p> https://serverfault.tech/q/1049989 1 Raw LVM disk write is four times faster on host than inside KVM guest? Nick https://serverfault.tech/users/15623 2021-01-15T15:16:28Z 2021-01-15T21:47:51Z <p>I've got a Debian 10.6 Host, with a Debian 10.6 guest. KVM/Qemu/libvirt. The host has a software RAID 10 array with 6 mechanical disks. LVM is on top of the RAID array. One LV is passed into the guest using:</p> <pre><code>&lt;disk type='block' device='disk'&gt; &lt;driver name='qemu' type='raw' cache='none' io='native'/&gt; &lt;source dev='/dev/raid10/lv0'/&gt; &lt;target dev='vdb' bus='virtio'/&gt; &lt;address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/&gt; &lt;/disk&gt; </code></pre> <p>The host writes to the logical volume at about 720 MB/s:</p> <pre><code>dd of=diskbench if=/dev/zero bs=30000000 count=2000 (60 GB, 56 GiB) copied, 82.7758 s, 725 MB/s (60 GB, 56 GiB) copied, 82.5263 s, 727 MB/s (60 GB, 56 GiB) copied, 83.8701 s, 715 MB/s (45 GB, 42 GiB) copied, 58.9086 s, 772 MB/s </code></pre> <p>Inside the guest though, the same test runs much slower:</p> <pre><code>dd of=diskbench if=/dev/zero bs=30000000 count=2000 (60 GB, 56 GiB) copied, 254.088 s, 236 MB/s (60 GB, 56 GiB) copied, 245.407 s, 244 MB/s (60 GB, 56 GiB) copied, 242.558 s, 247 MB/s </code></pre> <p>This system is not in production yet and not under load. 
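One caveat about the host-side numbers above: without a sync or direct-I/O flag, dd on the host measures the page cache as well as the array, while the guest disk is configured with cache='none' (O_DIRECT on the host side) and pays the full cost of every write. A fairer comparison flushes data to disk; a sketch with sizes shrunk and a scratch file assumed:

```shell
# Re-run the benchmark with the page cache taken out of the equation.
# conv=fdatasync forces the data to storage before dd reports its rate.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f "$f"
```

If the flushed (or oflag=direct) rates on host and guest come out much closer, the gap in the original numbers was largely caching rather than virtualization overhead.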
What can I check for to improve write performance?</p> https://serverfault.tech/q/1049964 0 How to enable more than 4 GB in SQL Server x86 under Windows x64 AndreaTaroni86 https://serverfault.tech/users/251151 2021-01-15T12:58:21Z 2021-01-15T20:52:15Z <p>I have read <a href="https://serverfault.tech/questions/3342/how-does-a-32-bit-machine-support-more-than-4-gb-of-ram?">here</a> that I can use more than 4 GB on x86 SQL Server systems and x86 Windows servers via the /PAE option, but my scenario is different. I have a 32-bit SQL Server Standard on Windows Server 2019 x64 Standard, and I am going to upgrade to the x64 SQL Server version. But I'm wondering: can I somehow enable my SQL Server to use more than 4 GB even while it's x86? I'm asking because with the /PAE option on an x86 server I can extend SQL Server memory (if I understand correctly), so maybe I can do something similar in an x64 Windows Server environment.</p> https://serverfault.tech/q/1049819 1 Cross Domain Authentication with ADFS (no domain trust) Björn Pahlen https://serverfault.tech/users/612482 2021-01-14T14:10:31Z 2021-01-15T22:34:00Z <p>Yesterday I was asked whether it is possible to establish cross-domain authentication with ADFS.</p> <p>Scenario:</p> <ul> <li>Two different Windows Domains (A &amp; B) without any trust configuration</li> <li>Network access between Domains is established with IPSec Site2Site (all ports need to be opened separately)</li> <li>One specific Windows Service on a server in Domain A has to use an AD Account from Domain B for logon (Windows Service -&gt; Logon -&gt; This Account -&gt; Account from Domain B)</li> </ul> <p>Our partner doesn't want to establish a domain trust due to security reasons and is therefore asking whether we could realize this authentication process through ADFS.</p> <p>ADFS is quite new to me and I'm not sure whether this scenario is even possible with ADFS.</p> https://serverfault.tech/q/1049590 0 Kubernetes - kubeadm join - Connection refused after new control plane joined Pie
https://serverfault.tech/users/324604 2021-01-12T19:05:29Z 2021-01-15T21:19:56Z <p><strong>Issue</strong></p> <p>I am trying to join a second control-plane node to a K8S cluster. The first node has an IP of 10.1.50.4, while the second node has an IP of 10.1.50.5. There is a load-balancer virtual IP of 10.1.50.250 for the control-plane nodes.</p> <p>K8S Version: 1.20.1-00</p> <p><strong>Command</strong></p> <pre><code>kubeadm join 10.1.50.4:6443 --token ozlhby.pbi2v5kp0x8ix9cl --discovery-token-ca-cert-hash sha256:7aff9979cace02a9f1e98d82253ef9a8c1594c80ea0860aba6ef628xdx7103fb --control-plane --certificate-key 3606aa528cd7d730efafcf535625577d6fx77x7cb6f90e5a8517a807065672d --v=5 </code></pre> <p><strong>Output</strong></p> <pre><code>I0112 02:20:39.801195 30603 join.go:395] [preflight] found NodeName empty; using OS hostname as NodeName I0112 02:20:39.801669 30603 join.go:399] [preflight] found advertiseAddress empty; using default interface's IP address as advertiseAddress I0112 02:20:39.802091 30603 initconfiguration.go:104] detected and using CRI socket: /var/run/dockershim.sock I0112 02:20:39.802715 30603 interface.go:400] Looking for default routes with IPv4 addresses I0112 02:20:39.802998 30603 interface.go:405] Default route transits interface &quot;ens160&quot; I0112 02:20:39.803501 30603 interface.go:208] Interface ens160 is up I0112 02:20:39.803739 30603 interface.go:256] Interface &quot;ens160&quot; has 2 addresses :[10.1.50.5/24 fe80::20c:29ff:fe2d:674d/64]. I0112 02:20:39.803903 30603 interface.go:223] Checking addr 10.1.50.5/24. I0112 02:20:39.804074 30603 interface.go:230] IP found 10.1.50.5 I0112 02:20:39.804230 30603 interface.go:262] Found valid IPv4 address 10.1.50.5 for interface &quot;ens160&quot;.
I0112 02:20:39.804356 30603 interface.go:411] Found active IP 10.1.50.5 [preflight] Running pre-flight checks I0112 02:20:39.804727 30603 preflight.go:90] [preflight] Running general checks I0112 02:20:39.804935 30603 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests I0112 02:20:39.805227 30603 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf I0112 02:20:39.805375 30603 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf I0112 02:20:39.805501 30603 checks.go:102] validating the container runtime I0112 02:20:39.957746 30603 checks.go:128] validating if the &quot;docker&quot; service is enabled and active [WARNING IsDockerSystemdCheck]: detected &quot;cgroupfs&quot; as the Docker cgroup driver. The recommended driver is &quot;systemd&quot;. Please follow the guide at https://kubernetes.io/docs/setup/cri/ I0112 02:20:40.118312 30603 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables I0112 02:20:40.118439 30603 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward I0112 02:20:40.118525 30603 checks.go:649] validating whether swap is enabled or not I0112 02:20:40.118634 30603 checks.go:376] validating the presence of executable conntrack I0112 02:20:40.118786 30603 checks.go:376] validating the presence of executable ip I0112 02:20:40.118920 30603 checks.go:376] validating the presence of executable iptables I0112 02:20:40.118991 30603 checks.go:376] validating the presence of executable mount I0112 02:20:40.119140 30603 checks.go:376] validating the presence of executable nsenter I0112 02:20:40.119218 30603 checks.go:376] validating the presence of executable ebtables I0112 02:20:40.119310 30603 checks.go:376] validating the presence of executable ethtool I0112 02:20:40.119369 30603 checks.go:376] validating the presence of executable socat I0112 02:20:40.119434 30603 checks.go:376] validating the 
presence of executable tc I0112 02:20:40.119508 30603 checks.go:376] validating the presence of executable touch I0112 02:20:40.119601 30603 checks.go:520] running all checks I0112 02:20:40.274926 30603 checks.go:406] checking whether the given node name is reachable using net.LookupHost I0112 02:20:40.275311 30603 checks.go:618] validating kubelet version I0112 02:20:40.459593 30603 checks.go:128] validating if the &quot;kubelet&quot; service is enabled and active I0112 02:20:40.489282 30603 checks.go:201] validating availability of port 10250 I0112 02:20:40.489826 30603 checks.go:432] validating if the connectivity type is via proxy or direct I0112 02:20:40.490313 30603 join.go:465] [preflight] Discovering cluster-info I0112 02:20:40.490582 30603 token.go:78] [discovery] Created cluster-info discovery client, requesting info from &quot;10.1.50.4:6443&quot; I0112 02:20:40.511725 30603 token.go:116] [discovery] Requesting info from &quot;10.1.50.4:6443&quot; again to validate TLS against the pinned public key I0112 02:20:40.527163 30603 token.go:133] [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server &quot;10.1.50.4:6443&quot; I0112 02:20:40.527277 30603 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process I0112 02:20:40.527323 30603 join.go:479] [preflight] Fetching init configuration I0112 02:20:40.527372 30603 join.go:517] [preflight] Retrieving KubeConfig objects [preflight] Reading configuration from the cluster... 
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' I0112 02:20:40.561702 30603 interface.go:400] Looking for default routes with IPv4 addresses I0112 02:20:40.561742 30603 interface.go:405] Default route transits interface &quot;ens160&quot; I0112 02:20:40.562257 30603 interface.go:208] Interface ens160 is up I0112 02:20:40.562548 30603 interface.go:256] Interface &quot;ens160&quot; has 2 addresses :[10.1.50.5/24 fe80::20c:29ff:fe2d:674d/64]. I0112 02:20:40.562680 30603 interface.go:223] Checking addr 10.1.50.5/24. I0112 02:20:40.562745 30603 interface.go:230] IP found 10.1.50.5 I0112 02:20:40.562774 30603 interface.go:262] Found valid IPv4 address 10.1.50.5 for interface &quot;ens160&quot;. I0112 02:20:40.562800 30603 interface.go:411] Found active IP 10.1.50.5 I0112 02:20:40.576707 30603 preflight.go:101] [preflight] Running configuration dependant checks [preflight] Running pre-flight checks before initializing the new control plane instance I0112 02:20:40.577061 30603 checks.go:577] validating Kubernetes and kubeadm version I0112 02:20:40.577369 30603 checks.go:166] validating if the firewall is enabled and active I0112 02:20:40.598127 30603 checks.go:201] validating availability of port 6443 I0112 02:20:40.598485 30603 checks.go:201] validating availability of port 10259 I0112 02:20:40.598744 30603 checks.go:201] validating availability of port 10257 I0112 02:20:40.598987 30603 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml I0112 02:20:40.599271 30603 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml I0112 02:20:40.599481 30603 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml I0112 02:20:40.599533 30603 checks.go:286] validating the existence of file /etc/kubernetes/manifests/etcd.yaml I0112 02:20:40.599686 30603 checks.go:432] validating if the connectivity 
type is via proxy or direct I0112 02:20:40.599762 30603 checks.go:471] validating http connectivity to first IP address in the CIDR I0112 02:20:40.600028 30603 checks.go:471] validating http connectivity to first IP address in the CIDR I0112 02:20:40.600350 30603 checks.go:201] validating availability of port 2379 I0112 02:20:40.600510 30603 checks.go:201] validating availability of port 2380 I0112 02:20:40.600840 30603 checks.go:249] validating the existence and emptiness of directory /var/lib/etcd [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' I0112 02:20:40.699836 30603 checks.go:839] image exists: k8s.gcr.io/kube-apiserver:v1.20.1 I0112 02:20:40.796995 30603 checks.go:839] image exists: k8s.gcr.io/kube-controller-manager:v1.20.1 I0112 02:20:40.889726 30603 checks.go:839] image exists: k8s.gcr.io/kube-scheduler:v1.20.1 I0112 02:20:40.977887 30603 checks.go:839] image exists: k8s.gcr.io/kube-proxy:v1.20.1 I0112 02:20:41.072019 30603 checks.go:839] image exists: k8s.gcr.io/pause:3.2 I0112 02:20:41.164679 30603 checks.go:839] image exists: k8s.gcr.io/etcd:3.4.13-0 I0112 02:20:41.255987 30603 checks.go:839] image exists: k8s.gcr.io/coredns:1.7.0 [download-certs] Downloading the certificates in Secret &quot;kubeadm-certs&quot; in the &quot;kube-system&quot; Namespace [certs] Using certificateDir folder &quot;/etc/kubernetes/pki&quot; I0112 02:20:41.270660 30603 certs.go:45] creating PKI assets I0112 02:20:41.271129 30603 certs.go:474] validating certificate period for ca certificate [certs] Generating &quot;apiserver-kubelet-client&quot; certificate and key [certs] Generating &quot;apiserver&quot; certificate and key [certs] apiserver serving cert is signed for DNS names [k8s-master-1 kube.local kubernetes kubernetes.default kubernetes.default.svc 
kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.50.5 10.1.50.250] I0112 02:20:42.284014 30603 certs.go:474] validating certificate period for front-proxy-ca certificate [certs] Generating &quot;front-proxy-client&quot; certificate and key I0112 02:20:42.412481 30603 certs.go:474] validating certificate period for etcd/ca certificate [certs] Generating &quot;etcd/server&quot; certificate and key [certs] etcd/server serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [10.1.50.5 127.0.0.1 ::1] [certs] Generating &quot;etcd/peer&quot; certificate and key [certs] etcd/peer serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [10.1.50.5 127.0.0.1 ::1] [certs] Generating &quot;etcd/healthcheck-client&quot; certificate and key [certs] Generating &quot;apiserver-etcd-client&quot; certificate and key [certs] Valid certificates and keys now exist in &quot;/etc/kubernetes/pki&quot; I0112 02:20:44.631172 30603 certs.go:76] creating new public/private key files for signing service account users [certs] Using the existing &quot;sa&quot; key [kubeconfig] Generating kubeconfig files [kubeconfig] Using kubeconfig folder &quot;/etc/kubernetes&quot; [kubeconfig] Writing &quot;admin.conf&quot; kubeconfig file [kubeconfig] Writing &quot;controller-manager.conf&quot; kubeconfig file [kubeconfig] Writing &quot;scheduler.conf&quot; kubeconfig file [control-plane] Using manifest folder &quot;/etc/kubernetes/manifests&quot; [control-plane] Creating static Pod manifest for &quot;kube-apiserver&quot; I0112 02:20:45.370294 30603 manifests.go:96] [control-plane] getting StaticPodSpecs I0112 02:20:45.370640 30603 certs.go:474] validating certificate period for CA certificate I0112 02:20:45.370743 30603 manifests.go:109] [control-plane] adding volume &quot;ca-certs&quot; for component &quot;kube-apiserver&quot; I0112 02:20:45.370767 30603 manifests.go:109] [control-plane] adding volume &quot;etc-ca-certificates&quot; for component 
&quot;kube-apiserver&quot; I0112 02:20:45.370779 30603 manifests.go:109] [control-plane] adding volume &quot;k8s-certs&quot; for component &quot;kube-apiserver&quot; I0112 02:20:45.370790 30603 manifests.go:109] [control-plane] adding volume &quot;usr-local-share-ca-certificates&quot; for component &quot;kube-apiserver&quot; I0112 02:20:45.370802 30603 manifests.go:109] [control-plane] adding volume &quot;usr-share-ca-certificates&quot; for component &quot;kube-apiserver&quot; I0112 02:20:45.381917 30603 manifests.go:126] [control-plane] wrote static Pod manifest for component &quot;kube-apiserver&quot; to &quot;/etc/kubernetes/manifests/kube-apiserver.yaml&quot; [control-plane] Creating static Pod manifest for &quot;kube-controller-manager&quot; I0112 02:20:45.381975 30603 manifests.go:96] [control-plane] getting StaticPodSpecs I0112 02:20:45.382292 30603 manifests.go:109] [control-plane] adding volume &quot;ca-certs&quot; for component &quot;kube-controller-manager&quot; I0112 02:20:45.382324 30603 manifests.go:109] [control-plane] adding volume &quot;etc-ca-certificates&quot; for component &quot;kube-controller-manager&quot; I0112 02:20:45.382336 30603 manifests.go:109] [control-plane] adding volume &quot;flexvolume-dir&quot; for component &quot;kube-controller-manager&quot; I0112 02:20:45.382347 30603 manifests.go:109] [control-plane] adding volume &quot;k8s-certs&quot; for component &quot;kube-controller-manager&quot; I0112 02:20:45.382357 30603 manifests.go:109] [control-plane] adding volume &quot;kubeconfig&quot; for component &quot;kube-controller-manager&quot; I0112 02:20:45.382367 30603 manifests.go:109] [control-plane] adding volume &quot;usr-local-share-ca-certificates&quot; for component &quot;kube-controller-manager&quot; I0112 02:20:45.382377 30603 manifests.go:109] [control-plane] adding volume &quot;usr-share-ca-certificates&quot; for component &quot;kube-controller-manager&quot; I0112 02:20:45.383243 30603 manifests.go:126] [control-plane] wrote 
static Pod manifest for component &quot;kube-controller-manager&quot; to &quot;/etc/kubernetes/manifests/kube-controller-manager.yaml&quot; [control-plane] Creating static Pod manifest for &quot;kube-scheduler&quot; I0112 02:20:45.383285 30603 manifests.go:96] [control-plane] getting StaticPodSpecs I0112 02:20:45.383551 30603 manifests.go:109] [control-plane] adding volume &quot;kubeconfig&quot; for component &quot;kube-scheduler&quot; I0112 02:20:45.384124 30603 manifests.go:126] [control-plane] wrote static Pod manifest for component &quot;kube-scheduler&quot; to &quot;/etc/kubernetes/manifests/kube-scheduler.yaml&quot; [check-etcd] Checking that the etcd cluster is healthy I0112 02:20:45.391793 30603 local.go:80] [etcd] Checking etcd cluster health I0112 02:20:45.391826 30603 local.go:83] creating etcd client that connects to etcd pods I0112 02:20:45.391841 30603 etcd.go:177] retrieving etcd endpoints from &quot;kubeadm.kubernetes.io/etcd.advertise-client-urls&quot; annotation in etcd Pods I0112 02:20:45.436952 30603 etcd.go:101] etcd endpoints read from pods: https://10.1.50.4:2379 I0112 02:20:45.467237 30603 etcd.go:247] etcd endpoints read from etcd: https://10.1.50.4:2379 I0112 02:20:45.467292 30603 etcd.go:119] update etcd endpoints: https://10.1.50.4:2379 I0112 02:20:45.497258 30603 kubelet.go:110] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf I0112 02:20:45.499069 30603 kubelet.go:139] [kubelet-start] Checking for an existing Node in the cluster with name &quot;k8s-master-1&quot; and status &quot;Ready&quot; I0112 02:20:45.506135 30603 kubelet.go:153] [kubelet-start] Stopping the kubelet [kubelet-start] Writing kubelet configuration to file &quot;/var/lib/kubelet/config.yaml&quot; [kubelet-start] Writing kubelet environment file with flags to file &quot;/var/lib/kubelet/kubeadm-flags.env&quot; [kubelet-start] Starting the kubelet [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... 
I0112 02:20:50.940170 30603 cert_rotation.go:137] Starting client certificate rotation controller I0112 02:20:50.946669 30603 kubelet.go:188] [kubelet-start] preserving the crisocket information for the node I0112 02:20:50.946719 30603 patchnode.go:30] [patchnode] Uploading the CRI Socket information &quot;/var/run/dockershim.sock&quot; to the Node API object &quot;k8s-master-1&quot; as an annotation I0112 02:21:01.078081 30603 local.go:148] creating etcd client that connects to etcd pods I0112 02:21:01.078135 30603 etcd.go:177] retrieving etcd endpoints from &quot;kubeadm.kubernetes.io/etcd.advertise-client-urls&quot; annotation in etcd Pods I0112 02:21:01.130781 30603 etcd.go:101] etcd endpoints read from pods: https://10.1.50.4:2379 I0112 02:21:01.240220 30603 etcd.go:247] etcd endpoints read from etcd: https://10.1.50.4:2379 I0112 02:21:01.240255 30603 etcd.go:119] update etcd endpoints: https://10.1.50.4:2379 I0112 02:21:01.240812 30603 local.go:156] [etcd] Getting the list of existing members I0112 02:21:01.282237 30603 local.go:164] [etcd] Checking if the etcd member already exists: https://10.1.50.5:2380 I0112 02:21:01.282791 30603 local.go:175] [etcd] Adding etcd member: https://10.1.50.5:2380 [etcd] Announced new etcd member joining to the existing etcd cluster I0112 02:21:01.370283 30603 local.go:182] Updated etcd member list: [{k8s-master-1 https://10.1.50.5:2380} {k8s-master-0 https://10.1.50.4:2380}] [etcd] Creating static Pod manifest for &quot;etcd&quot; [etcd] Waiting for the new etcd member to join the cluster. 
This can take up to 40s I0112 02:21:01.372930 30603 etcd.go:488] [etcd] attempting to see if all cluster endpoints ([https://10.1.50.4:2379 https://10.1.50.5:2379]) are available 1/8 I0112 02:21:03.455137 30603 etcd.go:468] Failed to get etcd status for https://10.1.50.5:2379: failed to dial endpoint https://10.1.50.5:2379 with maintenance client: context deadline exceeded [upload-config] Storing the configuration used in ConfigMap &quot;kubeadm-config&quot; in the &quot;kube-system&quot; Namespace [mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the labels &quot;node-role.kubernetes.io/master=''&quot; and &quot;node-role.kubernetes.io/control-plane='' (deprecated)&quot; [mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] This node has joined the cluster and a new control plane instance was created: * Certificate signing request was sent to apiserver and approval was received. * The Kubelet was informed of the new secure connection details. * Control plane (master) label and taint were applied to the new node. * The Kubernetes control plane instances scaled up. * A new etcd member was added to the local/stacked etcd cluster. To start administering your cluster from this node, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config Run 'kubectl get nodes' to see this node join the cluster. 
</code></pre> <p><strong>Network Tests - 10.1.50.4</strong></p> <p><code>kubectl get nodes</code></p> <p><a href="https://i.stack.imgur.com/MTVFI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MTVFI.png" alt="enter image description here" /></a></p> <p>10.1.50.4 &gt; <code>lsof -i -P -n | grep LISTEN</code></p> <p><a href="https://i.stack.imgur.com/kItjK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kItjK.png" alt="enter image description here" /></a></p> <p>Installed etcd-client and ran <code>etcdctl member list</code> (after 10.1.50.5 tried to join)</p> <p><a href="https://i.stack.imgur.com/MpHeX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MpHeX.png" alt="enter image description here" /></a></p> <p><code>etcdctl cluster-health</code> (after 10.1.50.5 tried to join)</p> <p><a href="https://i.stack.imgur.com/qLnTZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qLnTZ.png" alt="enter image description here" /></a></p> <p><code>systemctl restart network</code> (after 10.1.50.5 tried to join)</p> <p><a href="https://i.stack.imgur.com/Wh8im.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Wh8im.png" alt="enter image description here" /></a></p> <p><code>etcdctl --version</code> (after 10.1.50.5 tried to join)</p> <p><a href="https://i.stack.imgur.com/Xn4n7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Xn4n7.png" alt="enter image description here" /></a></p> <p><code>kubeadm version</code> (after 10.1.50.5 tried to join)</p> <p><a href="https://i.stack.imgur.com/oaoJ7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oaoJ7.png" alt="enter image description here" /></a></p> <p><code>kubectl get nodes (after 10.1.50.5 tried to join)</code></p> <p><a href="https://i.stack.imgur.com/8zRgy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8zRgy.png" alt="enter image description here" /></a></p> <p><strong>Network Tests - 
10.1.50.5 - Before Join</strong></p> <p><code>route -n</code></p> <p><a href="https://i.stack.imgur.com/az8DW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/az8DW.png" alt="enter image description here" /></a></p> <p><code>nmap -p 6443 10.1.50.4</code></p> <p><a href="https://i.stack.imgur.com/L7qnJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L7qnJ.png" alt="enter image description here" /></a></p> <p><code>ping 10.1.50.4</code></p> <p><a href="https://i.stack.imgur.com/tdMRV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tdMRV.png" alt="enter image description here" /></a></p> <p><code>ping 10.1.50.250</code></p> <p><a href="https://i.stack.imgur.com/u55iF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u55iF.png" alt="enter image description here" /></a></p> <p><strong>Network Tests - 10.1.50.5 - After Join</strong></p> <p><code>route -n</code></p> <p>Same</p> <p><code>nmap -p 6443 10.1.50.4</code></p> <p><a href="https://i.stack.imgur.com/nHSMJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nHSMJ.png" alt="enter image description here" /></a></p> <p><code>ping 10.1.50.4</code></p> <p>Same</p> <p><code>ping 10.1.50.250</code></p> <p>Same</p> https://serverfault.tech/q/1049552 0 Is the HA option for Cloud SQL also a matter of data redundancy? sbrattla https://serverfault.tech/users/44680 2021-01-12T15:31:29Z 2021-01-15T21:03:23Z <p>I'm looking at the GCE &quot;Cloud SQL&quot; product, and specifically the MySQL flavour. I'm a bit confused about how this product operates in non-HA mode.</p> <p>I understand that in HA mode, you have a standby replica ready to kick in if your primary instance becomes unavailable.</p> <p>However, what happens if a non-HA instance becomes unavailable? Will GCE always, eventually at some point, bring it back up again? When it is brought up again, would the worst case scenario then be that your data is 24 hours old (last daily backup)?
Or is manual work actually required in this case; e.g. do you need to provision a new instance based on a backup yourself?</p> <p>It appears to me that the non-HA mode has a guarantee of 99.95% availability. Is the HA mode then what covers your back during the remaining 0.05% of the time?</p> <p><a href="https://cloud.google.com/sql" rel="nofollow noreferrer">Google says</a>:</p> <blockquote> <p>Cloud SQL automates all your backups, replication, encryption patches, and capacity increases—while ensuring greater than 99.95% availability, anywhere in the world.</p> </blockquote> https://serverfault.tech/q/1048799 -1 BIND9 does not listen on external ip slawekh666 https://serverfault.tech/users/611236 2021-01-06T12:29:16Z 2021-01-15T20:59:15Z <p>I'm trying to configure BIND to serve my domain to the external network, but my server is not listening on the external IP.</p> <p>Everything works fine locally.</p> <p>My network interfaces:</p> <pre><code>enp4s0: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet INTERNAL_IP  netmask 255.255.255.0  broadcast 192.168.55.255
        inet6 INTERNAL_IP_V6  prefixlen 64  scopeid 0x20&lt;link&gt;

lo: flags=73&lt;UP,LOOPBACK,RUNNING&gt;  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10&lt;host&gt;
        loop  txqueuelen 1000  (Local Loopback)
</code></pre> <p>even with iptables turned off:</p> <pre><code>sudo iptables -L -n
Chain INPUT (policy DROP)
target     prot opt source      destination
ACCEPT     all  --  0.0.0.0/0   0.0.0.0/0

Chain FORWARD (policy DROP)
target     prot opt source      destination
ACCEPT     all  --  0.0.0.0/0   0.0.0.0/0

Chain OUTPUT (policy ACCEPT)
target     prot opt source      destination
ACCEPT     all  --  0.0.0.0/0   0.0.0.0/0
</code></pre> <p>named.conf.options:</p> <pre><code>options {
    directory &quot;/var/cache/bind&quot;;
    statistics-file &quot;/run/named/named.stats&quot;;
    pid-file &quot;/run/named/named.pid&quot;;
    session-keyfile &quot;/run/named/session.key&quot;;
    forwarders {
        x.x.x.x; # gateway or router
        8.8.8.8;
        8.8.4.4;
        external ip;
    };
    dnssec-validation auto;
    listen-on-v6 port 53 { any; };
    listen-on port 53 { any; };
    auth-nxdomain yes; # conform to RFC1035
    allow-query { any; }; # allow anyone to issue queries
    allow-recursion { any; };
    allow-query-cache { any; };
    recursion no; # disallow recursive queries
    version &quot;[SECURED]&quot;;
};
</code></pre> <p>netstat:</p> <pre><code>netstat -tulpn | grep :53
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp    0   0 INTERNAL_IP:53   0.0.0.0:*   LISTEN   -
tcp    0   0 127.0.0.1:53     0.0.0.0:*   LISTEN   -
tcp6   0   0 :::53            :::*        LISTEN   -
udp    0   0 INTERNAL_IP:53   0.0.0.0:*            -
udp    0   0 127.0.0.1:53     0.0.0.0:*            -
udp    0   0 0.0.0.0:5353     0.0.0.0:*            -
udp6   0   0 :::53            :::*                 -
udp6   0   0 :::5353          :::*                 -
</code></pre> <p>ufw:</p> <pre><code>sudo ufw status
Status: active

To         Action   From
--         ------   ----
DNS        ALLOW    Anywhere
DNS (v6)   ALLOW    Anywhere (v6)
</code></pre> <p>nmap shows:</p> <pre><code>53/tcp filtered domain
</code></pre> <p>What am I doing wrong?</p> <p>Thank you in advance for your help, Slawek.</p> https://serverfault.tech/q/1030046 1 I cannot expand raid 5 capacity on an Adaptec RAID 71605 Jan https://serverfault.tech/users/587825 2020-08-13T20:36:26Z 2021-01-15T22:04:44Z <p>I have a Supermicro CSE-815 X8DTU-F server.</p> <p>In it I have installed an Adaptec RAID 71605 controller with 4 SAS HDDs.</p> <p>With them I set up a RAID 5 with 3 disks only. Now I have connected the fourth disk and I want to expand the array to a RAID 5 with all 4 disks.</p> <p>I have done all the management of the adapter through &quot;maxView Storage Manager&quot;, but at the moment of expanding the RAID it does not let me add the new disk. It just appears but won't let me select it (see image).</p> <p><img src="https://i.stack.imgur.com/smBhr.png" alt="Try to add hdd" /></p> <p>It is my first post and I hope I have explained myself. I don't know if someone has run into this and can help me, or if it is some wrong configuration.
Thanks for the help.</p> <p>Hi all. I think I have found the problem. Reviewing the screenshot I see that the new HDD is detected as 1.023 GB (1,047.55 MB) smaller. This does not make sense since the 4 HDDs are identical. The disk is empty so I don't know why it says it is smaller. How can I reduce the size of the RAID 5 or fix this? Thanks.</p> https://serverfault.tech/q/1007880 0 How do I run all Ansible plays at first host (all of them), then at second host (all of them) and so on - hosts one by one? Charles_Ashley https://serverfault.tech/users/565059 2020-03-22T13:19:49Z 2021-01-15T22:03:55Z <p>This is for a set of hosts where no more than one host with a service may be down at a time, and a host's setup may be a complex routine. I already tried <code>serial: 1</code> (it's Rolling Update in Ansible terms), <code>--limit $host</code>, <code>--forks 1</code>. All of them work in an undesirable way: they still run play by play instead of host by host.</p> <p>Here are the current properties and the desired ones for a solution (it's the subject of the question, and it's possible to rewrite the solution from scratch too):</p> <ul> <li>I have a set of playbooks - a ready-for-use solution.</li> <li>I want to run this solution against each host one by one.</li> <li>Inventory is created with Python logic before the launch of a play (it's done).</li> <li>Hosts are already spread across an arbitrary set of groups. Certain hosts are members of one group only, certain hosts are members of another group only. 
I have a <em>dynamic</em> inventory (Ansible's dynamic inventory feature), and the inventory always has an auto-generated group with a plain list of <em>all</em> hosts involved.</li> </ul> <p>Looking for how to:</p> <ul> <li>All plays should run on the first host and finish.</li> <li>All plays should run on the second host and finish.</li> <li>And so on for an arbitrary number of hosts.</li> <li>If a host is not in a group targeted by a playbook, the play should not be applied to that host.</li> </ul> <p>Please advise: how can I achieve this?</p> <p>Below are simplified parts of the play set. It has been created and generally it works.</p> <p>My top level playbook <code>site.yaml</code></p> <pre><code>---
- name: Site set up
  hosts:
    - masters
    - replicas
  serial: 1
  roles:
    - role: do-01
    - role: do-02

- import_playbook: play-do-11.yaml
- import_playbook: play-do-12.yaml
</code></pre> <p>I have playbooks <code>play-do-11.yaml</code>, <code>play-do-12.yaml</code> like this:</p> <pre><code>---
- name: play-do-11
  hosts: satellites
  serial: 1
  roles:
    - role: actor
</code></pre> <p>I'm starting the Ansible playbook in this way:</p> <pre><code>for single_host in host-a host-b host-c ; do
  ansible-playbook \
    --forks 1 \
    --limit "$single_host" \
    --inventory inventory.json \
    "site.yaml"
done
</code></pre> <p>P.S. It's out of scope, but it adds flexibility to the solution I'm looking for. In fact I have a dynamic inventory. It can be launched before any other items launch. There is an auto-added group with a plain list of all hosts in the inventory. Thus, I can create pre-generated JSON, and before a launch I have all host names as plain strings. It is used in a shell launcher like this:</p> <pre><code>for host in $( cat inventory.json | jq -r ".\"group-with-all\".hosts | keys[]" ) ; do
  ansible-playbook --limit "${host}" ...
done </code></pre> <p>It's good enough for automation: host grouping and properties are still managed in one place.</p> https://serverfault.tech/q/910759 5 RDP session Flickering - Only one user RazZ https://serverfault.tech/users/427696 2018-05-04T15:16:25Z 2021-01-15T23:00:45Z <p>So, this is a tricky one.</p> <p>I use a system where users connect via VPN to a Terminal Server on another site. It is working fine for everybody but one user. For this user the screen is flickering/flashing/refreshing constantly and the session is unusable. It did work two days ago. No changes known by the user since then. The computer is Windows 10 and the server is Server 2016.</p> <p>What I did:</p> <pre><code>play around with caching, resolution, color depth (in mstsc settings) ... &gt; same flickering
erase the remote desktop cache &gt; same flickering
log in with mstsc to another server from his computer &gt; works fine
try to connect with his user on my own computer &gt; same flickering
download the other Remote Desktop client from the Microsoft Store &gt; same flickering
try to update network card driver, video driver &gt; all up to date
windows update &gt; same flickering
check network settings &gt; everything is fine
properly sign out the session of the user on the server &gt; same flickering
erase the user profile from the server &gt; same flickering
</code></pre> <p>Performance of the PC is all good. No overload of network, RAM or CPU.</p> <p>So I am clueless... any other ideas would be greatly appreciated.</p> <p>Ben</p> https://serverfault.tech/q/744078 4 Permission denied: Could not open password file. user1486269 https://serverfault.tech/users/327918 2015-12-18T20:05:58Z 2021-01-15T22:29:08Z <p>I am using Apache on Red Hat.</p> <p>I have a .htaccess in /var/www/html with the following permissions:</p> <pre><code>-rwxr-xr-x. 
1 apache apache 127 Dec 18 14:17 .htaccess </code></pre> <p>.htaccess has the following content:</p> <pre><code>AuthType Basic
AuthName "Restricted Access"
AuthUserFile /var/www/html/server-auth/.htpasswd
Require user manu
</code></pre> <p>Permissions on /var/www/html/server-auth/.htpasswd:</p> <pre><code>-rwxr-xr-x. 1 apache apache 40 Dec 16 19:11 .htpasswd </code></pre> <p>When I open my web page in the browser and enter the username and password, the login prompt reappears, even if the username and password are correct.</p> <p>Error logs:</p> <p>(13) Permission denied: Could not open password file: /var/www/html/server-auth/.htpasswd</p> <p>access to / failed, reason: verification of user id 'manu' not configured</p> <p>Any help!</p> https://serverfault.tech/q/661909 54 The right way to keep docker container started when it used for periodic tasks Korjavin Ivan https://serverfault.tech/users/90115 2015-01-23T16:24:20Z 2021-01-15T20:05:13Z <p>I have a docker container with installed and configured software.</p> <p>There is no program that is supposed to be started/running all the time.</p> <p>What I want is the ability to start some command depending on external events, like:</p> <pre><code>docker exec mysupercont /path/to/mycommand -bla -for </code></pre> <p>and </p> <pre><code>docker exec mysupercont /path/to/myothercommand </code></pre> <p>But "exec" is impossible when the container is stopped, and this container also has some "working" data inside, which is used by those commands, so I can't use</p> <pre><code>docker run ... </code></pre> <p>each time, because that recreates the container from the image and destroys my data.</p> <p>What is the "right" and the "best" way to keep such a container running? 
Which command I can start inside?</p> https://serverfault.tech/q/642064 0 Postgres Remote Connection Timeout phouse512 https://serverfault.tech/users/252353 2014-11-05T01:07:45Z 2021-01-15T21:04:55Z <p>I've been reading tutorials extensively to help figure out my problem, but to no avail. </p> <p>I have a Redhat VM that I've installed Postgres on that I'm trying to make available for remote connections. When I'm on the machine, if I run the following command, I am able to connect to my desired table.</p> <pre><code>psql -U philhouse -d pwap </code></pre> <p>However, when I try running this:</p> <pre><code>psql -U philhouse -d pwap -h servername.nu.edu </code></pre> <p>I always time out. </p> <p>I've read multiple tutorials and guides, but still cannot figure it out. I've edited pg_hba.conf file, my postgres.conf file, as well as <em>attempted</em> to work with iptables. Here are the outputs of the following just so that you can see:</p> <p><strong>pg_hba.conf</strong> I've changed to allow all connections for testing purposes</p> <pre><code># TYPE DATABASE USER CIDR-ADDRESS METHOD # "local" is for Unix domain socket connections only local all all trust # IPv4 local connections: host all all 0.0.0.0/0 trust # IPv6 local connections: host all all ::1/128 trust </code></pre> <p><strong>postgres.conf</strong> - here I changed listen_addresses to * just for testing purposes</p> <pre><code>#------------------------------------------------------------------------------ # CONNECTIONS AND AUTHENTICATION #------------------------------------------------------------------------------ # - Connection Settings - #listen_addresses = '*' # what IP address(es) to listen on; # comma-separated list of addresses; # defaults to 'localhost', '*' = all # (change requires restart) #port = 5432 # (change requires restart) max_connections = 100 # (change requires restart) # Note: Increasing max_connections costs ~400 bytes of shared memory per # connection slot, plus lock space (see 
max_locks_per_transaction). #superuser_reserved_connections = 3 # (change requires restart) #unix_socket_directory = '' # (change requires restart) #unix_socket_group = '' # (change requires restart) #unix_socket_permissions = 0777 # begin with 0 to use octal notation # (change requires restart) #bonjour_name = '' # defaults to the computer name # (change requires restart) # - Security and Authentication - #authentication_timeout = 1min # 1s-600s #ssl = off # (change requires restart) #ssl_ciphers = 'ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH' # allowed SSL ciphers # (change requires restart) #ssl_renegotiation_limit = 512MB # amount of data between renegotiations #password_encryption = on #db_user_namespace = off </code></pre> <p>** iptables ** I just attempted this .. not 100% sure what to do here.</p> <pre><code># Firewall configuration written by system-config-securitylevel # Manual customization of this file is not recommended. *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :NETBKP - [0:0] :RH-Firewall-1-INPUT - [0:0] -A INPUT -p tcp -m multiport --dports 13720,13721,13782,13724,13783,13722,13723 -j NETBKP -A INPUT -s 127.0.0.1 -d 127.0.0.1 -j ACCEPT #-A INPUT -s 129.105.214.0/255.255.255.0 -p tcp -m tcp --dport 22 -j ACCEPT #-A INPUT -s 129.105.106.0/255.255.255.128 -p tcp -m tcp --dport 22 -j ACCEPT #-A INPUT -s 165.124.200.32/255.255.255.240 -p tcp -m tcp --dport 22 -j ACCEPT -A INPUT -s 129.105.106.0/255.255.255.128 -p udp -m udp --dport 161 -j ACCEPT # #FP 121688 -A INPUT -s 129.105.0.0/255.255.0.0 -p tcp -m tcp --dport 22 -j ACCEPT -A INPUT -s 165.124.0.0/255.255.0.0 -p tcp -m tcp --dport 22 -j ACCEPT -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT -A INPUT -p tcp -m tcp --dport 3000 -j ACCEPT -A INPUT -p tcp -m tcp --dport 5432 -j ACCEPT # # #-A INPUT -s 129.105.119.86 -j ACCEPT -A INPUT -j RH-Firewall-1-INPUT -A NETBKP -s 129.105.106.0/255.255.255.128 -j ACCEPT -A NETBKP -s 129.105.208.18 -j 
ACCEPT -A NETBKP -s 129.105.208.20 -j ACCEPT -A NETBKP -s 129.105.208.82 -j ACCEPT -A NETBKP -s 129.105.208.115 -j ACCEPT -A NETBKP -s 129.105.208.116 -j ACCEPT -A NETBKP -s 129.105.215.131 -j ACCEPT -A NETBKP -s 165.124.61.0/255.255.255.128 -j ACCEPT -A NETBKP -j REJECT --reject-with icmp-port-unreachable -A FORWARD -j RH-Firewall-1-INPUT -A RH-Firewall-1-INPUT -i lo -j ACCEPT -A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT -A RH-Firewall-1-INPUT -p 50 -j ACCEPT -A RH-Firewall-1-INPUT -p 51 -j ACCEPT -A RH-Firewall-1-INPUT -p udp --dport 5353 -d 224.0.0.251 -j ACCEPT -A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT -A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT #-A RH-Firewall-1-INPUT -j LOG -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited COMMIT </code></pre> <p>Any thoughts or things that I'm missing? I've been working on this for 3 days and haven't made a bit of progress...</p> <p>Thanks for the help!!</p> https://serverfault.tech/q/613747 0 VirtualHost Server Not Found swtdrgn https://serverfault.tech/users/233497 2014-07-18T19:42:57Z 2021-01-15T22:03:55Z <p>After modifying the httpd.conf to include my virtual host configuration: <code>Include /private/etc/apache2/extra/httpd-vhosts.conf</code>, I added the following to my virtual host configuration file:</p> <pre><code># # Virtual Hosts # # If you want to maintain multiple domains/hostnames on your # machine you can setup VirtualHost containers for them. Most configurations # use only name-based virtual hosts so the server doesn't need to worry about # IP addresses. This is indicated by the asterisks in the directives below. # # Please see the documentation at # &lt;URL:http://httpd.apache.org/docs/2.2/vhosts/&gt; # for further details before you try to setup virtual hosts. # # You may use the command line option '-S' to verify your virtual host # configuration. # # Use name-based virtual hosting. 
# NameVirtualHost *:8085 # # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for all requests that do not # match a ServerName or ServerAlias in any &lt;VirtualHost&gt; block. # #&lt;VirtualHost *:80&gt; # ServerAdmin webmaster@dummy-host.example.com # DocumentRoot "/usr/docs/dummy-host.example.com" # ServerName dummy-host.example.com # ServerAlias www.dummy-host.example.com # ErrorLog "/private/var/log/apache2/dummy-host.example.com-error_log" # CustomLog "/private/var/log/apache2/dummy-host.example.com-access_log" common #&lt;/VirtualHost&gt; #&lt;VirtualHost *:80&gt; # ServerAdmin webmaster@dummy-host2.example.com # DocumentRoot "/usr/docs/dummy-host2.example.com" # ServerName dummy-host2.example.com # ErrorLog "/private/var/log/apache2/dummy-host2.example.com-error_log" # CustomLog "/private/var/log/apache2/dummy-host2.example.com-access_log" common #&lt;/VirtualHost&gt; &lt;VirtualHost *:8085&gt; ServerName 127.0.0.1:8085 DocumentRoot "/Users/path/to/root/" &lt;/VirtualHost&gt; </code></pre> <p>Then I restarted my apache server, and it says that 127.0.0.1 cannot be found. I checked everything that I did is correct, but I could not figure what the problem is.</p> <ul> <li>The path to the directory that I want to serve exist.</li> <li><code>sudo apachectl -S</code> returns</li> </ul> <p>></p> <pre><code>VirtualHost configuration: wildcard NameVirtualHosts and _default_ servers: *:8085 is a NameVirtualHost default server 127.0.0.1 (/private/etc/apache2/extra/httpd-vhosts.conf:44) port 8085 namevhost 127.0.0.1 (/private/etc/apache2/extra/httpd-vhosts.conf:44) Syntax OK </code></pre> <p>&lt;</p> <p>Can someone point to me how I can get my virtual host running?</p> <p>[EDIT]</p> <p>Error Log (<code>/var/log/apache2/error_log</code>):</p> <pre><code>[Sat Jul 19 19:12:38 2014] [notice] SIGHUP received. 
Attempting to restart [Sat Jul 19 19:12:38 2014] [notice] Digest: generating secret for digest authentication ... [Sat Jul 19 19:12:38 2014] [notice] Digest: done [Sat Jul 19 19:12:38 2014] [notice] Apache/2.2.26 (Unix) DAV/2 mod_ssl/2.2.26 OpenSSL/0.9.8y configured -- resuming normal operations </code></pre> https://serverfault.tech/q/578814 0 S/MIME icon missing from OWA user2891127 https://serverfault.tech/users/197750 2014-02-27T17:52:48Z 2021-01-15T23:00:45Z <p>I'm trying to test S/MIME with OWA (Exchange 2010, Outlook 2010). My research has told me that the control must be installed first. So, as someone with admin rights, open OWA, click on All Options, then Settings, then the S/MIME icon, and install the control. I also know it has to be done in IE, 32-bit. For myself and another freshly created user, it works fine. The icon is there. But I tested it with a third user, and there is no S/MIME icon. It's missing.</p> <p>The fact it's there for 2 accounts says it's enabled in the Outlook Web App mailbox policies. I even installed it on my account and it works. There is only the default policy, so it can't be that the user is assigned to a policy where it's disabled. He's assigned to the same policy as me in any case.</p> <p>So why do the other accounts have the icon, but the one account doesn't? Without the icon, I can't install the control.</p> https://serverfault.tech/q/518821 1 How can you do dynamic, key-based SSH similar to GitHub? Callmeed https://serverfault.tech/users/10597 2013-06-26T18:09:26Z 2021-01-15T20:28:44Z <p>I want to provide SSH/RSYNC-like features to users of my app. 
I'd like them to just be able to paste/save an SSH key into my app, <a href="https://help.github.com/articles/generating-ssh-keys" rel="nofollow noreferrer">similar to how GitHub does</a>:</p> <p><img src="https://i.stack.imgur.com/Ex7BW.png" alt="enter image description here"></p> <p>So, my question is, once I let users save their key (preferably in the database):</p> <p><strong>Is there a way I can provide/mimic SSH/RSYNC features without actually creating Linux user accounts?</strong> I'd love to be able to authorize based on their database-stored public key and let them RSYNC in/out of a specific folder (say, based on their username in the app, for example).</p> <p><em>(if it matters, it will be a Rails app deployed to an Ubuntu server)</em></p> https://serverfault.tech/q/451238 22 Why can't all zeros in the host portion of IP address be used for a host? Grezzo https://serverfault.tech/users/134082 2012-11-22T13:52:06Z 2021-01-15T22:02:54Z <p>I know that if I have a network <code>83.23.159.0/24</code> then I have 254 usable host IP addresses because:</p> <pre><code>83.23.159.0     (in binary: host portion all zeros) is the subnet address
83.23.159.1-254                                     are host addresses
83.23.159.255   (in binary: host portion all ones)  is the broadcast address
</code></pre> <p>I understand the use for a broadcast address, but I don't understand what the subnet address is ever used for. I can't see any reason that an IP packet's destination address would be set to the subnet address, so why does the subnet itself need an address if it is never going to be the endpoint of an IP flow? 
To me it seems like a waste to not allow this address to be used as a host address.</p> <p>To summarise, my questions are:</p> <ol> <li>Is an IP packet's destination ever set to the subnet IP address?</li> <li>If yes, in what cases and why?</li> <li>If no, then why not free up that address for any host to use?</li> </ol> https://serverfault.tech/q/342863 3 VMware vSphere Client Cannot Connect Windows Ninja https://serverfault.tech/users/37860 2011-12-20T18:55:13Z 2021-01-15T20:07:46Z <p>When I try to connect to a couple of our ESXi servers with my vSphere Client I get the following error message:</p> <p>"vSphere Client could not connect to "IP Address". An unknown connection error occurred. (The client could not send a complete request to the server. (The underlying connection was closed: An unexpected error occurred on a send.))"</p> <p>I'm thinking this may have something to do with a version incompatibility, but I'm not sure. Can somebody shed some light?</p> https://serverfault.tech/q/74370 5 How to migrate Samba User Accounts to a new linux server? jericho https://serverfault.tech/users/0 2009-10-14T13:22:03Z 2021-01-15T21:04:55Z <p>I have an Ubuntu 6.06 server that needs to be replaced by a clean Ubuntu 9.04 setup. I have already copied the entire Samba file server directory to the new 9.04 server using rsync. I need to know how to migrate the existing user accounts (machine accounts) to the new server, so that when I physically transfer the connections everything will be OK and I don't have to manually enter <code>smbpasswd -a &lt;user&gt;</code> on the new server.</p> <pre><code>passdb backend = tdbsam </code></pre> <p>Network workstations accessing the share are either Vista or XP.</p>
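Since the question above uses <code>passdb backend = tdbsam</code>, the Samba accounts (user and machine accounts alike) live in tdb databases rather than a flat smbpasswd file, so one commonly suggested approach is to stop Samba and copy those databases across. The sketch below shows that idea in dry-run form (it only prints the commands it would run). The file locations are typical Debian/Ubuntu defaults and are assumptions; verify them with <code>smbd -b</code> on both releases, since tdb paths changed between Samba versions.

```shell
#!/bin/sh
# Dry-run sketch: migrate tdbsam Samba accounts from an old host to a new one.
# ASSUMED paths below -- confirm with `smbd -b | grep -Ei 'lockdir|privatedir'`.
OLD=old-server                 # hypothetical hostname of the 6.06 machine
STATEDIR=/var/lib/samba        # where passdb.tdb and secrets.tdb usually live

run() { echo "would run: $*"; }   # dry run; replace the echo with "$@" to execute

run invoke-rc.d samba stop                                    # stop smbd/nmbd on the new box
run rsync -av "root@$OLD:$STATEDIR/passdb.tdb" "$STATEDIR/"   # user + machine account database
run rsync -av "root@$OLD:$STATEDIR/secrets.tdb" "$STATEDIR/"  # server SID and machine secrets
run rsync -av "root@$OLD:/etc/samba/smb.conf" /etc/samba/
# The matching Unix users and groups (ideally with the same UIDs/GIDs) must
# also exist on the new server, e.g. carried over from /etc/passwd and /etc/group.
run invoke-rc.d samba start
run pdbedit -L                                                # list accounts to verify
```

If the tdb on-disk format differs too much between the two Samba versions, an alternative is to export the accounts on the old server with <code>pdbedit -e</code> and import them on the new one with <code>pdbedit -i</code> instead of copying the files directly.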