Recent Questions - Server Fault most recent 30 from 2021-04-13T06:39:41Z 0 How to activate/renew Ubuntu Advantage/ESM if you have an HTTP CONNECT proxy like Squid? uav 2021-04-13T06:18:22Z 2021-04-13T06:18:22Z <p>How do you activate/renew Ubuntu Advantage/ESM on Ubuntu 14.04 if you have an HTTP CONNECT proxy like Squid, so that you get security updates again? With the older tool I get errors and/or timeouts with <code>sudo apt update &amp;&amp; sudo apt dist-upgrade</code>.</p> 0 How do you run arbitrary "post-deploy" commands inside a container on AWS EKS? sbrattla 2021-04-13T06:06:56Z 2021-04-13T06:06:56Z <p>We're looking into migrating away from a Docker Swarm to AWS EKS. We have not yet decided if it's going to be EKS + Fargate or EKS + EC2.</p> <p>When we deploy a project such as a Drupal site, we need to run certain post-deploy actions, such as updating the database schema (<code>drush updb</code>) or importing configuration (<code>drush cim</code>). We do this through a small in-house script which figures out which hosts a service runs on, then logs on to one of the hosts and <code>exec</code>s into the applicable task. Then the specified commands get executed.</p> <p>Is there a &quot;best practice&quot; for how to do this on EKS? As you can't just log on to the host in a Fargate setup, there must be another way to do this.</p> 0 Postfix without queues? rosstex 2021-04-13T06:05:51Z 2021-04-13T06:05:51Z <p>I would like to use Postfix as an MTA to deliver emails that I create locally using a Python SMTP client. But I would like to handle the re-sending of emails manually, rather than using Postfix's automatic queues. I also want to get the status codes from the destination server back to Python through my local Postfix. Is this possible?</p> 0 Is the Host: header required over SSL?
ImBoredAllTheTime 2021-04-13T05:39:14Z 2021-04-13T05:39:14Z <p>Is the Host: header required over SSL even if the request is not HTTP/1.1?</p> <p>So, if a client connects over SSL and sends the following request:</p> <pre><code>GET / HTTP/1.0 </code></pre> <ol> <li>Should the web server throw a bad request due to the missing Host: header?</li> <li>Should the web server respond with an HTTP/1.0 200 OK response?<br> (the index.html file always exists, so a request to <b>/</b> would never lead to 403/404)</li> </ol> 0 Trying to find someone causing trouble on my network. Need to convert a public to a private IP Jordi 2021-04-13T04:15:07Z 2021-04-13T04:15:07Z <p>Somebody has started an Instagram page in my hotel which is being used to bully and damage people's reputations. I am trying to find them. Seeing as they are connected to my internet, is there any way I can grab their public IP through a link and then convert that to a private IP address that I can use to match up with a hostname of their device?</p> 0 Unable to update kafka cluster version in AWS MSK vaibhav kanchan 2021-04-13T03:53:16Z 2021-04-13T04:12:18Z <p>We have written Python code to upgrade the Kafka version in AWS MSK, and it's giving this error:</p> <pre><code>..................................................
Traceback (most recent call last):
  File &quot;./;, line 71, in &lt;module&gt;
    update_kafka_version(name, targetKafkaVersion)
  File &quot;./;, line 40, in update_kafka_version
    update_kafka_version_response = client.update_cluster_kafka_version(
  File &quot;/usr/local/lib/python3.8/site-packages/botocore/;, line 357, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File &quot;/usr/local/lib/python3.8/site-packages/botocore/;, line 676, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.BadRequestException: An error occurred (BadRequestException) when calling the UpdateClusterKafkaVersion operation: The specified parameter value is identical to the current value for the cluster. Specify a different value, then try again.
</code></pre> <p>As per the boto3 documentation for Kafka:</p> <p><a href="" rel="nofollow noreferrer"></a></p> <pre><code>response = client.update_cluster_kafka_version(
    ClusterArn='string',
    ConfigurationInfo={
        'Arn': 'string',
        'Revision': 123
    },
    CurrentVersion='string',
    TargetKafkaVersion='string'
)
</code></pre> <p>We have stored the Kafka ZooKeeper endpoints, bootstrap nodes, cluster ARN and cluster version in Parameter Store, and we fetch the cluster ARN from Parameter Store. We are fetching the current version using describe-cluster, but it still gives the error that the specified current version matches the current cluster value.</p> <p>boto3 module versions on my laptop:</p> <p>boto3 1.17.27 botocore 1.20.44</p> <p>Any advice on fixing this issue will be highly appreciated.</p> 0 How to create a MongoDB user with specific actions luong vy 2021-04-13T03:29:51Z 2021-04-13T03:29:51Z <p>My DB has these collections: users, transactions, balances, ... Each collection is managed by one VPS. How can I create a user for each collection with specific actions? For example, VPS A can read only the users collection, but can't read the password field that is in users.
VPS B can read/create the balances collection but can't edit or delete it. Thanks!</p> 0 Cannot see the newly installed kernel during the boot screen Ashish 2021-04-13T03:26:02Z 2021-04-13T05:16:19Z <p>We have upgraded from RHEL 7.6 to 7.9 and the new kernel has been installed (<code>kernel-3.10.0-1160.15.2.el7.x86_64</code>), but we are unable to see the new kernel menu entry on the boot menu screen.</p> <p>I have checked that the new kernel is installed and shows up in the grub.cfg file, and I have reinstalled the kernel multiple times, but it still does not show during the boot screen.</p> <p><strong>df /boot</strong> <br></p> <pre><code>/dev/sda1 487652 209315 248641 46% /boot </code></pre> <p><strong>installed kernel versions</strong> (version 1160 is the new kernel from which we want to boot)</p> <pre><code>abrt-addon-kerneloops-2.1.11-52.el7.x86_64 Wed Jul 10 19:11:44 2019
kernel-3.10.0-957.38.3.el7.x86_64 Sat Nov 23 15:16:16 2019
kernel-3.10.0-957.58.2.el7.x86_64 Fri Aug 28 23:01:22 2020
**kernel-3.10.0-1160.15.2.el7.x86_64 Sat Mar 6 17:41:02 2021**
kernel-headers-3.10.0-1160.15.2.el7.x86_64 Sat Mar 6 17:40:25 2021
kernel-tools-3.10.0-1160.15.2.el7.x86_64 Sat Mar 6 17:42:54 2021
kernel-tools-libs-3.10.0-1160.15.2.el7.x86_64 Sat Mar 6 17:38:40 2021
libreport-plugin-kerneloops-2.1.11-42.el7.x86_64 Wed Jul 10 19:15:51 2019
</code></pre> <p>I have also changed the default kernel, but that is not working either.</p> <p>Any workaround for the issue?</p> 0 issue with SPF records for website hosted on GCP Ahmed Xuberi 2021-04-13T03:23:38Z 2021-04-13T05:13:25Z <p>I am new to GCP. I have bought a domain from GoDaddy and hosted my website on GCP. My website is built on WordPress.
I have contact forms on WordPress, and when customers visit my website and leave a message along with their email address, I do not get the emails at the designated email address.</p> <p>My understanding is this is due to SPF records, as my email provider (Zoho) also mentions that the SPF records are not correct. Requirement from Zoho:</p> <pre><code>v=spf1 ~all </code></pre> <p>Record in my GCP DNS zone:</p> <pre><code>Mydomain(domainname) TXT 600 &quot;v=spf1&quot; &quot;; &quot;~all&quot; </code></pre> <p>When I contacted Zoho support, they said this was strange and that I should contact my DNS provider.</p> <p>Can you suggest a solution?</p> 0 Kerberos status says masked for the kdc server Jennielyn Castro 2021-04-13T03:18:31Z 2021-04-13T03:18:31Z <pre><code>Failed to restart krb5-admin-server.service: Unit krb5-admin-server.service is masked.

sudo systemctl status krb5-kdc.service
● krb5-kdc.service
   Loaded: masked (Reason: Unit krb5-kdc.service is masked.)
   Active: inactive (dead) since Tue 2021-04-13 02:42:45 UTC; 26min ago
 Main PID: 477 (code=exited, status=0/SUCCESS)
</code></pre> <p>I am setting up a KDC server and client on Ubuntu, and this part was active earlier. How can I fix this error?</p> 0 Kubernetes - vSphere Cloud Provider Alexandre Cardoso 2021-04-13T02:55:32Z 2021-04-13T02:55:32Z <p>I'm following this doc <a href="" rel="nofollow noreferrer"></a></p> <p>I am using a load balancer as my ControlPlaneEndpoint. Now I would like to join a new master to the cluster, passing the cloud-provider flag as well. Through the method below it was possible to join the workers; however, I can't do the same with a new master.</p> <p><code>kubectl -n kube-public get configmap cluster-info -o jsonpath='{.data.kubeconfig}' &gt; discovery.yaml</code></p> <pre><code># tee /etc/kubernetes/kubeadminitworker.yaml &gt;/dev/null &lt;&lt;EOF
apiVersion:
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
  file:
    kubeConfigPath: /etc/kubernetes/discovery.yaml
  timeout: 5m0s
  tlsBootstrapToken: y7yaev.9dvwxx6ny4ef8vlq
kind: JoinConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  kubeletExtraArgs:
    cloud-provider: external
EOF
</code></pre> <p>Thanks</p> 0 NLB with NGINX controller on my EKS cluster Each service I deploy creates its own NLB instead of using the existing one sumanth 2021-04-13T02:10:33Z 2021-04-13T02:10:33Z <p>I am trying to use an NLB with the NGINX controller on my EKS cluster. Each service I deploy creates its own NLB instead of using the existing one.
Here's what I'm doing, please help me where I'm going wrong <code>kubectl apply -f</code> Applying this deployment, service and Ingress</p> <pre><code>apiVersion: v1 kind: Service metadata: name: wordpress namespace: wordpress labels: app: wordpress annotations: &quot;nginx&quot; nlb-ip spec: type: LoadBalancer ports: - port: 80 targetPort: 8282 protocol: TCP name: http selector: app: wordpress --- apiVersion: extensions/v1beta1 kind: Ingress metadata: namespace: wordpress name: wordpress-ingress annotations: &quot;nginx&quot; spec: rules: - host: http: paths: - path: / backend: serviceName: wordpress servicePort: 8282 --- apiVersion: apps/v1 kind: Deployment metadata: name: wordpress namespace: wordpress labels: app: wordpress spec: selector: matchLabels: app: wordpress strategy: type: Recreate template: metadata: labels: app: wordpress spec: containers: - image: wordpress:4.8-apache name: wordpress env: - name: WORDPRESS_DB_HOST value: wordpress-mysql - name: WORDPRESS_DB_PASSWORD valueFrom: secretKeyRef: name: key: ports: - containerPort: 80 name: wordpress </code></pre> <p>Result:</p> <pre><code>$ k get svc -n wordpress NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE wordpress LoadBalancer 80:32639/TCP 9s wordpress-mysql ClusterIP None &lt;none&gt; 3306/TCP 3d10h $ k get ing -n wordpress NAME CLASS HOSTS ADDRESS PORTS AGE wordpress-ingress &lt;none&gt; 80 18s </code></pre> 0 How to create a scheduled task via GPO that runs at startup as SYSTEM with highest privileges for only certain machines InteXX 2021-04-13T02:09:10Z 2021-04-13T02:09:10Z <p>All of our workstations in the building are cabled with CAT5e, and because of the way things were built at construction time it's going to be prohibitively expensive to swap out the cabling for something that can handle gigabit speeds. 
(Yes, I know that theoretically in a perfect world CAT5e should handle gigabit, but in our experience this has resulted in file corruption.)</p> <p>This hasn't been too much of a problem so far, as we've been running under a 10/100 switch. But now we have a separate need to bump up to gigabit. We'll be using a managed switch so that we can limit those workstations' ports to <code>100 Mbps Full Duplex</code> to match the cabling.</p> <p>In order to avoid <a href="" rel="nofollow noreferrer">duplex mismatch</a>, we're also going to have to set the NICs on those machines to match the switch for those ports. I've worked up a small PowerShell script that does this quite nicely.</p> <pre><code>Get-NetIPAddress -AddressFamily IPv4 -IPAddress 192.168.0.* | ForEach {
    Get-NetAdapter -InterfaceIndex $_.InterfaceIndex | Where { $_.Status -eq 'Up' } | ForEach {
        $Property = Get-NetAdapterAdvancedProperty -Name $_.Name -DisplayName '*Duplex*'
        $Value = $Property.ValidDisplayValues | Where { $_ -match '100' -and $_ -match 'Full' }
        $Name = $Property.DisplayName

        If ( (Get-NetAdapterAdvancedProperty -Name $_.Name).DisplayValue -ne $Value ) {
            Set-NetAdapterAdvancedProperty -Name $_.Name -DisplayName $Name -DisplayValue $Value
        }
    }
}
</code></pre> <p>But the script must be run as admin, at startup with Highest Privileges and under <code>NT Authority\SYSTEM</code>. And only for the computers that are in the AD security group I've created.</p> <p>I've tried using GPO to create a Scheduled Task for this, as discussed <a href="" rel="nofollow noreferrer">here</a> and <a href="" rel="nofollow noreferrer">here</a>, but the task is never created.
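As a sanity check, the same task can be registered locally from an elevated PowerShell prompt, which helps tell apart a broken task definition from broken GPO delivery. This is only a sketch, assuming the built-in ScheduledTasks module (Windows 8 / Server 2012 and later); the path C:\Scripts\Set-Duplex.ps1 is a hypothetical placeholder for wherever the duplex script lives:

```powershell
# Sketch only: register a startup task that runs as SYSTEM with highest privileges.
# C:\Scripts\Set-Duplex.ps1 is a placeholder path, not the actual deployment location.
$Action    = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Set-Duplex.ps1'
$Trigger   = New-ScheduledTaskTrigger -AtStartup
$Principal = New-ScheduledTaskPrincipal -UserId 'NT AUTHORITY\SYSTEM' -RunLevel Highest

Register-ScheduledTask -TaskName 'Set-Duplex' -Action $Action `
    -Trigger $Trigger -Principal $Principal
```

If a task registered this way runs correctly at boot, the failure is likely in the GPO preference delivery rather than in the task definition itself.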
Nothing related shows up in the workstation's event logs.</p> <p>I need it as a GPO Preference under <code>User Configuration\Preferences\Control Panel Settings\Scheduled Tasks</code>, so that I can use item-level targeting and point to the security group.</p> <p>I'm not dead-set on accomplishing the task in this particular way, so if someone has an alternate idea I'm willing to consider it.</p> <p>But in the meantime, how can I get this Scheduled Task created on these workstations (without going around to everyone and doing it manually)?</p> 0 Why does the Apache server return 404 on a subfolder, when it was previously working? Dave 2021-04-13T01:28:42Z 2021-04-13T02:54:29Z <p>I just installed a new SSL certificate from GoDaddy on my Apache Ubuntu server.</p> <p>I then restarted via SSH and everything looks good.</p> <p>The root site (a WordPress install) now loads fine with https.</p> <p>However, there is another HTML site in the /app directory, which returns 404.</p> <p>This was previously working. I've not changed any config files.</p> <p>Any ideas?</p> 0 System.Net.WebException: There was an error downloading [URL] The underlying connection was closed: An unexpected error occured on send Notaras 2021-04-13T01:24:47Z 2021-04-13T01:43:42Z <p>I have an ASMX web application installed on IIS 8.5 on Windows Server 2012.
When I try to load it via a WSDL client on the server itself, I get the following error:</p> <p>The application is running under a .NET 2.0 app pool, but the same error occurs while running under .NET 4.0.</p> <p><a href="" rel="nofollow noreferrer"><img src="" alt="enter image description here" /></a></p> <p>The error occurs if I use the full URL (which looks like: <a href="" rel="nofollow noreferrer"></a>) and also when I use https://localhost/app/test.asmx.</p> <p>Any help would be appreciated.</p> -1 Postfix, Dovecot and Spamassassin unexpectedly fill up my disk usage Emilia 2021-04-12T23:04:44Z 2021-04-13T05:03:49Z <p>I am on a VPS using CentOS 7 and LAMP, with Postfix, Dovecot and Spamassassin, and Rainloop as my email client. I started Postfix using:</p> <pre><code>systemctl enable postfix
systemctl restart postfix
</code></pre> <p>and Dovecot as:</p> <pre><code>systemctl restart dovecot
systemctl enable dovecot
</code></pre> <p>After that, my CPU usage goes above 90-99%, my disk usage starts filling up unexpectedly, and I am only able to send emails, not receive them.
Here are some outputs from when I run this command:</p> <pre><code> [root@server ~]# postconf -nf
postconf: warning: /etc/postfix/ undefined parameter: mua_sender_restrictions
postconf: warning: /etc/postfix/ undefined parameter: mua_client_restrictions
postconf: warning: /etc/postfix/ undefined parameter: mua_helo_restrictions
postconf: warning: /etc/postfix/ undefined parameter: mua_sender_restrictions
postconf: warning: /etc/postfix/ undefined parameter: mua_client_restrictions
postconf: warning: /etc/postfix/ undefined parameter: mua_helo_restrictions
postconf: warning: /etc/postfix/ undefined parameter: virtual_mailbox_limit_maps
alias_database = hash:/etc/aliases
alias_maps = hash:/etc/aliases
broken_sasl_auth_clients = yes
command_directory = /usr/sbin
daemon_directory = /usr/libexec/postfix
data_directory = /var/lib/postfix
debug_peer_level = 2
debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id &amp; sleep 5
dovecot_destination_recipient_limit = 1
header_checks = regexp:/etc/postfix/header_checks
html_directory = no
inet_interfaces = all
inet_protocols = all
mail_owner = postfix
mailq_path = /usr/bin/mailq.postfix
manpage_directory = /usr/share/man
message_size_limit = 30720000
meta_directory = /etc/postfix
milter_default_action = accept
mydestination = localhost, localhost.localdomain
myhostname =
mynetworks =
newaliases_path = /usr/bin/newaliases.postfix
non_smtpd_milters = $smtpd_milters
proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps
queue_directory = /var/spool/postfix
readme_directory = /usr/share/doc/postfix3-3.5.8/README_FILES
sample_directory = /usr/share/doc/postfix3-3.5.8/samples
sendmail_path = /usr/sbin/sendmail.postfix
setgid_group = postdrop
shlib_directory = /usr/lib/postfix
smtp_tls_security_level = may
smtpd_data_restrictions = check_policy_service unix:/var/log/policyServerSocket
smtpd_milters = inet:
smtpd_policy_service_default_action = DUNNO
smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
smtpd_sasl_auth_enable = yes
smtpd_sasl_authenticated_header = yes
smtpd_sasl_path = private/auth
smtpd_sasl_type = dovecot
smtpd_tls_cert_file = /etc/pki/dovecot/certs/dovecot.pem
smtpd_tls_key_file = /etc/pki/dovecot/private/dovecot.pem
smtpd_use_tls = yes
tls_server_sni_maps = hash:/etc/postfix/
unknown_local_recipient_reject_code = 550
virtual_alias_domains =
virtual_alias_maps = proxy:mysql:/etc/postfix/, mysql:/etc/postfix/
virtual_gid_maps = static:5000
virtual_mailbox_base = /home/vmail
virtual_mailbox_domains = proxy:mysql:/etc/postfix/
virtual_mailbox_maps = proxy:mysql:/etc/postfix/
virtual_transport = dovecot
virtual_uid_maps = static:5000
postconf: warning: /etc/postfix/ unused parameter: virtual_create_maildirsize=yes
postconf: warning: /etc/postfix/ unused parameter: virtual_maildir_extended=yes
</code></pre> <p>As well as when running:</p> <pre><code> [root@server ~]# postconf -Mf
postconf: warning: /etc/postfix/ undefined parameter: mua_sender_restrictions
postconf: warning: /etc/postfix/ undefined parameter: mua_client_restrictions
postconf: warning: /etc/postfix/ undefined parameter: mua_helo_restrictions
postconf: warning: /etc/postfix/ undefined parameter: mua_sender_restrictions
postconf: warning: /etc/postfix/ undefined parameter: mua_client_restrictions
postconf: warning: /etc/postfix/ undefined parameter: mua_helo_restrictions
postconf: warning: /etc/postfix/ undefined parameter: virtual_mailbox_limit_maps
smtp inet n - n - - smtpd
    -o content_filter=spamassassin
submission inet n - n - - smtpd
    -o syslog_name=postfix/submission
    -o smtpd_tls_security_level=encrypt
    -o smtpd_sasl_auth_enable=yes
    -o smtpd_reject_unlisted_recipient=no
    -o smtpd_client_restrictions=$mua_client_restrictions
    -o smtpd_helo_restrictions=$mua_helo_restrictions
    -o smtpd_sender_restrictions=$mua_sender_restrictions
    -o smtpd_recipient_restrictions=permit_sasl_authenticated,reject
    -o milter_macro_daemon_name=ORIGINATING
smtps inet n - n - - smtpd
    -o syslog_name=postfix/smtps
    -o smtpd_tls_wrappermode=yes
    -o smtpd_sasl_auth_enable=yes
    -o smtpd_reject_unlisted_recipient=no
    -o smtpd_client_restrictions=$mua_client_restrictions
    -o smtpd_helo_restrictions=$mua_helo_restrictions
    -o smtpd_sender_restrictions=$mua_sender_restrictions
    -o smtpd_recipient_restrictions=permit_sasl_authenticated,reject
    -o milter_macro_daemon_name=ORIGINATING
pickup unix n - n 60 1 pickup
cleanup unix n - n - 0 cleanup
qmgr unix n - n 300 1 qmgr
tlsmgr unix - - n 1000? 1 tlsmgr
rewrite unix - - n - - trivial-rewrite
bounce unix - - n - 0 bounce
defer unix - - n - 0 bounce
trace unix - - n - 0 bounce
verify unix - - n - 1 verify
flush unix n - n 1000? 0 flush
proxymap unix - - n - - proxymap
proxywrite unix - - n - 1 proxymap
smtp unix - - n - - smtp
relay unix - - n - - smtp
showq unix n - n - - showq
error unix - - n - - error
retry unix - - n - - error
discard unix - - n - - discard
local unix - n n - - local
virtual unix - n n - - virtual
lmtp unix - - n - - lmtp
anvil unix - - n - 1 anvil
scache unix - - n - 1 scache
dovecot unix - n n - - pipe
    flags=DRhu user=vmail:vmail argv=/usr/libexec/dovecot/deliver -f ${sender} -d ${recipient}
spamassassin unix - n n - - pipe
    flags=DROhu user=vmail:vmail argv=/usr/bin/spamc -f -e /usr/libexec/dovecot/deliver -f ${sender} -d ${user}@${nexthop}
spamassassin unix - n n - - pipe
    flags=R user=spamd argv=/usr/bin/spamc -e /usr/sbin/sendmail -oi -f ${sender} ${recipient}
</code></pre> <p>Finally, when I stop Postfix and Dovecot, my disk usage stops filling up, but when I start Postfix and Dovecot again, the disk usage starts filling up again.<br /> I would appreciate any help fixing this issue; if anything more is needed to analyze it, I can show it here.<br /> Thanks</p> 1 Load Balancing DNS with Google Cloud Platform Charlotte Wells 2021-04-07T15:40:19Z 2021-04-13T03:01:13Z <p>I plan to achieve load balancing by using Google to balance NS/DNS between each of three servers.</p> <p>I am setting up three servers with clustered DNS; records are replicated between the servers.</p> <p>I plan to set up <code>NS1/2.example</code> to point towards Google's Load Balancer (Anycast) instead of pointing <code>NS1/2.example</code> to each individual server.</p> <p>How could I achieve that?
What should I be aware of?</p> 0 Linux module load Ashish 2021-03-05T07:39:44Z 2021-04-13T03:07:43Z <p>I have a question about one of my Linux boxes, which is RHEL 7.8: the module joydev is loaded on one server but not on the other.</p> <p>Server A =&gt; Module joydev is loaded successfully <br> Server B =&gt; Module joydev is not loaded</p> <p>I know I can load the module on server B, but I am looking for the root cause: when both systems' installations are the same, why is the module loaded on one server but not on the other?</p> 0 DNSSEC enable and lookaside guest 2019-08-21T07:24:28Z 2021-04-13T02:01:20Z <p>I came across a BIND setup where there is only one DNSSEC value set, like this:</p> <pre><code>dnssec-validation yes; </code></pre> <p>and the keys in named.conf.options are declared like this:</p> <pre><code>include "/etc/bind.keys" </code></pre> <p>However, the rest of it:</p> <pre><code>dnssec-enable yes;
dnssec-lookaside auto;
</code></pre> <p>is not set anywhere at all.</p> <p>Now the question is: does this setup work at all? I do not see any errors anywhere. I would appreciate any comments / suggestions / advice at all. Many thanks in advance!</p> 0 AltRecipient AD attribute on mail enabled Public Folder cannot be synchronized in hybrid environment with O365 user66001 2019-02-21T06:42:30Z 2021-04-13T04:06:30Z <p>We have a hybrid environment set up between Exchange 2010 and O365 for both mailboxes and Public Folders. Since putting Public Folders in hybrid mode (through use of <a href="" rel="nofollow noreferrer"></a> ) we keep getting reports every export cycle containing the below for each mail-enabled Public Folder:</p> <pre><code>The reference attribute [AltRecipient] could not be updated in Azure Active Directory. Remove the reference [PublicFolder] in your local Active Directory directory service.
</code></pre> <p>Does anyone know why Azure has an issue with the AD attribute that stores forwarding addresses, which I understand can have this functionality enabled once the Public Folders are migrated?</p> 0 Waiting for localhost : getting this message on all browsers Apricot 2018-12-12T06:07:13Z 2021-04-13T04:06:30Z <p>I am using Ubuntu 14.04 and have php5 and mysql installed. I have 3 web applications in my /var/www/html folder. Until yesterday evening I was able to test and work on the applications. All of a sudden, I am not able to load any of my applications in any of the browsers. I have Firefox and Chrome installed.</p> <p>I have checked the availability of MySQL and Apache. Both are running correctly. I have also restarted Apache. I have cleared all the cookies and history from Chrome and set it to default under chrome://flags.</p> <p>After removing all the history and cookies from Chrome, I could load the first login page, but when I provide the UID and password, I get <code>Waiting for localhost</code> and the page stalls.</p> <p>Of the three, one of my smaller applications loaded after 10 minutes, but a heavier application did not load at all. However, the browser loads plain HTML files.</p> <p>I have also tested on wifi, a mobile internet dongle and ethernet, and there are no firewall issues. I have also cleared my machine's cache by</p> <pre><code>sudo /etc/init.d/dns-clean restart </code></pre> <p>None of this helped. Can someone guide me on how to resolve this?</p> 0 EXIM SMTP allows sending mail without login / authentication via telnet to any domain Mateusz 2018-09-19T09:06:37Z 2021-04-13T06:01:40Z <p>I'm ashamed, but I have to ask for help.
My server is being used for sending spam. I've found out I can simply connect with telnet (edit: from any server in the office, at home, and even directly from CMD/PuTTY telnet), add mail from/rcpt to/data without any login/authorization, and send mail from my domain to any external mailbox (for example Gmail accounts). I'm using Exim/SMTP/CSF on Debian, and have basic knowledge of them.</p> <pre><code>root@vps:~# telnet 25
Trying 19x.10x.8x.1xx...
Connected to
Escape character is '^]'.
220 ESMTP Exim 4.91 Wed, 19 Sep 2018 10:48:05 +0200
mail from:
250 OK
rcpt to:
250 Accepted
data
354 Enter message, ending with "." on a line by itself
test data.
.
250 OK id=1g2Y9t-0003yu-Of
</code></pre> <p>I want to prevent this and force some form of authentication, to prevent spam being sent from my server to external mailboxes. <strong>On my second server, when I try the same thing, the "rcpt to" command returns "550 authentication required".</strong> I think that's the proper behaviour, so you can't send spam.</p> <p>In my exim.conf I've got empty relay parameters (I've tried putting my server's IP or the localhost address, without luck):</p> <pre><code>addresslist whitelist_senders = lsearch;/etc/virtual/whitelist_senders
addresslist blacklist_senders = lsearch;/etc/virtual/blacklist_senders
domainlist blacklist_domains = lsearch;/etc/virtual/blacklist_domains
domainlist whitelist_domains = lsearch;/etc/virtual/whitelist_domains
domainlist local_domains = lsearch;/etc/virtual/domains
domainlist relay_domains =
domainlist use_rbl_domains = lsearch;/etc/virtual/use_rbl_domains
hostlist auth_relay_hosts =
hostlist bad_sender_hosts = lsearch;/etc/virtual/bad_sender_hosts
hostlist bad_sender_hosts_ip = net-lsearch;/etc/virtual/bad_sender_hosts
hostlist relay_hosts =
hostlist whitelist_hosts = lsearch;/etc/virtual/whitelist_hosts
hostlist whitelist_hosts_ip = net-lsearch;/etc/virtual/whitelist_hosts
</code></pre> <p>Authentication section</p> <pre><code>begin authenticators

plain:
  driver = plaintext
  public_name = PLAIN
  server_prompts = :
  server_condition = "${perl{smtpauth}}"
  server_set_id = $2

login:
  driver = plaintext
  public_name = LOGIN
  server_prompts = "Username:: : Password::"
  server_condition = "${perl{smtpauth}}"
  server_set_id = $1
</code></pre> <p>How can I protect my SMTP socket? How can I force the "authentication required" behaviour? I tried to compare the .conf files with my second server, but despite two days of trying I'm out of luck.</p> 0 Azure AD Connect Single-Sign On ApatheticRiku 2016-12-07T20:54:57Z 2021-04-13T03:02:25Z <p>I am trying to set up my domain for Single Sign-On to Azure-connected services (primarily SharePoint Online). I have already run through the setup for Azure AD Connect and am currently able to synchronize my directory to Azure. I see my users in Azure and can sign in using an account. The next logical step for us is to enable Single Sign-On, so that our users are able to connect more easily (our users are actually located on a subdomain, which is transparent to them and does not completely match their email addresses). The problem is, during the setup of AD Connect, the option to Enable Single Sign-On was not available. It simply was not on the normal User Sign-In prompt during setup. Has anyone else seen this, or am I simply missing something?</p> 1 Sometimes: Unable to connect to host, or the request timed out. MySQL through Sequel Pro FooBar 2016-01-26T13:24:10Z 2021-04-13T03:02:25Z <p>I have been struggling with this issue for over a year now, and it’s really giving me a headache.</p> <p>I often find I am unable to connect to the MySQL server through Sequel Pro. If I SSH into the server, I can use mysql fine, see processes, etc. My web app works fine too.</p> <p>When I try to SSH into my MySQL database through Sequel Pro, this message appears instantly:</p> <hr> <p>Unable to connect to host, or the request timed out.
Be sure that the address is correct and that you have the necessary privileges, or try increasing the connection timeout (currently 10 seconds). MySQL said: Lost connection to MySQL server at 'reading initial communication packet', system error: 0</p> <hr> <p>The ONLY solution is to reboot the server. Sometimes I reboot the server and it still won't work; after a few reboots it works. But usually it works every time.</p> <ul> <li>It happens on all my different Forge servers (php5 &amp; php7) and has happened since day one.</li> <li>Restarting the MySQL server (like sudo service mysql restart) does not work</li> <li>It happens on different networks (wifi, local, etc)</li> <li>I can connect fine from another Mac with a different SSH key (same OS X and Sequel Pro build). I have even tried copying my own SSH key to the other computer and logging on through that. That works fine as well.</li> <li>It happens at random times, often if my Sequel Pro was open when my Mac went to sleep (but not always - sometimes I can open it 24 hours later and still be connected). All of a sudden I'd be disconnected, and when I try to log in again, I see the error above.</li> <li>In some situations, I can log in again to MySQL through Sequel Pro, even though I did not do anything (i.e. reboot the server).</li> </ul> <p>The way I connect:</p> <p>MySQL Host: <br> Username: something <br> Password: something <br> Port: 3306 <br> SSH Host: server-ip <br> SSH User: something <br> SSH Key: path to my id_rsa <br> SSH Port: default/not-set</p> <p>Any ideas?</p> <p>My Sequel Pro version: v1.1 build 4499 <br> My OS X: OS X El Capitan v10.11</p> <p>Server: Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-71-generic x86_64)</p> <p>MySQL: Ver 14.14 Distrib 5.7.10, for Linux (x86_64) using EditLine wrapper</p> 3 Zabbix PDF Report Generation Mick 2014-08-26T09:17:03Z 2021-04-13T05:03:30Z <p>Does any of you have an idea how to implement Zabbix PDF report generation? In a forum I found something like this: <a href="" rel="nofollow noreferrer"></a> .
</p> <p>I tried implementing this on my Zabbix 2.2.3, but when I do I still see the text (Unable to login:). I guess it is a version problem; as you can see, it was only tested on 1.8.8 and 1.8.10. Does anyone have an idea?</p> <hr> <p>One problem is fixed: it was failing due to API version issues. I downloaded a fresh copy from <a href="" rel="nofollow noreferrer"></a> and now I can generate a PDF report. However, when I select some site I see only the ALL option and the PDF is empty.</p> <p>Below I share a screenshot and an example report: <a href="" rel="nofollow noreferrer"></a></p> <p><img src="" alt="enter image description here"></p> <p>Does anybody have an idea?</p> <p>Regards, Mick</p> 0 VMM 2012 Error 20552 - For ISO share VMM does not have appropriate permissions to access the resource morleyc 2013-05-24T15:30:28Z 2021-04-13T05:03:30Z <p>I have included an ISO network share in my VMM 2012 library by:</p> <ol> <li>Library servers -&gt; Add Library Share -&gt; Add Unmanaged Share.</li> <li>I then selected the file share, e.g. \fs1\ISO</li> <li>I set the share permissions on \fs1\ISO to everyone FULL</li> <li>I set the NTFS permissions to read-only for the following AD accounts: <ul> <li>VMM service account</li> <li>VMM Library account</li> <li>HV target host machine account</li> <li>Network service</li> </ul></li> </ol> <p>The problem I have is that I still get the following error regarding permissions:</p> <p><code>Error (20552) VMM does not have appropriate permissions to access the resource \\fs1.domain.local\ISO\Zabbix_2.0_x86.i686-0.0.1.preload.iso on the scvmma1.domain.local server.</code></p> <p><code>Ensure that Virtual Machine Manager has the appropriate rights to perform this action. Also, verify that CredSSP authentication is currently enabled on the service configuration of the target computer scvmma1.domain.local.
To enable the CredSSP on the service configuration of the target computer, run the following command from an elevated command line: winrm set winrm/config/service/auth @{CredSSP="true"}</code></p> <p>I have also run the command <code>winrm set winrm/config/service/auth @{CredSSP="true"}</code> on the VMM server, but no joy.</p> <p>Any ideas please?</p> 8 Outlook 2010 "Cannot open this item" on Windows 7 64-bit Michael 2012-08-09T19:13:27Z 2021-04-13T02:01:20Z <p>I have to admit this has stumped me...</p> <p><strong>User's Workstation</strong></p> <ul> <li>Outlook 2010 (32-bit) w/ Cached Exchange Mode enabled</li> <li>Windows 7 Pro (64-bit)</li> </ul> <p>The email account is on Exchange 2003.</p> <p><strong>Problem</strong></p> <p>The user is unable to open certain emails in Outlook on this computer. The error message is "<strong>Cannot open this item</strong>". The same user has a laptop with Outlook 2010 (32-bit) and Windows 7 Pro (32-bit). On his laptop he CAN open these emails without any problems. So to me that says this is a bug with Windows 7 Pro (64-bit). He can also open these emails on his BlackBerry.</p> <p><strong>Things I've tried to fix this problem...</strong></p> <ol> <li>Recreate his Outlook profile from scratch</li> <li>Recreate his Windows user profile from scratch</li> <li>Reinstall Office 2010 from scratch</li> <li>Move his Exchange mailbox to a different storage group on the server</li> <li>Installed a Microsoft hotfix that supposedly fixes the problem (it did not)</li> </ol> <p>The strange thing is that most of the emails he cannot open were sent to him from a BlackBerry within the organization.
Coincidence?</p> <p>Any help is greatly appreciated!</p> 0 How to: Make X.509 Certificates Accessible to WCF Daveo 2012-01-20T03:55:54Z 2021-04-13T06:01:40Z <p>I have followed the instructions here:</p> <p><a href="" rel="nofollow noreferrer"></a></p> <p>I'm running Windows 2003 IIS6 with a separate user account for the application pool. I give this user access to the private key using cacls.exe. This works fine.</p> <p>However, whenever something changes with the site or IIS, the permission is lost. For example, if I change the web.config file, restart IIS, and wait 5 hours, the call to the SSL certificate fails and I can no longer access it from my client that is trying to consume the WCF service. I log on as the app pool account, run cacls.exe again, and it fixes it.</p> <p>How do I fix this permanently, as currently it stops working every 5 hours or so?</p> <p><strong>Update</strong></p> <p>I have actually gone backwards. Now I cannot get it to work at all. These are the steps I follow:</p> <pre><code>C:\FindPrivateKey&gt;FindPrivateKey.exe Trustedpeople currentuser Private key directory: C:\Documents and Settings\MYUSER\Application Data\Microsoft\Crypto\RSA\S--5-21-2205538328-2105125954-533649117-1053 Private key file name: ab715bc6d3b1ae3bdb1a9e8e21a3b851_817f45df-79ce-4f15-9345-15b5c81281a1 </code></pre> <p>Give permissions:</p> <pre><code>Cacls.exe "C:\Documents and Settings\MYUSER\Application Data\Microsoft\Crypto\RSA\S-1-5-21-2205538328-2105125954-533649117-1053\ab715bc6d3b1ae3bdb1a9e8e21a3b851_817f45df-79ce-4f15-9345-15b5c81281a1" /E /G "WWWTEST2\MYUSER":R </code></pre> <p>Check permissions:</p> <pre><code> Cacls.exe "C:\Documents and Settings\MYUSER\Application Data\Microsoft\Crypto\RSA\S-1-5-21-2205538328-2105125954-533649117-1053\ab715bc6d3b1ae3bdb1a9e8e21a3b851_817f45df-79ce-4f15-9345-15b5c81281a1" Everyone:R WWWTEST2\MYUSER:F </code></pre> <p>Set owner:</p> <pre><code>subinacl /file "C:\Documents and Settings\MYUSER\Application
Data\Microsoft\Crypto\RSA\S-1-5-21-2205538328-2105125954-533649117-1053\ab715bc6d3b1ae3bdb1a9e8e21a3b851_817f45df-79ce-4f15-9345-15b5c81281a1" /setowner=WWWTEST2\MYUSER </code></pre> <p>The error I get from the site trying to consume the service:</p> <pre><code>Exception: System.InvalidOperationException Message: Cannot find the X.509 certificate using the following search criteria: StoreName 'TrustedPeople', StoreLocation 'CurrentUser', FindType 'FindByThumbprint', FindValue 'b33e04f057a52cb73007aec81eee86d2f75e3c69'. Source: System.ServiceModel at System.ServiceModel.Security.SecurityUtils </code></pre> <p>When I log in as MYUSER (the account running the IIS app pool) and open the certificates snap-in in mmc, I can see the certificate in the Current User store under TrustedPeople.</p> <p><strong>UPDATE</strong></p> <p>I was able to get it working by installing the cert in Local Machine / Personal and using winhttpcertcfg instead of cacls.</p> 3 /etc/hosts entry for single IP server serving multiple domains Dan Grec 2011-08-11T19:59:20Z 2021-04-13T04:29:57Z <p>Running Ubuntu 10.04</p> <p>My server serves 3 different domains using named virtual hosts in Apache2. I'm currently using different Named Virtual Servers to 301 redirect www to the non-www equivalent. It's working, but I don't understand the correct entries for my /etc/hosts file, and I think that is causing problems for me trying to set up Varnish.</p> <p>I understand I need the localhost line</p> <pre><code> localhost localhost.localdomain </code></pre> <p>Should I also list each domain here? as in</p> <pre><code> localhost localhost.localdomain </code></pre> <p>What about the entry for the IP of the server? Do I need the following line?</p> <pre><code>&lt; IP.Of.Server &gt; </code></pre> <p>Also, should I be listing AND on each line, so they go into Apache and it can deal with the 301 redirect?</p> 70 protocol version mismatch -- is your shell clean?
rfreytag 2011-05-06T21:15:04Z 2021-04-13T05:22:59Z <p>When following the instructions to do rsync backups given here: <a href=""></a></p> <p>I get the error "protocol version mismatch -- is your shell clean?"</p> <p>I read somewhere that I needed to silence the prompt (PS1="") and the motd (.hushlogin) displays to deal with this. I have done this; the prompt and login banner (MOTD) no longer appear, but the error still occurs when I run:</p> <pre><code>rsync -avvvz -e "ssh -i /home/thisuser/cron/thishost-rsync-key" remoteuser@remotehost:/remote/dir /this/dir/ </code></pre> <p>Both the ssh client and the sshd server are using version 2 of the protocol.</p> <p>What could be the problem? Thanks.</p> <p>[EDIT] I have found <a href=""></a>, which directs that it is sometimes necessary to "Force v2 by using the -2 flag to ssh or slogin</p> <pre><code> ssh -2 -i ~/.ssh/my_private_key remotemachine" </code></pre> <p>It is not clear whether this solved the problem, as I think I put this change in AFTER the error changed; in any case, the error has evolved into something else. I'll update this when I learn more. And I will certainly try the suggestion to run this in an emacs shell - thank you.</p>
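For anyone hitting the same error: "is your shell clean?" asks whether `ssh host command` emits anything beyond the command's own output, because rsync multiplexes its protocol over that stream and even one stray byte from a prompt, motd, or an `echo` in `.bashrc` breaks the version handshake. Below is the standard diagnostic (the host and key path are the ones from the question), followed by a local simulation of the failure mode; the rc-file name is illustrative.

```shell
# rsync runs its protocol over the ssh stream, so the remote shell must
# emit NOTHING of its own. The standard diagnostic (host/key as in the
# question):
#
#   ssh -i /home/thisuser/cron/thishost-rsync-key remoteuser@remotehost /bin/true > out.dat
#   wc -c < out.dat    # a clean shell prints 0; any bytes here break rsync
#
# The same failure mode, reproduced locally with a chatty rc file:
echo 'echo "motd-style banner"' > /tmp/noisy_rc

# An interactive bash sources its rc file, so the banner leaks into the
# command's output stream, exactly as a noisy remote .bashrc would.
noisy=$(bash --rcfile /tmp/noisy_rc -i -c /bin/true 2>/dev/null)
quiet=$(bash --norc -c /bin/true)

printf 'noisy shell emitted: %s\n' "$noisy"   # these bytes are what corrupt rsync
printf 'quiet shell emitted: [%s]\n' "$quiet" # empty, as rsync requires
```

If the `out.dat` check shows nonzero bytes, the fix is to guard interactive-only lines in the remote shell's startup files (e.g. wrap them in a `case $- in *i*)` test) rather than to change rsync or ssh flags.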