How to protect your hosting company from ransomware attacks


Ransomware is a global threat that has taken down government organizations, public infrastructure (e.g., power), large and small businesses, and online applications. Ransomware is one of the most dangerous issues for any business, and it’s imperative that all hosts have systems in place to stop it, both to protect themselves and to safeguard customer websites.

What is ransomware?

The unique payload of ransomware is what makes it dangerous compared to other malware. When ransomware executes, it scans local storage and network-accessible storage for important files. What counts as an important file is decided by the ransomware author, but targets typically include productivity files such as spreadsheets and documents, images, backups, presentations, and any other file type that could be considered critical to the organization. By targeting the right files, the ransomware author increases the chance that the victim pays the ransom.

After ransomware finds its target files, it encrypts them with a cryptographically secure cipher, usually AES-256. AES-256 is a symmetric cipher: the same key both encrypts and decrypts the data. If that symmetric key is discovered, the ransomware is no longer effective, so more sophisticated ransomware also uses an asymmetric algorithm (e.g., RSA) to encrypt the symmetric key, so that only the attacker's private key can recover it.

If any files necessary for business productivity or the functionality of an application are encrypted, the business can no longer continue its routine day-to-day operations. For something like a power plant, this can interrupt services to the public and (at worst) threaten human life if, for example, hospitals and critical services are affected. For a typical business, ransomware holds important files hostage in exchange for a payment to the attacker, usually in cryptocurrency. If the organization has no backups, it may feel forced to pay the ransom, but decryption of the files is never guaranteed.

How can you avoid ransomware attacks?

Unlike standard malware, ransomware has numerous vectors an attacker can use to install it on a system. It can be installed on a local workstation or a server to deliver its payload. To combat ransomware, a host needs a full range of defenses and cybersecurity standards that stop it from being uploaded to a server or executed on the network. Here are several ways you can avoid becoming the next victim of a ransomware attack.

Be prepared for the worst

It’s almost inevitable that at least one of your customers will be a ransomware target. To stop ransomware, you must prepare for the likelihood that attackers will target the server directly or via an upload to a customer’s site. This tip is general advice to hosts: expect an attack, and do not disregard best practices in ransomware defense. It’s not uncommon for web hosts to assume that only customer sites could be damaged by ransomware, but sophisticated attacks can affect the host’s own servers or network. A company’s website may be just the first stop in a ransomware attack; for example, an attacker could set up a fake login portal and start sending your staff invitations to reset their passwords.

Generate regular backups with one copy on a remote location

The only way to recover from a ransomware attack is via backups. Some poorly developed ransomware discloses the key or some other method of deactivation, but most sophisticated ransomware has strategies built in to stop the decryption key from being discovered. If an attacker can compromise a user site or the host server itself, the primary recovery method is backups.

To force a victim to pay the ransom, authors build their malware to find any files that would help the victim recover instead of paying. This means that ransomware will also encrypt backup files it can reach. For this reason, web hosts must keep at least one backup copy off-site or on remote storage. Many organizations store backup files in the cloud; because that storage is not mounted directly on the network, it is far harder for ransomware running locally to encrypt.
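As a minimal sketch of this approach, the script below archives site data, pushes a second copy to a "remote" target, and verifies both copies match. All paths are placeholders; in production, the remote directory would be an rsync, rclone, or scp destination on off-site storage rather than a local path.

```shell
#!/bin/sh
# Sketch of a backup job with one off-site copy. Paths are illustrative.
set -eu

SRC_DIR="/tmp/demo-site"             # data to protect (placeholder)
LOCAL_BACKUPS="/tmp/backups/local"   # on-server backup copies
REMOTE_BACKUPS="/tmp/backups/remote" # stand-in for an off-site destination

mkdir -p "$SRC_DIR" "$LOCAL_BACKUPS" "$REMOTE_BACKUPS"
echo "customer data" > "$SRC_DIR/index.html"

STAMP=$(date +%Y%m%d)
ARCHIVE="site-$STAMP.tar.gz"

# 1. Create a compressed archive of the site data.
tar -czf "$LOCAL_BACKUPS/$ARCHIVE" -C "$SRC_DIR" .

# 2. Copy it off the server. In production this would be rsync/rclone/scp
#    to remote storage that ransomware on the local network cannot reach.
cp "$LOCAL_BACKUPS/$ARCHIVE" "$REMOTE_BACKUPS/$ARCHIVE"

# 3. Verify both copies are byte-identical before trusting the backup.
cmp "$LOCAL_BACKUPS/$ARCHIVE" "$REMOTE_BACKUPS/$ARCHIVE" && echo "backup verified"
```

A real job would also rotate old archives and periodically test a restore, since an unverified backup is not a backup.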

The frequency of backups is determined by metrics specific to the organization. The Recovery Point Objective (RPO) and the Recovery Time Objective (RTO) are two metrics that should drive backup frequency. In addition, NIST provides sound guidance for backup and disaster recovery strategies.

Update servers and other systems regularly

Every responsible system vendor and developer releases updates. For a web host, applications and the Linux operating system will have periodic updates and patches that must be installed. The longer a system goes unpatched, the larger the window of opportunity for an attacker to exploit an open vulnerability. If this vulnerability allows for remote code execution, code injection, or uploading of malicious files, it could lead to a successful ransomware attack.

A web host is only as secure as its customer sites. For example, if a customer does not secure their site against malicious code injection, injected code could run on the server and affect other sites hosted there. Keeping the operating system and other network appliances patched protects against the latest vulnerabilities that would allow ransomware to execute.
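How you automate patching depends on the distribution. As a sketch, the commands below enable unattended updates on the two common Linux families; run them as root, and verify the package and timer names against your own platform.

```shell
# Debian/Ubuntu: enable unattended security updates.
apt-get install -y unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades

# RHEL family (AlmaLinux, Rocky, etc.): apply updates on a schedule.
# Set "apply_updates = yes" in /etc/dnf/automatic.conf first.
dnf install -y dnf-automatic
systemctl enable --now dnf-automatic.timer
```

Automated patching should still be paired with monitoring, so a bad update to a production server is noticed quickly.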

Have a password complexity and length policy

Brute-force attacks on the root account or any administrator account are quite common across the web, especially against content management systems such as WordPress. Weak passwords such as those with few characters or no complexity make brute-force attacks possible. Attackers use a list of common passwords to “guess” an administrator’s credentials, so complex and lengthy passwords will stop this attack from being successful.

OWASP publishes password policy suggestions, but you must also take compliance requirements into consideration. You can also review NIST guidance on password policies and complexity requirements to defend against brute-force attacks. It’s important to use a unique password for each login and store them in a password manager.
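As a minimal sketch of enforcing such a policy in line with NIST guidance (which favors length and screening against known-breached passwords over arbitrary symbol rules), the function below checks a 12-character minimum and a tiny illustrative deny list. A real deployment would screen against a full breached-password dataset.

```shell
#!/bin/sh
# Toy password-policy check: minimum length plus a small deny list.
# The deny-list patterns here are illustrative, not a real breached set.
check_password() {
    pw="$1"
    if [ "${#pw}" -lt 12 ]; then
        echo "rejected: shorter than 12 characters"
        return 1
    fi
    case "$pw" in
        password*|qwerty*|123456*)
            echo "rejected: matches a common-password pattern"
            return 1 ;;
    esac
    echo "accepted"
    return 0
}

check_password "short1!"                       # rejected (too short)
check_password "password12345"                 # rejected (common pattern)
check_password "correct horse battery staple"  # accepted
```

Length is checked first because it is the single strongest predictor of brute-force resistance; the deny list catches long but predictable choices.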

Use SSH keys when possible

For most web hosts, SSH is necessary so that administrators can remotely access servers and configure them. Using simple passwords for authentication leaves the SSH service open to brute-force attacks. Instead of using passwords, use SSH keys for each account that must access servers remotely.

Restrict the number of people who have remote SSH access, and give each administrator a unique key to reduce the impact of a credential compromised through phishing. Direct root login should also be disabled in favor of a named administrator account; attackers run scripts that brute-force root credentials, so disabling remote root login adds to your security.
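A minimal hardening sketch follows; the key path, comment, and username are placeholders for your own environment.

```shell
# On the administrator's workstation: generate a modern key pair.
ssh-keygen -t ed25519 -f ~/.ssh/hosting_admin -C "admin@example-host"

# On the server, in /etc/ssh/sshd_config (illustrative settings):
#   PermitRootLogin no
#   PasswordAuthentication no
#   AllowUsers deployadmin
#
# Then reload the daemon to apply:
#   systemctl reload sshd
```

Test key-based login from a second session before closing your current one, so a typo in sshd_config cannot lock you out.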

Disable cleartext channels and authentication

Several services on the internet allow cleartext authentication, which means usernames and passwords can be eavesdropped on and stolen. POP3, IMAP, SMTP, FTP, and HTTP are a few examples of cleartext channels that should be disabled. Each of these protocols has an encrypted alternative (POP3S, IMAPS, SMTP with STARTTLS or SMTPS, SFTP or FTPS, and HTTPS), which should be used instead.

Use signed software from trusted sources

When you download software and install it on your systems, you want to make sure that it comes from a legitimate source. One way to do this is to install only signed software. Signed software carries a digital signature created with the developer’s private key; anyone can verify that signature using the developer’s public key. The signature identifies the developer, so before installation you can verify that the software really comes from that source and has not been tampered with by a malicious third party.

Validate software package integrity

An administrator may eventually download and install an rpm package manually, and just like other software, it should be validated. You can verify that a package is signed by a legitimate source using the rpm command, which also confirms that the download completed successfully and the files were not corrupted in transit over the internet.
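A sketch of both checks follows. The rpm commands are shown as comments because they need a real package and vendor key; the integrity step is demonstrated with a stand-in file and sha256sum, which applies to any download with a published checksum.

```shell
#!/bin/sh
# Signature check for an RPM (package name and key URL are placeholders):
#   rpm --import https://vendor.example.com/RPM-GPG-KEY   # trust vendor key
#   rpm -K some-package.rpm                               # digests + signature
#
# Integrity check for any download, demonstrated with a stand-in file:
echo "pretend package contents" > /tmp/demo-package.rpm

# In practice, the .sha256 file comes from the vendor's site, not from
# the same server as the download.
sha256sum /tmp/demo-package.rpm > /tmp/demo-package.rpm.sha256

# Recompute and compare; exits non-zero on any mismatch.
sha256sum -c /tmp/demo-package.rpm.sha256 && echo "integrity check passed"
```

Signature verification proves who published the package; the checksum proves the bytes arrived intact. Both are worth doing.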

Set up a firewall

Firewalls can be software or hardware-based, but you need at least one to block outside traffic from accessing internal systems. The firewall determines what ports and traffic can access internal systems. Some web hosts use an intrusion detection system (IDS) in addition to a firewall to not only block unauthorized traffic but also detect anomalies. For example, an attacker might scan the firewall for open ports, and an IDS could detect the scan and notify administrators.
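As an illustrative starting point with firewalld, the rules below default to dropping inbound traffic and allow only the services a web host must expose. The zone and service names are firewalld's standard identifiers; adapt the allowed services to what your servers actually run.

```shell
# Default-deny inbound traffic, then explicitly allow required services.
firewall-cmd --set-default-zone=drop
firewall-cmd --permanent --zone=drop --add-service=http
firewall-cmd --permanent --zone=drop --add-service=https
firewall-cmd --permanent --zone=drop --add-service=ssh
firewall-cmd --reload
```

Equivalent rules can be written for nftables or iptables; the important property is the default-deny posture, not the specific tool.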

Deploy a malware scanner

Code injection and uploading malware are two common attacks on the web. For a web host, malware could be injected or uploaded on customer sites. To stop this from happening, web hosts must have systems and scanners in place to stop malicious content before it can load into server memory or affect a customer’s site.

The malware scanner that you choose should proactively stop malware. Once malware installs on the system, it can perform numerous attacks such as ransomware encryption of important files or passive eavesdropping and theft of sensitive data. A scanner such as Imunify360 will find malware on the system, clean or quarantine it, and automatically remove malicious code in application files.

Train people for security awareness

Attacks such as phishing and social engineering rely on a targeted user’s inability to detect them. Even administrators with a background in technology can fall victim to one of these scams. To combat this issue, cybersecurity training and awareness should be a requirement for all personnel within the web host organization.

Research shows that security awareness training reduces phishing clickthrough rates by 22% in just three years. Some organizations run exercises that send simulated phishing emails to test users’ ability to detect them, so that follow-up training can be directed at those who clicked on malicious links.

Apply the least privilege model

Authorization to network resources should follow the “least privilege” model, which states that users should have access only to the resources necessary for their job function. Privilege escalation happens when an attacker uses low-level access to increase their authorization in the environment, and poorly managed permissions are one way this happens.

When users no longer hold a specific job title and/or move to a new position, their permissions must be reevaluated and any unnecessary privileges removed. Should an attacker gain access to an account, they are limited to the least privileges assigned to the compromised account. High-privilege accounts are also a target, but restricting users to specific resources necessary for their job function reduces the risk of a compromise if an attacker gains access to a lower-privileged account.
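At the filesystem level, least privilege can be as simple as restrictive permission modes. The sketch below uses an illustrative path: the owning account gets full access, its group read and traverse rights, and everyone else nothing.

```shell
#!/bin/sh
# Owner (the application account): read/write/execute.
# Group (e.g., a support team): read/execute only.
# Others: no access at all.
mkdir -p /tmp/customer-site
chmod 750 /tmp/customer-site
stat -c '%a' /tmp/customer-site   # prints 750
```

In practice you would pair this with one dedicated system user and group per customer site, so a compromise of one account cannot read another customer's files.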

Perform a penetration test

You can’t remediate vulnerabilities you don’t know exist. Hence, the best way to ensure that unforeseen vulnerabilities are found and fixed is a third-party penetration test across all resources. A penetration tester will review your server configurations and probe the environment and applications for exploitable flaws.

Penetration testing can be white box or black box. In a white-box test, you provide source code and access to configurations. In a black-box engagement, the penetration tester scans and then exploits the system (usually a staging environment) the same way an external attacker would.

Implement automation

Any repeatable process should be automated to eliminate human error and provide continuous protection. Configuration, software deployment, and software updates can all be automated so that simple human mistakes don’t create openings. Repeatable processes can be automated using DevOps tools such as Ansible, Chef, or Puppet.

Code automation is also possible to test for vulnerabilities and bugs before deployment. If you have in-house developers creating business applications, consider static application security testing (SAST) to ensure that any common vulnerabilities are patched before code is deployed to production.

Set up DNS failover

If DNS fails, your entire online presence is unreachable. Web hosts with customer sites relying on their DNS are also no longer operational. Since DNS is a critical component in online business, web hosts should use DNS clustering and failover to ensure that sites are always available even if one server crashes.

When DNS was first introduced, it had no built-in security. Threats such as DNS cache poisoning could redirect users to a malicious website and trick them into divulging sensitive information. Adding DNSSEC (DNS Security Extensions) to name lookups protects against attacks that target nameservers. DNSSEC introduces record types such as RRSIG and DNSKEY, which are used to sign zone data so that resolvers can verify that the records they receive for a domain are authentic.
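You can check whether a zone is signed with a standard dig query; the domain below is just an example, and validation of the "ad" flag depends on your resolver performing DNSSEC validation.

```shell
# Request DNSSEC records along with the answer.
# A signed, validated response includes RRSIG records in the answer
# section and the "ad" (authenticated data) flag in the header.
dig +dnssec example.com A
```

Running the same query against a known-validating resolver (for example, with `@` and the resolver's address) makes the "ad" flag check meaningful.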

Stop exploits proactively with Imunify360

With Imunify360, you can proactively stop ransomware, automatically clean code injections, block malicious uploads, and detect ongoing attacks. It’s a full security suite with a malware scanner and cleaner for hosting providers that reduces much of the risk associated with ransomware and other online threats. The tool gives administrators real-time information so they can act quickly on emerging threats and keep customer sites protected from malware and exploits. It’s a benefit for both web hosts and the customers they support.

