Introduction. Amazon Web Services (AWS) is a secure cloud services platform that offers compute power, database storage, content delivery, and many other functionalities; in short, it is a large bundle of cloud-based services. Consider the electricity we need for our homes: we can either generate it ourselves or purchase it from an electric power company. Generating our own electricity requires setting up a lot of infrastructure, costing us a lot of money; purchasing it instead lets us pay as we use. Similarly, AWS is a cloud computing provider that gives us computing, storage, networking, and many more services that we pay for as we use them. Below is a short description of the currently available AWS products, kept simple enough that even a layman can understand it.

  1. Amazon Elastic Compute Cloud (EC2) Think of EC2 instances as Amazon's virtual servers: computers where we can host our applications and data. They come in different types optimized for compute, memory, storage, and graphics. Running an EC2 instance is like renting a server from AWS on an hourly basis. The two main attractions of EC2 are Elastic Load Balancing and Auto Scaling.
  2. AWS Elastic Load Balancing Elastic Load Balancing helps us distribute incoming application traffic across multiple Amazon EC2 instances. For example, if I have too much work to do, I ask another person for help; exactly the same applies here. If one server can't handle the traffic, we add a replica of that server and balance the load between them using the load balancer.
  3. AWS Auto Scaling Auto Scaling allows us to scale our Amazon EC2 capacity up or down automatically according to conditions that we define. With Auto Scaling, we can ensure that the number of Amazon EC2 instances we are using increases seamlessly during demand spikes to maintain performance and decreases automatically during demand lulls to minimize costs.
  4. AWS Identity and Access Management (IAM) This is where we create and manage our AWS users. IAM enables us to securely control access to AWS services and resources for our users. Using IAM we can create and manage AWS users and groups, and use permissions to allow or deny their access to AWS resources.
  5. AWS S3 We can use this service to store images and other files for websites, keep our backups, and share files between our AWS services. One interesting use of this service is hosting static websites. Many other AWS services also read and write content from S3.
  6. Amazon Virtual Private Cloud (VPC) Think of it as our private network in the cloud. Using AWS VPC we can keep all our AWS services on the same network, with an additional layer of security. It is like hosting all our AWS services in a single rack.
  7. AWS Lambda We use it to run our scripts without worrying about the server they run on. It is like asking AWS to spin up a computer whose one and only purpose is running our script.
  8. AWS RDS RDS is exclusively for hosting our databases. It is completely managed by AWS, so we do not have to worry about anything else: just create an RDS instance, upload the desired database, and forget about the other complications, because AWS manages the database for you. Currently, RDS supports the Amazon Aurora, MySQL, MariaDB, Oracle, Microsoft SQL Server, and PostgreSQL database engines.
  9. Route53 You can call it AWS DNS. We can use it to buy a domain name and manage its DNS records.
  10. AWS SES It is the email service offered by AWS. We can use it to relay our emails through AWS, which assures better reachability for our emails. Do not think about spamming; it would not be a good idea.
  11. AWS CloudFront CloudFront is the CDN service in AWS. AWS has a global network of edge locations and regional edge caches that cache copies of our content close to our website viewers. They ensure that end-user requests are served by the closest edge location. As a result, viewer requests travel a short distance, improving performance for the viewers.
  12. AWS ElastiCache It's the caching service offered by AWS. We can host both Redis and Memcached instances in this service. It is the same idea as RDS: let AWS worry about the server.
  13. AWS WAF Call it the AWS firewall; we can use it to protect our websites from attacks like DoS and DDoS, and to block bad requests to our network.
  14. AWS SNS It is the AWS messaging service. We can use it to send alerts to mobile devices, emails, ticketing systems, etc.
  15. AWS CodeCommit and CodeDeploy Call them AWS's GitHub; we can use these services for version control and to deploy our code to multiple EC2 instances.
  16. AWS Elastic Beanstalk Suppose you want to build a website and do not want to handle any of the server setup and deployment: you are in the right place. Use Beanstalk to upload your code and let AWS worry about hosting your content.
  17. AWS Direct Connect Direct Connect is like getting a dedicated leased line from your data center or on-premises network to AWS. Do not forget about your ISP, because we have to pay both them and AWS to make it happen.
  18. AWS Snowball If we have terabytes of data to upload to S3, we can pack a hard disk and mail it to AWS, and they will upload it to S3. Do not worry, they will send the hard disk back.
  19. AWS CloudFormation CloudFormation is an interesting service in AWS. It is the easy way to deploy your architecture in AWS: we use CloudFormation templates to deploy the servers. Forget the old way of creating servers manually; automate it using CloudFormation.
  20. AWS CloudWatch This service plays the role of a watcher. It alerts us about services that are misbehaving or getting disconnected. We can also keep track of all our server logs in CloudWatch.
  21. AWS CloudTrail It keeps track of what we are doing in the AWS console, telling us who is doing what in our AWS account.
  22. AWS Glacier It is for archiving our data from S3, like keeping a backup of your backup. Suppose you have a lot of data in S3 that you are not using right now but will need in the near future: move it to Glacier. Note that we can't access that data immediately.
  23. AWS SQS Call it the queue service in AWS. We can store our data in a queue for future processing. It comes in handy when we design de-coupled architectures.
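The decoupling idea behind SQS can be sketched in plain Python. The standard-library `queue` module below is just an illustrative stand-in for an SQS queue (the real service is accessed over an API), but the pattern is the same: producers enqueue work and return immediately, while consumers drain the queue at their own pace.

```python
import queue

# Stand-in for an SQS queue: the web tier and the worker tier
# never talk to each other directly, only through the queue.
jobs = queue.Queue()

def produce(task):
    jobs.put(task)  # web tier: enqueue and return immediately

def consume():
    processed = []
    while not jobs.empty():
        processed.append(jobs.get())  # worker tier: drain at its own pace
    return processed

produce("resize-image-1.jpg")
produce("resize-image-2.jpg")
print(consume())  # ['resize-image-1.jpg', 'resize-image-2.jpg']
```

Because neither side blocks on the other, the worker tier can be scaled, restarted, or paused without losing the queued work.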



These days, the main problems faced by marketers are bounces and complaints while sending mail, and SES helps you analyze both. Bounces are mainly caused by attempting to send to a non-existent recipient; complaints arise when the recipient indicates that they do not want to receive your message. Amazon SES forwards bounce and complaint notifications to you by email or sends them to an Amazon SNS topic, depending on your configuration.

Amazon SES lets you send marketing and transactional emails to customers quickly and cost-effectively. Through a simple API call, you get access to a high-quality, scalable email infrastructure to communicate with your customers. SES is an email delivery service for bulk and transactional systems. It supports both SMTP and API-based access, and it offers endpoints to receive information on email delivery.


SES helps businesses because it offers automatic filtering, feedback loops, an easy interface, and scalability. Authentication mechanisms such as DKIM signing and SPF are supported. DKIM (DomainKeys Identified Mail) allows senders to sign their email messages, and ISPs to use those signatures to verify that the messages are legitimate and have not been modified by a third party in transit. DKIM signing helps the deliverability of email messages, and the ISP can verify the sender's domain for authenticity. DKIM in SES can be set up in two ways: a) Easy DKIM using the SES console, or b) manual DKIM signing using the SES REST API.

SES offers endpoints to handle bounces and complaints. SES can be configured to handle these errors in the following ways: a) SNS, b) email forwarding.


Assume that you have a web application that uses AWS SES to send emails. Before sending, we queue messages using SQS (AWS Simple Queue Service). The application sends the email to the queue, a worker reads from SQS and hands the email to SES, and SES delivers it to the external recipient. Whenever a complaint arises, AWS SNS sends a notification to our complaints mailbox.
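The bounce and complaint notifications mentioned above arrive as JSON documents. Below is a minimal sketch of parsing one; the payload is an abridged, hypothetical example that follows the documented SES bounce-notification shape, so verify the field names against the notifications your own account receives.

```python
import json

# Abridged example of an SES bounce notification delivered via SNS
# (hypothetical values; field names follow the SES notification format).
payload = json.dumps({
    "notificationType": "Bounce",
    "bounce": {
        "bounceType": "Permanent",
        "bouncedRecipients": [{"emailAddress": "nobody@example.com"}],
    },
})

def bounced_addresses(raw):
    """Return the recipients that should be suppressed from future sends."""
    msg = json.loads(raw)
    if msg.get("notificationType") != "Bounce":
        return []
    return [r["emailAddress"] for r in msg["bounce"]["bouncedRecipients"]]

print(bounced_addresses(payload))  # ['nobody@example.com']
```

In practice a worker subscribed to the SNS topic would run logic like this and add each returned address to a suppression list.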

  • SES is a cost effective email delivery service.

  • Being cost effective makes it attractive for spammers.

  • SES evaluates your reputation on the basis of bounce percentage, complaints received and quality of emails.

  • SES takes these parameters seriously and can block production access when SES service is abused.

  • As long as the bounce percentage is low and the quality of emails is good, SES is a cost-effective service for mailing.
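The reputation bookkeeping described in the bullets above amounts to tracking a couple of percentages. Here is a small sketch; the 5% bounce and 0.1% complaint thresholds are illustrative assumptions, not official SES cut-offs, so check SES's own guidance for the real limits.

```python
def bounce_rate(sent, bounced):
    """Bounce (or complaint) percentage over a sending window."""
    return 0.0 if sent == 0 else 100.0 * bounced / sent

def reputation_ok(sent, bounced, complaints, max_bounce=5.0, max_complaint=0.1):
    # Thresholds are illustrative assumptions; SES publishes its own guidance.
    return (bounce_rate(sent, bounced) < max_bounce
            and bounce_rate(sent, complaints) < max_complaint)

print(reputation_ok(10000, 120, 3))  # True  (1.2% bounce, 0.03% complaint)
print(reputation_ok(10000, 800, 3))  # False (8% bounce)
```

Feeding these counters from SES sending statistics and alerting when `reputation_ok` turns false is essentially what the CloudWatch alarms described later automate for you.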

Features of Amazon SES

a) Authentication: It makes sure that you own the email address you are sending from. AWS supports authentication mechanisms such as DKIM, SPF, and DMARC.

b) High Deliverability: Amazon SES maintains a strong reputation among mail box providers by filtering spam and malicious content.

c) Dedicated IP Addresses: Basically, Amazon sends email through shared IP addresses, but customers who send large volumes of email can lease dedicated IP addresses for their exclusive use.

d) Monitoring: Amazon SES can capture information about the entire email response funnel, including the numbers of sends, deliveries, opens, clicks, bounces, complaints, and rejections. This data can be stored in an Amazon S3 bucket or an Amazon Redshift database, sent to Amazon SNS for real-time notifications, or analyzed using Amazon Kinesis Analytics.

e) Sender Reputation Management: You can use CloudWatch to create alarms that notify you when your bounce or complaint rates reach certain thresholds. With this information, you can take immediate action on issues that could impact your sender reputation.

f) Flexible Email Receiving: Received email can be stored in an Amazon S3 bucket.

g) Multiple Email Sending Interfaces: Amazon provides many interfaces to send emails, such as the SMTP interface, the AWS Command Line Interface (AWS CLI), and the AWS Software Development Kits (SDKs).

h) Mailbox Simulator : You can use the mailbox simulator to simulate successful deliveries, hard bounces, out-of-office responses or complaints.

i) AWS Integration : Amazon SES integrates seamlessly with other AWS services.
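To make the sending interfaces in (g) concrete: with the AWS SDK for Python (boto3) and configured credentials, a send boils down to one API call. The sketch below only builds and inspects the request payload rather than calling the live API; the addresses are hypothetical, and the structure mirrors the SES SendEmail API.

```python
def build_send_email_request(sender, recipients, subject, body):
    """Build the keyword arguments for boto3's ses.send_email call.

    With boto3 and configured credentials you would then run:
        boto3.client("ses").send_email(**request)
    """
    return {
        "Source": sender,
        "Destination": {"ToAddresses": list(recipients)},
        "Message": {
            "Subject": {"Data": subject},
            "Body": {"Text": {"Data": body}},
        },
    }

# Hypothetical addresses for illustration only.
req = build_send_email_request(
    "verified@example.com", ["user@example.com"], "Hello", "Test email body")
print(sorted(req))  # ['Destination', 'Message', 'Source']
```

Remember that in the sandbox both the `Source` and every recipient must be verified addresses.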

Advantage of using Amazon SES

a) Cost : AWS SES is budget friendly; in terms of service cost, Amazon SES appears to be the leader here. The service charges just $0.10 per thousand emails, which is the lowest among comparable email sending services. You may check the current AWS SES pricing page for details.

b) Deliverability : Amazon SES has its own filtering technologies: it scans each email and ensures that it meets ISP standards. SES automatically blocks emails that contain malicious or spam content.

c) Sending Statistics : SES keeps statistics on successful deliveries, rejected messages, bounces, and complaints. Real-time statistics are available in the AWS console.

d) Reliability : SES can be integrated with different applications through its SMTP interface.

e) Quota : For new accounts in the sandbox, the quota is restricted to 200 emails per day, so you should request production access as soon as you create an account. The default quota after you are granted production access is 10,000 emails per day. If you send high-quality content and approach that limit every day, Amazon SES detects your utilization and raises your limit by itself; you do not have to send a new request, as your quota is adjusted automatically to match your requirements.

f) Notifications : Amazon SES can send notifications to you either by email or through Amazon Simple Notification Service (Amazon SNS).

g) Scalability : Amazon SES is highly scalable.
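The cost and quota figures quoted in (a) and (e) are easy to turn into a back-of-the-envelope calculator. This sketch simply encodes the numbers stated above ($0.10 per thousand emails, a 200/day sandbox cap, a 10,000/day starting production quota); always check the current pricing page before budgeting.

```python
def ses_cost_usd(emails, price_per_thousand=0.10):
    # $0.10 per 1,000 emails, the rate quoted above (check current pricing).
    return emails / 1000 * price_per_thousand

def within_quota(emails_today, production=True, daily_quota=10000):
    # Sandbox accounts are limited to 200 emails/day; production accounts
    # start at a 10,000/day quota (both figures from the text above).
    limit = daily_quota if production else 200
    return emails_today <= limit

print(ses_cost_usd(50000))                  # 5.0 -> 50,000 emails cost about $5
print(within_quota(500, production=False))  # False -- sandbox cap is 200/day
```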

How to setup Amazon SES account

Step 1: Create an AWS account

When we load the AWS page, it prompts us either to log in or to register. Click on 'Create an AWS Account' as shown in the picture. As a new user, select the 'I am a new user' option, enter your email address or mobile number, and click 'Sign in using our secure server'.

Step 2: Enter login credentials

Enter the name, email address and password.

Step 3: Enter contact information

Enter your full address, phone number, etc.

Step 5 : Enter Payment Information

Here you have to enter your credit or debit card details, including the card number, expiration date, and card holder name. After that you can choose your billing address: either set it the same as the contact address entered earlier, or use a new address.

Step 6. Identity Verification :

In this identification process, you need to provide your phone number and click on the 'Call me now' button. A four-digit verification code will then be shown; enter the code on your phone when the call comes.

Step 7: Continue to select the support plan by clicking the 'Continue to select your support plan' button.

Step 8: Select your support plan and click on the 'Continue' button.

Step 9: Confirm the process when it asks you to, and click on the 'Launch Management Console' button to log in to your newly created account.

Step 10: Now that you have successfully created your AWS account, you can use the SES service. Go to the 'Services' tab in the upper left corner of your AWS console and click on the 'Simple Email Service' option.

Step 11: Here you can see the SES home page and all its options. Here you can verify the email address or domain you are going to use for mass mailing. I am going to verify my email address, so I clicked on 'Email Addresses' under Identity Management.

Step 12: Click on the 'Verify a New Email Address' tab and enter the email address to verify.

Step 13: When you submit the email address for verification, SES sends a verification email to the corresponding address.

Log in to that email account, open the verification email, and click on the verification link to change the status from pending to active.

How to test the email address by sending a test email

After verifying the corresponding email address, you can test it using the 'Send a Test Email' button. Just select the corresponding email address (if you have more than one) and click 'Send a Test Email'.


AWS SES is an excellent solution for anyone who needs a reliable, scalable, and inexpensive way to send or receive email. Amazon SES can reliably deliver merchandising, subscription, transactional, and notification email messages, and it eliminates the complexity and expense of building an in-house email solution or licensing, installing, and operating a third-party one. As far as cost is concerned, Amazon SES appears to be the leader: the service charges just $0.10 per thousand emails, which is the lowest among comparable email sending services.




According to a report by EMC Corporation, we are living in the information explosion era. What does that mean? Well, the world's data used to double every century; now, it doubles every two years. This explosion is driven by the Internet of Things, by mobile devices, and by our ability to generate more digital content than ever before. It is also fueled by enterprises all over the world transitioning to interfacing with their customers via Web 2.0 and mobile technologies. Here is an intriguing statistic on how quickly this is happening: the digital universe will grow from four zettabytes of data in 2013 to a whopping 44 zettabytes in 2020.

Where is this data coming from? Is it any different from what we used to deal with five or ten years ago? And, most importantly, what sets it apart when it comes to storing and analyzing it? Arguably, this trend began when businesses started to shift how they interact with their customers. If you were a retail business operating a fleet of stores in the United States 20 years ago, the bulk of the customer interaction data would come from your own cash registers in physical stores, tracking both purchases and payment methods. Today, you are likely operating a digital storefront on the web, and, in addition to the brick-and-mortar data, you are getting much more. For starters, you are getting very detailed web logs tracking how customers interact with your website. You are also tracking each individual customer profile, knowing when to target them for certain promotions, or even what items they may be interested in purchasing, all based on their past behavior or the demographics they belong to. Outside of your own digital footprint, you can also track your customers' sentiment on social media and through the search engines that they use to come to your website.

These characteristics are known as the 3 Vs of Big Data: data volume, data velocity, and data variety. In 2001, the industry analyst Doug Laney described Big Data using those 3 Vs, and the name stuck. They really do capture the essence of Big Data well; it wouldn't be an oversimplification to say that they have become a definition for Big Data. All three terms should be fairly self-explanatory, especially in the context of our previous example, but let's quickly walk through them one by one. Variety is about the fact that unstructured and semi-structured data is becoming as strategic as the traditional structured data you would store in a relational database. Volume speaks for itself: data is coming in from new sources, and increased regulation in multiple areas means that storing more data for a longer period of time becomes a necessity. Velocity is the requirement that machine data, as well as data coming from new sources, be ingested at speeds not even imagined a few years ago. So, whenever anybody says that Hadoop is a Big Data technology, what they are really saying is that Hadoop was designed from the ground up to deal with all three of these Vs. Specifically, Hadoop is well suited for any scenario where the volume and variety of data overwhelm the existing systems, and where the data velocity is also too much for them to handle.


Back in 2006, most traditional enterprises were still blissfully unaware of big data challenges and could not yet appreciate its opportunities. Big data was the domain of Internet giants, and one such company came to a point where it had to achieve a lot of the business outcomes we reviewed earlier. Given its scale and the size of the data sets it started to accumulate, it also had to solve this challenge in a cost-effective manner. After all, since the value of big data is typically proportional to the volume of data available for analysis, it made no sense to pay for traditional databases and/or run them on custom hardware appliances. Whatever the solution, it had to:

- Be free from draconian licensing costs

- Run on commodity hardware without requirements for custom servers and/or networking

- Scale linearly with the growth of data volume

- Afford efficient data processing and analytics that would scale well with the size of the data.

The name of the company was Yahoo! Inc. and a guy by the name of Doug Cutting, who had just joined it, had a solution in mind. A few years before, he and a friend of his, Mike Cafarella, started hacking on a project named after Doug's son's elephant toy: Hadoop. It would be fair to say that, while the ideas behind Hadoop were clearly conceived at Google, Yahoo! gave us the Hadoop we know and love today.

Hadoop = HDFS + YARN

From its inception to this day, Hadoop has focused on providing a scalable, reliable platform for storage and data analysis that runs on commodity hardware and is fault tolerant. Storage is offered by HDFS (Hadoop Distributed File System) and the processing capabilities are offered by YARN (Yet Another Resource Negotiator). Unlike databases, Hadoop does not know what kind of data will be stored as files in HDFS. It does not know whether that data has certain fields in it, or whether its structure is opaque. It is only during the processing step that some kind of a structure will be imposed on those raw files. This is known as the schema-on-read approach to data management: you just dump whatever raw data comes your way into files in HDFS and you do not think about it until the processing step. The repository of unstructured data is called a data lake.
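The schema-on-read idea can be sketched in a few lines: raw text is dumped as-is, and a schema (here a hypothetical three-column one) is imposed only when the data is read for processing.

```python
import csv, io

# Raw lines dumped into the "data lake": no schema enforced at write time.
raw = "1,alice,9.5\n2,bob,7.25\n"

def read_with_schema(raw_text):
    """Impose a schema only at read time (schema-on-read)."""
    # Hypothetical schema for illustration: column name plus type cast.
    schema = [("id", int), ("name", str), ("score", float)]
    rows = []
    for record in csv.reader(io.StringIO(raw_text)):
        rows.append({name: cast(value)
                     for (name, cast), value in zip(schema, record)})
    return rows

print(read_with_schema(raw)[0])  # {'id': 1, 'name': 'alice', 'score': 9.5}
```

A different job could read the very same raw bytes with a different schema; that flexibility is the point of the data-lake approach.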

Hadoop is basically about two components. First, there is YARN, which manages all of the CPU and memory, and then there is HDFS, which manages all of the direct-attached storage. Both come from Apache Hadoop, but they can also be used independently. Some traditional enterprise storage vendors, for example, provide HDFS-compatible layers on top of their old-school storage products, and this works because HDFS and YARN are independent: as long as a storage product speaks an HDFS-compatible API, YARN will be more than happy to work with it. This loose coupling of the YARN and HDFS APIs gives Hadoop customers a lot of flexibility. In fact, Hadoop becomes the kind of platform that frees you to capture any data and store it for as long as you need it, and to analyze data for any application you already use or any future application you might create. You can also use the platform to explore your data with any combination of batch, interactive, search, and streaming analytics, and do quite sophisticated machine learning on top of it. Finally, you can deploy all these capabilities however you like, and change those deployments whenever it suits your needs. As a technology, Hadoop is really all about giving you ultimate flexibility in dealing with your Big Data challenges. Interestingly enough, that flexibility may owe a lot to Hadoop being an open source project. So, let's look into how Hadoop really became Hadoop: it happens to be one of the projects developed under the Apache umbrella, and, speaking of which, we should always remember that it is Apache Hadoop.


Suppose you happen to be a data scientist in an enterprise organization who just came into possession of a most precious data set. The data set consists of files, but it is also large (which means you cannot really store it on your server's hard drive, as you normally would). In fact, you have to use multiple servers just to store the data. And remember: this is your most precious data set, so you need to make sure that, when you store it on multiple servers, you can still read it even if any of the drives in any of the servers fail. This is precisely what HDFS (the Hadoop Distributed File System) has been designed for, and it does that while maintaining a very familiar, user-friendly interface.

From the end user's perspective, HDFS looks and feels like a regular filesystem: the one you are used to on your desktop. Just like any filesystem, HDFS stores data in files, and files are grouped together into a tree of subdirectories. HDFS splits all the data stored in the files into a series of chunks called blocks, and its blocks are really big. It stores them on different servers in the cluster, and multiple copies of the same block exist to achieve reliability. If a server hosting a block fails, or experiences a faulty hard drive, HDFS will still give you that data back; it will just have to read it from a different server in the cluster. As with any distributed filesystem, HDFS hides all of the bookkeeping complexity from its clients: you just read the files, and the blocks come to you. Unlike most distributed filesystems, however, HDFS allows one critical piece of bookkeeping information to be given back to the client: it lets the client know where the replicas of each block are in the cluster. In other words, if a client asks, HDFS gives that client a full map.
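The block-splitting idea is simple enough to sketch directly. HDFS defaults to 128 MB blocks; the sketch uses a tiny `block_size` just to make the result visible.

```python
def split_into_blocks(data: bytes, block_size: int = 128 * 1024 * 1024):
    """Split a file's bytes into fixed-size HDFS-style blocks.

    128 MB is the common HDFS default; a tiny block_size is passed
    below purely for demonstration.
    """
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

blocks = split_into_blocks(b"abcdefghij", block_size=4)
print(blocks)  # [b'abcd', b'efgh', b'ij'] -- the last block may be short
```

Each of these blocks is what gets placed, independently and with replicas, on different DataNodes.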

HDFS Components

Every HDFS cluster is comprised of one or two NameNodes, and as many DataNodes as your IT budget will allow. With just one NameNode, if it goes down, your whole HDFS deployment is unavailable (even though the DataNodes may be running just fine). With two NameNodes running in an Active/Standby configuration, the Standby can take over when the Active one fails or needs to be brought down for maintenance. This is called a High Availability (HA) configuration.

NameNode (one or two per cluster)

- Is the master service of HDFS

- Determines and maintains how the chunks of data are distributed across the DataNodes

- Actual data never resides here, only metadata (e.g., maps of where blocks are distributed).

DataNode (as many as you want per cluster)

- Stores the chunks of data, and is responsible for replicating the chunks across other DataNodes

- Default number of replicas on most clusters is 3 (but it can be changed on a per-file basis)

- Default block size on most clusters is 128MB.
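The NameNode's block map (which block lives on which DataNodes, with the default replication of 3) can be sketched like this. Real HDFS placement is rack-aware; the round-robin assignment below is only an illustrative stand-in.

```python
import itertools

def place_replicas(blocks, datanodes, replication=3):
    """Assign each block to `replication` DataNodes (round-robin sketch).

    Real HDFS placement is rack-aware; this only illustrates the kind
    of block map the NameNode maintains as metadata.
    """
    ring = itertools.cycle(datanodes)
    return {b: [next(ring) for _ in range(replication)] for b in blocks}

block_map = place_replicas(["blk_1", "blk_2"], ["dn1", "dn2", "dn3", "dn4"])
print(block_map["blk_1"])  # ['dn1', 'dn2', 'dn3']
print(block_map["blk_2"])  # ['dn4', 'dn1', 'dn2']
```

Note that only this mapping lives on the NameNode; the block contents themselves live exclusively on the DataNodes.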



The typical Hadoop cluster may look something like this.

First, there will be a few master nodes dedicated to running daemons that coordinate the overall activities of the cluster.

In HDFS's case, that overall coordination is done by a NameNode daemon, and you can see it running on master node number 1.

But you can also see other ones, all of them colored in green, running on master nodes 1 to 4.

While the master nodes are extremely critical to the overall health and performance of the cluster, strictly speaking, they don't actually do any real work: they don't store blocks of information, nor do they run data processing tasks.

You can see worker nodes going from 1 to 7, each of them running daemons colored in blue.

One of those daemons is HDFS's DataNode, and the other one is the YARN NodeManager.

There may also be a utility node; such nodes are typically not considered members of the cluster, but they serve as gateways into it.

HDFS Architecture


From HDFS's perspective, a cluster consists of a NameNode coordinating a whole bunch of DataNodes, while also providing three fundamental services to every single HDFS client.

The first one is metadata management. The other one is namespace management, and the third one is block management.

Metadata management deals with keeping track of permissions and ownership of files and folders, and any kind of extended metadata, such as block size, replication level, user quotas, or anything else that is specific to HDFS.

Namespace management provides the hierarchy of the filesystem: all of the folders are rooted at / and you traverse that tree to get to the files.

Block management maintains the block map: it knows what blocks belong to what files, and where on the cluster they are stored.

Once the NameNode is up and running, it makes itself available on the network, and a whole bunch of DataNodes initiate connections to it.

Some DataNodes are active, shown in blue here, and some are inactive, down, or offline: DataNode number 2, shown in gray here.

The NameNode keeps track of all of this; its job is to keep an eye on all the DataNodes that are connected to it, and it needs to know when a given DataNode goes down,

so it can reroute requests for blocks to other DataNodes.

So, for example, even though block 1-2-3 is hosted on three DataNodes here (DataNode 1, DataNode 2, and DataNode 4), requests for block 1-2-3 can only be served from DataNode 1 and DataNode 4.

The NameNode keeps track of that through heartbeats. For example, DataNode 1 will keep sending a heartbeat, basically saying, "Hey, I'm still here. This is my latest heartbeat."

DataNode number 2, on the other hand, will not provide a heartbeat, which makes the NameNode realize that DataNode 2 has gone down.

The NameNode will then instruct DataNode 1, the one that is active and actually has a copy of block 1-2-3, to replicate block 1-2-3 to another live DataNode, restoring the replication level.
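The heartbeat bookkeeping just described can be sketched as a toy model. The 10-second timeout below is an arbitrary assumption for the demo, not the HDFS default.

```python
class NameNodeSketch:
    """Toy heartbeat tracking: a DataNode that stops reporting within
    the timeout is considered dead, and its blocks must be re-replicated
    onto the surviving nodes."""

    def __init__(self, timeout=10.0):
        self.timeout = timeout            # illustrative timeout, in seconds
        self.last_heartbeat = {}

    def heartbeat(self, datanode, now):
        self.last_heartbeat[datanode] = now

    def live_nodes(self, now):
        return [dn for dn, t in self.last_heartbeat.items()
                if now - t <= self.timeout]

nn = NameNodeSketch(timeout=10.0)
nn.heartbeat("dn1", now=0.0)
nn.heartbeat("dn2", now=0.0)
nn.heartbeat("dn1", now=30.0)   # dn2 never reports again
print(nn.live_nodes(now=30.0))  # ['dn1'] -- dn2 is declared dead
```

In real HDFS the same bookkeeping triggers re-replication of every block the dead node was hosting.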

So, let's take an example of a client, shown here in the upper left corner, trying to write a file to HDFS.

What happens is this: first, the client sends a request to the NameNode to add a file to HDFS, then, it receives a reply with basically a NameNode telling it "Here is your lease to a file path"

Then, the client will keep iterating over all the blocks that it has, and, for every block, the client will request that the NameNode provide the block ID and a list of destination DataNodes.

Once the client has that information, the NameNode gets out of the way completely, and the client proceeds to write directly to the first DataNode in the list.

The replication pipeline basically takes care of making sure that DataNode number 1 writes the copy of the block to DataNode number 2, which forwards it to the next DataNode, and so on.
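The write path above can be condensed into a toy sketch: the client writes only to the first DataNode in the pipeline, the copies propagate down the chain, and the NameNode merely records where the block ended up.

```python
def write_block(namenode_map, block_id, data, pipeline):
    """Toy sketch of the HDFS write path: the client sends the block to
    the first DataNode; the replication pipeline forwards it down the
    chain, and the NameNode only records the resulting locations."""
    stored = {}
    for datanode in pipeline:             # dn1 -> dn2 -> dn3 forwarding chain
        stored[datanode] = data
    namenode_map[block_id] = list(pipeline)  # NameNode metadata update
    return stored

namenode_map = {}
copies = write_block(namenode_map, "blk_7", b"payload", ["dn1", "dn2", "dn3"])
print(namenode_map["blk_7"])  # ['dn1', 'dn2', 'dn3']
print(len(copies))            # 3
```

The key property the sketch preserves is that the NameNode never touches the data itself, only the map.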


HDFS supports the notion of users and groups of users.

HDFS offers classic POSIX filesystem permissions for controlling who can read and write (e.g., -rwxr-xr--)

HDFS also offers extended Access Control Lists (ACL).
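A permission string like `-rwxr-xr--` decodes into owner/group/other read-write-execute triplets; here is a small parser to make that concrete.

```python
def parse_mode(mode):
    """Decode a POSIX permission string like '-rwxr-xr--' into
    (owner, group, other) permission sets, as an HDFS listing shows."""
    kinds = ("owner", "group", "other")
    out = {}
    for kind, triplet in zip(kinds, (mode[1:4], mode[4:7], mode[7:10])):
        out[kind] = {flag for flag, ch in zip("rwx", triplet) if ch != "-"}
    return out

perms = parse_mode("-rwxr-xr--")
print(sorted(perms["owner"]))  # ['r', 'w', 'x']
print(sorted(perms["other"]))  # ['r']
```

So `-rwxr-xr--` means the owner can read, write, and execute, the group can read and execute, and everyone else can only read.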


Apache Hadoop YARN (Yet Another Resource Negotiator) is a cluster management technology.

Consider analyzing all of the data you have stored. Before Hadoop, this whole process required writing an algorithm, running it on a single node, and making it act as a single client to the HDFS data, essentially sifting through all of it record by record. As you may imagine, if your data set qualifies to really be called big data, this will be an extremely slow process.

YARN Components

Resource Manager (one or two per cluster) that provides

- Global resource scheduler

- Hierarchical queues

Node Manager (running next to the DataNode)

- Encapsulates RAM and CPU resources available on a worker node into units called YARN containers


- Manages the lifecycle of YARN containers

- Container resource monitoring

Application Master (created on-demand)

- Manages application scheduling and task execution

- Typically, specific to a higher-level framework (e.g. MapReduce Application Master).

As with HDFS, YARN runs its coordination daemon called ResourceManager on one of the master nodes; master node number 2, in our case.

Also, there is a whole bunch of node manager daemons co-located with DataNodes running on worker nodes: worker node 1 to worker node 7.

The ResourceManager component provides a number of services that perform three main duties: first, there is scheduling, then, there is node management, and then, there is security.

The YARN scheduler is a single component that controls resource usage according to parameters set by Hadoop administrators.

This allows for greater efficiency by letting different organizations use a centrally pooled set of cluster resources, also known as cluster multi-tenancy.

The ResourceManager also coordinates the NodeManagers. This is very similar to how HDFS's NameNode coordinates all of the DataNodes.

Just like the NameNode, the ResourceManager does so by monitoring the NodeManagers for heartbeats, sent by the NodeManagers every second by default and expected within 10 minutes.

The ResourceManager also provides a few security capabilities.

The YARN NodeManager is a daemon service that runs on each worker node; it manages local resources on behalf of the requesting services, tracks the health of the node, and communicates its status to the ResourceManager.

The chief duty of the NodeManager is to use the available CPU and RAM capacity on the node to run code, typically written in Java, and given to it by YARN scheduling requests.

The NodeManager reacts to any such valid request by allocating the required amounts of CPU and RAM capacity and spinning off a YARN container, which can then use that much CPU and RAM for running user-submitted code.
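That bookkeeping can be sketched in a few lines of shell. Everything here is hypothetical for illustration: the `allocate_container` function, and the sample node capacity of 8 vCores and 16384 MB of RAM (real NodeManager capacities come from the cluster configuration).

```shell
# Hypothetical sketch of a NodeManager deciding whether it can spin off a
# container: grant a request only if both the vCore and RAM asks fit the
# node's remaining capacity, then subtract the grant from what is free.
free_vcores=8
free_mb=16384

allocate_container() {
  local want_vcores=$1 want_mb=$2
  if [ "$want_vcores" -le "$free_vcores" ] && [ "$want_mb" -le "$free_mb" ]; then
    free_vcores=$(( free_vcores - want_vcores ))
    free_mb=$(( free_mb - want_mb ))
    echo granted
  else
    echo rejected
  fi
}

allocate_container 4 8192   # fits within 8 vCores / 16384 MB
allocate_container 6 4096   # rejected: only 4 vCores remain
```

The real NodeManager does far more (process isolation, monitoring, cleanup), but the admission decision has exactly this shape: a request either fits the remaining capacity or it does not.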

YARN container

A container is a unit of work within a YARN application that is allocated specific CPU and memory resources by the NodeManager on behalf of the ResourceManager.

The container is the component that performs the work of the specific YARN application.

A container is also launched each time a new ApplicationMaster is required, when the ResourceManager makes such a request.

When a job is executed, the ApplicationMaster requests additional resources from the ResourceManager via the NodeManager on which it is running. If additional resources can be allotted, the ResourceManager then requests additional containers to run the tasks across the cluster. So, let's go back to the NodeManager.


Java Installation

# yum update
# cd /opt/
# wget --no-cookies --no-check-certificate --header "Cookie:; oraclelicense=accept-securebackup-cookie" ""
# tar xzf jdk-8u141-linux-x64.tar.gz
# cd /opt/jdk1.8.0_141/
# alternatives --install /usr/bin/java java /opt/jdk1.8.0_141/bin/java 2
# alternatives --config java

There are 4 programs which provide 'java'.

  Selection    Command
  -----------------------------------------------
* 1            /opt/jdk1.7.0_71/bin/java
+ 2            /opt/jdk1.8.0_45/bin/java
  3            /opt/jdk1.8.0_91/bin/java
  4            /opt/jdk1.8.0_141/bin/java

Enter to keep the current selection, or type selection number: 4

# alternatives --install /usr/bin/jar jar /opt/jdk1.8.0_141/bin/jar 2
# alternatives --install /usr/bin/javac javac /opt/jdk1.8.0_141/bin/javac 2
# alternatives --set jar /opt/jdk1.8.0_141/bin/jar
# alternatives --set javac /opt/jdk1.8.0_141/bin/javac
# java -version

In order to configure the environment variables:

# export JAVA_HOME=/opt/jdk1.8.0_141
# export JRE_HOME=/opt/jdk1.8.0_141/jre
# export PATH=$PATH:/opt/jdk1.8.0_141/bin:/opt/jdk1.8.0_141/jre/bin

Insert all of these environment variables into the /etc/environment file so they are loaded automatically on system boot.

2) Create a user named hadoop in order to avoid running everything as root:

# adduser hadoop
# passwd hadoop
# su - hadoop
$ ssh-keygen -t rsa
$ cat ~/.ssh/ >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
$ ssh localhost
$ exit

Download Hadoop:

$ cd ~
$ wget
$ tar xzf hadoop-2.6.5.tar.gz
$ mv hadoop-2.6.5 hadoop

Edit the hadoop user's ~/.bashrc file and append the following at the end of the file:

export HADOOP_HOME=/home/hadoop/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

$ source ~/.bashrc

to apply the changes.

Edit core-site.xml:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

Edit hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
  </property>
</configuration>

Edit mapred-site.xml:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

Edit yarn-site.xml:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

$ hdfs namenode -format
$ cd $HADOOP_HOME/sbin/
$ ./start-dfs.sh
$ ./start-yarn.sh

Access Hadoop: the Hadoop NameNode web UI starts on port 50070 by default. Open your server on port 50070 in your favorite web browser to get information about the Hadoop cluster and all of its applications, about the secondary NameNode, and about the DataNodes.
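Before starting the daemons, it can help to verify that the variables appended to ~/.bashrc above are actually visible in the shell. A minimal sanity-check sketch, assuming the sample paths from the steps above; the `check_env` helper itself is hypothetical, not part of the official install:

```shell
# Sample values from the installation steps above; adjust to your own paths.
export JAVA_HOME=/opt/jdk1.8.0_141
export HADOOP_HOME=/home/hadoop/hadoop

# Hypothetical helper: report any required variable that is unset or empty.
check_env() {
  local missing=0 v
  for v in JAVA_HOME HADOOP_HOME; do
    if [ -z "$(eval echo "\$$v")" ]; then
      echo "missing: $v"
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "environment OK"
}

check_env
```

If either variable is missing, fix ~/.bashrc and re-run `source ~/.bashrc` before invoking `hdfs namenode -format` or the start scripts.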

For a multi-node cluster, the same steps are extended across the master and worker machines.


By now, you should understand that Hadoop = HDFS + YARN: HDFS for data storage, and YARN for resource management and scheduling. Together, these two enable the rest of the Hadoop ecosystem, most of which consists of parallel data processing frameworks, and allow you to unlock the business value of Big Data.



Introduction to Elastic Beanstalk

Elastic Beanstalk is an orchestration service which uses the platform-as-a-service model to integrate multiple AWS services for application deployment and management. The services that Beanstalk integrates are EC2, S3, ELB with Auto Scaling, and CloudWatch. Elastic Beanstalk supports Ruby, PHP, Python, .NET, Java, and Node.js applications, and web servers such as Apache and IIS. We have options to deploy a project via a Zip file, WAR file, Docker, or Git. Elastic Beanstalk enables us to auto-scale and load-balance applications for high-traffic environments with minimum time. It also allows us to view information like metrics, events, logs, health status, etc.

How to deploy an HA application using Beanstalk?

The following steps discuss how to deploy applications using Elastic Beanstalk.

Step I) Login to the AWS console.

Step II) Switch to the Elastic Beanstalk Dashboard and click on the link "Create New Application".

Step III) Follow the steps posted below.

III.1) Application Info: To start creating the Elastic Beanstalk application, we need to set a name for the application that we intend to deploy.

III.2) New Environment:
option 1) Web Server Environment: this option sets up the Elastic Beanstalk application on a single instance or on load-balanced, auto-scaled instances.
option 2) Worker Environment: this option is used where operations take a long time to complete, such as tasks for image/video processing, generating zipped archives, etc. The worker tier does not directly respond to HTTP requests. Instead, it offloads long-running processes from your web tier using SQS.
Here we use the Web Server environment to demonstrate a web application such as WordPress.

III.3) Environment Type:
Predefined configuration: select PHP from the dropdown menu.
Environment type:
option 1: Load-balancing, Auto Scaling Environment
option 2: Single-instance Environment

III.4) Application Version: We have a few options here to upload the application. Either we can upload the application using "Upload your own", or use an S3 URL to fetch the uploaded application from an S3 bucket. In this case we are trying to upload a WordPress application.

Deployment Preferences
Deployment policy: This option controls how the deployment is performed on the instances launched using Auto Scaling groups. It comes in handy when we update our application in batches so as to avoid downtime when deploying. More details on the available deployment policies can be found in the AWS documentation.

III.5) Environment Info
Environment name: provide an environment name.
Environment URL: a URL will be generated in this field which is used for accessing the application. You can test the availability of the generated URL using the "check availability" button.
Description: (optional)

III.6) Additional Resources
If you want to deploy the database of the application in RDS, select both the options listed above.

III.7) Configuration Details
Instance type: select the instance type you wish to create.
EC2 key pair: select the key pair you wish to set for the instances launched.

III.8) Environment Tags
Custom tags can be created to identify the instances launched.

III.9) RDS Configuration
Select the appropriate DB engine.

Check the AWS doc for more details on the RDS Configuration options.
III.10) VPC Configuration
Select the appropriate VPC, subnet, and security groups. More details on the VPC configuration options are available in the AWS documentation.

III.11) Permissions
Select the appropriate instance profile and service role.

III.12) Review Information
Check the information on the final page and select the "Launch" button to launch the application using Elastic Beanstalk.

Step IV) Once you have the application deployed, you will see the following page in the Elastic Beanstalk Dashboard. Select the environment name we created earlier; this will load the environment page. You can click on the configuration page to customize the behaviour. The configuration page shows the current configuration of your environment and its resources, including EC2 instances, ELB, notifications, and health monitoring settings. Use the settings on this page to customize the behaviour of your environment during deployments, enable additional features, and modify the instance type and other settings that you chose during environment creation. This includes tweaking auto-scaling policies such as min/max instance count, availability zones, scaling cooldown time, scaling triggers and time-based scaling, as well as the monitoring interval, the environment type, and the deployment and update policy. It also allows you to set a maintenance window for carrying out updates at a predefined time.

Step V) Load the URL generated from the Elastic Beanstalk dashboard to load the deployed application. This page displays the WordPress web application we uploaded via Elastic Beanstalk. The installation wizard has to be continued to install WordPress.

Conclusion
We used a WordPress installation to show how a PHP application can be deployed with HA options using Elastic Beanstalk. The same tutorial can be extended to deploy a variety of other applications.


The Internet is a minefield, with new and advanced threats casting a fearful specter of deep trouble. With the recent WannaCry ransomware attacks on computing systems all over the world, the danger is real and alarming. It seems that no security measure is ever enough, and open-source software is quite the playground for malicious hackers. It's in this context that Internet Bug Bounty (IBB) has managed to raise funding for an important measure: rewarding security researchers for "responsibly" disclosing any vulnerabilities they find in open-source software. This helps in comprehensively identifying loopholes in open-source software through which security threats could potentially make an entry. The funding for this IBB venture is coming from Facebook, GitHub, and the Ford Foundation, which are donating $100,000 each to this mission, bringing the total donation to $300,000. These entities hope the reward will play its part in getting the Internet secured by strengthening open-source software.

More about IBB
IBB's inception was in 2013, and it was started with the help of HackerOne, the bug bounty platform provider. HackerOne is still behind the operation of the platform. Back then, IBB was sponsored by Microsoft and Facebook along with HackerOne. Facebook has now renewed its sponsorship with $100,000, with the Ford Foundation and GitHub sponsoring similar amounts. This should give security researchers the necessary encouragement to do their bit. Over its existence, IBB has awarded $616,350 in bug bounties to security researchers who have responsibly disclosed vulnerabilities in open-source software. Last year alone, IBB awarded $150,000 for over 250 vulnerabilities disclosed by researchers. IBB claims that 100% of its donations are spent on rewarding and encouraging security research.
The Heartbleed vulnerability was discovered in 2014, and IBB rewarded a bounty of $15,000 to Neel Mehta, the Google security researcher who reported it. Other high-profile vulnerabilities reported include Shellshock, which fetched a reward of $20,000, and ImageTragick, which fetched $7,500. The IBB panel contains security experts who define program guidelines and allocate bounties to the areas most in need of security research.

Internet Bug Bounty and Core Infrastructure Initiative
There are recognized researchers in IBB who can identify and uncover vulnerabilities in open-source software such as Phabricator, Ruby, RubyGems, PHP, OpenSSL, Python, and others. The IBB is in some ways similar to the Core Infrastructure Initiative (CII), particularly in its end goals. The CII helps tech companies collaborate to identify and fund open-source projects requiring assistance. The developers are allowed to continue working under open-source community norms. The IBB rewards security research that successfully identifies vulnerabilities present in open source as well as other critical software.

HackerOne Advanced Security Platform
HackerOne is recognized as the top hacker-powered security platform, with the most trusted hackers in the world. Reportedly, its services are being used by over 800 organizations, including such illustrious names as General Motors, Nintendo, Qualcomm, Twitter, Starbucks, GitHub, Panasonic Avionics, and even government departments such as the US Department of Defense, to detect critical vulnerabilities in software before criminals exploit them and wreak havoc. Back in June 2017, HackerOne released its Hacker-Powered Security report, which described some notable findings. One of these was the average bug bounty for detecting a critical vulnerability, which stands at $1,923. The highest average bounty paid for detecting a critical vulnerability is $4,491, in the transportation industry.
Next up is the technology sector, which paid an average of $2,015. Low down on the list are the health care and education sectors, which paid an average of $643 and $317 respectively for critical vulnerabilities.

Crowd-sourced Security Testing Pros and Cons
Security threats are so vast that research by a few individuals working for their respective companies can neither cover the entire scope of threats and vulnerabilities nor significantly benefit the global public. The scope of public security research is much greater, though. Such research has resulted in many critical vulnerabilities being resolved throughout the history of the Internet. Rewards motivate hackers to report the vulnerabilities they detect, and no one can do it better than these public-friendly hackers themselves. That's what the bug bounty concept is all about. It's probably the most foolproof way to deal with the increasingly complex dangers and attacks faced by the Internet browsing public and open-source software. There are potential issues, though. This kind of crowd-sourced security testing could generate a significant number of vulnerability reports, which is a good thing, but too many for the open-source projects participating in the IBB to study and process. While comprehensive vulnerability reports are an indication that security loopholes are being analyzed and reported in detail, you can't rule out the unintentional submission of false reports. There are people with various skill levels, some not as skillful as others and not traditional developers, trying to detect flaws.

More Full-time Human Resources Needed
The detected vulnerabilities must also be removed from the code, or they could be used by malicious hackers to cause disruption. In such instances, the whole purpose of the IBB program is lost.
Unless there are sufficient developers to act on the vulnerability reports they get, the program cannot serve its purpose. It's a challenge that has been acknowledged by GitHub's vice-president of security, Shawn Davenport. More financial and technical resources, as well as full-time human resources, need to be channeled into the effort. The Core Infrastructure Initiative was formed by the Linux Foundation. The objective was to provide critical open-source projects with much-needed financial resources so that more developers could be hired to improve overall security through greater responsiveness to identified threats. CII is sponsored by tech giants such as Google, Facebook, Microsoft, IBM, Intel, Adobe, Amazon, Qualcomm, Cisco, HP, and Huawei. With HackerOne supporting the mission technically, you can expect this IBB project to significantly contribute to ensuring security in open-source software. But a lot more needs to be done. Let's hope that projects like the IBB and CII keep expanding their areas of reach and generate more funding to give open-source software developers the resources to carry out the required changes that can ensure security for the end user.


  While the government has made great strides in technology to ensure faster processing of applications, things are still not always user-friendly. The recent confusion and queuing up at physical centers for the mandatory PAN-Aadhaar linking is proof of that. People face a lot of inconveniences since official websites hang up or return strange results even after applicants have entered the required data and filled all the spaces in the online forms, forcing them to gather at physical centers, but even there the systems aren’t able to handle the load.   Server Issues on Income Tax Site Hassle the Public A lot of that can be blamed on server issues faced by the various government websites. Server issues have struck important websites such as IRCTC, which is the Indian Railways website, and the income tax (IT) department website. In fact, the government was forced to extend the last date for income tax returns (ITR) for the 2017-2018 assessment period from July 31, 2017, to August 5, 2017, as a result of its servers being unable to accommodate the load of traffic and applications. The Deccan Chronicle reports that technical glitches in the government website have also affected the mandatory PAN card–Aadhaar linking. As a result, people are queuing up at physical Aadhaar centers for linking, and even that seems to be fraught with glitches. The deadline for the linking has been extended to August 31. The income tax website keeps facing server issues now and then. Even apart from the extension of the last date to file income tax returns, the IT website has been having problems periodically. In May, the Times of India reported that the IT department’s e-portal service did not launch smoothly in Bengaluru’s taxpayers’ lounge. The issue was blamed on a system upgrade for protecting the computers from the WannaCry ransomware. Other parts of the country also faced this problem since the system upgrade was being carried out centrally in New Delhi.   
Deadline Extensions Inevitable Traders in Tumkur, Karnataka could not carry out online filing of their VAT (value added tax) returns since the server couldn’t connect. With May 20 being the initial deadline to file VAT returns, it sent the traders into panic mode. If the tax amount exceeds Rs. 10,000, it must be paid online, but each time the traders tried paying online the transaction would fail. It was clear that the government needed to extend the deadline since traders could be penalized for not filing tax returns by the 20th of each month. In this case, it isn’t the traders’ fault at all.        Private Tax Return Online Platforms In fact, private tax return filing platforms registered with the government’s IT departments have made it easier for salaried individuals to file their returns. These are more user-friendly than the government site since you’re given guidelines on how to fill in your details. All you need to do is:

  • Upload Form 16
  • Enter your personal information including details of any tax paid
  • Review the information that’s filled out for you by the portal
  • E-file your return
  • Then look for the ITR-V acknowledgment on your email from the IT department, and
  • E-verify the information on the income tax website  
So why are these registered private IT filing websites more user-friendly? They do charge a fee for their services, but unlike the income tax online portal they don't leave the user to handle everything singlehandedly. They also use a cloud-based user interface without the user having to download any software. There are reputable cloud service providers which government websites could employ for better server management and more efficient functioning. Indian Railways has deployed the RailCloud virtual server, featuring a built-in security system, to ensure faster connectivity. It employs cloud computing and was developed by the RailTel public sector undertaking for around Rs. 53 crores. It is expected to optimize server management and resources. It can deploy server resources faster, which reduces operating costs. But with cloud services such as AWS, companies don't need to make such a massive initial investment.

AWS – Amazon Web Services
When you think of world-class cloud services, Amazon Web Services (AWS) immediately springs to mind. It's got a suite of services which governmental organizations and companies would do well to make use of. It offers management tools, application services, developer tools, analytics, etc. It also offers efficient database migration solutions.

How a Cloud Services Platform Helps
A cloud services platform offers many advantages:
  • For starters, cloud computing helps client organizations to access, maintain and manage their servers and databases, and
  • Access a range of application services through the Internet.
The cloud services platform maintains all the required hardware for this. Companies only need to manage and use all the data through a web application. Databases can be accessed and forms can be processed faster since the data and resources needed are just a click away. The organization becomes more agile and responsive. The time taken is significantly reduced. Cloud computing companies have the infrastructure and experience to maintain the required hardware and carry out the transfer of data efficiently. There’s also massive savings in cost! Agencies, organizations, and companies don’t need to maintain data centers. With cloud computing, the service providers maintain all the required hardware. Clients can focus their resources on other aspects of their business and not worry about the infrastructure. Cloud service providers such as AWS attain higher economies of scale since they have a large number of customers aggregated in the cloud. So the price for using these cloud services is competitive, and there’s no massive upfront investment involved.   Security Measures With a reliable cloud services provider such as AWS, security is ensured too. Advanced security measures are employed since there is so much important data to deal with for governmental organizations. Data in the cloud is secured, and AWS offers security tools such as:
  • AWS Identity & Access Management for managing encryption keys and user access
  • Amazon Inspector for application security
  • AWS Shield for DDoS protection
  • AWS WAF for filtering malicious web traffic
  • AWS Certificate Manager for security certificates
  • AWS Organizations for managing multiple AWS accounts, and other solutions
Government agencies and organizations in India can significantly benefit from cloud computing, which ultimately reduces hassles for the public. With solutions such as AWS, the possibilities are greater.  


DDoS (Distributed Denial of Service) specifically means an attack that is aimed at disrupting a service, for example an Apache server hosting a website. The attack is planned, and the main goal is to make the website unavailable to end users. These attacks are carried out by a large cluster of devices like PCs, phones, etc., so the attack comes from plenty of IPs. Due to this, it is very difficult for a simple firewall to block them. AWS has services dedicated to preventing DDoS and other attacks. In this blog, I have explained the services and the steps on how to achieve this.

AWS CloudFront has been an integral part of hosting e-commerce websites, most of which use plenty of images or content that needs to be delivered without any latency to end users. Using AWS CloudFront will improve website loading times, decrease the load on servers, and mitigate attacks such as Distributed Denial of Service (DDoS). There is also a service named AWS WAF (Web Application Firewall) which can be used with CloudFront to tackle DDoS attacks by blocking certain types of traffic and allowing the ones we need by defining certain rules.

When end users access the web application, the DNS service, here AWS Route 53, converts the human-readable address to a machine-readable IP address and routes the request to CloudFront, which proxies the requests for dynamic content to the hosting origin such as S3 or EC2. Here we can see how we can implement WAF with CloudFront and Route 53 to help protect dynamic applications or content (such as responses to user input) against DDoS attacks. AWS services like CloudFront and Route 53 are hosted on a network of distributed proxy servers across the globe called edge locations. By using these edge locations and Route 53, we build a good defense against DDoS attacks for dynamic content. The map shown below displays the edge locations across the globe that serve web content to end users.


Using AWS shield, Route53, and CloudFront to protect against DDOS attacks

Here we can see how our dynamic/web content is kept alive even when it is under a DDoS attack, by enabling AWS Shield and configuring the applications behind Route 53 and CloudFront. AWS Shield can protect the content against frequently occurring DDoS attacks at the network and transport layers. If we need more protection, AWS Shield Advanced should be purchased. Also, we can use CloudFront geo restriction to block access to the content from specific regions, thus decreasing the traffic. The CloudFront request-routing technology connects each end user to the nearest edge location based on updated latency measurements. Using AWS WAF, HTTPS and HTTP requests sent to CloudFront can be controlled. Using specific rules in AWS WAF, we can allow or block traffic and query strings, and matching requests can then be counted for further investigation and research. The diagram below explains how static and dynamic content originates from resources or the data center.


Deploying Cloudfront

First, we need to create a CloudFront distribution and configure its origins to mitigate DDoS attacks at the edge locations.
  • Login to the AWS management console and click on CloudFront to open the console.
  • Click 'Create Distribution'.
  • In the Web section, click 'Get Started'.
  • Provide the 'Origin' settings. In the following screenshot, we have set it to an ELB. If we want, we can set it to S3 or any other resource of our choice.


  • You can set the cache behavior as shown in the screenshot below. For dynamic applications, set the TTL value to 0.
  • To ensure all traffic to CloudFront is encrypted, set the Viewer Protocol Policy to 'Redirect HTTP to HTTPS'. For dynamic content, set the Allowed HTTP Methods to all methods and set Forward Headers to All.


  • Configure the distribution settings. You can enter your domain name in the Alternate Domain Name field; this can be used as a CNAME for the CloudFront domain name. Then choose Custom SSL Certificate.


  • Create the distribution. Note down the CloudFront domain name, as it is needed in Route 53 to route the alternate domain name we used to the distribution.

Configuring Route 53

When we created the distribution, we got a CloudFront domain name. We could reference this domain name directly in our web content, but the best way is to use our own domain name instead. You can achieve this by creating a Route 53 alias record that routes dynamic content traffic to the CloudFront distribution using that domain name. The following screenshot displays this:

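The same alias record can also be expressed as a Route 53 change batch for the CLI. A sketch, where `www.example.com` and `d111111abcdef8.cloudfront.net` are hypothetical placeholders for your domain and distribution, and Z2FDTNDATAQYW2 is the fixed hosted-zone ID that AWS documents for all CloudFront alias targets:

```shell
# Build a hypothetical change batch that maps www.example.com to a CloudFront
# distribution via an A-record alias. Z2FDTNDATAQYW2 is the CloudFront alias
# hosted-zone ID, not your own hosted zone's ID.
cat > change-batch.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "www.example.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "d111111abcdef8.cloudfront.net",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF

# Applied against your own hosted zone (assumes it already exists):
#   aws route53 change-resource-record-sets --hosted-zone-id <your-zone-id> \
#       --change-batch file://change-batch.json
grep cloudfront.net change-batch.json
```

Alias records are preferred over plain CNAMEs here because they can sit at the zone apex and Route 53 answers them with the distribution's current IPs.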

Enabling AWS WAF

For inspecting and mitigating DDoS traffic in the web application layer, we can enable AWS WAF. We can set certain rules or conditions in WAF, together called a web ACL, to control the traffic. Once the ACL is set, we can configure it in the CloudFront distribution. We can configure WAF in conjunction with geo restriction in the CloudFront settings to block users in specific locations from accessing the application content. WAF also has the feature of blocking IP addresses, which is useful for mitigating HTTP attacks. If there is a very high volume of traffic or data, we need to enable Shield Advanced protection.
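The IP-blocking idea behind WAF's rate-based rules can be pictured with a toy log analysis. This is only an illustration: the `access.log` sample and its IPs are made up, the threshold of 3 is arbitrary, and real WAF rate rules count requests per source IP over a rolling five-minute window rather than over a static file.

```shell
# Toy illustration of a rate-style rule: flag any source IP whose request
# count in the sample log exceeds a threshold (here, 3 requests).
cat > access.log <<'EOF'
10.0.0.1 GET /
10.0.0.2 GET /
10.0.0.1 GET /
10.0.0.1 GET /
10.0.0.1 GET /
EOF

awk '{ n[$1]++ } END { for (ip in n) if (n[ip] > 3) print ip, "block" }' access.log
```

In production, this decision happens inside WAF at the edge, before the request ever reaches your origin, which is what makes it effective against volumetric HTTP floods.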

Let's Wrap It

By using these services, we can avoid a large number of attacks on the website and related services. Shield Standard protection is enabled by default and will block a considerable amount of DDoS traffic; however, if you need more protection, you need to purchase Shield Advanced, which will protect you against high-volume DDoS traffic.


Ransomware is literally a nightmare for computer users. The average computer user didn't know about ransomware until WannaCry had taken down many networks. As the sparks created by WannaCry settled down, another one raised its head, affecting governmental networks in Ukraine. Petya, as computer experts call the new threat, has already found its way to several countries. Even though the new entrant didn't seem to affect many individual computers, you should know how it affects your system, especially if you are a server admin.


All About Petya Ransomware Attack

Maybe ransomware is a whole new term to you! If that's the case, you can refer to the following section for a brief insight into it.

What is Ransomware?

Ransomware, put simply, is a malicious program that encrypts all the important files on your computer. Once infected, it will ask you for a ransom (money) in the form of bitcoin. The amount they want you to send varies from ransomware to ransomware. The new one in the house, Petya, usually asks for a ransom of about $300 worth of bitcoin.


In case you refuse to pay the money, they will double the amount after a certain period, and eventually it leads to the deletion of your files. After the culprit infects your system, the files get an unusual extension (which varies for different ransomware). Most ransomware uses asymmetric encryption, so a brute-force attack could take years to stumble onto the right combination of keys. If you get the key, you can decrypt the files right away. But that isn't as easy as it may seem.

What is Petya?

Petya made its first entrance into the cyber world in 2016. Computer experts could control the destruction and cease its functioning back then. But this time, it returned with full power to cause massive damage. In 2016, Petya spread mostly as Dropbox attachments via email. As the person on the receiving end follows the steps to open the attachment, the ransomware gets into the Master Boot Record (MBR) of his/her computer. Once Petya finds a space on the system, it forces a restart, and upon rebooting you will see a fake (but close-to-genuine) CHKDSK screen with the warning 'One of your disks contains errors and needs to be repaired'. After that, the ransomware shows the danger sign (a skull with crossbones) in ASCII characters on a white and red screen. Finally, what you see is a dreadful message that reads 'You became a victim of the Petya ransomware'. Your computer will display the instructions to recover the key to decrypt the files by paying about $370 worth of bitcoin.

Current Status

The new one showed all the traits of the old Petya upon infecting servers or computers. The CHKDSK screen came up, and it even asked victims to pay the ransom. The Verge reported that of all the Petya-infected systems, 60% were in Ukraine itself. Ukraine's central bank, state telecom, municipal metro, and Kiev's Boryspil Airport were compromised, and people tweeted that they were unable to buy fuel due to the chaos caused by the ransomware in the servers. The attack even affected the Chernobyl nuclear power plant, which as a result moved to manual radiation monitoring. The Danish shipping company Maersk and a Pittsburgh-area hospital in the US also reported many systems down.

A New Twist

The initial examination pointed to Petya as the trigger of the new attack. But analysts then found massive differences between the programming of the 2016 and 2017 versions of the ransomware: the latter was designed as a wiper, not exactly ransomware. Reports accused Russia of the attack, with political vengeance as the motive. Clearly, the attackers didn't try to spread the attack worldwide but kept Ukraine as the focus, a fact the major publications interpret as a conscious effort, not a coincidence. After the attack went live, computer security experts stated that the new malware is not Petya; instead, it was programmed to imitate Petya's behavior. Several security agencies are still analyzing it and have not reached a conclusion.

How can You Prevent the Attack?

If you are an individual computer owner, the best way to avoid any ransomware is to ignore attachments from strangers. If anything about an attachment from a familiar email ID feels fishy, reach out to the sender and ask about it. Microsoft has notified users that System Center Endpoint Protection and Forefront Endpoint Protection detect this threat family as Ransom:Win32/Petya, and has asked users to make sure they have a threat definition version equal to or later than the one created at 12:04:25 PM on Tuesday, June 27 2017. The following are the instructions Microsoft sent over email:
  • Ensure you have the latest security updates installed
  • Ensure you have the latest AV Signatures from your preferred AV vendor
  • Do not open email/attachments from unknown/untrusted sources
Microsoft has also assured users that the free Microsoft Safety Scanner is designed to identify this threat as well as others. Keeping your antivirus up to date and your firewall active goes a long way. And I strongly recommend that you stay away from cracked software and applications; bear in mind that even the free things on the web come with a cost (yes, I mean it!). Are you a server admin? Do you think keeping ransomware at bay is beyond your ability? Then you can look at these server management plans for protection.

What if Your System Gets Infected?

What if you find your system infected? For individual computers, software like HitmanPro.Alert and Malwarebytes Anti-Malware might work. But servers are a different, more advanced matter, and you may not have the resources and skills to eliminate the ransomware yourself. In that case, you can use an emergency server support service to bring your network back to life.

That’s a Wrap

The Petya ransomware may fade into oblivion soon, but that isn't the end of cyber attacks. Day by day, attackers come up with new malware and techniques to take over your servers, so stay informed about them. And don't hesitate to take professional assistance to keep ransomware out of your computer or server. If you are infected, never pay the ransom: the chances that the attackers will restore your access are slim, and as long as they stay anonymous, there is nothing you can do about it.


Streaming media files to different platforms used to require integrating multiple, often incompatible technologies and making huge investments in infrastructure and expertise. As a popular cloud platform, Azure removes these roadblocks. It allows developers to build cost-effective media solutions that can upload, encode, package, and stream media to diverse platforms such as Windows, Adobe Flash, iOS, and Android. How does Azure enhance the streaming of media content? This blog discusses how the platform delivers effective media streaming through its 'Azure Media Services' offering.

Azure Media Services

Microsoft Azure Media Services is an extensible cloud-based platform that allows developers to build scalable media delivery and management applications. It is based on a REST API that lets users securely store, process, and deliver audio or video content for on-demand and live-streaming platforms.

Prerequisites

In order to use Azure Media Services, you need the following:

  • An Azure account, which gives you access to Azure Media Services. If you're a beginner, a trial version is also available.
  • An Azure Media Services account, created through the Azure portal, the .NET SDK, or the REST API; the same tools can be used to set up a development environment.
  • A standard streaming endpoint in the 'started' state.
Uploading Content into Azure Media Services (AMS)

Azure is an ideal cloud store where you can keep and process your media content. Content can be uploaded to Azure Media Services through the REST API or any of the available client SDKs, either one file at a time or in bulk operations. Since files are stored encrypted, AMS keeps your media content confidential.

Media Content Processing in Azure Media Services

Processing in Media Services is carried out by a number of 'media processors', which perform tasks such as format conversion, encoding, and encryption and decryption of the content stored in the Azure cloud. The most common processing task, video encoding, is performed by the Azure Media Encoder processor.

Delivering Media Assets from AMS

Four approaches are used to access videos from AMS:
  • Downloading videos onto the user's device from AMS.
  • Progressive downloading - viewing videos before the download completes.
  • Streaming - downloading a small portion of the media item at a time and discarding it once it has been viewed.
  • Adaptive bitrate streaming - viewing videos while adjusting the data rate to network conditions.
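The adaptive bitrate idea above can be sketched in a few lines: the client measures its bandwidth and picks the highest rendition that fits, stepping down when the network degrades. The bitrate ladder and safety margin below are illustrative assumptions, not AMS defaults.

```python
# Client-side sketch of adaptive bitrate selection.
# RENDITIONS_KBPS is a hypothetical encoding ladder in kbit/s.
RENDITIONS_KBPS = [400, 800, 1500, 3000, 6000]

def pick_bitrate(measured_kbps, renditions=RENDITIONS_KBPS, safety=0.8):
    """Return the highest rendition fitting within a safety margin of bandwidth."""
    budget = measured_kbps * safety  # leave headroom for bandwidth fluctuation
    fitting = [r for r in renditions if r <= budget]
    return max(fitting) if fitting else min(renditions)  # fall back to lowest

print(pick_bitrate(5000))  # fast connection -> 3000 kbit/s rendition
print(pick_bitrate(600))   # slow connection -> 400 kbit/s rendition
```

Real players re-run this decision for every media segment, which is why playback quality shifts smoothly as network conditions change.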
Benefits of AMS

The advantages of Azure Media Services (AMS) include:
  • An API that allows users to easily create, manage, and maintain custom data.
  • A standardized workflow that enhances productivity and coordination when multiple participants are engaged in content creation and management.
  • Automatic scalability through global data centers.
  • Rapid development of solutions.
  • High-quality streaming experiences.
Conclusion

Azure Media Services (AMS) is an ideal platform to upload, process, and deliver quality media content. It provides complete video and audio content solutions for on-demand and live services across multiple devices and platforms. In addition to content delivery, AMS helps users scale their platforms using the widespread Azure data centers, so they get a better streaming experience without worrying about capacity spikes or idle data centers. Microsoft Azure thus helps users build media solutions that deliver a good streaming experience economically.


Microsoft's Hyper-V technology evolved out of the virtualization concept developed between the 1960s and the 1990s. In virtualization, a virtual version of an IT environment is created, comprising operating systems, storage devices, and so on. Although the term 'virtualization' has been in use since the 1960s, there were no personal computers at the time, and the concept only became popular in the late 1990s. In 2008, Microsoft introduced Hyper-V as its virtualization platform. Microsoft currently provides four versions in the Hyper-V series: Windows Server 2012, Windows Server 2012 R2, Windows Server 2008 R2, and Windows Server 2008. So how did the Hyper-V virtualization platform evolve through Microsoft's experiments, and what benefits does the technology offer its users? This blog provides everything you should know about Hyper-V.

Overview of Hyper-V Technology

Microsoft's hardware virtualization product, Hyper-V, helps its users create and run virtual machines. A virtual machine behaves like a computer, running operating systems and programs, and lets physical hardware be used more efficiently, since several virtual machines can share one server. Beyond the Windows Server role, Hyper-V is also available as a stand-alone product called Microsoft Hyper-V Server. Because it strips out everything irrelevant to virtualization, Microsoft Hyper-V Server offers its users minimal maintenance cost and a reduced attack surface.

Licensing: Depending on the OS version and edition, Windows Server allows you to install one physical instance of the OS along with virtual machines. While Hyper-V itself does not need an operating system license, users must buy licenses for any instances of Windows installed on the VMs.

Installation: Like any typical program, Hyper-V is simple to install.
To configure Hyper-V, first check which Windows version your computer is running. If you are using a Windows Server OS released after 2008, activate the Hyper-V role through Server Manager. The computer reboots once the installation is completed, and after the reboot the additional services installed, including Hyper-V Manager, are displayed.

Data Backup: To prevent data loss, Hyper-V provides reliable, fast, and affordable backup and recovery for virtualized environments running on Windows Server. Its advanced data recovery features make migrating workloads to Hyper-V secure.

Benefits: Hyper-V can give you:
  • Easy load balancing and simplified disaster recovery planning
  • Effective hardware utilization
  • Improved business continuity and reduced downtime
  • Higher data security
  • The ability to bring your own IP address and network topology without changing the customer data centre network

Conclusion

While virtualization reduced the complexity of maintaining physical servers, Hyper-V has revolutionized the cloud industry with reliable IT solutions that consume less space, energy, and cost. By isolating each virtual machine in its own space, it reduces the chances of crashes, excessive workloads, and security vulnerabilities. With Microsoft's backing, Hyper-V is expected to open up more opportunities in the cloud through portability features such as live migration, storage migration, and import/export.


Due to the proliferation of online business operations, websites have become crucial to the success of any business. A website becomes your brand, your storefront, and most often your point of contact with customers. SSL certificates establish the credibility of a website by digitally binding a cryptographic key to an organization's certified details. Companies need to deploy an SSL certificate on their web server to enable secure browsing sessions. If a website is not trusted, the browser displays error messages to the end user, causing readers to turn away from the site. For a business, such security warnings can keep consumers from ever reaching the website's content, so website security directly affects traffic and, in turn, customer engagement. This blog provides everything you should know before building a secure website with an SSL certificate.

What is an SSL Certificate?

A Secure Sockets Layer (SSL) certificate is a small data file that digitally binds a cryptographic key to an organization's details. Once installed, SSL activates the padlock and the https protocol and allows only secure connections from a web server to a browser. Apart from securing data transfers, logins, and credit card transactions, it has now become the norm for safe browsing of social media sites as well. As a certificate, it binds together:

  • A domain name and hostname
  • A location and an organizational identity
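You can inspect this binding for yourself with Python's standard `ssl` module, which fetches a server's certificate and exposes the subject (the domain) and the issuing authority. This is a minimal sketch; `example.com` stands in for any HTTPS site.

```python
import socket
import ssl

def cert_summary(cert):
    """Flatten the nested tuples returned by getpeercert() into a summary."""
    subject = dict(item[0] for item in cert.get("subject", ()))
    issuer = dict(item[0] for item in cert.get("issuer", ()))
    return {
        "common_name": subject.get("commonName"),      # the domain the cert binds
        "issuer_org": issuer.get("organizationName"),  # the Certificate Authority
        "valid_until": cert.get("notAfter"),
    }

def fetch_cert(hostname, port=443):
    """Open a TLS connection and return the validated server certificate."""
    context = ssl.create_default_context()  # verifies against the system trust store
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

# Example (requires network access):
# print(cert_summary(fetch_cert("example.com")))
```

If the certificate fails validation (self-signed, expired, or for the wrong domain), `wrap_socket` raises an `ssl.SSLCertVerificationError`, which is exactly the failure a browser surfaces as a warning page.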
Depending on the type of SSL certificate applied for, organizations have to go through different levels of vetting. Once the certificate is installed, traffic between the web server and the web browser is secured.

What are the Types of SSL Certificates?

Instead of certificates issued and verified by a trusted authority, many organizations use self-signed SSL certificates. Even though such certificates are free of charge, they can end up costing more in the long run. The major types of certificate issued by a Certificate Authority (CA) are:
  1. Extended Validation (EV) SSL
  2. Organization Validation (OV) SSL
  3. Domain Validation (DV) SSL
Extended Validation (EV) SSL Certificates: In this case, the Certificate Authority checks the applicant's right to use the domain name and, along with that, conducts a detailed vetting of the organization. EV certification includes:
  • Verification of the physical, legal, and operational existence of the entity
  • Verification of the entity's identity against official records
  • Checking whether the entity has the right to use the domain specified in the EV SSL Certificate
  • Verification that the entity has properly authorized the issuance of the EV SSL Certificate
Organization Validation (OV) SSL Certificates: Here too, the CA authenticates the applicant's right to use the particular domain and conducts a detailed verification of the organization. When a customer clicks on the 'Secure Site Seal', they see information about the owner of the website, which enhances trust.

Domain Validation (DV) SSL Certificates: This provides a quick and easy way to secure a website. The CA checks the applicant's right to use the specific domain name, but no company identity information is vetted, so the Secure Site Seal carries only encryption information.

Conclusion: SSL certificates are essential for secure Internet operations. They protect your sensitive information as it travels across the world's networks, securing the website and preserving the integrity of users' personal information. Website credibility is crucial in building trust with customers, and organizations can select the right type of SSL certificate according to their industry, size, and products or services. Install an SSL certificate on your website and create a secure environment for yourself and your users.


To stay flexible and agile and to fend off competitors, businesses have to be both low-cost operators and leading innovators in their fields. This conundrum can be solved by the integration of enterprise applications. Red Hat enterprise platforms provide an environment where you can securely and effectively manage all your critical applications, and Red Hat JBoss enterprise applications offer a powerful, distributed storage environment capable of meeting the demands of today's information explosion. In this blog, you'll learn the basics of Red Hat JBoss enterprise applications along with their potential impact on businesses in 2017.

Integrate the Red Hat JBoss Enterprise Application Platform for Business Excellence

Enterprises struggle to achieve agility due to inflexible licensing agreements, rigid proprietary stacks, and cultural silos in IT. Red Hat helps enterprises make the most of their technology by giving them freedom of choice. Red Hat JBoss Enterprise Application Platform (EAP) supports a broad range of third-party frameworks, databases, security systems, operating systems, and identity systems, making it easy to integrate enterprise applications into the corresponding infrastructure. Enterprises gain the following benefits by adopting Red Hat JBoss EAP:
  • Ensures efficient resource utilization and suits any environment: on-premise, virtual, and public, private, or hybrid clouds.
  • Provides maximum flexibility and high performance, and reduces scale-out times for applications deployed in different environments.
  • Provides Kerberos authorization, which ensures greater security for your enterprise applications.
  • Well suited for both traditional applications and microservices.
  • Helps developers be more responsive and productive by supporting Java EE and a wide range of web frameworks such as Spring, Spring Web Flow, Spring Security, Spring WS, AngularJS, jQuery, Arquillian, and jQuery Mobile.
  • Accelerates administrative productivity through easy maintenance and deployment features.
  • Apart from delivering technical and business flexibility, the JBoss EAP subscription eliminates binding licensing decisions that lock the user into specific deployment environments.
  • Offers a flexible, future-friendly subscription model.


The potential of enterprise applications for creating competitive advantage in delivering services, products, and innovations is limitless. Irrespective of enterprise environment and infrastructure, Red Hat JBoss Enterprise Application Platform is applicable to businesses of every size and industry. Its compatibility with different IT infrastructures and its administrative capabilities make Red Hat JBoss an ideal choice for organizations across the world.


AWS has supported SAP on its platform since 2008, using it for diverse use cases and scenarios. In 2011, AWS became a global partner of SAP, and since then the cloud major has worked closely with SAP to test and certify the AWS Cloud for SAP solutions. When SAP is deployed across a larger organizational structure, it demands more technology support, and cloud support from a provider like AWS helps users implement SAP on larger technical infrastructures. SAP HANA is an in-memory relational database and application platform that delivers high-performance analytics and real-time data processing. How has AWS integrated SAP into its services, and will SAP HANA prove fruitful for enterprises? This blog gives readers an overview of SAP HANA and its impact on business.

AWS – SAP integrated Services

The AWS Cloud provides a highly available, affordable, and fault-tolerant suite of infrastructure services for deploying SAP HANA, so users get the functionality of SAP HANA and the security of AWS at the same time. The combination helps organizations that have to manage large data sets of different types from different sources: SAP HANA not only manages information but also delivers data in real time without any pre-processing. The SAP HANA offering on the AWS cloud consists of two major services, SAP HANA BYOL and SAP HANA One. The former is on-demand infrastructure that follows a 'bring-your-own-software', 'bring-your-own-license' model; it is suitable for both production and non-production use cases, scales up to 2 TB, and scales out up to 14 TB. SAP HANA One is used for data analytics and native HANA applications and can be sized at 60 GB, 122 GB, or 244 GB depending on the use case. The integration of Amazon Web Services (AWS) with SAP enables companies of all sizes to fully realize the benefits of SAP, which include the ability to:
  • Get faster time to value.
  • Scale infrastructure resources according to data requirements.
  • Pay as you go.
  • Use existing licensing investments without any additional fee.
  • Achieve higher availability with Amazon EC2 Auto Recovery, SAP HANA System Replication (HSR), and multiple Availability Zones.
  • Start using SAP HANA on AWS without any upfront commitment for cost, storage, or network infrastructure.
  • Gain higher agility and speed of implementation.


The integration of the AWS cloud platform with SAP helps businesses reduce the complexity of their operations and thereby deliver prompt results. The impact of SAP HANA on AWS shows in different functional areas of the organization, including Business Objects Planning & Consolidation, Sales & Operational Planning, Cash Forecasting, Margin Calculation, ATP Calculation, and Manufacturing Scheduling and Optimization. The SAP HANA on AWS service helps users manage their entire enterprise information without compromising on their customized requirements.


For decades, Microsoft has been an industry leader in the development of computer software products. To leverage cloud technology, it launched Azure, which enables building, deploying, and managing applications through a global network of Microsoft-managed data centres. Microsoft Azure web services are among the most preferred cloud computing solutions today for cross-platform web and mobile applications. Azure Active Directory is a highly available, comprehensive identity and access management cloud solution that combines core directory services, application access management, and advanced identity governance. Many corporations and governments across the globe use Azure Active Directory (AD) as a standard for identity management in the enterprise. Azure AD, with its advanced features and tight integration, offers a free tier, a Basic tier at $0.50 per user per month, and a premium tier that runs $6 per user per month. This blog will help you understand the various scenarios Azure AD supports and show how it became popular among business owners.

Azure AD – An Overview

Microsoft's multi-tenant, cloud-based directory and identity management service is known as Azure Active Directory (Azure AD). It gives IT admins an easy-to-use, affordable solution for providing employees and business partners single sign-on (SSO) access to thousands of cloud SaaS applications such as Dropbox, Office 365, and Concur. For application developers, Azure AD lets them focus on building applications quickly and simply while integrating with a world-class identity management solution used by millions of enterprises across the globe. Businesses benefit from the cloud-based directory through its diverse features, which include multi-factor authentication, self-service password management, self-service group management, privileged account management, device registration, role-based access control, application usage monitoring, rich auditing, and security monitoring and alerting. Azure AD services thus help businesses leverage cloud directories to cut costs, streamline IT processes, and ensure that corporate compliance goals are met. Businesses can rely on the geo-distributed, multi-tenant, high-availability design of Azure AD for their most critical needs: even if a data center goes down, directory data is copied to at least two more regionally dispersed data centers and remains available for instant access. The benefits that business owners gain through Azure AD include:
  • An easy sign-in experience for cloud service providers and employees.
  • Easy and secure vendor access management.
  • Improved application security through Azure AD conditional access and multi-factor authentication.
  • Consistent, self-service application access management.
  • Protection from advanced threats with security reporting and monitoring.
  • Subscription on a 'pay as you go' basis.
  • Availability to customers around the globe through globally distributed data centres.
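One of the features mentioned above, role-based access control, is easy to illustrate: a role maps to a set of permitted actions, and an access check is just a set lookup. The roles and actions below are hypothetical examples, not Azure AD's built-in roles.

```python
# Minimal sketch of role-based access control (RBAC): permissions attach to
# roles, users get roles, and an access check is a set membership test.
ROLE_PERMISSIONS = {
    "reader": {"read"},
    "contributor": {"read", "write"},
    "admin": {"read", "write", "manage_users"},
}

def is_allowed(role, action):
    """Return True if the given role grants the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("reader", "write"))        # False: readers cannot write
print(is_allowed("admin", "manage_users"))  # True
```

The point of the model is that access decisions are made against roles rather than individual users, so revoking a permission means editing one role instead of auditing every account.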


Azure AD covers most of the core features businesses look for in an IDaaS provider. It provides enterprise-grade tools that are competitive in pricing and performance. Azure AD services are solid and constantly evolving, and they integrate with Office 365 and other Microsoft products. Azure AD makes employees more productive by delivering a common identity for accessing both cloud and on-premises resources. Through effective directory management, Azure AD thus improves business performance.


The cloud industry is expected to undergo a revolution with a concept called 'containerization', a powerful mechanism for packaging and deploying software. Docker, the world's leading software container platform, helps developers eliminate 'works on my machine' problems. Operators use Docker to run and manage apps side-by-side in isolated containers to get better compute density, while business owners use Docker to build agile software delivery pipelines that ship new features faster, more securely, and with confidence, for both Linux and Windows Server apps. Can we expect the Docker concept to change current cloud trends? This blog will help you understand how this containerization tool is going to change the cloud industry in 2017.

The Cloud Revolution with Docker

Containers package a piece of software into an isolated unit so it can run anywhere. The difference between VMs and containers is that a container does not bundle a full operating system but only the libraries and settings required to make the software work. This makes containers lightweight, efficient, self-contained systems that run the same software regardless of where they are deployed. The emergence of Docker has impacted the cloud industry in the following ways:

The Portability Feature

Why is Docker so unique? Thanks to its portability, users no longer need a virtual machine to spin up each and every app. If you are running a CoreOS host server and have a guest container based on Ubuntu, the container carries only the parts that make Ubuntu different from CoreOS. Where a virtual machine is a whole guest computer running on top of your host computer, a Docker container is an isolated portion of the host that shares the host kernel and its binaries and libraries.

Shared Resources

In terms of system resources, the shared kernel used by containers is more efficient than hypervisors. Where each VM runs the application with its own full operating system, containers are lightweight: thousands of containers can run on the same server, all sharing the host kernel. This lets you leave behind the VM overhead, so your applications ship in small, neat capsules.

Easy to Launch

The Libswarm project launched by Docker could make it much easier to use containers in the public cloud. Previously, to run a container on a remote cloud server, the user had to log into the cloud server, push an image with access to the Docker Registry, pull that image down to the cloud server, and only then launch the container. Libswarm can be configured once, creates all of your Docker images locally, and handles orchestration and creation to start the container.

Great Developer Experience

Using Docker, you can reduce onboarding time by as much as 65%. Instead of wasting time installing and maintaining software on servers and developer machines, developers can build, test, and run complex multi-container apps, eliminating the 'works on my machine' constraint.

Easy Distribution and Sharing of Content

You can develop, manage, and distribute images in secure Docker registries located on-premises or in the cloud. Configurations, content updates, and build history are automatically synchronized across the organization. Docker's formatting engine has become a platform to which many tools and workflows attach, and containers are supported by the largest vendors, such as Red Hat and Microsoft. Last July, they joined Google, along with Docker, on the Kubernetes project, an open source container management system for managing Linux containers as a single system. As the global players explore containers, expect a drastic change in the distribution and sharing of content through cloud platforms.
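The launch workflow described above boils down to a handful of `docker` CLI invocations. The helper below simply composes a `docker run` command line using standard Docker CLI flags (`-d`, `--name`, `-p`); actually executing it requires a running Docker daemon, so the subprocess call is left commented.

```python
import subprocess  # only needed if you uncomment the final line

def docker_run_cmd(image, command=None, name=None, detach=False, ports=None):
    """Compose a `docker run` invocation as an argument list for subprocess."""
    cmd = ["docker", "run"]
    if detach:
        cmd.append("-d")                 # run the container in the background
    if name:
        cmd += ["--name", name]
    for host, cont in (ports or {}).items():
        cmd += ["-p", f"{host}:{cont}"]  # publish host:container port pairs
    cmd.append(image)
    if command:
        cmd += command.split()
    return cmd

cmd = docker_run_cmd("nginx:alpine", name="web", detach=True, ports={8080: 80})
print(" ".join(cmd))  # docker run -d --name web -p 8080:80 nginx:alpine
# subprocess.run(cmd, check=True)  # requires a running Docker daemon
```

Building the argument list explicitly (rather than concatenating a shell string) is the idiomatic way to drive the Docker CLI from scripts, since it avoids quoting and injection pitfalls.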


Docker is the backbone of the modern app platform, connecting developers and IT, Linux and Windows. Docker works in the cloud as well as on-premises, and it supports both microservice and traditional architectures. Use Docker to build, secure, network, and schedule containers, and to manage them throughout the development process. Docker will help enterprises on the path to digital transformation by enabling all apps to be cloud-ready, agile, and secure at optimal cost.


As cloud computing increases the efficiency of business operations, many organizations are migrating to cloud platforms. According to the 2016 Future of Cloud survey led by North Bridge Growth Equity Venture Partners and the research analyst firm Wikibon, 49% of respondents use the cloud in key aspects of their technology systems and 42% of organizations have a cloud-first or cloud-only strategy, meaning more than 90% are using the cloud in a meaningful way. Securing the enterprise in the cloud is a shared responsibility of the user and the cloud service provider, and it is on this shared-responsibility basis that Microsoft delivers its services. Understanding the comprehensive set of security controls and capabilities available on Azure is therefore important for every customer. As a cloud service provider built on a foundation of trust and security, Azure invests significantly in compliance, privacy, security, and transparency, helping customers host their infrastructure, applications, and data in a reliable cloud. Microsoft Azure also provides security controls and capabilities to further help you protect your data and applications.

Security Features of Azure

  • To prevent unauthorized access, Azure Active Directory manages user access to all cloud services, including Office 365, Azure, and other popular SaaS and PaaS offerings. Its federation capability lets customers use their on-premises identities and credentials to access those services, and Azure Multi-Factor Authentication ensures a secure sign-on experience for users across the world.
  • Azure includes security-hardened infrastructure for connecting on-premises data centers and Azure VMs. Network and infrastructure security can be increased using a dedicated Azure ExpressRoute connection or a secure site-to-site VPN.
  • Azure uses industry-standard protocols for data encryption, safeguarding information as it flows between devices and data centers.
  • Using its global network of threat monitoring and insights, Microsoft prevents highly sophisticated attacks. Azure has built this threat intelligence by analyzing a wide variety of sources and a massive volume of signals, so Azure customers can place their confidential data on the platform securely.


Azure is a trustworthy cloud platform where organizations can securely move their enterprise data while complying with regulatory requirements. With millions of customers on board, Microsoft's innovations in the cloud industry stand to revolutionize the server industry itself. The security capabilities introduced by Azure not only detect threats but also act as a defensive mechanism, providing a solution to cloud vulnerabilities.


The cloud industry is expected to change drastically in 2017. Instead of acting as a standalone service, the cloud now renders services integrated with technologies such as Artificial Intelligence, Machine Learning, and the Internet of Things, and with practices such as DevOps. The combination of cloud with other cutting-edge technologies had already sprouted in 2016. The three major vendors, namely Amazon Web Services, Microsoft Azure, and Google Cloud Platform, have driven the major changes in the IaaS public cloud computing market. Managed cloud services providers offer assured support for Infrastructure as a Service (IaaS) solutions from leading cloud providers such as Amazon Web Services, Microsoft Azure, Rackspace Cloud, Google Cloud, and more, and bring the indisputable benefits of low capital expenditure, high flexibility, and enhanced collaboration. The cloud's collaboration with various technologies has increased the convenience of cloud applications, and customers now have more choices of where to host their data around the globe. This year is expected to offer more virtual machine instance sizes to optimize customer workloads and to manage and analyze cloud data. This blog provides insights on the top 10 IaaS cloud trends that are expected to mold the cloud industry.

1. Cloud Market Is Expected To Generate More Revenue

According to Forrester Research, the cloud market is exhibiting a growth rate of 22% and is expected to hit $146 billion by the end of 2017, with infrastructure and platform clouds expected to reach $32 billion by the end of the year. Google is expected to hit between half a billion and $1 billion in revenue, Microsoft Azure is about two to three times smaller than AWS, and AWS itself is expected to reach $13 billion in revenue in 2017. The Forrester study thus indicates the rising importance of cloud applications.

2. The Emergence of Cloud 2.0 stage

Experts forecast that 2017 will witness the dawn of the Cloud 2.0 stage. According to the 2016 Gens study, by 2020, 85% of enterprises will commit to a multi-cloud architecture model, and by 2018, 60% of enterprise IT workloads will be off-premises. The study also points out that a major portion of cloud industry revenue will be mediated by channel partners and brokers. So with the dawn of the Cloud 2.0 stage, the cloud industry is expected to undergo massive growth that facilitates mass enterprise adoption.

3. Cloud Integration with machine learning and AI

In 2017, Machine Learning and Artificial Intelligence (AI) are expected to dominate cloud vendor priorities. The big announcements from major vendors validate this change in the industry: TensorFlow, an open-source machine learning platform from Google; Microsoft's cloud-based machine learning platform; and three new machine learning services from Amazon. This is expected to help businesses get an edge over their competitors, and to help developers use and integrate machine learning into the applications they are building atop these cloud platforms.

4. Serverless computing

The trend that gained traction in 2016 and is expected to flourish in 2017 is serverless computing: the practice of building applications that run without provisioning any infrastructure resources. It facilitates the integration of hybrid applications into cloud platforms. AWS's Lambda platform, which debuted in 2015, IBM's OpenWhisk serverless computing platform, and Microsoft's Azure Functions were all introduced in line with this paradigm shift.

5. The Container Management Platforms

According to cloud vendors, containers may have been the buzzword of the year; they are represented as "next-generation computing" in the cloud industry, and container management platforms are the next big offering from major vendors. For example, Google's Container Engine, Amazon's EC2 Container Service, and Microsoft's Azure Container Service are gaining market presence and enterprise interest. So 2017 is expected to witness the emergence of more and more container management platforms.

6. The combination of private Cloud and hyper-converged infrastructure

Forrester Research predicts a possible inclination of the cloud market towards leaner and cheaper solutions that integrate cloud management, PaaS capabilities, and container support. Rather than in the public cloud, in 2017 we can expect tectonic shifts within the realm of on-premises infrastructure: the emergence of hyper-converged platforms (infrastructure that comes pre-packaged with network, compute, and storage) and models like private cloud as a service. Microsoft's introduction of Azure Stack is expected to bring drastic change to the cloud industry in this regard.

7. The Hybrid Cloud Strategy

Organizations need to run cloud workloads both on their own infrastructure and in the public cloud. Microsoft adopted the hybrid cloud strategy early on, which was evident when they launched Azure. Even though AWS has long ignored the idea of private and hybrid cloud computing, at re:Invent 2016 it released a series of services and products for moving data onto its cloud, including the Snowball Edge device, which can do local computing and then send data to AWS.

8. The user involvement

In 2017, apart from the cloud vendors, users are also vested with some responsibilities. According to Forrester, customers have a huge role in the maintenance of cloud applications. Users need to make sure that they are not over-provisioning virtual machines, and unused VMs need to be turned off. Pre-purchasing capacity will also help businesses to save money. Vendors can in turn help customers by providing applications with user-friendly interfaces.

9. More and More Data Centres

We can expect the emergence of more data centers as the IaaS cloud computing market continues to grow by leaps and bounds. This helps vendors keep up with customer demands and requirements, and in order to comply with local data sovereignty laws, vendors are adding data centers in specific geographic regions.

10. The Market Conditions

IaaS cloud market conditions differ between the U.S. and the rest of the world. The U.S. market is well defined, and it is difficult for new players to take significant share. Outside the U.S., the market is very fragmented, and vendors like Tencent and Alibaba have opportunities to grab large market shares. Will that potential international strength translate into the U.S. market? The answer will emerge in the coming days.


Drastic changes await the cloud industry in 2017: the integration of cloud with other technologies, the emergence of hybrid applications, the combination of private cloud and hyper-converged infrastructure, the evolution of container management platforms, and more. Since customers are embracing the changes happening in the cloud industry, it will generate more revenue. In short, the cloud industry will showcase two types of change in 2017. The first comes through embracing existing or new technologies. The second is market consolidation, where the major vendors will dominate in the U.S. while new entrants can be expected in other countries.


The emergence of DevOps services is not only making an impact on software development but also affecting the server and cloud industries. Since customer needs and preferences are changing rapidly, businesses have to become increasingly agile, and the only way to achieve competence is to bring supreme agility and automation into the business. Involving more manpower in operations only accelerates expenses, so automated systems are the real solution for business excellence. So how do you achieve agility and automation in your business activities? Adopting DevOps services into IT operations helps you achieve productivity in a consistent, repeatable, and reliable manner. AWS is one of the most reliable cloud platforms, helping organizations across the globe excel in their fields of operation. How does AWS OpsWorks (the combination of AWS and DevOps principles) help you improve business agility? This blog shares insights on the benefits of AWS OpsWorks.

An Overview of AWS OpsWorks

In the same way that software developers write application code, AWS offers a DevOps-focused way of creating and maintaining infrastructure: its services create, deploy, and maintain infrastructure in a programmatic, descriptive, and declarative way. The DevOps-AWS integration is achieved through AWS CloudFormation templates, which are written in JavaScript Object Notation (JSON); the syntax and structure used depend on the types of resources being created and managed. Using the templates, infrastructure can be provisioned in a repeatable and reliable way, and you can use custom AWS CloudFormation templates or sample templates that are publicly available. Once templates are created, the next step is stack management. A stack is a collection of resources under management, and it can be managed through the AWS Command Line Interface, the AWS Management Console, or the AWS CloudFormation APIs. Common stack operations include create-stack, describe-stacks, list-stacks, and update-stack. The templates are the medium between the user and the AWS services, covering AWS offerings such as Auto Scaling, Amazon CloudFront, Amazon Simple Storage Service (Amazon S3), Elastic Compute Cloud (EC2), Amazon ElastiCache, Amazon DynamoDB, AWS Elastic Beanstalk, AWS OpsWorks, Amazon Virtual Private Cloud, Elastic Load Balancing, and AWS Identity and Access Management.
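As a hedged sketch of the template-and-stack workflow described above, the snippet below writes a minimal CloudFormation template and lists the typical stack commands; the stack name, file name, and the single S3 bucket resource are illustrative placeholders, not taken from any real deployment.

```shell
# Write a minimal CloudFormation template (JSON) describing one S3 bucket.
cat > s3-bucket.json <<'EOF'
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal sample stack: a single S3 bucket",
  "Resources": {
    "SampleBucket": { "Type": "AWS::S3::Bucket" }
  }
}
EOF

# Typical stack lifecycle via the AWS CLI (needs configured credentials,
# so the calls are shown as comments here):
#   aws cloudformation create-stack   --stack-name sample-stack --template-body file://s3-bucket.json
#   aws cloudformation describe-stacks --stack-name sample-stack
#   aws cloudformation update-stack   --stack-name sample-stack --template-body file://s3-bucket.json
#   aws cloudformation list-stacks
echo "template written: $(wc -c < s3-bucket.json) bytes"
```

Because the same template file is fed to both create-stack and update-stack, the provisioning stays repeatable.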

The Benefits of AWS DevOps Integration

In AWS, the DevOps principle is used to configure infrastructure in the same way developers process code in software development, so the same rigor applied to application code development is applied to provisioning the infrastructure. The benefits of the AWS-DevOps combination for business include the following. The major benefit of AWS OpsWorks is that it supports virtually any application: a wide variety of architectures, from single to complex applications, and any software that has a scripted installation. It helps users model and visualize their applications and define the associated resource and software configuration, so the user ultimately has control over the application, from configuration to processing. Like your application code, with AWS OpsWorks Stacks you can define configurations for your entire business environment, so whenever required you can reproduce the software configuration. The AWS-DevOps support is not meant for short-term use; it allows you to manage your applications over their whole lifetime. Each new instance is built to specification and changes its configuration in response to system events, so DevOps principles with AWS provide a long-lasting service. Since infrastructure development and configuration are done by the same team, product delivery time is reduced; what else do you need to delight your customers and maintain a long-lasting relationship with them? It utilizes a delivery pipeline with continuous configuration that reduces manual intervention, and replacing manpower with automation reduces the maintenance burden for the AWS user. The skill sets that development teams have honed over many years now become mandatory for infrastructure specialists with the introduction of the AWS-DevOps services integration. Regular code reviews, agility, and testing for instances will improve the quality of the infrastructure configured in your business.


DevOps is not only changing traditional software development life cycle models and the conventional silos approach but also embracing different IT architectures such as the cloud. Poor integration among infrastructure, development, security, and support teams has brought down the productivity of many organizations. Adopting the AWS-DevOps combination helps an organization build infrastructure that is supported by quality applications, and the core DevOps services help organizations bring the benefits of agility and automation to the business. More than a choice, the adoption of tools like AWS OpsWorks has become a necessity for business excellence.


Why do you need a secure mechanism to protect your servers from phishing attacks? Server failures can cause great loss to your business and may adversely affect the integrity of your organization's information, so reliable remote server support is critically important for any business. The majority of phishing attacks happen through emails with spoofed messages. The theft of critical information can even affect the credibility of your business: loss of customer trust, huge monetary loss, misuse of loyal customers' data, degradation of brand value, theft of trade secrets, and so on. In fact, even a single item on that list is enough to create adverse effects on your business. Most companies are aware of this and are apprehensive about such attacks. Still, why do hackers keep stealing important business information? The reason is that most phishing emails appear to be important notifications from credible sources such as government agencies and banks, and it is difficult to judge the reliability of the source from the domain name and email address it was sent from. So how do you resolve this issue? This blog provides 10 effective ways to ward off these server attacks. Explore, and protect your servers.

1. Ignore Emails that Request your Personal Information

A major distinction between phishers and banks, e-commerce sites, or other financial companies is that the legitimate organizations generally personalize their emails, while phishers do not. In phishing emails, you will often see sensational wording that manufactures a sense of urgency, for example: "Urgent: your account details have been stolen." Such messages create anxiety in readers, and phishers use them to get instant responses. Always keep in mind that reputable organizations will not ask their customers for passwords or account details in an email. If you want to check the reliability of a message, contact the organization by phone or by visiting its website.

2. Grey Listing

This anti-spam technique helps you prevent phishing by temporarily rejecting email from new sources with a temporary error message. Legitimate servers that follow email standards will retry sending the email after the temporary error, whereas spammers generally do not follow such retry protocols. So you can use this technique to filter out spam email to a great extent.

3. SPF (Sender policy framework)

In this method, a domain publishes a list of hosts that are authorized to send email on its behalf. Your email server checks the connecting host against that published list, and if there is a mismatch, the email can be rejected. It is an effective security mechanism for stopping regular as well as spear phishing emails: SPF catches spoofed emails by querying the authorized sending hosts for the sender's domain.
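For illustration, an SPF policy is simply a DNS TXT record published on the sending domain; the record below is a hypothetical example (the IP address and include host are placeholders).

```shell
# A hypothetical SPF record: only the listed host(s) may send mail for the
# domain, and "-all" tells receiving servers to reject everything else.
SPF_RECORD="v=spf1 ip4:203.0.113.10 include:_spf.example.com -all"
echo "$SPF_RECORD"

# To see the SPF record a domain actually publishes:
#   dig +short TXT example.com | grep spf1
```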

4. Always Use Secure Computers

Keep your computer secure, because phishers may use software that records information about your internet activities and thereby gains access to your computer. Always remember to install antivirus software and keep it up to date to prevent such intrusions. Firewalls safeguard the information on your computer and at the same time block communication from unwanted sources.

5. Databases

Advanced anti-spam solutions maintain databases of known phishing URLs and spam fingerprints designed to detect phishing emails. This is an effective mechanism for blocking the regular phishing emails that target businesses.

6. Bayesian

Bayesian filtering is a system that statistically analyzes emails and classifies them according to their content. You can train the system with samples of both spam and legitimate emails. This periodic training can be done by the vendor or by the users themselves, and as long as the Bayesian filter is trained with samples of phishing emails, it remains an effective tool for detecting them.

7. Create Awareness Among the Users

It is important to understand the common issues clients face when taking defensive measures against server attacks. Identify the problems your clients face and educate them on phishing emails and their solutions. Most of the time, servers get hacked through user-maintained content management systems such as Joomla and WordPress. Users have to resolve such issues very quickly, because phishing attacks spread extremely fast, and a person who is educated about phishing can manage them far more easily.

8. Be Careful While Downloading Attachments

Another way phishing reaches servers is through attachments sent with emails. Users are often misled by spoofed emails and find it difficult to reject or ignore mail that claims to be from reliable sources such as banks. Check the credibility of the email with anti-spam technologies, and contact the officials directly by phone. Unless you have confirmed the source of an email, never download or open its attachments, no matter who they appear to be from.

9. Use Accurate URLs for Banks

Always save the website URL of the bank that serves you, and type it directly when you want to visit the bank's website. Phishers usually use links within emails to direct victims to a spoofed website whose address closely resembles that of the bank you want to visit, for example a look-alike domain with one character changed. So if you suspect that an email from your bank or an online company is fake, do not click on any links embedded within it.

10. Be Careful with Emails and Personal Data.

To transact safely with banks, always use the secure pages of their websites. Keep your PINs and passwords safe and never let anyone know your security credentials. Do not write them down, and use a different password for each of your online accounts. Remember that the impact of phishing on a business is far more insidious than just an invasion of privacy: it is used to compromise server security through social engineering, and can be used to steal information, ruin reputations, steal money, disrupt computer operations, destroy important information, and more.


Phishing is one of the greatest challenges faced by business organizations across the world. Phishers use obfuscation techniques and web browser vulnerabilities to create phishing scam pages that are difficult to distinguish from legitimate sites, so many people become victims even when they are aware of phishing scams. The best way to prevent phishing attacks is to deploy anti-spam technologies and keep personal credentials safe. Remember that phishers will always use a sensational message to attract the user's attention, exploiting the reader's impulsive reaction to direct them to spoofed websites. Often, mere carelessness in typing a website address can expose your server to phishing attacks. Use secure systems, check your email and accounts regularly, and follow emailed links only after confirming the reliability of the source.


Phishing is always a threat to the proper functioning of servers, and it is server vulnerabilities that help hackers get into a server. The only way to manage such vulnerabilities is to gain deep knowledge about them. We have been educating our readers with highly effective techniques for solving all kinds of server-related issues, and in previous blogs we covered a couple of errors that adversely affect the work of server administrators. In this blog, as a server maintenance services provider, we will guide you in fixing the Root Privilege Escalation vulnerability (CVE-2016-6664) in MySQL, PerconaDB, and MariaDB.

Root Privilege Escalation Vulnerability of MySQL

Server phishing happens through the exploitation of two server vulnerabilities: CVE-2016-6663, aka Privilege Escalation / Race Condition, and CVE-2016-6664, aka Root Privilege Escalation. By exploiting the first vulnerability (CVE-2016-6663), a local MySQL user can escalate his privileges and then hack confidential data by executing malicious code on the database server. With the CVE-2016-6664 vulnerability, the hacker makes use of less-privileged user accounts and escalates their privileges to root level, and can then exploit critical or confidential information after gaining access to the database server. So, to avoid any server failure or business downtime, these two vulnerabilities need to be fixed without delay. The database servers affected by the Root Privilege Escalation vulnerability include MySQL server and its derivatives Percona and MariaDB. The vulnerable versions are:
• MySQL: <= 5.5.51, <= 5.6.32, <= 5.7.14
• MariaDB: all current versions
• Percona Server: < 5.5.51-38.2, < 5.6.32-78-1, < 5.7.14-8
• Percona XtraDB Cluster: < 5.5.41-37.0, < 5.6.32-25.17, < 5.7.14-26.17

Root Privilege Escalation Bug Fixing in MySQL

MySQL has fixed the vulnerability in its latest database versions. To update your server, follow the instructions below. Use ‘yum’ to update the MySQL server on CentOS and Red Hat servers.

sudo yum update mysql-server

With every version change, the previous version has to be removed before the new one is installed. On Debian and Ubuntu servers, ‘apt-get’ can be used to update the ‘mysql-server’ package.

root@docker ~ $ sudo apt-get update
root@docker ~ $ sudo apt-get install mysql-server

Running mysql_upgrade after every update helps you check and resolve any incompatibilities between the old data and the upgraded software.

Fixing Root Privilege Escalation Exploit in Percona

Percona has fixed the critical vulnerabilities in its last few releases of Percona Server for MySQL and Percona XtraDB Cluster.
Patched versions of Percona Server: 5.5.51-38.2, 5.6.32-78-1, 5.7.14-8
Secure versions of Percona XtraDB Cluster: 5.5.41-37.0, 5.6.32-25.17, 5.7.14-26.17
Users need to upgrade to the relevant incremental release in order to secure their Percona database servers from the root privilege escalation exploit. So, how is this done? Here is the solution:
1. Go to the official website and download the latest version of Percona. Make sure you have chosen the right OS and hardware before downloading.
2. Extract the downloaded source package.
3. Use rpm for installation on Red Hat Enterprise Linux and CentOS servers, and dpkg on Debian servers.
4. To update the database server, use ‘apt-get’ on Ubuntu and ‘yum’ on Red Hat.
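As a rough sketch, `sort -V` can be used to check whether a running Percona Server predates the patched release; the installed version below is a made-up example, while the patched version string comes from the list above.

```shell
installed="5.6.31-77.0"       # placeholder; on a real box get it from: mysqld --version
patched="5.6.32-78-1"         # first patched 5.6 release of Percona Server

# Version-sort the two strings; if the installed version sorts first (and
# differs from the patched one), an upgrade is required.
oldest=$(printf '%s\n%s\n' "$installed" "$patched" | sort -V | head -n1)
if [ "$oldest" = "$installed" ] && [ "$installed" != "$patched" ]; then
  echo "vulnerable: upgrade required"
fi
```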

Root Privilege Escalation in MariaDB

MariaDB has updated its software to prevent CVE-2016-6663 but has not yet released a version that addresses CVE-2016-6664. So fixing vulnerabilities in MariaDB means updating it to one of the following secure versions: 5.5.52, 10.0.28, or 10.1.18. The MariaDB server can be updated with the following steps:
1. Download the desired version from the MariaDB website.
2. Shut down the old version.
3. Run the mysql_upgrade command to update permissions and table compatibility.
4. After updating to the latest version, restart the MariaDB instance.
We can expect the fix for CVE-2016-6664 in upcoming versions.


Even though absolute perfection is almost unattainable, we can aim for it to better ourselves; in reality, the only person you have to compete with is yourself. Likewise, zero vulnerability is almost impossible, but acting immediately to fix issues helps you avoid catastrophic business downtime. As an active player in the server industry, we have some recommendations to help you avoid server vulnerabilities:
• Take a regular backup of the server's configuration files and databases before installing an upgrade.
• Make upgrades appropriate to the server configuration.
• Restart the database prior to every update, and then add custom configurations.
• Test software applications and websites to ensure the server is functioning properly.


Security has been a concern for the world since ancient times. Early humans who went hunting in dense forests safeguarded themselves with various kinds of weapons. Later, people settled near river banks and formed families, and the security of the family became a higher priority. As civilizations changed, people sheltered their families and guarded their personal possessions. Meanwhile, mediums of exchange like gold and cash came into existence; being essential to a person's financial stability, they too became things to protect. Humans have protected everything essential for survival in this world. Now we live in a world where data and information are crucial to every business operation. Whether in business, sports, entertainment, the military, or health care, the role of information is vital. The controversy created by WikiLeaks, a multi-national media organization and associated library, proves the relevance of information security in the world. Alongside security mechanisms, the emergence of cyber threats such as hacking attacks, malware, and viruses has further increased the magnitude of security requirements, and many defensive technologies, such as passwords and biometric applications, have evolved in response. Cloud computing is one of the latest architectures providing constant security for information. It is a growing service that offers functionality similar to conventional IT security, including protecting information from cyber theft, deletion, and leakage. AWS managed service providers usually deliver highly secure cloud solutions that ensure the protection, integrity, confidentiality, and availability of your data systems. This blog provides insights on the role of AWS in securing confidential information with its cloud security solutions.

AWS Excellence in Cloud Security

Among the cloud environments available today, the AWS cloud infrastructure is one of the most flexible and secure cloud computing platforms. This highly reliable platform helps customers deploy applications and manage data quickly and securely. AWS considers customers' unique cloud needs while providing best security practices, and ensures 24/7 infrastructure monitoring through continuous validation and testing, redundant and layered controls, and a substantial amount of automation. To this end, the necessary controls are replicated in every new data center or service. The ability to operate at scale while still maintaining security is what makes people adopt AWS cloud solutions. The solutions are built in such a way that users can adopt new kinds of security measures that address new safety concerns, and the platform gives users the ability to perform security actions in a more agile manner, shifting the security approach from purely preventive to detective and corrective actions. Amazon has multiple data centers across the world, which helps customers in countries that require data to be stored within their own borders. To meet compliance requirements for storing card numbers, health information, or other confidential data, users can rely on third-party audit reports that attest to the credibility of particular cloud solutions; AWS is well integrated with such third-party auditors, which further proves the reliability of its offerings. AWS meets the requirements of even the most security-sensitive customers through its widespread data centers and secure network architecture, delivering a resilient platform that functions without the capital outlay and operational overhead of traditional data centers.


Cloud solutions have had such an impact that many enterprises have already switched to them: they can focus on their business while the provider handles the company's security needs. The combination of a leading business conglomerate, Amazon, with a highly reliable platform, the cloud, provides real added value to customers. Users require security assurance when moving their confidential information to any new system, and AWS becomes the right choice with certifications from accreditation bodies across geographies and verticals, including ISO 27001, DoD CSM, FedRAMP, and PCI DSS. By operating in an accredited environment with continuous assessments of the underlying architecture, AWS users can eliminate much of their company's administrative burden and focus on their business like never before.


The impact that the ‘550 5.1.1 User unknown’ error creates varies from server to server, and so do the solutions.

There is no universal solution to this error across servers such as Exchange, Exim, Qmail, etc. Each server needs to be audited and the findings documented. So in this blog, we provide information that will help you manage the ‘550 5.1.1 User unknown’ error with better insight.

In Exchange Servers

The error ‘550 5.1.1 User unknown’ is a common issue for server administrators who handle Microsoft Exchange servers. The main reason for the error message is a security parameter applied to senders: Microsoft Exchange 2010 authenticates the sender to prevent spamming, and this is enabled on the server by default. These errors can happen with a distribution group of email addresses as well as with a single account. When this restriction is active, mail from all but local email addresses gets bounced with 550 errors. You can manage the issue by checking whether any security restriction is associated with the emails that are bouncing with error 550. To remove the restriction, follow these steps:
Step 1: Open the Exchange Management Console.
Step 2: Expand the recipient configuration and click on the mailbox or list.
Step 3: Select the mailbox that is having the issue.
Step 4: From the Properties button, go to the ‘Mail Flow Settings’ tab.
Step 5: Select ‘Message Delivery Restrictions’ and click Properties.
Step 6: Clear the checkbox titled ‘Require that all senders are authenticated’ and click OK.

In Exim for cPanel/ DirectAdmin and WHM servers

Most cPanel and DirectAdmin servers use the Exim mail server in one of two ways: some use the default installation, while others customize it. The error ‘550 5.1.1 User unknown’ is not common on Exim servers, so if you are confronted with it, the root cause needs to be identified. Check the recipient email address for typos, and make sure no custom email programs are interfering with the mail server settings. The filters and custom rules added to Exim or to the specific domain need to undergo strict verification in order to identify and fix the 550 email errors.
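As a hedged illustration of those first checks, the snippet below greps a simulated mainlog entry (the log line and address are fabricated so the example is self-contained); the commented commands are what you would run on the live server.

```shell
# Simulated Exim mainlog entry; on a live cPanel server the real file is
# typically /var/log/exim_mainlog.
printf '2017-01-10 10:00:01 1cZyX-0001-AB ** bob@example.com: No Such User Here\n' > exim_mainlog.sample
grep -c 'No Such User Here' exim_mainlog.sample    # count "user unknown" style bounces

# On the live server:
#   exim -bt bob@example.com                        # trace how Exim would route the address
#   grep 'bob@example.com' /var/log/exim_mainlog | tail -n 20
```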

In Qmail and Postfix for Plesk servers

Two things need to be verified to resolve the ‘550 5.1.1 User unknown’ error in Qmail and Postfix on Plesk servers: first, that the recipient domain properly resolves to the correct server, and second, that its mail service is configured on that server. If the MX points to the local server but mail service is disabled, enable it; if email for the domain is hosted elsewhere, turn off the local mail service for the domain. Also make sure there are no filters or custom rules in the Qmail or Postfix mail server interfering with mail delivery. A detailed examination of bounce headers and mail logs will get this done.
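As a sketch of the first check, the snippet below parses a simulated `dig` answer (the MX host is a placeholder); on a real box you would run the commented commands and compare the results yourself.

```shell
# What `dig +short MX example.com` might return, simulated here:
mx_answer="10 mail.example.com."

# Keep only the host field and strip the trailing dot, then compare it with
# this server's FQDN to see whether this machine should accept the mail.
mx_host=$(echo "$mx_answer" | awk '{print $2}' | sed 's/\.$//')
echo "MX host: $mx_host"

# On the live server:
#   dig +short MX example.com     # who receives mail for the domain?
#   hostname -f                   # is that this server?
```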


Identifying and solving the ‘550 5.1.1 User unknown’ error is difficult compared to other server issues; it is hard even to understand the error from the message alone. There are several variants of ‘550 5.1.1 User unknown’, such as ‘550 MAILBOX NOT FOUND’, ‘550 Unrouteable address’, ‘550 mailbox temporarily disabled’, and ‘550 No such user here’. A detailed study of the particular server in question helps to track the error and formulate a solution. Since this type of error varies with the server, rather than giving one direct solution, this blog aims to guide your investigation precisely so you can track down the error.


Gaming is one of the fastest growing industries in the world. Long gaming sessions help people spend their time with more joy and satisfaction, and this sought-after nature of games has generated abundant popularity for the industry. The gaming industry evolves constantly and embraces every new platform capable of providing a different visual experience to gamers. Gaming has now become more of an online experience through the support of various online gaming platforms; playing games stored on physical media with the support of a console is mere history. The intervention of servers in business has influenced game makers too, and players now game at a more optimized level. Compared to traditional gaming formats, reliance on servers has become far more common, which creates the requirement of choosing the right provider for a better gaming experience. In this blog, we will focus on the need for a server management partner in the gaming industry.

Overall Growth of the Company

A server management partner helps game makers achieve higher performance both internally and externally. The internal gains include operational efficiency, effective data management, and protection against disasters, while the external benefits include higher sales, ROI, customer satisfaction, and so on. So the partnership with a server management company contributes to the overall growth of the company.

Ensures More Reliability

The involvement of a server management partner makes games more reliable. These are, after all, people who have dedicated their whole expertise, time, and effort to server solutions. Engaging such external expertise lets enterprises forget the difficulties of managing backend operations and channel their effort towards game development, leading to the creation of more authentic, innovative, and entertaining games. Both the game makers and the players benefit from the association with a server management partner.

Provides Flexibility to Games

Unlike earlier times, most games can now be played online. In online games, game makers are more connected with the players through increased interaction: connecting gamers with other online players, offering points, distributing promotional information about new games, and so on. An effective server management team helps game providers keep up with this dynamism. Managed partners maintain the continuity of the whole gaming process and provide remarkable benefits to gamers as well as providers.

Reduced Downtime

The support of an efficient server management team reduces the chances of downtime, which gives gamers more time to play. Network problems can affect millions of online players and may even damage the credibility of the game developers. In business, a discontented customer implies a loss in returns, so a server management partner helps you avoid a huge risk by keeping the server network running smoothly.

Optimal Use of Monetary Resources

Most companies maintain an in-house Information Technology department. They are like the general physician in a hospital: they can guide management in technology-related decisions but cannot provide solutions for every day-to-day operational issue. The current world demands the service of specialists to tackle enterprise issues. So when a game development company partners with a server management company, it is assured of more specialised, value-added service: the company is paying for solutions rather than mere consultation.


There are certain areas in which we all lack expertise, but we do have the power to make the right decision. A game maker can choose the right server management partner for the company, and that one wise decision may change the way games are delivered and lift them to higher levels of popularity. The involvement of a server management partner helps companies generate more revenue while giving gamers a delightful experience. Server management services, together with cutting-edge technologies such as Augmented Reality, have already flourished in the gaming industry, and further innovations will also rely on server management support.


We can’t consider the change to IPv6 simply as a version change; it revolutionizes the world of the internet itself. With the new version, the concern about running out of IP address space diminishes; in fact, it averts a possible future technological disaster. IPv4 provides 32 bits for IP addresses, and the latest version extends this to 128 bits. Rapid technological advancements in connected devices, mobile applications, and IoT, along with the continued growth of the Internet, have created an industry-wide inclination towards the latest protocol. In accordance with a mandate dating back to 2010, more US Government agencies are working to move their public-facing servers and services to the new version as quickly as possible. Like any other industry rooted in the internet, the cloud industry is also shifting from IPv4 to its successor, and the change is already reflected in the leading cloud service provider, AWS. This blog covers everything you should know about the AWS shift towards IPv6.

IPv6 for EC2

After launching IPv6 support for S3, AWS is taking a big step forward with IPv6 support for Virtual Private Clouds (VPCs) and EC2 instances running in a VPC. For the time being, the support is available in the US East (Ohio) region, and AWS plans to launch it in other regions very soon. The support works for both new and existing VPCs, and you can opt a VPC in by checking a box on the console; API and CLI support is available as well.

The VPC creation

A VPC is an isolated portion of AWS populated by objects such as Amazon EC2 instances. You are required to specify the IPv4 address range for your VPC as a Classless Inter-Domain Routing (CIDR) block. You are not allowed to specify a block larger than /16, but you can optionally associate the VPC with an Amazon-provided IPv6 CIDR block.

Subnet Creation

To specify the subnet's address block, use CIDR format. Block sizes must be between a /16 netmask and a /28 netmask, and the subnet can be the same size as the VPC. The IPv6 block must be specified as a /64 CIDR block; you can’t skip this, as it is mandatory.
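The console steps above have CLI equivalents. A sketch using the AWS CLI; the IPv4 range, VPC ID, and IPv6 /64 shown here are placeholders (the Amazon-provided IPv6 block is assigned by AWS, so in practice you would read it back from the VPC before carving out the subnet):

```shell
# Create a VPC with a /16 IPv4 block and request an Amazon-provided IPv6 block.
aws ec2 create-vpc --cidr-block \
    --amazon-provided-ipv6-cidr-block

# Carve a subnet out of it; the IPv6 block must be a /64 from the VPC's range.
aws ec2 create-subnet --vpc-id vpc-0abc1234 \
    --cidr-block \
    --ipv6-cidr-block 2600:1f16:abcd:1200::/64
```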

Creation of Virtual Interfaces

Virtual interfaces (VIFs) for IPv4 or IPv6 addresses can be created using the Direct Connect console. You can create either a private VIF or a public one, based on your requirement. A private virtual interface is used to access an Amazon VPC using private IP addresses, whereas a public interface can access all AWS public services, such as EC2, S3, and DynamoDB, using public IP addresses. If you are creating a virtual interface for another account, provide that AWS account ID to complete the procedure; otherwise, enter the name of your virtual interface.


The specialty of the latest version is that every address in it is internet-routable and able to connect to the internet by default; in an IPv6-enabled VPC, the address associated with an instance is public. Even though you still have to use a mechanism for creating private subnets, this direct association removes a host of networking challenges. Along with the launch of IPv6 for EC2, Amazon is introducing a new Egress-Only Internet Gateway (EGW), which helps you implement private subnets for your VPCs. The benefit of the EGW is that it is easier to set up and use than NAT instances, and it is available free of cost. It allows outbound traffic and blocks incoming traffic, which can be used to impose restrictions on inbound IPv6 traffic; users can continue to use NAT instances or NAT Gateways for IPv4 traffic. The new version works with all current-generation EC2 instance types except M3 and G2, and will support all upcoming instance types. The new protocol version is truly a revolution in the realm of the internet and is embracing every wing of technology; its launch for EC2 instances shows that the change is already reflected in the cloud industry as well.
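Creating the egress-only gateway is a two-step affair with the AWS CLI: make the gateway, then point the default IPv6 route (::/0) of the private subnet's route table at it. The gateway and route-table IDs below are placeholders:

```shell
# Create the egress-only internet gateway for the VPC.
aws ec2 create-egress-only-internet-gateway --vpc-id vpc-0abc1234

# Send all outbound IPv6 traffic from the private subnet through it;
# inbound connections from the internet remain blocked.
aws ec2 create-route --route-table-id rtb-0def5678 \
    --destination-ipv6-cidr-block ::/0 \
    --egress-only-internet-gateway-id eigw-0123abcd
```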


Finally, AWS has addressed the needs of a large number of users (a niche market, so to speak) with the announcement of Amazon Lightsail on November 30, 2016. The initiative promises to deliver easy Virtual Private Servers (VPS) at predictable costs. AWS has a huge impact on users across the globe; a Business Insider report shows that the company, the leader in on-demand computing resources for over a decade, has the largest footprint in the industry in terms of customers across the globe, available regions, and revenue. The new Lightsail lets users choose a configuration from a menu and launch a virtual machine preconfigured with DNS management, SSD-based storage, and a static IP address. The new VPS supports your favourite developer stack (LEMP, LAMP, MEAN, or Node.js), application (Joomla, Redmine, GitLab, Drupal, etc.), or operating system (Amazon Linux AMI or Ubuntu), with attractive pricing plans starting at $5 per month, including storage and data transfer charges. This post analyses whether AWS Lightsail proves fruitful in the long run with all its simplicity and pricing strategies.

The Interfaces the User Interacts With

The new VPS provides simplified UI interfaces to the users. Fewer choices, reduced confusion, fewer prompts, and more guidance are the specialties of Lightsail servers. Along with its trouble-free interface, Lightsail incorporates a simple API and CLI into its architecture to support developers. To address this niche of different customers, AWS has encapsulated its existing features.

Minimum Set of Features

For those who don’t require the shiny gizmos of cloud systems and want a simple solution, the new AWS Lightsail is an optimal choice. The user can skip tutorials, security groups, subnets, and every last detail of the configuration of the server to be launched. Instead, Lightsail lets the user focus on the functionality of the application they intend to run on the server.

Storage Capacity

It supports persistent SSD-based block storage, which protects against component failure and offers high availability and durability. The Amazon EBS volume is automatically replicated within its Availability Zone and offers the consistent, low-latency performance needed to run your workloads. As with other AWS cloud solutions, Lightsail supports high scalability, and you can switch between lower and higher usage quickly, within minutes. It provides up to an 80 GB disk on your instance, so everything you are used to is available for smaller applications and systems as well.

No More Delays

For developers who require only virtual private servers, Amazon Lightsail is the perfect solution. Lightsail avoids delays and helps you start your project quickly. It achieves this through components such as a virtual machine, DNS management, SSD-based storage, data transfer, and a static IP, all for a low and predictable price. You can manage the servers using the API, the command-line interface (CLI), or the Lightsail console.


A snapshot can be used to create a backup of a Lightsail instance for future reference. For example, even if you delete an instance, it remains possible to recover it. You can also create a new instance from an existing snapshot, and it will have the same configuration and pricing tier as the original instance.
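The snapshot-and-restore cycle looks like this with the AWS CLI; the instance name, snapshot name, zone, and bundle below are placeholders to adapt to your own setup:

```shell
# Snapshot an existing Lightsail instance.
aws lightsail create-instance-snapshot \
    --instance-name my-blog \
    --instance-snapshot-name my-blog-2017-01-01

# Later, recreate an identical instance from that snapshot.
aws lightsail create-instances-from-snapshot \
    --instance-snapshot-name my-blog-2017-01-01 \
    --instance-names my-blog-restored \
    --availability-zone us-east-1a \
    --bundle-id nano_1_0
```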


The requirement for a VPS or simple server forms a niche market in itself, and there is increasing demand for such solutions. Amazon Lightsail satisfies those who need to host a project, a small business, or an experiment, each of which can be achieved economically using this virtual server. For those who think that jumping directly into the larger AWS services carries too much risk, AWS Lightsail can be a good start. With Lightsail, Amazon appears to want to cater to developers and to those who want to bring simple servers into their platforms. By embracing even the small requirements, Amazon proves its relevance in the current cloud industry.


Have you ever been confronted with the message “Login to Proxmox Host Failed”? Are you looking for a solution? This error is displayed when a user directly accesses the Proxmox VE management console, when integrating third-party modules such as WHMCS into the Proxmox server, or during cluster management of Proxmox nodes. After getting this error message, users are often confused. For a while now, we have been addressing server-related issues in our blogs; this time, the blog provides everything you need to know to fix the “Login to Proxmox Host Failed” error.

Failure through SSL Problems

The URL https://IPaddress:8006/ is the default web address for the Proxmox VE management console; the console will not load if you try to access it without the secure protocol. An expired SSL certificate, or bugs, can deny access to Proxmox. So first confirm that the certificates are working fine and not expired. If they are fine, then execute this command on the Proxmox machine:

pvecm updatecerts --force

Most issues pertaining to the SSL certificates can be solved using this command.
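Checking the expiry first is a one-liner with openssl. The sketch below works on a date string captured from the certificate; on the Proxmox host itself the line would come from `openssl x509 -enddate -noout -in /etc/pve/local/pve-ssl.pem` (the default certificate path in Proxmox VE), and the comparison assumes GNU `date`:

```shell
# Hypothetical expiry line as printed by openssl x509 -enddate.
enddate='notAfter=Jan  1 00:00:00 2020 GMT'

# Strip the prefix, parse the date, and compare it against the current time.
expiry=$(date -d "${enddate#notAfter=}" +%s)
now=$(date +%s)
if [ "$expiry" -lt "$now" ]; then
    echo "certificate expired"
else
    echo "certificate valid"
fi
```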

Login Failures Caused By Firewall Rules

The “Login to Proxmox Host Failed” error can also be caused by firewall rules in Proxmox. Since firewall rules are crucial for server security, instead of avoiding them we need to configure the rules correctly, which is vital for effective server functioning. Proxmox VE 4.x uses the web interface at port 8006, pvedaemon, the SPICE proxy at port 3128, rpcbind at port 111, and sshd at port 22. If you use a firewall such as iptables, rules have to be added on the Proxmox server for the corresponding ports so that the Proxmox server keeps functioning properly.

iptables -I INPUT -p tcp --dport 8006 -j ACCEPT
iptables -I INPUT -p tcp --dport 5900 -j ACCEPT
iptables -I INPUT -p tcp --dport 3128 -j ACCEPT

Add a rule to accept loop-back interface connection for proper internal communication in the Proxmox server.

iptables -I INPUT -i lo -j ACCEPT
iptables -I OUTPUT -o lo -j ACCEPT

In the case of third-party modules such as Modulegarden, WHMCS, etc., it is important to ensure that connectivity between the two servers is proper. Since the login problem can occur due to connectivity issues, try using the telnet command to test the connection. Of course, you could flush the firewall rules to rule out connectivity problems, but that is not advisable because of the security implications; instead, allow the required connections and deny everything else.

Problems in login due to incorrect server time

Most servers rely on the NTP service to keep the server time updated. Due to connectivity or other service-related errors, the NTP service sometimes fails to sync the server time. Issues with server time are caused not only by the NTP service but also by a difference in time zone. The lack of clock synchronization leads to incorrect server time, which in turn leads to login failures. So, for smooth functioning of the server, proper clock synchronization is crucial: always keep your server time updated to avoid login errors.
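On a systemd-based host, getting the clock back in sync usually comes down to a couple of commands (this sketch assumes timedatectl and ntpdate are installed, and uses the public pool.ntp.org servers as an example source):

```shell
# Check the current time zone and whether NTP synchronization is active.
timedatectl status

# Enable continuous NTP synchronization.
timedatectl set-ntp true

# Or force a one-off sync against a public NTP pool.
ntpdate -u pool.ntp.org
```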

Password Issues

Use the password of the Proxmox shell, defined during Proxmox installation, to access the Proxmox VE management console. Always use a strong password for server protection, but avoid overly complicated ones with a lot of special characters, as they can cause severe login issues. Another source of password problems is bugs in the Proxmox authentication module, which validates the login details. If all else fails, reset the node password to something simple and try to log in again.


Unlike other server errors, the “Login to Proxmox Host Failed” error occurs due to reasons that are completely interdependent, which makes the root cause difficult to understand. The reasons range from an expired SSL certificate to firewall issues, incorrect server time, and password problems. This blog helps you understand the most common causes of the “Login to Proxmox Host Failed” error. The issue should be analyzed with care and prior study: if you identify the wrong cause, the solution you implement may even affect server operations itself.


The most common question server owners ask when they are confronted with ‘Disk usage warning’ alerts is: “Is it really serious?” Serious or not, most server administrators face it regularly. The disk usage warning email is sent when the user utilises the maximum space allocated to the web hosting account. The issue should be managed as soon as possible to avoid service disruption: when the server space becomes full, the user will not be able to send or receive emails. In short, a ‘disk usage warning’ needs to be rectified immediately. This blog provides insights into solving ‘Disk usage warning’ alerts on cPanel/WHM servers.

What Is Disk Usage Warning Message?

The ‘Disk usage warning’ message indicates that the disk space in a partition of the cPanel server is running out; in the example below, the / partition is 84.73% full. As the number of hosted accounts increases, the free space on the server reduces, which leads to the warning. It is recommended to leave at least 10% of the total server space free to avoid such issues. The disk space warning must be treated with high importance and managed immediately before it leads to a server crash.

How to Respond To Disk Space Warning Message and Formulate A Solution?

The disk space issue should be treated with high priority because, if not managed well, it can affect all the websites hosted on the server. The issue can be rectified in three stepwise actions, starting with validation of the warning message. For example, use the command ‘df -h’ to check the disk status of the server:

root@host ~ $ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda6 50G 42.37G 7.63G 84.73% /

In the above demonstration, it is clear that the ‘/’ partition is 84.73% full, so you have to take the email seriously.
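The validation step can also be scripted, so monitoring catches the condition before the warning mail does. A small sketch over the sample `df -h` output above (the 80% threshold is this post's rule of thumb, not a fixed limit):

```shell
# Sample df -h output, as in the demonstration above.
df_out='Filesystem Size Used Avail Use% Mounted on
/dev/sda6 50G 42.37G 7.63G 84.73% /'

# Pick the Use% figure for the / partition and compare it with the threshold.
pct=$(echo "$df_out" | awk '$NF == "/" {gsub("%", "", $5); print int($5)}')
if [ "$pct" -gt 80 ]; then
    echo "disk usage warning: ${pct}%"
fi
```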

Disk Space Examination

In order to bring the disk usage below 80%, you have to clear some space on the server. A detailed check of files and folders helps you track the items that account for the maximum space. Use the following command:

du -sch /*

The server administrator will receive output such as:

root@host # du -sch /*
372K ~
107M etc
113G home
253M lib
20K LICENSE
2.6M locale
16K lost+found
64K mbox_backup
8.0K media
8.0K mnt
418M opt
0 proc
12M pub
4.0K pub-htaccess.txt
648K templates
14K tmp
64K tools
5.4G var

Further examination needs to be done on the files that are taking more space.
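Instead of eyeballing the listing, sort it. A sketch over a few of the sample lines above (GNU `sort -h` understands the K/M/G suffixes):

```shell
# A few sample du -sch lines from the listing above.
du_out='107M etc
113G home
253M lib
5.4G var'

# Sort by human-readable size, largest first, and show the top two consumers.
top2=$(echo "$du_out" | sort -rh | head -2)
echo "$top2"
```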

Making Some Space Free

After detecting the files that consume the most space, the next step is to free some of it. Once you find that a file is not relevant, for example old logs or core dump files, delete it. After cleaning up all the unwanted files, the warning mail will stop arriving, and at the end of the process output like the following will be displayed:

root@host ~ $ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda6 50G 39.5G 10.5G 79% /

The deletion of unwanted files needs to be completed with utmost care, or it can lead to loss of customers or even a crash of the entire server. So make sure you make the right move.
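For routine cleanup of old logs, `find` can do the selection and the deletion in one pass. The sketch below stages the demonstration in a throwaway directory; on a real server you would point it at something like /var/log, and review the `-print` output before adding `-delete`:

```shell
# Stage a scratch directory with one stale and one fresh archived log.
dir=$(mktemp -d)
touch -d '60 days ago' "$dir/old.log.gz"
touch "$dir/fresh.log.gz"

# List, then remove, compressed logs older than 30 days.
find "$dir" -name '*.gz' -mtime +30 -print -delete

remaining=$(ls "$dir")   # only fresh.log.gz should remain
echo "$remaining"
rm -rf "$dir"
```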


Server administrators frequently encounter the ‘Disk usage warning’. The issue is easy to rectify; the problem is that, most of the time, server administrators ignore such emails. The solution varies according to factors such as resource usage, budget, web traffic, and the features offered by each web host. The issue has no one-time solution; it requires proper preventive action, 24/7 disk space monitoring, and a dedicated team to continuously clear out unwanted files. We are trying to address the need for an immediate response to the ‘Disk usage warning’; our goal is to provide awareness rather than a purely technical solution.


The reasons for 550 email errors range from a non-existent email account to a wrongly added email server rule. To rectify the error, we first have to look at the basic process behind every email communication. When an email is sent, it is routed from the sender's email server to the receiver's email server, and the recipient then downloads the email from their server. The error happens during this delivery process and is driven either by the sender or by the receiver. In this blog, we will discuss the error ‘550-Unrouteable address’ and its solution.

The Cause behind the Error ‘550-Unrouteable Address’

While sending an email via Outlook or webmail, the error ‘550-Unrouteable address’ is displayed in the bounce message, and we can track the source of the error from the complete bounce message. Understanding the causes of the error helps us choose the right resolution.

Sending Emails to a Non-Existent Recipient Domain

When an email is sent to a non-existent recipient domain, it bounces with a ‘550-Unrouteable address’ error. Another cause is a typo in the recipient address, in which case the email is addressed to an invalid mailbox. Sending email to a deleted, deactivated, or expired domain also causes the error. If the recipient addresses are wrong, the emails will not be delivered and will bounce back to the sender with the error message. The solution is to validate the recipient email address before sending and to check that the domain is neither expired nor deactivated. To validate the domain, use the following command:

whois domain.com

Also ensure the domain is resolving fine by checking the DNS resolution results with the ‘dig’ command.

Incorrect MX Records for the Recipient Domain

The sender's email server cannot route correctly to the recipient's email server unless the MX records are set properly for the recipient domain. If there is an error in the DNS settings for the recipient domain, external email servers treat the destination as unreachable. Temporary DNS resolution issues with the sender's ISP can also make it impossible to resolve the recipient domain correctly, causing email routing to fail. The emails then bounce back and the ‘550-Unrouteable address’ error is displayed. As a solution, always verify whether the MX records are correctly set for the recipient domain.
You can check them with the following command:

dig MX domain.com

Also try connecting to port 25 of the recipient email server and confirm that the connectivity is fine:

telnet mail.domain.com 25

If you find connectivity issues to the remote email server, DNS errors, or missing MX records, contact the recipient server's support and get it fixed.

Security Constraints

Many email servers will not accept mail from senders whose email server is on a spam blacklist, and some email servers grey-list new senders, accepting their mail only on a retry. In RDNS security checks, it is mandatory that the server IP and its hostname map to each other: a PTR record is set to map the IP address back to the sender's server name. If the sending server's PTR record does not match its hostname, some servers will reject the email; if the recipient server cannot verify the sender properly, the email bounces back and the error is displayed. As a solution, the RDNS for the sender email server IP should be set properly; it can be checked using:

host ip-address

By checking spam blacklists and confirming with the remote email server, rejections or blacklisting by the recipient email server can be confirmed. To secure the server after a blacklisting, scan the server, remove all spamming scripts, and implement security measures. Also check that all email scripts are configured with valid sender and recipient email addresses; this way we can avoid sending emails to invalid addresses.

Configuration Settings of the Email Server

In email servers such as Exim, there is a limit on the number of emails a domain can send out every hour. Once the limit is exceeded, further emails bounce back to the sender with the error ‘550-Unrouteable address’. Wrong routing configuration for the domain can also produce the error, and firewalls or other security rules on the recipient email server can be a reason as well.
All of these scenarios are capable of generating the error ‘550-Unrouteable address’.
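The RDNS check described above boils down to comparing two lookups. A sketch over hypothetical `host` output (the hostname and IP are made up for illustration):

```shell
# Hypothetical outputs of: host mail.example.com   and   host
forward='mail.example.com has address'
reverse=' domain name pointer mail.example.com.'

# RDNS passes when the PTR target matches the sending hostname.
fwd_host=$(echo "$forward" | awk '{print $1}')
ptr_host=$(echo "$reverse" | awk '{print $NF}' | sed 's/\.$//')
if [ "$fwd_host" = "$ptr_host" ]; then
    echo "RDNS match"
else
    echo "RDNS mismatch"
fi
```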


The sender's and recipient's email addresses play a vital role in creating the error ‘550-Unrouteable address’, so the first step in fixing it is to verify that the email addresses are correct. That should be the foremost goal, because most ‘550-Unrouteable address’ errors happen due to a wrong email address on either the sender or the receiver side. Setting the MX records correctly also helps administrators rectify the error. Most email servers now maintain strict security measures, so meeting those security and quality standards not only protects your email servers but also helps you communicate with other servers, eliminating the chances of the error appearing. In brief, fixing the error ‘550-Unrouteable address’ is largely under the control of the server users: following the required standards and procedures keeps the server free from the hazards of email communication, which is highly important in meeting enterprise requirements.


For every problem there is a right approach, but when people are confronted with problems, they are often confused and embarrassed. Our reaction to a problem is important; at times an immediate action is better than a late solution, because the issue may get more complicated as time passes. This applies to servers too. Even though server solutions have evolved, administrators still confront severe server load spikes. Of course, server load issues can be resolved; the problem is spending too much time fixing them. In business, even a short period of downtime may affect sales and ultimately the credibility of the company. In our blogs we usually bring to light issues pertaining to server platforms; this time we are addressing the Linux server platform with a solution to the most chaotic server load spikes.

The Million Times Heard! Still, What Is This Server Load Spikes All About?

Every server works with a limited set of resources. Consider a server with 8 GB of RAM, 75 IOPS SATA II hard disks, 4 processors, and 1 Gigabit NIC cards. Assume a user starts a backup of their account that occupies 7.5 GB of RAM; every other process has to wait for that one to finish. The longer the backup takes, the longer the queue becomes, and this wait is what shows up as server load.
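That wait in the queue has a direct measurement on Linux: the load averages in /proc/loadavg. As a rough rule, a 1-minute load persistently above the CPU count means processes are queuing. A minimal sketch:

```shell
# The first field of /proc/loadavg is the 1-minute load average.
load=$(cut -d' ' -f1 /proc/loadavg)
cpus=$(nproc)

# Compare load against the CPU count (floating-point compare via awk).
state=$(awk -v l="$load" -v c="$cpus" 'BEGIN { print (l > c) ? "overloaded" : "ok" }')
echo "$state"
```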

How To Fix Server Load Spikes In Linux Web Hosting Servers?

Server load needs to be resolved quickly, because every second more processes queue up one after the other. Once your commands take a long time to execute, the server becomes non-responsive and heads towards a reboot, so the recovery should happen within the first few minutes; for this, 24/7 monitoring is necessary.

Always start from ‘what you know’ and go to ‘what you don’t’. You know the server resources: RAM, CPU, and I/O. One of these resources is being abused, and you have to find out which. The next step is to track which service is using that resource; it can be a database server, mail server, web server, or another service. From the service, you can identify the user who is actually abusing the server. Let’s discuss the process in detail.

The command ‘atop’ is an ideal tool if you are troubleshooting a physical server or a hardware-virtualized instance. If you are operating in an OS virtualization environment, the ‘top’ command is suitable, and it is recommended to start with the ‘vztop’ command to troubleshoot server load on a VPS node. Even though the commands and methods differ, the ultimate goal is to locate the overloaded resource: memory, disk, CPU, or network. Never jump to a conclusion immediately; observe for at least 30 seconds before deciding which resource is being hogged. In top, use the ‘i’ switch to see only the active processes and the ‘c’ switch to show the full command line. Check %wa (the wait average) in top to know whether it is a non-CPU resource hog. Use the ‘pstree’ command to spot any suspicious processes, and ‘netstat’ to identify multiple connections from one particular IP.

The next step is to track down the service that is hogging the resource. To sort out the overloaded service, we can again use commands such as ‘atop’; utilities such as ‘atop’ and ‘top’ are suitable for checking CPU usage while tracking the overloaded service. Likewise, ‘nethogs’ is the best utility for checking network usage. The third step of the iteration is tracking the virtual host that is causing the server load. The individual access logs are the best place to start service-specific troubleshooting, and increasing the log verbosity reveals which virtual host is taxing the service. Through these three stages of sorting, the exact source of the server load spike can be identified.

Conclusion

A disciplined approach is required for troubleshooting server load spikes. The three-level process (resource, service, and host levels) helps the server administrator track down the hogging source. As mentioned before, the way to reach the exact point of server load is to start from ‘what you know’ and go to ‘what you don’t’. The practice of checking all command output on a normal server helps the Linux administrator recognise what went wrong, and there are specialist tools to use in different situations. Server load spikes require immediate resolution, because as long as the delay exists, it affects the business of the enterprise. Tracking and resolving them at an early stage helps enterprises avoid huge losses in business and credibility.


Cloud computing opens new opportunities for the gaming industry and solves prevailing issues as well. The technology has embraced the sector and has much more to contribute to the evolution of the growing gaming industry. The application of cloud computing benefits not only enterprises but also consumers. With the support of cloud technology, other wings of entertainment such as music and television already offer extensive content to users through a variety of devices such as PCs, smartphones, and Smart TVs, and the gaming industry will be no exception. Gaming is a rapidly growing industry, accounting for revenue of $68 billion. This blog describes the top eight benefits the gaming industry gains from the support of cloud technology.


Enhanced Security

The industrial-grade security used by cloud computing companies prevents external intrusions such as hacking. In cloud systems, the information is stored in a virtual storage space, which makes the platform safer than traditional applications.

Compatible With Devices of Any Type

Using the highly scalable cloud platform, high-end games can be played even on lower-end machines. The limitations caused by memory requirements, graphics capacity, and processing power diminish with cloud solutions, providing users an exceptional gaming experience.

Reduced Costs

In traditional models of gaming, companies are required to pay more for bandwidth when the traffic in the system is high, which usually happens when they have released a new gaming title. In normal traffic periods, gaming companies use about 10 percent of their server space and bandwidth and keep the remaining 90 percent in reserve. Cloud systems follow a ‘pay as you go’ payment model in which gaming companies pay only for the resources they have actually utilized.
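The saving described above is simple arithmetic. A hedged sketch with made-up numbers (the rates and the usage pattern below are illustrative assumptions, not real prices):

```python
# Compare a flat reservation sized for peak demand against 'pay as you go'
# billing. All figures are hypothetical, chosen to mirror the ~10% normal
# usage vs 90% idle reserve described in the text.
peak_units = 100                      # capacity sized for a launch-day spike
flat_rate = 0.10                      # price per unit-day, flat reservation
cloud_rate = 0.12                     # slightly higher per-unit price on demand
daily_usage = [10] * 27 + [100] * 3   # ~10% usage except three spike days

flat_cost = peak_units * flat_rate * len(daily_usage)
cloud_cost = sum(units * cloud_rate for units in daily_usage)
print(round(flat_cost, 2), round(cloud_cost, 2))
```

Even at a higher per-unit rate, paying only for used capacity comes out far cheaper than reserving for the peak all month.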

Easy Access to Games

Even though cloud is an advanced technological concept, it is easy to implement, and it allows users to access games from any device and any location without downloading and configuring applications.

No More Piracy

Unlike in other systems, the physical gaming software is never released to the market, which avoids the chances of piracy. Instead of playing on physical machines, gamers play games on cloud servers, accessed from their personal computers. So the cloud diminishes the chances of unauthorized manipulation and interference and preserves the integrity of the game.

Availability of More Devices

Rather than depending on consoles, cloud computing supports the use of multiple devices such as smartphones, laptops, hand-held devices, and desktops. Game players can enjoy games from anywhere in the world without even carrying a console. With the support of cloud computing, players can enjoy their games on the broader range of devices that already belong to their routine usage.

Immediate and Dynamic Backend Support

Compared to traditional applications, cloud applications are more flexible and scalable in terms of storage capacity. This enables gaming companies to serve information to gamers as soon as they log into their accounts. Cloud computing platforms help gamers perform functions such as saving a game or protecting it from unauthorized access without any difficulty. The immediate response and reliable access give gamers a more delightful experience and keep their satisfaction levels high.

Access to Multiple Games

Cloud gives gamers the chance to play multiple games at a time. This helps companies generate more revenue and gamers get a delightful experience. It makes the gaming sector more dynamic by bringing about active involvement of gamers and providers alike, a scenario where highly innovative and constructive contributions happen.

Conclusion

Instead of focusing on producing hardware or consoles that are portable and easy to use, game-makers can concentrate on delivering interesting and long-lasting game content with the support of cloud computing. Cloud-based gaming is beneficial for both customers and companies. It is a platform for innovation, and major game-makers such as Nintendo, Microsoft, and Sony are providing delightful content with the support of the technology. Cloud gaming eliminates the need to carry a console and provides speedy access to multiple games through laptops, smartphones, and desktops. In fact, cloud computing removes the existing concerns of the gaming industry and replaces them with new opportunities.
Cloud computing revolutionizes the gaming industry by helping game-makers generate more sales and returns while at the same time engaging customers through an exceptional gaming experience.


The strong security features and the availability of reliable data centers in nearby locations make AWS the right choice for enterprises across the world. For AWS, cloud security is the highest priority. AWS users are able to scale and manage their servers while maintaining a secure environment. Reserved and On-Demand are the two most sought-after instance types, and most subscribers get confused when selecting the instance type for their enterprise. This blog covers everything you should know to make the right choice between Reserved and On-Demand Instances for your organization.

The Comparison Between Reserved and On-Demand Instances

Reserved Instances (RIs) provide three payment options, structured on the basis of user requirements: no upfront (monthly payment), partial upfront (a fixed fee is paid upfront and the rest is settled monthly), and all upfront (settled entirely in advance). All three payment options are available for one-year contracts; three-year contracts are available if you pay at least partially upfront. Amazon gives discounts to RI subscribers based on how much the user pays upfront.

On-Demand Instances are AWS virtual servers purchased at a fixed rate per hour. These servers run in both AWS Elastic Compute Cloud (EC2) and AWS Relational Database Service (RDS). On-Demand servers are suitable for applications with short-term, irregular workloads that cannot be interrupted, and they are useful during testing and development of applications on EC2.

Cloud services are based on the philosophy of ‘pay as you go’, which is the advantage that distinguishes cloud users from others. Reserved Instances seem to violate that philosophy, as they follow a fixed payment system. The flexibility of a cloud system depends on its scalability, which is purely based on paying according to the resources actually used; for enterprises that confront unpredictable demand fluctuations, the ‘pay as you go’ facility becomes vital. Where an RI makes the cloud a CapEx (one-time expense), On-Demand follows an OpEx (monthly recurring expense) model. On-Demand embraces the cloud philosophy better than RI does; however, it is more expensive than the latter. Reserved Instances save cost compared to On-Demand Instances: when you purchase Reserved Instances, you are served a lower hourly rate in return. Using RIs reduces the cost of the resources you are already using and lets you pay a lower price upfront than you would pay on demand.
There are 2,000 RI options in total, each with its own breakeven point.
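The breakeven point mentioned above can be estimated directly. A sketch with hypothetical prices (real RI pricing varies by instance type, region, and payment option, so treat every figure below as an assumption):

```python
# Find the number of running hours at which a partial-upfront RI becomes
# cheaper than paying the On-Demand rate for the same instance.
on_demand_hourly = 0.10   # assumed On-Demand rate, $/hour
ri_upfront = 300.0        # assumed partial-upfront fee for a one-year term
ri_hourly = 0.04          # assumed discounted hourly rate with the RI

# RI wins once: ri_upfront + ri_hourly * h < on_demand_hourly * h
breakeven_hours = round(ri_upfront / (on_demand_hourly - ri_hourly))
print(breakeven_hours)    # 5000
```

With these assumed rates the RI pays for itself after about 5,000 running hours, roughly seven months of always-on use; a workload that runs less than that is better served On-Demand.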


There is nothing like a ‘bad instance’ in AWS cloud services; it is the user’s requirement that makes each instance beneficial. So users must be aware of their requirements in order to choose the right one. Buying an instance is a big investment, and it involves risk as well. As the veteran theoretical physicist Einstein put it, "You have to learn the rules of the game. And then you have to play better than anyone else", every subscriber should know their organizational requirements and the functionality of each instance. Always consider whether a purchase makes the most sense for your organization. The following practices may help in utilizing the benefits of instances:
  • Go for the smaller instance sizes in the case of RIs.
  • Whenever appropriate, use Spot Instances and Auto-scaling.
  • In the case of on-demand instances, schedule the on/off times.
If applied right, the use of instances produces significant savings; otherwise it can devastate your budget. Get acquainted with the essentials of instance usage and experience what AWS instances can do.


Server outages can occur due to a variety of events, and one failure condition leads to another. Reasons for server failure include operating system crashes, loss of power, hardware malfunction, network partitions, and unexpected application behaviour. Even though the reasons differ, all of them lead to heavy business losses, so the issue has to be addressed immediately. Enterprises require an ‘immediate rescue mechanism’ for their servers. The question, then, is whether a mechanism that provides an immediate solution to all your server issues is available. Well, yes: Ideamine has introduced the concept of the Server Ambulance for the first time in the world. This virtual rescue wagon consists of skilful system administrators who solve, on an immediate basis, the issues that adversely affect your customers. Ideamine’s dedicated team is available 24/7, awaiting your email or call, to give you instant support.

Most Common Cases When the Server Ambulance Was Called Upon

The Server Ambulance service is available irrespective of place and time. It is an initiative from Ideamine intended to solve server issues instantly. The Server Ambulance helps you overcome the following situations:

Unexpected Server Attacks

Hacking always happens unexpectedly. If you lack a strong security mechanism, it may cause issues such as severe data loss, leakage, or theft. The Server Ambulance, with its server rescue team, helps you overcome such disastrous situations. Ideamine makes your firewall strong enough to withstand hacker attacks.

When the Server Goes Down Even After Continuous Fixes

Sometimes continuous fixing of the server may itself lead to long-lasting server deadlocks. Unlike conventional server solutions, the Server Ambulance applies quick remedies that pave the way for faster server recovery. The mechanism helps organizations avoid prolonged server failures and thereby prevents heavy sales losses.

When Server Performance Gets Affected On a Daily Basis

In business, server resource utilization varies according to changes in demand: the more demand there is for your product or service, the more server capacity is required, and the server needs to be updated on a daily basis. The Server Ambulance facilitates easy and quick maintenance of information on the server.

When You Are Sceptical About Losing Your Back-Up and Want A Risk-Free Migration

The Server Ambulance is not only useful to enterprises that are already using server platforms. Those planning to move away from traditional applications can also benefit from Ideamine’s fast and effective hosting service through the Server Ambulance. If you are concerned about transferring your data and want a risk-free migration, the Server Ambulance ensures a safe migration onto server platforms. The proficient and well-equipped team assists you throughout and offers you hassle-free switching.


Strictly speaking, the Server Ambulance is not a technical innovation from Ideamine; in fact, it is an evolved form of current server practices that prevail in the industry. They haven’t invented anything new, but they deliver the same service to customers in a faster way. Whenever you have a server issue, you can contact Ideamine through their website, and proficient engineers from the server rescue team will contact you immediately and assist in solving the issue. In medical terms, an ambulance implies an emergency and an immediate rescue, and Ideamine has adopted the same concept in managing customers’ servers. An organization cannot afford to waste time on maintenance tasks, because it will fall behind in a competition where trends keep changing. The Server Ambulance helps organizations forget the hazards of server maintenance and stay focused on their business.


Joomla is a content management system used to create websites and applications. In this open-source platform, emails are used to perform functions such as contact form submissions or password resets. When the email submission fails, the Joomla ‘SMTP connect() failed’ error gets displayed; developers usually come across it while submitting a contact form or the like. In order to send emails, Joomla uses the PHP mail function. The settings for PHP Mail can be changed from the Joomla administrator dashboard: first, log into Joomla as an admin user and go to System -> Global Configuration -> Server. The PHP Mail settings can be seen under ‘Mail Settings’. Even though Joomla ‘SMTP connect() failed’ is a commonly seen error, it troubles Joomla developers; this blog provides everything you need to know to fix it.

The Reason Behind the Joomla ‘SMTP Connect() Failed’ Error

To avoid spamming, the PHP mail function is often disabled as a server security measure, and the default Mailer will then fail to send emails. For this reason, developers configure SMTP as the Mailer: in the drop-down, choose SMTP instead of PHP Mail. The SMTP server needs to be configured correctly, or sending email turns out to be a futile attempt; attempts to send mail through the contact form or password reset form may then result in a Joomla ‘SMTP connect() failed’ error. Filling in the ‘SMTP Security’ field under ‘Mail Settings’ incorrectly can also cause the error. The error may happen due to other reasons too, including:
  • Usage of wrong password and username during the SMTP authentication.
  • The blocked SMTP port in server firewalls.
  • Configuring mail server which does not support SSL/TLS.
  • Use of 3rd-party servers and applications which are not secured.
  • Use of Joomla or PHPMailer versions which lack security fixes or contain bugs.

How to Manage the Joomla ‘SMTP Connect() Failed’ Error

This section guides you in avoiding the Joomla ‘SMTP connect() failed’ error. Select the Mailer as ‘SMTP’ from the Joomla administrator panel and enter the host, username, and password under: System -> Global Configuration -> Server -> Mail Settings

Host and Port Settings of SMTP

In the Host section, enter the name of your mail server; usually this is the domain name or a ‘mail.’ subdomain. Make sure the DNS for the SMTP host resolves correctly as well. Set the SMTP port number to 25, the default SMTP port; for mail servers that use custom ports such as 587 (often to avoid spamming), use that port instead. Some mail servers restrict access to their port 25 using firewall rules, in which case your IP should be whitelisted in the firewall to avoid the connect error. To confirm that connectivity is proper, use a command of the form: telnet <SMTP host> 25 Once you are sure the SMTP connection is working fine, enter the correct hostname and port number. If the connectivity fails, mail delivery will fail with an error message.
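The telnet check above can also be scripted. A minimal sketch using only the standard library; ‘mail.example.com’ is a placeholder for your real SMTP host:

```python
# Probe whether an SMTP host accepts TCP connections on its mail port.
import socket

def smtp_port_open(host, port=25, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example with a placeholder host: smtp_port_open("mail.example.com", 25)
```

A False result points at firewall rules, a wrong port, or a DNS problem rather than at the Joomla configuration itself.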

Authentication details of SMTP

Turn authentication ‘ON’, so that the email server validates users before allowing them to connect and send mails, and then enter the email username and password. In the case of non-default accounts, enter the full email address as the username. If the email password is changed or updated for security reasons, change the password in the mail settings too. If you have entered the wrong authentication details, Joomla fails to send mails and shows the error message.

Security settings of SMTP

For secure email transmission, it is recommended to use SMTP with the SSL/TLS protocol: from the ‘SMTP Security’ drop-down, choose the ‘SSL/TLS’ option. Some email servers do not support it, though, and mails may not get delivered. To verify the SSL certificate of the mail server, use a command of the form: openssl s_client -starttls smtp -crlf -connect <SMTP host>:<port> The usage of self-signed or expired certificates can also cause failures in mail delivery and display of the ‘SMTP connect() failed’ error. In that case, configure SSL for your mail server properly, or set ‘SSL/TLS’ in the SMTP Security settings to ‘none’.
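The expiry side of the certificate check can be scripted against the validity dates that ‘openssl x509 -noout -dates’ prints (lines such as notAfter=Jun  1 12:00:00 2025 GMT). A small sketch that parses that date format:

```python
# Decide whether an openssl-style 'notAfter' date lies in the past.
from datetime import datetime, timezone

def cert_expired(not_after, now=None):
    """True if the certificate's notAfter timestamp has already passed."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return expires < now
```

Feeding it the date string from the openssl output quickly confirms or rules out an expired certificate as the cause of the handshake failure.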

Support for 3rd-Party Apps

Certain things need to be taken care of if you are using Gmail’s server instead of your own mail server. This includes enabling the ‘Authentication – Gmail’ plug-in; follow these steps to change the authentication: Extensions -> Plug-in Manager -> Authentication – Gmail. The Gmail server rejects connection attempts from some mail client apps, such as mobile applications, in which case the user has to use secure apps or loosen the security measures: My Account -> Less Secure Apps -> turn on ‘Access for less secure apps’. These steps loosen the security settings and help the user avoid ‘SMTP connect() failed’ errors while sending emails through Gmail.


Variants of the ‘SMTP connect() failed’ error include ‘SMTP Error: Could not connect to SMTP host’ and ‘Called Mail() without being connected’. Along with the configuration settings, the safety precautions to be taken during Joomla or PHPMailer upgrades are also important: take backups and do test installs before upgrading the production server, because every new version comes with new features and possibly new bugs. Each improvement you adopt with a new version also increases your security, and adhering to the latest technologies helps keep your server away from external threats. Get the assistance of skilled server administrators and ensure the protection of your server.


When AWS, one of the leading cloud platforms, was integrated with the technology that revolutionised human lives with a set of sensors and internet connectivity, the world saw a technology explosion. The integration of IoT opened new possibilities for AWS systems in terms of applications. The Internet of Things enables bi-directional communication between outside devices and the business engines inside the cloud. Users can rely on this secure communication, as it involves per-device authentication using credentials and access control. The bilateral communication enables the user to collect telemetry information from various devices and to store and analyse the data. Using AWS IoT, users can create applications that clients can manage from their phones or tablets, which facilitates active user engagement. The basic concept of AWS IoT is that devices report their state by publishing messages on topics to a message broker, and the broker delivers the messages to all subscribed clients. This blog provides everything you should know about AWS IoT.

What is AWS IoT?

AWS IoT is the platform that provides secure bi-directional communication between internet-connected things, such as actuators, sensors, smart appliances, and embedded devices, and the AWS cloud platform.

AWS IoT Components

AWS IoT works with a set of components:

Device Gateway

The Gateway enables connected devices to securely and efficiently communicate with AWS IoT. The Gateway exchanges messages using a publish/subscribe model, and the one-to-many communication pattern of AWS IoT makes it possible to broadcast data to multiple subscribers of a given topic. The number of connected devices can change at any time, and the Gateway can scale to over a billion devices without provisioned infrastructure.

Rules Engine

The Rules Engine handles the processing of messages with other AWS services. An SQL-based language is used to process data and send it to services such as Amazon DynamoDB, Amazon S3, and AWS Lambda. The message broker can also be used to republish messages to other subscribers.

Security and Identity Service

This service manages security in AWS IoT. In order to send data securely to the message broker, things must keep their credentials safe. The rules engine and message broker use AWS security features when forwarding information.

Thing Registry

The thing registry, also known as the device registry, organizes the resources associated with each device. Using the registry, it is also possible to associate certificates and MQTT client IDs with devices, improving the ability to manage and troubleshoot your things.

Thing Shadow

Also known as a device shadow, this is a JSON document used to store and retrieve current state information for a thing (an app, a device, etc).

Thing Shadows Service

The Thing Shadows service provides persistent representations of your things in the AWS cloud. Users can publish updated state information to a thing shadow, and things can publish their current state to their shadow for applications to use.

How Does AWS IoT Work?

The integration of IoT helps AWS connect with devices such as sensors, actuators, embedded devices, and smart appliances, and it lets applications in the cloud interact with internet-connected things. IoT applications in AWS perform two functions: they give users remote control over a device, or they collect telemetry from devices.

Devices report their state by publishing messages, in JSON format, on MQTT topics. Every MQTT topic has a hierarchical name that identifies the thing whose state is being updated. Once a message is published on an MQTT topic, it is sent to the AWS IoT MQTT message broker, and from there the information is passed on to the subscribed clients.

The communication between a thing and AWS IoT is protected using X.509 certificates. You can either use your own certificate or one generated by AWS IoT. In both cases, the certificate must be registered and activated with AWS IoT, and then copied onto your thing; the thing presents the certificate to AWS IoT as a credential while communicating with it.

Amazon recommends that every device connected to AWS should have an entry in the thing registry, which stores information about the thing and the certificates essential for its security. You can also create rules that define one or more actions to perform based on the data in a message. For every thing there is a thing shadow that stores and retrieves its state information, with two entries for each item in the state: reported and desired. An application can request the current state information of a thing, and the shadow responds with the state information (both reported and desired), metadata, and a version number, in JSON format. An application can also control a thing by requesting a change in its state: the shadow accepts the state-change request and updates its information with a notification.
After receiving the message, the thing updates and reports its new state.
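The reported/desired flow above can be illustrated with a small sketch. The document layout follows the AWS IoT shadow format; the delta computation here is a simplified stand-in for what the Thing Shadows service derives for the device:

```python
# A shadow document holds the application's 'desired' state and the
# device's last 'reported' state; the delta is what the device still
# has to change to catch up.
import json

shadow = {
    "state": {
        "desired":  {"power": "on",  "color": "red"},
        "reported": {"power": "off", "color": "red"},
    }
}

def shadow_delta(doc):
    """Return the desired fields the reported state has not yet reached."""
    desired = doc["state"]["desired"]
    reported = doc["state"]["reported"]
    return {k: v for k, v in desired.items() if reported.get(k) != v}

print(json.dumps(shadow_delta(shadow)))
```

Here only ‘power’ differs, so the delta tells the device to switch on; once it reports the new state, the delta becomes empty.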


When IoT and AWS are clubbed together, new possibilities open up for businesses and other organizations that use cloud services. It eliminates the barriers of communication with the outside world and enables IoT devices to connect with AWS applications. The technology that revolutionised human lives in a very short span of time now makes path-breaking changes to cloud services. With its security and identity services, AWS IoT provides more security to AWS applications, and the presence of the thing registry adds further support. IoT embraces almost all facets of human life, and AWS IoT keeps that spirit high. It seems certain that technological advancement will come to be spoken of as before and after IoT.


Have you ever confronted an error message displaying “403 4.7.0 TLS handshake failed”? If you are a server administrator, you probably will. Debugging and fixing such email errors is common while providing outsourced web hosting support to a shared server owner. The ‘403 4.7.0 TLS handshake failed’ error occurs when a sender tries to transmit a mail to a recipient using the secure TLS protocol. This blog gives you insight into the error and the way to resolve it.

What is the ‘403 4.7.0 TLS handshake failed’ error?

The TLS protocol, an encryption mechanism, ensures the security of data transmitted during email communication, and the ‘403 4.7.0 TLS handshake failed’ error happens during this encrypted transmission. In TLS, the data is encrypted using a set of public and private keys. To establish the communication, a ‘handshake’ protocol needs to be followed: during the handshake, along with server authentication, the cipher suites are matched and keys are exchanged between the two servers. The error happens when this handshake fails during an email transmission, and the sender receives an error notification showing ‘403 4.7.0 TLS handshake failed’.

What really causes the ‘403 4.7.0 TLS handshake failed’ error?

So what makes the handshake fail? Secure TLS transmission can fail for the following reasons.

SSL Certificate Errors

Each server participating in a TLS transmission has an SSL certificate installed. The certificates can be either self-signed or issued by a Certificate Authority (CA). Like any other certificate, an SSL certificate has a validity period, so an expired certificate on a mail server can cause the handshake error. It is also possible for mail servers to have a self-signed certificate; such certificates are less trusted than ones issued by an authority, and some recipient servers reject them, which can likewise cause the handshake failure. The sender then gets an error notification in the mail log like this: TLS client disconnected cleanly (rejected our certificate?)

SSL Protocol or Cipher Issues

Keeping the SSL protocol at the latest version keeps mail servers secure; using old protocols and weak ciphers makes servers vulnerable to security threats. Protocols such as SSLv2 and SSLv3 are outdated and disabled on secure servers, even though some servers still keep them. Not only older protocols but weak ciphers too are subject to security issues; for example, weak ciphers such as RC4 are disabled on most servers for security reasons. Certain mail servers will not accept connections from servers that keep old protocols and weak ciphers, which leads to the handshake error.

SSL Connection Errors

The handshake error can also appear due to connectivity issues between the servers, including backend firewall settings and other network problems. The STARTTLS command, which initiates the TLS handshake and secure connection, is used to test the connectivity between servers.
The command takes the form: openssl s_client -starttls smtp -connect host:port

Issues with MX Records

The connection between the sender’s and recipient’s mail servers can be disrupted by MX record issues, and such situations are likely to generate a handshake error. To check for MX record issues, use a command of the form: dig mx <domain>

Fixing the ‘403 4.7.0 TLS handshake failed’ error in various situations

The remedy for the handshake error differs according to where it occurs.

In cPanel/WHM Exim servers

The simplest way to resolve the handshake error in cPanel or WHM is to disable TLS security, but that is not recommended due to security concerns. Instead, the issue is resolved by the following:
  • Renew the SSL certificate, if the error happened due to an expired certificate.
  • Edit the Exim configuration from the WHM to disable the old SSL protocol and to make the cipher strong.
Click on ‘Home >> Service Configuration >> Exim Configuration Manager’ and enter the cipher string in the tls_require_ciphers text box: ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM:!SSLv2:!SSLv3

In Exchange servers

Here, certain manual procedures need to be followed to resolve the issue.
  • To remove the error-causing SSL certificate, right-click the default SMTP server, choose ‘PROPERTIES’, and then select ‘ACCESS – CERTIFICATE’.
  • The next step is to disable outdated protocols such as SSLv2 and SSLv3, then enable secure and trustworthy protocols such as TLS 1.1 or 1.2.
  • Replace the weak cipher suites with strong ones.
  • Alternatively, rather than using TLS, the authentication method can be switched to basic mode in the Exchange server.
In Plesk Qmail servers

As Plesk uses the qmail server, programs such as fixcrio which run alongside qmail can cause TLS-related errors. Verifying the TLS certificate and key files, located in the /var/qmail/control/ directory, helps fix such issues. In qmail-remote, the qmail program for remote mail delivery, TLS sessions can be controlled with the file ‘tlsdestinations’, where TLS settings can be defined for email delivery to specific recipients.

In RedHat, CentOS and OpenSuse servers with Sendmail

On servers such as RedHat, CentOS, and OpenSuse running Sendmail, the recipient domain with the TLS connectivity problem can be identified from the handshake error. Once you have identified the domain, edit the configuration file “/etc/mail/access” and add a line of the form (the exact tag depends on your Sendmail setup): Try_TLS:<recipient domain> NO After editing the “/etc/mail/access” text file, use makemap to create the database map: makemap hash /etc/mail/access.db < /etc/mail/access Restart the mail server, and email transmission should work without errors.

In email clients such as Outlook Express and Thunderbird

Configuring the email client correctly is also important for secure email transmission. This can be achieved as follows. Thunderbird: Edit -> Account Settings -> Outgoing Server settings -> TLS (under secure connection). Outlook Express: Tools -> Accounts, select the mail account and open Properties; for outgoing mails, go to Advanced and select ‘Server requires secure connection (SSL)’.


Understanding the ‘403 4.7.0 TLS handshake failed’ error is crucial to keep your server from grinding to a halt, and this blog should make you confident in meeting real-time TLS connectivity issues. The common causes of the handshake error include expired or untrusted SSL certificates, outdated protocols and weak cipher sets, and server connectivity issues. The impact of the handshake error differs from server to server, so the solution differs accordingly, and the issue can only be resolved after understanding the server. TLS is necessary for maintaining security in server communication through encryption, so servers adopt different remedies, none of which abandons TLS encryption.


Apache and Nginx are the two open-source web servers that together account for over 50% of the total traffic on the internet. These solutions handle diverse workloads and are compatible with other software associated with the server in order to produce a complete web stack. Both Apache and Nginx excel in their own ways, but the latter is considered one of the fastest web servers in the world. Big technology companies such as WordPress, Comodo, Netflix, GitHub, and Cloudflare have already switched to Nginx due to its high performance and low resource requirements. Nginx’s feature set and typical server roles include:

  • Reverse proxy server for the HTTP, HTTPS, IMAP, POP3, and SMTP protocols.
  • Front-end proxy for Apache and other web servers, combining the flexibility of Apache with Nginx’s strong static-content performance.
  • Load balancer and HTTP cache.
  • As Nginx passes on only valid HTTP requests, it helps protect your server from common attacks such as DDoS (Distributed Denial of Service).
  • Compatible with GZIP compression.
  • High throughput.
This blog explains how to set up Nginx, known for its stability, simple configuration, rich feature set and efficient resource utilization, as a reverse proxy for Apache on Ubuntu 16.04.

Prerequisites

In order to configure Nginx as a reverse proxy in front of Apache on Ubuntu, the following prerequisites need to be in place:
  • A fresh Ubuntu 16.04 installation
  • A standard user account with sudo privileges
  • The Nginx HTTP server installed on the server
  • Familiarity with working on a Linux platform

Apache and PHP-FPM Installation

Install the PHP FastCGI Apache module, named libapache2-mod-fastcgi. First update the package index to make sure you have the latest versions:

sudo apt-get update

Then install the necessary packages:

sudo apt-get install apache2 libapache2-mod-fastcgi php-fpm

Apache and PHP-FPM Configuration

To configure Apache and PHP-FPM, we will change the Apache port number to 8080 and configure Apache to work with PHP-FPM using the mod_fastcgi module. First, edit the Apache configuration file and change the port number:

sudo nano /etc/apache2/ports.conf

Find the following line:

Listen 80

Change it to:

Listen 8080

Save and exit ports.conf. Next, edit the default virtual host file of Apache. The <VirtualHost> directive in this file is set to serve sites only on port 80, so we have to change that. Open the default virtual host file:

sudo nano /etc/apache2/sites-available/000-default.conf

The first line should be:

<VirtualHost *:80>

Change it to:

<VirtualHost *:8080>

Save the file and reload Apache:

sudo systemctl reload apache2

Verify that Apache is now listening on 8080:

sudo netstat -tlpn
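For unattended setups, the same Listen change can be scripted with sed instead of nano. A sketch against a throwaway copy of the file (the real file is /etc/apache2/ports.conf, which you should back up before editing in place):

```shell
# Stand-in for /etc/apache2/ports.conf (illustration only)
conf=/tmp/ports.conf
echo "Listen 80" > "$conf"

# Switch Apache from port 80 to 8080 non-interactively
sed -i 's/^Listen 80$/Listen 8080/' "$conf"
grep '^Listen' "$conf"   # prints: Listen 8080
```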

Configuring Apache to Use mod_fastcgi

Apache serves PHP pages using mod_php, but working with PHP-FPM requires additional configuration. First disable mod_php:

sudo a2dismod php7.0

The next step is to add a configuration block for mod_fastcgi, which depends on mod_action. mod_action is disabled by default, so we first need to enable it:

sudo a2enmod actions

The following configuration directives pass requests for .php files to the PHP-FPM UNIX socket. Open the configuration file:

sudo nano /etc/apache2/mods-enabled/fastcgi.conf

Add these lines within the <IfModule mod_fastcgi.c> . . . </IfModule> block, below the existing items in that block:

AddType application/x-httpd-fastphp .php
Action application/x-httpd-fastphp /php-fcgi
Alias /php-fcgi /usr/lib/cgi-bin/php-fcgi
FastCgiExternalServer /usr/lib/cgi-bin/php-fcgi -socket /run/php/php7.0-fpm.sock -pass-header Authorization
<Directory /usr/lib/cgi-bin>
Require all granted
</Directory>

Save the changes you made to fastcgi.conf and do a configuration test:

sudo apachectl -t

PHP Functionality Verification

Check whether PHP works by creating a phpinfo() file and accessing it from your web browser:

echo "<?php phpinfo(); ?>" | sudo tee /var/www/html/info.php

Creating Virtual Hosts for Apache

First, create the root directories:

sudo mkdir -v /var/www/{,}

Then create an index file for each site:

echo "<h1 style='color: green;'>Foo Bar</h1>" | sudo tee /var/www/
echo "<h1 style='color: red;'>Test IO</h1>" | sudo tee /var/www/

Then create a phpinfo() file for each site so we can test that PHP is configured properly:

echo "<?php phpinfo(); ?>" | sudo tee /var/www/
echo "<?php phpinfo(); ?>" | sudo tee /var/www/

Now create the virtual host file for the domain:

sudo nano /etc/apache2/sites-available/

Place the following directives in this new file:

<VirtualHost *:8080>
    ServerName
    ServerAlias
    DocumentRoot /var/www/
    <Directory /var/www/>
        AllowOverride All
    </Directory>
</VirtualHost>

Now that both Apache virtual hosts are set up, enable the sites using the a2ensite command. This creates a symbolic link to the virtual host file in the sites-enabled directory:

sudo a2ensite
sudo a2ensite

Check Apache for configuration errors again:

sudo apachectl -t

Reload Apache if Syntax OK is displayed:

sudo systemctl reload apache2
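Under the hood, a2ensite simply creates the symbolic link described above. A minimal illustration of the same mechanism, using a hypothetical /tmp layout and site name in place of /etc/apache2:

```shell
# Hypothetical stand-in for the /etc/apache2 layout (illustration only)
mkdir -p /tmp/apache2/sites-available /tmp/apache2/sites-enabled
touch /tmp/apache2/sites-available/foobar.conf

# This link is effectively what `a2ensite foobar` creates;
# `a2dissite foobar` would remove it again
ln -sf /tmp/apache2/sites-available/foobar.conf /tmp/apache2/sites-enabled/foobar.conf
ls -l /tmp/apache2/sites-enabled/
```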

Nginx - Installation and Configuration

Here, we will install Nginx and configure the domains as Nginx's virtual hosts. Install Nginx:

sudo apt-get install nginx

Remove the default virtual host's symlink:

sudo rm /etc/nginx/sites-enabled/default

The same procedure we used for Apache will be used to create virtual hosts for Nginx. Create the root directories for both websites:

sudo mkdir -v /usr/share/nginx/{,}

We'll again create index and phpinfo() files for testing after setup is complete:

echo "" | sudo tee /usr/share/nginx/
echo "" | sudo tee /usr/share/nginx/
echo "" | sudo tee /usr/share/nginx/
echo "" | sudo tee /usr/share/nginx/

The next step is to create a virtual host file for the domain:

sudo nano /etc/nginx/sites-available/

Paste the following into the file:

server {
    listen 80 default_server;
    root /usr/share/nginx/;
    index index.php index.html index.htm;
    server_name;
    location / {
        try_files $uri $uri/ /index.php;
    }
    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        include snippets/fastcgi-php.conf;
    }
}

Save and close the file. Now create a virtual host file for Nginx's second domain:

sudo nano /etc/nginx/sites-available/

The server block should look like this:

server {
    root /usr/share/nginx/;
    index index.php index.html index.htm;
    server_name;
    location / {
        try_files $uri $uri/ /index.php;
    }
    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        include snippets/fastcgi-php.conf;
    }
}

Save and close the file. Then enable both sites by creating symbolic links in the sites-enabled directory:

sudo ln -s /etc/nginx/sites-available/ /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/ /etc/nginx/sites-enabled/

Do an Nginx configuration test:

sudo nginx -t

Then reload Nginx if OK is displayed:

sudo systemctl reload nginx
Configuring Nginx for Apache's Virtual Hosts

Let's create an additional Nginx virtual host with multiple domain names in its server_name directive. Requests for these domain names will be proxied to Apache. Create a new Nginx virtual host file:

sudo nano /etc/nginx/sites-available/apache

Add the code block below. It includes the names of both Apache virtual host domains and proxies their requests to Apache:

server {
    listen 80;
    server_name;
    location / {
        proxy_pass http://your_server_ip:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Save the file, then enable this new virtual host by creating a symbolic link:

sudo ln -s /etc/nginx/sites-available/apache /etc/nginx/sites-enabled/apache

Do a configuration test:

sudo nginx -t

Reload Nginx if OK is displayed:

sudo systemctl reload nginx
Nginx is considered to be the most popular HTTP server and reverse proxy. It is used with common protocols like SMTP, HTTPS, POP3, HTTP, and IMAP, and serves as a load balancer and an HTTP cache. In this reverse proxy setup, Nginx acts as the frontend and Apache as the backend: Nginx handles the requests coming from the browser and passes them to Apache. Nginx is especially well suited to serving static content, and it offers rich features such as high performance and low memory/RAM usage. Even though Nginx is a reverse proxy for Apache, Nginx's proxy service is transparent, and connections to Apache's domains appear to be served directly from Apache itself.


Virtualization is not a new term; it evolved in the 1960s, when IBM started to develop its own virtual memory operating system, an early incarnation of the IBM System. IBM's movement pulled a large crowd into the development of virtual machines, including companies such as Microsoft and VMware. After 50 years of changes in technology, we now live in an "Everything as a Service" universe. Organizations are moving their data from physical storage to a more secure virtual environment. Virtualization has created a revolution, and users are now more aligned to concepts such as cloud, containerization, etc., along with some hybrid technologies associated with data storage. All these bleeding-edge technologies combine to contribute to data management. This blog discusses the future of server virtualization and its impact on related technologies such as the hypervisor.

What is Server Virtualization?

Masking of server resources, including physical servers, operating systems, and processors, from users is known as server virtualization. Partitioning the physical server into small virtual servers helps to maximize the server resources. These virtual servers are known as virtual or private servers, and also as guests, instances, containers or emulations.

The three popular approaches to server virtualization are the virtual machine model, the paravirtual machine model, and virtualization at the operating system layer. Apart from maximizing server resources, server virtualization offers other benefits such as:

  • The migration from a physical server into a virtual environment helps to reduce the monthly power and cooling costs inside the data center.
  • It helps you to create a self-contained lab or test environment on its own isolated network.
  • Faster server provisioning.
  • Flexibility in choosing server equipment, as the virtual environment replaces the physical environment.
  • Improved disaster recovery.

What is Hypervisor?

The program which allows the user to host several virtual machines on a single system at a time is called a hypervisor. It is also referred to as a virtual machine manager (VMM). Each virtual machine functions as if it has its own processor, memory, and resources, and is able to run its own programs. This is achieved solely through the hypervisor, as it shares the hardware among the operating systems.

The main functions of the hypervisor are to cater to the needs of the guest operating systems and to keep multiple instances from interfering with each other. There are two types of hypervisors: Type 1 or native hypervisors, and Type 2 or hosted hypervisors.

The Impact of Server Virtualization Over Hypervisor

Server virtualization has become popular due to its effectiveness in addressing the challenges of server environment operations. It allows multiple virtual machines to operate on a single server and thus helps to reduce capital expenditures and operational expenses. The involvement of server virtualization changes the dynamics of the data centre. The dawn of virtualization will affect the hypervisor architecture in an adverse way. This includes:

  • Virtualization is affecting the hypervisor architecture in an adverse way. Of course, we cannot predict a replacement for the hypervisor, but it is creating a decline in its user rate.
  • The container concept is embracing data centres across the world and has slowed other efforts such as OpenStack. Containers are expected to be the focal point for 2016, along with the inclusion of Microsoft Nano Server into the container table.
  • Cloud is one of the systems gaining prominence in 2016. We have reached a point where the cloud is almost the norm, irrespective of the type (private, public or hybrid).
  • Even though Hyper-V is expected to account for a major portion of the market share, the hypervisor wars will continue to die down due to feature parity.
  • Technologies such as solid-state drive (SSD) storage and hyper-converged infrastructure are also expected to evolve in 2016, even though they are not directly related to server virtualization.


Virtualization has started to weaken hypervisor architectures. While this is confined to these two server architectures, the total server environment seems set to see other drastic changes. These include the container concept, cloud systems, and other technologies such as solid-state drive (SSD) storage and hyper-converged infrastructure. A container explosion is expected in 2016, even though containers still have some practical implementation issues, and they have almost conquered the operations of data centres. There has been a great deal of migration onto cloud platforms, though some cyber security concerns are still associated with it. In brief, users are showing a tendency to shift to virtual platforms rather than physical servers, despite the increased security concerns.


The emergence of the internet has created a tremendous impact on business practices in both local and global markets. Not only business: it has embraced companies across all industrial verticals. It has enhanced the accuracy and efficiency of work performed by people irrespective of their profession, industry and need. The possibilities of the internet are infinite, and wherever it is applied it has yielded a good deal of results.

Internet protocol has created a platform of interoperability from which the maximum benefits of the internet are drawn, creating a large network effect. In fact, for a business, the rate at which its suppliers, distributors, and customers use the internet determines the possible benefits, rather than the company's own internet usage.

IPv4, the current internet protocol version, has crossed 30 years of age. The expanding user base and the increased number of IP-enabled devices created the need for an upgraded version. IPv6, the latest version, needs to address all those concerns along with the growing needs of new IP-based services and devices such as cell phones, online gaming and so on. IPv6 increases the size and range of devices that can connect to the internet, and thus a network effect is achieved. The reasons for migrating to IPv6 are discussed below.

1. Larger IP address space.

Every device or computer which is to be part of an internet network requires an IP address. IPv4 allows only about 4 billion unique IP addresses, and current needs already exceed this. The reason behind this insufficiency is the explosion in the popularity of the internet. IPv6 resolves the issue with 128-bit addresses, which yield approximately 3.4 x 10^38 unique addresses. The difference can be understood by a simple demonstration: if the entire IPv4 address space were contained in an iPod, the IPv6 space would be equivalent to the size of the earth. IPv6 thus opens a huge scope to users.

2. Enables efficient routing

Routing becomes more efficient and hierarchical with IPv6, as it reduces the size of routing tables. IPv6 allows an ISP to aggregate the prefixes of its customers' networks into a single prefix and announce that one prefix to the IPv6 internet. In addition, in IPv6 networks fragmentation is handled by the source device rather than the router.

3. End to end connectivity

The appearance of peer-to-peer applications such as video conferencing, multiplayer online games and VoIP created a demand for better end-to-end connectivity. In such configurations, communication is possible among networked computers without a central server. IPv4 meets such requirements only through NAT, which breaks true end-to-end connectivity. This challenge can be overcome by using IPv6 with its large address space. Peer-to-peer applications work effectively and efficiently with IPv6.

4. Facilitates directed data flows

Instead of broadcast, IPv6 supports multicast, in which bandwidth-intensive packet flows can be sent to multiple destinations simultaneously. This avoids the need for every host to process broadcast packets. The new field in the IPv6 header named Flow Label can identify whether packets belong to the same flow or not.

5. Easier administration

IPv4 faces certain challenges during network renumbering. Renumbering is necessary when a network needs to be expanded or merged, or when service providers are changed. With IPv4, tasks such as network renumbering and assigning new address schemes need to be done manually, whereas with IPv6 they can be done automatically. Smoother switchovers and mergers are possible with IPv6, without manual configuration of each host and router.

6. Better Security

As IPv4 follows an end-to-end model, security is provided only at the end nodes. This is not sufficient against internet attacks such as malicious code distribution, man-in-the-middle attacks, denial-of-service attacks and reconnaissance attacks. Moving from IPv4 to IPv6 greatly improves the security picture. IPsec is the protocol suite which helps IPv6 dominate IPv4 in terms of security. Authentication Header (AH), Encapsulating Security Payload (ESP) and Internet Key Exchange (IKE) are the protocols within IPsec that facilitate secure data communication and key exchange. Along with ensuring end-to-end security mechanisms, IPv6 eliminates the need for applications themselves to have integrated support to meet security requirements.


Amazon Redshift provides complete data warehouse management, including the setup, operation and scaling of the data warehouse. The most important feature of Amazon Redshift is that it is highly scalable: it covers data from 100 GB up to a petabyte or more. This scalability makes Amazon Web Services a strong choice for businesses of all types.

Launching a set of nodes is the first step in creating a data warehouse. After the cluster is launched, you can upload the data and perform data analysis queries. Irrespective of the data size, you get fast query performance with the SQL-based tools and business intelligence applications in common use today. Listed below are the user benefits of Amazon Redshift.

Strong support for data warehousing

It facilitates storage starting at 100 GB, with the capability to extend up to a petabyte or more. To reduce the amount of I/O needed to perform queries, Amazon Redshift uses methods such as data compression, columnar storage, and zone maps. In addition, its massively parallel processing data warehouse architecture parallelizes and distributes SQL operations to take maximum advantage of the available resources.

No Worries About Up-Front Costs

Another advantage is that it lets you turn off resources which are not in use. Amazon Redshift can be used with On-Demand pricing, which comes with no upfront costs or long-term commitments. Complete pricing details are available on the Amazon Redshift Pricing page.

The Scaling capability

Scalability is the key advantage of Amazon Web Services. As performance or capacity needs change, the required number of nodes changes, and Amazon Redshift facilitates this: through a simple API call or a few clicks on the console, the number or types of nodes in the cloud warehouse can be changed. Amazon Redshift is easily resizable. During resizing, the existing cluster is placed in read-only mode and its data is copied in parallel to the new cluster. Queries can still be run against the old cluster while the new one is being provisioned. Amazon Redshift removes the old cluster once the new one is provisioned.

Simple to operate

Amazon Redshift is very simple to use. Through simple API calls and a few clicks in the AWS management console, it is easy to create a cluster with a specified size, node type, and security profile. Tasks such as configuring connections between nodes and securing the cluster are handled by Amazon Redshift. With this, the data warehouse can be up and running within a short period of time.

Fully managed service

Amazon Web Services handles every function related to the data warehouse, including managing, monitoring, and scaling. The functions range from monitoring cluster health to applying patches and upgrades. Amazon Redshift takes care of all such activities and lets you focus on your business.

Regular automated backups

Amazon Redshift provides regular automated backups through its automated snapshot feature, which continuously backs up cluster data to Amazon S3. The backup is continuous, incremental and automatic, and is retained for a user-defined period of one to thirty-five days.


To secure data in transit, Amazon Redshift uses SSL, and for data at rest, hardware-accelerated AES-256 encryption is used. When encryption is enabled for data at rest, all the data written to disk is encrypted. Encryption provides you with high security.

Some of the world's biggest and most innovative organizations have been successful in adopting Amazon Redshift. It has transformed the way their businesses handle data, and it could transform your business too. It's time to pull up your socks and begin migrating your data to Amazon Redshift.


A company which provides services such as Software as a Service (SaaS), Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) is known as a Cloud Service Provider, or CSP for short. The affordability of cloud services irrespective of organization size (small, medium and large companies alike) and the availability of services on a subscription basis have attracted millions of users to the cloud, helping them minimize their operational costs. Apart from this, there are managed cloud service providers who ensure good server maintenance. These factors have pulled a large crowd onto cloud platforms. The two major players in the cloud are Amazon Web Services and Microsoft Azure, and there is a constructive and healthy competition between them. Ultimately, users benefit a lot from the creative experiments of both players. Here, a comparative study between AWS and Microsoft Azure is presented.

1. Licensing Mobility

Most enterprises will already be using some software before their cloud adoption, and these software licenses will have been highly paid for. The good news is that you don't have to pay double to accommodate these licenses: both AWS and Azure help you to integrate this software without doubling the cost. License mobility is a great advantage of both platforms, and you don't have to buy additional software licenses. So server maintenance with full consideration of the existing software is ensured by both service providers.

2. The Hybrid Cloud Approach

The concept behind the hybrid cloud approach is ‘one step on the cloud and the other on the ground’. Comparing Azure with AWS, Azure is a little ahead in the hybrid cloud approach. Organizations have their own reasons to keep some part of their infrastructure on premises, and Azure paves the way for this: with Azure, apps can live on local servers as well as on the cloud. Many enterprises prefer such cloud server maintenance for their organization. Amazon has entered the hybrid cloud space, but is not as fully fledged as Azure.

3. Cloud To Government

When we consider cloud offerings for the government sector, Amazon is the veteran player. Amazon's tie-up with the US Government is an example: the Amazon project named AWS GovCloud helped the US Government move sensitive workloads into the cloud. This cloud platform meets their regulatory and compliance requirements, and it is a separate, specialized service offered to the US Government. Theoretically, the Amazon and Microsoft cloud systems for government are the same, but Microsoft is a comparatively new player in this space.

4. The Storage Features

AWS provides temporary (ephemeral) storage, which is allocated when an instance is started and destroyed when the instance is terminated. It also provides block storage, similar to a hard disk, which can be attached to an instance or kept separate. NoSQL databases and Big Data are supported. Microsoft Azure also uses temporary storage, known as the D drive, and its block storage option is known as Page Blobs. Microsoft Azure supports NoSQL and Big Data as well, through Windows Azure Table and HDInsight.

5. Pricing Strategy

Even though the server maintenance experiences with both players are delightful, the pricing strategies of the two are different. AWS pricing is based on the number of hours used, with a minimum of one hour, and there are three purchase models. The first is On-Demand, where customers pay only for what they have used, with no upfront cost. The second is Reserved, where the customer reserves an instance for 1 to 3 years and pays an upfront cost. Spot is the third pricing model, where customers bid for the extra capacity available. Azure's pricing is instead based on the number of minutes used.


AWS provides the user with an exceptional, delightful cloud experience. The security, scalability and reliability which Amazon Web Services provides make it a favourite of users. Amazon Aurora is a fully managed, MySQL-compatible relational database engine in which Amazon combines the features of both commercial and open source databases: the simplicity and cost effectiveness of open source, and the reliability of commercial databases. Amazon Aurora provides a lot of benefits to users. The main advantage is that it delivers up to five times the performance of MySQL, and the user does not have to make changes to existing applications for cloud adoption. Through this you can give complete focus to your business without worrying about the IT side. Amazon Web Services provides the facility to set up and operate your new and existing MySQL deployments through Amazon Aurora, making deployment simple and cost effective. Amazon RDS provides the administration for Amazon Aurora, handling routine tasks such as failure detection, provisioning, patching, backup, recovery, and repair. Because Amazon Aurora is a drop-in replacement for MySQL, the same components (code, tools and applications) which you used for MySQL can be used with Amazon Aurora. The DB cluster is the main component of Amazon Aurora: an Aurora DB cluster consists of multiple instances and a cluster volume, which manages the data for those instances. A DB cluster is made up of two types of instances: the primary instance and Aurora replicas.


Amazon Aurora is available in the following regions:
  • US East (N. Virginia)
  • US West (Oregon)
  • EU (Ireland)
  • Asia Pacific (Tokyo)
  • Asia Pacific (Mumbai)
  • Asia Pacific (Sydney)
  • Asia Pacific (Seoul)

The Reliability of Amazon Aurora

The main features of Aurora are reliability and fault tolerance. By placing Aurora replicas in different availability zones, you can improve availability. What makes Aurora reliable is its set of automatic features.

Storage Auto-Repair

There is no problem of losing data in Aurora due to disk failure, because it maintains copies of your data across three availability zones. Aurora can detect issues in a disk: when a segment of a disk volume fails, the data in the other copies is used to repair the segment and to ensure that the repaired data is current.

The Cache Warming

After every shutdown or failure, Aurora "warms" the buffer pool cache: it preloads the buffer pool with the pages for known common queries stored in the memory page cache. Bypassing the usual cold-cache period provides a performance gain.

Easy Crash Recovery

Another advantage of Aurora is that it helps you recover from crashes easily, ensuring continued performance. You can overcome crashes and remain open and available almost immediately after a crash.

Amazon Aurora Security Features

The security of Amazon Aurora is managed at three levels.

Database Isolation in virtual network

Amazon Aurora runs in an Amazon VPC, which helps you isolate your database in your own virtual network. Using industry-standard encrypted IPsec VPNs, it can be connected to your on-premises IT infrastructure.

Resource-level Permissions

Integration with AWS Identity and Access Management (IAM) gives Amazon Aurora the ability to control actions on specific Amazon Aurora resources, so access can be granted to particular AWS IAM users and groups.

The Facility for Encryption

Amazon Aurora facilitates encryption for databases using the AWS key management system, and data encryption improves security. To secure data in transit, Amazon Aurora uses SSL (AES-256).


Amazon Aurora is an advanced relational database engine from Amazon. The factors which distinguish it from others are its high performance, reliability and security. End users can achieve high cost effectiveness in their business through the adoption of Aurora, as it is highly scalable in terms of resources: its storage scales from 10 GB up to 64 TB. Another advantage is that it combines the features of open source and commercial databases. Amazon Aurora replicates six copies of your data across three availability zones, ensuring maximum durability, and a data encryption facility is there as well. So in every aspect, Amazon Aurora meets the requirements of businesses of all types.


We are pleased to inform you that AWS is now much closer to users in India. Amazon Web Services will be available with much more efficiency through the support of the new data centre in Asia Pacific (Mumbai). The new data centre lets users eliminate upfront expenses and long-term commitments, and the scaling challenges of maintaining and operating your own infrastructure become mere history. The existing AWS regions in Asia are Beijing, Seoul, Singapore, Sydney, and Tokyo; Mumbai now joins as the sixth AWS Region in Asia and the 13th around the world.

How is it going to help the end user?

The new data centre in India (Mumbai) is going to create a revolution in India. The users are going to get exceptional cloud experience.

No More Data Latency

The new data centre in Mumbai will solve the problem of cloud data latency. The delay between a client request and the cloud response is known as cloud service latency. End users were experiencing delayed responses because the data centres were located outside the country; the new data centre in Mumbai resolves the issue completely.

No More Upfront Expenses

The new region also brings AWS's pay-as-you-go model closer to Indian users: there are no upfront expenses or long-term commitments, and the cost and scaling challenges of maintaining and operating your own infrastructure disappear.

More Security To Data

Security is the backbone of AWS, and Amazon puts strong safeguards in place to protect customer privacy. Now AWS security support reaches customers even more strongly: the new data centre in India (Mumbai) increases the security of user data, and a nearby data centre gives users complete recovery even after a disaster.

Amazon continues to expand its services globally, based purely on customer input; it is always open and responsive to customer feedback. The new move will create ripples across the whole cloud market and in how cloud services reach the customer. The new AWS data centre in Mumbai will be a milestone for the entire cloud industry.


We are now living in an era where terms such as "web hosting", "server maintenance", and "cloud service" gain importance day by day. The majority of business organizations now opt for server management services due to their affordability and scalability: server management suits a business of any size and operation, and it is highly customizable. There are now certified server maintenance providers in the market offering 24/7 server management services. Yet even with this support, the security of the data stored virtually remains critical. From an administrator's point of view, there are tools they can use to ensure web security for their clients, and mod_security is one Apache module used for exactly that.

Before getting into mod_security, let's see what "Apache" means. Apache is one of the most popular web servers in the world; it is freely available and open source. Due to its ease of administration and flexibility, it is widely accepted among server maintenance people. The strength of Apache is its flexible module system, which lets it support SSL encryption, natively rewrite URLs, and more. This lets a server maintenance team or administrator tailor the server to their needs.

Let's come back to our topic: mod_security. It is an Apache module that helps protect you from various web attacks, including hacking attempts, and it is enabled by default on InMotion servers. Mod_security blocks attacks using regular expressions and rule sets. Among the most common attacks servers face are code injection attacks, and by blocking those, mod_security strengthens the security of the server. It is especially helpful for websites that lack security hardening in their own code.

How Mod_Security helps to secure your website?

The major benefits of mod_security include:

Access to HTTP traffic stream:

The major benefit of using mod_security is that it gives the administrator access to the HTTP traffic stream, and not only access: it also provides the facility to inspect it. This paves the way for real-time security monitoring.

Virtual Patching:

The concept of keeping vulnerability mitigation in a separate layer is called virtual patching. Here you don't need to touch the application itself to fix the problems, which makes it suitable for any site communicating over HTTP.
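A virtual patch is simply a mod_security rule. A hypothetical example (the rule id and the "id" parameter name are illustrative, not from any real rule set): reject requests where a parameter that should be numeric is not, mitigating an SQL injection in the backing application without changing its code.

```apacheconf
# Hypothetical virtual patch: deny any request whose "id" argument is not
# purely numeric. Rule id 100001 is an arbitrary local id.
SecRule ARGS:id "!@rx ^[0-9]+$" \
    "id:100001,phase:2,deny,status:403,log,msg:'Virtual patch: non-numeric id'"
```

The application stays untouched; the rule sits in front of it at the web-server layer.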

Full HTTP traffic logging

Another benefit mod_security brings is full HTTP traffic logging. Mod_security gives you the facility to log anything you need, and to decide which parts of each transaction are logged or sanitized.

Passive monitoring and continuous assessment

Instead of relying on external parties to perform a simulated attack, continuous passive security assessment focuses on the system itself. It acts as an early-warning system for tracing abnormalities, so mod_security helps the server maintenance team perform regular security check-ups.

It facilitates web application hardening

Another advantage you get through mod_security is attack surface reduction: selectively narrowing down the HTTP features you are willing to accept (request methods, headers, content types, and so on). Such facilities make mod_security a favorite of server maintenance people.

How to install Mod_Security?

Apache modules include mod_security, mod_access, proxy_module, and many others; this article covers the installation of mod_security. Installing the mod_security module helps prevent various attacks such as SQL injection, Trojans, cross-site scripting, etc. Let us get into the installation procedure. The server should be up to date and Apache installed:

* yum -y install httpd

Install the following dependencies for the proper working of mod_security:

* yum -y install httpd-devel libxml2-devel git curl-devel gcc libxml2 pcre-devel make

You can install mod_security from your distribution's packages or by compiling from source.

1) Install mod_security via compilation.

Download and extract the package:

* cd /usr/local/src
* wget
* tar xzfv modsecurity-2.9.1.tar.gz
* cd modsecurity-2.9.1

Configure and compile the source code:

* ./configure
* make
* make install

2) Install mod_security via command.

* Installation - Ubuntu/Debian

apt-get install libapache2-mod-security
a2enmod mod-security
/etc/init.d/apache2 force-reload

* Installation - Fedora/CentOS

yum install mod_security
/etc/init.d/httpd restart

Copy the default mod_security configuration and Unicode mapping file to the Apache directory:

cp modsecurity.conf-recommended /etc/httpd/conf.d/modsecurity.conf
cp unicode.mapping /etc/httpd/conf.d/

Next, configure Apache to use mod_security. Open the Apache configuration file and add the following lines:

vi /etc/httpd/conf/httpd.conf
LoadModule security2_module modules/

You can now start Apache and configure it to start at boot:

service httpd start
chkconfig httpd on

Mod_security supplies an array of request filtering and other security features to the Apache HTTP Server, IIS, and NGINX. It is a web application layer firewall, released as free software under the Apache License 2.0. Use this module to ensure the safety of your website; it can protect the site from a wide range of web attacks.
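One caveat after installation: the copied modsecurity.conf-recommended file leaves the engine in detection-only mode, so nothing is actually blocked yet. A minimal sketch of the directives worth reviewing in /etc/httpd/conf.d/modsecurity.conf (the directive names are mod_security's own; the audit log path is illustrative):

```apacheconf
# Switch from DetectionOnly to On to actively block matching requests
SecRuleEngine On
# Inspect request bodies (needed to catch POST-based injection attacks)
SecRequestBodyAccess On
# Log only transactions that trip a rule, to the audit log below
SecAuditEngine RelevantOnly
SecAuditLog /var/log/httpd/modsec_audit.log
```

Restart Apache after changing these so the new settings take effect.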


Now Windows system, application, and custom logs can be sent directly to AWS CloudWatch, and real-time Windows machine logs can be monitored easily. Before following the process below, make sure the environment is set up well for a smooth task flow.

Step 1: Ensure that the AWS CLI tools are installed on the Windows Server. Use the link:
Step 2: Create a .JSON file to execute the task.
Step 3: Use the url to check the json file.
Step 4: Create a separate user for the task in AWS IAM users.
Step 5: Note down the Access and Secret key of the user.
Step 6: After the AWS CLI tools are installed, navigate to the path C:\Program Files\Amazon\EC2ConfigService\Settings
Step 7: Open/edit the file AWS.EC2.Windows.CloudWatch.json, use the script below, and modify it accordingly (fill in the Access and Secret keys from Step 5, and set the Region and the IIS LogDirectoryPath according to your system):

{
  "EngineConfiguration": {
    "PollInterval": "00:00:15",
    "Components": [
      {
        "Id": "ApplicationEventLog",
        "FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
        "Parameters": { "LogName": "Application", "Levels": "7" }
      },
      {
        "Id": "SystemEventLog",
        "FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
        "Parameters": { "LogName": "System", "Levels": "7" }
      },
      {
        "Id": "SecurityEventLog",
        "FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
        "Parameters": { "LogName": "Security", "Levels": "7" }
      },
      {
        "Id": "ETW",
        "FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
        "Parameters": { "LogName": "Microsoft-Windows-WinINet/Analytic", "Levels": "7" }
      },
      {
        "Id": "IISLog",
        "FullName": "AWS.EC2.Windows.CloudWatch.IisLog.IisLogInputComponent,AWS.EC2.Windows.CloudWatch",
        "Parameters": {
          "LogDirectoryPath": "C:\\inetpub\\logs\\LogFiles\\W3SVC1",
          "TimestampFormat": "yyyy-MM-ddHH:mm:ss",
          "Encoding": "UTF-8"
        }
      },
      {
        "Id": "CustomLogs",
        "FullName": "AWS.EC2.Windows.CloudWatch.CustomLog.CustomLogInputComponent,AWS.EC2.Windows.CloudWatch",
        "Parameters": {
          "LogDirectoryPath": "C:\\CustomLogs\\",
          "TimestampFormat": "MM/dd/yyyyHH:mm:ss",
          "Encoding": "UTF-8",
          "Filter": "",
          "CultureName": "en-US",
          "TimeZoneKind": "Local"
        }
      },
      {
        "Id": "PerformanceCounter",
        "FullName": "AWS.EC2.Windows.CloudWatch.PerformanceCounterComponent.PerformanceCounterInputComponent,AWS.EC2.Windows.CloudWatch",
        "Parameters": {
          "CategoryName": "Memory",
          "CounterName": "Available MBytes",
          "InstanceName": "Name",
          "MetricName": "Memory",
          "Unit": "Megabytes",
          "DimensionName": "",
          "DimensionValue": ""
        }
      },
      {
        "Id": "CloudWatchLogs",
        "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatchLogsOutput,AWS.EC2.Windows.CloudWatch",
        "Parameters": {
          "AccessKey": "",
          "SecretKey": "",
          "Region": "us-east-1",
          "LogGroup": "Cloudwatch-logs",
          "LogStream": "{i-f36fb96e}"
        }
      },
      {
        "Id": "CloudWatch",
        "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatch.CloudWatchOutputComponent,AWS.EC2.Windows.CloudWatch",
        "Parameters": {
          "AccessKey": "",
          "SecretKey": "",
          "Region": "us-east-1",
          "NameSpace": "Windows/Default"
        }
      }
    ],
    "Flows": {
      "Flows": [
        "ApplicationEventLog,CloudWatchLogs",
        "SystemEventLog,CloudWatchLogs",
        "SecurityEventLog,CloudWatchLogs",
        "PerformanceCounter,CloudWatch"
      ]
    }
  }
}

The LogStream value ({i-f36fb96e}) is the stream name; once the configuration is executed, this name is created automatically in the AWS CloudWatch console. The Flows list maps each input component defined under Components to an output component (CloudWatchLogs for logs, CloudWatch for metrics); adjust the pairs to the components you actually use. To make sure your JSON file is valid, use the link: If there is any issue with the .JSON file, the plugin used to start the AWS service may not work.

Step 8: Once everything is verified and confirmed, the user with the .JSON file gets permission to access the AWS CloudWatch service.
Step 9: Save AWS.EC2.Windows.CloudWatch.json.
Step 10: Navigate to the plugin at the path C:\Program Files\Amazon\EC2ConfigService >> EC2ConfigServiceSettings, check the box "Enable CloudWatch Logs Integration", and leave the rest untouched.
Step 11: Go to Services and restart the EC2Config service.
Step 12: If the plugin started successfully, check the logs at C:\Program Files\Amazon\EC2ConfigService\Logs >> EC2ConfigLog.
Step 13: If the plugin is working, log in to the AWS console and navigate to CloudWatch.
Step 14: A new log group will be created automatically. Open the logs from the console; there you will find the name you provided in the JSON file ({i-f36fb96e}).
Step 15: To create a metric filter and send notifications, go to CloudWatch >> select Logs >> Create Log Group >> Create Metric Filter. Create a log group and then create a metric. Once that is finished, create an alarm and set it to notify; the alerts will then be delivered to your email.

The complete process of monitoring logs on AWS CloudWatch is covered in this article. Hope you find it useful, because I learned a great deal about CloudWatch.
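Steps 3 and 7 above mention checking that the .JSON file is valid. If a Python interpreter is available on the server (invoke it as `python` or `python3`, whichever your system provides), a quick local check is:

```shell
# Validate the CloudWatch config before restarting EC2Config.
# Python's json.tool exits non-zero on malformed JSON.
validate_json() {
  if python3 -m json.tool "$1" > /dev/null 2>&1; then
    echo "JSON OK"
  else
    echo "JSON invalid"
  fi
}

validate_json "AWS.EC2.Windows.CloudWatch.json"
```

If this reports "JSON invalid", fix the file before restarting the EC2Config service, because the plugin will silently fail to start otherwise.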


Before you choose a VPS plan, you should decide what you are going to use the server for. If it's for hosting, confirm how many sites you plan to host; the VPS plan must be chosen based on the number of websites to be hosted. If your website draws heavy traffic while hosted on a shared server along with thousands of other sites, it will inevitably cause issues for the other websites on the server, and the hosting provider may kick you out. In that case, you must host your website on a separate dedicated server or on a VPS (Virtual Private Server). Check the different types of virtualization described below, which are the most commonly used, and choose the VPS plan that suits your requirement.


A VPS (virtual private server) uses a technology called virtualization, where a physical dedicated server is split up into virtual servers or containers. Each virtual server appears as a dedicated server.


OpenVZ

This virtualization platform is based on the Linux kernel; we can also call it operating-system-level virtualization. All resources are shared in this virtualization, and OpenVZ can only run Linux-based operating systems. Choose this virtualization if your requirements are as follows:

• You want to host many WordPress or other CMS websites on a server and expect little traffic or resource usage.
• You want to host a heavy website and keep the web contents on one VPS, the databases on a second, and the mail on a third VPS. With this, the traffic and usage can be balanced like a load balancer.
• You want to resell or oversell the servers. Overselling works because the resources are shared and the virtual servers do not all use them at the same time.

Disadvantage: The kernel cannot be modified; all the containers must use the same kernel version as the physical host OS.
Advantage: As there is no overhead of a hypervisor, it runs very fast, and it is open source.
Verdict: We recommend this if you need a fast, efficient, and cost-effective VPS solution.


KVM

Kernel-based Virtual Machine (KVM) is a full hardware virtualization platform. It supports Windows, Linux, and other BSD guests. The physical server should run a Linux-based operating system and support virtualization extensions (Intel VT or AMD-V). It is similar to Xen in that it supports paravirtualization via the VirtIO framework. Each server has a private virtualized network card, disk, and graphics adapter, and unlike OpenVZ, there is no possibility of overselling: you get the guaranteed resources. Except for CPU and network, all resources are dedicated to the virtual server, just like a dedicated server.

Choose this virtualization if your requirements are as follows:

• You want to host a website expecting high traffic.
• You want a server with dedicated and stable resources.

Disadvantage: Due to the hypervisor overhead, the performance is not as good as OpenVZ.
Advantage: Cannot be oversold; a KVM server has no restrictions in terms of functionality, is more stable than OpenVZ, and is open source.
Verdict: This virtualization is a lot more complex and needs more improvements.


Xen

Xen is a bare-metal hypervisor that makes the physical server run multiple hosts within it. The physical server needs to run a Linux-based OS; however, it can host virtual servers running Windows, Linux, and BSD, just like KVM. Resources are always allocated to the guest virtual servers, so each container acts just like a dedicated server. Xen supports both paravirtualization and full virtualization. A Xen system is controlled by the Xen hypervisor as the lowest and most privileged layer, with one or more guest operating systems located above it.

Choose this virtualization if your requirements are as follows:

• You want to host a website expecting high traffic.
• You want a server with dedicated and stable resources.

Disadvantage: Could be slightly slower compared to OpenVZ.
Advantage: Cannot be oversold; the memory will never be shared.
Verdict: We would personally recommend this, as it delivers performance in accordance with the price. Amazon and Rackspace both use Xen.

Conclusion: A Xen VPS is much simpler, fast, and recommended if you are planning to upgrade from shared web hosting. KVM is a little more complex, and OpenVZ is outdated.
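If you already have a VPS and are not sure which platform it runs on, you can usually tell from inside the guest. A rough sketch (the marker paths are the conventional ones; `systemd-detect-virt`, where available, is more reliable):

```shell
# Guess the virtualization platform from conventional guest-side markers.
# The optional root argument exists only to make the function testable.
detect_virt() {
  root="${1:-}"
  if [ -d "$root/proc/vz" ] && [ ! -d "$root/proc/bc" ]; then
    echo "openvz-container"      # /proc/vz without /proc/bc marks a container
  elif [ -d "$root/proc/xen" ]; then
    echo "xen"
  elif grep -q hypervisor "$root/proc/cpuinfo" 2>/dev/null; then
    echo "kvm-or-other-hvm"      # cpuinfo hypervisor flag: some HVM guest
  else
    echo "unknown"
  fi
}

detect_virt
```

This is a heuristic, not a guarantee: for example, the `hypervisor` CPU flag only says "some hardware hypervisor", not specifically KVM.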


PrestaShop is open-source e-commerce software whose installation can be automated from cPanel; it has many built-in features for managing payments, manufacturers, suppliers, and product listings. We can create an online store using PrestaShop with the help of a web hosting service. Here I am going to demonstrate the installation of PrestaShop, an e-commerce content management system, from both the front end (using Softaculous) and the back end (from the command line). Below is the PrestaShop shopping channel installed without any modification.

Prior to the installation.

Allocated databases: Installing PrestaShop creates an additional database, so check your database allowance in the cPanel left-hand sidebar first.

Sub domain: If we need to install PrestaShop on a subdomain of our domain, we first create the subdomain from cPanel. Below are the steps to create a subdomain under cPanel.

1. Click Subdomains under the Domains section in cPanel.
2. Enter the name of the subdomain you want to create.
3. Click Create.

Steps To Install PrestaShop via Softaculous

We can install PrestaShop via Softaculous from cPanel, found under the Software/Services section. Softaculous is an auto-installer used in Plesk, cPanel, DirectAdmin, etc. to install third-party software like PrestaShop. Below are the steps to install Softaculous from the back-end server. Log in to the server via SSH as the root user and execute the given commands:

wget -N
chmod 755
./

After hitting the Softaculous option under the Software/Services section in cPanel, select the PrestaShop application. Then click the Install option; it will prompt for an input form where we can specify the domain name on which to install PrestaShop, and also the directory. In this form we enter the installation details; make sure the Admin Account settings are updated with the correct email. To administrate PrestaShop, go to the link prompted in the Softaculous installation notes; after completing the password prompt, we have access to the PrestaShop admin page. Now we have completed the installation of PrestaShop from cPanel via Softaculous, and we can uninstall it just as easily via Softaculous itself. Installing and uninstalling software like PrestaShop through Softaculous is much easier than from the command line.

Installing PrestaShop from the command line

Further to the previous installation steps, now I am going to explain the installation of PrestaShop from the command line. Below are the steps to follow.

1. Move to a directory where we can download the PrestaShop package without any issue:
cd /usr/local/src
2. Download the PrestaShop package we need to install:
wget
3. Copy the downloaded PrestaShop zip file to the document root of the domain on which we need to install it:
cp -pr /home/username/public_html/
4. Move to the document root of the specified domain:
cd /home/username/public_html/
5. Extract the downloaded file using the unzip command:
unzip

Now we will be able to access PrestaShop for the domain using its link, where we can see the PrestaShop installation wizard. On the first page of the installation we choose the language and click Next. On the second page we see the license agreement; accept the terms and conditions and hit the Next button.

The next page asks for the store information. We enter the shop name, activity, the country in which the store is located, the time zone, and the account information: name, email address, and password. Keep this information somewhere safe, as it is needed to manage the store.

The next page asks for the database details, such as the database server address, database name, login, and password. For this we create a database from phpMyAdmin, select the "utf8_general_ci" collation, and click the Next button.

After clicking on the next button it will be navigated to the installation page with installation details.

Now we have successfully completed our PrestaShop installation. The back end is full of modules to add and access. I ran these steps in just 10 minutes with Softaculous and cPanel. This article should help you install smoothly without bumping into errors, if you follow the above steps carefully.


Repeated authentication prompts are common on the SharePoint 2010, SharePoint 2008, and SharePoint 2007 platforms. They can also be seen on administration sites with basic Windows authentication enabled, and after recent patches. When you try to access secure pages on Windows domains using a host header on the server, login credentials don't work and you face issues like an "Authentication error" or a continuous pop-up of the login window, as shown in the image below.


This error is mostly related to the authentication methods in Windows Server. It occurs on both http and https pages, and it is common on domains where a redirect rule sends all requests to the https:// page: the authentication keeps prompting, leading nowhere. Please follow the steps below to overcome this issue. When any redirect rule is used, first make sure it is correct. A common http-to-https redirect rule lives in the web.config file in the document root of the website.
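A common form of such a rule, using the IIS URL Rewrite module, is sketched below (this assumes the URL Rewrite module is installed; the rule name is arbitrary and the pattern follows the module's documented schema):

```xml
<!-- Hypothetical http-to-https redirect in web.config -->
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="HTTP to HTTPS redirect" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <!-- Only rewrite requests that did not arrive over TLS -->
            <add input="{HTTPS}" pattern="off" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}"
                  redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```

If your rule differs materially from this shape (for example, redirecting to a hard-coded host that doesn't match the site binding), correct it before touching the authentication settings.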


This works with Windows Server 2008, 2012 R2, SharePoint 2010, etc., as it is a Windows IIS issue in most cases, and a common one. The screenshots below are from IIS 6.2; the screens differ between IIS versions, but the settings are the same.

Steps To Follow To Overcome the Issue.

Step 1 : Log in to the Windows server and open IIS Manager: Run >> "inetmgr", or search for "Internet Information Services".
Step 2 : Go to the desired website and double-click it.
Step 3 : Navigate to Authentication in IIS, as in the screenshot below.


Step 4 : Disable all authentication types and enable only Basic authentication, as in the screenshot below. Keep the webpage ready for login, set the authentication type to Basic in IIS, and try logging in; this mostly works.


Step 5 : Save and restart the website from IIS, then try logging in again; it works most of the time.
Step 6 : If the issue persists with only Basic authentication enabled, go to Authentication in IIS and enable both Basic authentication and Windows authentication, as in the screenshot below, then try logging in.


Step 7 : While enabling Windows authentication, go to Authentication >> Windows Authentication >> Enable >> Providers, and make sure the providers list matches the screenshot below (typically Negotiate first, followed by NTLM).


Most issues are resolved by steps 1 to 5; if the issue still persists, use the steps from 6 onward. Note: If the issue is not resolved even after all the above steps, make sure the details below are checked and corrected before retrying. If the domain controllers run Windows 2000 or 2003, make sure you add the service accounts to the appropriate built-in groups. If the domain controllers are installed on 2008 R2 without Service Pack 1, and the SharePoint servers are 2008 or earlier, issues occur with encryption between the SharePoint servers and the domain controllers; in this case, make sure the domain controllers are fully patched. Run dcdiag and check that the domain is in good health: the test should not throw any errors. If any issue is discovered, resolve it before moving forward.


If you have a domain and wish to transfer it from your current registrar to another, this article will help you prepare your domain for a successful transfer. Before preparing for a domain transfer, make sure your domain is eligible. The following checks tell you whether your domain is ready for a transfer:

=> Your domain name and registrant contact address (administrative contact) should be correct and valid
=> Registrar lock should be disabled for the domain
=> The domain must have been registered for at least 60 days with the current registrar
=> The transfer process will most probably take 5 to 7 days to complete

Things to be done at the current registrar's end to prepare your domain name for transfer:

=> Unlock the domain (untick registrar lock)
=> Get the EPP code ** (authorization code)
=> Disable Whois privacy

Things to be done at the new registrar's end:

=> Place a new transfer order

1. Place new domain transfer order at new registrar

You should purchase a "domain transfer" from the new registrar to whom you wish to transfer your domain. You have to pay an amount, most probably for one year, to transfer the domain. If you want to continue there, you can renew the domain registration; otherwise, it will expire. Once the payment is completed, the status of the domain at the new registrar will be set to "Authorization Required". You will also receive an email containing a security code and a transaction ID.

2. Unlock the domain and request EPP code at current registrar

To make the transfer, the new registrar needs an authorization code (Auth code or EPP code) from your current registrar. So, you have to request your current registrar to provide the authorization code.

3. Receive EPP code from current registrar

Once you have requested the authorization/EPP code, your current registrar will send you an email with the authorization code for your domain.

4. Proceed with the transfer process at the new registrar

Once the new registrar receives the authorization code, the transfer process can proceed. Go to the new registrar's end and begin the transfer process. There we can see two fields: "Status", which shows "Authorization required", and "Recommended Action", which shows "Authorization - Begin transfer authorization". Now enter the authorization code you received from the current registrar. Once this is completed, the new registrar will display the Status as "Pending Current Registrar approval" and the Recommended Action as "Accept transfer at current registrar". This means the transfer process has been initiated.

5. Confirm transfer request

Meanwhile, you will receive an email notification from your current registrar, which is essentially a chance to rethink the domain transfer. If you have decided to proceed with the transfer, you can ignore the mail. If you wish to cancel the domain transfer, contact them to cancel it within the specific time period given by the current registrar (approximately 5 days).

6. Domain transfers successfully at new registrar

The current registrar takes up to 5 days to release the transfer of the domain (as they might wait for your cancel request). You will receive a final confirmation email from the new registrar to the WHOIS Administrative Contact with instructions on approving the transfer. Once you have approved the transfer, it takes between 5 and 7 days to complete. During this time, no changes can be made to the domain name, including DNS server changes. If you need to change your DNS servers, make sure the changes are in effect prior to the transfer.

**EPP Code

An EPP code is an authorization key provided by your old registrar and required by your new registrar to complete a registration transfer for a domain name.
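The registrar lock mentioned in the preparation steps shows up in whois output as the clientTransferProhibited status, so it is easy to check before placing the transfer order. A small sketch (a sample whois line is piped in for illustration; in practice, feed it the output of `whois` for your domain):

```shell
# Report whether whois output (read from stdin) shows the transfer lock.
check_transfer_lock() {
  if grep -qi "clientTransferProhibited"; then
    echo "locked - ask the current registrar to disable the registrar lock"
  else
    echo "unlocked - ready for transfer"
  fi
}

# Illustrative sample line; replace with: whois <your-domain> | check_transfer_lock
echo "Domain Status: clientTransferProhibited" | check_transfer_lock
```

An "unlocked" result only covers the registrar lock; the other eligibility checks (60-day age, valid contacts, Whois privacy disabled) still apply.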

Solutions for the common problems encountered while transferring a domain.

=> The EPP key is expired.
Solution: Generate a new EPP code from the current registrar.

=> The domain transfer authorization email is not received.
Solution: Check whether the contact email address is correct.

=> The domain has gone into redemption.
Do you know what the redemption period for a domain means? For domains with the TLDs .com, .net, and .org, a redemption period applies. When a domain expires, the registry allows a period of 30 days to renew it. If we do not renew the expired domain within that period, the registry holds our domain for a further 30-day hold period, which is called the redemption period. If we want to renew the domain, it first has to be recovered from redemption, for which we pay a certain amount to the registry, and then we wait another 5 days for it to be recovered completely. This means it can take up to 65 days to recover the domain from expiration.
Solution: A domain under redemption cannot be transferred directly; it must first be recovered from redemption and renewed before restarting the transfer.

Hope this article helps you :)


This article describes the migration of OpenVZ containers in a SolusVM cluster and some errors you may commonly face while migrating them. Here I show some easy steps to move OpenVZ containers.


If you want to migrate an OpenVZ container from "NODE A" to "NODE B", you need to set up an SSH connection between the node that hosts the container (NODE A) and the node you want to migrate it to (NODE B). The script file below helps you complete this task.



wget
chmod a+x
./ destination-ip destination-port


List the containers on the source node:

vzlist -a

Now you can start the migration. In OpenVZ, "vzmigrate" is the command we use to migrate the container.


vzmigrate -v --ssh="-p destination-port" destination-ip container_id

Now you have completed the migration of the container from "NODE A" to "NODE B". You can also use some flags with the vzmigrate command; here are some important ones:

-r, --remove-area yes|no : whether to remove the container on the source host after a successful migration.
--online : perform an online migration. If you want zero downtime, use this flag.

The final step is updating the SolusVM master, so it knows where the VPS has been moved to. Log in to the SolusVM master and run the following command.


/scripts/vm-migrate VSERVERID NEWNODEID

VSERVERID is the ID listed in your VM list in SolusVM; NEWNODEID is the ID of the node in SolusVM (here you need to find out the ID of "NODE B").

Now we can discuss some errors commonly faced while migrating OpenVZ containers. If you migrate a VPS to a node whose /vz filesystem is still ext3 (rather than ext4), the ploop image cannot be created and you may get the error below:

Storing /vz/private/1332441.tmp/root.hdd/DiskDescriptor.xml
Error in check_mount_restrictions (ploop.c:1536): The ploop image can not be used on ext3 or ext4 file system without extents
Failed to create image: Error in check_mount_restrictions (ploop.c:1536): The ploop image can not be used on ext3 or ext4 file system without extents
Destroying container private area: /vz/private/1332441
Creation of container private area failed

The solution is to convert /vz from ext3 to ext4, because ploop does not work on an ext3 filesystem. Here is how to convert /vz from ext3 to ext4.

Stop the virtualization services:
# service vz stop

Remove the /vz/pfcache.hdd partition:
# rm -rf /vz/pfcache.hdd

Unmount the /vz partition:
# umount /vz

Convert the file system:
# tune2fs -O extents,uninit_bg,dir_index /dev/DEVICE_NAME
# e2fsck -fDC0 /dev/DEVICE_NAME

Change the mounting options in /etc/fstab:
# grep "/vz" /etc/fstab
/dev/DEVICE_NAME /vz ext3 defaults,noatime 1 2
# vi /etc/fstab
# grep "/vz" /etc/fstab
/dev/DEVICE_NAME /vz ext4 defaults,noatime 1 2
# mount /dev/DEVICE_NAME /vz

Start the virtualization services:
# service vz start


In the article above, we discussed migrating OpenVZ containers and converting the /vz container filesystem from ext3 to ext4. Note: take a backup before attempting this procedure; it may render your system unbootable and may destroy your data. I hope this article helps improve your technical knowledge.


It's been noted that the number of enterprises, big or small, shifting their existing applications from traditional infrastructure to the AWS cloud has multiplied in the past few years. While the pay-as-you-use pricing model of AWS services is what attracts enterprises, careless usage produces unnecessary cost spikes. To reduce cost, you must keep AWS services optimized, or in other words, design your AWS architecture for cost. In this blog we shall see some fundamentals of cost optimization.

Unused and Underused Resources

Find the unused and underused AWS resources in your AWS infrastructure and either stop them when not in use or downgrade them to a more optimal type from a cost management point of view. A simple example: the EC2 or RDS instances developers use to test their applications can be stopped on weekends, when no one is working with them. Since AWS does not charge you for stopped resources, you save real money by following this advice. It is widely understood that core services such as EC2, EBS, and RDS are the main contributors to enterprises' monthly bills, so downgrade or choose the correct instance type if you are not fully utilizing the resources, and continuously optimize your AWS service types by monitoring their usage. The AWS CloudWatch service can monitor metrics for CPU utilization, data transfer, disk usage, memory usage, etc., giving a clear picture of instance under-usage so you can size your AWS services accordingly. Also look for old snapshots and AMIs and delete them; automating scripts to delete old ones regularly is a best practice.
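The sizing review described above can be partly automated. A toy sketch: given per-instance average CPU figures (which in practice you would pull from CloudWatch), flag downgrade candidates below a threshold. The instance names and numbers here are made up for illustration:

```shell
# Print instances whose average CPU (%) is below the threshold.
# Input: lines of "instance-name avg-cpu", e.g. exported from CloudWatch.
flag_underused() {
  awk -v t="$1" '$2 + 0 < t + 0 {
    print $1 " (avg CPU " $2 "%) - consider stopping or downsizing"
  }'
}

printf 'web-1 72\ndev-test 3\nbatch-7 11\n' | flag_underused 20
```

CPU alone is a rough signal; memory and network under-usage should go into the same decision, as the paragraph above notes.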

Make use of the AWS Elasticity Property

Use the AWS Auto Scaling feature to scale your computational resources up and down according to business needs. Consider, for example, an online e-commerce website with unpredictable spikes of traffic. Rather than running a high-end architecture capable of handling any amount of traffic at all times, it is better to scale resources and computational power up and down with the traffic to the website, which optimizes the billing cost effectively. This setup can be achieved via the AWS Auto Scaling feature, with a CloudWatch metric alarm, SQS or another notification service triggering the scaling process. This way you never overpay for your services, creating a cost-effective yet intelligent architecture that matches your computational demand. In other words, AWS urges you to think parallel.
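The scale-up/scale-down decision above can be sketched as a simple target-tracking rule. The 60% CPU target and the capacity bounds are illustrative assumptions (real Auto Scaling policies compute this for you):

```python
import math

# Minimal target-tracking sketch: size the fleet so average CPU stays
# near a target. The target and min/max sizes are illustrative only.

def desired_capacity(current_instances, avg_cpu, target_cpu=60.0,
                     min_size=1, max_size=10):
    # Total load is proportional to current_instances * avg_cpu;
    # dividing by the target gives the fleet size that meets it.
    needed = math.ceil(current_instances * avg_cpu / target_cpu)
    return max(min_size, min(max_size, needed))

print(desired_capacity(2, 90.0))  # traffic spike: 2 instances at 90% -> 3
print(desired_capacity(4, 15.0))  # demand lull: 4 instances at 15% -> 1
```

The same arithmetic is what lets the e-commerce site above shrink back to a small fleet once the spike passes.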

Understanding AWS Pricing Model

There are different types of EC2 instances available, and they can be used effectively to build or develop infrastructure while keeping cost minimization in mind. They are:

On-Demand Instances: The classic pay-as-you-go EC2 instances, which can be kept stopped when not in use; different On-Demand types are available, each with specific server resources. They are mostly used for running short-term applications which can be stopped unpredictably and restarted again.

Spot Instances: Unused Amazon EC2 compute capacity made available to users at a highly reduced rate. To buy an instance, customers bid the price they are willing to pay per hour, and they get the instance when spare EC2 capacity becomes available at or below the price they bid. The instance is cut off when the Spot price rises above the customer's bid; as long as the customer doesn't cancel the bid, the instance is reactivated when the price falls again. Companies use these instances for testing purposes.

Reserved Instances: Instance types (EC2, RDS, Redshift) that can be reserved and paid up-front for 1 or 3 years when computational usage is predictable. AWS offers them at a much-discounted price compared to On-Demand instances, so for infrastructure with predictable usage, Reserved Instances can be used instead of On-Demand ones to reduce the total computational cost.

For AWS storage services such as S3 and Glacier, the per-GB monthly cost decreases as more data is stored. So keep static files like images, logs, HTML files and reports on S3 (or Glacier for infrequent access) rather than on EBS volumes, as it costs less. You can reduce S3 storage cost further by choosing the Reduced Redundancy option, at the cost of a lower level of redundancy than the standard storage class, so store less critical or easily reproducible data there.
And the best feature is that data transfer into S3 is free.
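The On-Demand versus Reserved trade-off comes down to arithmetic on expected usage. The hourly rate and up-front price below are made-up placeholders (real prices vary by region and instance type):

```python
# Compare a year of On-Demand usage against a 1-year Reserved Instance.
# Both prices are illustrative placeholders, not real AWS rates.

ON_DEMAND_HOURLY = 0.10    # $/hour, hypothetical
RESERVED_UPFRONT = 500.00  # $ paid once for a 1-year term, hypothetical

def cheaper_option(hours_used_per_year):
    on_demand_cost = ON_DEMAND_HOURLY * hours_used_per_year
    return "reserved" if RESERVED_UPFRONT < on_demand_cost else "on-demand"

print(cheaper_option(2000))  # light, unpredictable use -> on-demand
print(cheaper_option(8760))  # running 24x7 all year    -> reserved
```

The break-even point (here 5,000 hours) is what separates the "predictable usage" workloads the text recommends reserving from the short-term ones best left On-Demand.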

AWS Tools and Services for Cost Analyzing

There are services made available by AWS to analyze an existing architecture, or to understand the cost structure of one you are about to build, that can show you how to reduce the bills. Some of them are:

Consolidated Billing: For a large organization with multiple AWS accounts, possibly in different regions for different departments, AWS allows all the bills to be consolidated under one account. The major advantage is that AWS combines usage from all the accounts to qualify the company for volume pricing discounts, i.e. the AWS policy of paying less per unit the more you use comes into action.

AWS Trusted Advisor: With business-level support, the Trusted Advisor tool is available to analyze an organization's existing architecture; it constantly notifies users of unused or wasted resources, old snapshots, space issues, security loopholes and so on. In turn, Trusted Advisor helps you reduce your monthly bill effectively.

CloudWatch Billing Metric: AWS has introduced a billing metric in CloudWatch that raises an alarm if the configured billing estimate is exceeded. This way you are notified early of any unexpected spike in usage, perhaps due to a misused resource or a security mishap.

AWS Cost Calculator: The AWS cost calculator helps you estimate monthly bills when you enter the AWS components you use into the calculator template.
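The volume-discount effect of Consolidated Billing can be seen with a toy tiered-pricing calculation. The tier boundaries and per-GB prices below are invented for illustration; only the shape (cheaper per GB as total usage grows) mirrors AWS's pricing:

```python
# Toy tiered storage pricing: the per-GB price drops as usage grows.
# Tier sizes and prices are invented for illustration only.
TIERS = [(50_000, 0.023), (450_000, 0.022), (float("inf"), 0.021)]  # (GB, $/GB)

def monthly_cost(gb):
    cost, remaining = 0.0, gb
    for tier_size, price in TIERS:
        used = min(remaining, tier_size)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return round(cost, 2)

# Two departments billed separately vs consolidated under one payer:
separate = monthly_cost(40_000) + monthly_cost(40_000)
combined = monthly_cost(80_000)
print(separate, combined)  # combined usage reaches the cheaper tier sooner
```

Because the combined account crosses into the cheaper tier, the consolidated bill is lower than the two separate bills added together.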


To sum up, one must always keep in mind the fundamental principles of AWS pricing and regularly analyze reports from tools such as Trusted Advisor for a cost-optimized AWS environment.


We can easily create a database backup job to back up a user database. We can do this using SQL Server Management Studio. Follow the steps below to create an MSSQL backup job.

First we need to create a test database. We can create a test database from SQL Server Management Studio by following the steps below.

Then we create a new SQL Server Agent job.

In Object Explorer, connect to an instance of SQL Server, expand "SQL Server Agent", expand Jobs, then right-click Jobs and select "New Job". On the General page, give the SQL Agent job a name; here we use the name "Test Backup Job".

The next step is to create a backup job step.

Select the "Steps" option. On the Steps page, create a new job step by clicking the "New" button. Then name the job step; here we use "Backup Job Step". We can use the script below to back up the database.

Script to back up the database (the database name is taken from the DemoDB.bak file name):

BACKUP DATABASE [DemoDB] TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\DemoDB.bak'

After entering the name and adding the script to the command field, click OK to add this step to the job. Then click OK on the "Steps" page to create the job.

We can view the created job under Jobs folder in Object Explorer.

To start this job, right-click "Test Backup Job" (under SQL Server Agent –> Jobs) and click "Start Job at Step". A window then appears showing the job's progress, and we can see a "Success" status once the job completes.

The status of each Agent job can be viewed from the Job History and logs.

To view the job history and logs, in Object Explorer under SQL Server Agent Jobs, select and right-click the Agent job we created and choose "View History". The recent job execution history and the results of each run appear in the Log Viewer. By following the steps above, we can easily create a SQL Server Agent job to back up an MSSQL database.
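When many databases need date-stamped backups, the T-SQL command used in the job step is often generated by a small script rather than typed by hand. A sketch (the database names and the C:\Backups folder are hypothetical examples, not from the walkthrough above):

```python
from datetime import date

# Generate date-stamped BACKUP DATABASE statements for several databases.
# The database names and backup folder are hypothetical examples.
BACKUP_DIR = r"C:\Backups"

def backup_statement(db, day):
    stamp = day.strftime("%Y%m%d")
    return (f"BACKUP DATABASE [{db}] "
            f"TO DISK = N'{BACKUP_DIR}\\{db}_{stamp}.bak'")

for db in ("DemoDB", "SalesDB"):
    print(backup_statement(db, date(2020, 1, 31)))
```

Statements produced this way could be run via sqlcmd or pasted into an Agent job step, so each night's backup file carries its own date.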


Amazon SES (Simple Email Service)

Amazon SES (Simple Email Service) is a low-cost mail service for bulk mailing, built on the reliable and scalable infrastructure that Amazon developed to serve its own customers. It can be used for sending transactional email, marketing messages and more by relaying through the AWS SES service. Amazon SES offers a free tier for AWS customers, and beyond that billing is based on usage of the service; the normal package provides 50,000 emails a month with a maximum send rate of 14 emails/second. The Amazon SES service is available from the console and also from the command line interface.

Steps To Configure With Postfix

Before configuring Postfix with SES, check the prerequisites below and verify them:

  • Confirm Postfix is installed and is able to send email from the server
  • Create a user for SES access and keep the credentials safe
  • Attach a policy to the created user (administrative access to SES)
First, we have to create a user in AWS IAM for SES access, with a secret key and access key. Attach a policy giving the user administrative access to SES, like the image below.

Assuming SES is in the US East (N. Virginia) region:

  • Navigate to file in /etc/postfix
  • Add the below lines to file (Preferably bottom)
  • Save and close the file.
  • Navigate to file in /etc/postfix
  • Check that all the configuration lines are present in the Postfix configuration file
  • Comment out the line -o smtp_fallback_relay= as shown
  • Create or edit the /etc/postfix/sasl_passwd file, replacing the username and password with the generated SMTP credentials as shown (not the secret/access key of the IAM user)
  • Replace the username/password and save the /etc/postfix/sasl_passwd file.
  • To create a hash map database file containing the SES SMTP credentials, use the command: postmap hash:/etc/postfix/sasl_passwd
  • Restrict access to the above files, as they contain the SMTP credentials
  • Stop and restart the postfix service.
  • If you want to relay a domain name through AWS SES, you should verify the domain name as shown below
  • To verify the domain, refer to the screenshot below:
  • After the domain is verified, you can try sending a mail from the server as:
    • mail -s "Subject" mail address
    • Or sendmail -f, with the from address and to address, followed by the subject, then press Ctrl+D
  • Check the inbox and you should see the mail with:
    • Signed by: verified
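For reference, the relay settings added to the Postfix configuration in the steps above typically look like the following (assuming SES in us-east-1; the username/password are placeholders for the generated SMTP credentials, and the exact lines in your setup may differ):

```
# /etc/postfix/main.cf additions (us-east-1 SES SMTP endpoint)
relayhost = [email-smtp.us-east-1.amazonaws.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_use_tls = yes
smtp_tls_security_level = encrypt
smtp_tls_note_starttls_offer = yes

# /etc/postfix/sasl_passwd (SMTP credentials, not the IAM access/secret key)
[email-smtp.us-east-1.amazonaws.com]:587 SMTPUSERNAME:SMTPPASSWORD
```

After editing, run postmap hash:/etc/postfix/sasl_passwd and restart Postfix as described in the steps above.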