Scenario:

You have a Deployment with 2 replicas, and you want the pods to be spread evenly across the nodes (not all replicas on the same node). When you create this Deployment, the Kubernetes scheduler places one pod on each of the two nodes in the cluster (say Node A and Node B). Later, if one of the nodes, say Node B, goes down, its pod is rescheduled on the last remaining node, Node A (as expected). Now you provision another node, say Node C, into the cluster. Both pods still remain on Node A, which is also expected behavior: the scheduler only places new pods and does not move pods that are already running.
But you want to move one of the replica pods to Node C to restore high availability of the service. How do you do that?
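
For concreteness, the steps below assume a Deployment created roughly like this; the name web and the image nginx are placeholders, so substitute your own Deployment wherever they appear:

kubectl create deployment web --image=nginx --replicas=2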


Solution:

First, list all the nodes in the cluster and note their names-

kubectl get nodes
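
The output will look roughly like this; node-a and node-c here stand in for whatever names your cluster actually uses:

NAME     STATUS   ROLES    AGE   VERSION
node-a   Ready    <none>   40d   v1.27.3
node-c   Ready    <none>   5m    v1.27.3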


Now get the details of the pod that you want to move to another node-

kubectl get pods -o wide


The NODE column in the output shows the node on which each pod is currently running; at this point both replicas will show Node A.
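
With the placeholder names from above, the output would look roughly like this (the pod names are generated by the Deployment's ReplicaSet, so yours will differ):

NAME                  READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
web-5d4f8c7b9-7xkqp   1/1     Running   0          3h    10.244.1.5   node-a   <none>           <none>
web-5d4f8c7b9-m2wzr   1/1     Running   0          3h    10.244.1.6   node-a   <none>           <none>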

Now cordon this node (Node A) so that no new pods can be scheduled on it-

kubectl cordon <node A name>
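
For example, with the placeholder node name used above; note that cordoning only marks the node as unschedulable, it does not evict the pods already running on it:

kubectl cordon node-a
# The node should now report STATUS Ready,SchedulingDisabled
kubectl get nodes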


Now, assuming you always want 2 replicas available (if briefly running on a single replica were acceptable, you could instead scale down to 1 and back up to 2), temporarily scale the Deployment up to 3 replicas-

kubectl scale deploy <deployment name> --replicas=3


This will start one more pod, and it will land on the new Node C, because Node A (where the present 2 replica pods are running) is already cordoned and therefore excluded from scheduling.

So for a short time you will have 3 pods running.
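
You can confirm where the third replica landed before going further:

# The new pod should show node-c in the NODE column
kubectl get pods -o wide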

Now delete one of the pods on Node A and immediately scale the Deployment back down to 2-

kubectl delete pod <pod name>
kubectl scale deploy <deployment name> --replicas=2
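
With the placeholder names from earlier (take the pod name from the kubectl get pods -o wide output above), this would be:

# Remove one of the replicas still running on the cordoned Node A
kubectl delete pod web-5d4f8c7b9-7xkqp
# Return to the desired replica count; you should end up with one pod on node-a and one on node-c
kubectl scale deploy web --replicas=2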


Finally, uncordon Node A so that new pods can be scheduled on it again-

kubectl uncordon <node A name>
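
As a final check, Node A should report STATUS Ready again (without SchedulingDisabled), and the two replicas should now be spread across both nodes:

kubectl get nodes
kubectl get pods -o wide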