Let's take a look at this from two points of view: clients to vCloud, and vCloud to vCenter.
Clients to vCloud: The inbound volume is load balanced according to the policy you define on the load balancer (F5, NetScaler, etc.). Even if you only have 1 vCenter, you are still load balancing inbound HTTPS web requests and console requests. Each cell can serve the web UI to a client and will handle console requests on its own.
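Since any cell can serve any inbound request, a plain round-robin policy on the load balancer is enough. Here's a minimal sketch of that idea; the cell hostnames are made up for illustration:

```python
from itertools import cycle

# Hypothetical cell addresses. Any inbound HTTPS or console request can go
# to any cell, so the load balancer just rotates through them.
cells = ["cell-1.example.com", "cell-2.example.com",
         "cell-3.example.com", "cell-4.example.com"]

next_cell = cycle(cells)

def route(request_id: str) -> str:
    """Pick the next cell in round-robin order for an inbound request."""
    return next(next_cell)

# Five requests: the first four land on four different cells,
# then the rotation wraps back to cell-1.
targets = [route(f"req-{i}") for i in range(5)]
```

A real F5 or NetScaler pool does the same thing with health checks layered on top, so a dead cell is skipped automatically.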
vCloud to vCenter: One cell is nominated as the vCenter proxy to handle all communication with a specific vCenter. This is a funnel approach: if two cells talked to the same vCenter active/active, it would be extremely hard to negotiate which tasks are complete and who listens for results. If the cell holding the proxy fails, the proxy moves to another cell.
If you have 1 vCenter and 4 cells, then there really isn't a "load of vCenters" to balance. The other cells servicing HTTPS requests will write anything destined for vCenter into the database, and the cell holding the proxy will pull it from the database and ship it to vCenter.
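The funnel above is essentially a shared work queue with many producers and one consumer. This is a rough sketch of the pattern, with the database modeled as an in-memory queue and all names invented for illustration:

```python
import queue

# The shared database, modeled as a queue. Every cell enqueues
# vCenter-bound work; only the proxy cell dequeues it.
db_queue = queue.Queue()

def handle_https_request(cell: str, task: str) -> None:
    # Any cell can accept the client request; it just records
    # the vCenter-bound task in the DB.
    db_queue.put((cell, task))

def proxy_cell_drain(vcenter: str) -> list:
    # Only the nominated proxy cell pulls tasks and talks to vCenter,
    # so there is a single place tracking task completion.
    shipped = []
    while not db_queue.empty():
        origin, task = db_queue.get()
        shipped.append(f"{task} from {origin} -> {vcenter}")
    return shipped

# Three different cells accept requests, one proxy cell ships them all.
for i, cell in enumerate(["cell-1", "cell-2", "cell-3"]):
    handle_https_request(cell, f"deploy-vapp-{i}")

results = proxy_cell_drain("vcenter-1")
```

The single-consumer design is what makes the active/active coordination problem go away: only one cell ever has an open conversation with a given vCenter.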
If you have 4 vCenters and 5 cells, then you have approximately 0.8 vCenters per cell (4 cells hold 1 vCenter proxy each). The cell that doesn't hold a proxy is ready to take over in case another cell fails.
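The proxy assignment and failover in that 4-vCenter/5-cell example can be sketched as a simple mapping with a spare; this is my own illustration, not vCloud's actual election logic:

```python
def assign_proxies(vcenters: list, cells: list):
    """Map each vCenter's proxy to one cell; leftover cells are spares."""
    assignment = dict(zip(vcenters, cells))
    spares = cells[len(vcenters):]
    return assignment, spares

def fail_over(assignment: dict, spares: list, failed_cell: str):
    """Move any proxy held by the failed cell onto a spare cell."""
    for vc, cell in assignment.items():
        if cell == failed_cell and spares:
            assignment[vc] = spares.pop(0)
    return assignment, spares

vcenters = [f"vcenter-{i}" for i in range(1, 5)]
cells = [f"cell-{i}" for i in range(1, 6)]

# 4 proxies on 4 cells; cell-5 starts out idle as the spare.
assignment, spares = assign_proxies(vcenters, cells)

# If cell-2 dies, vcenter-2's proxy moves to the spare, cell-5.
assignment, spares = fail_over(assignment, spares, "cell-2")
```

Note that once the spare is consumed, a further cell failure would force a surviving cell to hold two proxies, which is why sizing cells at N vCenters + 1 is a comfortable minimum.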
Now for what you are experiencing: I personally haven't seen that sort of lag when hitting a vCenter whose proxy is held by a different cell. However, I'm not at my desk right now to really dig into this.