Basically, the negotiating game will break down into demanding the absolute maximum and pretending you care a lot more than you actually do. The more demanding person gets more; the less demanding person is taken for a ride.
Then the tool should be named Trump.ai, not Mediator.ai. :)
I don't know anything about this specific LLM tool, but if it correctly uses the Nash bargaining optimiser, then that won't happen.
The thing you point out is exactly why Nash demanded invariance under affine transformations in his solution. The units are completely arbitrary: if I rank everything as having importance 1 million, that's exactly the same as ranking everything as having importance 1, and also the same as ranking everything as having importance 0.
The solution is only sensitive to differences in the utility function, not the actual values of the function. If you want to weight something very strongly in the Nash version of the game, you also have to weight other things correspondingly weakly.
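To make the invariance point concrete, here is a minimal sketch (not the actual tool's implementation; the outcomes, utilities, and disagreement points are made-up numbers). It picks the outcome maximising the Nash product, then rescales one player's utilities by a positive affine map and shows the chosen outcome doesn't change, so inflating all your numbers buys you nothing:

```python
def nash_solution(outcomes, u1, u2, d1=0.0, d2=0.0):
    """Pick the outcome maximising the Nash product (u1 - d1) * (u2 - d2),
    among outcomes both players weakly prefer to disagreement (d1, d2)."""
    feasible = [o for o in outcomes if u1[o] >= d1 and u2[o] >= d2]
    return max(feasible, key=lambda o: (u1[o] - d1) * (u2[o] - d2))

outcomes = ["A", "B", "C"]
u1 = {"A": 3.0, "B": 2.0, "C": 1.0}  # player 1 prefers A
u2 = {"A": 1.0, "B": 2.0, "C": 3.0}  # player 2 prefers C

pick = nash_solution(outcomes, u1, u2)

# Apply a positive affine transformation a*u + b to player 1's utilities
# (the disagreement point must be transformed the same way: a*0 + b = 42).
u1_scaled = {o: 1_000_000 * v + 42 for o, v in u1.items()}
pick_scaled = nash_solution(outcomes, u1_scaled, u2, d1=42.0)

print(pick, pick_scaled)  # same outcome both times
```

Only the differences (u - d) enter the product, and a positive affine map just multiplies every player-1 difference by the same constant, so the argmax is unchanged.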