When you say robots, I'm assuming you mean artificial intelligence in general. This is a contentious topic even among experts, so I want to stress that this is just my own opinion and rough understanding of things. There are two big issues here: automation and existential risk.

Until recently, innovation was generally seen as a good thing that creates more and better opportunities in the long term, even if it costs low-skilled jobs in the short term. That seems to be changing now, however, as technology appears to be destroying jobs faster than it creates them. This article, for example, argues that about 47% of U.S. employment is at risk of automation. That and other evidence strongly suggest we should take the issue seriously over the coming years. Universal basic income is one proposed response.

Then there's existential risk, an even more controversial topic because of its speculative nature. This is the idea that A.I. could eventually cause great harm, maybe even human extinction, through unintended consequences. It may sound like science fiction, but there are serious arguments from A.I. researchers and philosophers that you might want to look into. What should we do about it, though? I honestly don't know, sadly.