Just a couple of months ago I realized that robots are not capable of visual perception. An article I read explains that seeing is a very complex process that requires a great deal of intelligence. Recognizing something as simple as a chair requires knowing it is the same object when viewed from different positions. This demands not only mental images of a chair from most angles but also an abstraction of the object: giving it a name and placing it in a category (furniture, objects for resting by sitting, and so on). Abstraction and defining categories require other skills not directly related to vision: language abilities such as syntax, semantics, and word creation, and perhaps even assigning emotional attributes to things (e.g., fire hurts, electricity is dangerous, and the like).

And these are not the only problems. Things are not static; they evolve and are part of a three-dimensional world. A glass falls and breaks: this is a process that must be understood, and a corrective action should be taken. People approach, move away, or disappear behind a door. A robot will need to know that people are its reason to exist, which poses an ethical challenge. With all this brainpower, will robots be self-aware, or have some kind of soul? For now robots are blind machines with very restricted autonomy; it seems that making them see the world will not be an easy task.