It appears white/gold to me on its own, I’ve never been able to see anything different.
Grabbing this specific image and sampling the colours, though, they appear more of a grey/brown colour. I can sorta maybe understand blue, but definitely not black.
This is just using Polish photo editor on android:
It’s funny how people will keep barking about it even when you slap them in the face with a color picker, which is a mathematical readout of the color. There is no “how the brain sees things”. It’s literally WHAT THE COLOR IS. You can’t call white with a faint blue tint “blue”, and what is clearly a “gold” shade can’t possibly be black. If the photo was heavily manipulated through photo editing or lighting, that doesn’t prove anything at all. Or the question was stupid. No one was really asking “what color is the dress”, they were asking what colors are in the photo. And the photo has no reliable relation to the real dress because of the lighting conditions, manipulation, or even photo editing.
This is exactly the thing.
Whatever the dress may be in reality, the photo of it that was circulated was either exposed or twiddled with such that the pixels it’s made of are indeed slightly bluish grey trending towards white (i.e. above 50% grey) and tannish browny gold.
That is absolutely not up for debate. Those are the color values of those pixels, end of discussion.
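For what it’s worth, the “above 50% grey” claim is checkable in a few lines of Python using relative luminance. The RGB triples below are hypothetical stand-ins for what a colour picker might report, not measurements from the actual photo:

```python
# Classify a sampled pixel as above or below "50% grey" using relative
# luminance (ITU-R BT.709 coefficients). The RGB values are illustrative
# stand-ins, not samples taken from the real image.

def luminance(rgb):
    """Relative luminance of an RGB triple, 0.0 (black) to 1.0 (white)."""
    r, g, b = (c / 255 for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

light_stripe = (150, 160, 180)   # bluish grey (hypothetical)
dark_stripe = (120, 100, 60)     # brownish gold (hypothetical)

for name, rgb in [("light stripe", light_stripe), ("dark stripe", dark_stripe)]:
    lum = luminance(rgb)
    side = "above" if lum > 0.5 else "below"
    print(f"{name}: luminance {lum:.2f} ({side} 50% grey)")
```

A bluish grey in that range lands above 0.5, i.e. lighter than mid grey, which is the sense in which the light stripes “trend towards white”.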
Edit to add: This entire debacle is a fascinating case of people either failing to or refusing to separate the concept of a physical object versus its very inaccurate representation. The photograph of the object is not the object: ce n’est pas une robe.
The people going around in this thread and elsewhere putting people down and calling them “stupid” or whatever else only because they know that the physical dress itself is black and blue based on external information are studiously ignoring the fact that this is not what the photograph of it shows. That’s because the photograph is extremely cooked and is not an accurate depiction. The debate only exists at all if one party or the other does not have the complete set of information, and at this point in history, now that this stupid meme has been driven into the ground quite thoroughly, I should hope that all of us do.
It’s true that our brains can and will interpret false color data based on either context or surrounding contrast, and it’s possible that somebody deliberately messed with the original image to amplify this effect in the first place. But the fact remains that arguing about what the dress is versus how it’s been inaccurately depicted is stupid, and anyone still trying that at this late stage is probably doing so in bad faith.
The “white” pixels are literally blue. The “black” ones can be considered gold due to the lighting.
Yes a very light blue, nobody is seeing brilliant white. But on a colour slider it’s much closer to white than the ‘true’ dark blue of the dress. If you sample the sleeve or whatever that is hanging over it’ll be even closer to pure white.
You missed the whole point. If I take a white dress and then shine a blue lamp on it, then take a photo, the pixels will be 100% blue. But would that mean the dress itself is blue?
But you can clearly see that the lighting is bright yellow-white, not blue…
That’s… literally not what this phenomenon is about, either. Talk about missing the point.
It’s exactly the point. White fabric will appear blue in blue light, which is why some people see this white dress and think it’s blue.
That is literally what the argument is caused by, adaptive perception to lighting conditions.
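The blue-lamp point can be sketched numerically (all values made up for illustration): a camera pixel only records the per-channel product of surface reflectance and illuminant, so “white cloth in blue light” and “blue cloth in white light” can produce identical pixels.

```python
# Minimal sketch: a sensor records reflectance * illuminant per channel,
# so two different scenes can yield the exact same pixel value.
# All numbers are hypothetical, chosen only to make the products match.

def recorded_rgb(reflectance, illuminant):
    """Per-channel product of reflectance (0-1 each) and illuminant RGB."""
    return tuple(round(r * i) for r, i in zip(reflectance, illuminant))

white_dress = (1.0, 1.0, 1.0)              # reflects all channels fully
blue_dress = (80 / 255, 100 / 255, 1.0)    # reflects mostly blue

blue_lamp = (80, 100, 255)                 # bluish illuminant (hypothetical)
white_lamp = (255, 255, 255)               # neutral illuminant

# Both scenes hit the sensor identically:
print(recorded_rgb(white_dress, blue_lamp))   # (80, 100, 255)
print(recorded_rgb(blue_dress, white_lamp))   # (80, 100, 255)
```

That ambiguity is exactly why the brain has to guess the illuminant before it can decide what colour the fabric “really” is.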
They’re not stupid, their visual cortex just lacks the ability to calibrate to context. You can see in the picture that the scene is very brightly lit. If your visual cortex is in working order, you’ll adjust your perception of the colours. The picture reveals that some people struggle to do that.
fMRI studies show that white-and-gold perceivers exhibit more activity in frontal and parietal brain regions, suggesting that their interpretation involves more top-down processing. This means they are more, not less, engaged in contextual interpretation.
Some differences may relate to physiological traits like macular pigment density, which affects how much blue light is absorbed before reaching the retina. People with higher density tend to see white and gold.
Color perception is not only about the visual cortex’s function but about the image’s properties and the brain’s inferential processes. You’d know this if you weren’t a dumb blue-n-black’er
How come the gold-and-whiters are simultaneously claiming they use more top-down processing AND that the pixels are white and gold? Looking at the pixel colour is bottom-up processing.
If the dress is actually blue and black, how is doing more contextual processing supposed to get you a less accurate perception? Imagine if it was a snake and you needed to tell what colour it was so you’d know if it’s going to bite you. If your perception of the snake’s colour changes depending on the lighting, you’re going to die.
The correct interpretation of that study is that you white and golders are doing 10,000 calculations per second and they’re all wrong… Or, you know, the BOLD activation was in inhibitory pathways.
The claim mixes up how perception works and what people actually mean when they talk about top-down processing. White-and-gold viewers aren’t saying the pixels are literally white and gold; they’re saying the colors they perceive match that label most closely, especially when those were the only options given. Many of them describe seeing pale blue and brown, which are the actual pixel values.

That’s not bottom-up processing in the strict sense, because even that perception is shaped by how the brain interprets the image based on assumed lighting. You don’t just see wavelengths, you see surfaces under conditions your brain is constantly estimating. The dress image is ambiguous, so different people lock into different lighting models early in the process, and that influences what the colors look like.

The snake example doesn’t hold up either. If the lighting changes and your perception doesn’t adjust, that’s when you’re more likely to get the snake’s color wrong. Contextual correction helps you survive, it doesn’t kill you.

As for the brain scan data, higher activity in certain areas means more cognitive involvement, not necessarily error. There’s no evidence those areas were just shutting things down. The image is unstable, people resolve it differently, and that difference shows up in brain activity.
Next up: the dress worn by the woman on the right.
Why not an American photo editor?
A) I’m not American
And
B) America can go fuck itself until it sorts out its Nazi problem. I still think Canada should enact a full trade embargo and take our business elsewhere.
I mean… it was a dumb joke on Polish and Polish being homographs, but okay.
Whoops
I missed that; bit of a sensitive topic atm…
I’m American. You have full permission to shit on us whenever you want. This place fucking sucks.
Why are people downvoting someone for admitting they made a mistake? It takes some courage to do that.
Probably because they qualified it by making an excuse for themself instead of just owning the error without ‘strings attached’.
The point has never been about the actual pixel color codes. It’s about how human perception doesn’t follow those objective metrics.
Distilled down, we perceive color and brightness in comparison to the surrounding scene. The checker shadow illusion is a clear example of the same color looking different.
So the color perception on the dress depends on how the brain decides to color correct the white balance of the scene.
I find it easy to switch back and forth between the two color combinations: If I assume that the scene is in full sun, then the dress looks blue and black. If I assume that it’s in the shade, but with a brightly-lit background, then it looks white and gold.
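That switch between interpretations can be sketched as a crude von Kries-style white balance: divide the recorded pixel by whichever illuminant you assume. The pixel and illuminant values below are illustrative, not sampled from the photo:

```python
# Sketch of the "brain as white balancer" idea: the same recorded pixel
# yields different inferred surface colours depending on the assumed
# illuminant. Simple von Kries-style correction: divide each channel by
# the assumed light colour. All numbers are illustrative.

def infer_surface(pixel, assumed_light):
    """Divide out the assumed illuminant, clamping to the 0-255 range."""
    return tuple(min(255, round(p * 255 / l)) for p, l in zip(pixel, assumed_light))

stripe = (150, 160, 190)   # bluish-grey pixel (hypothetical)

# Assume warm, direct sunlight: the residual blue must belong to the fabric.
print(infer_surface(stripe, (255, 245, 230)))   # stays bluish -> "blue dress"

# Assume bluish shade: the blue cast is discounted as lighting.
print(infer_surface(stripe, (190, 200, 255)))   # near neutral -> "white dress"
```

Same input pixel, two different lighting priors, two different dresses, which is the whole illusion in one division.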
You should watch https://youtu.be/bg41XfnIBvk for an explanation on how to properly get the colors from the image.