-
The clues for decoding come from the Alpha Lupi avatar (that symbol means Fourier) and the sentence "x,y ... use u,v..." This led us to using a Fourier transform on the left image (actually a common tool for cleaning images with repeated patterns). The sneaky thing Bungie did is bury the final output (with the numbers) in the Fourier image. Each geomask (right image) represents the mask they used for that particular left image when burying the numbers. So in practical terms, the geomask represents the area uncovered in the left image for that particular five minutes. The automation we created grabs that data every five minutes and adds it to what we already have throughout the day.
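A rough sketch of what that accumulation could look like, assuming each five-minute FFT spectrum and its geomask have already been saved as grayscale images (the filenames here are hypothetical, not the group's actual automation):

```python
import numpy as np
from PIL import Image

def accumulate_interval(composite, spectrum_path, geomask_path):
    """Add the geomask-uncovered region of one five-minute spectrum to the running composite."""
    spectrum = np.asarray(Image.open(spectrum_path).convert("L"), dtype=np.float64)
    geomask = np.asarray(Image.open(geomask_path).convert("L"), dtype=np.float64) / 255.0
    if composite is None:
        composite = np.zeros_like(spectrum)
    # Keep only the area this interval's geomask uncovers, and merge it in.
    return np.maximum(composite, spectrum * geomask)

# Hypothetical filenames for the intervals collected so far in the day.
intervals = [("spectrum_0000.png", "geomask_0000.png"),
             ("spectrum_0005.png", "geomask_0005.png")]

composite = None
for spec_path, mask_path in intervals:
    composite = accumulate_interval(composite, spec_path, mask_path)

Image.fromarray(composite.astype(np.uint8)).save("composite.png")
```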
-
Damn that's crazy.
-
You can use the steps in this thread if you want to do some experimenting yourself: http://www.bungie.net/en-US/View/community/Forum/Post?id=59803512 I use Photoshop for some of the less critical steps - it should be easy to substitute another program for those. The heavy lifting of doing a Fast Fourier Transform (FFT) is done in a cross-platform Java app - ImageJ.
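If you'd rather not install ImageJ, the FFT step is essentially a log-scaled, centered magnitude spectrum. A minimal NumPy sketch that produces something similar to ImageJ's FFT output (for a grayscale input) looks like this:

```python
import numpy as np
from PIL import Image

def fft_spectrum(path):
    """Return an 8-bit log-magnitude FFT spectrum, roughly like ImageJ's FFT command produces."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.fft.fftshift(np.fft.fft2(img))   # center the zero-frequency component
    magnitude = np.log1p(np.abs(spectrum))         # log scale so faint detail is visible
    magnitude = 255 * magnitude / magnitude.max()  # normalize to 0-255
    return Image.fromarray(magnitude.astype(np.uint8))

fft_spectrum("left_image.png").save("left_image_fft.png")
```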
-
I have always wondered how they appear.
-
From what I understand, they use filters in Photoshop to see it. I think there is an easier way to do this. I don't think Bungie would expect everybody to have Photoshop.
-
They don't expect everyone to have Photoshop... but they know that quite a few people would. ARGs tend to require large groups of people, and typically within that large group, different people have different skill sets and interests. So the solutions always require those with special skill sets, because otherwise the puzzles would be too simple.
-
But in all seriousness... In order to see the numbers, we Fourier transform the left image for each five-minute node interval using either a Photoshop plug-in or an image processor that can perform FFTs (Fast Fourier Transforms). We then stack those images, using the corresponding right image as a mask, to get the final output. This latest image had the numbers stored across the RGB channels, whereas previously we were either using just the green (luma) channel or converting the image to black and white. Hopefully that helps...
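For illustration only (not the group's actual script), a per-channel version of that stacking step might split the RGB channels, transform each one, apply the geomask, and then combine the intervals by keeping the brightest value seen in each channel; the filenames and interval count below are hypothetical:

```python
import numpy as np
from PIL import Image

def masked_rgb_spectrum(left_path, geomask_path):
    """FFT each RGB channel of the left image, then keep only the geomask-uncovered region."""
    left = np.asarray(Image.open(left_path).convert("RGB"), dtype=np.float64)
    mask = np.asarray(Image.open(geomask_path).convert("L"), dtype=np.float64) / 255.0
    channels = []
    for c in range(3):  # R, G, B handled separately
        spec = np.fft.fftshift(np.fft.fft2(left[:, :, c]))
        mag = np.log1p(np.abs(spec))
        mag = 255 * mag / mag.max()
        channels.append(mag * mask)  # apply this interval's geomask
    return np.stack(channels, axis=-1).astype(np.uint8)

# Stack all intervals by taking the maximum value per pixel and channel.
frames = [masked_rgb_spectrum(f"left_{i:04d}.png", f"geomask_{i:04d}.png") for i in range(2)]
final = np.maximum.reduce(frames)
Image.fromarray(final).save("final_output.png")
```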
-
Thanks for the explanation.