
I’ve been wanting to do it for years, but the autumn colors hitting this season lit the fire that finally got me building out the raspi time lapse project I’d been hiding from.
Essentially I’m placing a Raspberry Pi + camera module in my window, which faces west. A couple things about that view: there’s a Ginkgo tree directly outside; that tree turns this blazing yellow in autumn; and when the leaves on the Ginkgo drop, along with the leaves on the trees across the street, the Prison Ship Martyrs Monument and downtown Brooklyn become visible through the branches, just above the brownstones. So it’s this evolving view that I’ve been wanting to get at.
I’d love to show you this. It’s not exactly a blow-your-mind kind of view, but it’s nice. And it’s the transitions that I think are most interesting. Unfortunately I ended up not having the time to complete it this year. NBD.
From an OPSEC perspective, yea, I’d love to share this, but it’d too easily expose the physical location of my apartment, down to the specific window where I’ve got the camera perched. For that reason, I’m documenting the process here and sharing the code for the project on GitHub, but I’ll be putting the resulting images/experiences/whatever it is I’ll build with the data into what’ll be anonymous spots online. I’ll leave it up to you to find the work, though you likely won’t.
I wish it didn’t have to be this way. But as we’re sliding into 2024 the world is sliding into an increasingly terrifying place. You should take similar precautions for yourself and those you care about.
github.com/mechaneyes/lapse-of-time
The code is relatively straightforward. Right now I’m mostly focused on getting the timing portion of the code up and running. The photo integration will happen when I’m back in BK.

I’m leveraging Astral to calculate the times of the sun’s positions throughout the day. Among other events, Astral provides the times for dawn and dusk. I’ve taken those and added 20 minutes on either side to get the start and end times for shooting. Each night around midnight, the app updates those start/end times for the upcoming day, then takes an image every 10 seconds until dark.
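Here’s roughly what that timing piece looks like, as a minimal sketch against the Astral 2.x+ API. The coordinates are deliberately placeholders rather than the actual window, per the OPSEC note above:

```python
from datetime import date, timedelta
from zoneinfo import ZoneInfo

from astral import LocationInfo
from astral.sun import sun

# Placeholder coordinates -- roughly Brooklyn, deliberately not my window.
LOCATION = LocationInfo("Brooklyn", "USA", "America/New_York", 40.69, -73.98)
MARGIN = timedelta(minutes=20)


def shooting_window(day: date) -> tuple:
    """Start/end of the shooting window: dawn - 20 min to dusk + 20 min."""
    events = sun(LOCATION.observer, date=day, tzinfo=ZoneInfo(LOCATION.timezone))
    return events["dawn"] - MARGIN, events["dusk"] + MARGIN


# Run once a night (cron, systemd timer, whatever) to set up the next day's window.
start, end = shooting_window(date.today() + timedelta(days=1))
```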
I’m considering what I might want to do when it’s dark. And I’m especially curious how I’d configure this to continue shooting in snowstorms and when there’s snow on the ground. There’s a lot of light out there when that’s happening.
Camera Module 3 vs. High Quality Camera

Before heading to Sacto for the holidays I managed to run a test using the Raspberry Pi Camera Module 3. When setting up the preview, the images looked fine but not great. The app ran, and the camera snapped a pic every 10 seconds until dusk.
When I looked at the images later, not only did they not look great, they weren’t even passable. Honestly, they looked like shit. Turns out the images you get from shooting with a $25 camera look like images shot with a $25 camera. Thankfully, keeping it easy, there’s the Raspberry Pi High Quality Camera.
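For reference, the capture loop from that test was more or less this shape. This is a sketch using the Picamera2 library rather than the code verbatim from the repo; the frame interval matches what I described above, and the output directory and filenames are just assumptions:

```python
import time
from datetime import datetime, timedelta
from pathlib import Path

from picamera2 import Picamera2  # works for Camera Module 3 and the HQ Camera alike

FRAME_INTERVAL = 10          # seconds between shots
OUTPUT_DIR = Path("frames")  # placeholder output folder
OUTPUT_DIR.mkdir(exist_ok=True)

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
picam2.start()

# In the real app the end time comes from the Astral window above;
# a fixed duration here keeps the sketch standalone.
end_time = datetime.now() + timedelta(hours=3)

while datetime.now() < end_time:
    # Timestamped filenames sort chronologically, which keeps FFmpeg happy later.
    filename = OUTPUT_DIR / datetime.now().strftime("%Y%m%d-%H%M%S.jpg")
    picam2.capture_file(str(filename))
    time.sleep(FRAME_INTERVAL)
```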
The High Quality Camera is available with a choice of C/CS mount or M12 mount. I’m unfamiliar with both, and while I’ll do my research, I’m looking forward to a trip to B&H to be educated by one of their salespeople. I’ve always appreciated their knowledge/grasp of what they’re pushing. Great to watch pros at work.
Side note, a Jewish friend of mine shared the actual words making up the “B&H” acronym: “Beards and Hats.” I’m going with it.
Output … WTF will this look like?
Honestly don’t know.
Sure, I can use FFmpeg to assemble a handful or a whole lot of images into videos. But what can I do beyond that?
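The FFmpeg step is simple enough to sketch now. Something like the following, wrapped in Python to match the other sketches here; the frame directory, frame rate, and output name are all assumptions:

```python
import subprocess

# Stitch the captured frames into a video. At a 10-second capture interval,
# 24 fps works out to about four minutes of real time per second of video.
subprocess.run(
    [
        "ffmpeg",
        "-framerate", "24",
        "-pattern_type", "glob",
        "-i", "frames/*.jpg",   # placeholder frames directory from the capture sketch
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",  # broad player compatibility
        "timelapse.mp4",
    ],
    check=True,
)
```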
I’m wondering how I might turn computer vision loose on the data. Do I trigger something every time a person or car passes through the frame? Trigger something every time someone wearing red enters the frame? Face detection? Let’s not get creepy here…

But yea, sticking with computer vision for another moment, what about analyzing pedestrian traffic, or bicycle or car traffic? How about correlating this with time of day, season, or weather?
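One hedged starting point: OpenCV ships with a stock HOG pedestrian detector, so a rough per-frame people count is only a few lines of Python. Crude next to a modern neural detector, but it would be enough to start correlating foot traffic against time of day, season, or weather; the paths here are placeholders:

```python
import glob

import cv2

# OpenCV's built-in HOG + linear SVM pedestrian detector. No training required.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())


def count_pedestrians(image_path: str) -> int:
    """Rough count of people visible in a single frame."""
    frame = cv2.imread(image_path)
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    return len(boxes)


# Per-frame counts, keyed by filename, ready to plot against time of day.
counts = {path: count_pedestrians(path) for path in sorted(glob.glob("frames/*.jpg"))}
```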
What about intersecting with GenAI? Let’s ask an LLM for ideas.
Prompt
i’m setting up a camera in my window to shoot time lapse images. it will be shooting every 10 seconds while there’s enough light outside. the sidewalk with people walking on it is in the field of view. in the background are brownstones with downtown brooklyn in the distance.
please help me brainstorm ways of working with the generated images beyond using something like ffmpeg to do time lapse videos
how might i leverage generative ai tools here?
GPT-4
- Image Augmentation: Generative AI models like GANs (Generative Adversarial Networks) can be used to create new images or augment existing ones, potentially adding weather effects like rain, snow, or different lighting conditions to your time-lapse images.
- Image-to-Image Translation: AI can translate photos into a different style, like transforming the cityscape into a landscape photo, or translating your time-lapse photo to an illustration.
- Deep Art: Use deep learning algorithms to transform your images into the style of famous artists, also known as deep art effects, and create an artistic masterpiece.
- Video Synthesis: Generative AI allows the synthesis of videos from still images. This could be used to create interesting photographic sequences or cinematic movie-like experiences from your time-lapse.
- Super Resolution: AI systems like SRGAN can be used to create Super Resolution images, allowing you to create high-resolution versions of your time-lapse.
- Predictor Models: Use generative AI to predict future events – for example, how busy the sidewalk might be at a certain time, or how the lighting will impact the appearance of your shot.
- Anomaly Detection: You can use AI to highlight or detect anomalies which occurred during the time-lapse, like a parade or an accident on the sidewalk.
- People/Objects Counting: You can use AI tools to count and track the people or objects passing by in each frame as an interesting study of pedestrian or vehicular traffic.
- Background Generation: Generative AI can be used to generate simple or complex background themes, which can add a new layer of interest and creativity to your time-lapse photos.
- Season Simulation: By feeding AI the right parameters, you can create simulations of different seasons’ influence on the same scene.
- Motion Transfer AI: These AI tools could allow you to control the movements of people or objects in the captured videos by generating motion transfer from one video to another. This could be used to create more controlled time-lapse videos or generate exciting simulation models.
I’ve got my work cut out for me.
Parting Pics
I created the pics above using Midjourney. These are a couple of the initial bits of output from the process:


midjourney prompt: multiple exposure timelapse of a full solar eclipse as viewed from on top of a mountain overlooking a stunning norwegian fjord, rich and saturated colors