I may have just broken standup (using n8n)

No scrum masters (nor bananas) were harmed during the writing of this article. 🍌
It's no secret that I love developing software. When it comes to being employed as a software developer, however, I have more of a love-hate relationship. There are a few reasons for this, but one of the biggest is the amount of bureaucracy that tends to come with the job, especially when it comes to ceremonies.
One such ceremony is everyone's favorite 15-minute meeting: standup, which I find can be rather hit or miss.
On the one hand, standup itself can sometimes be rather valuable, especially when you have a really good team. Unfortunately, however, I think that's the exception rather than the rule. And even if you do happen to have a great team, I often feel like standup runs into the same repeating issues:
- The memory problem: Whenever I start giving my daily update, I personally can never remember what it is that I did the day before—especially if it happens to be a Monday, as I've usually spent the entire weekend doing everything I can to forget what I worked on the week before.
- The timing problem: Standup can be way too early in the morning, especially if pager duty went off at 2:00 a.m., or more likely, I stayed up till 2:00 a.m. binge-watching a new series on Netflix.
- The derailment problem: Perhaps the worst part of standup is the many derailments, almost always followed by a "can we take this offline?"
Personally, I find that all of these issues take away from the original purpose of standup, which is a bit of a shame because I think the underlying idea behind it is a good one.
In any case, whether or not standup is a good thing is kind of a moot point as it's not going anywhere anytime soon. And so, as developers, we're left with two options: either accept it as is or try to make it the best we can.
For myself, I decided to put my own spin on making standup the best I could. Rather than doing this the responsible way, by improving both the communication and the underlying process, I decided to just break it in the only way I know how: by building an overengineered solution, one that only I would use, to automate the process of standup for me.
The Game Plan
In order to build a solution to automate standup for myself, I began as I would any normal project: defining both the goals and requirements in order to scope it for success. To do so effectively, I first needed to understand the core of the problem I was trying to solve: standup.
When it comes to standup, this typically involves communicating three main data points:
- What one has done
- What one is doing
- Any current blockers
For me, all three of these can be a little hazy when it comes to an early morning meeting. And so, I wanted a way to automate the collection of these three data points.
However, rather than attempting all three of these at once, I decided to take a more iterative approach, choosing just one data point to focus on before adding in the other two.
Therefore, for my initial implementation, I decided to build a system to automatically remind me of what it was that I achieved the day before. This was not only going to be the easiest to implement (at least in my mind), but it would also solve one of my biggest personal pains when it comes to standup, making it a great MVP.
The High-Level Design
As for the actual implementation itself, I decided to achieve this by setting up a simple automation which would:
- Collect information about my previous day's work activities from a number of different sources, such as GitHub (for committing code) and Linear (for issue tracking)
- Send the data through a pipeline to an LLM in order to summarize it into some key standup talking points
- Deliver the summary to myself both as a Slack direct message and as an email
Pretty simple.
n8n: The Secret Weapon
Normally when it comes to building projects like this, I would go about implementing it by hand using a language such as Go. However, recently I've been trying to broaden my horizons. And so for this project, I decided to build it using a piece of technology that's been hyped quite a lot online—one that I originally dismissed: n8n.
If you're unaware, n8n is an automation tool allowing you to connect multiple services together, similar to something like Zapier. However, unlike Zapier, n8n is both source available and self-hostable, which is something that I really appreciate.
In addition to this, it provides a huge number of integrations out of the box, along with the ability to easily add your own. Because of this, n8n is extremely popular, with just over 156,000 stars on GitHub at the time of writing.
Given that popularity, and because I like to learn new things, I decided that this project was going to be a good excuse to use n8n.
Setting Up n8n
I began researching how to deploy a self-hosted instance of n8n. As it turns out, the n8n documentation provides a guide on how you can deploy it using Docker Compose, which will also install Traefik as a reverse proxy providing HTTPS.
I decided to make use of the new Docker Manager feature available on my VPS provider, Hostinger. This feature allows you to easily deploy a docker-compose.yaml straight to a VPS instance from the Hostinger dashboard, meaning you can do so without needing to SSH in—which makes deploying a self-hosted application incredibly fast.
Installing Docker & Docker Manager
Once I had my VPS instance in hand, it was time to set it up for both Docker Manager and n8n. Whilst Hostinger does provide an n8n ISO image that you can use, in my case I wanted to follow the documentation which gives you that Docker Compose file that also installs Traefik and provides you with HTTPS. So I decided to select the Docker app ISO image instead, which installs both Docker and Docker Compose, allowing you to use the Docker Manager feature.
Configuring n8n
Because I had both Docker and Docker Compose already installed, I could skip to step number three in the n8n docs, which was to set up DNS records for my new VPS instance. To do this, I added a new record to my Cloudflare dashboard pointing at the IP of my new VPS instance.
After this, I moved on to step number four, which was to create an environment file. Fortunately, when using Docker Manager, this is incredibly simple. I headed over to the Docker Manager page in the Hostinger dashboard and opened up the YAML editor view.
Here you have two different text entries: one for the docker-compose.yaml and the other for any environment variables. I copied the example file from the n8n documentation and made a couple of changes to suit my own environment, including:
- The domain name
- The time zone
- Email address for the TLS certificate
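For reference, the environment file ends up looking something like this. The variable names follow the example in the n8n Docker Compose guide; the values here are placeholders for my own settings:

```shell
# Placeholder values based on the n8n Docker Compose guide's example .env
DOMAIN_NAME=example.com        # the domain n8n will be served from
SUBDOMAIN=n8n                  # n8n becomes available at n8n.example.com
GENERIC_TIMEZONE=Europe/London # timezone used for schedule triggers
SSL_EMAIL=user@example.com     # email used by Traefik for the TLS certificate
```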
After that was done, I copied in the Docker Compose file and pasted it into the Docker Manager YAML editor. All that remained was to name the project and it was ready to deploy.
After a couple of minutes, the Docker Compose stack was up and running, which I verified by heading over to my configured DNS record.
Building the First n8n Workflow
With n8n successfully deployed, the next thing to do was to set up a user account, and I was ready to begin implementing my standup automation.
First things first, I started a new workflow from scratch and gave it a name, one that I felt would be rather accurate.
What Exactly is a Workflow?
n8n defines a workflow as a collection of nodes which act as each of the individual steps to define your automation. The first of these nodes is the trigger, which is the condition or event that will kick off your workflow's execution.
n8n provides a number of different trigger types, such as:
- An inbound webhook
- A form submission
- The result of another workflow
For this automation, I wanted to use the schedule trigger, which would run the workflow at the same time each day. I set this to 8:00 a.m., which would be early enough to ensure that the workflow completed before standup started.
One nice thing about triggers in n8n is that you can execute them manually at any point during development, which means you don't have to wait around for the trigger condition to fire in order to test your flows.
Obtaining My Commits
With the trigger defined, the next thing to do was to obtain my first data source—the git commits from my previous day.
Understanding Nodes
Nodes in n8n are the key building blocks of a workflow, allowing you to perform a range of different actions such as:
- Fetching data
- Writing data
- Performing data transformations
- Control flow (conditional expressions and loops)
If you take a look at all the different node types available in n8n, you can see it provides a huge number of integrations for various services out of the box. This is one of its key features, as you can take a pretty much no-code approach to interacting with many different APIs and services.
The GitHub Integration Challenge
I wanted this first node to collect data from GitHub, specifically about the previous day's commits. So I searched for the GitHub node and selected one with the action of "get repo."
In order for this node to work, I needed to set up some authentication credentials, which I did by creating an access token inside of my GitHub account.
Note: When it comes to n8n, the majority of the nodes you'll use require credentials in order to integrate with their associated service. Fortunately, n8n provides comprehensive documentation that shows you how to obtain these credentials for whichever node you're configuring.
With my GitHub credentials added, I configured the rest of the node's properties, beginning by selecting which repository I wanted to pull the commit data from. Upon testing the node, I realized that the results from the "get repo" action didn't provide any commit data inside—it was only returning data about the actual repository itself.
Using HTTP Request Nodes
In order to obtain the actual commit data, I needed to use a different operation. Unfortunately, the GitHub node didn't have one that could do this.
At this point, I assumed I was cooked. However, I stumbled across the custom API call operation, which directed me to make use of an HTTP request node (though it mentioned it would take care of authentication for me).
I replaced the GitHub node with an HTTP one and configured it by setting the URL to the GitHub API endpoint for pulling down the commits of a repo:
https://api.github.com/repos/{owner}/{repo}/commits
For authentication, I selected a predefined credential type of GitHub API, followed by selecting my configured GitHub account.
Filtering by Author
Upon executing the node, I was now retrieving a list of my recent commits on the repo. Unfortunately, it was also pulling down commits made by other authors. Because I'm not a fan of plagiarism, I needed to constrain this to only returning commits that I myself had authored.
I achieved this using the author query parameter:
?author=myusername
Filtering by Date
I began to notice another issue: the results included commits from well before the previous working day. To resolve this, I used another query parameter called since, which only returns commits with a timestamp greater than the value you pass in.
Unlike the author parameter, the value for since needed to be dynamically generated—basically set to 24 hours in the past.
Fortunately, n8n allows you to set dynamic values using an expression with JavaScript:
{{ new Date(Date.now() - 24 * 60 * 60 * 1000).toISOString() }}
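Unpacked as plain JavaScript, the expression computes an ISO 8601 timestamp 24 hours in the past, which is the format the GitHub API expects for since. The owner, repo, and username below are placeholders:

```javascript
// Compute the `since` value: 24 hours before now, as an ISO 8601 string.
function sinceTimestamp(now = Date.now()) {
  const DAY_MS = 24 * 60 * 60 * 1000;
  return new Date(now - DAY_MS).toISOString();
}

// Full request URL with both query parameters (placeholder owner/repo/user).
const url =
  "https://api.github.com/repos/{owner}/{repo}/commits" +
  `?author=myusername&since=${sinceTimestamp()}`;
console.log(url);
```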
The Monday Problem
I had yet another bug, one that only appeared when the workflow ran on a Monday: it produced an empty list of commits. This was happening because I hadn't made any commits the day before (Sunday).
I needed to modify my expression to return commits made during the previous working day, which on a typical Monday would be Friday. Thanks to my friend Claude, this wasn't too difficult to whip up an expression for.
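The article doesn't show the exact expression Claude produced, but here's a sketch of the logic it needs to implement, assuming UTC for simplicity: on a Monday (or Sunday), look back to Friday; otherwise, look back one day.

```javascript
// Start of the previous working day, as an ISO 8601 string for `since`.
// Monday looks back 3 days to Friday; Sunday looks back 2 days; any other
// day just looks back to the previous calendar day. Assumes UTC.
function previousWorkingDay(now = new Date()) {
  const day = now.getUTCDay(); // 0 = Sunday, 1 = Monday, ..., 6 = Saturday
  const daysBack = day === 1 ? 3 : day === 0 ? 2 : 1;
  const d = new Date(now.getTime() - daysBack * 24 * 60 * 60 * 1000);
  d.setUTCHours(0, 0, 0, 0); // midnight, so the whole day's commits count
  return d.toISOString();
}
```

In n8n, this logic would live inside a `{{ ... }}` expression on the since query parameter.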
Obtaining Patches
With my commits filtering correctly, I realized the results only gave me the commit hashes and metadata, not the actual changes themselves. I needed to add another step to pull out the full commit information.
To achieve this, I needed to loop over each of the commit hashes and perform an HTTP request to pull down the information for that specific commit.
I initially looked at the loop over items node. However, upon reading the documentation, it turns out you don't actually need to use this node in the majority of cases as n8n often handles the looping of input data for you.
All I needed to do was use another HTTP request node, this time using the same commits path but adding in the commit hash as a path parameter using an expression. For authentication, I reused the existing GitHub API credentials.
Now when I executed this step, n8n was looping through each individual commit hash and pulling down the commit information for each one—including most importantly the patch, which communicated the actual code changes that each commit made.
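The per-commit request is just the list endpoint with the hash appended; in n8n, the hash comes from each input item via an expression (something like `{{ $json.sha }}`, assuming the field name GitHub uses). As a plain function:

```javascript
// Build the endpoint for a single commit from its hash.
// Roughly equivalent to the n8n expression-built URL:
//   https://api.github.com/repos/{owner}/{repo}/commits/{{ $json.sha }}
function commitUrl(owner, repo, sha) {
  return `https://api.github.com/repos/${owner}/${repo}/commits/${sha}`;
}
```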
Summarizing My Work
I was ready to move on to the next step: passing this information into an LLM for summarization.
Setting Up the OpenAI Node
I created another node: an OpenAI node with the "Message a Model" action, which does exactly what it says on the tin. This is something I really like about n8n: each node is very clear about what it actually does.
For this node to work, I needed to set up credentials by creating a new API key inside the OpenAI platform. With credentials defined, I could configure the rest of the node.
First, I chose the model (initially GPT-5, later changed to 4.1 Mini). Then I defined a system message to configure the behavior of the LLM:
"Summarize the commits that I will be sending in the next message as a simple stand-up update that I could share with colleagues as to what I achieved in my previous day."
For the user message, I sent across the entire input data as stringified JSON.
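In an n8n expression, stringifying the input is roughly `{{ JSON.stringify($json) }}` (assuming the standard `$json` variable); the effect is simply:

```javascript
// Serialize the whole input item so the LLM receives it as one string.
const item = { sha: "abc123", commit: { message: "feat: add login page" } };
const userMessage = JSON.stringify(item);
console.log(userMessage);
```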
The Multiple Execution Problem
I ran into an issue where the node was being invoked multiple times—one for each individual commit. This meant the generated AI message didn't have the full context of all changes taken together and was producing multiple outputs.
This was happening due to the default behavior of a node in n8n: it executes once for each input item in an array.
Using the Aggregate Node
I needed to turn the multiple outputs from the previous node into a single input for the next. n8n provides the aggregate node for this purpose, which turns multiple items into a single one.
The aggregate node is one of several data transformation nodes that n8n provides, including:
- Filter node
- Merge node
- Deduplication node
I added an aggregate node between the HTTP and OpenAI nodes, configuring it to aggregate all item data into a single list with an output field called data. To save money on AI tokens, I specified only the two fields I needed.
Now the aggregate node was turning my 14 input items into a single output item, giving the LLM the entire context in a single message.
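Conceptually, the aggregate step collapses N input items into a single item with one list field. The commit field names below are placeholder stand-ins for the two fields I kept:

```javascript
// Roughly what the aggregate node does: fold every input item into a
// single output item, keeping only the requested fields under `data`.
function aggregate(items, fields) {
  return {
    data: items.map((item) =>
      Object.fromEntries(fields.map((f) => [f, item[f]]))
    ),
  };
}

// 14 commit items would become one item whose `data` array has 14 entries.
const commits = [
  { sha: "a1", commit: { message: "feat: add login" }, author: "me" },
  { sha: "b2", commit: { message: "fix: typo" }, author: "me" },
];
const single = aggregate(commits, ["sha", "commit"]);
console.log(single.data.length); // 2
```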
Refining the Output
The initial output felt kind of wooden—I wanted it to be more natural sounding. So I went about refining the system prompt.
As always with prompt engineering, it took a few iterations to get right (it always feels a little arcane). Alongside the system prompt changes, I found a big improvement from switching the model from GPT-5 to 4.1 Mini, which not only saved costs but gave a more natural result.
Sending Myself the Report
The standup report for my previous day's work was taking shape. Now I needed to send it to myself over a couple of communication channels: Slack and email.
Slack Integration
Out of the two, Slack was going to be the easiest to configure. I selected the Slack node with the "send a message" action, which required setting up authentication credentials. I configured the authentication to use OAuth 2 so that messages would be sent from my own user account.
With credentials added, I configured where the message should be sent—either to a channel or to a user. Long-term, I wanted this sent to the engineering channel. However, for the MVP, I decided to just send it to myself.
I set it to a simple text message type and dragged in the output from the LLM. Upon executing the node, I received a Slack DM from myself with my summarized Git commits. Very cool.
Email Integration
I also wanted to send this as an email on the off chance that I didn't have access to Slack that day, or if I one day suddenly lost my mind and decided to migrate to Microsoft Teams.
I added an email node, which allows you to send an email over SMTP. I configured this to work with Resend, which is pretty much what I'm using for all my email sending these days (even though it is kind of expensive).
With email credentials configured, I finished setting up the rest of the node:
- From email: My configured sender
- To email: My personal email
- Subject: Using an expression to generate the current date
- Content: Plain text with the LLM output
I tested this and received a nice plain text email with the summarization of my previous day's commits.
Testing My Implementation
With my initial implementation completed, all that remained was to activate the workflow and wait for it to execute the following day.
The next morning, I woke up just after 6:00 a.m., went about my normal morning routine, then headed over to my desk and waited with nervous apprehension to see if my creation was going to work.
As 8:00 a.m. rolled around, I received both an email and Slack notification containing my standup update for the work I completed the day before.
Success! I had managed to complete my MVP.
The Other Data Points
It was now time to focus on making the rest of the standup process obsolete. I needed to obtain the other two data points:
- What I was currently working on
- Any current blockers
Linear Integration for Current Work
This ended up being rather simple by adding an integration to Linear (my issue tracking tool). I could pull down any issues that were:
- Assigned to myself
- Currently marked as "in progress"
This would serve as the data for communicating what I was currently working on. I used a filter node to remove any items that didn't match these constraints.
I also attempted to use Linear to pull down any tasks that were closed in my previous working day. Unfortunately, this wasn't possible as the Linear node didn't return the timestamps for when a task status changed. Maybe it would have been better to use Jira instead.
GitHub PRs for Blockers
For obtaining blocker data, I used another GitHub node to fetch any open PRs that were created by myself.
Merging the Data Sources
With both new data sources added, I linked them to their own aggregate nodes, each with their own unique field name. I also modified the output field for the existing commits aggregate node.
Then I used a merge node to turn each of the three inputs into a single object that could be passed to the LLM.
Once complete, I modified my system prompt to reference each of the input data fields for their respective standup communication topic.
With that, I was now generating a stand-up update that I would consider to be somewhat complete—one that could refresh my memory whenever the hacky sack of doom landed in my hands.
Automating Async Standup
Given how far I'd come, I wanted to see how much of standup I could end up automating. I decided to first tackle asynchronous standup, where you typically publish your standup update inside of a Slack channel (in my case, #engineering).
Whilst sending a message to a channel would be simple enough, the real challenge was modifying my workflow so that it would only publish on days when I didn't have a standup meeting scheduled.
Google Calendar Integration
I used the Google Calendar node to pull down any events on the scheduled day matching the query "standup". This meant it would only produce a result if I had a standup meeting that day.
However, so the rest of the workflow would still run, I configured this node to always produce output data, meaning there would be an output item even when no calendar event matched that day.
Conditional Branching
I used an if node for branching based on a conditional expression—checking if the calendar input was empty. If it was, I would consider that day to be async.
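The exact check depends on what the calendar node emits when "always output data" is on; here's a sketch, assuming an empty placeholder item with no id field:

```javascript
// Treat the day as async when the calendar node returned no real event.
// With "always output data" enabled, an empty result comes through as an
// item with no meaningful fields, so a missing id is a reasonable proxy.
function isAsyncDay(calendarItem) {
  return calendarItem == null || calendarItem.id === undefined;
}

console.log(isAsyncDay({})); // true: no standup event, publish async update
console.log(isAsyncDay({ id: "evt_1", summary: "standup" })); // false
```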
Lastly, I modified the existing Slack node to explicitly reference the LLM input, then duplicated the node, linked this duplicate to the asynchronous branch, and modified it to write to the engineering Slack channel instead of sending the message to me.
Now when I tested this on a day without standup scheduled, I received a message from myself directly in the engineering Slack channel.
I had managed to automate my async standup meetings.
What About Synchronous Standup?
This I'm still figuring out. However, my current idea is to make use of something such as ElevenLabs in order to generate an audio clip using my voice.
Here's a sample of what that might sound like:
"Yesterday, I worked on the starter template, updating the dashboard UX with a personalized welcome message, added a clearer upgrade CTA, and some reusable UI elements. Today I'm working on getting integrations set up within the Zenstart dashboard, starting with one to create a new database instance when a new repo is generated from a starter kit. As for blockers, currently I have none."
In addition to generating this clip, I'm also going to have to figure out how to automatically play it whenever it's my time to give an update. However, that's going to be a problem for my future self.
And given the team that I currently have, I'm pretty sure that standups are going to be asynchronous for the foreseeable future.
Conclusion
What started as a simple frustration with forgetting what I did yesterday turned into a full-fledged automation project that handles:
- ✅ Collecting commit data from GitHub
- ✅ Pulling current work items from Linear
- ✅ Identifying potential blockers from open PRs
- ✅ Summarizing everything with an LLM
- ✅ Delivering via Slack and email
- ✅ Automatically posting to channels on async days
n8n turned out to be a surprisingly powerful tool for this kind of automation work. The visual workflow builder, extensive integrations, and ability to self-host made it a great choice for this project.
If you want to try this yourself, I've made my workflow JSON available on GitHub.
Resources & Links
- n8n: https://n8n.io/
- n8n Docker Compose docs: https://docs.n8n.io/hosting/installation/server-setups/docker-compose/
- My "Breaking Standup" workflow JSON: https://github.com/dreamsofcode-io/automating-standup
- Hostinger VPS: https://hostinger.com/dreamsofcode (use code DREAMSOFCODE for 10% off)
Originally published on the Dreams of Code channel.