Running experiments to build Dynamics 365 and Power Platform apps

Today, it’s just me, and I’d like to discuss what empiricism means to teams building complex, enterprise Dynamics 365 and Power Platform applications.

In this episode, I’ll break down what empiricism is and its importance in Scrum, provide examples of work that would benefit from an empirical approach, and share insights on how we can use empiricism to build better business apps and improve our processes, tools, and techniques.


  • [01:45] Why Scrum is based on an empirical process
  • [05:01] Example of how empiricism works
  • [06:21] Why defined process control is the opposite of empiricism
  • [10:01] The importance of empiricism in Scrum
  • [10:53] Examples of simple work where Scrum is unnecessary
  • [11:42] Examples of more complex work that would benefit from an empirical approach
  • [14:07] What I love most about Scrum
  • [15:50] Lessons that my business apps teams and I learned from our experiments
  • [17:27] My challenge to you 


Support the show

🌏 Amazing Applications
🟦 Customery on LinkedIn
🟦 Neil Benson on LinkedIn

🚀 Agile Foundations for Microsoft Business Apps
🏉 Scrum for Microsoft Business Apps
📐 Estimating Business Apps

Keep sprinting 🏃‍♂️


[00:00:00] Neil Benson: There's nothing experimental about the way McDonald's cooks its fries. It's a defined process, definitely not an empirical one. We don't get new teenage employees joining McDonald's and inventing new ways to cook or prepare fries. 

[00:00:17] Good day and welcome to the Amazing Applications Podcast episode 140. I'm your host, Microsoft Business Applications MVP Neil Benson. The purpose of the Amazing Applications Podcast is to help Dynamics 365 and Power Platform teams build amazing business apps using an agile approach. Sometimes, I have amazing guests like Emma Beckett, who joined me in the previous episode 139 to discuss the value that professional testers bring to our teams, and Malin Martnes, who joined me on the one before that 138 to discuss Dynamics 365 Marketing. You can find those episodes and about 138 others at AmazingApps.Show. If you're looking for a transcript or resources from this episode, visit 

[00:01:12] Today, it's a solo episode. Just me, yacking into the mic for the next 15 or 20 minutes. I hope you enjoy it and learn something new. If you do, I'd be grateful if you could support the show. The easiest way to do that is to share the episode on your favorite social media platform or leave a rating or review in Apple Podcasts, Spotify, or Podchaser. Today, I'd like to discuss what empiricism means to building complex, enterprise Dynamics 365 and Power Platform applications. 

[00:01:45] A couple of weeks ago, I had a photo shoot done for the new Customery website that's coming later this year. I'm working with a personal branding photographer, Louise Williams, and she was asking me about concepts that I'm trying to explain in my agile coaching and training programs. One of those is empiricism. Scrum is based on an empirical process, which is essential when we're building mission-critical business applications because it's impossible to predict all the challenges and the changing requirements that come up while your team is building the app. But what is empiricism exactly? How could I explain it to Louise, a former psychologist turned personal branding photographer, in a way that would not only make sense, but that we could somehow capture in a photograph? How do you capture a photograph of empiricism? 

[00:02:38] As Louise and I were chatting about my early career, she learned that I studied biochemistry. And in my final year, my team ran an experiment to transpose the luminescent gene from jellyfish into yeast. If you're into biochemistry, we were transposing the gene that expresses GFP, the green fluorescent protein from the jellyfish Aequorea victoria, through a bacterial vector, Escherichia coli, into the yeast strain Saccharomyces cerevisiae, which is just baker's yeast. I've gotta admit that my work on GFP in 1996 didn't win any Nobel Prizes, but the three scientists whose work on GFP preceded mine did win a Nobel Prize in Chemistry in 2008. Shimomura, Chalfie, and Tsien won the prize for the discovery and development of the green fluorescent protein. It's used as a biomarker to confirm gene expression in all sorts of biomedical research today. Back in 1996, I just wanted to make glow-in-the-dark beer with my fluorescent yeast. Can you imagine that? Honestly, officer, I haven't been drinking. I just wanted to make glow-in-the-dark beer with my fluorescent yeast. 

[00:03:52] I loved science as a kid. Every year at school, I would ask the teachers questions about how the natural world worked. Please, sir, I've got another question. Please, miss, just one more question. They'd have to tell me to wait until next year: "We cover that in next year's syllabus." Eventually, in my third year at university, I remember asking the lecturer a question and he said, "Benson, you've reached the limit of human knowledge. If you want to know the answer to that question, you're welcome to write a hypothesis and join my research team as a doctoral student." 

[00:04:25] "You've reached the limit of human knowledge." How cool is that? Well, sorry to say, I never did become a doctoral student of biochemistry. But in my own way, I have been trying to advance the limit of human knowledge. I guess we all are. You and me, we're both building unique business applications that no one has ever built before. You are solving problems that couldn't be solved before. Your team is advancing the limit of human knowledge. Whether you've been a biochemistry student or not, you've probably managed to advance the limit of human knowledge or at least your own knowledge. 

[00:05:01] Maybe you've done something like this in one of your projects. You've had an idea about how you might be able to meet a requirement. You built something to test that idea. You examined the results with your team, and you learned something new, whether your idea worked or not. That's empiricism. Empiricism is simply learning by doing, by experimenting. We see it all the time in science, and we've applied it to our professional lives. I've seen it called PDCA (plan, do, check, act), also known as the Shewhart cycle, and OODA loops (observe, orient, decide, act), which came from US Air Force Colonel John Boyd, who was applying empiricism to combat operations. The scientific method that I learned in primary school is an empirical method. In science, we observe a phenomenon skeptically, we formulate a hypothesis about what's causing that phenomenon, we conduct experiments to either refine or disprove our hypothesis, and we report our conclusions. We have an idea, we conduct an experiment, we test the result, and we discuss the outcome. That's empiricism. 

[00:06:21] Maybe it sounds obvious to you that empiricism is the best way to handle complex work, the type of work we do when we're building enterprise, mission-critical applications. But you know what? I bet most business apps teams today don't take an empirical approach, and I'm talking to you, ERP people; no offense. Some teams still believe that the best way to handle complexity is to document it. Write it all down in excruciating detail. Design the heck out of it. Get the requirements and the designs signed off by the users. Lock it down. Charge them squillions if they change their minds or if we run into something unforeseen. That's pretty much the opposite of empiricism. It's called defined process control. It's a great way to manufacture products precisely and with a very high degree of quality control. 

[00:07:20] McDonald's makes fries from very specific types of potatoes grown to exact specifications. They're harvested and then stored in controlled conditions at specific humidity and temperature. They're then cooked in a fryer from a specific manufacturer, for the same duration, in the same oil, at the same temperature, in all of their restaurants. And I bet they're probably even seasoned with a specific amount of salt. The result is an incredibly consistent product, and I'm sure McDonald's sells tons of reliable fries in every restaurant every year. There's nothing experimental about the way McDonald's cooks its fries. It's a defined process, definitely not an empirical one. We don't get new teenage employees joining McDonald's and inventing new ways to cook or prepare fries. 

[00:08:14] In fact, I remember reading a biography, I don't know whether it was by Ray Kroc or about Ray Kroc. He was the owner of McDonald's; he bought it from the McDonald brothers. And what happened when they expanded their restaurants outside Southern California? For some reason, the fries didn't taste the same. Same potatoes, same oil, same fryers, same cooking procedure. But in Southern California, the potatoes were stored outside in the warm, dry air before they were prepared as fries. Elsewhere in the United States, they were stored inside or refrigerated, so they had a slightly different moisture content, and the result was a product that didn't taste quite as good. And do you know how Ray Kroc figured out the cause of the difference, the temperature and humidity at which the potatoes were stored before they were cooked? That's right. He experimented. He observed the phenomenon of fries that didn't taste right. He hypothesized that it might be because of the moisture content. He ran experiments in stores inside and outside Southern California that mimicked the storage conditions of the potatoes in Southern California. He observed the results of his experiment, perfect fries, and he shared the results with all his restaurant owners. So in the end, there was some empiricism in the origins of the McDonald's potato fries. But not anymore. The procedure for transporting and storing potatoes and cooking fries is now a well-defined process that's executed consistently across McDonald's restaurants, probably worldwide. 

[00:09:47] If empiricism means learning through experimentation, why is that so important in Scrum? How do we actually run experiments in practice when we're building business applications using Scrum? Good questions. 

[00:10:01] Let's address the first question first. Why is empiricism important in Scrum? The empirical approach encourages experimentation, which is critical in software development because I believe it's impossible to predict all the challenges that may arise during the course of a project. By experimenting and learning from our experiences, our teams can discover better ways of building software and adapt our approach accordingly. Scrum is a framework for developing complex products. Unlike with simple products, we don't know what our final product is going to look like before we start the project. The requirements are not known and cannot be completely known in advance. The designs of the components we're going to build aren't known in detail before we begin building them. 

[00:10:53] On the other hand, if you're building simple products or deploying simple applications, that's not complex work. If you've ever built a simple Power App on an Excel or SharePoint data source on your own in a few days, that's not complex work. You could have analyzed all your requirements in advance and designed the app upfront. But for really simple work, even those steps are often unnecessary. Some Microsoft partners can deploy Dynamics 365 Business Central in a couple of weeks because they've pre-built industry templates and they've boiled the work down into a set of repeatable proven procedures that a consultant can follow. I'd argue this is not complex work either. Using Scrum for simple Business Central deployments is unnecessary. It's overkill. 

[00:11:42] At the other end of the scale, here are some examples of more complex work: most ERP implementations, certainly those involving finance and operations, supply chain, HR, or project operations; building mission-critical or enterprise-scale Power Apps; creating new data platforms to support novel business intelligence apps; or deploying Dynamics 365 Customer Service while transforming your support processes. These are complex problems. Analysis in advance and upfront design just don't work. Many teams have tried and failed, and their projects end up as blowouts in the IT press. 

[00:12:24] Instead, we need to take an empirical approach to complex work. We build a small feature. It could be a prototype, a proof of concept, or an increment. We test it by reviewing it with our stakeholders, and we learn from their feedback. That's why Scrum's pillars are transparency, inspection, and adaptation; without them, there's no empiricism. We keep going feature by feature until the sum of all the features is a useful application that we can deploy into production. Incremental development, the concept of building small increments iteratively, is one way that we embrace empiricism in our business apps teams when we're using Scrum.

[00:13:05] But there's another form of empiricism that I think is possibly even more important than incremental development, and that's using empiricism to improve how we work. Traditional approaches to software development have very little to say, almost nothing, about how to improve how we work. At best, some traditional projects have a lessons-learned document or a post-implementation review procedure to capture improvements that could be used in future projects. Good luck finding and using the outputs of those exercises. I bet those docs are buried somewhere in a project SharePoint folder that you and I don't have access to, or maybe that's just me. If you're capturing that learning at the end of your project, when your application is live in production, you've missed the opportunity to improve how you work during the project. Imagine working through all the phases of discovery and analysis, design, development, testing, and deployment and not getting any better at any of those things until after the app is in production. That could be years later. 

[00:14:07] What I love about Scrum are the opportunities for self-reflection by the team about how they're working. We do it every sprint during the sprint retrospective. We observe some phenomenon, create a hypothesis, conduct an experiment in the next sprint to test that hypothesis, and draw conclusions from the results. My business applications teams have conducted hundreds, maybe even thousands, of tiny ways-of-working experiments over the past 15 years, frequently nudging the way that we work together. The empirical approach emphasizes the importance of collaboration and communication within our Scrum team and with our stakeholders, which ensures that everyone is aligned on the application's goals and its progress. It enables our team to work together to solve problems and make improvements through experimentation.

[00:15:01] Some of those experiments have resulted in what I call proven practices. For example, almost all of my teams have found two-week sprints more sustainable than one-week sprints. But there have been a couple of exceptions. We have run one-week sprints for short projects of a couple of months or so, and my team at UNSW switched from two-week sprints to three-week sprints. Still, most of my teams have enjoyed two-week sprints, and that's the proven practice I recommend to new business applications teams in my coaching and training. Of course, your team is absolutely welcome to experiment with alternative sprint durations.

[00:15:50] What else have we learned from our experiments? Here are a couple of examples. It's possible, but extremely challenging, to work with more than one product owner; still, that's better than having no product owner at all, which is not an experiment I would try running. The Given-When-Then format for writing behavior-driven development tests is helpful both for developers who are building features and for developers who are testing them. The common user story format, on the other hand, which is “as a <user role>, I can <use a feature> so that <I get some value>”, although really popular, just doesn't work for most of my teams. Instead, we prefer a short title with a less structured description, followed by some BDD scenarios in the Given-When-Then format. When the same requirement applies to lots of personas, we tag the item with the roles in Azure DevOps rather than listing them in the item description in a user story format; we just add them as tags instead. We enjoy working together in person at least once a week in the office, and we schedule our sprint review, sprint retrospective, and the next sprint planning meeting on the day when we're all together. Those are just a few examples of what we've learned from running experiments sprint after sprint over the last few years, and I've worked with the same team members for the last five years.
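As a concrete illustration of the Given-When-Then structure, here's a sketch of mine (not from the episode): a scenario written as a plain Python test, with the three BDD steps as comments. The case-routing rule and all the names are invented for the example.

```python
# Hypothetical example: a Given-When-Then scenario for a toy
# case-routing rule, expressed as a plain Python test.

def route_case(priority: str) -> str:
    """Toy routing rule: high-priority cases go to the escalation queue."""
    return "escalation" if priority == "high" else "standard"

def test_high_priority_case_is_escalated():
    # Given a new support case with high priority
    priority = "high"
    # When the case is routed
    queue = route_case(priority)
    # Then it lands in the escalation queue
    assert queue == "escalation"

test_high_priority_case_is_escalated()
```

Teams often write the scenario in plain language first, during refinement, and only later turn it into an automated test like this.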

[00:17:12] If you work in a business apps consulting practice for a Microsoft partner or even a large enough Microsoft customer, you should be sharing the results from your experiments with other Scrum teams in your organizations. This is how we get empiricism at hyper-scale. 

[00:17:27] Here's my challenge to you: if you're already working in an agile team building business apps, inspect your way of working during your next sprint retrospective. Find one, two, or maybe even three experiments that you can run in the following sprint, experiments you think will improve your team's way of working, and check the results in the retro after that. It could be how you handle your requirements, the stages across your Scrum board, how you deploy solutions into environments, how you test your increments, how you verify or validate features with your stakeholders, or how you document functionality. Or perhaps it's something related to the in-person or online meetings your team holds, like not using devices unless you're taking notes. Or it could be related to other tools you're using, like Azure DevOps or Miro. 

[00:18:19] The empirical approach encourages a culture of continuous improvement, where our team is always looking for ways to get better. This means that our Scrum teams are never satisfied with the status quo and are always striving to improve our processes, tools, and techniques. 

[00:18:35] Let me know how it goes. I'd love to hear what experiments you're running and what improvements your team has made to the way it works. Until then, keep sprinting. Bye for now.