It's said that the best teams can perform up to 20 times better than the average team. How can that be? Is it the people? The process? The tools? What are they doing that hasn't already been learned and exploited by everyone else?
Join us for a lively, interactive discussion where we share stories from our teams' experiences, both good and bad, and reveal what we've learned about team performance, along with lessons you can take back and share with your organization.
Complimentary lunch will be provided.
Mon, June 4 11:30 AM - 1:00 PM Downers Grove, IL REGISTER NOW!
Both Amazon and Microsoft Azure now offer dozens of services on which to build amazing technology solutions. Navigating this array of code-named, ever-evolving capabilities can be dizzying. And what about the promises they made to us? Reduce your data center footprint. Lower your costs. Get more resiliency. Be more secure. Solve bigger problems. Do they live up to these promises? Have the two vendors reached parity at this point?
Bring your questions and join us for a lively, interactive session where we separate fact from fiction and share our hard-learned lessons about what's working and what's not in cloud computing.
Complimentary lunch will be provided.
Tue, May 22 11:30 AM - 1:00 PM St. Louis, MO REGISTER NOW!
Microsoft is working with Polaris and other partners in the central US region to put together free workshops focused on app modernization, designed to help developers get hands-on with Azure. These events will focus on the most popular Azure PaaS and DevOps offerings. Each technical session will be followed by a whiteboard design exercise to encourage attendees to take what they’ve learned and design solutions in Azure.
This one-day workshop will focus on the most popular Azure PaaS and DevOps services. Each technical session will be followed by a hands-on Azure lab and a whiteboard design exercise. This workshop will help you gain a thorough understanding of the components of Azure and how you can take advantage of them as a developer.
Metrics and measures are always on the agenda when I work with organizations new to agile. We're bringing in a new way for them to get work completed, so it stands to reason that the old ways of measuring may no longer apply. The catch is that you get what you measure, so you really have to think about what you want to achieve.
If your goal is to get more stuff out the door, velocity is a great measure, but it is extremely easy to abuse and manipulate. But, you might ask, what value is getting stuff out the door if it's not the right stuff? How do you determine if you're building the right stuff? There are a number of tools available for assigning a value or relative value. Weighted Shortest Job First (WSJF) is popular because of the popularity of the Scaled Agile Framework (SAFe), but it can be complicated for people just starting out. Affinity estimation is a good tool for finding relative value to determine what you should be working on.
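To make WSJF concrete, here is a minimal sketch of the scoring arithmetic. In SAFe, cost of delay is the sum of three relative scores (user/business value, time criticality, and risk reduction/opportunity enablement), and WSJF divides that by job size. The feature names and scores below are invented for illustration.

```python
def wsjf(business_value, time_criticality, risk_reduction, job_size):
    """Return the WSJF priority score; higher means do it sooner."""
    cost_of_delay = business_value + time_criticality + risk_reduction
    return cost_of_delay / job_size

backlog = [
    # (name, business value, time criticality, risk reduction, job size)
    ("checkout redesign", 8, 5, 3, 8),
    ("audit logging",     3, 8, 8, 5),
    ("dark mode",         5, 1, 1, 3),
]

# Rank the backlog by WSJF score, highest priority first.
ranked = sorted(backlog, key=lambda f: wsjf(*f[1:]), reverse=True)
for name, *scores in ranked:
    print(f"{name}: WSJF = {wsjf(*scores):.2f}")
```

Note that because all the inputs are relative scores, the absolute WSJF numbers don't matter; only the ordering they produce does.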
So, theoretically, you now have a way to know what stuff is the right stuff, and to determine whether you're actually developing it on a predictable cadence. If you're measuring velocity, you should also look at something that counteracts that measure, like quality, to ensure you're not trying to go too fast. Using a counter-measure like this helps ensure that you aren't taking anything too far. You're not pushing the team to deliver too much, and you're also not pushing for gold-plated code at the expense of quantity delivered.
Once you are delivering the right things on a regular cadence, you can move on to second-level problems. The first of these is to examine whether you are delivering optimally. Do the metrics we have allow you to determine that? I’d argue the metrics you’ve got on hand don’t tell you that. There are a few new things you can keep an eye on in order to better understand your system. As your organization matures, you can measure the efficiency of the system by measuring flow via flow efficiency. While this is a Kanban idea, that doesn’t mean it’s not applicable outside of Kanban systems.
To get to flow efficiency you need to track two new metrics: lead time and overall work time. Lead time is the time from the customer request to the time it’s delivered to the customer. Think of this as the time from when you order the pizza to when it shows up on your doorstep. Overall work time is the time the item spends actively being worked on, in any state. Taking the pizza analogy further, this would be from the time the store starts making your pizza until it is ready to deliver. If it takes 30 minutes from order to door, and 10 minutes from start of making to ready to deliver, then the flow efficiency of that system is 10/30, or about 33%.
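The pizza example boils down to a one-line calculation, sketched here so the two inputs are unambiguous:

```python
def flow_efficiency(work_time, lead_time):
    """Fraction of lead time spent actively working (0.0 to 1.0)."""
    return work_time / lead_time

# 30 minutes from order to doorstep (lead time),
# 10 minutes actively making the pizza (work time).
eff = flow_efficiency(work_time=10, lead_time=30)
print(f"Flow efficiency: {eff:.0%}")  # -> Flow efficiency: 33%
```

The units cancel, so minutes, hours, or days all work, as long as both metrics use the same unit.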
Armed with this information, you can start looking at the process to determine if there are any bottlenecks or other places where it can be streamlined and improved. Starting to look at the process of building things will allow you to increase the amount you’re delivering, and with the other metrics you’ve already been capturing, you can ensure that it’s all quality. Starting with these metrics should get you through your first year of agile development.
The question of buy vs. build is often a very difficult one. Build, and you get everything exactly as you want it; buy, and you have to compromise, but implementation and support costs are lower. This is the dilemma one of our clients was experiencing. They had a home-grown deployment and release tool that worked exactly as they needed when they built it, but it was starting to show signs that it needed an overhaul. The current architecture wouldn’t support some of the newer features found in industry-leading products, and they were spending a lot of time supporting and maintaining it instead of building new features to move them forward. In addition, their aging infrastructure was pushing them to evaluate whether their solution would support cloud deployments. It was time to re-evaluate the situation and determine if it made sense to continue with their solution.
The existing platform was managing the build and deployment of their Platform as a Service (PaaS) to over 80,000 individual physical nodes, each one potentially having its own configuration. With the configuration unmanaged and uncontrolled, it was nearly impossible to support and troubleshoot all of the possible issues. Our DevOps expert looked at their problems and what they were trying to achieve, and created a proof of concept (POC) for them to try out. The POC used Azure to manage a build and deployment process that would span cloud and on-premises nodes, and it would also allow them to build and deploy configurations. He created release process templates and runbooks to allow easier management of the process and provide a graphical representation of what was happening. The deployment tool would periodically audit the environments for compliance with the configuration templates and fix the problems it discovered without user intervention. If they wanted to change the configurations, they only needed to update the template and it would blast it out to all of the subscribed nodes nearly instantly.
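The audit-and-remediate loop described above can be sketched in a few lines. This is not the client's actual tool; the node data, template format, and helper functions are invented to illustrate the pattern of comparing live configuration against a template and fixing any drift automatically.

```python
# Desired-state template that every node should match.
TEMPLATE = {"max_connections": 500, "tls": "1.2", "log_level": "warn"}

def audit(node_config, template):
    """Return the settings where the node has drifted from the template."""
    return {key: value for key, value in template.items()
            if node_config.get(key) != value}

def remediate(node_config, drift):
    """Apply the template values for every drifted setting."""
    node_config.update(drift)

# Two illustrative nodes: web-01 is compliant, web-02 has drifted.
nodes = {
    "web-01": {"max_connections": 500, "tls": "1.2", "log_level": "warn"},
    "web-02": {"max_connections": 200, "tls": "1.0", "log_level": "warn"},
}

for name, config in nodes.items():
    drift = audit(config, TEMPLATE)
    if drift:
        print(f"{name}: drift detected in {sorted(drift)} - remediating")
        remediate(config, drift)
```

In a real system the loop would run on a schedule against thousands of nodes, and changing TEMPLATE would propagate to every subscriber on the next audit pass, which is the "update the template once" property the client wanted.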
The Azure deployment system would allow them to manage not just the potential cloud environment, but also to better manage their existing infrastructure as they started the transition. They could immediately start building their standard configuration and deploy it to the entire environment, giving them a known baseline to support from. Configuration management was vastly simplified and compliance could be all but guaranteed. An as-yet-untapped benefit is that this will enable them to migrate to a cloud environment that can be scaled based on actual need, not worst-case scenario. They can now provision for actual computing need rather than buying the biggest server they think they’ll need, which leaves capacity sitting idle and drives up the cost of implementation.