
From Resistance to Revolution: Accelerating New Tool Adoption at Scale, A Case Study Using GitHub Copilot


As a consulting company, we strive to learn about new and emerging technologies so that we can stay ahead of the curve and be ready for the first client that comes looking for a solution. This “Hungry” mindset, as we call it, is a core value of our culture at Ippon. We are always looking for ways to get out of our comfort zones, to challenge ourselves to grow, to improve ourselves as individuals, and to help our clients do the same. Exploring new tools that can improve developer processes and productivity is a big part of that pursuit. We encourage our developers to experiment and gain firsthand experience of the possibilities and challenges of these new tools so that we are well prepared before suggesting a technology as a solution. We recognize that for our clients this focus on experimentation can be difficult given budget constraints, or when teams are fully committed to projects and there is little extra time in the day to even read documentation. That’s why, when a client had a strategic initiative around driving GitHub Copilot adoption, we jumped on the opportunity to help roll out this new technology, deepen our understanding of it, and improve software delivery speeds across their teams at the same time.

The Challenge

Copilot is GitHub’s AI coding assistant, a plugin that works inside a developer’s integrated development environment (IDE). There are a number of similar technologies on the market, each with its own price tag, and all offering features to assist a developer with their coding needs, but at the time of writing Copilot is the market leader. Given this clear opportunity for developer productivity and upskilling, we encouraged our developers to set up their environments and begin using Copilot, which, surprisingly, wasn’t the most fruitful approach. In the first few months that Copilot was made available, we saw only a 30% adoption rate. Many of the developers dabbled with the tool’s functionality, but few were using it to the extent that we knew would provide measurable advantages. So the mission became clear: we needed our developers to adopt this new technology without compromising their time or requiring much more effort on their part, and this required us to get creative.

The Copilot Campaign

Experimentation with AI is essential to adoption. Finding where these tools can increase productivity in our workflows is the key to making them stick, and through trial and error we can find where a tool excels, because we handle these tasks daily. As part of our culture of experimentation, this is the message we push. We all know that it’s difficult to get people to try new things, especially when there are other factors competing for their time and energy. Trying something new takes an open mind and a willingness to fail, or at least an openness to spending hours on something that may not return on the investment. Our best solution was to incentivize developers, paying them real money to try this new thing. We came up with a few direct-action options that would give developers more of a reason to get out of their comfort zones and into the murky waters of experimentation. One option asked for focused blog articles with firsthand accounts and best practices for using Copilot. The second option was to join a focus group that committed to using Copilot in their daily work and to meeting twice a week for a month to share the experiments they conducted. This Copilot Campaign, as we called it, was by far the best bang for our buck. The group effort and continuous accountability for trying new techniques ultimately created a group of Copilot evangelists who could advocate for their experience and share it across our company. The following is an account of how things played out for that Copilot Campaign group, from the initial stages to the adoption of a Copilot coding routine, and all the successes and failures in between.

The Copilot Campaign consisted of developers working in various disciplines (data, cloud, backend, frontend), with senior developers contributing and leading the conversation. The first meeting was an initial conversation around everyone’s expectations and experiences to date with Copilot. The group turned out to be a good cross section of our overall developer community in terms of varying levels of experience with Copilot and other AI coding assistants. Meetings were held twice a week, with the idea that the beginning of the week would be spent in discussion and planning for that week’s experimentation goals, and the end of the week would be a summary of the outcomes. We mostly followed this pattern, but real life is never that structured. The initial conversations were about the ease of setting up and using Copilot. There was definite hesitation around getting set up and starting to experiment because of the unknown time commitment, but once that initial threshold was crossed and Copilot was in place, it didn’t take long for everyone to dive in and begin experimenting.

The first week of experiments consisted of scaffolding code and writing tests, asking for explanations of code as part of the peer review process, and translating code from one language to another. These experiments began with simple prompts in the Copilot chat window and then moved on to the inline feature, which generates code directly in the file. Both methods were useful in different ways depending on the desired outcome. After the first week of experimentation, developers were already engaging more often with the AI, and everyone agreed that it was a similar experience to using Stack Overflow or a Google search but without leaving the coding environment, which helped keep their work focused and on task.
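As an illustration of the inline style, a prompt often starts as a descriptive comment in the file and the assistant fills in the function and a matching test. The sketch below is hypothetical: the function name, logic, and test are ours, not output captured during the campaign.

```python
# Prompt-style comment a developer might write inline:
# "Write a function that validates an email address, and a unit test for it."

import re

def is_valid_email(address: str) -> bool:
    """Return True if the address looks like a simple user@domain.tld email."""
    # One non-empty local part, one "@", and a domain containing a dot.
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address) is not None

# The kind of test scaffold an assistant typically generates alongside it:
def test_is_valid_email():
    assert is_valid_email("dev@example.com")
    assert not is_valid_email("not-an-email")
    assert not is_valid_email("user@@example.com")
```

Generated scaffolds like this still need review: the regex above accepts many technically invalid addresses, which is exactly the kind of gap a developer has to catch.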

The second week involved more experimentation with prompting, code validation, and using the AI to help with writing documentation. One method was to include code snippets in the prompt, ask the AI to summarize them, and then use the output as part of the documentation. A takeaway from this work was that the tone used to converse with the AI seemed to affect the output: developers found that prompting as though the AI were another teammate was more effective, or at least more satisfying from a conversational standpoint.

By the third week, all the developers were using Copilot as part of their daily coding routine. Prompting was still under experimentation, but context had also become a focal point. A few developers working on configuration files with hundreds of lines found that there was a token limit when working with Copilot. Language models process text in small chunks of data, or tokens, so there is a limit to how much they can handle at one time. This meant that prompting for a specific section of code worked better than asking for a whole file to be written. Pulling documentation files into the context of a prompt was a useful technique for improving specificity as well. Another developer found that asking Copilot to translate a proprietary library from Java to Go produced a lot of unnecessary code and took more time to refactor than it would have taken to simply write it from scratch. The developers also found that Copilot was not good at admitting when it didn’t know an answer; rather than ask clarifying questions, it would generate a mediocre response instead, which everyone agreed would impede progress in the long run. All of these potential traps highlighted the need for developers to have a good understanding of the code they were asking to be generated, in order to know when the AI was not providing helpful solutions.
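The token-limit workaround, prompting about one section at a time, can be sketched as splitting a large file into prompt-sized pieces before sharing them with the assistant. The four-characters-per-token figure below is a rough rule of thumb we are assuming for illustration, not Copilot's actual tokenizer.

```python
def chunk_for_prompt(text: str, max_tokens: int = 2000,
                     chars_per_token: int = 4) -> list[str]:
    """Split text into chunks that stay under an approximate token budget.

    Splits on line boundaries so each chunk stays readable as a prompt.
    """
    budget = max_tokens * chars_per_token  # approximate character budget
    chunks, current, size = [], [], 0
    for line in text.splitlines(keepends=True):
        # Flush the current chunk before this line would overflow the budget.
        if size + len(line) > budget and current:
            chunks.append("".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        chunks.append("".join(current))
    return chunks
```

Each chunk can then be prompted about individually ("explain this section", "refactor this block"), which matches the developers' finding that asking about a specific section works better than asking for the whole file.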

In the fourth and final week, all the developers lamented the day they would no longer have the option to use Copilot, and expressed how seamlessly it had become integrated into their daily workflows, which was a pretty good indication of its value. The few remaining experiments included writing acceptance criteria for user stories, asking for ways to improve an application overall, debugging, and duplicating code for multiple environments. Writing specific user stories was a difficult task for Copilot unless it was given very precise context, though it did well at providing boilerplate and general acceptance criteria. The responses were similarly generic when it was asked to suggest ways to improve the application; it seemed to the developers that Copilot needs a lot of context to generate more specific answers. Debugging and duplicating code were more successful, and the developers noted that these tasks lent themselves to direct prompting, which may have been the reason for the better results.
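Duplicating code for multiple environments, one of the tasks where direct prompting worked well, often amounts to applying per-environment overrides to a base configuration. The sketch below shows that pattern; the config keys and environment names are illustrative, not taken from the client's setup.

```python
import copy

# Illustrative base configuration shared by all environments.
BASE_CONFIG = {
    "app_name": "demo-service",
    "replicas": 1,
    "log_level": "INFO",
}

# Per-environment differences; everything else is inherited from the base.
ENV_OVERRIDES = {
    "dev":  {"log_level": "DEBUG"},
    "prod": {"replicas": 3},
}

def render_configs(base: dict, overrides: dict) -> dict:
    """Produce one config per environment by overlaying each patch on a deep copy of the base."""
    return {env: {**copy.deepcopy(base), **patch}
            for env, patch in overrides.items()}
```

A direct prompt like "duplicate this config for dev and prod, changing only the log level and replica count" maps cleanly onto this structure, which may be why the developers saw better results on this kind of task.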

Outcomes 

By the end of the Copilot Campaign, it was clear that everyone involved had gained a solid understanding of best practices and ways to benefit from using Copilot in their work. We sent out an exit survey to go a bit deeper on each developer’s experience. Some of the notable responses:

“Having copilot as the ‘first step’ for troubleshooting code issues helped me define my tasks better, describe my issues better and ultimately, get things done faster.”

One developer wrote, “I assumed copilot was a glorified chat bot. My opinion of the product has improved, I now see a definite use case for the tool in everyday work,”

while another felt that “changes have been minimal” to their skills or workflow. When asked about best practices, most developers spoke about

“Giving copilot as much context as possible while prompting it like you are having a conversation with a colleague…”

and most agreed on improvements, like allowing a

“... broader context of company's documentation and their proprietary tools would help developers tremendously ...”

Here are some colorful graphs showing our developers’ self-assessments before and after the Copilot Campaign:

When asked to rate their level of experience with Copilot, before the campaign 22% of the developers had Never used Copilot and 78% Sometimes or Occasionally used it. After the campaign, 100% Sometimes or Often used Copilot. This supports the rest of our results: the changes we measured came from participants who had spent a good deal of time using Copilot directly.

When asked “Do you expend less mental effort on repetitive tasks when using Copilot?”, before the campaign 60% of the developers were Undecided whether they spent less mental effort on repetitive tasks, while after the campaign 100% Agreed or Strongly Agreed that they did.

When asked “When stuck, do you spend less time searching for results when using Copilot?”, before the campaign 75% of the developers Disagreed or were Undecided about whether they spent less time searching for results, while after 78% Agreed or Strongly Agreed that they spent less time searching when using Copilot.

When asked “Do you feel that Copilot improved your productivity?”, before the campaign 50% of developers Agreed that Copilot improved their productivity and 50% were Undecided, while after 78% Agreed or Strongly Agreed and 22% were Undecided. This shows that Copilot isn’t a game-changer for everyone, but overall it did boost perceived productivity among most developers.

From the survey data, self-reporting, and the weekly conversations during the Copilot Campaign, we can see that Copilot is helpful for many of the common issues developers face in their daily coding routines. It helped decrease the time it takes to search for answers and to perform simple repetitive tasks, which overall increased developer productivity. The developers also got a better sense of where Copilot wasn’t the best tool for the job, which is as useful as knowing where it can help.

Conclusion

Overall, a month of regular Copilot use brought about a noticeable appreciation for the tool, as well as a better understanding of how to use it to improve developer productivity. We were able to identify where Copilot truly added value, streamline many of our coding practices, and ultimately boost productivity. The team’s willingness to dive into uncharted waters, combined with our culture of collaboration and accountability, transformed initial skepticism into active adoption. Every member of the team continued to use Copilot after the study.

Through the strategy we laid out from the beginning, the developers were able to gain deeper insight into the tool’s usefulness. Incentivizing their participation minimized the barrier to entry, and once developers were on board we continued to foster a culture of experimentation by organizing group conversations and collaboration, encouraging failure, and testing the tool to its limits. Ultimately, by giving developers time to experience the tool firsthand and decide where it would be helpful in their own workflows, we were able to create a group of Copilot evangelists and early adopters who could take that knowledge back to their teams and increase the rate of Copilot adoption across the organization.

At the end of the day, the Copilot Campaign was more than just a technology experiment; it was a demonstration of how collaborative learning and continuous experimentation can help us embrace innovation, even with limited resources and time. This experience not only prepared us to better serve our clients with cutting-edge solutions but also reinforced the importance of fostering a “Hungry” mindset across our teams. Our commitment to staying ahead of the curve through continuous experimentation, collaboration, and a mutual drive for excellence is what sets us up for success, and the Copilot Campaign is another example of how our approach can yield significant returns in both skill development and operational efficiency.

If you want to boost developer productivity and empower your teams to embrace new technology with confidence, let’s connect. Contact us today to learn how we can help you drive successful technology adoption and unlock new levels of performance for your organization.

Post by Laurie Lay
May 6, 2025 1:00:00 AM

