As developers, we constantly seek tools that streamline our workflow and enhance productivity. GitHub Copilot and Sourcegraph Cody are two prominent AI coding assistants, but how do they fare in real-world development tasks? In this blog post, I share a practical comparison based on my experience using both tools on a production-grade AI-powered video editing app.
1. Learning About a Codebase
Starting with a new codebase can be daunting. Without AI, you might scour README files or ping the maintainer for a walkthrough. With Cody and Copilot, you can simply ask. Cody shone here, pulling context from 14 files to provide a comprehensive overview, while Copilot offered a more superficial take.
Takeaway: Cody’s ability to pull context from multiple files makes onboarding smoother.
2. Setting Up the Development Environment
Setting up a local dev environment can be a maze of dependencies and configurations. Cody guided me through installing dependencies, setting up the .env file, and running migrations. Copilot, however, missed several steps, which would have led to setup errors.
Takeaway: Detailed, context-aware setup instructions from Cody can save you from frustrating setup issues.
3. Familiarizing with Code
Understanding existing functionality is crucial. Cody correctly identified model properties and explained how to calculate OpenAI API costs, directly referencing functions in the codebase. Copilot guessed and gave generic advice, requiring more manual validation.
Takeaway: Cody’s context-aware explanations can accelerate your understanding of the codebase.
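To make the cost question concrete, the logic Cody walked me through boils down to multiplying token counts by per-model rates. Here is a minimal sketch of that idea (in Python for illustration; the app itself is PHP, and the model name and prices below are placeholder assumptions, not actual OpenAI pricing):

```python
# Illustrative per-1K-token rates; these are placeholders, NOT real OpenAI prices.
PRICE_PER_1K = {
    "example-model": {"input": 0.005, "output": 0.015},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of one completion request from its token usage."""
    rates = PRICE_PER_1K[model]
    return (prompt_tokens / 1000) * rates["input"] + \
           (completion_tokens / 1000) * rates["output"]
```

The point is less the arithmetic than the fact that Cody pointed me at where this logic already lived in the codebase, instead of inventing it from scratch.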
4. Adding New Functionality
When I needed a function to convert milliseconds to SMPTE timecode, Cody correctly generated PHP code matching the codebase. Copilot, lacking that context, produced a JavaScript function instead.
Takeaway: Contextual awareness in Cody ensures that generated code fits your existing stack.
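The code Cody generated was PHP; for illustration, the same millisecond-to-SMPTE conversion can be sketched like this (shown in Python, and assuming a non-drop-frame, integer frame rate of 30 fps):

```python
def ms_to_smpte(ms: int, fps: int = 30) -> str:
    """Convert a millisecond offset to an HH:MM:SS:FF SMPTE timecode.

    Assumes a non-drop-frame, integer frame rate (30 fps by default).
    """
    total_seconds, remainder_ms = divmod(ms, 1000)
    frames = remainder_ms * fps // 1000          # leftover ms as a frame count
    hours, rem = divmod(total_seconds, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

# e.g. ms_to_smpte(3_661_500) → "01:01:01:15"
```

Note that drop-frame rates like 29.97 fps need a more involved calculation; this sketch sticks to the simple integer-fps case.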
5. Editing Code
Using Cody’s /edit command, I easily updated the function to include frame numbers. Copilot required more manual guidance. Both tools eventually provided correct updates, but Cody’s intuitive commands streamlined the process.
Takeaway: Cody’s commands like /edit can simplify iterative code edits.
6. Autocomplete
Autocomplete can make or break your coding flow. Cody consistently provided accurate suggestions based on the context, while Copilot struggled, often suggesting incorrect completions.
Takeaway: Reliable autocomplete from Cody keeps your workflow smooth and error-free.
7. Finding and Fixing Issues
Both tools helped identify and fix type issues, but Cody’s additional insights via the /smell command offered extra value, highlighting improvements like dependency injection and logging.
Takeaway: Cody’s deeper analysis helps catch subtle issues and suggests best practices.
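As a rough illustration of the dependency-injection suggestion from /smell (a hypothetical service sketched in Python; the class and method names are mine, not from the codebase): instead of constructing its API client and logger internally, the service receives them, so tests can swap in fakes.

```python
import logging

class TranscriptionService:
    """Hypothetical example: collaborators are injected rather than created
    inside the class, which makes them easy to replace in tests."""

    def __init__(self, client, logger: logging.Logger):
        self.client = client   # injected API client
        self.logger = logger   # injected logger

    def transcribe(self, audio_path: str) -> str:
        self.logger.info("Transcribing %s", audio_path)
        return self.client.transcribe(audio_path)
```

The logging call doubles as the second /smell suggestion: because the logger is injected too, a test can silence it or capture it without touching global state.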
8. Generating Tests
Writing tests can be tedious. Both tools generated passing unit tests, but Cody’s multiple scenarios provided more comprehensive coverage.
Takeaway: Use Cody for thorough and varied test generation.
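The multi-scenario coverage Cody produced looked roughly like this (a hypothetical helper and tests sketched in Python for illustration; the real suite targeted the PHP app):

```python
import unittest

def timecode_to_seconds(tc: str) -> int:
    """Hypothetical helper: parse an 'HH:MM:SS' timecode into total seconds."""
    hours, minutes, seconds = (int(part) for part in tc.split(":"))
    return hours * 3600 + minutes * 60 + seconds

class TimecodeToSecondsTest(unittest.TestCase):
    def test_zero(self):
        self.assertEqual(timecode_to_seconds("00:00:00"), 0)

    def test_typical_value(self):
        self.assertEqual(timecode_to_seconds("01:01:01"), 3661)

    def test_malformed_input_raises(self):
        with self.assertRaises(ValueError):
            timecode_to_seconds("not-a-timecode")
```

The happy path, a boundary value, and a malformed input in one pass is exactly the kind of spread that saves you from writing the boring cases yourself.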
9. Documenting Code
Documentation is often neglected but crucial. Both tools generated helpful documentation, with Cody offering concise explanations and Copilot providing detailed steps.
Takeaway: Both Cody and Copilot can effectively assist with generating and explaining documentation.
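For a sense of what the generated documentation looked like, here is a hypothetical docstring in the style both tools produce (Python for illustration; the function, body, and wording are mine, not output copied from either tool):

```python
def split_clip(path: str, start_ms: int, end_ms: int) -> str:
    """Extract a sub-clip from a video file.

    Args:
        path: Filesystem path to the source video.
        start_ms: Start of the sub-clip, in milliseconds from the beginning.
        end_ms: End of the sub-clip, in milliseconds; must exceed start_ms.

    Returns:
        The path of the newly written sub-clip file.

    Raises:
        ValueError: If end_ms is not greater than start_ms.
    """
    if end_ms <= start_ms:
        raise ValueError("end_ms must be greater than start_ms")
    # Hypothetical stand-in: the real implementation would invoke the
    # video pipeline; here we just derive the output path.
    return f"{path}.{start_ms}-{end_ms}.mp4"
```

Documenting arguments, return value, and failure modes in one place is precisely the chore both assistants handled well.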
Conclusion
Through these scenarios, Sourcegraph Cody emerged as the clear winner, demonstrating the critical importance of context in code AI. While GitHub Copilot has its strengths, Cody’s deeper integration and understanding of the codebase made it more reliable and helpful in real-world tasks. If you’re looking to enhance your development workflow, give Cody a try and experience the difference context can make.