Programming is a lot like making coffee. Let’s simplify things and say we’re starting in the kitchen. Even then, we still have to walk to the coffee machine. Normally, we don’t think about the process of walking, because the brain handles it automatically.
Now we’re at the coffee machine. We have to choose which coffee pod flavor to use. Then we have to open the lid of the coffee maker, insert the coffee pod, and pierce the pod by closing the lid. We make sure there’s enough water in the brewer, and top it off if necessary.
Next we have to choose a mug, make sure there’s some sweetening agent in it, and put it into the brewer. Finally, we push some buttons, wait two minutes, and partake of the sacred brown elixir.
Also: How to use ChatGPT to write code
How, you ask, is that like programming? Wildly oversimplifying: it’s a set of steps, and a program is also a set of steps. But where programming was once just a matter of figuring out and describing those steps, modern programming requires not only developing the algorithms (the steps), but also interfacing with a wide range of external factors.
Take, for example, moving your legs. If we were trying to program that action in a robot, we’d either have to develop and test all the physics-based control details that make the robot move smoothly and stay upright, or we’d use an existing set of programs previously developed by someone else.
The coffee pod system is like that as well. Coffee pods are a standard. In my case, I use K-Cups from Keurig, which is a proprietary standard (although competitors have managed to clone it). A lot of programming involves using external routines that are proprietary to other vendors, and that process involves many trade-offs relating to vendor restrictions, cost, loss of control, and the frequency of updates and bug fixes.
Even making sure there’s water in the brewer involves an external interface. My town just had a boil-water alert because a contractor broke a water main. Because of that one mistake, thousands of people were inconvenienced. The town did fix the problem, but that brought a new round of inconveniences: we were required to run our water for a period of time, and we had to order and replace various filters in our home.
Also: Implementing AI into software engineering? Here’s everything you need to know
That’s like relying on a cloud vendor for service, and having to deal with outages, price increases, and interface changes.
APIs and libraries
The terms programmers use for these interfaces and connections are APIs and libraries. Programmers use APIs and libraries to link together almost everything, because most people really don’t want to take the time to reinvent every aspect of their code. It’s much easier to buy a coffee brewer and some pods than go down to the workshop, get some brass tubing, and build one from scratch.
Also: I used ChatGPT to write the same routine in 12 top programming languages. Here’s how it did
But here’s the gotcha: Not all APIs and libraries work with all programming environments. For any given project, programmers may need to write code in several different languages, and then somehow get those languages to talk to each other.
This all adds up to incredible complexity. Every time we need to use a different interface, connect to a different API, or incorporate a different library, there are a ton of hoops to jump through. Most programmers can’t remember all the details, and the interfaces we use change often, so we spend a tremendous amount of time looking things up on the web.
The complexity itself often breeds bugs. And because vendors are often focused on competitive advantage, they sometimes make it really hard to debug something that crosses out of their ecosystems into others.
Also: OK, so ChatGPT just debugged my code. For real
For programmers, all of this is incredibly time-consuming, tedious, and unfun. We want to make something, not spend our time negotiating among vendors and their code, trying to convince everyone to play nice with each other.
Development tools
Classically, programmers have used tools to help manage all of this. We use integrated development environments like VS Code and the JetBrains IDEs to organize all our components. We use source code control and collaboration tools like GitHub and Bitbucket to make sure teams of programmers can keep code in sync. We use symbolic debuggers that take us under the hood and let us measure what’s going on while a program is running.
Also: One developer’s favorite Mac coding tools for PHP and web development
But until very recently, we never had an AI that could pick up some of that tedious work for us. That’s where Gemini Code Assist (formerly known as Duet AI for Developers) comes in. Today, Google is announcing Gemini Code Assist, an AI-powered coding assistant that lives inside development environments like VS Code and the JetBrains IDEs.
The benefit of this (when it works) cannot be overstated. Let’s say you’re writing some JavaScript and you want to set the focus to a certain button, but first you need to find that button in the DOM (the web page). Rather than writing that fairly mundane and tedious code yourself, you could simply drop a comment in your code that says, “Write me code that searches for an aria-label object named gallery.”
And whoosh… there it is. Even if it’s not perfect, you can open a chat window and discuss how to change up the code.
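For illustration, here’s a minimal sketch of the kind of code such a prompt might produce. This is my own hand-written example, not Gemini’s actual output; the aria-label value “gallery” comes from the example prompt above:

```javascript
// Minimal sketch: find the element whose aria-label is "gallery" and focus it.
// The attribute value comes from the example prompt above; adjust as needed.
const gallery = document.querySelector('[aria-label="gallery"]');

if (gallery instanceof HTMLElement) {
  gallery.focus(); // move keyboard focus to the button
} else {
  console.warn('No element with aria-label "gallery" found in the DOM.');
}
```

It’s trivial code, but it’s exactly the kind of boilerplate worth handing off so you can stay focused on the interesting parts.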
Also: If you use AI-generated code, what’s your liability exposure?
Another example: You can open a chat window and ask Gemini to explain a block of code you inherited from the coder who quit last month.
As Gemini becomes more knowledgeable, it will presumably know more APIs and libraries, so you’ll be able to offload more and more work to it and get its help debugging code. I’ve talked previously about how I used ChatGPT to debug my code (although, when I last tried, Gemini didn’t fare too well). This is Google, though, so I have every confidence it will become mighty over time.
Also: Gemini Advanced failed these simple coding tests that ChatGPT aced. Here’s what it got wrong
I’m already seeing some improvement. The following prompt asks for code in Keyboard Maestro (a screen automation scripting tool) and AppleScript, combined:
Write a Keyboard Maestro AppleScript that scans the frontmost Google Chrome window for a tab name containing the string matching the contents of the passed variable instance__ChannelName. Ignore case for the match. Once found, make that tab the active tab.
The Gemini LLM got it mostly right (it ignored the ignore-case requirement). Back in January, it couldn’t even get that far.
That makes tools like Gemini Code Assist (which is built on top of the Gemini AI engine) potentially much more useful. What’s important about the Gemini Code Assist announcement is that it runs inside your existing environments and works with your private codebase, which Google says can be “on-premises, GitLab, GitHub, Bitbucket, or even across multiple repositories.”
Also: I confused Google’s most advanced AI – but don’t laugh because programming is hard
In VS Code and the various JetBrains environments, Gemini Code Assist installs as a plugin and integrates right into the coding environment like any other plugin. This is smart, because it means the AI comes to where we code, rather than requiring us to take our code to the AI.
Google’s announcement includes an acknowledgement of enterprise-level needs, including ways for organizations to incorporate AI at scale while balancing security, privacy, and compliance requirements.
Also: Is AI in software engineering reaching an ‘Oppenheimer moment’? Here’s what you need to know
The company didn’t say whether your code will be used to train the AI, but Google clearly knows that leaking proprietary code into a public system would be an immediate deal-killer for most of its customers. So it’s likely that there’s a very thick wall between your code and its AI training data.
Quantiphi is an AI, data analytics, and machine learning consulting firm. Asif Hasan, a co-founder of Quantiphi, reports: “We have seen efficiency gains of over 30% through the adoption of LLM based code acceleration workflows. Gemini Code Assist’s time-saving code completion and bug resolution features have allowed us to push the boundaries of innovation and productivity.”
Full(ish) codebase awareness
Another interesting feature in Gemini Code Assist is full codebase awareness. When it’s released (it’s in private preview now), Google claims it will enable programmers to make large-scale changes across an entire codebase (more on that claim below). These changes include things like adding new features, updating cross-file dependencies, helping with version upgrades, performing comprehensive code reviews, and more.
If it works, this could be very helpful. If it doesn’t, it could ruin all your code in a single prompt. So far, a lot of the coding help AIs have provided has been with short routines of fairly few lines. But the ability to look at a project as a whole could be powerful.
And yet, as I said, it’s also worrisome, because if it screws up, those screw-ups could ripple through thousands of lines of code, and without detailed testing, you might never find all the errors and bugs introduced throughout the project. Needless to say, this is when you need version control and backups at a mission-critical level.
Also: Will AI take programming jobs or turn programmers into AI managers?
To accomplish this very large analysis task, Google is using its Gemini 1.5 Pro model. Gemini 1.5 Pro has a one-million-token context window, which sounds big, but is unlikely to be able to handle the “full codebase” or “entire codebase” programming challenges that Google claims in its announcement.
Take, for example, a project I recently sold off. When I was coding it, it had 153,259 lines of code across 563 files. A typical line of code had roughly 15 elements or words, which meant my entire codebase came to roughly 2,298,885 words. Since an AI token is roughly a word, that means my “entire codebase” (written by one dude, part time, nights and weekends) was more than double the capacity of Google’s new hotness.
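If you’re curious where your own project lands, a quick back-of-the-envelope script will tell you. This is my own rough sketch, not anything from Google; it assumes roughly one token per whitespace-separated word (real tokenizers count differently) and runs as a plain Node.js script with no dependencies:

```javascript
// Rough codebase token estimate, assuming ~1 token per whitespace-separated word.
// Usage: node estimate-tokens.js /path/to/project
const fs = require("fs");
const path = require("path");

// File extensions to count; tweak this list for your own project.
const CODE_FILE = /\.(js|ts|jsx|tsx|py|java|c|cpp|h|css|html)$/;

function countWords(dir) {
  let words = 0;
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    if (entry.name === "node_modules" || entry.name === ".git") continue;
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      words += countWords(full); // recurse into subdirectories
    } else if (CODE_FILE.test(entry.name)) {
      const text = fs.readFileSync(full, "utf8");
      words += text.split(/\s+/).filter(Boolean).length;
    }
  }
  return words;
}

const estimate = countWords(process.argv[2] || ".");
console.log(`Estimated tokens: ${estimate.toLocaleString()}`);
console.log(
  estimate > 1_000_000
    ? "Larger than a one-million-token context window."
    : "Might fit within a one-million-token context window."
);
```

Run something like that against a codebase the size of the one I just described, and the estimate lands well above the one-million mark, which is exactly the point.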
So, given the potential dangers of letting an AI loose throughout your entire codebase, and the limits of what “full codebase” can possibly mean when there are only a million tokens, I recommend you don’t count on that feature as a mission-critical part of your enterprise coding plans.
Code customization
Another newly announced feature in private preview is code customization for enterprises. What this means is that Gemini Code Assist can be trained on your own code, so that when you’re coding, it’s aware of your project’s context as it generates suggestions. Google says this feature is available for enterprises, but it’s not clear whether smaller companies will be able to benefit from it.
Turing is a company that uses AI within its cloud video security offerings. “Code customization using RAG [Retrieval-Augmented Generation] with Gemini Code Assist significantly increased the quality of Gemini’s assistance for our developers in terms of code completion and generation,” said Kai Du, Turing’s director of engineering and head of generative AI. He added, “With code customization in place, we are expecting a big increase in the overall code-acceptance rate.”
More knowledge sources
In its announcement, Google said, “We are providing connections for Gemini Code Assist to reach multiple source-code repositories including GitLab, GitHub and Bitbucket.” While it’s not entirely clear what this means, I interpret it as saying that the AI can use code and projects from these giant code-hosting repositories for training. I’ve reached out to Google for clarification, and if I get more information, I’ll update this story.
As part of today’s announcement, Google said it’s expanding its data and knowledge sources for Gemini Code Assist through partnerships, which include Datadog, DataStax, Elastic, HashiCorp, Neo4j, Pinecone, Redis, SingleStore, Snyk, and Stack Overflow.
Most programmers are familiar with Stack Overflow. It’s a crowd-sourced forum-based community with an enormous library of asked and answered questions about programming.
Prashanth Chandrasekar, CEO of Stack Overflow, waxed poetic on his company’s partnership with Google, saying “This landmark, multi-dimensional AI-focused partnership, which includes Stack Overflow adopting the latest AI technology from Google Cloud, and Google Cloud integrating Stack Overflow knowledge into its AI tools, underscores our joint commitment to unleash developer creativity, unlock productivity without sacrificing accuracy, and deliver on socially responsible AI.”
It’s a nice sentiment, but I do have my concerns, both with the AI incorporating Stack Overflow “knowledge” and with it incorporating code from the repositories. That’s because, well, programmers often make mistakes. I’ve often looked to Stack Overflow for coding help, only to find more wrong answers than right ones. And repositories like GitHub may hold an enormous amount of code, but much of that code is unfinished or buggy.
If AIs like Gemini Code Assist are training on data of such mixed usefulness, I’m concerned. Will Gemini Code Assist’s output have the mistakes and misunderstandings of thousands of novice programmers baked into it?
I don’t know, but it definitely makes testing even more relevant and necessary than ever.
That said, I’m quite hopeful, too. The ability to have deeply code-knowledgeable generative AI inside development environments is hugely exciting. I’ve used AI help when coding small snippets, and even at that small scale, I’m convinced the AI saved me months of work. For full-time programmers and enterprises with thousands of programmers, AI help could prove to be a huge time-to-deployment boon.
My overall advice still stands. Think like a manager with a particularly bright, yet somewhat sloppy helper. Give good assignments (your prompts), refine and clarify when mistakes are made, and double-check everything. Take your wins where you find them and apply your learnings from errors and failures to keep subsequent projects on track.
So what about you? Are you excited about Gemini Code Assist? Do you expect to add it to your development environment? Have you gotten coding help from an AI? Let us know in the comments below.
You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.