Investigating the Google Antigravity AI coding agent tool
Will AI replace programmers?
I don’t think there will be a wholesale replacement, but it is interesting to see the recent advancements in AI coding tools. I’ve used GitHub Copilot for a little while and also played around with continue.dev with local models; nothing too exciting, and I need to be careful to use the company version of the tool when working with company code (to avoid leaking secrets).
So it was with some interest that I saw the recent announcements introducing Google Antigravity. Two things caught my attention about the tool:
- I’ve been using Gemini Deep Research as a preliminary investigation tool. It is a step forward from what I used before because it will find and pull in references from the web. Antigravity looks like it has “agents” with similar capabilities.
- Google recently released its newest Gemini 3 model, and this was an opportunity to try that as well.
Overall summary
My overall takeaway is that the tool let me develop a prototype much more quickly than I could have before (using web searches, code examples, and experimentation). However, it wasn’t great at getting me past some basic misunderstandings and problems, and I needed to learn how to go back and forth with it.
Problem statement and resultant tool
The project I chose was to build a prototype tracing tool using the ROCprofiler-SDK interfaces. These interfaces have a reasonable amount of documentation and some example programs. However, they are also moderately complex, and I wanted to see whether Google Antigravity could create and enhance a prototype that would serve both as a useful tool and as a working demonstration of how the APIs work.
I also picked this project because it might produce a useful tool while using only public information, not proprietary work information.
I had tried this previously using Gemini Deep Research to create an investigation report along with sample code. That sample code didn’t quite work, so I was back to debugging it. The idea was that Google Antigravity might do a better job because the coding agent and the investigation are essentially in the same tool.
The resultant tool can be seen on GitHub. Getting to a first version of the tool was remarkably quick (~four days), much quicker than it would have been otherwise.
Quick code generation (examples as well as pros and cons)
The first tasks I gave it demonstrated the overall strengths and weaknesses of its task/walkthrough approach. I essentially gave it a high-level prompt saying I wanted to create (see the sketch after this list for the general shape of such a tool):
- A tracing tool with both C and C++ implementations
- That demonstrated the ROCprofiler-SDK with both buffered and callback APIs
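For anyone unfamiliar with the SDK, a rocprofiler-sdk tool is a shared library that exports a registration entry point; the SDK calls it at load time, and the tool installs its tracing services from an initialize callback. Below is a minimal sketch of the callback-tracing flavor, adapted from my reading of the SDK’s sample code, so treat the exact names and signatures as illustrative rather than authoritative:

```cpp
#include <rocprofiler-sdk/rocprofiler.h>

#include <cstdio>

namespace
{
rocprofiler_context_id_t client_ctx = {};

// Invoked for each traced API event (enter/exit phases).
void tool_tracing_callback(rocprofiler_callback_tracing_record_t record,
                           rocprofiler_user_data_t* /*user_data*/,
                           void* /*callback_data*/)
{
    std::printf("traced operation: kind=%d op=%d phase=%d\n",
                (int) record.kind, (int) record.operation, (int) record.phase);
}

int tool_init(rocprofiler_client_finalize_t /*fini_func*/, void* /*tool_data*/)
{
    rocprofiler_create_context(&client_ctx);
    // Trace HIP runtime API calls through the callback tracing service.
    rocprofiler_configure_callback_tracing_service(
        client_ctx, ROCPROFILER_CALLBACK_TRACING_HIP_RUNTIME_API,
        nullptr, 0, tool_tracing_callback, nullptr);
    rocprofiler_start_context(client_ctx);
    return 0;
}

void tool_fini(void* /*tool_data*/) { rocprofiler_stop_context(client_ctx); }
}  // namespace

// The entry point the SDK looks for in a tool library.
extern "C" rocprofiler_tool_configure_result_t*
rocprofiler_configure(uint32_t /*version*/, const char* /*runtime_version*/,
                      uint32_t /*priority*/, rocprofiler_client_id_t* id)
{
    id->name = "prototype-tracer";
    static rocprofiler_tool_configure_result_t cfg = {
        sizeof(rocprofiler_tool_configure_result_t), &tool_init, &tool_fini, nullptr};
    return &cfg;
}
```

The buffered flavor is similar in shape, except the initialize callback also creates a buffer and registers a buffer tracing service that delivers batches of records asynchronously.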
Before I knew it, there was a whirlwind of activity as it first presented a plan and then implemented that plan for the first commit. I think it is supposed to ask me to approve things, but it seemed to only briefly pause and then immediately proceed without my providing explicit direction. It did a reasonable job and gave me somewhat working demonstration code that was more complete than what I would have produced on my own.
This start shows some of the strengths and weaknesses of the tool. There is a whirlwind of activity in the prompt screen as it thinks through the problem, followed by a quick implementation. As I’ve worked with it, it also seems to make clumsy initial mistakes here and there, followed by a further flurry of activity fixing things and trying again.
Controlling some of the behavior
With this quick code-and-try behavior, I found myself directing additional tasks, such as asking for regression tests and for program examples in the documentation. I would review these and explicitly ask it to retry and retest. This gave me some structure that also helped guide the tool. With Copilot I’ve found this kind of guidance can go in an instruction document, but I haven’t seen enough on the best ways to do this here.
The other thing I found myself doing more over time was explicitly asking for an investigation/research report with no implementation (yet). You can see some of these in the doc directory. Breaking things up this way gave me a chance to review the overall analysis and to provide a more targeted follow-up prompt.
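As a made-up example of the kind of staged prompt I mean (not a verbatim prompt from the project, and the file name is hypothetical):

```
Research how the buffered tracing service delivers its records and write
the findings to doc/buffer-notes.md, citing the SDK headers and samples
you used. Do not change any code yet; I will review the report and follow
up with a separate implementation request.
```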
Challenging exercise – performance counters didn’t work…
One challenge with my implementation was that the performance counters turned out not to be fully supported on my Strix Halo 395+ machine (they might be in a later ROCm 7.9 release). However, the coding agent would go into its classic whirlwind behavior and announce that everything was done! It took me some extra steps to realize this was what was going on and then guide the tool into tests that confirmed and exposed the behavior.
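What finally broke the loop was asking for a small confirmation test that fails loudly instead of assuming success. Here is a sketch of the pattern; collect_counter_records() is a hypothetical stand-in for whatever counter-collection path the tool uses, not an SDK call:

```cpp
#include <cstdio>
#include <cstdlib>
#include <vector>

// Hypothetical stand-in: run a trivial GPU workload with counter
// collection enabled and return whatever counter values arrived.
// Stubbed here so the sketch compiles; replace with the real path.
std::vector<double> collect_counter_records() { return {}; }

int main()
{
    const auto records = collect_counter_records();
    if (records.empty())
    {
        // The failure mode on my machine: every call "succeeds" but no
        // counter data ever shows up. Make that an explicit failure.
        std::fprintf(stderr,
                     "FAIL: no counter records returned; counters are likely "
                     "unsupported on this GPU/ROCm combination\n");
        return EXIT_FAILURE;
    }
    std::printf("PASS: %zu counter records collected\n", records.size());
    return EXIT_SUCCESS;
}
```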
Challenging exercise – conceptual issues with timing dependencies…
The other exercise I asked it to do was to integrate logging information from one of the libraries (rocBLAS) with information coming from the tracing interface. While I didn’t realize it at first, there were some basic timing issues with my request. In particular, something I thought would all be synchronous and matched between the interfaces turned out to be much more asynchronous than I realized.
This led me down a path of having the tool keep coming up with more complex synchronization schemes that tried to look up information without realizing the basic problem. Eventually I figured it out, backed out the much-too-complex implementation, and reworked the exercise. The tool was both a help and a hindrance here: it didn’t help much with the underlying problem, though it made it quick to code alternatives.
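In hindsight the core issue is simple to state: the rocBLAS log line is emitted on the host when the API call is made, while the tracing record for the resulting work arrives later, after the GPU actually executes it, so matching the two streams by arrival order cannot work. Here is a sketch of the reworked idea, joining on a shared key instead of on order; the record structs and the dispatch-ID field are hypothetical illustrations, not rocBLAS or SDK types:

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Hypothetical record shapes for illustration only.
struct LogEvent   { uint64_t dispatch_id; std::string api_call; };  // host side, in call order
struct TraceEvent { uint64_t dispatch_id; uint64_t gpu_end_ns; };   // async, in completion order

// Join the two streams on a shared key rather than assuming the
// i-th log line corresponds to the i-th trace record.
std::vector<std::pair<LogEvent, TraceEvent>>
match_events(const std::vector<LogEvent>& logs, const std::vector<TraceEvent>& traces)
{
    std::map<uint64_t, TraceEvent> by_id;
    for (const auto& t : traces) by_id.emplace(t.dispatch_id, t);

    std::vector<std::pair<LogEvent, TraceEvent>> matched;
    for (const auto& log : logs)
        if (auto it = by_id.find(log.dispatch_id); it != by_id.end())
            matched.emplace_back(log, it->second);
    return matched;
}
```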
Overall review
I think I’ll use Google Antigravity as a rapid prototyping tool for some new areas, but I’m not sure I trust it (as much as Copilot) for refactoring or working with existing code. It does “research” to quickly find alternatives and new concepts, but it is also a bit haphazard in trying things and can potentially break existing code. That is OK for a quick prototype exploration, less so for incrementally adding to a larger code base.
To bring it back to my starting question: these AI tools are definitely a huge shift in capability, but they still rely on programmer skill to guide them along. So my sense is that a developer who uses the tools will be more skilled and productive than one who doesn’t; in that sense, one person can get more done. However, given the state of the tools, a fair amount of guidance is still required, and some of those aspects are tough to really automate.
