The rise of GenEng: How AI changes the developer role
GM & VP of Engineering, Google Cloud Databases
The widespread availability of generative AI, such as chatbots powered by large language models (LLMs), has captured our imaginations about the possibilities of artificial intelligence. In the enterprise, generative AI has the potential to revolutionize customer experiences and employee productivity. As enterprise leaders try to distill their opportunities and competitive risks, I see a fundamental shift in who will execute those grand plans—and how that shift itself will likely become a force multiplier.
Many of us still remember Steve Ballmer, then CEO of Microsoft, jumping around on stage chanting “Developers, developers, developers!” Ballmer knew that the developer community’s ability to drive business outcomes for enterprises was the biggest opportunity for platform adoption.
Inside Google, we’re having all sorts of interesting conversations about how to enable developers with AI, but here’s my personal take: I believe we are entering a “post-training era” in which application developers will drive the bulk of the innovation in applying generative AI to solve business problems.
This is not to say that data science and MLOps are no longer relevant. On the contrary, I am seeing a huge amount of innovation and fast iteration on building out the infrastructure and approaches for training the next generation of LLMs, including more domain-specific and lighter-weight models that meet the requirements of a much broader set of use cases. However, I believe the availability of LLMs is democratizing access to AI for the broader community of developers, who will not need to become experts in deep learning, but rather expand their skills to integrate LLMs into enterprise application architectures. I draw the parallel to compilers, which were built by few but leveraged for innovation by many.
To get an appreciation for the difference in scale, consider that, according to the Bureau of Labor Statistics, in the US alone, there are over two million software developers but only around 150,000 data scientists. These roles overlap at times, but still: imagine at least an order of magnitude more technical practitioners being able to innovate with AI.
I think of this shift as the rise of generative engineering, or GenEng. Just as developers integrated ops practices into software engineering through the DevOps movement, I see the GenEng revolution being led by developers who build deep proficiency in how to best leverage and integrate generative AI technologies into applications.
What defines GenEng? While the rise of general-purpose LLM-based chatbots has captured our imagination, the majority of enterprise use cases cannot tolerate their shortcomings, such as hallucinations. In fact, the real value for enterprises comes when they combine generative AI with their proprietary data to produce accurate, domain-specific outcomes. Developers are employing techniques such as retrieval-augmented generation (RAG) to ground LLMs in that data and build dependable, high-value enterprise applications. The generative engineer is a developer who enhances their skills with prompt engineering, embeddings for proximity searches (e.g., vector-enabled databases), and frameworks that help build LLM-powered applications (e.g., LangChain).
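To make the RAG pattern concrete, here is a minimal sketch of the retrieve-then-prompt loop. All names are illustrative, and the bag-of-words "embedding" is a stand-in: a production system would use a learned embedding model and a vector-enabled database, then send the assembled prompt to an LLM.

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words term counts. A real RAG system would call
    # a learned embedding model and store vectors in a vector database.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=1):
    # Rank proprietary documents by proximity to the query embedding.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    # Augment the prompt with retrieved context so the LLM answers from
    # enterprise data rather than hallucinating.
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Our headquarters are in Seattle.",
]
prompt = build_prompt("How long do refunds take?", docs)
print(prompt)
```

The shape is the same whatever the stack: embed the corpus once, embed each query, retrieve the nearest documents, and inject them into the prompt as grounding context.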
These GenEng practitioners will need many of the same skills as traditional application developers, including architecting for scale, integrating enterprise systems, and understanding requirements from the business user. These skills will be augmented with the nuances of building generative AI applications, such as involving business domain experts in validating aspects of prompt engineering and choosing the right LLM based on price/performance and outcomes.
What are some fundamental changes in runtimes, frameworks, and tools to best enable the generative engineer? That is for the next blog post. For now, welcome GenEng practitioners!