Four Big Bets For Better AI Research: A Personal Journey

By Sumit Gulwani, Distinguished Scientist and Vice President

It’s a big shift to go from being motivated by publication in prestigious conferences and journals to being motivated by solving real problems for real people.

Halfway through my 18-year research career working on program synthesis, the task of automatically constructing a program that satisfies a given high-level specification, I had a series of epiphanies that profoundly changed the way I approach research. Yes, I had a strong and growing publication list, h-index, and set of awards. Yes, I had a nice car and could take the occasional exotic vacation. Yet meaning, legacy, and a sense of purpose were missing.

On a flight home from a prestigious conference, I sat next to a woman who asked me to help her with an Excel problem, which I sheepishly admit I was unable to solve. However, that moment opened a door for me: would it be possible to apply artificial intelligence to help people avoid simple repetitive tasks? Could I approach research from a different value system than the one academia had trained me in? This encounter propelled me to develop a research approach that I call the Four Big Bets. I call them “bets” because each required a leap of faith and an investment in a different way of doing things. As a result, my value system and my daily work are better aligned, and I believe I’m a much better researcher for it. I’m also playing the role of bellwether for a shift in research philosophy at Microsoft: from pure research to research focused on eventual productization, filling the gap between basic and applied research.

Big Bet One
The first Big Bet is one I call “customer connection.” In the past, I would find hard problems that I thought I could get published and then work on those. In the new paradigm, my goal is to solve real problems that real people are experiencing; in fact, I want to find the simplest problems whose solutions yield the biggest benefits. So, inspired by the woman I met on the airplane, I spent time on spreadsheet help forums and identified problems that program synthesis could solve. While mining those forums, I discovered a clear pattern: people want to write scripts for transforming a column of data, but they don’t know how, so they spend a lot of time asking experts for help. What if they could give an example of the output they are looking for, and the tool would figure out the script automatically? That’s an ideal application for program synthesis: automating tedious, repetitive tasks from examples.

In program synthesis, the goal is to create a computer program from a high-level logical or programmatic specification. This has been a long-standing challenge, with some recent breakthroughs. For instance, program synthesis technologies are now being used to discover new algorithms that would take humans months to find, and to superoptimize code so that it runs several times faster than human-optimized versions.

However, those are program synthesis tools aimed at algorithm designers or performance experts. What if you don’t know how to code at all, or how to write logical specifications? To help the 99 percent of people who want to automate their tasks but are non-programmers, you have to start by listening to them.

When I started listening, I was looking for the simplest problem that program synthesis could solve and that would empower the most people to achieve more. I observed that these people struggled with simple spreadsheet transformations but were able to express their intent using input-output examples. This led me to invent a programming-by-example synthesizer that Excel really needed, now known as Flash Fill, and to write a research paper about solving that problem. Today, it’s my most highly read and cited article, and the feature is what CNN Money called “Excel 2013’s coolest new feature.” Pretty good for a program synthesis PhD researcher. By solving this problem, I met both of my goals: practical impact and academic impact. I became addicted to that success and started to apply that big bet, customer connection, to other domains.
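
To make the idea concrete, here is a minimal programming-by-example sketch in Python. It is emphatically not the Flash Fill algorithm: the two program shapes (fixed-position substring and k-th token) and every function name here are hypothetical simplifications. What it does share with the real system is the core loop of enumerating programs consistent with each example, intersecting the candidate sets across examples, and ranking whatever survives.

```python
# A toy programming-by-example synthesizer, inspired by (but far simpler
# than) Flash Fill. The DSL has just two program shapes, both invented
# for this illustration.
import re

def candidate_programs(inp, out):
    """Enumerate all toy-DSL programs consistent with one example."""
    progs = []
    # Shape 1: fixed-position substring, e.g. s[9:14].
    for i in range(len(inp)):
        for j in range(i + 1, len(inp) + 1):
            if inp[i:j] == out:
                progs.append(("substr", i, j))
    # Shape 2: k-th token after splitting on non-alphanumeric characters.
    tokens = re.split(r"\W+", inp)
    for k, tok in enumerate(tokens):
        if tok == out:
            progs.append(("token", k))
    return progs

def run(prog, inp):
    """Execute a toy-DSL program on a new input."""
    if prog[0] == "substr":
        _, i, j = prog
        return inp[i:j]
    _, k = prog
    tokens = re.split(r"\W+", inp)
    return tokens[k] if k < len(tokens) else None

def synthesize(examples):
    """Intersect candidate sets across all examples, then rank."""
    common = None
    for inp, out in examples:
        cands = set(candidate_programs(inp, out))
        common = cands if common is None else common & cands
    # Prefer token programs: they generalize better than fixed offsets.
    return sorted(common, key=lambda p: 0 if p[0] == "token" else 1)

examples = [("Gulwani, Sumit", "Sumit"), ("Doe, Jane", "Jane")]
best = synthesize(examples)[0]
print(run(best, "Curie, Marie"))  # -> "Marie"
```

The ranking step is where disambiguation happens: a fixed-offset substring also explains each example in isolation, but only the token-based program generalizes to new rows, which is precisely the ambiguity a system like Flash Fill must resolve at much larger scale.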

Big Bet Two
The second Big Bet was to develop a framework that facilitates creating such synthesizers for different task domains or application areas. Instead of creating one implementation, and then another from scratch for each new domain, I wanted to do something bigger and better: create an algorithmic framework of reusable components that provides value across all domains.

The demand for program synthesis is fueled by two business needs: data wrangling, on which data scientists spend 80 percent of their time bringing raw data into a structured form, and code refactoring, on which application-migration developers spend 40 percent of their time. Both of these numbers would come way down if we could develop example-based synthesizers for the various domains of repetitive tasks that arise in these application areas. Those synthesizers, in turn, can be facilitated by a general-purpose synthesis framework that makes it easy to create synthesizers for specific domains.

To develop such a framework, we had to modularize and generalize the key ideas behind the various domain-specific program synthesizers that we had already built. We made multiple attempts, many of which failed, but some of which took us slowly and steadily to the ultimate insight that lay hidden behind all our synthesizers: a simple and powerful theory of inverse computations that guides the search backward from input-output examples to programs. This framework has since served as the foundation for designing and developing new domain-specific synthesis algorithms and implementations, yielding an order-of-magnitude improvement in our ability to deliver solutions for different verticals.
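
As a rough illustration of that insight, consider the sketch below, written against a hypothetical three-operator grammar (Input, Const, Concat) that is far smaller than any real synthesis DSL. The key move is that the inverse semantics of Concat back-propagates the output example: every split of the output string becomes a pair of smaller subgoals, so the search runs backward from the example toward candidate programs.

```python
# A minimal sketch of deductive, example-driven search via inverse
# semantics. The grammar, ranking heuristic, and depth bound are all
# invented for this illustration.

def synthesize(inp, out, depth=2):
    """Enumerate toy-DSL programs P with evaluate(P, inp) == out."""
    progs = []
    if out == inp:
        progs.append(("Input",))
    progs.append(("Const", out))  # always consistent, but overfits
    if depth > 0:
        # Inverse semantics of Concat: back-propagate the output example
        # by turning every split point into two smaller subgoals.
        for i in range(1, len(out)):
            for left in synthesize(inp, out[:i], depth - 1):
                for right in synthesize(inp, out[i:], depth - 1):
                    progs.append(("Concat", left, right))
    return progs

def evaluate(p, inp):
    """Forward semantics of the toy grammar."""
    if p[0] == "Input":
        return inp
    if p[0] == "Const":
        return p[1]
    return evaluate(p[1], inp) + evaluate(p[2], inp)

def literal_chars(p):
    """Ranking heuristic: fewer hard-coded characters means the program
    is more likely to generalize to unseen inputs."""
    if p[0] == "Const":
        return len(p[1])
    if p[0] == "Concat":
        return literal_chars(p[1]) + literal_chars(p[2])
    return 0

best = min(synthesize("Ada", "Hello, Ada!"), key=literal_chars)
print(best)                     # e.g. Concat(Const("Hello, "), Concat(Input, Const("!")))
print(evaluate(best, "Grace"))  # -> "Hello, Grace!"
```

Real frameworks avoid the exponential blow-up of this naive enumeration by sharing subproblems, for example through version-space-style representations, but the backward-propagation structure is the same.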

I want to point out that this approach to research takes a village. It takes interns, engineers, researchers, and funders. Every time you re-implement an algorithm in a more general form, it takes time, money, and skill. Making it production-ready requires even more effort, as I learned with the Excel team.

Big Bet Three
Everything works better when research and engineering skills come together as part of one team committed to a single mission. Part of this learning came from my two years working with the Excel team to implement Flash Fill. A brilliant researcher without engineering skills may create one implementation; a lesser researcher with stronger engineering skills can try out multiple implementations in the same time, perhaps achieving a better result. Research is about embracing uncertainty: discovering new ideas in the short run and creating generalized frameworks in the long run. Engineering is about removing uncertainty: creating customer focus in the short run and adopting good software engineering practices in the long run. These somewhat opposing values, when put together, lead to faster innovation.

The other part of this learning came from my years of experience leading multiple disconnected projects, each with a changing cast of contributors, which created high ramp-up costs for new participants. When researchers and engineers instead come together as one team committed to a single mission, the resulting continuity and synergy between sub-projects leads to big innovation.

This Big Bet gets implemented by splitting a long-term, blue-sky research vision into short-term, mission-focused deliverables that generate differentiated business value. Some people might look at that approach and say, “Then you’re not a researcher; you’re running a product team.” In my world view, I’m running a strong research team that knows how to engineer robust implementations of our creative research ideas, leading to faster and bigger impact. I can also often receive funding from product groups, so that people doing more basic research can stretch their funding further. This simple equation, engineers plus researchers on a single team equals better work, leads me to my final Big Bet.

Big Bet Four
Cross-disciplinary research. It’s hard. Consider the disciplines of machine learning and program reasoning, both of which have investigated the problem of learning programs. It’s difficult to convince the other community that you have done something important, so in my old research paradigm, I would either ignore the other discipline or hold it in awe. The middle way is to appropriately combine techniques from both the program reasoning and the machine learning communities. It takes working with empathetic experts who are willing to explain things and partner with you, but when it works, it’s amazing.

The program synthesis problem involves search, ranking, and disambiguation. Program reasoning techniques can give structure to the search (via grammar rules, back-propagating the example-based specification over those rules using the inverse semantics of grammar operators) to generate multiple candidate programs, and can then drive an active-learning session with the user to resolve ambiguities. Machine learning techniques, on the other hand, can speed up the search by learning to order the non-deterministic choices (over grammar rules and over the sub-goals produced by back-propagation), and can make active learning more effective by learning to rank the candidate programs. Together, these two disciplines can solve the program synthesis problem faster and more completely than either one can by itself.

More generally, there is a big opportunity to combine machine learning and traditional software development by using machine learning to learn the various heuristics that are today manually programmed into AI systems (not just our program synthesizers). Such heuristic automation not only produces better heuristics, but can also lead to software systems that are personalizable and adaptive.
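
To make the machine-learning half of that division of labor concrete, here is a hedged sketch that reuses the toy DSL from the earlier examples: a linear ranker over candidate programs, trained with perceptron-style updates so that the programs users actually wanted score above spurious alternatives. The features, training pairs, and learning rate are all invented for illustration; a production system would learn them from large corpora of real tasks.

```python
# A tiny learned ranker over candidate programs (op-name trees from the
# toy DSL above). Everything here is a hypothetical simplification.

def flatten(prog):
    """Collect the operator names appearing in a program tree."""
    ops = [prog[0]]
    for child in prog[1:]:
        if isinstance(child, tuple):
            ops += flatten(child)
    return ops

def features(prog):
    """Hypothetical features of a candidate program."""
    ops = flatten(prog)
    return [
        ops.count("Const"),  # literals tend to overfit
        ops.count("Input"),  # using the input tends to generalize
        len(ops),            # overall program size
    ]

def score(weights, prog):
    return sum(w * f for w, f in zip(weights, features(prog)))

def train(pairs, epochs=50, lr=0.1):
    """Perceptron-style updates: whenever the ranker misorders a pair,
    push the correct program's score above the spurious one's."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for good, bad in pairs:
            if score(w, good) <= score(w, bad):
                fg, fb = features(good), features(bad)
                w = [wi + lr * (g - b) for wi, g, b in zip(w, fg, fb)]
    return w

# Toy training pairs: (program the user wanted, spurious alternative).
pairs = [
    (("Concat", ("Const", "Hi "), ("Input",)), ("Const", "Hi Ada")),
    (("Input",), ("Concat", ("Const", "A"), ("Const", "da"))),
]
w = train(pairs)
print(w)  # the learned weights penalize Const-heavy candidates
```

The same pattern applies inside the search itself: learned scores can order the non-deterministic choices over grammar rules and sub-goals, steering the deductive search toward likely programs first.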

New Challenges
Now, this approach to research has its own set of challenges. For example, the data we have gathered through our customer connection is so identifiable that we cannot release it in its current form as part of a challenge benchmark; we first have to anonymize it and remove personally identifiable information.

Our framework-based approach requires us to find the right balance between generalization and specialization. With a fixed amount of resourcing, one can either develop a more intelligent specialized solution or a less intelligent but more generally applicable one. That’s an interesting needle to thread.

Our commitment to combined research and engineering requires us to deliver end-to-end solutions with the right user experience baked in. Similarly, in doing cross-disciplinary research, we have to find the right balance between quick-turn experiments and production-ready implementation.

I know that this approach accelerates discoveries and innovation. There is no doubt a ramp-up cost in adopting these value systems, but once the structure is in place (the customer relevance, the framework, the engineering, and the cross-pollination of research ideas), it provides a boost to our ultimate mission of advancing science and delivering value. Today our program synthesis investment has advanced to the point where, for instance, we can enable a non-programmer to perform in 30 seconds a data wrangling task that would otherwise take a programmer 30 minutes of coding.

To summarize, in the second half of my 18-year research career, I adopted a different research style, which has yielded much more personal satisfaction. The common elixir behind all my uplifting transitions was connection: not just between ideas, but with people. My stories of change have pivoted around developing an empathetic connection with customers, with researchers in my own area, with the engineering world, and with researchers in other areas. I hope this frame may be useful to research management and funding agencies in facilitating such connections. I also hope that budding researchers may find it useful in making conscious choices about the “what” and “how” of their research.

Related:

Flash Fill Gives Excel a Smart Charge

Best of both worlds: one researcher’s dual approach

Read more about Sumit Gulwani

Video: Four Big Bets

Microsoft Program Synthesis using Examples SDK

 
