{"id":801424,"date":"2021-12-09T14:02:58","date_gmt":"2021-12-09T22:02:58","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-blog-post&p=801424"},"modified":"2021-12-09T14:58:06","modified_gmt":"2021-12-09T22:58:06","slug":"designing-a-framework-for-conversational-interfaces","status":"publish","type":"msr-blog-post","link":"https:\/\/www.microsoft.com\/en-us\/research\/articles\/designing-a-framework-for-conversational-interfaces\/","title":{"rendered":"Designing a Framework for Conversational Interfaces"},"content":{"rendered":"
*This is a guest post from our close partners, Semantic Machines.*

**By Zachary Tellman**

Conversational interfaces are an idea that is forever on the cusp of transforming the world. The potential is undeniable: everyone has innate, untapped conversational expertise. We could do away with the nested menus required by visual interfaces; anything the user can name is immediately at hand. We could turn natural language into a declarative scripting language, and operating systems into an IDE.

Reality, however, has not lived up to this potential. Most people's use of the conversational agents in their phones and smart devices is limited to reminders and timers, if they use them at all. At Semantic Machines, however, we are creating a framework for conversational interfaces that we hope will unlock some of this potential. It's currently powering a conversational interface in Outlook Mobile, with other products soon to follow.

To accomplish this, our framework combines some of the latest advances in machine learning with concepts and approaches dating back to the earliest days of artificial intelligence research. To understand why, we'll first need to look back fifty years to one of the first — and still one of the most successful — conversational agents ever created.

## A Detour Through History

In 1972, Terry Winograd published a paper entitled *Understanding Natural Language*, which described a software project he had worked on in the late 1960s. It allowed users to direct a virtual robot arm, named SHRDLU, to interact with a world consisting of a table, a box, and a few blocks of varying shapes and colors. Users could carry on a conversation with SHRDLU, asking questions and giving instructions:
> – Pick up a big red block.
> OK.
> – Find a block which is taller than the one you are holding and put it into the box.
> BY "IT", I ASSUME YOU MEAN THE BLOCK WHICH IS TALLER THAN THE ONE I AM HOLDING.
> OK.
> – What does the box contain?
> THE BLUE PYRAMID AND THE BLUE BLOCK.
> – How many blocks are not in the box?
> FOUR OF THEM.
> – Is at least one of them narrower than the one which I told you to pick up?
> YES, THE RED CUBE.
> – Is it supported?
> YES, BY THE TABLE.
> – Can the table pick up blocks?
> NO.
> – Can a pyramid be supported by a block?
> YES.
> – Can a pyramid support a pyramid?
> I DON'T KNOW.
> – Stack up two pyramids.
> (trying) I CAN'T.

Winograd's project represents a pivotal point in the history of AI research. Earlier efforts were significantly more ambitious; Herbert Simon and Allen Newell's "General Problem Solver," introduced in 1958, was presented not just as a method for achieving human-like behavior, but also as a descriptive model for human cognition. As became the norm for early AI research, Simon and Newell reduced the problem to one of search. Given an initial state and a desired end state, the Solver would search through all possible sequences of actions until it found one that led to that end state. Since the branching factor of the search tree would be very high — you can, in most situations, do almost anything — the Solver would need to use heuristics (from the Greek *heureka*, as in "I've found it!") to determine which actions were likely to be useful in a given situation.

Having described the engine for thought, all that remained was "knowledge engineering": creating a repository of possible actions and relevant heuristics for all aspects of human life. This, unfortunately, proved harder than expected. As various knowledge engineering projects stalled, researchers focused on problem solving within "microworlds": virtual environments where the state was easily represented and the possible actions easily enumerated. Winograd's microworld was the greatest ever created; SHRDLU's mastery of its environment, and of the subset of the English language that could be used to describe it, was self-evident.

Still, it wasn't clear how to turn a microworld into something more useful; the boundaries of SHRDLU's environment were relied upon at every level of its implementation. Hubert Dreyfus, a professor of philosophy and leading critic of early AI research, characterized these projects as "ad hoc solutions [for] cleverly chosen problems, which give the illusion of complex intellectual activity." Ultimately, Dreyfus was proven right; every attempt to generalize or stitch together these projects failed.

What came next is a familiar story: funding for research dried up in the mid-1970s, marking the beginning of the AI Winter. After some failed attempts in the 1980s to commercialize past research by selling so-called "expert systems," the field lay dormant for decades before the resurgence of the statistical techniques generally referred to as "machine learning."

Generally, this era in AI research is seen as a historical curiosity: a group of researchers made wildly optimistic predictions about what they could achieve, and failed. What could they possibly have to teach us? Surely it's better to look forward to the bleeding edge of research than back at these abandoned microworlds.

We must acknowledge, however, the astonishing sophistication of Winograd's SHRDLU when compared to modern conversational agents. These agents operate on a model called "slots and intents," which is effectively Mad-Libs in reverse. Given some text from the user (the **utterance**), the system identifies the corresponding template (the **intent**), and then extracts pieces of the utterance (the **slots**). These pieces are then fed into a function which performs the task associated with the intent.
order_pizza(\"medium\", [\"pepperoni\", \"mushrooms\"])<\/code>. It allows us to separate linguistic concerns from the actual business logic required to order a pizza. But consider the second utterance from the conversation with SHRDLU:<\/p>\n
This utterance is difficult to model as an intent for a number of reasons. It describes two actions, but since every intent maps onto a single function, we'd have to define a compound function `find_block_and_put_into_box(...)`, and define similar functions for any other compound action we'd want to support. But even that's not enough; if we simply call `find_block_and_put_into_box("taller than the one you are holding")`, we're letting linguistic concerns bleed into the business logic. At most, we'd want the business logic to interpret individual words like "taller," "narrower," and so on, but that would require an even more specific function:
```
find_block_which_is_X_than_held_block_and_put_in_box("taller")
```

The problem is that natural language is compositional, while slots-and-intents frameworks are not. Rather than defining a set of primitives ("find a block," "taller than," "held block," etc.) that can be freely combined, the developer must enumerate each configuration of these primitives they wish to support. In practice, this leads to conversational agents that are narrowly focused and easily confused.
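To make that contrast concrete, here is a minimal TypeScript sketch; every name in it is hypothetical, and it stands in for no particular framework. The enumerated approach needs a new intent for every combination of action and comparison, while compositional primitives can be freely recombined:

```typescript
// A hypothetical slots-and-intents registry: each supported configuration
// of actions and comparisons needs its own entry.
const intents: Record<string, (slots: Record<string, string>) => void> = {
  find_taller_block_and_put_in_box: (slots) => { /* ... */ },
  find_narrower_block_and_put_in_box: (slots) => { /* ... */ },
  // ...and so on, one entry per phrasing we thought to anticipate.
};

// Compositional primitives, by contrast, can be freely combined.
type Block = { height: number; width: number };

const tallerThan = (a: Block, b: Block): boolean => a.height > b.height;
const narrowerThan = (a: Block, b: Block): boolean => a.width < b.width;

const findBlock = (blocks: Block[], criteria: (b: Block) => boolean) =>
  blocks.find(criteria);

// "Find a block which is taller than the one you are holding."
declare const held: Block;
declare const world: Block[];
const match = findBlock(world, (b) => tallerThan(b, held));
```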
Winograd's SHRDLU, despite its limitations, was far more flexible. At Semantic Machines we are building a dialogue system that will preserve that flexibility, while avoiding most of the limitations. This post will explain, at a high level, how we've accomplished that feat. If you find this problem space or our approach interesting, you should consider working with us.

## Plans

In our dialogue system, utterances are translated into small programs, which for historical reasons are called **plans**. Given the problematic utterance:

> Find a block which is taller than the one you are holding and put it into the box.

Our planning model, which is a Transformer-based encoder-decoder neural network, will return something like this:
```
find_block((b: Block) => taller_than(b, held_block()))
put_in_box(the[Block]())
```
This is rendered in Express, an in-house language which is syntactically modeled after Scala. Notice that each symbol in the plan corresponds almost one-to-one with a part of the utterance, down to a special `the()` function which resolves what "it" refers to. This is because we only want the planning model to *translate* the utterance, not *interpret* it.
The reason for this isn't immediately obvious; to most experienced developers, a function like `taller_than` would seem like an unnecessary layer of indirection. Why not just inline it?
```
find_block((b: Block) => b.height > held_block().height)
```
This indirection, however, is valuable. In a normal codebase, function names aren't exposed; we can assign them any meaning we like, so long as it makes sense to the other people on our team. These functions, by contrast, are an interface between our system and the user, and so their meaning is defined by the user's intent. Over time, that meaning is almost certain to become more nuanced. We may, for instance, realize that when people say "taller than," they mean *noticeably* taller:

```
def taller_than(a: Block, b: Block) = (a.height - b.height) > HEIGHT_EPSILON
```
If we've maintained our layer of indirection, this is an easy one-line change to our function definition, and the training dataset for the planning model remains unchanged. If we've inlined the function, however, we have to carefully migrate our training dataset; we only want to update `a.height > b.height` where it corresponds to "taller than" in the utterance.
By focusing on translation, we keep our training data timeless, allowing our dataset to monotonically grow even as we tinker with semantics. By matching each natural language concept to a function, we keep our semantics explicit and consistent. This approach, however, assumes the meaning is largely context-independent. Our planning model is constrained by the language's type system, so if the utterance doesn't mention blocks it won't use block-related functions, but otherwise we assume that "taller than" can always be translated into `taller_than`.
This, of course, is untrue for anaphoric references like "it," "that," or "them"; their meaning depends entirely on what was said earlier in the conversation. In our system, all such references are translated into a call to `the()`. This is possible because the Express runtime retains the full execution, including all intermediate results, of every plan in the current conversation. This data, stored as a dataflow graph, represents our conversational context: things which we've already discussed, and may want to reference later.
Certain special functions, such as `the()`, can query that graph, searching for the expression which is being referenced.
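As an illustration (the Express runtime's actual data structures aren't public, and every name below is invented), such a context can be modeled as a list of typed dataflow nodes that a `the()`-style lookup searches newest-first:

```typescript
// Hypothetical sketch of a conversational context as a dataflow graph.
// Each node records a value produced while executing an earlier plan.
interface ContextNode {
  value: unknown;
  typeTag: string;          // e.g. "Block", "Person", "Event"
  producedBy: string;       // the plan expression that produced this value
  inputs: ContextNode[];    // upstream nodes, forming the dataflow graph
}

class ConversationContext {
  private nodes: ContextNode[] = [];

  record(node: ContextNode): void {
    this.nodes.push(node);
  }

  // A `the()`-style lookup: walk prior results, newest first, and return
  // the first value of the requested type that satisfies the constraint.
  the<T>(typeTag: string, matches: (v: T) => boolean = () => true): T | undefined {
    for (let i = this.nodes.length - 1; i >= 0; i--) {
      const node = this.nodes[i];
      if (node.typeTag === typeTag && matches(node.value as T)) {
        return node.value as T;
      }
    }
    return undefined; // no match: fall through to the datatype's resolver
  }
}
```

The real system's search is richer than "newest match wins," but the shape is what matters: a graph of intermediate results that special functions can query.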
In SHRDLU, these references were resolved during its parse phase, which transformed utterances into its own version of a plan. Resolution, however, is not always determined by the grammatical structure of the utterance; sometimes we need to understand its semantics. Consider two commands along these lines:

> – Put the red block on the green block.
> – Now put the pyramid on the block.

Common sense tells us that the pyramid should go on whichever block is above the other. To act on this common sense, SHRDLU had to abandon any meaningful separation of syntactic and semantic analysis, which explains, in part, why it was so hard to extend.
In our system, resolution is driven by an entirely separate model, which uses syntactic heuristics where possible and domain-specific semantics where necessary. For most developers, however, it suffices to know that "it" and "that" translate into `the()`.
## Constraints
Notice that in the above plan we pass `find_block` a predicate with the criteria for the block we wish to find:
```
find_block((b: Block) => taller_than(b, held_block()))
```
This is because the user hasn't told us which block they want; they've only provided the criteria for finding it. This is called an **intensional** description, as opposed to an **extensional** description, which specifies the actual entity or entities. In practice, every entity we reference in conversation is referenced intensionally; a reference to "Alice" would be translated into:

```
the[Person](p => p.name ~= "Alice")
```
where `~=` means "similar to".
When executed, `the()` will try to find a person named Alice somewhere in the conversational history, but there's no guarantee one exists. The user may assume that, given who they are, the system can figure out who they mean. Perhaps there's a particular Alice that they work with, or someone in their family is named Alice. In either case, the user clearly thinks they've given us enough information, so we have to figure out what makes sense in the given context.
If `the()` fails to find a match in the conversational context, it will call a **resolver** function associated with the `Person` datatype.
But how should a `Person` resolver, given a user-provided predicate, actually work? We can't simply scan over a list of all the possible people and apply our predicate as a filter; that dataset lives elsewhere and is unlikely to be easily accessed. Because of both practical and privacy concerns, it will almost certainly be exposed via a service with access controls and a limited API. Our resolver, then, must translate the predicate into one or more queries to backend services which provide information about people. To do that, we must stop thinking of it as a predicate and start thinking of it as a constraint.
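In spirit (the signatures below are invented for this post, not our actual SDK), the shift is from a boolean function to inspectable data: a resolver receives the constraint itself, so it can see *what* was asked rather than merely evaluate candidate answers:

```typescript
// Hypothetical types: a constraint is data we can inspect, not just a
// function we can call. A predicate only answers yes/no; a constraint
// can tell us what was asked, e.g. "name similar to 'Alice'".
type Constraint<T> =
  | { kind: "similarTo"; field: keyof T; value: string }
  | { kind: "and"; clauses: Constraint<T>[] };

interface Person {
  name: string;
  email: string;
}

// A resolver receives the constraint and is responsible for turning it
// into whatever backend queries make sense for this datatype.
type Resolver<T> = (constraint: Constraint<T>) => Promise<T[]>;

const resolvePerson: Resolver<Person> = async (constraint) => {
  if (constraint.kind === "similarTo" && constraint.field === "name") {
    // Delegate to the directory service's own notion of similarity.
    return directoryLookup(constraint.value);
  }
  return [];
};

// Stand-in for a real directory service client.
async function directoryLookup(name: string): Promise<Person[]> {
  return [{ name, email: `${name.toLowerCase()}@example.com` }];
}
```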
Many developers have likely heard of SAT solvers, which, given constraints over one or more boolean values, will try to find satisfying assignments. Given `a && !b`, a SAT solver will return `a == true, b == false`. Given `a && !a`, it will tell us that the constraint is unsatisfiable. Since a wide variety of problems can be mapped into this representation, SAT solvers are widely used. This capability is generalized by SMT solvers, which can solve more complex constraints over a wider variety of datatypes.

Neither kind of solver, however, has a way to specify "the value must correspond to an entity in a backend service." Even if they did, we probably wouldn't want to use it; we don't want the solver to fire off dozens of queries similar to "Alice" at the backend service while searching through possible values. Only the domain developer building atop our dialogue system understands the capabilities and costs of their backend services. The query API for a service, for instance, might offer its own "similar to" operator, but that similarity metric probably won't reflect the fact that some people use "Misha" and "Mikhail" interchangeably. The domain developer has to strike a balance between preserving the user's intent and minimizing the number of requests made per utterance.

Since we can't fully interpret the constraint for domain developers, we must provide them their own tools for interpretation. Domain functions which, like resolvers, interpret constraints are called **controllers**. In the current version of our system, controllers are typically written in TypeScript, since that language is likely to be a familiar and expressive way to write complex domain logic. Within a controller, predicates are transformed into constraint zippers, which let developers traverse, query, and transform constraints on complex datatypes. For each field and sub-field, domain developers can ask various questions: Are there lower or upper bounds? What is an example of a satisfying value? Is that the *only* satisfying value? Does this value satisfy the constraint?
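A constraint zipper's surface might look something like this sketch (again, hypothetical names; the real API is richer), pairing one navigation method with the four questions above:

```typescript
// Hypothetical sketch of a constraint zipper over a datatype T.
interface ConstraintZipper<T> {
  // Navigate to the constraint on a single field, e.g. zipper.field("name").
  field<K extends keyof T>(name: K): ConstraintZipper<T[K]>;

  // Interrogate the constraint at the current position.
  lowerBound(): T | undefined;   // e.g. "after now()" on a date field
  upperBound(): T | undefined;
  example(): T | undefined;      // some satisfying value, if one is known
  isUniqueExample(): boolean;    // is that the only satisfying value?
  satisfies(value: T): boolean;  // post-filtering uses this
}
```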
That last question (does a given value satisfy the constraint?) is crucial, because we won't always be able to encode the entire constraint in our query to the backend service. The set of results we get back may be too broad, and must therefore be post-filtered using the constraint. Conversely, operators which correspond to query operators in the service's API, like `~=`, can be configured as abstract named properties. Upon navigating to `Person.name`, we can look for an abstract property of `~=`, and examine its argument's zipper to construct our query.
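Putting the pieces together (and reusing the hypothetical `ConstraintZipper` and `Person` types from the sketches above), a controller for our "Alice" example might push what it can into the backend query and post-filter the rest:

```typescript
// Hypothetical controller: translate a Person constraint into a directory
// query, then post-filter, since the query may be broader than the constraint.
async function findPeople(
  zipper: ConstraintZipper<Person>,
  directory: { search(nameLike: string): Promise<Person[]> },
): Promise<Person[]> {
  // Push the part the service understands into the query itself.
  const nameHint = zipper.field("name").example();
  const candidates = nameHint ? await directory.search(nameHint) : [];

  // The service's notion of "similar" may be broader (or narrower) than
  // the user's constraint, so re-check every candidate against it.
  return candidates.filter((p) => zipper.satisfies(p));
}
```

The balance described above lives in that `search` call: the more of the constraint the service can understand, the fewer candidates we have to fetch and re-check.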
Early AI researchers envisioned a world where knowledge had a singular representation and a singular repository. Instead, we live in a world where data, and the ability to interpret it, is fragmented and diffuse. As a result, our constraint solver must be unusually extensible, allowing developers to compose it with their own systems and domain expertise.

## Revision

A major challenge in interpreting a user's intent is everything they leave unsaid. Stripped of any context, much of what we say is ambiguous. To interpret "I'm headed to the bank," we need to know whether the speaker is near a river. In linguistics, the study of how context confers meaning is called pragmatics. Our dialogue system, then, needs to provide tools for developers to easily specify domain-specific pragmatics.

For example, if in Outlook Mobile a user says, "reschedule my meeting with Alice to next week," we can reasonably assume they mean an upcoming meeting, because almost everything we do in our calendar focuses on upcoming events. If we believed this was always true, we could simply take every user intension about an event and further constrain it to start in the future:
```
def add_pragmatics(predicate: Event => Boolean): Event => Boolean = {
  e => predicate(e) && e.start > now()
}
```

But what if the user wants to reschedule a past meeting that was cancelled? If we apply the above function to "reschedule yesterday's meeting with Alice to next week," the event will be constrained to be both yesterday and in the future; the constraint will be unsatisfiable. We can't, then, simply mix our default assumptions into whatever the user provides; we have to allow them to be selectively overridden, just like any other default value. Fortunately, we have a solution which is general across all domains:
```
def add_pragmatics(predicate: Event => Boolean): Event => Boolean = {
  revise(
    e => e.start > now(),
    predicate,
  )
}
```
In our system, `revise` is a powerful operator that, given two constraints `a` and `b`, will discard the parts of `a` which keep `b` from being meaningful, and conjoin the rest onto `b`. Consider a query for "yesterday's meeting," where we revise some basic pragmatics with the user's intension:
```
revise(
  e => e.start > now() && e.attendees.contains(me()),
  e => e.start.date == yesterday(),
)
```
Our default assumptions are that the event being referenced starts in the future and will be attended by the user. The first clause of those defaults, however, contradicts the user's intension. The result of our revision, then, will consist of the second default clause and the user's intension:

```
e => e.start.date == yesterday() && e.attendees.contains(me())
```
Simply looking for contradictions, however, isn't enough. Consider a query for all the events since the year began:

```
revise(
  e => e.start > now() && e.attendees.contains(me()),
  e => e.start > beginning_of_year()
)
```
In this case, the user's intension isn't *contradicted* by our default assumptions, but it is *implied* by them. If an event starts in the future, it necessarily occurs after the year began. If we don't drop `e.start > now()`, we will effectively ignore what the user said.
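As a toy illustration of those two checks (this is not the real `revise`, which operates on arbitrary constraints; here everything is narrowed to lower bounds on a start time), revision amounts to dropping any default clause that would override the user's clause:

```typescript
// Toy model of revision over lower-bound constraints on an event's start
// time: each clause says "start must be after this timestamp".
interface LowerBound {
  after: Date;
}

// A default clause implies the user's clause when it is at least as strict;
// keeping it would silently override what the user actually asked for.
function implies(a: LowerBound, b: LowerBound): boolean {
  return a.after >= b.after; // "after a.after" forces "after b.after"
}

// Drop every default that implies (or, in a fuller model, contradicts)
// the user's clause, then conjoin the user's clause onto what remains.
function revise(defaults: LowerBound[], user: LowerBound): LowerBound[] {
  return [...defaults.filter((d) => !implies(d, user)), user];
}

// "All events since the year began," with a default of "upcoming only":
const now = new Date();
const startOfYear = new Date(now.getFullYear(), 0, 1);
const revised = revise([{ after: now }], { after: startOfYear });
// revised == [{ after: startOfYear }]: the default "starts in the future"
// was dropped because it implies "starts after the year began".
```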
Since both contradiction and implication are concerned with intrinsic properties of a datatype (as opposed to extrinsic properties like "this corresponds to an entity in a backend service"), our system can handle the revision process on its own. Developers can simply focus on defining the appropriate pragmatics for their domain.

The existence of a revision operator, combined with the fact that users speak intensionally, also means that we can give users the ability to tweak and build upon what they've already said.

Consider the utterance "cancel my meeting with Alice." If the user and Alice work on the same team, it's likely they have more than one upcoming meeting together. We can guess at which one they mean, but before actually cancelling the meeting we will show them a description of the event and ask for confirmation.

Typically, confirmation involves giving the user a choice between "OK" and "cancel": either we did exactly what they wanted, or they need to start over. Revision, however, means we don't need to start over. If the user follows up "cancel my meeting with Alice" with "I meant the one-on-one," we'll revise the first intension with the second, and look for a one-on-one with Alice.

This is enormously freeing for the user, because it means they don't need to fit everything they want into a single, monolithic utterance. This is akin to the difference between batch and interactive computing; users can try things, see what happens, and quickly build upon their successes.

This is also enormously freeing for the developer, because it means they can afford to get things wrong. We provide the best tools we can to help developers interpret the user's intent, but the cost of misinterpretation is small. In the worst case, the user will be forced to provide incrementally more information.

## Final Thoughts

Wherever possible, business logic should be described by code rather than training data. This keeps our system's behavior principled, predictable, and easy to change. Our approach to conversational interfaces allows them to be built much like any other application, using familiar tools, conventions, and processes, while still being able to take advantage of cutting-edge machine learning techniques.

When revisiting ideas from this earlier era of research, however, we must be careful; used wholesale, they're likely to send us down the same path as the people who first proposed them. Sometimes, as with plans, we have to make minor modifications. Sometimes, as with constraints, we have to acknowledge complexities that weren't even imagined by early researchers. Sometimes, as with revision, we have to create something entirely novel.

Doing this well requires a team with a wide variety of interests and expertise. In addition to people with expertise in computational linguistics, we're also looking to hire people with backgrounds in programming language runtimes, constraint solvers, and SDK design. If this sounds like you, and everything described above sounds like something you'd want to work on, let us know.