Prosecution Insights
Last updated: April 19, 2026
Application No. 18/642,582

CONTEXT-AWARE CONVERSATIONAL MAP FUNCTIONALITY

Non-Final OA — §101, §102, §103
Filed
Apr 22, 2024
Examiner
WEAVER, ADAM MICHAEL
Art Unit
2658
Tech Center
2600 — Communications
Assignee
Microsoft Technology Licensing, LLC
OA Round
1 (Non-Final)
Grant Probability: 92% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 92% (11 granted / 12 resolved) — above average, +29.7% vs TC avg
Interview Lift: +20.0% — a strong lift, measured across resolved cases with interview
Avg Prosecution: 2y 9m typical timeline; 27 applications currently pending
Career History: 39 total applications across all art units
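For readers checking the headline numbers: the career allow rate shown above follows directly from the grant counts. A minimal Python sketch of the arithmetic (the whole-percent rounding convention is an assumption about how the dashboard displays it):

```python
# Career allow rate from the counts reported above: 11 granted of 12 resolved.
granted, resolved = 11, 12
allow_rate = granted / resolved          # 0.9166...

# Displayed as a whole-number percentage (rounding convention assumed).
display_pct = round(allow_rate * 100)
print(display_pct)  # -> 92
```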

Statute-Specific Performance

§101: 33.2% (-6.8% vs TC avg)
§103: 44.7% (+4.7% vs TC avg)
§102: 19.0% (-21.0% vs TC avg)
§112: 2.1% (-37.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 12 resolved cases
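The per-statute figures above can be sanity-checked: if each delta is the examiner's rate minus the Tech Center average (an assumed interpretation), then rate minus delta should recover the same baseline for every statute. A short Python sketch with the values copied from the list:

```python
# (examiner rate %, delta vs TC avg %) for each statute, as reported above.
stats = {
    "101": (33.2, -6.8),
    "103": (44.7, +4.7),
    "102": (19.0, -21.0),
    "112": (2.1, -37.9),
}

# Implied Tech Center baseline: rate - delta. All four statutes should
# point at the same value if the figures are internally consistent.
for statute, (rate, delta) in stats.items():
    print(statute, round(rate - delta, 1))  # every statute -> 40.0
```

All four rows imply the same 40.0% baseline, which suggests the deltas were computed against a single Tech Center estimate rather than per-statute averages.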

Office Action

Rejections under §101, §102, and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 06/21/2024 and 10/14/2025 are being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Independent claims 1, 10, and 18 recite “receiving a natural language question or command corresponding to a request”, “in response to the receiving of the natural language question or command, extracting contextual data”, “providing the contextual data and the natural language question or command as input into the one or more language models”, and “causing presentation”. These limitations, as drafted, are a process that, under a broadest reasonable interpretation, covers the abstract idea of “mental processes” because they cover concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2). That is, other than reciting “one or more language models”, nothing in the claimed elements precludes the steps from practically being performed by a person listening to a natural language prompt, taking in contextual data surrounding the natural language prompt, and answering the prompt based on the prompt itself and its context. This judicial exception is not integrated into a practical application because the additional elements “one or more language models” are all recited at a high level of generality.
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Thus, the claims as a whole are directed to an abstract idea (Step 2A, prong two).

Claims 1, 10, and 18 do not include any additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “one or more language models” amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept (Step 2B).

Dependent claims 2-9, 11-17, and 19-20 are directed to the contextual data surrounding the natural language question or command, as well as the natural language question or command itself. That is, nothing in the claimed elements precludes the steps from practically being performed by a person listening to a natural language prompt, taking in contextual data surrounding the natural language prompt, and answering the prompt based on the prompt itself and its context.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-7, 9-15, and 17-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Zhuang et al. (CN117892828A), hereinafter referred to as Zhuang.

Regarding claim 1, Zhuang discloses a system comprising: at least one computer processor (Zhuang para [0005]); and one or more computer storage media storing computer-useable instructions that, when used by the at least one computer processor, cause the at least one computer processor to perform operations comprising (Zhuang para [0005]): receiving a natural language question or command corresponding to a request to obtain geographical information associated with a mapping platform (Zhuang para [0077]); in response to the receiving of the natural language question or command, extracting contextual data, the contextual data including at least one of: user information generated prior to the receiving of the natural language question or command, one or more spatial or temporal constraints within the natural language question or command, or context from an output generated by one or more language models ("By continuously asking users questions and leveraging the rich language understanding capabilities and historical message memory of the large language model, explicit function call instructions can be gradually obtained," Zhuang para [0027]); providing the contextual data and the natural language question or command as input into the one or more language models, wherein the one or more language models generate a response (Zhuang para [0010] and [0077]); and causing presentation, at a map interface associated with the mapping platform, of an indication associated with the response ("The execution result is transmitted to the front-end map for display and the user is notified, thus completing the first round of tasks," Zhuang para [0026]).

Regarding claim 2, Zhuang discloses all of the limitations of claim 1. Zhuang further discloses wherein the contextual data includes the user information generated prior to the receiving of the question or command, and wherein the user information includes at least one of: information from one or more previous turns that are part of a same first conversation as the natural language question or command, one or more previous natural language questions or commands generated prior to the natural language question or command that are a part of a second conversation, or user preferences of a user that issued the question or command (Zhuang para [0077] and "By continuously asking users questions and leveraging the rich language understanding capabilities and historical message memory of the large language model, explicit function call instructions can be gradually obtained," Zhuang para [0027]).

Regarding claim 3, Zhuang discloses all of the limitations of claim 1. Zhuang further discloses wherein the contextual data includes the one or more spatial or temporal constraints within the natural language question or command (Zhuang para [0077]), and wherein the one or more spatial constraints specify a geographic location a user asks to navigate to (Zhuang para [0077]), and wherein the one or more temporal constraints include an order or time that the user asks to navigate to the geographical location at ("Taking route planning tasks as an example, users can query using natural language commands such as 'find a route from Plaza A to Shopping Mall B on foot' or 'I want to go from Plaza A to Shopping Mall B by strolling over'," Zhuang para [0077], "strolling over" indicates a sense of leisure, relating temporally).

Regarding claim 4, Zhuang discloses all of the limitations of claim 1.
Zhuang further discloses wherein the contextual data includes the context from an output generated by the one or more language models, and wherein the output includes a clarifying question that the one or more language models generate in response to a prior turn in the question or command or a prior question or prior command issued by a user before the natural language question or command (Zhuang para [0077] and "By continuously asking users questions and leveraging the rich language understanding capabilities and historical message memory of the large language model, explicit function call instructions can be gradually obtained," Zhuang para [0027]).

Regarding claim 5, Zhuang discloses all of the limitations of claim 4. Zhuang further discloses wherein the response to the natural language question or command includes a clarifying question, and wherein the operations further comprising: subsequent to the generation of the clarifying question, receiving a second natural language question or command ("By continuously asking users questions and leveraging the rich language understanding capabilities and historical message memory of the large language model, explicit function call instructions can be gradually obtained," Zhuang para [0027]); providing second contextual data as input into the one or more language models, wherein the second contextual data includes the clarifying question, the natural language question or command, and the contextual data, and wherein the one or more language models generate a second response ("By continuously asking users questions and leveraging the rich language understanding capabilities and historical message memory of the large language model, explicit function call instructions can be gradually obtained," Zhuang para [0027]); and causing presentation, at the map interface associated with the mapping platform, of an indication associated with the second response ("The execution result is transmitted to the front-end map for display and the user is notified, thus completing the first round of tasks," Zhuang para [0026]).

Regarding claim 6, Zhuang discloses all of the limitations of claim 1. Zhuang further discloses wherein the operations further comprising: detecting entities in at least one of: a current conversation associated with the natural language question or command, or a historical conversation (Zhuang para [0029]); and based on the detecting, populating a key-value pair data structure with one or more values, and wherein the key-value pair data structure is included in the contextual data, and wherein the key-value pair data structure is a part of the input into the one or more language models (Zhuang para [0029]).

Regarding claim 7, Zhuang discloses all of the limitations of claim 1. Zhuang further discloses wherein the natural language question or command includes a command to find a route from a first location to a second location and stopping by at least a third location in between the first location and the second location, and wherein the response by the one or more language models includes at least one of ("In another scenario, if the user inputs 'I want to take the bus from shopping mall A to location 1, and then find a coffee shop nearby,' the GLAM can extract the key features 'bus, shopping mall A, location 1, coffee shop.'," Zhuang para [0092]): a source, a destination, one or more waypoints, a temporal constraint, a travel mode, and one or more optimization objectives, and wherein the operations further comprising ("Based on these key features, the GLAM can generate a "function_call" message containing the function name and parameters. After receiving the feedback from the GLAM, the system parses the function name and parameters from the "function_call" message, finds the corresponding function in the GIS function library, and then executes the function with the provided parameters," Zhuang para [0092]): providing the response as an input into an optimization function, and wherein the optimization function computes the route ("This enables a multi-round function call mechanism to automatically break down tasks when handling complex tasks, flexibly call multiple tools, automatically sense task execution progress and results, optimize the execution process through front-end recursion mechanism, reduce redundant steps and execution costs, and effectively understand user needs through a large language model," Zhuang para [0039]), and wherein the indication associated with the response to the natural language question or command includes an indicator superimposed over the map interface and that represents the route ("The execution result is transmitted to the front-end map for display and the user is notified, thus completing the first round of tasks," Zhuang para [0092]).

Regarding claim 9, Zhuang discloses all of the limitations of claim 1. Zhuang further discloses wherein the response is indicative of an enriched prompt that includes at least a portion of: the natural language question or command and the contextual data, and wherein the operations further comprising ("In this embodiment, as shown in Figure 2, when the user's needs are not clearly stated or lack key information, the advantages of the language model can be used to guide the user's needs. By continuously asking users questions, leveraging the rich language understanding capabilities and historical message memory of the large language model, explicit function call instructions can be gradually obtained," Zhuang para [0034] and Zhuang Fig. 2 shows a cycle in which the prompt continues to evolve, i.e., becoming "enriched"): providing the enriched prompt as second input into the one or more language models, and wherein the one or more language models generate a second response to the enriched prompt ("In this embodiment, as shown in Figure 2, when the user's needs are not clearly stated or lack key information, the advantages of the language model can be used to guide the user's needs. By continuously asking users questions, leveraging the rich language understanding capabilities and historical message memory of the large language model, explicit function call instructions can be gradually obtained," Zhuang para [0034] and Zhuang Fig. 2 shows a cycle in which the prompt continues to evolve, i.e., becoming "enriched"); and providing the second response as an input into an optimization function ("This enables a multi-round function call mechanism to automatically break down tasks when handling complex tasks, flexibly call multiple tools, automatically sense task execution progress and results, optimize the execution process through front-end recursion mechanism, reduce redundant steps and execution costs, and effectively understand user needs through a large language model," Zhuang para [0039]), and wherein the optimization function generates a third response associated with the geographical information and based at least in part on the enriched prompt ("The execution result is transmitted to the front-end map for display and the user is notified, thus completing the first round of tasks," Zhuang para [0092]).
Regarding claim 10, Zhuang discloses a computer-implemented method comprising: extracting contextual data based at least in part on receiving a natural language sequence associated with a request to obtain geographical information (Zhuang para [0077] and "By continuously asking users questions and leveraging the rich language understanding capabilities and historical message memory of the large language model, explicit function call instructions can be gradually obtained," Zhuang para [0027]); based at least in part on the contextual data and the natural language sequence, generating, via one or more language models, an enriched prompt ("In this embodiment, as shown in Figure 2, when the user's needs are not clearly stated or lack key information, the advantages of the language model can be used to guide the user's needs. By continuously asking users questions, leveraging the rich language understanding capabilities and historical message memory of the large language model, explicit function call instructions can be gradually obtained," Zhuang para [0034] and Zhuang Fig. 2 shows a cycle in which the prompt continues to evolve, i.e., becoming "enriched"); based at least in part on generating the enriched prompt, generating, via the one or more language models, a response to the enriched prompt ("In this embodiment, as shown in Figure 2, when the user's needs are not clearly stated or lack key information, the advantages of the language model can be used to guide the user's needs. By continuously asking users questions, leveraging the rich language understanding capabilities and historical message memory of the large language model, explicit function call instructions can be gradually obtained," Zhuang para [0034] and Zhuang Fig. 2 shows a response (result) being generated); and causing presentation, at a map interface, of an indication associated with the response to the enriched prompt ("The execution result is transmitted to the front-end map for display and the user is notified, thus completing the first round of tasks," Zhuang para [0026]).

Regarding claim 11, Zhuang discloses all of the limitations of claim 10. Zhuang further discloses wherein the contextual data includes the user information generated prior to the receiving of the natural language sequence, and wherein the user information includes at least one of: information from one or more previous turns that are part of a same first conversation as the natural language sequence, one or more previous natural language sequences generated prior to the natural language sequence that are a part of a second conversation, or user preferences of a user that issued the natural language sequence (Zhuang para [0077] and "By continuously asking users questions and leveraging the rich language understanding capabilities and historical message memory of the large language model, explicit function call instructions can be gradually obtained," Zhuang para [0027]).

Regarding claim 12, Zhuang discloses all of the limitations of claim 10.
Zhuang further discloses wherein the contextual data includes the one or more spatial or temporal constraints within the natural language sequence (Zhuang para [0077]), and wherein the one or more spatial constraints specify a geographic location a user asks to navigate to (Zhuang para [0077]), and wherein the one or more temporal constraints include an order or time that the user asks to navigate to the geographical location at ("Taking route planning tasks as an example, users can query using natural language commands such as 'find a route from Plaza A to Shopping Mall B on foot' or 'I want to go from Plaza A to Shopping Mall B by strolling over'," Zhuang para [0077], "strolling over" indicates a sense of leisure, relating temporally).

Regarding claim 13, Zhuang discloses all of the limitations of claim 10. Zhuang further discloses wherein the contextual data includes the context from an output generated by the one or more language models (Zhuang para [0077]), and wherein the output includes a clarifying question that the one or more language models generate in response to a prior turn in the natural language sequence or a prior question or prior command issued by a user before the natural language sequence (Zhuang para [0077] and "By continuously asking users questions and leveraging the rich language understanding capabilities and historical message memory of the large language model, explicit function call instructions can be gradually obtained," Zhuang para [0027]).

Regarding claim 14, Zhuang discloses all of the limitations of claim 10. Zhuang further discloses detecting entities in at least one of: a current conversation associated with the natural language sequence, or a historical conversation (Zhuang para [0029]); and based on the detecting, populating a key-value pair data structure with one or more values, and wherein the key-value pair data structure is included in the contextual data (Zhuang para [0029]).
Regarding claim 15, Zhuang discloses all of the limitations of claim 10. Zhuang further discloses wherein the natural language sequence includes a command to find a route from a first location to a second location and stopping by at least a third location in between the first location and the second location, and wherein the response by the one or more language models includes at least one of ("In another scenario, if the user inputs 'I want to take the bus from shopping mall A to location 1, and then find a coffee shop nearby,' the GLAM can extract the key features 'bus, shopping mall A, location 1, coffee shop.'," Zhuang para [0092]): a source, a destination, one or more waypoints, a temporal constraint, a travel mode, and one or more optimization objectives ("Based on these key features, the GLAM can generate a "function_call" message containing the function name and parameters. After receiving the feedback from the GLAM, the system parses the function name and parameters from the "function_call" message, finds the corresponding function in the GIS function library, and then executes the function with the provided parameters," Zhuang para [0092]), and wherein the computer-implemented method further comprising: providing the response as an input into an optimization function, and wherein the optimization function computes the route ("This enables a multi-round function call mechanism to automatically break down tasks when handling complex tasks, flexibly call multiple tools, automatically sense task execution progress and results, optimize the execution process through front-end recursion mechanism, reduce redundant steps and execution costs, and effectively understand user needs through a large language model," Zhuang para [0039]), and wherein the indication associated with the response to the enriched prompt includes an indicator superimposed over the map interface and that represents the route ("The execution result is transmitted to the front-end map for display and the user is notified, thus completing the first round of tasks," Zhuang para [0092]).

Regarding claim 17, Zhuang discloses all of the limitations of claim 10. Zhuang further discloses providing the second response as an input into an optimization function ("This enables a multi-round function call mechanism to automatically break down tasks when handling complex tasks, flexibly call multiple tools, automatically sense task execution progress and results, optimize the execution process through front-end recursion mechanism, reduce redundant steps and execution costs, and effectively understand user needs through a large language model," Zhuang para [0039]), and wherein the optimization function generates another response associated with the geographical information and based at least in part on the enriched prompt ("The execution result is transmitted to the front-end map for display and the user is notified, thus completing the first round of tasks," Zhuang para [0092]).

As to claim 18, computer-readable medium (CRM) claim 18 and system claim 1 are related as system and CRM of using same, with each claimed element’s function corresponding to the system step, respectively. Accordingly, claim 18 is similarly rejected under the same rationale as applied above with respect to the system claim.

As to claim 19, CRM claim 19 and system claim 9 are related as system and CRM of using same, with each claimed element’s function corresponding to the system step, respectively. Accordingly, claim 19 is similarly rejected under the same rationale as applied above with respect to the system claim.

As to claim 20, CRM claim 20 and system claim 4 are related as system and CRM of using same, with each claimed element’s function corresponding to the system step, respectively. Accordingly, claim 20 is similarly rejected under the same rationale as applied above with respect to the system claim.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Zhuang, in view of Mallick et al. (US Patent Application Publication No. 2025/0209281), hereinafter referred to as Mallick.

Regarding claim 8, Zhuang discloses all of the limitations of claim 1. Zhuang further discloses wherein the operations further comprising: prompting or tuning the one or more language models by: a first natural language sequence representing a user question or command (Zhuang para [0077]); generating, via a model, a second natural language sequence representing a response to the first natural language sequence (Zhuang para [0010] and [0077]).

However, Zhuang fails to disclose generating, via a first model, generating, via a second model, and evaluating, via a third model, a quality of at least one of the first natural language sequence or the second natural language sequence based on one or more criterion, the one or more criterion including at least one of consistence, clearness, conversation flow, conciseness, completeness, closure, correctness, or relevance.

Mallick teaches an adaptive query routing system for processing queries using natural language generators. Mallick teaches generating, via a first model ("The training includes, for given queries of a set of training queries, generating a vector-space embedding of a given query of the set of training queries. A first response is generated for the given query using the first natural language generator model. A second response is generated for the given query using the second natural language generator model," Mallick para [0126]); generating, via a second model ("The training includes, for given queries of a set of training queries, generating a vector-space embedding of a given query of the set of training queries. A first response is generated for the given query using the first natural language generator model. A second response is generated for the given query using the second natural language generator model," Mallick para [0126]); and evaluating, via a third model, a quality of at least one of the first natural language sequence or the second natural language sequence (Mallick para [0127]) based on one or more criterion, the one or more criterion including at least one of consistence, clearness, conversation flow, conciseness, completeness, closure, correctness, or relevance (Mallick para [0051]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zhuang’s disclosure of natural language conversation using language models by including Mallick’s teaching of utilizing multiple language models to generate queries, respond to said queries, and then evaluate the responses based upon a quality metric. This would enable faster, more efficient, and more cost-effective training of the models used for responding to human clients. This would increase efficacy and efficiency of the models once they are employed in helping real clients.

Regarding claim 16, Zhuang discloses all of the limitations of claim 10.
Zhuang further discloses prompting or tuning the one or more language models by: a first natural language sequence representing a user question or command (Zhuang para [0077]); generating, via a model, a second natural language sequence representing a response to the first natural language sequence (Zhuang para [0010] and [0077]).

However, Zhuang fails to disclose generating, via a first model, generating, via a second model, and evaluating, via a third model, a quality of at least one of the first natural language sequence or the second natural language sequence based on one or more criterion, the one or more criterion including at least one of consistence, clearness, conversation flow, conciseness, completeness, closure, correctness, or relevance.

Mallick teaches generating, via a first model ("The training includes, for given queries of a set of training queries, generating a vector-space embedding of a given query of the set of training queries. A first response is generated for the given query using the first natural language generator model. A second response is generated for the given query using the second natural language generator model," Mallick para [0126]); generating, via a second model ("The training includes, for given queries of a set of training queries, generating a vector-space embedding of a given query of the set of training queries. A first response is generated for the given query using the first natural language generator model. A second response is generated for the given query using the second natural language generator model," Mallick para [0126]); and evaluating, via a third model, a quality of at least one of the first natural language sequence or the second natural language sequence (Mallick para [0127]) based on one or more criterion, the one or more criterion including at least one of consistence, clearness, conversation flow, conciseness, completeness, closure, correctness, or relevance (Mallick para [0051]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zhuang’s disclosure of natural language conversation using language models by including Mallick’s teaching of utilizing multiple language models to generate queries, respond to said queries, and then evaluate the responses based upon a quality metric. This would enable faster, more efficient, and more cost-effective training of the models used for responding to human clients. This would increase efficacy and efficiency of the models once they are employed in helping real clients.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US Patent Application Publication No. 2025/0315616
US Patent Application Publication No. 2019/0178671
US Patent Application Publication No. 2025/0237511
US Patent Application Publication No. 2025/0110951
US Patent No. 12,182,518

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADAM MICHAEL WEAVER whose telephone number is (571) 272-7062. The examiner can normally be reached Monday-Friday, 8AM-5PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Richemond Dorvil, can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ADAM MICHAEL WEAVER/
Examiner, Art Unit 2658

/RICHEMOND DORVIL/
Supervisory Patent Examiner, Art Unit 2658

Prosecution Timeline

Apr 22, 2024 — Application Filed
Jan 20, 2026 — Non-Final Rejection (§101, §102, §103)
Apr 02, 2026 — Examiner Interview Summary
Apr 02, 2026 — Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591752 — ZERO-SHOT DOMAIN TRANSFER WITH A TEXT-TO-TEXT MODEL — granted Mar 31, 2026 (2y 5m to grant)
Patent 12585765 — SYSTEM AND METHOD FOR ROBUST NATURAL LANGUAGE CLASSIFICATION UNDER CHARACTER ENCODING — granted Mar 24, 2026 (2y 5m to grant)
Patent 12579375 — IMPLEMENTING ACTIVE LEARNING IN NATURAL LANGUAGE GENERATION TASKS — granted Mar 17, 2026 (2y 5m to grant)
Patent 12562077 — METHOD, COMPUTING DEVICE, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM TO TRANSLATE AUDIO OF VIDEO INTO SIGN LANGUAGE THROUGH AVATAR — granted Feb 24, 2026 (2y 5m to grant)

Study what changed to get past this examiner, based on the 4 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 92%
With Interview: 99% (+20.0%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
