Prosecution Insights
Last updated: April 19, 2026
Application No. 19/239,036

Contextual Assistant Using Mouse Pointing or Touch Cues

Status: Non-Final OA (§103)
Filed: Jun 16, 2025
Examiner: ALMEIDA, CORY A
Art Unit: 2628
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 1 (Non-Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 89%

Examiner Intelligence

Career Allow Rate: 67%, above average (528 granted / 790 resolved; +4.8% vs TC avg)
Interview Lift: +22.5% on resolved cases with an interview (strong)
Typical Timeline: 2y 10m average prosecution; 22 applications currently pending
Career History: 812 total applications across all art units
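The headline numbers above are simple ratios over the examiner's career record. A quick sanity check of how the displayed figures relate (the rounding convention shown in the comments is an assumption about how the dashboard formats values):

```python
# Career stats as shown on the card above.
granted = 528
resolved = 790

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 66.8%, displayed as 67%

# "+4.8% vs TC avg" implies an estimated Tech Center average of roughly:
tc_avg = allow_rate - 0.048
print(f"Implied TC average: {tc_avg:.1%}")
```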

Statute-Specific Performance

§101: 1.7% (-38.3% vs TC avg)
§103: 56.9% (+16.9% vs TC avg)
§102: 30.1% (-9.9% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Comparisons are against an estimated Tech Center average; based on career data from 790 resolved cases.

Office Action (§103)

DETAILED ACTION

Status of the Claims

The filing dated 6/16/25 is entered. Claims 1-20 are pending.

Foreign Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statements

The information disclosure statement (IDS) submitted on 6/16/25 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-6, 9-16, 19, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yoganandan, US-20180336009, in view of Spivack, US-20190102946.

In regards to claim 1, Yoganandan discloses a computer-implemented method (Par. 0005) when executed by data processing hardware causes the data processing hardware to perform operations (Fig. 34; Par. 0128, 0132, and 0133 describes memory elements 3413, 3414, and 3415 storing processor executable instructions for performing system operations) comprising: receiving image data comprising a plurality of candidate objects displayed in a graphical user interface (GUI) displayed on a screen in communication with the data processing hardware (Fig. 3; Par. 0077 describes a user pointing input 305 to a device 120 screen 140 displaying a GUI having three elements 310, the pointing/spatial input 305 disambiguating a user voice input 315; wherein the voice and finger input comprise an event); receiving a query issued by a user (Fig. 3; Par. 0077 describes a user pointing input 305 to a device 120 screen 140 displaying a GUI having three elements 310, the pointing/spatial input 305 disambiguating a user voice input 315; wherein the voice and finger input comprise a query); detecting a pointing action performed by the user in the GUI at a first location on the screen (Fig. 3; Par. 0077 describes a user pointing input 305 to a device 120 screen 140 displaying a GUI having three elements 310, the pointing/spatial input 305 disambiguating a user voice input 315; wherein the voice and finger input comprise a query); performing query interpretation on the query to determine that the query is referring to one of the candidate objects displayed on the screen (Fig. 3; Par. 0077 describes a pointing input and ambiguous voice input about an object displayed on the screen; Par. 0085-0087 describes voice recognition processing providing uttered command recognition and interpretation); disambiguating, using the detected lassoing action performed by the user in the GUI at the first location on the screen, the query to uniquely identify the referred to one of the candidate objects that the query is referring to (Fig. 16A; Par. 0097 describes a user’s ambiguous request for information about an object displayed on display device 1610 which is disambiguated by the user’s cursor/spatial input on a displayed object); and providing a response to the query that includes obtained information about the referred to one of the candidate objects displayed on the screen (Fig. 16A; Par. 0097 describes the display 1610 showing the returned information 1631 on the identified screen object).

Yoganandan does not disclose expressly detecting a lassoing action performed by the user in the GUI at a first location on the screen and the subject matter as claimed in a single embodiment. However, Yoganandan does disclose all of the claimed elements in a single document, and it would have been obvious for one of ordinary skill in the art before the effective filing date to experiment with combining Yoganandan’s finite number of described features, including a combination which the instant claim reads on, given Yoganandan’s statement that the various disclosed elements can be freely combined (Par. 0044).

Yoganandan does not disclose expressly detecting a lassoing action performed by the user in the GUI at a first location on the screen. Spivack discloses detecting a lassoing action performed by the user in the GUI at a first location on the screen (Par. 0027 and 0209 lassoing objects to select them). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the pointing gesture of Yoganandan can be a lassoing gesture in the manner of Spivack. The motivation for doing so would have been that a lassoing gesture can select multiple objects or a single object thereby increasing usability.

In regards to claim 11, Yoganandan discloses a system (Fig. 34, 3400 system; Par. 0128 system) comprising: data processing hardware (Figs. 2 and 34; Par. 0128, 0133, 0137 describes control circuitry 126 and one or more processors 3411); and memory hardware in communication with the data processing hardware, the memory hardware storing instructions that when executed on the data processing hardware cause the data processing hardware to perform operations (Fig. 34; Par. 0128, 0132, and 0133 describes memory elements 3413, 3414, and 3415 storing processor executable instructions for performing system operations) comprising: receiving image data comprising a plurality of candidate objects displayed in a graphical user interface (GUI) displayed on a screen in communication with the data processing hardware (Fig. 3; Par. 0077 describes a user pointing input 305 to a device 120 screen 140 displaying a GUI having three elements 310, the pointing/spatial input 305 disambiguating a user voice input 315; wherein the voice and finger input comprise an event); receiving a query issued by a user (Fig. 3; Par. 0077 describes a user pointing input 305 to a device 120 screen 140 displaying a GUI having three elements 310, the pointing/spatial input 305 disambiguating a user voice input 315; wherein the voice and finger input comprise a query); detecting a pointing action performed by the user in the GUI at a first location on the screen (Fig. 3; Par. 0077 describes a user pointing input 305 to a device 120 screen 140 displaying a GUI having three elements 310, the pointing/spatial input 305 disambiguating a user voice input 315; wherein the voice and finger input comprise a query); performing query interpretation on the query to determine that the query is referring to one of the candidate objects displayed on the screen (Fig. 3; Par. 0077 describes a pointing input and ambiguous voice input about an object displayed on the screen; Par. 0085-0087 describes voice recognition processing providing uttered command recognition and interpretation); disambiguating, using the detected lassoing action performed by the user in the GUI at the first location on the screen, the query to uniquely identify the referred to one of the candidate objects that the query is referring to (Fig. 16A; Par. 0097 describes a user’s ambiguous request for information about an object displayed on display device 1610 which is disambiguated by the user’s cursor/spatial input on a displayed object); and providing a response to the query that includes obtained information about the referred to one of the candidate objects displayed on the screen (Fig. 16A; Par. 0097 describes the display 1610 showing the returned information 1631 on the identified screen object).

Yoganandan does not disclose expressly detecting a lassoing action performed by the user in the GUI at a first location on the screen and the subject matter as claimed in a single embodiment. However, Yoganandan does disclose all of the claimed elements in a single document, and it would have been obvious for one of ordinary skill in the art before the effective filing date to experiment with combining Yoganandan’s finite number of described features, including a combination which the instant claim reads on, given Yoganandan’s statement that the various disclosed elements can be freely combined (Par. 0044).

Yoganandan does not disclose expressly detecting a lassoing action performed by the user in the GUI at a first location on the screen. Spivack discloses detecting a lassoing action performed by the user in the GUI at a first location on the screen (Par. 0027 and 0209 lassoing objects to select them). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the pointing gesture of Yoganandan can be a lassoing gesture in the manner of Spivack.
The motivation for doing so would have been that a lassoing gesture can select multiple objects or a single object thereby increasing usability.

In regards to claim 2, Yoganandan discloses performing query interpretation on the query further comprises determining that the query is requesting information about the object displayed on the screen (Fig. 16A; Par. 0097 describes a user’s ambiguous request for information about an object displayed on display device 1610 which is disambiguated by the user’s cursor/spatial input on a displayed object). Yoganandan does not disclose expressly the subject matter as claimed in a single embodiment. However, Yoganandan does disclose all of the claimed elements in a single document, and it would have been obvious for one of ordinary skill in the art before the effective filing date to experiment with combining Yoganandan’s finite number of described features, including a combination which the instant claim reads on, given Yoganandan’s statement that the various disclosed elements can be freely combined (Par. 0044).

In regards to claim 12, Yoganandan discloses performing query interpretation on the query further comprises determining that the query is requesting information about the object displayed on the screen (Fig. 16A; Par. 0097 describes a user’s ambiguous request for information about an object displayed on display device 1610 which is disambiguated by the user’s cursor/spatial input on a displayed object). Yoganandan does not disclose expressly the subject matter as claimed in a single embodiment. However, Yoganandan does disclose all of the claimed elements in a single document, and it would have been obvious for one of ordinary skill in the art before the effective filing date to experiment with combining Yoganandan’s finite number of described features, including a combination which the instant claim reads on, given Yoganandan’s statement that the various disclosed elements can be freely combined (Par. 0044).
In regards to claim 3, Yoganandan discloses performing query interpretation on the query further comprises determining that the query is referring to the object displayed on the screen without uniquely identifying the object (Par. 0085-0087 describes voice recognition processing providing uttered command recognition and interpretation; Fig. 3; Par. 0077 describes an ambiguous voice input 315). Yoganandan does not disclose expressly the subject matter as claimed in a single embodiment. However, Yoganandan does disclose all of the claimed elements in a single document, and it would have been obvious for one of ordinary skill in the art before the effective filing date to experiment with combining Yoganandan’s finite number of described features, including a combination which the instant claim reads on, given Yoganandan’s statement that the various disclosed elements can be freely combined (Par. 0044).

In regards to claim 13, Yoganandan discloses performing query interpretation on the query further comprises determining that the query is referring to the object displayed on the screen without uniquely identifying the object (Par. 0085-0087 describes voice recognition processing providing uttered command recognition and interpretation; Fig. 3; Par. 0077 describes an ambiguous voice input 315). Yoganandan does not disclose expressly the subject matter as claimed in a single embodiment. However, Yoganandan does disclose all of the claimed elements in a single document, and it would have been obvious for one of ordinary skill in the art before the effective filing date to experiment with combining Yoganandan’s finite number of described features, including a combination which the instant claim reads on, given Yoganandan’s statement that the various disclosed elements can be freely combined (Par. 0044).
In regards to claim 4, Yoganandan discloses receiving the query issued by the user comprises receiving audio data corresponding to the query and captured by an assistant-enabled device associated with the user (Par. 0085-0087 describes voice recognition processing providing uttered command recognition and interpretation; Fig. 3; Par. 0077 describes an ambiguous voice input 315; voice recognition, i.e. audio data corresponding to the query and captured by an assistant-enabled device associated). Yoganandan does not disclose expressly the subject matter as claimed in a single embodiment. However, Yoganandan does disclose all of the claimed elements in a single document, and it would have been obvious for one of ordinary skill in the art before the effective filing date to experiment with combining Yoganandan’s finite number of described features, including a combination which the instant claim reads on, given Yoganandan’s statement that the various disclosed elements can be freely combined (Par. 0044).

In regards to claim 14, Yoganandan discloses receiving the query issued by the user comprises receiving audio data corresponding to the query and captured by an assistant-enabled device associated with the user (Par. 0085-0087 describes voice recognition processing providing uttered command recognition and interpretation; Fig. 3; Par. 0077 describes an ambiguous voice input 315; voice recognition, i.e. audio data corresponding to the query and captured by an assistant-enabled device associated). Yoganandan does not disclose expressly the subject matter as claimed in a single embodiment. However, Yoganandan does disclose all of the claimed elements in a single document, and it would have been obvious for one of ordinary skill in the art before the effective filing date to experiment with combining Yoganandan’s finite number of described features, including a combination which the instant claim reads on, given Yoganandan’s statement that the various disclosed elements can be freely combined (Par. 0044).

In regards to claim 5, Yoganandan discloses receiving, in the GUI displayed on the screen, a user input indication indicating selection of a graphical element (Fig. 3; Par. 0077 describes a user pointing input 305 to a device 120 screen 140 displaying a GUI having three elements 310, the pointing/spatial input 305 disambiguating a user voice input 315; wherein the voice and finger input comprise a selection event); and in response to receiving the user input indication indicating selection of the graphical element, activating a speech recognition model to enable performance of speech recognition on the audio data corresponding to the query and captured by the assistant-enabled device (Fig. 3; Par. 0077 describes a user pointing input 305 to a device 120 screen 140 displaying a GUI having three elements 310, the pointing/spatial input 305 disambiguating a user voice input 315; Par. 0085-0087 describes voice recognition processing providing uttered command recognition and interpretation; Fig. 3; Par. 0077 describes an ambiguous voice input 315; voice recognition, i.e. audio data corresponding to the query and captured by an assistant-enabled device associated). Yoganandan does not disclose expressly the subject matter as claimed in a single embodiment. However, Yoganandan does disclose all of the claimed elements in a single document, and it would have been obvious for one of ordinary skill in the art before the effective filing date to experiment with combining Yoganandan’s finite number of described features, including a combination which the instant claim reads on, given Yoganandan’s statement that the various disclosed elements can be freely combined (Par. 0044).

In regards to claim 15, Yoganandan discloses receiving, in the GUI displayed on the screen, a user input indication indicating selection of a graphical element (Fig. 3; Par. 0077 describes a user pointing input 305 to a device 120 screen 140 displaying a GUI having three elements 310, the pointing/spatial input 305 disambiguating a user voice input 315; wherein the voice and finger input comprise a selection event); and in response to receiving the user input indication indicating selection of the graphical element, activating a speech recognition model to enable performance of speech recognition on the audio data corresponding to the query and captured by the assistant-enabled device (Fig. 3; Par. 0077 describes a user pointing input 305 to a device 120 screen 140 displaying a GUI having three elements 310, the pointing/spatial input 305 disambiguating a user voice input 315; Par. 0085-0087 describes voice recognition processing providing uttered command recognition and interpretation; Fig. 3; Par. 0077 describes an ambiguous voice input 315; voice recognition, i.e. audio data corresponding to the query and captured by an assistant-enabled device associated). Yoganandan does not disclose expressly the subject matter as claimed in a single embodiment.
However, Yoganandan does disclose all of the claimed elements in a single document, and it would have been obvious for one of ordinary skill in the art before the effective filing date to experiment with combining Yoganandan’s finite number of described features, including a combination which the instant claim reads on, given Yoganandan’s statement that the various disclosed elements can be freely combined (Par. 0044).

In regards to claim 6, Yoganandan discloses receiving a user input indication indicating selection of a physical button disposed on the assistant-enabled device associated with the user (Par. 0103 disambiguation using touch input, i.e. physical button, plus voice); and in response to receiving the user input indication indicating selection of the physical button disposed on the assistant-enabled device, activating a speech recognition model to enable performance of speech recognition on the audio data corresponding to the query and captured by the assistant-enabled device (Fig. 3; Par. 0077 describes a user pointing input 305 to a device 120 screen 140 displaying a GUI having three elements 310, the pointing/spatial input 305 disambiguating a user voice input 315; Par. 0085-0087 describes voice recognition processing providing uttered command recognition and interpretation; Fig. 3; Par. 0077 describes an ambiguous voice input 315; voice recognition, i.e. audio data corresponding to the query and captured by an assistant-enabled device associated). Yoganandan does not disclose expressly the subject matter as claimed in a single embodiment. However, Yoganandan does disclose all of the claimed elements in a single document, and it would have been obvious for one of ordinary skill in the art before the effective filing date to experiment with combining Yoganandan’s finite number of described features, including a combination which the instant claim reads on, given Yoganandan’s statement that the various disclosed elements can be freely combined (Par. 0044).

In regards to claim 16, Yoganandan discloses receiving a user input indication indicating selection of a physical button disposed on the assistant-enabled device associated with the user (Par. 0103 disambiguation using touch input, i.e. physical button, plus voice); and in response to receiving the user input indication indicating selection of the physical button disposed on the assistant-enabled device, activating a speech recognition model to enable performance of speech recognition on the audio data corresponding to the query and captured by the assistant-enabled device (Fig. 3; Par. 0077 describes a user pointing input 305 to a device 120 screen 140 displaying a GUI having three elements 310, the pointing/spatial input 305 disambiguating a user voice input 315; Par. 0085-0087 describes voice recognition processing providing uttered command recognition and interpretation; Fig. 3; Par. 0077 describes an ambiguous voice input 315; voice recognition, i.e. audio data corresponding to the query and captured by an assistant-enabled device associated). Yoganandan does not disclose expressly the subject matter as claimed in a single embodiment. However, Yoganandan does disclose all of the claimed elements in a single document, and it would have been obvious for one of ordinary skill in the art before the effective filing date to experiment with combining Yoganandan’s finite number of described features, including a combination which the instant claim reads on, given Yoganandan’s statement that the various disclosed elements can be freely combined (Par. 0044).

In regards to claim 9, Yoganandan discloses the GUI is displayed on a screen of an assistant-enabled device associated with the user (Par. 0055 “mobile telephone devices”). Yoganandan does not disclose expressly the subject matter as claimed in a single embodiment.
However, Yoganandan does disclose all of the claimed elements in a single document, and it would have been obvious for one of ordinary skill in the art before the effective filing date to experiment with combining Yoganandan’s finite number of described features, including a combination which the instant claim reads on, given Yoganandan’s statement that the various disclosed elements can be freely combined (Par. 0044).

In regards to claim 19, Yoganandan discloses the GUI is displayed on a screen of an assistant-enabled device associated with the user (Par. 0055 “mobile telephone devices”). Yoganandan does not disclose expressly the subject matter as claimed in a single embodiment. However, Yoganandan does disclose all of the claimed elements in a single document, and it would have been obvious for one of ordinary skill in the art before the effective filing date to experiment with combining Yoganandan’s finite number of described features, including a combination which the instant claim reads on, given Yoganandan’s statement that the various disclosed elements can be freely combined (Par. 0044).

In regards to claim 10, Yoganandan discloses the assistant-enabled device comprises a smart phone or tablet device (Par. 0055 “mobile telephone devices”). Yoganandan does not disclose expressly the subject matter as claimed in a single embodiment. However, Yoganandan does disclose all of the claimed elements in a single document, and it would have been obvious for one of ordinary skill in the art before the effective filing date to experiment with combining Yoganandan’s finite number of described features, including a combination which the instant claim reads on, given Yoganandan’s statement that the various disclosed elements can be freely combined (Par. 0044).

In regards to claim 20, Yoganandan discloses the assistant-enabled device comprises a smart phone or tablet device (Par. 0055 “mobile telephone devices”).
Yoganandan does not disclose expressly the subject matter as claimed in a single embodiment. However, Yoganandan does disclose all of the claimed elements in a single document, and it would have been obvious for one of ordinary skill in the art before the effective filing date to experiment with combining Yoganandan’s finite number of described features, including a combination which the instant claim reads on, given Yoganandan’s statement that the various disclosed elements can be freely combined (Par. 0044).

Claim(s) 7, 8, 17, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yoganandan, US-20180336009, and Spivack, US-20190102946, as combined in regards to claim 1, in further view of Pettyjohn, US-20120232897.

In regards to claim 7, Yoganandan and Spivack, as combined above, do not disclose expressly generating a textual representation of the response to the query that includes the obtained information, wherein providing the response to the query comprises displaying, in the GUI, the textual representation of the response. Pettyjohn discloses generating a textual representation of the response to the query that includes the obtained information, wherein providing the response to the query comprises displaying, in the GUI, the textual representation of the response (Par. 0027 “Text-to-speech module 102 generates audible responses to the user's queries and requests. System 90 can also include components (not shown) that provide text and/or graphical data to a user's communication device in response to a voice or text request from the user.”, i.e. providing textual and text-to-speech responses to queries from a user). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the queries of Yoganandan can include text-to-speech and textual responses in the manner of Pettyjohn. The motivation for doing so would have been to provide the user with multiple ways to receive the query response.

In regards to claim 17, Yoganandan and Spivack, as combined above, do not disclose expressly generating a textual representation of the response to the query that includes the obtained information, wherein providing the response to the query comprises displaying, in the GUI, the textual representation of the response. Pettyjohn discloses generating a textual representation of the response to the query that includes the obtained information, wherein providing the response to the query comprises displaying, in the GUI, the textual representation of the response (Par. 0027 “Text-to-speech module 102 generates audible responses to the user's queries and requests. System 90 can also include components (not shown) that provide text and/or graphical data to a user's communication device in response to a voice or text request from the user.”, i.e. providing textual and text-to-speech responses to queries from a user). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the queries of Yoganandan can include text-to-speech and textual responses in the manner of Pettyjohn. The motivation for doing so would have been to provide the user with multiple ways to receive the query response.
In regards to claim 8, Yoganandan and Spivack, as combined above, do not disclose expressly generating a textual representation of the response to the query that includes the obtained information; and converting, using a text-to-speech system, the textual representation of the response into synthesized speech that conveys the response to the query, wherein providing the response to the query comprises providing, for audible output from an assistant-enabled device associated with the user, the synthesized speech that conveys the response to the query that includes the obtained information about the referred to one of the candidate objects. Pettyjohn discloses generating a textual representation of the response to the query that includes the obtained information (Par. 0027 “Text-to-speech module 102 generates audible responses to the user's queries and requests. System 90 can also include components (not shown) that provide text and/or graphical data to a user's communication device in response to a voice or text request from the user.”, i.e. providing textual and text-to-speech responses to queries from a user); and converting, using a text-to-speech system, the textual representation of the response into synthesized speech that conveys the response to the query, wherein providing the response to the query comprises providing, for audible output from an assistant-enabled device associated with the user, the synthesized speech that conveys the response to the query that includes the obtained information about the referred to one of the candidate objects (Par. 0027 “Text-to-speech module 102 generates audible responses to the user's queries and requests. System 90 can also include components (not shown) that provide text and/or graphical data to a user's communication device in response to a voice or text request from the user.”, i.e. providing textual and text-to-speech responses to queries from a user). 
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the queries of Yoganandan can include text-to-speech and textual responses in the manner of Pettyjohn. The motivation for doing so would have been to provide the user with multiple ways to receive the query response.

In regards to claim 18, Yoganandan and Spivack, as combined above, do not disclose expressly generating a textual representation of the response to the query that includes the obtained information; and converting, using a text-to-speech system, the textual representation of the response into synthesized speech that conveys the response to the query, wherein providing the response to the query comprises providing, for audible output from an assistant-enabled device associated with the user, the synthesized speech that conveys the response to the query that includes the obtained information about the referred to one of the candidate objects. Pettyjohn discloses generating a textual representation of the response to the query that includes the obtained information (Par. 0027 “Text-to-speech module 102 generates audible responses to the user's queries and requests. System 90 can also include components (not shown) that provide text and/or graphical data to a user's communication device in response to a voice or text request from the user.”, i.e. providing textual and text-to-speech responses to queries from a user); and converting, using a text-to-speech system, the textual representation of the response into synthesized speech that conveys the response to the query, wherein providing the response to the query comprises providing, for audible output from an assistant-enabled device associated with the user, the synthesized speech that conveys the response to the query that includes the obtained information about the referred to one of the candidate objects (Par. 0027 “Text-to-speech module 102 generates audible responses to the user's queries and requests. System 90 can also include components (not shown) that provide text and/or graphical data to a user's communication device in response to a voice or text request from the user.”, i.e. providing textual and text-to-speech responses to queries from a user). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the queries of Yoganandan can include text-to-speech and textual responses in the manner of Pettyjohn. The motivation for doing so would have been to provide the user with multiple ways to receive the query response.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CORY A ALMEIDA whose telephone number is (571)270-3143. The examiner can normally be reached M-Th 9AM-7:30PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nitin (Kumar) Patel can be reached at (571) 272-7677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CORY A ALMEIDA/
Primary Examiner, Art Unit 2628
2/25/26
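Stepping back from the claim mapping, the independent claims describe a simple pipeline: capture the on-screen candidate objects, take an ambiguous spoken query, detect a lasso gesture at a screen location, and use that location to pick the object the query refers to. A minimal sketch of the disambiguation step (the `Candidate` type, all names, and the nearest-object hit test are hypothetical illustrations, not taken from the application or the cited references):

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    # Hypothetical on-screen object with a screen-space center position.
    name: str
    x: float
    y: float

def disambiguate(query: str, lasso_center: tuple, candidates: list) -> Candidate:
    """Resolve an ambiguous query ("what is this?") to the candidate object
    nearest the center of the user's lasso gesture."""
    cx, cy = lasso_center
    return min(candidates, key=lambda c: (c.x - cx) ** 2 + (c.y - cy) ** 2)

objs = [Candidate("lamp", 10, 10), Candidate("vase", 200, 40), Candidate("clock", 90, 300)]
target = disambiguate("what is this?", (195, 45), objs)
print(target.name)  # vase
```

A real implementation would intersect the lasso polygon with object bounding boxes rather than use a center-distance heuristic, but the data flow is the same.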

Prosecution Timeline

Jun 16, 2025: Application Filed
Feb 25, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601922: WAVEGUIDES WITH ENHANCED MODAL DENSITIES (granted Apr 14, 2026; 2y 5m to grant)
Patent 12591406: SYSTEM AND METHOD FOR GENERATING INTERACTIVE MEDIA (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586521: DISPLAY PANEL, DISPLAY DEVICE, AND METHOD FOR DRIVING DISPLAY PANEL (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586522: Correction Method Of Display Apparatus And Correction System Of The Display Apparatus (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586492: ELECTRONIC DEVICE AND METHOD PROVIDING 3-DIMENSION IMAGE (granted Mar 24, 2026; 2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
With Interview: 89% (+22.5%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 790 resolved cases by this examiner. Grant probability derived from career allow rate.
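The with-interview figure is consistent with treating the +22.5% interview lift as additive percentage points on top of the base grant probability. A minimal sketch of that arithmetic (the additive model is an inference from the displayed numbers, not a documented formula):

```python
base_grant_prob = 0.67   # career allow rate used as the base grant probability
interview_lift = 0.225   # interview lift, in percentage points

with_interview = base_grant_prob + interview_lift
print(f"{with_interview:.3f}")  # 0.895, shown on the card as 89%
```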
