Prosecution Insights
Last updated: April 19, 2026
Application No. 18/707,075

GAZE-MEDIATED AUGMENTED REALITY INTERACTION WITH SOURCES OF SOUND IN AN ENVIRONMENT

Final Rejection §103
Filed: May 02, 2024
Examiner: LHYMN, SARAH
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 2 (Final)
Grant Probability: 65% (Favorable)
OA Rounds: 3-4
To Grant: 2y 4m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 65% (357 granted / 546 resolved; +3.4% vs TC avg, above average)
Interview Lift: +15.2% for resolved cases with interview (strong)
Avg Prosecution: 2y 4m typical timeline; 30 applications currently pending
Career History: 576 total applications across all art units
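
The headline metrics above are simple ratios over the examiner's resolved-case pool. A minimal Python sketch of how they could be computed, using synthetic records (the schema and the interview/no-interview split are invented to approximate the displayed figures; the vendor's actual data model is not disclosed):

```python
# Synthetic stand-ins for the examiner's 546 resolved cases. The subset
# counts are invented so the totals land near the displayed figures;
# they are not real PAIR/Patent Center data.
cases = (
      [{"granted": True,  "interview": True}]  * 115
    + [{"granted": False, "interview": True}]  * 35
    + [{"granted": True,  "interview": False}] * 242
    + [{"granted": False, "interview": False}] * 154
)

granted = sum(c["granted"] for c in cases)   # 357
allow_rate = granted / len(cases)            # 357 / 546 ≈ 0.654 -> "65%"

def rate(subset):
    return sum(c["granted"] for c in subset) / len(subset)

with_iv    = rate([c for c in cases if c["interview"]])      # ≈ 0.77
without_iv = rate([c for c in cases if not c["interview"]])  # ≈ 0.61
interview_lift = with_iv - without_iv  # ≈ +0.156, cf. the +15.2% shown
```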

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§103: 63.2% (+23.2% vs TC avg)
§102: 5.9% (-34.1% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)
Tech Center average estimate (the chart's black-line baseline): 40% per statute • Based on career data from 546 resolved cases
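
The per-statute deltas are internally consistent: each examiner share plus its delta recovers the same 40% baseline (e.g. 5.4% + 34.6% = 40.0%), so the Tech Center average estimate appears to be flat across statutes. A quick Python check of that arithmetic (what the percentages measure, presumably the share of resolved cases receiving at least one rejection under each statute, is an assumption):

```python
# TC_AVG is back-derived from the displayed deltas, not independently known.
TC_AVG = 0.40
examiner_share = {"§101": 0.054, "§103": 0.632, "§102": 0.059, "§112": 0.153}

for statute, share in examiner_share.items():
    print(f"{statute}: {share:.1%} ({share - TC_AVG:+.1%} vs TC avg)")
# §101: 5.4% (-34.6% vs TC avg)
# §103: 63.2% (+23.2% vs TC avg) ... matching the dashboard figures
```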

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment / Arguments

Most of the amendments are respectfully broadening and/or stylistic amendments, or amend to add features to the independent claims that were already in dependent claims. Applicant’s arguments with respect to the amended claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The claims stand rejected under §103.

Claim Objections

Claim 15 is objected to because of the following informalities: “buttons” should be amended to “button”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 3-22 are rejected under 35 U.S.C. 103 as being unpatentable over Ota (U.S. Patent App. Pub. No. 2017/0277257 A1) in view of Bradski (U.S. Patent App. Pub. No. 2019/0094981 A1).

Regarding claim 1: Ota teaches: a method (claim 14, a method) comprising: localizing, by a computing device (Ota, para. 20, an HMD 100 or head-mounted device), a sound source (Ota, para. 20, the HMD “includes multiple directional microphones 106 to discriminate from a variety of sound sources that may be coming from a variety of directions.” See also another example teaching in claim 17; alternative mapping: Bradski, para. 203, acoustic localization is known); displaying a highlight for the sound source on a display (Bradski, paras. 947, 971, highlighting to draw attention, or basically as graphical enhancement, is known); detecting a first gaze of a user directed to the highlighted sound source (Ota, para. 20: “Based on the direction of gaze of the user, one or more directional microphones 106 are used to discriminate a source of sound from the corresponding direction of gaze.” See also another teaching: claim 17) (see Bradski, paras. 974, 971 for highlighting); determining that a distance between a focus point of the first gaze and the sound source is less than a threshold distance for greater than a threshold time (Ota, see paras. 32-34, which describe gaze detection. A “threshold distance” can be that corresponding to the range of the microphone array within the span of a user’s forward gaze toward the sound source. In the above cited paragraphs, Ota’s non-limiting example selects one, two or more microphones from the array, corresponding to the sound source. A threshold time is that required for the gaze detection circuit (e.g. Fig. 7: 702) to register a user’s gaze for tracking. See also claim 16); displaying, in response to the determining, an interactive feature anchored to the sound source on the display (Ota, claim 14, this step is taught by “presenting output data to the user”, whereby the output data can be, for example per claim 23, “a readable transcription of the audio data to the user.” This is one example of Applicant’s claimed “interactive feature”, here being a readable transcription); and detecting a second gaze directed at the interactive feature combined with a gesture performed by the user to interact with the interactive feature (Bradski, para. 1061, the “system may be responsive to various user interactions or gestures, including looking at some item of virtual content, moving hands, touching hands to themselves or to the environment, other gestures, opening and/or closing eyes, etc.” See also Fig. 134, which illustrates “many ways of interacting with the virtual content presented to the user” (para. 1171), and the section “Gestures” at paras. 1177-1204. Gaze tracking, as with Ota, is also taught by Bradski; see the section “Gaze Tracking” beginning at para. 1005) (the interactive feature is mapped above with respect to Ota).

It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(s), in view of same, to have obtained the above, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). That is, to modify Ota to include graphical embellishments such as highlighting, per Bradski, in relation to the sound localization taught by both references, and to include multi-gesture interaction, or basically a system capable of recognition of multiple gestures, per Bradski, as mapped above, with additional motivation to increase ease of interactivity for a user with a system. The prior art included each element recited in claim 1, although not necessarily in a single embodiment, with the only difference between the claimed elements and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 3: Ota teaches: the method according to claim 1, wherein the sound source is a non-smart device (para. 30, the sound source can be “one of the talking people”, i.e. another person speaking). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s) in view of same to have obtained the above, motivated to provide users with audio/sound interactive capabilities and facilitate communication.

Regarding claim 4: Ota teaches: the method according to claim 1, wherein localizing the sound source includes mapping a location of the sound source in the environment using at least one of audio-based localization, gaze-based localization, or communication-based localization (see e.g. paras. 15-16 and Fig. 8, which teach a combination of audio- and gaze-based localization; and Fig. 3, which adds and illustrates communication-based localization); and generating the threshold distance based on the location of the sound source (see discussion below). It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(s), in view of same, to have obtained the above, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). That is, Ota teaches all of audio-based (using microphone array), gaze-based (gaze detection circuit), and communication-based localization (talking people), as mapped above. Modifying Ota, in view of same, such that the threshold distance is based on the above-mapped location, including user gesture, per Ota, to select a sound source, is all taught and suggested by Ota, and would have been obvious and predictable to one of ordinary skill. Additional motivation can be found in using several means to ascertain the sound source in which a user is most interested, and to most accurately single out said source. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 5: Ota teaches: the method according to claim 4, wherein the audio-based localization includes: obtaining signals from an array of microphones of the computing device, the signals resulting from a sound from the sound source (claim 17, or para. 30: “In operation, while the user is wearing the AR subsystem 500, the user may be interacting with several people, each of whom are talking. When the user looks at one of the talking people, the microphone array 512 is configured to capture audible data emanating from the direction corresponding with the user's gaze.”); and comparing the signals from the array of microphones and/or times of arrival of the signals from the array of microphones to map the location of the sound source (paras. 32-34). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s) in view of same to have obtained the above, motivated to provide users with audio/sound interactive capabilities and facilitate communication.

Regarding claim 6: Ota teaches: the method according to claim 4, wherein the gaze-based localization includes: mapping the location of the sound source based on the focus point of the first gaze (this is mapped in claim 1; see also Ota, paras. 28-32 and/or claim 17). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s) in view of same to have obtained the above, motivated to provide users with audio/sound interactive capabilities and facilitate communication.

Regarding claim 7: Bradski teaches: the method according to claim 4, wherein the communication-based localization includes: communicating, by the computing device, with the sound source using wireless communication (Bradski, Fig. 1: 12 and paras. 172-73, 206, multiple users can communicate wirelessly, and any one or all of them can be a sound source. One non-limiting further example is Fig. 71 of Bradski, showing five users. Ota also teaches users in the same space interacting (Ota, Fig. 3)); and obtaining location information from the wireless communication to map the location of the sound source (para. 624, a non-limiting example of using wireless communication for localization, here using ultra-wide bandwidth (UWB). Another teaching, para. 1517: “the AR system 9501 may detect a location of the user based on visual information and/or additional information (e.g., GPS location information, compass information, wireless network information).”). It would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(s), in view of same, to have obtained the above, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). That is, to include multiple users with wireless connectivity, per Bradski, in communication with each other, per both references, and using wireless communication for location information, per Bradski, as locations of sounds (per either reference), as mapped above. The prior art included each element recited in claim 7, although not necessarily in a single embodiment, with the only difference between the claimed elements and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 8: Bradski teaches: the method according to claim 7, wherein the wireless communication is ultra-wideband (UWB) (paras. 624, 886, UWB connectivity is known). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s) in view of same to have obtained the above, motivated to make use of known connectivity protocols.

Regarding claim 9: Bradski teaches: the method according to claim 7, wherein the wireless communication is Bluetooth (para. 528, Bluetooth is known). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s) in view of same to have obtained the above, motivated to make use of known connectivity protocols.

Regarding claim 10: Bradski teaches: the method according to claim 7, wherein the wireless communication is WiFi (para. 528, WiFi is known). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s) in view of same to have obtained the above, motivated to make use of known connectivity protocols.

Regarding claim 11: Ota teaches: the method according to claim 1, wherein the interactive feature is an augmented reality control, and the method further including triggering a function of the computing device based on an interaction with the augmented reality control (para. 35, select an operation from a pop-up; the operation can be translation). It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(s), in view of same, to have obtained the above, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). That is, to modify Ota such that a pop-up dialog box can be incorporated, per Ota, as a selectable interactive feature, per Ota, to trigger a function, such as translation, per Ota, or a transcription, also per Ota. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 12: Bradski teaches: the method according to claim 11, wherein the gesture performed by the user includes pointing a finger at the augmented reality control (Bradski, “Finger Gestures” beginning at para. 1040). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s) in view of same to have obtained the above, motivated to make use of known interactive input protocols.

Regarding claim 13: Ota teaches: the method according to claim 11, wherein the gesture performed by the user includes speaking a command (see para. 35; voice command and gaze (mapped in claim 1) can be used to trigger commands). It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(s), in view of same, to have obtained the above, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). That is, to combine the gaze selection and voice command features of Ota, for use in interactivity with the device of Ota. Ota teaches all of these features. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 14: Ota teaches: the method according to claim 11, wherein the augmented reality control is a transcript (para. 45, readable transcription). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s) in view of same to have obtained the above, motivated to make use of known interactive graphics to facilitate communication.

Regarding claim 15: Ota or Bradski teach: the method according to claim 11, wherein the augmented reality control is graphic including at least one virtual buttons (Ota: Fig. 4, one example of a graphic message, and para. 35, popup dialog box to select operations) (Bradski, para. 1068). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s) in view of same to have obtained the above, motivated to make use of known connectivity protocols.
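
For readers outside the art: the limitation the examiner glosses above, a focus point staying within a threshold distance of the localized source for more than a threshold time, is a standard dwell-based gaze-selection pattern, and claim 2 (rejected over Zhang below) narrows the distance test to a bounding-box check. A minimal Python sketch of that logic, illustrative only: this is not code from Ota, Bradski, or the application, and the class name and threshold values are invented.

```python
import math
import time

DIST_THRESHOLD_M = 0.3   # invented value: max focus-to-source distance
DWELL_THRESHOLD_S = 0.8  # invented value: minimum dwell time

class DwellSelector:
    """Fires once the gaze focus point has stayed within the threshold
    distance of a localized sound source for longer than the dwell time,
    i.e. claim 1's "determining" step as paraphrased in the rejection."""

    def __init__(self):
        self._dwell_start = None

    def update(self, focus_xyz, source_xyz, now=None):
        now = time.monotonic() if now is None else now
        if math.dist(focus_xyz, source_xyz) >= DIST_THRESHOLD_M:
            self._dwell_start = None   # gaze wandered off: reset the timer
            return False
        if self._dwell_start is None:
            self._dwell_start = now    # gaze just arrived near the source
        return (now - self._dwell_start) > DWELL_THRESHOLD_S

def focus_in_bbox(focus_xy, bbox):
    """Claim 2's variant: the focus point lies inside a bounding box
    (x0, y0, x1, y1) surrounding the source, e.g. a detected face."""
    x0, y0, x1, y1 = bbox
    return x0 <= focus_xy[0] <= x1 and y0 <= focus_xy[1] <= y1
```

On a positive result, the device would then anchor the interactive feature (e.g. Ota's readable transcription) to the selected source.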
Regarding claim 16 (see also claim 1; many features are recited also in claim 1): Ota teaches: a computing device (para. 24, head-mounted display), comprising: a microphone array configured to capture sounds from a sound source (Fig. 5: 512, microphone array); a heads-up display configured to display messages corresponding to the sounds from the sound source in an environment (Fig. 5: 502, visual display unit of the head-mounted display, in combination with e.g. Fig. 3 or claims 11 or 12); a gaze sensor configured to monitor one or both eyes of a user (Fig. 5: 508, gaze detection unit; see description of same in para. 28)… a camera configured to capture images of the environment (paras. 17-18, camera array); and a processor in communication with the microphone array, the heads-up display, the gaze sensor, the wireless module, and the camera (paras. 53-54, processors can be part of the devices/embodiments of Ota), the processor configured to: detect a location of the sound source (para. 20, the HMD “includes multiple directional microphones 106 to discriminate from a variety of sound sources that may be coming from a variety of directions.” See also another example teaching in claim 17); determine that the location corresponds to the sound source (para. 20)… detect a first gaze of the user directed to the highlighted sound source for a period of time (para. 20, in combination with paras. 32-34, the period of time being that necessary for the gaze detection circuit (Fig. 7: 702) of Ota to register and process a gaze); display, in response to the first gaze, an interactive feature anchored to the sound source in the environment (see mapping to claim 1); and detect a second gaze (mapped above re: Ota, and in claim 1) directed to the interactive feature combined with a gesture performed by the user to interact with the interactive feature (see mapping to claim 1).

Regarding: a wireless module configured to communicate with a device (Bradski, Fig. 1: 12, communication arrows between device and between devices. Another teaching: para. 546)… and display a highlight for the sound source in the heads-up display (Bradski, para. 891, highlighting areas for attention is known), these claim features are taught by Bradski, as mapped herein above.

It would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(s), in view of same, to have obtained the above, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). That is, to modify Ota so as to include wireless modules, per Bradski, and highlight display features, per Bradski, for the sound source, per Ota (i.e. to draw user attention, as the purpose of Ota is to establish which sound source is of most significance to a user), is all taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill; and to have modified Ota, in view of same, so as to include the use of multiple selection techniques (gaze, and gesture and/or voice command, all per Ota) to interact with the message (both messages and interactivity are taught by Ota). The prior art included each element recited in claim 16, although not necessarily in a single embodiment, with the only difference between the claimed elements and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 17: Ota teaches: the computing device according to claim 16, wherein the processor is further configured to detect a gaze (claim 14; also mapped in claim 1) and an additional pre-determined user action to select the sound source for registration (para. 18 or 29, Ota teaches that its device can detect gestures made by the user) (alternatively, see Bradski and the mapping to claim 1, additional gestures); mapping a location of the selected sound source in a global space using any combination of audio-based localization, gaze-based localization, and communication-based localization (see e.g. Ota, paras. 15-16 and Fig. 8, which teach a combination of audio- and gaze-based localization; and Fig. 3, which adds and illustrates communication-based localization); and generating the threshold distance based on the location of the sound source (see discussion below). It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(s), in view of same, to have obtained the above, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A). That is, Ota teaches all of audio-based (using microphone array), gaze-based (gaze detection circuit), and communication-based localization (talking people), as mapped above. Modifying Ota, in view of same, such that the threshold distance is based on the above-mapped location, including user gesture, per Ota, to select a sound source, is all taught and suggested by Ota, and would have been obvious and predictable to one of ordinary skill. Additional motivation can be found in using several means to ascertain the sound source in which a user is most interested, and to most accurately single out said source. One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.

Regarding claim 18: Ota teaches: the computing device according to claim 16, wherein the interactive feature is an augmented reality control (see mapping to claim 16; Ota, para. 25, the message can include a popup dialog box; another teaching from Ota, para. 26: “Alternatively, the presentation of the AR content may be on a sidebar, in a margin, in a popup window, in a separate screen, as scrolling text (e.g., in a subtitle format), or the like.”) and the processor is further configured to: trigger a function of the computing device based on an interaction with the augmented reality control (para. 35, can select an operation). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s) in view of same to have obtained the above, motivated to provide users with audio/sound interactive capabilities and facilitate communication amongst users.
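
The "times of arrival" comparison in claim 5, like the microphone array recited in claim 16, corresponds to classic time-difference-of-arrival (TDOA) processing. A minimal far-field Python sketch for a single microphone pair, again illustrative only and not drawn from either reference; the function names are invented:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def estimate_delay(sig_a: np.ndarray, sig_b: np.ndarray, fs: float) -> float:
    """Delay of sig_b relative to sig_a, from the cross-correlation peak."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(sig_a) - 1)
    return lag_samples / fs

def bearing_from_delay(delay_s: float, mic_spacing_m: float) -> float:
    """Far-field bearing (radians from broadside) implied by the arrival-time
    difference between two microphones spaced mic_spacing_m apart."""
    sin_theta = SPEED_OF_SOUND * delay_s / mic_spacing_m
    return float(np.arcsin(np.clip(sin_theta, -1.0, 1.0)))
```

With three or more microphones, pairwise bearings (or a least-squares fit over all pairwise delays) yield a 2D/3D position; per the mapping above, Ota instead uses the user's gaze direction to choose which microphones of the array to listen through.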
Regarding claim 19: Ota teaches: the computing device according to claim 18, wherein the augmented reality control includes a text transcript or a text translation (paras. 15-16) that is updated in real time with the sounds (para. 15, real-time translation of sounds; see also paras. 20, 22, and para. 35, “always perform translation”). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s) in view of same to have obtained the above, motivated to provide users with audio/sound interactive capabilities and facilitate communication amongst users, including ones of different native languages.

Regarding claim 20: Ota teaches: the computing device according to claim 18, wherein the augmented reality control includes a graphical menu or graphical controls (para. 26, “AR content may be on a sidebar, in a margin, in a popup window, in a separate screen, as scrolling text (e.g., in a subtitle format), or the like”). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s) in view of same to have obtained the above, motivated to provide users with audio/sound interactive capabilities and facilitate communication amongst users.

Regarding claim 21: Bradski teaches: the computing device according to claim 16, wherein the sound source is a smart device (Fig. 1: 12 and related description, smart devices as user devices). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s) in view of same to have obtained the above, motivated to make use of known hardware to perform tasks.

Regarding claim 22: see claim 3. These claims are similar; the same rationale for rejection applies.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Ota in view of Bradski, and further in view of Zhang (U.S. Patent No. 11,681,364).

Regarding claim 2: The references applied to claim 1 do not proactively teach claim 2. Consider the following. In analogous art, Zhang teaches: the method according to claim 1, wherein the determining that the distance between the focus point of the first gaze and the sound source is less than the threshold distance includes: determining whether the focus point of the first gaze is within a bounding box surrounding the sound source (see e.g. Fig. 5: 505 and related description, which shows bounding boxes around the faces of persons of interest, used in gaze determination/prediction; see also C11, first two full paragraphs of that column). It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied references in view of same to have obtained the above, that is, to include the bounding-box gaze detection features of Zhang to facilitate the gaze detection of Ota. Further motivation is to provide users with audio/sound interactive capabilities and facilitate communication amongst users.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

* * * * *

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sarah Lhymn, whose telephone number is (571) 270-0632. The examiner can normally be reached M-F, 9:00 AM to 6:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu, can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Sarah Lhymn
Primary Examiner
Art Unit 2613

/Sarah Lhymn/
Primary Examiner, Art Unit 2613

Prosecution Timeline

May 02, 2024
Application Filed
Dec 26, 2025
Non-Final Rejection — §103
Mar 05, 2026
Examiner Interview Summary
Mar 05, 2026
Applicant Interview (Telephonic)
Mar 18, 2026
Response Filed
Mar 29, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602882
AUGMENTED REALITY DISPLAY DEVICE AND AUGMENTED REALITY DISPLAY SYSTEM
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12602764
METHODS OF ARTIFICIAL INTELLIGENCE-ASSISTED INFRASTRUCTURE ASSESSMENT USING MIXED REALITY SYSTEMS
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12602746
SYSTEM AND METHOD FOR BACKGROUND MODELLING FOR A VIDEO STREAM
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12585888
AUTOMATICALLY GENERATING DESCRIPTIONS OF AUGMENTED REALITY EFFECTS
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12586163
INTERACTIVELY REFINING A DIGITAL IMAGE DEPTH MAP FOR NON DESTRUCTIVE SYNTHETIC LENS BLUR
Granted Mar 24, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 65%
With Interview: 81% (+15.2%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate
Based on 546 resolved cases by this examiner. Grant probability derived from career allow rate.
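
The projection arithmetic looks additive on top of the career allow rate; a hedged Python reconstruction (the vendor's exact model is not disclosed, and whether the lift applies to the overall or the no-interview baseline is an assumption):

```python
career_allow = 357 / 546               # ≈ 0.654 -> the 65% grant probability
with_interview = career_allow + 0.152  # ≈ 0.806 -> displayed as 81%
```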
