Prosecution Insights
Last updated: April 19, 2026
Application No. 18/086,407

NATURAL AND INTERACTIVE 3D VIEWING ON 2D DISPLAYS

Status: Final Rejection (§103)
Filed: Dec 21, 2022
Examiner: NGUYEN, KATHLEEN V
Art Unit: 2486
Tech Center: 2400 — Computer Networks
Assignee: Adeia Guides Inc.
OA Round: 2 (Final)
Grant Probability: 66% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 6m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 66% — above average (188 granted / 287 resolved; +7.5% vs TC avg)
Interview Lift: +26.0% — strong (resolved cases with interview)
Typical Timeline: 2y 6m avg prosecution; 23 currently pending
Career History: 310 total applications across all art units
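The headline figures reconcile with simple arithmetic. Here is a minimal sketch of how they appear to be derived; the function and variable names below are ours for illustration, not the tool's:

```python
# Sketch of how the dashboard's headline figures appear to be derived.
# Function and variable names are hypothetical, not the tool's.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview(base_rate: float, interview_lift: float) -> float:
    """Grant probability after adding the observed interview lift, capped at 100%."""
    return min(100.0, base_rate + interview_lift)

base = allow_rate(granted=188, resolved=287)          # -> 65.5..., displayed as 66%
boosted = with_interview(base, interview_lift=26.0)   # -> 91.5..., displayed as 92%
print(f"{base:.0f}% career allow rate; {boosted:.0f}% with interview")
```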

Statute-Specific Performance

§101: 2.6% (-37.4% vs TC avg)
§103: 59.3% (+19.3% vs TC avg)
§102: 19.6% (-20.4% vs TC avg)
§112: 16.7% (-23.3% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 287 resolved cases
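As a consistency check, subtracting each delta from the examiner's rate recovers the same baseline in every row, which suggests the Tech Center average estimate is a flat line near 40%. A quick sketch of that check (the 40% reading is our inference from the published deltas, not a figure the tool states):

```python
# Consistency check: examiner rate minus delta should recover the Tech Center
# average estimate. The ~40% baseline is inferred from the data, not documented.
rates  = {"§101": 2.6,   "§103": 59.3, "§102": 19.6,  "§112": 16.7}
deltas = {"§101": -37.4, "§103": 19.3, "§102": -20.4, "§112": -23.3}
for statute in rates:
    tc_avg = rates[statute] - deltas[statute]
    print(f"{statute}: implied TC average = {tc_avg:.1f}%")  # 40.0% in all four rows
```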

Office Action

DETAILED ACTION

This Office Action is in response to the amendment filed on 11/17/2025, wherein claims 10-50 and 60-113 have been cancelled. Claims 1-9, 51-59 and 114-116 have been examined and are pending. This Action is Final.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment/Argument

Applicant's arguments with respect to independent claims 1 and 51, filed on 11/17/2025, have been considered but are moot in view of the new ground of rejection. The combination of Delamont and Park discloses all the limitations as cited in independent claims 1 and 51. See the following rejection.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under AIA 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

4. Claims 1-2, 4, 6, 51-52, 54, 56, 114 and 116 are rejected under 35 U.S.C. 103 as being unpatentable over Delamont (U.S. 2020/0368616), in view of Park et al. (US 2022/0086409), hereinafter Park.

Regarding claims 1 and 51, Delamont discloses a method, and a system comprising circuitry configured to perform: determining a value for an effect implemented to display a two-dimensional (2D) representation of a three-dimensional (3D) scene on a 2D display device, wherein the effect provides a 3D-like effect in a 2D environment (Delamont [0335]-[0340], [0057], [0119], [0076], [0079], [0121], [0064]-[0067]: a 3D image is converted into a 2D image, hence a value for an effect for display, which can be shown on display 3 so that the image appears as a 3D formed image, hence providing a 3D-like effect in a 2D environment);
determining a first user input during the display of the 2D representation of the 3D scene on the 2D display device; modifying the value for the effect; changing the display based on the modified value; determining a second user input during the changed display; analyzing at least one of the value, the first user input, the modified value, the second user input, the data indicative of user viewing satisfaction, or the at least one user preference for the effect to determine an optimized value for the effect; and generating the changed display on the 2D display device utilizing the optimized value for the effect (Delamont [0174]-[0177], [0847]-[0856], [1087]: gesture recognition module to capture users' specific hand gestures to control game objects and invoke changes in projected images; [1062]: precise mapping of the position of users' fingers and hands can be performed using gloves, for applying transformations and the adjustment of the external projector 96's rotation, orientation, pan and tilt based on the detected hand gesture movement; [1204]: the rendered 3D scenes are converted into 2D images for display; [0248]: the user's hand gestures or voice commands are used to move an AI character being displayed on display 3, hence determining user input during the displaying, modifying a value for an effect and changing the display based on the modified value; [0877]: when the user touches the screen, the detection of a touch screen input on the display panel touch screen may invoke the displaying of a change in an in-game rendered scenery of the surrounding space, which may be seen via the display 3 screen; [2019]: detecting hand and wrist movement to control display of a sword on display 3; [1856], [1928], Fig. 11: wired gloves 245 can be used to provide accurate hand gesture input with direct feedback for complex hand and finger gestures at a faster rate; [0088], [0131]: the sensors include depth sensors which are used to track the user's directional movements, focal point, head orientation and head movements so that the system can make adjustments accordingly to the rendering of the augmented displayed surface renderings and virtual game objects. Hence, determining a second user input during the display, which can include a changed display based on a previous user input, analyzing at least one of the value, first user input, modified value, or second user input to determine an optimized value for the effect, which is the displayed image that has a change in the image object or image scene based on movement, and generating the changed display).

Delamont does not explicitly disclose determining data indicative of user viewing satisfaction, and determining at least one user preference for the effect based at least in part on the determined data indicative of user viewing satisfaction. However, Park discloses determining data indicative of user viewing satisfaction and determining at least one user preference for the effect based at least in part on the determined data indicative of user viewing satisfaction (Park [0211]: the artificial intelligence model can request the user's feedback, e.g. a satisfaction score, for the display device to use the user's feedback and the user's selection frequency, which both show user preference, to recommend image quality information; [0193]: obtain image quality information reflecting the user's tendency or preference information from predetermined input data using the trained artificial intelligence. Hence, determining data indicative of user viewing satisfaction and a user preference for a display effect).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Delamont, and further incorporate determining data indicative of user viewing satisfaction and determining at least one user preference for the effect based at least in part on the determined data indicative of user viewing satisfaction, as taught by Park, to provide image quality information based on the user's preference and improve the user experience without manual adjustment of the display device's screen settings by the user (Park [0004], [0022]).

Regarding claims 2 and 52, Delamont and Park disclose all limitations of claims 1 and 51, respectively. Delamont discloses comprising at least one of:
a. wherein the determining the first user input includes detecting, with a movement module, a movement during the display;
b. wherein the determining the first user input includes determining, with a depth module, a depth parameter during the display;
c. wherein the determining the second user input includes determining, with a group feedback module, group feedback during the changed display;
d. wherein the determining the second user input includes determining, with a user feedback module, a user feedback during the changed display; or
e. wherein the analyzing includes analyzing, with a derivation module, rendering data based on the at least one of the detecting step a, the determining step b, the determining step c, or the determining step d
(Delamont [0174]-[0177], [0847]-[0856], [1087]: gesture recognition module to capture users' specific hand gestures to control game objects and invoke changes in projected images, i.e. step a; [0178]: depth-aware cameras can be used in detection of hand gestures; [0088], [0131]: the sensors include depth sensors which are used to track the user's directional movements, focal point, head orientation and head movements so that the system can make adjustments accordingly to the rendering of the augmented displayed surface renderings and virtual game objects; [0129]: depth data of the scene can be obtained and depth for the display space is created, as in [0131], [0166], [0513], [0997], [1192]-[1194], [1896]-[1897], i.e. step b; [1856], [1928], Fig. 11: wired gloves 245 can be used to provide accurate hand gesture input with direct feedback for complex hand and finger gestures at a faster rate, i.e. step d).

Regarding claims 4 and 54, Delamont and Park disclose all limitations of claims 2 and 52, respectively. Delamont discloses including at least two of steps a-d (citing the same passages: [0174]-[0177], [0847]-[0856], [1087] for step a; [0178], [0088], [0131], [0129], [0166], [0513], [0997], [1192]-[1194], [1896]-[1897] for step b; [1856], [1928], Fig. 11 for step d).
Regarding claims 6 and 56, Delamont and Park disclose all limitations of claims 2 and 52, respectively. Delamont discloses wherein the method includes the detecting step a, wherein the movement includes at least one of a hand movement, an eye movement, or a head movement (Delamont [0174]-[0177], [0847]-[0856], [1087]: gesture recognition module to capture users' specific hand gestures to control game objects and invoke changes in projected images; [1856], [1928], Fig. 11: wired gloves 245 can be used to provide accurate hand gesture input with direct feedback for complex hand and finger gestures at a faster rate; [0248]: the user's hand gestures are used to move an AI character being displayed on display 3).

Regarding claim 114, Delamont and Park disclose all limitations of claim 1. Delamont discloses wherein the value is representative of at least one of: 1) shadow (Delamont [0114], [0652], [0655]), 2) depth ([0057], [0071], [0099], [0131]), 3) motion ([0257], [0248]), 4) color ([0340]-[0341], [0416]), 5) focus ([0409]-[0412]), 6) sharpness, or 7) intensity ([0416]) for the effect implemented to display the 2D representation of the 3D scene on the 2D display device ([0174]-[0177], [0847]-[0856], [1087]: gesture recognition module to capture users' specific hand gestures to control game objects and invoke changes in projected images; [1062]: precise mapping of the position of users' fingers and hands can be performed using gloves, for applying transformations and the adjustment of the external projector 96's rotation, orientation, pan and tilt based on the detected hand gesture movement; [0248]: the user's hand gestures or voice commands are used to move an AI character being displayed on display 3; [0877]: when the user touches the screen, the detection of a touch screen input on the display panel touch screen may invoke the displaying of a change in an in-game rendered scenery of the surrounding space, which may be seen via the display 3 screen; [2019]: detecting hand and wrist movement to control display of a sword on display 3; [1856], [1928], Fig. 11: wired gloves 245 can be used to provide accurate hand gesture input with direct feedback for complex hand and finger gestures at a faster rate; [0088], [0131]: the sensors include depth sensors which are used to track the user's directional movements, focal point, head orientation and head movements so that the system can make adjustments accordingly to the rendering of the augmented displayed surface renderings and virtual game objects).

Regarding claim 116, Delamont and Park disclose all limitations of claim 1. Delamont does not explicitly disclose wherein the data is acquired via an active measurement of viewer satisfaction or a passive measurement of viewer satisfaction. However, Park discloses wherein the data is acquired via an active measurement of viewer satisfaction or a passive measurement of viewer satisfaction (Park [0211]: the artificial intelligence model can request the user's feedback, e.g. a satisfaction score, on the display device, hence an active measurement of viewer satisfaction).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Delamont, and further incorporate acquiring the data via an active measurement of viewer satisfaction or a passive measurement of viewer satisfaction, as taught by Park, to provide image quality information based on the user's preference and improve the user experience without manual adjustment of the display device's screen settings by the user (Park [0004], [0022]).

5. Claims 5 and 55 are rejected under 35 U.S.C. 103 as being unpatentable over Delamont (U.S. 2020/0368616), in view of Park et al. (US 2022/0086409), hereinafter Park, further in view of Hanina et al. (U.S. 2023/0347100), hereinafter Hanina.

Regarding claims 5 and 55, Delamont and Park disclose all limitations of claims 2 and 52, respectively. These claims require each of steps a-d. Delamont discloses steps a, b and d (Delamont [0174]-[0177], [0847]-[0856], [1087]: gesture recognition module to capture users' specific hand gestures to control game objects and invoke changes in projected images, i.e. step a; [0178]: depth-aware cameras can be used in detection of hand gestures; [0088], [0131]: the sensors include depth sensors which are used to track the user's directional movements, focal point, head orientation and head movements so that the system can make adjustments accordingly to the rendering of the augmented displayed surface renderings and virtual game objects; [0129]: depth data of the scene can be obtained and depth for the display space is created, as in [0131], [0166], [0513], [0997], [1192]-[1194], [1896]-[1897], i.e. step b; [1856], [1928], Fig. 11: wired gloves 245 can be used to provide accurate hand gesture input with direct feedback for complex hand and finger gestures at a faster rate, i.e. step d). Delamont does not explicitly disclose step c. However, Hanina discloses step c (Hanina Fig. 5, [0122]-[0126], [0132]: a visual code or codes can be generated based on feedback from one or more participants 505 of a group of participants; the code or codes are displayed to the group of participants 505 as a group using display 502). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Delamont and Park, and further incorporate step c, as taught by Hanina, to improve the attention of users (Hanina [0093]).

6. Claims 3 and 53 are rejected under 35 U.S.C. 103 as being unpatentable over Delamont (U.S. 2020/0368616), in view of Park et al. (US 2022/0086409), hereinafter Park, further in view of Parland (U.S. 2020/0393909).

Regarding claims 3 and 53, Delamont and Park disclose all limitations of claims 2 and 52, respectively. Delamont does not explicitly disclose training, with a neural network module, a model based on the at least one of the detecting step a, the determining step b, the determining step c, or the determining step d. However, Parland discloses training, with a neural network module, a model based on the at least one of the detecting step a, the determining step b, the determining step c, or the determining step d (Parland [0068]: gesture detection module for a video session; a neural network can be used to train a machine learning algorithm or model to detect gesture completion, such as moving a finger or hand away from a projection screen, or completion of any other gesture.
Hence, training, with a neural network module, a model based on at least step a). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Delamont and Park, and further incorporate training, with a neural network module, a model based on the at least one of the detecting step a, the determining step b, the determining step c, or the determining step d, as taught by Parland, to improve the digital capturing and recognition of gestures (Parland [0028], [0007]).

7. Claims 7-8 and 57-58 are rejected under 35 U.S.C. 103 as being unpatentable over Delamont (U.S. 2020/0368616), in view of Park et al. (US 2022/0086409), hereinafter Park, further in view of Boyes (U.S. 2013/0083252).

Regarding claims 7 and 57, Delamont and Park disclose all limitations of claims 2 and 52, respectively. Delamont does not explicitly disclose wherein a speed of alteration of the changed display is based on the movement. However, Boyes discloses a speed of alteration of the changed display based on the movement (Boyes Figs. 1-3, [0026]: light detector 13; [0028]: a user views images displayed on a display 2; as the user moves, the controller identifies and tracks the head or body of the user 10 based on gesture code from the light detector 13 to determine the position of the user 10 relative to the display screen 2, and adjusts the image on the display screen 2 in real time; when the user moves to the left, the controller pans the video image at the same speed as the user 10 and changes the image to display additional visual information of the right side of the image on the display screen 2. Hence, a speed of alteration of the changed display is based on the movement of the user). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Delamont and Park, and further incorporate having a speed of alteration of the changed display based on the movement, as taught by Boyes, to enhance the viewer's experience while maintaining the system's simplicity (Boyes [0028], [0007]).

Regarding claims 8 and 58, Delamont and Park disclose all limitations of claims 7 and 57, respectively. Delamont does not explicitly disclose wherein the speed of the alteration of the changed display based on the movement is adjusted based on at least one of the determining step d, or the analyzing step e. However, Boyes discloses the speed of the alteration of the changed display based on the movement being adjusted based on at least one of the determining step d, or the analyzing step e (Boyes Figs. 1-3, [0026]: light detector 13; [0028]: a user views images displayed on a display 2; as the user moves, the controller identifies and tracks the head or body of the user 10 based on gesture code from the light detector 13 to determine the position of the user 10 relative to the display screen 2, and adjusts the image on the display screen 2 in real time; when the user moves to the left, the computer controller pans the video image at the same speed as the user 10 and changes the image to display additional visual information of the right side of the image on the display screen 2. Hence, a speed of alteration of the changed display based on the movement is adjusted based on at least the analyzing of step e, which includes analyzing, with a derivation module, rendering data based on at least step a of detecting the user movement).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Delamont and Park, and further incorporate having the speed of the alteration of the changed display based on the movement adjusted based on at least one of the determining step d, or the analyzing step e, as taught by Boyes, to enhance the viewer's experience while maintaining the system's simplicity (Boyes [0028], [0007]).

8. Claims 9 and 59 are rejected under 35 U.S.C. 103 as being unpatentable over Delamont (U.S. 2020/0368616), in view of Park et al. (US 2022/0086409), hereinafter Park, in view of Perea-OcHoa (U.S. 2023/0152961), in view of O'Sullivan et al. (U.S. 2010/0079500), hereinafter O'Sullivan, further in view of Williamson et al. (U.S. 2010/0123737), hereinafter Williamson.

Regarding claims 9 and 59, Delamont and Park disclose all limitations of claims 6 and 56, respectively. Delamont discloses wherein the movement includes the hand movement (Delamont [0174]-[0177], [0847]-[0856], [1087]: gesture recognition module; [0248]: the user's hand gestures or voice commands are used to move an AI character being displayed on display 3, hence determining user input during the displaying, modifying a value for an effect and changing the display based on the modified value; [0877]: when the user touches the screen, the detection of a touch screen input on the display panel touch screen may invoke the displaying of a change in an in-game rendered scenery of the surrounding space, which may be seen via the display 3 screen; [2019]: detecting hand and wrist movement to control display of a sword on display 3).

Delamont does not explicitly disclose wherein the hand movement includes at least one of a left-right hand movement, an up-down hand movement, or an opening-closing fingers movement, wherein the opening-closing fingers movement is converted to a zoom-in-zoom-out movement in the changed display. However, Perea-OcHoa discloses wherein the opening-closing fingers movement is converted to a zoom-in-zoom-out movement in the changed display (Perea-OcHoa [0392], Figs. 7-8 and 12: a gesture of two or more touch points 521 can be recognized and localized by the artificial intelligence module 25; [0438]: a gesture for generating touch points 521, which can be used on a 2-dimensional display; Fig. 7, [0318]: predesigned operating gestures in a gesture dictionary 402 with an opening fingers movement that is converted to a zoom-in movement of the displayed image, and a closing fingers movement that is converted to a zoom-out movement of the displayed image). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Delamont and Park, and further incorporate having wherein the opening-closing fingers movement is converted to a zoom-in-zoom-out movement in the changed display, as taught by Perea-OcHoa, for the user to easily control the display with convenience (Perea-OcHoa [0003]).

Delamont does not explicitly disclose wherein the left-right hand movement is converted to a pan movement in the changed display. However, O'Sullivan discloses wherein the left-right hand movement is converted to a pan movement in the changed display (O'Sullivan Fig. 8, [0039]: camera 805 detects movement of a user's hand to control pan, zoom or the like of the graphical object displayed on monitor 110.
To pan the graphical object, the camera detects movement of the user's hand moving left for left panning or right for right panning). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Delamont, Park and Perea-OcHoa, and further incorporate having wherein the left-right hand movement is converted to a pan movement in the changed display, as taught by O'Sullivan, for the user to control the display with convenience (O'Sullivan [0039]).

Delamont does not explicitly disclose wherein the up-down hand movement is converted to a tilt movement in the changed display. However, Williamson discloses wherein the left-right hand movement is converted to a pan movement in the changed display, and wherein the up-down hand movement is converted to a tilt movement in the changed display (Williamson Fig. 4, [0039], [0041]: a device 100 includes a touch-sensitive display 102 to display images; [0087]: the user can change the direction (azimuth angle, i.e. pan angle) of the field of view displayed on the display 102 by grabbing and dragging the image to the left or right, or by one or more finger movements swiping across the touch-sensitive display in a desired direction, hence the left-right hand movement is converted to a pan movement in the changed display; the user can change the tilt of the field of view of the displayed image by grabbing and dragging the image up and down, or by one or more finger movements swiping across the touch-sensitive display in a desired direction). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the system and method, as disclosed by Delamont, Park, Perea-OcHoa and O'Sullivan, and further incorporate having wherein the left-right hand movement is converted to a pan movement in the changed display, and wherein the up-down hand movement is converted to a tilt movement in the changed display, as taught by Williamson, for the user to control the display with convenience (Williamson [0087]).

Allowable Subject Matter

Claim 115 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: the prior art, individually or in combination, fails to disclose, within the context of claim 115, the feature that the analyzing the at least one of the value, the first user input, the modified value, the second user input, the data indicative of user viewing satisfaction, or the at least one user preference for the effect to determine the optimized value for the effect includes analyzing each of the value, the first user input, the modified value, the second user input, the data indicative of user viewing satisfaction, and the at least one user preference for the effect to determine the optimized value for the effect, as cited in claim 115.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHLEEN V NGUYEN, whose telephone number is (571) 270-0626. The examiner can normally be reached M-F, 9:00am-6:00pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jamie Atala, can be reached at 571-272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KATHLEEN V NGUYEN/
Primary Examiner, Art Unit 2486
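For orientation, the independent claims recite a closed feedback loop: display the effect, read a first user input, modify the effect's value, change the display, read a second input together with satisfaction data, then settle on an optimized value. Below is a minimal, hypothetical sketch of such a loop in Python; every name and number is our illustration rather than the application's actual implementation, and the gesture conversions loosely mirror the zoom/pan mappings the cited references describe.

```python
# Hypothetical sketch of the feedback loop recited in independent claims 1/51.
# All names and numbers are illustrative; nothing here is from the application.

def clamp(v: float) -> float:
    """Keep an effect value (e.g., depth-cue strength) within [0, 1]."""
    return max(0.0, min(1.0, v))

# Gesture-to-effect conversions of the kind the cited art describes:
# opening/closing fingers -> zoom in/out (cf. Perea-OcHoa),
# left-right hand -> pan (cf. O'Sullivan, Williamson).
GESTURE_DELTAS = {
    "fingers_open": +0.2,   # zoom in: strengthen the 3D-like effect
    "fingers_close": -0.2,  # zoom out: weaken it
    "hand_left": -0.1,      # pan left
    "hand_right": +0.1,     # pan right
}

def optimize_effect(initial: float, first_gesture: str,
                    second_gesture: str, satisfaction: float) -> float:
    """Two rounds of user input plus a satisfaction score in [0, 1]
    yield an optimized effect value, per the claimed analyze step."""
    value = clamp(initial)                                   # display 2D view of 3D scene
    modified = clamp(value + GESTURE_DELTAS[first_gesture])  # first input -> modified value
    # The second input arrives during the changed display; weight the correction
    # by the viewer's reported satisfaction (cf. Park's satisfaction score).
    optimized = clamp(modified + satisfaction * GESTURE_DELTAS[second_gesture])
    return optimized

# Example: pinch-out gesture, then a small leftward correction at 0.8 satisfaction.
print(round(optimize_effect(0.5, "fingers_open", "hand_left", satisfaction=0.8), 2))  # 0.62
```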

Prosecution Timeline

Dec 21, 2022 — Application Filed
Jul 12, 2025 — Non-Final Rejection (§103)
Oct 22, 2025 — Interview Requested
Oct 29, 2025 — Examiner Interview Summary
Oct 29, 2025 — Applicant Interview (Telephonic)
Nov 17, 2025 — Response Filed
Mar 02, 2026 — Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593133
TRACKING SYSTEM
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12587674
BIT DEPTH VARIABLE FOR HIGH PRECISION DATA IN WEIGHTED PREDICTION SYNTAX AND SEMANTICS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12578680
APPARATUS AND METHOD FOR REPRODUCING HOLOGRAM IMAGE
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12574619
DISPLAY CALIBRATION MECHANISM AND EXTERNALLY-HUNG THERMAL IMAGING DEVICE
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12563232
IMAGE FILE FORMAT FOR MULTIPLANE IMAGES
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 66%
With Interview: 92% (+26.0%)
Median Time to Grant: 2y 6m
PTA Risk: Moderate
Based on 287 resolved cases by this examiner. Grant probability is derived from the career allow rate.
