Prosecution Insights
Last updated: April 19, 2026
Application No. 18/687,193

RENDERING IMAGE CONTENT

Non-Final OA (§102, §103)
Filed: Feb 27, 2024
Examiner: TEKLE, DANIEL T
Art Unit: 2481
Tech Center: 2400 — Computer Networks
Assignee: Mo-Sys Engineering Limited
OA Round: 1 (Non-Final)
Grant Probability: 63% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
With Interview: 56%

Examiner Intelligence

Career Allow Rate: 63% (462 granted / 732 resolved; +5.1% vs TC avg)
Interview Lift: -6.9% (minimal negative lift, based on resolved cases with interview)
Typical Timeline: 3y 4m average prosecution; 46 applications currently pending
Career History: 778 total applications across all art units
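The career figures above reduce to simple ratios. As a sanity check, a minimal sketch (Python, using only the numbers reported on this page) of how the allow rate and pending count fit together:

```python
# Career statistics for this examiner, as reported above.
granted = 462
resolved = 732
total_applications = 778

allow_rate = granted / resolved          # career allow rate
pending = total_applications - resolved  # applications still open

print(f"Career allow rate: {allow_rate:.1%}")  # 63.1%, displayed as 63%
print(f"Currently pending: {pending}")         # 46
```

The 462/732 ratio comes out to 63.1%, consistent with the rounded 63% headline figure, and 778 − 732 matches the 46 pending applications shown.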

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 46.9% (+6.9% vs TC avg)
§102: 33.5% (-6.5% vs TC avg)
§112: 4.1% (-35.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 732 resolved cases.
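The "vs TC avg" deltas let the Tech Center baselines be recovered by subtraction. A small sketch, assuming the deltas are plain percentage-point differences:

```python
# Examiner's statute-specific allow rates and their deltas vs the
# Tech Center average, as listed above (all in percentage points).
examiner = {"101": 8.7, "102": 33.5, "103": 46.9, "112": 4.1}
delta_vs_tc = {"101": -31.3, "102": -6.5, "103": 6.9, "112": -35.9}

# TC average = examiner rate minus the reported delta.
tc_avg = {s: examiner[s] - delta_vs_tc[s] for s in examiner}
for statute in sorted(tc_avg):
    print(f"\u00a7{statute}: TC avg \u2248 {tc_avg[statute]:.1f}%")
```

Notably, all four baselines come out to 40.0%, which suggests the tool compares every statute against a single Tech-Center-wide figure rather than per-statute averages.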

Office Action

Grounds of rejection: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4, 10-13 and 15-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Cordes et al., US 2020/0145644.

Regarding claim 1, Cordes teaches:

1. A control system for controlling a display screen forming a background to a video set wherein multiple cameras are available to capture video of a subject against the screen

[0101] Some embodiments of the invention include multiple taking cameras. For example, FIG. 7 illustrates an exemplary embodiment of an immersive content production system 700 that includes two taking cameras 112a and 112b and FIG. 8 is a simplified top view of production system 700. FIGS. 7 and 8 depict a performer 210 in a performance area 102 surrounded at least partially by multiple displays 104 that display scenery images 214 to be captured by the multiple taking cameras. The multiple taking cameras (shown as a first taking camera 112a and a second taking camera 112b) can be directed at a performance area 102 (including the virtual environment presented on the displays 104 (e.g., the LED or LCD display walls) to concurrently capture images. Although only one performer 210 is depicted in the performance area 102 in FIG. 7, multiple performers can be within the performance area as well as multiple props and set decorations.
Cordes, 0101-0103, emphasis added.

and an output selector is configured to receive captured video streams from the cameras and select the stream from a determined one of those cameras as its output,

[0103] In some instances the fields of view of the multiple taking cameras will overlap as indicated by region 750 shown in each of FIGS. 7 and 8. Since the perspective-correct renderings of the multiple cameras can be different in the overlapping regions, embodiments of the invention can interleave the cameras and the perspective-correct renderings for each camera in order to isolate the camera feeds from each other. For example, in a scenario with two taking cameras 112a, 112b in which each camera has a frame rate of 60 fps, camera 112a can be set to capture images on the even frames while camera 112b can be set to capture images on the odd frames. Content production system 700 can be synchronized with the cameras such that it generates and displays content for the area 326a when taking camera 112a is capturing images and generates and displays content for area 326b when taking camera 112b is capturing images. Interleaving the perspective-correct content in this manner ensures that each taking camera is capturing images with a background from scenery 214 that matches the perspective of the camera even in the areas 750 where the cameras have overlapping fields of view.

Cordes, 0101-0103, emphasis added.

the control system comprising a display controller configured to compute an image of a background scene from the point of view of a camera other than the determined one of the cameras.

[0103] In some instances the fields of view of the multiple taking cameras will overlap as indicated by region 750 shown in each of FIGS. 7 and 8.
Since the perspective-correct renderings of the multiple cameras can be different in the overlapping regions, embodiments of the invention can interleave the cameras and the perspective-correct renderings for each camera in order to isolate the camera feeds from each other. For example, in a scenario with two taking cameras 112a, 112b in which each camera has a frame rate of 60 fps, camera 112a can be set to capture images on the even frames while camera 112b can be set to capture images on the odd frames. Content production system 700 can be synchronized with the cameras such that it generates and displays content for the area 326a when taking camera 112a is capturing images and generates and displays content for area 326b when taking camera 112b is capturing images. Interleaving the perspective-correct content in this manner ensures that each taking camera is capturing images with a background from scenery 214 that matches the perspective of the camera even in the areas 750 where the cameras have overlapping fields of view.

Cordes, 0101-0103, emphasis added.

Regarding claim 2, Cordes teaches:

2. A video capture system for capturing video in a set comprising a display screen providing a background and wherein multiple cameras are available to capture video of a subject against the screen,

[0101] Some embodiments of the invention include multiple taking cameras. For example, FIG. 7 illustrates an exemplary embodiment of an immersive content production system 700 that includes two taking cameras 112a and 112b and FIG. 8 is a simplified top view of production system 700. FIGS. 7 and 8 depict a performer 210 in a performance area 102 surrounded at least partially by multiple displays 104 that display scenery images 214 to be captured by the multiple taking cameras.
The multiple taking cameras (shown as a first taking camera 112a and a second taking camera 112b) can be directed at a performance area 102 (including the virtual environment presented on the displays 104 (e.g., the LED or LCD display walls) to concurrently capture images. Although only one performer 210 is depicted in the performance area 102 in FIG. 7, multiple performers can be within the performance area as well as multiple props and set decorations.

Cordes, 0101-0103, emphasis added.

the system comprising: an output selector configured to receive captured video streams from the cameras and select the stream from a determined one of those cameras as its output;

[0102] The taking cameras 112a, 112b can be pointed in different directions and have different fields of views. For example, taking camera 112a can have a field of view defined by frustum 318a while taking camera 112b can have a field of view defined by frustum 318b. Thus, each taking camera 112a, 112b can capture a different portion of the immersive environment presented on displays 104. For example, taking camera 112a can capture portion 326a while taking camera 112b can capture portion 326b.

Cordes, 0101-0103, emphasis added.

and a display controller configured to compute an image of a background scene from the point of view of a camera other than the determined one of the cameras.

[0103] In some instances the fields of view of the multiple taking cameras will overlap as indicated by region 750 shown in each of FIGS. 7 and 8. Since the perspective-correct renderings of the multiple cameras can be different in the overlapping regions, embodiments of the invention can interleave the cameras and the perspective-correct renderings for each camera in order to isolate the camera feeds from each other.
For example, in a scenario with two taking cameras 112a, 112b in which each camera has a frame rate of 60 fps, camera 112a can be set to capture images on the even frames while camera 112b can be set to capture images on the odd frames. Content production system 700 can be synchronized with the cameras such that it generates and displays content for the area 326a when taking camera 112a is capturing images and generates and displays content for area 326b when taking camera 112b is capturing images. Interleaving the perspective-correct content in this manner ensures that each taking camera is capturing images with a background from scenery 214 that matches the perspective of the camera even in the areas 750 where the cameras have overlapping fields of view.

Cordes, 0101-0103, emphasis added.

Regarding claim 3, Cordes teaches:

3. A video capture system as claimed in claim 2, wherein the display controller is configured to provide the image to the screen for display in response to the designation of the output of the other camera for selection by the output selector. Cordes, 0101-0104.

Regarding claim 4, Cordes teaches:

4. A video capture system as claimed in claim 2, comprising a user input device for receiving input from a user, that input representing the said designation of the output of the other camera. Cordes, 0113, 0126.

Regarding claim 10, Cordes teaches:

10. A render engine controller, the controller (Fig. 2, items 16 and 17) being configured to: receive an indication of the location of a first camera with respect to a display wall,

[0079] Motion cameras 122 can be part of a motion capture system that can track the movement of performers or objects within system 100.
In some instances, motion cameras 122 can be used to track the movement of the taking camera 112 and provide a location of the taking camera to content production system 100 as part of the process that determines what portion of displays 104 are rendered from the tracked position of and the perspective of the taking camera.

Cordes, 0079, 0101-0104 and Figs. 6-8, emphasis added.

wherein the first film camera is currently filming a shot, receive an indication of the location of a second film camera with respect to the display wall to which the shot is going to switch next,

[0079] Motion cameras 122 can be part of a motion capture system that can track the movement of performers or objects within system 100. In some instances, motion cameras 122 can be used to track the movement of the taking camera 112 and provide a location of the taking camera to content production system 100 as part of the process that determines what portion of displays 104 are rendered from the tracked position of and the perspective of the taking camera.

Cordes, 0079, 0101-0104 and Figs.
6-8, emphasis added.

the controller being configured to output the location of the first film camera to a first render engine of the plurality of render engines,

[0048] In various embodiments, images of the virtual environment presented on the one or more displays are updated by: rendering a global-view perspective of the virtual environment; rendering a first perspective-correct view of the virtual environment from a location and perspective of the first taking camera for an area of the one or more displays within the frustum of the first taking camera; rendering a second perspective-correct view of the virtual environment from a location and perspective of the second taking camera for an area of the one or more displays within the frustum of the second taking camera; and combining the global-view rendering with the first perspective-correct view rendering and second perspective-correct view rendering such that portions of the virtual environment outside the frustums of the first taking camera and the second taking camera are from the global-view render and portions of the virtual environment within the frustum of the first taking camera and the second taking camera are from the perspective-correct view render.

Cordes, 0048, 0101-0104 and Figs.
6-8, emphasis added.

and output the location of the second film camera to a second render engine of the plurality of render engines,

[0048] — quoted in full above.

Cordes, 0048, 0101-0104 and Figs.
6-8, emphasis added.

each render engine being configured to prepare a respective rendered image for displaying on the display wall in dependence on the location received by each render engine respectively;

[0048] — quoted in full above. Cordes, 0048, 0101-0104 and Figs. 6-8, emphasis added.

the controller being configured to selectively connect the first and second render engines to a display controller for displaying the image rendered by the connected render engine on the display wall.
[0048] — quoted in full above. Cordes, 0048, 0101-0104 and Figs. 6-8, emphasis added.

Regarding claim 11, Cordes teaches:

11. A render engine controller as claimed in claim 10, the controller being configured to connect a render engine to the display controller if that render engine is receiving an output from the controller that indicates the location of the film camera that is being used to film a (current) shot. Cordes, 0101-0104 and Figs. 6-8.

Regarding claim 12, Cordes teaches:

12. A render engine controller as claimed in claim 10, the controller being configured not to connect a render engine to the display controller if that render engine receives an output from the controller that indicates the location of a film camera to which the shot is going to switch. Cordes, 0101-0104 and Figs. 6-8.

Regarding claim 13, Cordes teaches:

13.
The render engine controller as claimed in claim 12, the controller being configured to be connected to fewer render engines than there are film cameras in use. Cordes, 0088-0091.

Regarding claim 15, Cordes teaches:

15. A system comprising a plurality of film cameras directed at a display wall and a plurality of render engines,

[0101] — quoted in full above. Cordes, 0101-0104 and Figs. 6-8, emphasis added.

each render engine being configured to receive an input relating to the position of one of the plurality of film cameras with respect to the display wall,

[0103] In some instances the fields of view of the multiple taking cameras will overlap as indicated by region 750 shown in each of FIGS. 7 and 8. Since the perspective-correct renderings of the multiple cameras can be different in the overlapping regions, embodiments of the invention can interleave the cameras and the perspective-correct renderings for each camera in order to isolate the camera feeds from each other.
For example, in a scenario with two taking cameras 112a, 112b in which each camera has a frame rate of 60 fps, camera 112a can be set to capture images on the even frames while camera 112b can be set to capture images on the odd frames. Content production system 700 can be synchronized with the cameras such that it generates and displays content for the area 326a when taking camera 112a is capturing images and generates and displays content for area 326b when taking camera 112b is capturing images. Interleaving the perspective-correct content in this manner ensures that each taking camera is capturing images with a background from scenery 214 that matches the perspective of the camera even in the areas 750 where the cameras have overlapping fields of view.

Cordes, 0101-0104 and Figs. 6-8, emphasis added.

and wherein each render engine is configured to output a rendered image in dependence on the position received by each render engine;

[0103] In some instances the fields of view of the multiple taking cameras will overlap as indicated by region 750 shown in each of FIGS. 7 and 8. Since the perspective-correct renderings of the multiple cameras can be different in the overlapping regions, embodiments of the invention can interleave the cameras and the perspective-correct renderings for each camera in order to isolate the camera feeds from each other. For example, in a scenario with two taking cameras 112a, 112b in which each camera has a frame rate of 60 fps, camera 112a can be set to capture images on the even frames while camera 112b can be set to capture images on the odd frames. Content production system 700 can be synchronized with the cameras such that it generates and displays content for the area 326a when taking camera 112a is capturing images and generates and displays content for area 326b when taking camera 112b is capturing images.
Interleaving the perspective-correct content in this manner ensures that each taking camera is capturing images with a background from scenery 214 that matches the perspective of the camera even in the areas 750 where the cameras have overlapping fields of view.

Cordes, 0101-0104 and Figs. 6-8, emphasis added.

wherein the number of render engines is less than the number of film cameras.

[0104] It can also be beneficial to interleave the immersive content generated for each camera's field of view in scenarios where two taking cameras (e.g., camera 112a and camera 112b) face each other in opposing directions and do not have overlapping fields of view. Such a scenario can occur, for example, during complicated action scenes (e.g., a fight scene) in which multiple cameras would be used to capture as much video as possible in a single take. In this scenario, even though the multiple cameras might not have overlapping fields of view, the display lighting from outside the frustum of camera 112a can pollute images being taken for camera 112b. Similarly, the display lighting from outside the frustum of camera 112b can pollute the images being taken for camera 112a. Accordingly, interleaving the immersive content generated for each camera's field of view as described above can be used to resolve the light pollution for each of the taking cameras.

Cordes, 0101-0104 and Figs. 6-8, emphasis added.

Regarding claim 16, Cordes teaches:

16. A system as claimed in claim 15, further comprising a render engine controller for controlling the plurality of render engines, and a display controller for receiving a rendered image and outputting the rendered image to the display wall, wherein the render engine controller is configured to prevent each render engine from the plurality of render engines from outputting a rendered image to the display controller. Cordes, 0101-0104 and Figs. 6-8.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 5-9 are rejected under 35 U.S.C. 103 as being unpatentable over Cordes et al., US 2020/0145644, as applied to claims 1-4 above, and further in view of Jun et al., US 2017/0253122.

Regarding claim 5, Cordes teaches:

5.
A video capture system as claimed in claim 2. However, Cordes fails to explicitly teach, but Jun teaches: comprising multiple display controllers,

[0309] That is, the vehicle control device according to an embodiment of the present disclosure generates information to be displayed on the HUD 801, the cluster 802, and the CID 803, as one image (synthesized image), divides the generated one image (synthesized image) and displays the same on the corresponding displays (e.g., the HUD 801, the cluster 802, and the CID 803), whereby a high quality seamless image may be displayed on the corresponding displays (e.g., the HUD 801, the cluster 802, and the CID 803) without data transmission delay caused as a plurality of controllers independently controlling a plurality of displays share data.

Jun, Fig. 2, items 6 and 17, 0046, 0283, 0309.

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Jun with the system of Cordes to provide multiple display controllers, as doing so provides a high quality seamless image without data transmission delay and without a difference in image quality between a plurality of displays. (Jun, Abstract.)

Furthermore, Cordes teaches: each display controller being assigned to compute an image of the background scene from the point of view of a respective one of the cameras.

[0101] Some embodiments of the invention include multiple taking cameras. For example, FIG. 7 illustrates an exemplary embodiment of an immersive content production system 700 that includes two taking cameras 112a and 112b and FIG. 8 is a simplified top view of production system 700. FIGS. 7 and 8 depict a performer 210 in a performance area 102 surrounded at least partially by multiple displays 104 that display scenery images 214 to be captured by the multiple taking cameras.
The multiple taking cameras (shown as a first taking camera 112a and a second taking camera 112b) can be directed at a performance area 102 (including the virtual environment presented on the displays 104 (e.g., the LED or LCD display walls) to concurrently capture images. Although only one performer 210 is depicted in the performance area 102 in FIG. 7, multiple performers can be within the performance area as well as multiple props and set decorations.

Cordes, 0101-0104 and Figs. 6-8.

Regarding claim 6, Cordes teaches:

6. A video capture system as claimed in claim 2. Furthermore, Jun teaches: comprising fewer display controllers than the number of the cameras. Jun, 0283, 0309.

Regarding claim 7, Cordes and Jun teach:

7. A video capture system as claimed in claim 6. Furthermore, Cordes teaches: wherein each display controller is configured to, when it is not providing image output to the screen and it is signalled with the identity of a camera whose stream is to be selected as output, compute an image of the background scene from the point of view of that camera. Cordes, 0072, 0124.

Regarding claim 8, Cordes and Jun teach:

8. A video capture system as claimed in claim 6. Furthermore, Cordes teaches: comprising a camera selection controller configured to: receive an input representing the designation of a camera to provide output (Cordes, 0101-0104); signal the identity of that camera to one or more of the display controllers (Cordes, 0101-0104); and a predetermined time after such signaling, cause that display controller to provide image output to the screen and cause the output selector to select the stream from that camera as its output (Cordes, 0101-0104).

Regarding claim 9, Cordes and Jun teach:

9.
A video capture system as claimed in claim 6. Furthermore, Cordes teaches: comprising a camera selection controller configured to: receive an input representing the designation of a camera to provide output (Cordes, 0101-0104); signal the identity of that camera to one or more of the display controllers (Cordes, 0101-0104); and in response to a signal from a display controller indicating that a background image from the point of view of that camera is available, cause that display controller to provide image output to the screen and cause the output selector to select the stream from that camera as its output (Cordes, 0101-0104).

Claim Rejections - 35 USC § 103

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Cordes et al., US 2020/0145644, further in view of Okada, US 2014/0253741.

Regarding claim 14, Cordes teaches:

14. A render engine controller as claimed in claim 10. However, Cordes fails to explicitly teach, but Okada teaches: the controller being configured to delay a switch from one camera to another if it is determined that the frustums of those cameras intersect, and otherwise to not delay such a switch. Okada, 0031-0033, 0060 and Figs. 1-2.

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Okada with the system of Cordes to have the controller configured to delay a switch from one camera to another if it is determined that the frustums of those cameras intersect, and otherwise to not delay such a switch, as doing so provides a camera system that can reduce quality degradation of a necessary image even in a case where a network bandwidth is insufficient for video signal transmission. (Okada, 0005.)
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL T TEKLE, whose telephone number is (571) 270-1117. The examiner can normally be reached Monday-Friday, 8:00-4:30 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL T TEKLE/
Primary Examiner, Art Unit 2481
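For orientation, the even/odd frame interleaving described in Cordes ¶0103, on which the §102 rejections lean, can be sketched as a simple scheduling rule. This is an illustrative reconstruction of the cited disclosure, not code from the application or the reference:

```python
def active_camera(frame_index: int) -> str:
    """Cordes-style interleave for two 60 fps taking cameras:
    camera 112a captures on even frames, camera 112b on odd frames."""
    return "112a" if frame_index % 2 == 0 else "112b"

def wall_content(frame_index: int) -> str:
    # The display wall is synchronized so that it shows the
    # perspective-correct render for whichever camera is capturing,
    # isolating the two camera feeds even where their frustums overlap.
    return f"perspective-correct render for camera {active_camera(frame_index)}"

# Each camera effectively records at 30 fps against its own background.
schedule = [(i, active_camera(i)) for i in range(4)]
print(schedule)  # [(0, '112a'), (1, '112b'), (2, '112a'), (3, '112b')]
```

The scheme trades temporal resolution (each camera sees only half the frames) for per-camera background correctness, which is the mechanism the examiner maps onto the claimed output selector and display controller.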

Prosecution Timeline

Feb 27, 2024: Application Filed
Dec 27, 2025: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602804: Method for Processing Three-dimensional Scanning, Three-dimensional Scanning Device, and Computer-readable Storage Medium (2y 5m to grant; granted Apr 14, 2026)
Patent 12603969: Parking Video Recording Device, a Telematics Server and a Method for Recording a Parking Video (2y 5m to grant; granted Apr 14, 2026)
Patent 12587615: Multi-Stream Peak Bandwidth Dispersal (2y 5m to grant; granted Mar 24, 2026)
Patent 12573430: Interactive Video Accessibility Compliance Systems and Methods (2y 5m to grant; granted Mar 10, 2026)
Patent 12548219: System and Method for High-Resolution 3D Images Using Laser Ablation and Microscopy (2y 5m to grant; granted Feb 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 63%
With Interview: 56% (-6.9%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 732 resolved cases by this examiner. Grant probability derived from career allow rate.
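The projection arithmetic is transparent: the headline figure is the career allow rate, and the interview-adjusted figure presumably just adds the lift. A minimal sketch of that assumption:

```python
base_grant_probability = 63.0   # percent: this examiner's career allow rate
interview_lift = -6.9           # percentage points, per the examiner stats

# Interview-adjusted probability = base rate plus the (negative) lift.
with_interview = base_grant_probability + interview_lift
print(f"With interview: {with_interview:.0f}%")  # 56%
```

That 63.0 − 6.9 = 56.1, rounded to 56%, matches the "With Interview" figure shown, consistent with a simple additive adjustment rather than any conditional model.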
