Prosecution Insights
Last updated: April 19, 2026
Application No. 18/357,145

REAL-TIME RENDERING GENERATING APPARATUS, METHOD, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM THEREOF

Non-Final OA: §103, §112
Filed: Jul 23, 2023
Examiner: RENZE, GEORGE NICHOLAS
Art Unit: 2613
Tech Center: 2600 (Communications)
Assignee: HTC Corporation
OA Round: 3 (Non-Final)
Grant Probability: 67% (Favorable)
OA Rounds: 3-4
To Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% (16 granted / 24 resolved), +4.7% vs TC avg (above average)
Interview Lift: +33.3% among resolved cases with interview (strong)
Avg Prosecution: 2y 7m typical timeline; 33 currently pending
Total Applications: 57, across all art units

Statute-Specific Performance

§101: 2.7% (-37.3% vs TC avg)
§103: 73.3% (+33.3% vs TC avg)
§102: 16.0% (-24.0% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 24 resolved cases.

Office Action

Rejections: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 21, 2025 has been entered.

Response to Amendment

The Amendment filed December 21, 2025 has been entered. Claims 1, 11 and 20 have been amended. Claims 1-20 remain rejected in the application.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. Claims 1, 11 and 20 contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

The amendments incorporate new matter with regard to the limitation “rendering the skeletal range of a non-customized body part corresponding to each character level of detail based on a preset template rigged to each of the virtual characters”. Regarding this limitation, applicant states that paragraphs [0038]-[0041] explain these technical features; however, they do not describe exactly what constitutes a “non-customized body part”. The closest the specification comes to describing a “non-customized body part” is in paragraph [0040], where it states that “In the present example, all body parts of the character level of detail VL3 are rendered with the preset template (i.e., there are no body parts that can be customized, and the ranges is 0)...”. Although this paragraph describes a preset template where no body parts can be customized, it does not properly distinguish exactly what is considered a “non-customized body part” and/or what would differentiate that from a “body part that can, or cannot, be customized”. One of ordinary skill in the art would understand that a preset template consisting of body parts that cannot be customized is not the same as a body part that is not customized, i.e., a “non-customized body part”. The dependent claims inherit and do not remedy the deficiency.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-9 and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ding (CN 110090400 B) in view of WoW Quests (Arachnophobia Safe Mode Grounded) [https://www.youtube.com/watch?v=YF5qbRj4JIE], hereinafter WoW.

Regarding claim 1, Ding discloses a real-time rendering generating apparatus (FIG. 16 and Paragraph [0033] teaches that FIG. 16 is a schematic diagram of the structure of a virtual object display device), comprising: a transceiver interface (Paragraph [0167] teaches that the electronic device 1700 may also have components such as a wired or wireless network interface); a storage (Paragraph [0168] teaches that a computer-readable storage medium is also provided, such as a memory including instructions); and a processor, being electrically connected to the transceiver interface and the storage (Paragraph [0167] teaches that the electronic device 1700 may include one or more processors (central processing units, CPU) 1701 and one or more memories 1702, wherein the one or more memories 1702 store at least one instruction, and the at least one instruction is loaded and executed by the one or more processors 1701 to implement the virtual object display method provided), and being configured to perform operations comprising: receiving a plurality of character motion data of a plurality of virtual characters (Paragraph [0108] teaches that the terminal may obtain a plurality of animation frames corresponding to each virtual object in the at least one virtual object to be displayed and a weight of each animation frame); determining a rendering level corresponding to each of the virtual characters based on a classification rule related to a first virtual character and the character motion data, wherein each of the rendering levels corresponds to one of a plurality of character level of detail, and each of the plurality of character level of detail corresponds to a range of a customized body part and a skeletal model (Paragraph [0082] teaches that the degree of refinement of the skeleton model can be achieved through LOD technology.
In step 204, the terminal can determine the LOD level of each virtual object to be displayed in parallel according to the distance between the at least one virtual object to be displayed and the virtual camera through the multiple parallel channels of the image processor, thereby obtaining the skeleton model corresponding to the LOD level, and the skeleton model corresponding to the LOD level is the target skeleton model. Additionally, FIG. 14 displays multiple different body parts that can be customized with different features and appearances).

However, Ding fails to disclose a skeletal range of a skeletal model, and the customized body part corresponds to a retained detail of part of each of the plurality of virtual characters. WoW discloses a skeletal range of a skeletal model, and the customized body part corresponds to a retained detail of part of each of the plurality of virtual characters (WoW, 00:13-00:30 shows a skeletal range of a spider model being changed from a range of 0 to 5, with 0 being the most detailed model and 5 having no detail and including no skeleton model at all. 00:56-1:15 shows the after effects of the customized body parts of the spider being turned from a detailed spider with legs, fangs and a body with features and designs into a model consisting of only a couple of featureless blobs that float around, but still retain the customized body and head part structure that another character can still interact with).

Since Ding teaches the initial real-time rendering generating apparatus for determining a rendering level for virtual characters with different levels of detail (LODs) for their body and skeleton model based on character motion data, and WoW teaches a skeletal range for a character model that can display customized body parts that correspond to how much detail is retained according to the virtual character, it would have been obvious to a person skilled in the art to have combined the concepts together so that when determining a character's level of detail to render, the skeletal model could also be taken into account and rendered in a similar manner as the character model, with different range levels and different body parts that can also be customized in a way that corresponds to the retained LOD of the character model. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ding to incorporate the concepts of WoW, so that the combined features together would allow for more efficient rendering of a virtual character by being able to control the level of detail needed to render a virtual character by incorporating customizable body parts that correspond to the level of detail in relation to the character's skeletal range of the skeletal model.

Additionally, Ding in view of WoW disclose the limitation “and generating a real-time rendering of each of the virtual characters based on the rendering level corresponding to each of the virtual characters” (FIG. 14 and Paragraph [0105] teach that as shown in FIG. 14, the head, body, and arms may each include multiple part models, and when a virtual object is generated, a random selection may be made from the multiple part models of each part to obtain a skeleton model of the virtual object).
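As orientation for the LOD technique the examiner attributes to Ding (the finest skeleton model, level 0, near the virtual camera; the coarsest, level 3, far away), a minimal distance-based LOD selection can be sketched. The thresholds and function name below are illustrative assumptions, not taken from Ding or the claims:

```python
# Illustrative sketch of distance-based LOD selection per the cited
# passages of Ding: levels 0-3, where 0 is the most refined skeleton model.
# The distance cutoffs are invented for illustration only.
def select_lod(distance_to_camera: float) -> int:
    thresholds = (10.0, 25.0, 50.0)  # assumed cutoffs for levels 0, 1, 2
    for level, cutoff in enumerate(thresholds):
        if distance_to_camera < cutoff:
            return level
    return 3  # level 3: worst refinement, fewest faces and vertices
```

Under this sketch, an object 5 units from the camera would receive level 0 and one 100 units away would receive level 3, matching the negative correlation between LOD level and refinement described in paragraph [0083] of Ding.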
Regarding claim 2, Ding in view of WoW disclose everything claimed as applied above (see claim 1), in addition, Ding discloses wherein the classification rule is generated based on a regional relationship between each of the virtual characters and the first virtual character (Paragraph [0135] teaches that for each virtual object among the multiple virtual objects, according to the target position information of each virtual object, at least one of a first position relationship and a second position relationship corresponding to each virtual object is obtained, where the first position relationship is a relationship between each virtual object and an image acquisition range of a virtual camera, and the second position relationship is at least one of a position relationship between each virtual object, the virtual camera, and a virtual object in a virtual scene).

Regarding claim 3, Ding in view of WoW disclose everything claimed as applied above (see claim 2), in addition, Ding discloses wherein the operation of generating the regional relationship between each of the virtual characters and the first virtual character comprises the following operations: generating a region node graph based on a plurality of regions and a connection relationship corresponding to the regions, wherein the region node graph comprises a plurality of nodes (Paragraph [0052] teaches that the judgment of each virtual object can be achieved by including the target area of each virtual object, that is, the terminal can obtain the target position information of the target area including each virtual object, and use the target position information of the target area as the target position information of each virtual object); assigning a distance relationship value corresponding to each of the regions based on the region node graph and a minimum node distance value between a first region of the first virtual character and each of the regions (Paragraph [0080] teaches that the terminal can select one from multiple skeleton models as a skeleton model to be displayed, that is, a target skeleton model, according to the distance between each virtual object to be displayed and the virtual camera); and classifying the virtual characters into the regions to generate the regional relationship between each of the virtual characters and the first virtual character based on a position information comprised in each of the character motion data and the distance relationship value corresponding to each of the regions (Paragraph [0069] teaches that the terminal obtains the first position relationship and the second position relationship corresponding to each virtual object according to the target position information of each virtual object).

Regarding claim 4, Ding in view of WoW disclose everything claimed as applied above (see claim 3), in addition, Ding discloses wherein the virtual characters located in the same area correspond to the same rendering level (Paragraph [0082] teaches that the LOD level of a virtual object to be displayed may be determined through each parallel channel according to the distance between the virtual object to be displayed and the virtual camera and paragraph [0097] teaches that the virtual objects with the same material information are displayed in the same batch and specifically, the terminal can instantiate the drawing commands of the same type and number according to the type and number of material information of the target skeletal model of at least one virtual object to be displayed, and obtain the objects of the same type and number, each drawing command corresponding to a type of material information).
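The claim-3 limitation of assigning each region a distance relationship value from a "minimum node distance value" over a region node graph is, in graph terms, a shortest-hop computation, which can be sketched with a breadth-first search. All names and the example graph below are invented for illustration; they do not come from the application or the cited references:

```python
from collections import deque

# Sketch of the claim-3 region classification: regions form a node graph,
# and each region's "distance relationship value" is taken here to be the
# minimum hop count from the first virtual character's region. This
# interpretation and all identifiers are illustrative assumptions.
def min_node_distances(region_graph: dict, first_region: str) -> dict:
    """Breadth-first search for minimum node distances from first_region."""
    distances = {first_region: 0}
    queue = deque([first_region])
    while queue:
        region = queue.popleft()
        for neighbor in region_graph[region]:
            if neighbor not in distances:
                distances[neighbor] = distances[region] + 1
                queue.append(neighbor)
    return distances

# Example: a small region node graph given as adjacency lists.
graph = {
    "plaza": ["market", "gate"],
    "market": ["plaza", "arena"],
    "gate": ["plaza"],
    "arena": ["market"],
}
print(min_node_distances(graph, "plaza"))
# {'plaza': 0, 'market': 1, 'gate': 1, 'arena': 2}
```

Characters would then be binned into regions by their position information, inheriting each region's distance relationship value for rendering-level classification.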
Regarding claim 5, Ding in view of WoW disclose everything claimed as applied above (see claim 1), in addition, Ding discloses wherein the processor is further configured to perform the following operations: determining an interaction state between the first virtual character and each of the virtual characters based on the character motion data (Paragraph [0084] teaches that the terminal can also directly set the correspondence between the distance interval and the skeleton model, and the terminal can obtain the skeleton model corresponding to the distance interval as the target skeleton model based on the distance interval); and adjusting the rendering level corresponding to each of the virtual characters based on the interaction states (Paragraph [0076] teaches that by making judgments based on these two positional relationships, it can be ensured that the visibility of virtual objects is culled more thoroughly and accurately, and the subsequent steps of unnecessary display data determination can be reduced, thereby improving the processing efficiency of the overall process of the virtual object display method).

Regarding claim 6, Ding in view of WoW disclose everything claimed as applied above (see claim 1), in addition, Ding discloses wherein the plurality of character level of detail comprise at least a first character level of detail and a second character level of detail, the first character level of detail corresponds to a first customized body part and a first skeletal model, the second character level of detail corresponds to a second customized body part and a second skeletal model (Paragraph [0083] teaches that LOD levels include four levels and LOD levels are negatively correlated with the degree of refinement, LOD levels can include levels 0, 1, 2, and 3, among which level 0 has the best degree of refinement of the skeleton model, and the largest number of faces and vertices of the skeleton model, and level 3 has the worst degree of refinement of the skeleton model, and the smallest number of faces and vertices of the skeleton model).

Regarding claim 7, Ding in view of WoW disclose everything claimed as applied above (see claim 6), in addition, Ding discloses wherein the second customized body part is at least a part of the first customized body part, and the second skeletal model is at least a part of the first skeletal model (FIG. 3 and paragraph [0083] teaches that as shown in FIG. 3, the four skeletal models have fewer faces and vertices from left to right, and their level of refinement becomes increasingly poor and paragraph [0087] teaches that the materials of the skeleton models of different virtual objects may be the same or different and paragraph [0105] teaches that the skeleton model may include multiple part models).
Regarding claim 8, Ding in view of WoW disclose everything claimed as applied above (see claim 6), in addition, Ding discloses wherein the plurality of character level of detail further comprise at least a third character level of detail, the third character level of detail corresponds to a third customized body part and a third skeletal model, the third skeletal model is at least a part of the second skeletal model, the range corresponding to the third customized body part is zero (Paragraph [0044] teaches that taking the four LOD levels as an example, the LOD levels may include 0, 1, 2, and 3 and paragraph [0080] teaches that each virtual object to be displayed may include multiple skeleton models, and the multiple skeleton models have different degrees of refinement).

Regarding claim 9, Ding in view of WoW disclose everything claimed as applied above (see claim 6), in addition, Ding discloses wherein the plurality of character level of detail further comprise at least a fourth character level of detail, the fourth character level of detail corresponds to a fourth customized body part and a fourth skeletal model, the range corresponding to the fourth customized body part is zero, the range corresponding to the fourth skeletal model is zero (Paragraph [0044] teaches that taking the four LOD levels as an example, the LOD levels may include 0, 1, 2, and 3. Among them, the skeleton model at level 0 has the best degree of refinement, and the number of faces and vertices of the skeleton model is the largest. The skeleton model at level 3 has the worst degree of refinement, and the number of faces and vertices of the skeleton model is the smallest).

Regarding claim 11, the method steps correspond to and are rejected similarly to the apparatus steps of claim 1 (see claim 1 above). Regarding claim 12, the method steps correspond to and are rejected similarly to the apparatus steps of claim 2 (see claim 2 above). Regarding claim 13, the method steps correspond to and are rejected similarly to the apparatus steps of claim 3 (see claim 3 above). Regarding claim 14, the method steps correspond to and are rejected similarly to the apparatus steps of claim 4 (see claim 4 above). Regarding claim 15, the method steps correspond to and are rejected similarly to the apparatus steps of claim 5 (see claim 5 above). Regarding claim 16, the method steps correspond to and are rejected similarly to the apparatus steps of claim 6 (see claim 6 above). Regarding claim 17, the method steps correspond to and are rejected similarly to the apparatus steps of claim 7 (see claim 7 above). Regarding claim 18, the method steps correspond to and are rejected similarly to the apparatus steps of claim 8 (see claim 8 above). Regarding claim 19, the method steps correspond to and are rejected similarly to the apparatus steps of claim 9 (see claim 9 above). Regarding claim 20, the non-transitory computer readable storage medium corresponds to and is rejected similarly to the apparatus steps of claim 1 (see claim 1 above).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Ding in view of WoW as applied to claim 1 above, and further in view of Munkberg et al. (Pub. No.: US 2022/0165040 A1), hereinafter Munkberg.

Regarding claim 10, Ding in view of WoW disclose everything claimed as applied above (see claim 1); however, Ding in view of WoW fail to disclose wherein the processor is further configured to perform the following operations: determining an appearance rendering level corresponding to each of the plurality of character level of detail based on a plurality of appearance level of detail, wherein each of the plurality of appearance level of detail corresponds to a rendering polygon number.
Munkberg discloses wherein the processor is further configured to perform the following operations: determining an appearance rendering level corresponding to each of the plurality of character level of detail based on a plurality of appearance level of detail, wherein each of the plurality of appearance level of detail corresponds to a rendering polygon number (FIG. 2A and Paragraph [0051] teach that FIG. 2A illustrates a conceptual diagram of normal vector and displacement mapping for a 3D model, in accordance with an embodiment. In an embodiment, the appearance driven automatic 3D modeling system 150 jointly optimizes a base mesh, displacement map, and normal map of an initial 3D model 205 to match the appearance of a 370k triangle reference 3D model 225 of a dancer. The 1k triangle initial 3D model 205 is a decimated mesh generated from the reference 3D model 225. The dancer model presents a complex optimization problem, and the initial 3D model 205 provides a coarsely tessellated base mesh with displacement constrained to the normal direction. Still, the appearance of both a 64k triangle reduced resolution 3D model 210 comprising a displacement map without a normal map and a 64k triangle reduced resolution 3D model 215 comprising a displacement map with a normal map, closely match the reference 3D model 225).

Since Ding in view of WoW teaches the initial real-time rendering generating apparatus for rendering virtual characters with different levels of detail (LODs), and Munkberg teaches creating various 3D models using an appearance-driven 3D modeling system that can create and generate a 3D model with various polygon counts, it would have been obvious to a person having ordinary skill in the art to combine the features together so that when rendering the virtual character with the varying LODs, the amount of polygons being used could also be associated with the varying LODs as well. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ding in view of WoW to incorporate the teachings of Munkberg, so that the combined features together would allow for more efficient rendering of a virtual character by allowing specific polygon counts to be used with the various different LODs.

Furthermore, Ding in view of WoW and Munkberg disclose the limitation “and generating the real-time rendering of each of the virtual characters based on the rendering level and the appearance rendering level corresponding to each of the virtual characters” (Paragraph [0118] of Ding teaches that the terminal reads the character group drawing instruction through the GPU, and can perform real-time skinning on the character and draw all the characters in parallel. Each character is drawn according to its own LOD level, so as to display it on the screen (graphical user interface), and paragraph [0101] of Ding teaches that, as shown in Figures 8 to 11, the degree of refinement of the target bone model of the virtual object to be displayed can be different. In the above step 204, after the target bone model of each virtual object is determined, it is displayed. Combined with Figure 4, the display effects of bone models at different LOD levels are different).

Response to Arguments

Applicant's arguments filed December 21, 2025 with respect to independent claims 1, 11 and 20 have been fully considered but they are not persuasive.
In response to applicant's argument that neither Ding nor WoW teach the feature of “rendering the skeletal range of a non-customized body part corresponding to each character level of detail based on a preset template rigged to each of the virtual characters” recited in the amended claim 1, a recitation of the intended use of the claimed invention must result in a structural difference between the claimed invention and the prior art in order to patentably distinguish the claimed invention from the prior art. If the prior art structure is capable of performing the intended use, then it meets the claim.

Given the uncertainty, and the clarification needed, regarding the potential new matter involving “a non-customized body part”, Ding in view of WoW still appear to teach the capability of rendering a skeletal range of a body part corresponding to each character level of detail based on a preset template rigged to each of the virtual characters. FIG. 3 and paragraph 83 of Ding still teach that there are four LOD levels that correspond to particular skeletal models and a range of different details for each of the skeletal models, and FIG. 4 and paragraphs 84-85 of Ding show that information related to the correspondences of distances and LOD levels of different objects can be stored for later use, and that a number of LOD levels of skeletal models of virtual objects can be determined and then assigned to the virtual objects/characters in order to be displayed and rendered using the stored information, thus showing the capability of storing information related to potential templates for each of the different virtual objects/characters.

In addition, WoW has preset templates of skeletal information related to the different virtual models that are specifically rigged to the virtual character models of the different spiders and that make adjustments to the different body parts of the spider, including its legs, fangs, body and head, thus providing a skeletal range of different body parts from preset templates (the five different spider body make-ups) that are assigned and rigged to a virtual character. It therefore appears that the combination of Ding in view of WoW can still perform the newly amended claim language, barring clarification of what constitutes “a non-customized body part”.

In response to applicant's argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, Ding teaches the aspects and capabilities of real-time rendering of different virtual models with different skeleton models and LODs, and the capability of storing related skeletal and LOD information about many different virtual objects for potential template usage, while WoW teaches the concept of utilizing preset templates to make specific adjustments to the LOD of a skeleton range of a particular virtual model and then render that model at a certain LOD according to the preset template.

This would mean that, when combined, Ding could utilize in real time the rendering and skeletal template functions that WoW provides when trying to render different LODs for its skeleton models. This would improve the real-time rendering aspects of Ding and reduce the amount of resources needed to render the different LODs of the different virtual objects.

The additional arguments regarding the dependent claims, which are rejected by virtue of their dependency, are moot because the independent claims are not allowable.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Kwai (Pub. No.: US 2020/0013232 A1) teaches a method and apparatus for converting 3D scanned objects to avatars using preset templates. Raymond (Pub. No.: US 2022/0096931 A1) teaches methods and systems for generating level of detail visual assets in video games.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to George Renze whose telephone number is (703) 756-5811. The examiner can normally be reached Monday-Friday, 9:00am - 6:00pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xiao Wu, can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/G.R./
Examiner, Art Unit 2613

/XIAO M WU/
Supervisory Patent Examiner, Art Unit 2613

Prosecution Timeline

Jul 23, 2023
Application Filed
Apr 14, 2025
Non-Final Rejection — §103, §112
Jul 08, 2025
Response Filed
Sep 20, 2025
Final Rejection — §103, §112
Dec 21, 2025
Request for Continued Examination
Jan 17, 2026
Response after Non-Final Action
Mar 05, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602407: SYSTEMS AND METHODS FOR GENERATING A UNIQUE IDENTITY FOR A GEOSPATIAL OBJECT CODE BY PROCESSING GEOSPATIAL DATA (2y 5m to grant; granted Apr 14, 2026)
Patent 12573147: LANDMARK DATA COLLECTION METHOD AND LANDMARK BUILDING MODELING METHOD (2y 5m to grant; granted Mar 10, 2026)
Patent 12555315: HEURISTIC-BASED VARIABLE RATE SHADING FOR MOBILE GAMES (2y 5m to grant; granted Feb 17, 2026)
Patent 12530759: System and Method for Point Cloud Generation (2y 5m to grant; granted Jan 20, 2026)
Patent 12505508: DIGITAL IMAGE RADIAL PATTERN DECODING SYSTEM (2y 5m to grant; granted Dec 23, 2025)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview (+33.3%): 99%
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 24 resolved cases by this examiner. Grant probability derived from career allow rate.
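A sketch of how these headline projections could follow from the examiner counts shown above (16 granted of 24 resolved; a +33.3-point interview lift). The exact formulas are assumptions about this dashboard's methodology, which it does not document:

```python
# Derive the page's headline figures from the stated counts.
# The capping rule for the with-interview figure is an assumption.
granted, resolved = 16, 24
allow_rate = 100.0 * granted / resolved            # career allow rate, ~67%
interview_lift = 33.3                              # points, from the panel
with_interview = min(99.0, allow_rate + interview_lift)  # capped below 100%

print(f"Grant probability: {allow_rate:.0f}%")
print(f"With interview: {with_interview:.0f}%")
```

With these inputs the script reproduces the 67% grant probability; a cap below 100% would explain why the with-interview figure is shown as 99% rather than the raw sum.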
