Prosecution Insights
Last updated: April 19, 2026
Application No. 18/595,043

SYSTEMS FOR AND METHODS OF CREATING A LIBRARY OF FACIAL EXPRESSIONS

Non-Final OA: §102, §DP
Filed: Mar 04, 2024
Examiner: TRAN, PHUOC
Art Unit: 2668
Tech Center: 2600 (Communications)
Assignee: Prof Jim Inc.
OA Round: 1 (Non-Final)
Grant Probability: 85% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 85% (606 granted / 713 resolved; +23.0% vs TC avg; above average)
Interview Lift: +8.1% (moderate) among resolved cases with interview
Avg Prosecution: 2y 5m (typical timeline)
Total Applications: 722 across all art units (9 currently pending)

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 26.8% (-13.2% vs TC avg)
§102: 31.0% (-9.0% vs TC avg)
§112: 9.4% (-30.6% vs TC avg)
Deltas are measured against an estimated Tech Center average. Based on career data from 713 resolved cases.
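The statute table can be read back into its underlying baseline: each row reports the examiner's rate alongside a signed delta versus the Tech Center average, so the implied TC baseline is simply rate minus delta. A minimal sketch (the dictionary name is illustrative, and the page does not define precisely what each statute-specific rate measures):

```python
# Back out the implied Tech Center averages from the deltas above.
# The page reports "examiner rate" and "vs TC avg" delta per statute;
# the TC average itself is implied as rate - delta (delta is negative here).

stats = {
    "101": (0.112, -0.288),
    "103": (0.268, -0.132),
    "102": (0.310, -0.090),
    "112": (0.094, -0.306),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # e.g. 101: 0.112 - (-0.288) = 0.400
    print(f"Sec. {statute}: examiner {rate:.1%}, implied TC avg {tc_avg:.1%}")
```

Notably, every row backs out to the same ~40% baseline, suggesting the chart compares each statute against a single overall Tech Center figure rather than per-statute averages.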

Office Action

§102 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-15 of U.S. Patent No. 11,922,726 in view of Kantor (US 2021/0350076). Although the claims at issue are not identical, they are not patentably distinct from each other because the present claims are obvious variants of the patent claims. The present claims 1-15 differ from patent claims 1-15 in that “output of a large language model” instead of “an electronic mail message” is processed. Processing output of a large language model is well-known in the art as evidenced by Kantor (para. 0015, 0023). It would have been obvious to one of ordinary skill in the art to replace an electronic mail message with output of a large language model as taught by Kantor since doing this would amount to a simple substitution of one known element for another in order to obtain predictable results. The following table shows the corresponding limitations between the present claims and the patent claims.

Present Application Claims:

1.
A method comprising: receiving, at a computer system from a user, output of a large language model; processing, via at least one processor of the computer system, the output of a large language model using at least one neural network, the at least one neural network comprising summarization capabilities and sentiment analysis capabilities, resulting in a sentiment analysis and at least one summary; generating, via the at least one processor, a plurality of slides, each slide in the plurality of slides comprising an avatar, and a narration, the narration based on the output of the large language model; animating, via the at least one processor, the avatar within each slide in the plurality of slides using the narration and the sentiment analysis, resulting in a plurality of animated slides; and combining, via the at least one processor, the plurality of animated slides, resulting in an animated video comprising an animated avatar.

2. The method of claim 1, wherein the processing of the output of a large language model using the at least one neural network further results in: a title for each slide in the plurality of slides.

3. The method of claim 1, wherein the processing of the output of a large language model using the at least one neural network further results in: a slide description for each slide in the plurality of slides.

4. The method of claim 1, wherein the processing of the output of a large language model using the at least one neural network further results in: keywords for each slide in the plurality of slides.

5. The method of claim 4, further comprising: executing, via the at least one processor, an image search using the keywords.

6.
A system comprising: at least one processor; and a non-transitory computer-readable storage medium having instructions stored which, when executed by the at least one processor, cause the at least one processor to perform operations comprising: receiving, from a user, an output of a large language model; processing the output of the large language model using at least one neural network, the at least one neural network comprising summarization capabilities and sentiment analysis capabilities, resulting in a summary and a sentiment analysis; generating a plurality of slides, each slide in the plurality of slides comprising an avatar and a narration, the narration based on the output of the large language model; animating the avatar within each slide in the plurality of slides using the narration and the sentiment analysis, resulting in a plurality of animated slides; and combining the plurality of animated slides, resulting in an animated video comprising an animated avatar.

7. The system of claim 6, wherein the processing of output of the large language model using the at least one neural network further results in: a title for each slide in the plurality of slides.

8. The system of claim 6, wherein the processing of output of the large language model using the at least one neural network further results in: a slide description for each slide in the plurality of slides.

9. The system of claim 6, wherein the processing of output of the large language model using the at least one neural network further results in: keywords for each slide in the plurality of slides.

10. The system of claim 9, the non-transitory computer-readable storage medium having additional instructions stored which, when executed by the at least one processor, cause the at least one processor to perform operations comprising: executing an image search using the keywords.

11.
A non-transitory computer-readable storage medium having instructions stored which, when executed by at least one processor, cause the at least one processor to perform operations comprising: receiving, from a user, an output of a large language model in electronic form; processing the output of the large language model using at least one neural network, the at least one neural network comprising summarization capabilities and sentiment analysis capabilities, resulting in a summary and a sentiment analysis; generating a plurality of slides, each slide in the plurality of slides comprising an avatar and a narration, the narration based on the output of the large language model; animating the avatar within each slide in the plurality of slides using the narration and the sentiment analysis, resulting in a plurality of animated slides; and combining the plurality of animated slides, resulting in an animated video comprising an animated avatar.

12. The non-transitory computer-readable storage medium of claim 11, wherein the processing of the output of the large language model using the at least one neural network further results in: a title for each slide in the plurality of slides.

13. The non-transitory computer-readable storage medium of claim 11, wherein the processing of the output of the large language model using the at least one neural network further results in: a slide description for each slide in the plurality of slides.

14. The non-transitory computer-readable storage medium of claim 11, wherein the processing of the output of the large language model using the at least one neural network further results in: keywords for each slide in the plurality of slides.

15.
The non-transitory computer-readable storage medium of claim 14, the non-transitory computer-readable storage medium having additional instructions stored which, when executed by the at least one processor, cause the at least one processor to perform operations comprising: executing an image search using the keywords.

Patent Claims (U.S. Patent No. 11,922,726):

1. A method comprising: receiving, at a computer system from a user, an electronic mail message; processing, via at least one processor of the computer system, the electronic mail message using at least one neural network, the at least one neural network comprising summarization capabilities and sentiment analysis capabilities, resulting in a sentiment analysis and at least one summary; generating, via the at least one processor, a plurality of slides, each slide in the plurality of slides comprising an avatar and a narration, the narration based on the electronic mail message; animating, via the at least one processor, the avatar within each slide in the plurality of slides using the narration and the sentiment analysis, resulting in a plurality of animated slides; and combining, via the at least one processor, the plurality of animated slides, resulting in an animated video comprising an animated avatar.

2. The method of claim 1, wherein the processing of the electronic mail message using the at least one neural network further results in: a title for each slide in the plurality of slides.

3. The method of claim 1, wherein the processing of the electronic mail message using the at least one neural network further results in: a slide description for each slide in the plurality of slides.

4. The method of claim 1, wherein the processing of the electronic mail message using the at least one neural network further results in: keywords for each slide in the plurality of slides.

5. The method of claim 4, further comprising: executing, via the at least one processor, an image search using the keywords.

6.
A system comprising: at least one processor; and a non-transitory computer-readable storage medium having instructions stored which, when executed by the at least one processor, cause the at least one processor to perform operations comprising: receiving, from a user, an electronic mail message; processing the electronic mail message using at least one neural network, the at least one neural network comprising summarization capabilities and sentiment analysis capabilities, resulting in a summary and a sentiment analysis; generating a plurality of slides, each slide in the plurality of slides comprising an avatar and a narration, the narration based on the electronic mail message; animating the avatar within each slide in the plurality of slides using the narration and the sentiment analysis, resulting in a plurality of animated slides; and combining the plurality of animated slides, resulting in an animated video comprising an animated avatar.

7. The system of claim 6, wherein the processing of the electronic mail message using the at least one neural network further results in: a title for each slide in the plurality of slides.

8. The system of claim 6, wherein the processing of the electronic mail message using the at least one neural network further results in: a slide description for each slide in the plurality of slides.

9. The system of claim 6, wherein the processing of the electronic mail message using the at least one neural network further results in: keywords for each slide in the plurality of slides.

10. The system of claim 9, the non-transitory computer-readable storage medium having additional instructions stored which, when executed by the at least one processor, cause the at least one processor to perform operations comprising: executing an image search using the keywords.

11.
A non-transitory computer-readable storage medium having instructions stored which, when executed by at least one processor, cause the at least one processor to perform operations comprising: receiving, from a user, a electronic mail message in electronic form; processing the electronic mail message using at least one neural network, the at least one neural network comprising summarization capabilities and sentiment analysis capabilities, resulting in a summary and a sentiment analysis; generating a plurality of slides, each slide in the plurality of slides comprising an avatar and a narration, the narration based on the electronic mail message; animating the avatar within each slide in the plurality of slides using the narration and the sentiment analysis, resulting in a plurality of animated slides; and combining the plurality of animated slides, resulting in an animated video comprising an animated avatar.

12. The non-transitory computer-readable storage medium of claim 11, wherein the processing of the electronic mail message using the at least one neural network further results in: a title for each slide in the plurality of slides.

13. The non-transitory computer-readable storage medium of claim 11, wherein the processing of the electronic mail message using the at least one neural network further results in: a slide description for each slide in the plurality of slides.

14. The non-transitory computer-readable storage medium of claim 11, wherein the processing of the electronic mail message using the at least one neural network further results in: keywords for each slide in the plurality of slides.

15. The non-transitory computer-readable storage medium of claim 14, the non-transitory computer-readable storage medium having additional instructions stored which, when executed by the at least one processor, cause the at least one processor to perform operations comprising: executing an image search using the keywords.
Claims 1-15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-15 of U.S. Patent No. 12,165,433 in view of Kantor (US 2021/0350076). Although the claims at issue are not identical, they are not patentably distinct from each other because the present claims are obvious variants of the patent claims. The present claims 1-15 differ from patent claims 1-15 in that “output of a large language model” instead of “social media content” is processed. Processing output of a large language model is well-known in the art as evidenced by Kantor (para. 0015, 0023). It would have been obvious to one of ordinary skill in the art to replace social media content with output of a large language model as taught by Kantor since doing this would amount to a simple substitution of one known element for another in order to obtain predictable results. The following table shows the corresponding limitations between the present claims and the patent claims.

Present Application Claims:

1. A method comprising: receiving, at a computer system from a user, output of a large language model; processing, via at least one processor of the computer system, the output of a large language model using at least one neural network, the at least one neural network comprising summarization capabilities and sentiment analysis capabilities, resulting in a sentiment analysis and at least one summary; generating, via the at least one processor, a plurality of slides, each slide in the plurality of slides comprising an avatar, and a narration, the narration based on the output of the large language model; animating, via the at least one processor, the avatar within each slide in the plurality of slides using the narration and the sentiment analysis, resulting in a plurality of animated slides; and combining, via the at least one processor, the plurality of animated slides, resulting in an animated video comprising an animated avatar.

2.
The method of claim 1, wherein the processing of the output of a large language model using the at least one neural network further results in: a title for each slide in the plurality of slides.

3. The method of claim 1, wherein the processing of the output of a large language model using the at least one neural network further results in: a slide description for each slide in the plurality of slides.

4. The method of claim 1, wherein the processing of the output of a large language model using the at least one neural network further results in: keywords for each slide in the plurality of slides.

5. The method of claim 4, further comprising: executing, via the at least one processor, an image search using the keywords.

6. A system comprising: at least one processor; and a non-transitory computer-readable storage medium having instructions stored which, when executed by the at least one processor, cause the at least one processor to perform operations comprising: receiving, from a user, an output of a large language model; processing the output of the large language model using at least one neural network, the at least one neural network comprising summarization capabilities and sentiment analysis capabilities, resulting in a summary and a sentiment analysis; generating a plurality of slides, each slide in the plurality of slides comprising an avatar and a narration, the narration based on the output of the large language model; animating the avatar within each slide in the plurality of slides using the narration and the sentiment analysis, resulting in a plurality of animated slides; and combining the plurality of animated slides, resulting in an animated video comprising an animated avatar.

7. The system of claim 6, wherein the processing of output of the large language model using the at least one neural network further results in: a title for each slide in the plurality of slides.

8.
The system of claim 6, wherein the processing of output of the large language model using the at least one neural network further results in: a slide description for each slide in the plurality of slides.

9. The system of claim 6, wherein the processing of output of the large language model using the at least one neural network further results in: keywords for each slide in the plurality of slides.

10. The system of claim 9, the non-transitory computer-readable storage medium having additional instructions stored which, when executed by the at least one processor, cause the at least one processor to perform operations comprising: executing an image search using the keywords.

11. A non-transitory computer-readable storage medium having instructions stored which, when executed by at least one processor, cause the at least one processor to perform operations comprising: receiving, from a user, an output of a large language model in electronic form; processing the output of the large language model using at least one neural network, the at least one neural network comprising summarization capabilities and sentiment analysis capabilities, resulting in a summary and a sentiment analysis; generating a plurality of slides, each slide in the plurality of slides comprising an avatar and a narration, the narration based on the output of the large language model; animating the avatar within each slide in the plurality of slides using the narration and the sentiment analysis, resulting in a plurality of animated slides; and combining the plurality of animated slides, resulting in an animated video comprising an animated avatar.

12. The non-transitory computer-readable storage medium of claim 11, wherein the processing of the output of the large language model using the at least one neural network further results in: a title for each slide in the plurality of slides.

13.
The non-transitory computer-readable storage medium of claim 11, wherein the processing of the output of the large language model using the at least one neural network further results in: a slide description for each slide in the plurality of slides.

14. The non-transitory computer-readable storage medium of claim 11, wherein the processing of the output of the large language model using the at least one neural network further results in: keywords for each slide in the plurality of slides.

15. The non-transitory computer-readable storage medium of claim 14, the non-transitory computer-readable storage medium having additional instructions stored which, when executed by the at least one processor, cause the at least one processor to perform operations comprising: executing an image search using the keywords.

Patent Claims (U.S. Patent No. 12,165,433):

1. A method comprising: receiving, at a computer system from a user, social media content; processing, via at least one processor of the computer system, the social media content using at least one neural network, the at least one neural network comprising summarization capabilities and sentiment analysis capabilities, resulting in a sentiment analysis and at least one summary; generating, via the at least one processor, a plurality of slides, each slide in the plurality of slides comprising an avatar and a narration, the narration based on the social media content; animating, via the at least one processor, the avatar within each slide in the plurality of slides using the narration and the sentiment analysis, resulting in a plurality of animated slides; and combining, via the at least one processor, the plurality of animated slides, resulting in an animated video comprising an animated avatar.

2. The method of claim 1, wherein the processing of the social media content using the at least one neural network further results in: a title for each slide in the plurality of slides.

3.
The method of claim 1, wherein the processing of the social media content using the at least one neural network further results in: a slide description for each slide in the plurality of slides.

4. The method of claim 1, wherein the processing of the social media content using the at least one neural network further results in: keywords for each slide in the plurality of slides.

5. The method of claim 4, further comprising: executing, via the at least one processor, an image search using the keywords.

6. A system comprising: at least one processor; and a non-transitory computer-readable storage medium having instructions stored which, when executed by the at least one processor, cause the at least one processor to perform operations comprising: receiving, from a user, social media content; processing the social media content using at least one neural network, the at least one neural network comprising summarization capabilities and sentiment analysis capabilities, resulting in a summary and a sentiment analysis; generating a plurality of slides, each slide in the plurality of slides comprising an avatar and a narration, the narration based on the social media content; animating the avatar within each slide in the plurality of slides using the narration and the sentiment analysis, resulting in a plurality of animated slides; and combining the plurality of animated slides, resulting in an animated video comprising an animated avatar.

7. The system of claim 6, wherein the processing of the social media content using the at least one neural network further results in: a title for each slide in the plurality of slides.

8. The system of claim 6, wherein the processing of the social media content using the at least one neural network further results in: a slide description for each slide in the plurality of slides.

9.
The system of claim 6, wherein the processing of the social media content using the at least one neural network further results in: keywords for each slide in the plurality of slides.

10. The system of claim 9, the non-transitory computer-readable storage medium having additional instructions stored which, when executed by the at least one processor, cause the at least one processor to perform operations comprising: executing an image search using the keywords.

11. A non-transitory computer-readable storage medium having instructions stored which, when executed by at least one processor, cause the at least one processor to perform operations comprising: receiving, from a user, a social media content in electronic form; processing the social media content using at least one neural network, the at least one neural network comprising summarization capabilities and sentiment analysis capabilities, resulting in a summary and a sentiment analysis; generating a plurality of slides, each slide in the plurality of slides comprising an avatar and a narration, the narration based on the social media content; animating the avatar within each slide in the plurality of slides using the narration and the sentiment analysis, resulting in a plurality of animated slides; and combining the plurality of animated slides, resulting in an animated video comprising an animated avatar.

12. The non-transitory computer-readable storage medium of claim 11, wherein the processing of the social media content using the at least one neural network further results in: a title for each slide in the plurality of slides.

13. The non-transitory computer-readable storage medium of claim 11, wherein the processing of the social media content using the at least one neural network further results in: a slide description for each slide in the plurality of slides.

14.
The non-transitory computer-readable storage medium of claim 11, wherein the processing of the social media content using the at least one neural network further results in: keywords for each slide in the plurality of slides.

15. The non-transitory computer-readable storage medium of claim 14, the non-transitory computer-readable storage medium having additional instructions stored which, when executed by the at least one processor, cause the at least one processor to perform operations comprising: executing an image search using the keywords.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-15 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Anooj et al. (US 11,532,179).

As to claim 1, Anooj discloses a method comprising: receiving, at a computer system from a user, output of a large language model (col. 1, line 32 – col. 2, line 20, col. 14, line 64 – col. 15, line 35); processing, via at least one processor of the computer system, the output of a large language model using at least one neural network, the at least one neural network comprising summarization capabilities and sentiment analysis capabilities, resulting in a sentiment analysis and at least one summary (col. 1, line 32 – col. 2, line 20, col. 14, line 64 – col. 15, line 35, col. 6, lines 6-56); generating, via the at least one processor, a plurality of slides, each slide in the plurality of slides comprising an avatar, and a narration, the narration based on the output of the large language model (col. 1, line 32 – col. 2, line 20, col. 14, line 64 – col. 15, line 35, col. 6, lines 6-56); animating, via the at least one processor, the avatar within each slide in the plurality of slides using the narration and the sentiment analysis, resulting in a plurality of animated slides (col. 1, line 32 – col. 2, line 20, col. 14, line 64 – col. 15, line 35, col. 6, lines 6-56); and combining, via the at least one processor, the plurality of animated slides, resulting in an animated video comprising an animated avatar (col. 1, line 32 – col. 2, line 20, col. 14, line 64 – col. 15, line 35).

As to claim 2, Anooj discloses the method of claim 1, wherein the processing of the output of a large language model using the at least one neural network further results in: a title for each slide in the plurality of slides (col. 1, line 32 – col. 2, line 20, col. 14, line 64 – col. 15, line 35).

As to claim 3, Anooj discloses the method of claim 1, wherein the processing of the output of a large language model using the at least one neural network further results in: a slide description for each slide in the plurality of slides (col. 1, line 32 – col. 2, line 20, col. 14, line 64 – col. 15, line 35).

As to claim 4, Anooj discloses the method of claim 1, wherein the processing of the output of a large language model using the at least one neural network further results in: keywords for each slide in the plurality of slides (col. 1, line 32 – col. 2, line 20, col. 14, line 64 – col. 15, line 35).

As to claim 5, Anooj discloses the method of claim 4, further comprising: executing, via the at least one processor, an image search using the keywords (col. 1, line 32 – col. 2, line 20, col. 14, line 64 – col. 15, line 35).

As to claims 6-15, these claims recite features similar to those discussed above. Therefore, they are rejected for reasons similar to those discussed above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHUOC TRAN, whose telephone number is (571) 272-7399. The examiner can normally be reached 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vu Le, can be reached at 571-272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PHUOC TRAN/
Primary Examiner, Art Unit 2668

Prosecution Timeline

Mar 04, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §102, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602752: IMAGE PROCESSING METHOD AND IMAGE PROCESSING APPARATUS
2y 5m to grant; granted Apr 14, 2026

Patent 12592071: METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR DETERMINING NODE OF DECISION TREE
2y 5m to grant; granted Mar 31, 2026

Patent 12579692: Method and device for preparing data for identifying analytes
2y 5m to grant; granted Mar 17, 2026

Patent 12579618: IMAGING METHOD AND APPARATUS, AND COMPUTER-READABLE MEDIUM
2y 5m to grant; granted Mar 17, 2026

Patent 12573000: UPSAMPLING AN IMAGE USING ONE OR MORE NEURAL NETWORKS
2y 5m to grant; granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 85%
With Interview: 93% (+8.1%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 713 resolved cases by this examiner. Grant probability derived from career allow rate.
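The headline projections follow directly from the raw counts reported above. A minimal sketch, assuming the tool simply divides career grants by resolved cases and adds the reported interview lift (the actual scoring model is not disclosed):

```python
# Reproduce the headline figures from the examiner's career counts.
# Assumption: the "with interview" figure is the career allow rate
# plus the reported +8.1% lift, applied additively.

granted, resolved = 606, 713   # career totals reported above
interview_lift = 0.081         # reported +8.1% lift

allow_rate = granted / resolved
with_interview = allow_rate + interview_lift

print(f"Grant probability: {allow_rate:.0%}")      # 85%
print(f"With interview:    {with_interview:.0%}")  # 93%
```

Both outputs match the page's rounded figures, which is consistent with (though does not prove) a simple additive model.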
