Prosecution Insights
Last updated: April 19, 2026
Application No. 19/077,678

SYSTEM AND METHOD FOR A CATALOG OF TRAINING CONTENT AUGMENTED WITH ARTIFICIAL INTELLIGENCE

Final Rejection §103

Filed: Mar 12, 2025
Examiner: HU, XIAOQIN
Art Unit: 2168
Tech Center: 2100 — Computer Architecture & Software
Assignee: HSI USA Holding Inc.
OA Round: 2 (Final)

Grant Probability: 61% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 2y 12m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 61% (grants 61% of resolved cases; 114 granted / 187 resolved; +6.0% vs TC avg)
Interview Lift: +57.9% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 2y 12m average prosecution; 25 currently pending
Career History: 212 total applications across all art units
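The headline tiles above can be recomputed from the raw counts shown (114 granted of 187 resolved). A quick sketch, assuming the "+6.0% vs TC avg" delta is a simple percentage-point difference (the dashboard does not say how it computes it):

```python
# Recompute the examiner's career allow rate from the raw counts above.
granted, resolved = 114, 187

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 61.0%, matching the 61% tile

# If the "+6.0% vs TC avg" delta is a percentage-point difference (an
# assumption), the implied Tech Center average allow rate would be:
implied_tc_avg = allow_rate - 0.060
print(f"Implied TC average: {implied_tc_avg:.1%}")  # ~55.0%
```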

Statute-Specific Performance

§101: 19.1% (-20.9% vs TC avg)
§103: 35.6% (-4.4% vs TC avg)
§102: 12.4% (-27.6% vs TC avg)
§112: 29.2% (-10.8% vs TC avg)

Comparisons are against an estimated Tech Center average. Based on career data from 187 resolved cases.
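If each "vs TC avg" figure is read as the examiner's rate minus the Tech Center average (an assumption about how the dashboard computes its deltas), the implied baseline can be backed out per statute. Notably, all four tiles imply the same ~40% Tech Center average:

```python
# Back out the implied Tech Center average from each statute tile,
# assuming delta = examiner_rate - tc_average (percentage points).
examiner_rate = {"§101": 19.1, "§103": 35.6, "§102": 12.4, "§112": 29.2}
delta_vs_tc = {"§101": -20.9, "§103": -4.4, "§102": -27.6, "§112": -10.8}

for statute, rate in examiner_rate.items():
    implied_tc_avg = rate - delta_vs_tc[statute]
    print(f"{statute}: implied TC average = {implied_tc_avg:.1f}%")  # 40.0% for each
```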

Office Action

§103
DETAILED ACTION

This office action is in response to the above-identified application filed on January 22, 2026. The application contains claims 1-20. Claims 1, 8, and 15 are amended. Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments and amendments filed on January 22, 2026 have been fully considered and the objections and rejections are updated accordingly. Applicant’s arguments with respect to the new limitations introduced with the amendments are addressed with new prior art and rationale. Please refer to the updated 35 U.S.C. 103 rejections as set forth below for details.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ives et al.
(US 20090006388 A1), in view of Short (US 20230214412 A1), and in further view of Sood et al. (US 20220337443 A1). With respect to claim 1, Ives teaches a method (Fig. 17; [0129]) comprising: executing, at a computer system via at least one processor (Fig. 1; [0109]: processor(s)), a search of training course content stored in a database ([0036]; [0133]: stored in a database), the search identifying at least one of new training course content or updated training course content, resulting in search result content (Fig. 17; [0129]: rescan the pages in a given web collection to determine their changes after a set period at step 670, wherein the rescanning corresponds to “executing … a search … the search identifying … new … or updated …”. The limitation “training course content” is nonfunctional descriptive material that is not functionally involved in the steps recited. The method would have been performed the same regardless of the data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings already in the prior art to the particular type of data: “training course content”); identifying, via the at least one processor for each piece of content in the search result content, a media type of the each piece of content, the media type comprising one of a video type, an article type, and a slide type (Fig. 17; [0129]: identify media types of files in the pages at step 610. 
[0135]-[0141]; [0040]: extracted web page content includes images, video files, audio files, text files, or parts of or combinations of any of these and so on, wherein video files indicate “a video type” and text files indicate “an article type”, i.e., the media type comprises at least one of … type); executing, via the at least one processor on the each piece of content, at least one data extraction algorithm from a plurality of data extraction algorithms, wherein the at least one data extraction algorithm is selected based on the media type, resulting in extracted data for each piece of content in the search result content (Fig. 17; [0129]: apply an analysis algorithm to each file according to the media type of the file to derive or extract content items at step 620, wherein the analysis algorithm corresponds to “at least one data extraction algorithm” and [0135]-[0141] and [0040] teach “a plurality of data extraction algorithms” as discussed above); Ives does not teach the plurality of data extraction algorithms comprising a video extraction algorithm, an article extraction algorithm, and a slide extraction algorithm, adding, via the at least one processor, the extracted data to a semantic search index. Short teaches adding, via the at least one processor (Fig. 7: processor(s) 102), the extracted data to a semantic search index (Fig. 1; [0065]-[0067]: store the determined semantic information as an entry in the index file at S106. [0002]-[0004]: a semantic search index). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ives to incorporate the teachings of Short to add the extracted data to a semantic search index. Doing so would provide a way of searching a corpus of textual data /text data items that uses concepts and conceptual relationships rather than keyword searching. 
That is, use a lexical-conceptual knowledge base to enable the querying of the corpus of textual data using concepts and relations between concepts instead of literal or fuzzy string matching as taught by Short ([0005]).

Ives and Short do not teach the plurality of data extraction algorithms comprising a video extraction algorithm, an article extraction algorithm, and a slide extraction algorithm. Sood teaches the plurality of data extraction algorithms comprising a video extraction algorithm, an article extraction algorithm, and a slide extraction algorithm ([0069]-[0070]: use an extractive text summarization algorithm to an audio, video, text file, and the one or more presentation tools (e.g., Microsoft® PowerPoint slides, Microsoft® Excel files, spreadsheets, web pages, diagrams, flowcharts, etc.)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ives and Short to incorporate the teachings of Sood to comprise a video extraction algorithm, an article extraction algorithm, and a slide extraction algorithm in the plurality of data extraction algorithms. Doing so would identify and extract the important sentences and information from a given audio, video, and/or text file and generate them verbatim to produce a subset of the sentences from the original text as a summary as taught by Sood ([0070]).

With respect to claim 2, as discussed in claim 1, Ives and Short and Sood teach all the limitations therein. Short further teaches the method of claim 1, further comprising: receiving, at the computer system, a natural language query from a user (Fig. 5; [0072]: receive, via a user interface, a user text query (step S200). The user text query may be a single word or multiple words); searching, via the at least one processor, the semantic search index for a response to the natural language query, resulting in query search results (Fig. 5; [0073]: search an index file of the concept-based search engine to identify at least one entry that matches the user text query (step S202)); and displaying, via a display of the computer system, the query search results in response to the natural language query (Fig. 5; [0077]: output, via the user interface, information, from a text-based database of the concept-based search engine, identifying at least one text data item associated with an entry in the index file that matches the user text query (step S204)).

With respect to claim 3, as discussed in claim 2, Ives and Short and Sood teach all the limitations therein. Short further teaches the method of claim 2, wherein the query search results further comprise at least one source for each query search result in the query search results (Fig. 5; [0077]: output, via the user interface, information, from a text-based database of the concept-based search engine, identifying at least one text data item associated with an entry in the index file that matches the user text query (step S204), wherein the information identifying at least one text data item identifies “at least one source”).

With respect to claim 4, as discussed in claim 1, Ives and Short and Sood teach all the limitations therein. Ives further teaches the method of claim 1, wherein the training course content further comprises a plurality of courses, with each course in the plurality of courses comprising training content and exam questions (As discussed above, the limitation “training course content” is nonfunctional descriptive material that is not functionally involved in the steps recited. The method would have been performed the same regardless of the data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings already in the prior art to the particular type of data: “training course content”).

With respect to claim 5, as discussed in claim 4, Ives and Short and Sood teach all the limitations therein. Ives further teaches the method of claim 4, wherein the training content comprises at least one of video course content, slide-based course content, and article course content (Fig. 17; [0129]; [0036]: web page content includes any collection of data files, audio, image or video files and so on).

With respect to claim 6, as discussed in claim 1, Ives and Short and Sood teach all the limitations therein. Short further teaches the method of claim 1, wherein the at least one data extraction algorithm comprises at least one Artificial Intelligence (AI) language service (Fig. 7; [0091]: the AI module 116 may be used to analyse the unstructured text data items received during the index file generation process, to determine semantic information encoded by at least one portion of text within the received text data items, wherein the AI module 116 corresponds to “at least one AI language service”).

With respect to claim 7, as discussed in claim 6, Ives and Short and Sood teach all the limitations therein. Short further teaches the method of claim 6, wherein the at least one AI language service comprises: key phrase extraction; recognition of named entities; and domain extraction ([0095]: keyword, domain, and named entities. [0116]: keyword).

With respect to claim 8, Ives teaches a system (Fig. 1) comprising: at least one processor (Fig. 1; [0109]: processor(s)); and a non-transitory computer-readable storage medium having instructions stored which, when executed by the at least one processor, cause the at least one processor to perform operations comprising: executing a search of training course content stored in a database ([0036]; [0133]: stored in a database), the search identifying at least one of new training course content or updated training course content, resulting in search result content (Fig.
17; [0129]: rescan the pages in a given web collection to determine their changes after a set period at step 670, wherein the rescanning corresponds to “executing … a search … the search identifying … new … or updated …”. The limitation “training course content” is nonfunctional descriptive material that is not functionally involved in the steps recited. The method would have been performed the same regardless of the data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings already in the prior art to the particular type of data: “training course content”); identifying, for each piece of content in the search result content, a media type of the each piece of content, the media type comprising one of a video type, an article type, and a slide type (Fig. 17; [0129]: identify media types of files in the pages at step 610. [0135]-[0141]; [0040]: extracted web page content includes images, video files, audio files, text files, or parts of or combinations of any of these and so on, wherein video files indicate “a video type” and text files indicate “an article type”, i.e., the media type comprises at least one of … type); executing, on the each piece of content, at least one data extraction algorithm from a plurality of data extraction algorithms, the plurality of data extraction algorithms comprising a video extraction algorithm, an article extraction algorithm, and a slide extraction algorithm, wherein the at least one data extraction algorithm is selected based on the media type, resulting in extracted data for each piece of content in the search result content (Fig. 
17; [0129]: apply an analysis algorithm to each file according to the media type of the file to derive or extract content items at step 620, wherein the analysis algorithm corresponds to “at least one data extraction algorithm” and [0135]-[0141] and [0040] teach “a plurality of data extraction algorithms” as discussed above); Ives does not teach the plurality of data extraction algorithms comprising a video extraction algorithm, an article extraction algorithm, and a slide extraction algorithm, adding the extracted data to a semantic search index. Short teaches adding the extracted data to a semantic search index (Fig. 1; [0065]-[0067]: store the determined semantic information as an entry in the index file at S106. [0002]-[0004]: a semantic search index). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ives to incorporate the teachings of Short to add the extracted data to a semantic search index. Doing so would provide a way of searching a corpus of textual data /text data items that uses concepts and conceptual relationships rather than keyword searching. That is, use a lexical-conceptual knowledge base to enable the querying of the corpus of textual data using concepts and relations between concepts instead of literal or fuzzy string matching as taught by Short ([0005]). 
Ives and Short do not teach the plurality of data extraction algorithms comprising a video extraction algorithm, an article extraction algorithm, and a slide extraction algorithm. Sood teaches the plurality of data extraction algorithms comprising a video extraction algorithm, an article extraction algorithm, and a slide extraction algorithm ([0069]-[0070]: use an extractive text summarization algorithm to an audio, video, text file, and the one or more presentation tools (e.g., Microsoft® PowerPoint slides, Microsoft® Excel files, spreadsheets, web pages, diagrams, flowcharts, etc.)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ives and Short to incorporate the teachings of Sood to comprise a video extraction algorithm, an article extraction algorithm, and a slide extraction algorithm in the plurality of data extraction algorithms. Doing so would identify and extract the important sentences and information from a given audio, video, and/or text file and generate them verbatim to produce a subset of the sentences from the original text as a summary as taught by Sood ([0070]).

With respect to claim 9, as discussed in claim 8, Ives and Short and Sood teach all the limitations therein. Short further teaches the system of claim 8, the non-transitory computer-readable storage medium having additional instructions stored which, when executed by the at least one processor, cause the at least one processor to perform operations comprising: receiving a natural language query from a user (Fig. 5; [0072]: receive, via a user interface, a user text query (step S200). The user text query may be a single word or multiple words); searching the semantic search index for a response to the natural language query, resulting in query search results (Fig. 5; [0073]: search an index file of the concept-based search engine to identify at least one entry that matches the user text query (step S202)); and displaying, via a display of the system, the query search results in response to the natural language query (Fig. 5; [0077]: output, via the user interface, information, from a text-based database of the concept-based search engine, identifying at least one text data item associated with an entry in the index file that matches the user text query (step S204)).

With respect to claim 10, as discussed in claim 9, Ives and Short and Sood teach all the limitations therein. Short further teaches the system of claim 9, wherein the query search results further comprise at least one source for each query search result in the query search results (Fig. 5; [0077]: output, via the user interface, information, from a text-based database of the concept-based search engine, identifying at least one text data item associated with an entry in the index file that matches the user text query (step S204), wherein the information identifying at least one text data item identifies “at least one source”).

With respect to claim 11, as discussed in claim 8, Ives and Short and Sood teach all the limitations therein. Ives further teaches the system of claim 8, wherein the training course content further comprises a plurality of courses, with each course in the plurality of courses comprising training content and exam questions (As discussed above, the limitation “training course content” is nonfunctional descriptive material that is not functionally involved in the steps recited. The method would have been performed the same regardless of the data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings already in the prior art to the particular type of data: “training course content”).

With respect to claim 12, as discussed in claim 11, Ives and Short and Sood teach all the limitations therein. Ives further teaches the system of claim 11, wherein the training content comprises at least one of video course content, slide-based course content, and article course content (Fig. 17; [0129]; [0036]: web page content includes any collection of data files, audio, image or video files and so on).

With respect to claim 13, as discussed in claim 8, Ives and Short and Sood teach all the limitations therein. Short further teaches the system of claim 8, wherein the at least one data extraction algorithm comprises at least one Artificial Intelligence (AI) language service (Fig. 7; [0091]: the AI module 116 may be used to analyse the unstructured text data items received during the index file generation process, to determine semantic information encoded by at least one portion of text within the received text data items, wherein the AI module 116 corresponds to “at least one AI language service”).

With respect to claim 14, as discussed in claim 13, Ives and Short and Sood teach all the limitations therein. Short further teaches the system of claim 13, wherein the at least one AI language service comprises: key phrase extraction; recognition of named entities; and domain extraction ([0095]: keyword, domain, and named entities. [0116]: keyword).

With respect to claim 15, Ives teaches a non-transitory computer-readable storage medium having instructions stored which, when executed by at least one processor, cause the at least one processor to perform operations comprising: executing a search of training course content stored in a database ([0036]; [0133]: stored in a database), the search identifying at least one of new training course content or updated training course content, resulting in search result content (Fig.
17; [0129]: rescan the pages in a given web collection to determine their changes after a set period at step 670, wherein the rescanning corresponds to “executing … a search … the search identifying … new … or updated …”. The limitation “training course content” is nonfunctional descriptive material that is not functionally involved in the steps recited. The method would have been performed the same regardless of the data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings already in the prior art to the particular type of data: “training course content”); identifying, for each piece of content in the search result content, a media type of the each piece of content, the media type comprising one of a video type, an article type, and a slide type (Fig. 17; [0129]: identify media types of files in the pages at step 610. [0135]-[0141]; [0040]: extracted web page content includes images, video files, audio files, text files, or parts of or combinations of any of these and so on, wherein video files indicate “a video type” and text files indicate “an article type”, i.e., the media type comprises at least one of … type); executing, on the each piece of content, at least one data extraction algorithm from a plurality of data extraction algorithms, the plurality of data extraction algorithms comprising a video extraction algorithm, an article extraction algorithm, and a slide extraction algorithm, wherein the at least one data extraction algorithm is selected based on the media type, resulting in extracted data for each piece of content in the search result content (Fig. 
17; [0129]: apply an analysis algorithm to each file according to the media type of the file to derive or extract content items at step 620, wherein the analysis algorithm corresponds to “at least one data extraction algorithm” and [0135]-[0141] and [0040] teach “a plurality of data extraction algorithms” as discussed above); Ives does not teach the plurality of data extraction algorithms comprising a video extraction algorithm, an article extraction algorithm, and a slide extraction algorithm, adding the extracted data to a semantic search index. Short teaches adding the extracted data to a semantic search index (Fig. 1; [0065]-[0067]: store the determined semantic information as an entry in the index file at S106. [0002]-[0004]: a semantic search index). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ives to incorporate the teachings of Short to add the extracted data to a semantic search index. Doing so would provide a way of searching a corpus of textual data /text data items that uses concepts and conceptual relationships rather than keyword searching. That is, use a lexical-conceptual knowledge base to enable the querying of the corpus of textual data using concepts and relations between concepts instead of literal or fuzzy string matching as taught by Short ([0005]). 
Ives and Short do not teach the plurality of data extraction algorithms comprising a video extraction algorithm, an article extraction algorithm, and a slide extraction algorithm. Sood teaches the plurality of data extraction algorithms comprising a video extraction algorithm, an article extraction algorithm, and a slide extraction algorithm ([0069]-[0070]: use an extractive text summarization algorithm to an audio, video, text file, and the one or more presentation tools (e.g., Microsoft® PowerPoint slides, Microsoft® Excel files, spreadsheets, web pages, diagrams, flowcharts, etc.)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ives and Short to incorporate the teachings of Sood to comprise a video extraction algorithm, an article extraction algorithm, and a slide extraction algorithm in the plurality of data extraction algorithms. Doing so would identify and extract the important sentences and information from a given audio, video, and/or text file and generate them verbatim to produce a subset of the sentences from the original text as a summary as taught by Sood ([0070]).

With respect to claim 16, as discussed in claim 15, Ives and Short and Sood teach all the limitations therein. Short further teaches the non-transitory computer-readable storage medium of claim 15, having additional instructions stored which, when executed by the at least one processor, cause the at least one processor to perform operations comprising: receiving a natural language query from a user (Fig. 5; [0072]: receive, via a user interface, a user text query (step S200). The user text query may be a single word or multiple words); searching the semantic search index for a response to the natural language query, resulting in query search results (Fig. 5; [0073]: search an index file of the concept-based search engine to identify at least one entry that matches the user text query (step S202)); and displaying, via a display, the query search results in response to the natural language query (Fig. 5; [0077]: output, via the user interface, information, from a text-based database of the concept-based search engine, identifying at least one text data item associated with an entry in the index file that matches the user text query (step S204)).

With respect to claim 17, as discussed in claim 16, Ives and Short and Sood teach all the limitations therein. Short further teaches the non-transitory computer-readable storage medium of claim 16, wherein the query search results further comprise at least one source for each query search result in the query search results (Fig. 5; [0077]: output, via the user interface, information, from a text-based database of the concept-based search engine, identifying at least one text data item associated with an entry in the index file that matches the user text query (step S204), wherein the information identifying at least one text data item identifies “at least one source”).

With respect to claim 18, as discussed in claim 15, Ives and Short and Sood teach all the limitations therein. Ives further teaches the non-transitory computer-readable storage medium of claim 15, wherein the training course content further comprises a plurality of courses, with each course in the plurality of courses comprising training content and exam questions (As discussed above, the limitation “training course content” is nonfunctional descriptive material that is not functionally involved in the steps recited. The method would have been performed the same regardless of the data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings already in the prior art to the particular type of data: “training course content”).

With respect to claim 19, as discussed in claim 18, Ives and Short and Sood teach all the limitations therein. Ives further teaches the non-transitory computer-readable storage medium of claim 18, wherein the training content comprises at least one of video course content, slide-based course content, and article course content (Fig. 17; [0129]; [0036]: web page content includes any collection of data files, audio, image or video files and so on).

With respect to claim 20, as discussed in claim 15, Ives and Short and Sood teach all the limitations therein. Short further teaches the non-transitory computer-readable storage medium of claim 15, wherein the at least one data extraction algorithm comprises at least one Artificial Intelligence (AI) language service (Fig. 7; [0091]: the AI module 116 may be used to analyse the unstructured text data items received during the index file generation process, to determine semantic information encoded by at least one portion of text within the received text data items, wherein the AI module 116 corresponds to “at least one AI language service”).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOQIN HU whose telephone number is (571)272-1792. The examiner can normally be reached on Monday-Friday 7:00am-3:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Rones, can be reached at (571) 272-4085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XIAOQIN HU/Examiner, Art Unit 2168 /CHARLES RONES/Supervisory Patent Examiner, Art Unit 2168
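Stripped of claim language, the rejected method is a media-type dispatch pipeline: search for new or updated course content, route each item to an extraction algorithm chosen by its media type (video, article, or slide), and add the extracted data to a semantic search index. A minimal illustrative sketch of that flow, with every name and type invented for illustration (none of this comes from the application or the cited references):

```python
# Hypothetical sketch of the claim 1 pipeline: dispatch each piece of
# search-result content to an extractor chosen by media type, then add
# the extracted data to a (toy) semantic search index.
from dataclasses import dataclass

@dataclass
class Content:
    title: str
    media_type: str  # "video", "article", or "slide"
    body: str

def extract_video(c: Content) -> str:   return f"[video transcript] {c.body}"
def extract_article(c: Content) -> str: return f"[article text] {c.body}"
def extract_slide(c: Content) -> str:   return f"[slide notes] {c.body}"

# The "plurality of data extraction algorithms", selected based on media type.
EXTRACTORS = {"video": extract_video, "article": extract_article, "slide": extract_slide}

def index_new_content(search_results: list[Content], semantic_index: dict) -> None:
    for content in search_results:                   # each piece of search result content
        extractor = EXTRACTORS[content.media_type]   # selected based on the media type
        semantic_index[content.title] = extractor(content)  # add extracted data to index

semantic_index: dict = {}
index_new_content([Content("Intro", "video", "welcome"),
                   Content("Safety", "slide", "PPE basics")], semantic_index)
print(semantic_index["Intro"])  # [video transcript] welcome
```

The per-media-type dispatch table is what distinguishes the claim from a single generic extractor, which is why the rejection leans on Sood for the video/article/slide algorithm plurality.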

Prosecution Timeline

Mar 12, 2025: Application Filed
Oct 18, 2025: Non-Final Rejection — §103
Jan 20, 2026: Applicant Interview (Telephonic)
Jan 20, 2026: Examiner Interview Summary
Jan 22, 2026: Response Filed
Feb 10, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585863: COMPRESSION SCHEME FOR STABLE UNIVERSAL UNIQUE IDENTITIES
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12554773: METHODS AND SYSTEM FOR IMPORTING DATA TO A GRAPH DATABASE USING NEAR-STORAGE PROCESSING
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12554736: METHODS AND SYSTEMS FOR GENERATING RECOMMENDATIONS IN CLOUD-BASED DATA WAREHOUSING SYSTEM
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12488055: DATASET IDENTIFICATION FOR DATASETS WITH MULTIPLE IDENTIFICATION ATTRIBUTES
Granted Dec 02, 2025 (2y 5m to grant)

Patent 12481645: DATA MANAGEMENT SYSTEM AND METHOD FOR DETECTING BYZANTINE FAULT
Granted Nov 25, 2025 (2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 61%
With Interview: 99% (+57.9%)
Median Time to Grant: 2y 12m
PTA Risk: Moderate

Based on 187 resolved cases by this examiner. Grant probability derived from career allow rate.
