Prosecution Insights
Last updated: April 19, 2026
Application No. 18/136,523

CLASSIFICATION PROCESS SYSTEMS AND METHODS

Non-Final OA — §101, §102, §103
Filed: Apr 19, 2023
Examiner: TITCOMB, WILLIAM D
Art Unit: 2178
Tech Center: 2100 — Computer Architecture & Software
Assignee: Rayyan Systems Inc.
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 83% — above average (516 granted / 619 resolved; +28.4% vs TC avg)
Interview Lift: +14.4% (moderate) — comparing resolved cases with vs. without an interview
Typical Timeline: 2y 7m average prosecution; 17 applications currently pending
Career History: 636 total applications across all art units

Statute-Specific Performance

§101: 9.7% (-30.3% vs TC avg)
§103: 41.6% (+1.6% vs TC avg)
§102: 28.9% (-11.1% vs TC avg)
§112: 15.5% (-24.5% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 619 resolved cases
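Taking the statute-level deltas at face value, the implied Tech Center baseline can be back-solved for each statute. A minimal sketch, assuming (my reading, not stated on the page) that each delta is simply the examiner's rate minus the TC average:

```python
# Back-solve the implied Tech Center average from each examiner rate and its
# stated delta, assuming delta = examiner_rate - tc_avg (my assumption; the
# page does not define how the deltas are computed).
rates = {
    "101": (9.7, -30.3),
    "103": (41.6, 1.6),
    "102": (28.9, -11.1),
    "112": (15.5, -24.5),
}
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
```

Under that assumption, all four statutes back-solve to the same ~40% baseline, which would suggest the deltas were computed against a single TC-wide figure rather than per-statute averages.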

Office Action

§101 §102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

During patent examination, pending claims must be “given their broadest reasonable interpretation consistent with the specification.” MPEP 2111; see also MPEP 2173.02. Limitations appearing in the specification but not recited in the claim are not read into the claim. In re Prater, 415 F.2d 1393, 1404-05, 162 USPQ 541, 550-51 (CCPA 1969). See also In re Zletz, 893 F.2d 319, 321-22, 13 USPQ2d 1320, 1322 (Fed. Cir. 1989) (“During patent examination the pending claims must be interpreted as broadly as their terms reasonably allow”). The reason is simply that during patent prosecution, when claims can be amended, ambiguities should be recognized, scope and breadth of language explored, and clarification imposed. An essential purpose of patent examination is to fashion claims that are precise, clear, correct, and unambiguous. Only in this way can uncertainties of claim scope be removed, as much as possible, during the administrative process. The Examiner respectfully requests that the Applicant, in preparing responses, fully consider the entirety of the reference(s) as potentially teaching all or part of the claimed invention. It is noted that REFERENCES ARE RELEVANT AS PRIOR ART FOR ALL THEY CONTAIN.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 19-36 are directed to non-statutory subject matter.
The claims do not fall within at least one of the four categories of patent-eligible subject matter because the broadest reasonable interpretation of “a memory” includes transitory propagating signals, i.e., waves and signals, which are not considered statutory subject matter. See In re Nuijten, 84 USPQ2d 1495, 1503 (Fed. Cir. 2007). Because the full scope of the claim encompasses non-statutory subject matter (i.e., transitory propagating signals), the claim as a whole is non-statutory, and therefore claim 19 in its current version is rejected under 35 U.S.C. 101. Claim 19 is rejected under 35 U.S.C. 101 because the subject matter is not limited to statutory subject matter, as discussed above. Claim 19 directly recites, inter alia, “a memory”. Claims 20-36 directly or indirectly depend from claim 19 and fail to correct the basis on which claim 19 is rejected under 35 U.S.C. 101; therefore they are rejected on the same basis as claim 19. Amendment and/or correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-16, 19-34, and 37-40 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Application Publication No. 2020/0364233 A1 to Chan et al. (hereinafter Chan).

With regards to claim 1, Chan discloses: 1. A method for automated and/or semi-automated classification of research data via a collaboration platform, said platform designed for extraction and synthesis of research data from a collection, the method comprising (see, detailed description, including, the search module 206 may receive a search criteria.
In one example, search criteria may include search terms, questions, and selection criteria from a searcher (e.g., a user) para. 0099-0102) : receiving, by a processor of a computing device, a first input comprising a classification context, wherein the classification context comprises a query and/or a topic (see, detailed description, including, search module 206 may search the index of the corpus based on the explicit search criteria provided by the user (e.g., according to the search criteria entered in a search box). In some embodiments, the search module 206 may have utilized reference document(s) and/or other information to build the domain-specific lookup table/special keyword table. In other embodiments, the search module 206 may not have built the domain-specific lookup table/specific keyword table and/or received reference documents , para. 0099-0102) ; receiving, by the processor, a second input comprising a classification protocol, wherein the classification protocol comprises one or more user-customizable features for obtaining and/or presenting output search results for review by a user (see, detailed description, including, The search module 206 may rank the output of the search results in any number of ways. In some embodiments, the search module 206 may determine a similarity of each document in the search results to one or more reference documents. Similarity may be determined in any number of ways including determining distances between documents based on commonality of keywords and phrases, context, and/or the like , and search module 206 may receive the selected relevant documents and search the corpus again using the keywords or simply re-order the existing search results. The new search results may then be ranked based on similarity to the selected relevant documents. As a result, the search results may be improved and curated by the user. 
The process may continue with the searcher selecting one or more additional relevant documents (or selecting irrelevant documents) and again running the search, para. 0060, 0063-0067, 0083-0084, 0099-0102); executing an artificial intelligence (AI) module by the processor, to produce the output search results in response to the classification context and in accordance with the classification protocol (see, detailed description, including, the search module 206 may utilize the search criteria and the optional reference documents to build a machine learning model that may be stored and used by any number of subsequent users. After the search module 206 provides the search results (e.g., via the output module 210), the implicit input module 204 may receive feedback from the user, para. 0064); and receiving, by the processor, a third input comprising a plurality of user classification actions, the classification actions automatically recorded by the processor, wherein each of the plurality of classification actions is acceptance or rejection by the user of a proposed search result, and wherein the AI module is trained using the received plurality of classification actions (see, detailed description, including, The search module 206 and/or ML model module 208 may train a machine learning model based on the inclusion and/or exclusion of documents identified by the researcher. In some embodiments, the search module 206 and/or ML model module 208 trains the machine learning model such that those documents that are selected by the researcher as relevant are likely to be prioritized to the top of the list of search results. This machine learning model may be used to replace the default ranking algorithm to rank the future results, para. 0083-0084). With regards to claim 2, Chan discloses: 2.
The method of claim 1, further comprising graphically rendering, by the processor, the output to the user via a presentation device (see, detailed description, step 412, the output module 210 may provide the ranked results to the researcher. In one example, the ranked results are provided in a GUI provided by a user system 108, para. 0099-0102). With regards to claim 3, Chan discloses: 3. The method of claim 2, wherein (i) the presentation of the output and/or (ii) the user-customizable settings is/are configurable by the user via at least one graphical user interface widget (see, detailed description, including, step 412, the output module 210 may provide the ranked results to the researcher. In one example, the ranked results are provided in a GUI provided by a user system 108, para. 0099-0102). With regards to claim 4, Chan discloses: 4. The method of claim 3, wherein the at least one graphical user interface widget is a member selected from the group consisting of: a button, a slider, a list box, a spinner, a drop-down list, a menu, a menu bar, a toolbar, a ribbon, an icon, a tree view, a grid view, a datagrid, a text box, and a combo box (see, detailed description, including, The GUI of the user system 108 may provide an option to identify any number of documents as relevant. The GUI may additionally provide an option to identify any number of documents as not relevant. Further, the GUI of the user system 108 may provide a field or other structure to identify reasons (e.g., text, radio buttons, or the like) for the document to be relevant or irrelevant, para. 0102). With regards to claim 5, Chan discloses: 5. The method of claim 3, wherein the presentation of the output is customizable by the user via the at least one graphical user interface (GUI) widget (see, as above claim 4, and detailed description, including, The GUI of the user system 108 may provide an option to identify any number of documents as relevant.
The GUI may additionally provide an option to identify any number of documents as not relevant. Further, the GUI of the user system 108 may provide a field or other structure to identify reasons (e.g., text, radio buttons, or the like) for the document to be relevant or irrelevant (para. 0099-0102). With regards to claim 6, Chan discloses: 6. The method of claim 3, wherein the user-customizable settings comprise one or more members selected from the group consisting of: (i) allowing a swipe gesture to include and exclude a search result, (ii) having swipe left include and swipe right to exclude, (iii) showing the include button on the right; showing an include/exclude button, (iv) showing an Abstract, (v) showing a Journal and Author information, (vi) showing Labels, (vii) showing Reasons for classification (viii) showing Decisions, and (ix) showing a "Maybe" designation indicating the classification of a given search result may not have sufficiently high confidence for automatic inclusion in the set of relevant search results (see, as above claim 4 and 5, and detailed description, including, The GUI of the user system 108 may provide an option to identify any number of documents as relevant. The GUI may additionally provide an option to identify any number of documents as not relevant. Further, the GUI of the user system 108 may provide a field or other structure to identify reasons (e.g., text, radio buttons, or the like) for the document to be relevant or irrelevant (para. 0099-0102). With regards to claim 7, Chan discloses: 7. 
The method of claim 1, comprising executing the artificial intelligence (AI) module by the processor, in response to the classification context and in accordance with the classification protocol, to produce an output comprising research data beyond a scope of the classification context (see, detailed description, including, the search module 206 may create a machine learning model such as a classification model and performs an initial search of the corpus based on the search criteria to generate search results, and the output module 210 may provide the ranked results to the researcher. In one example, the ranked results are provided in a GUI provided by a user system 108. The GUI of the user system 108 may provide an option to identify any number of documents as relevant. The GUI may additionally provide an option to identify any number of documents as not relevant. Further, the GUI of the user system 108 may provide a field or other structure to identify reasons (e.g., text, radio buttons, or the like) for the document to be relevant or irrelevant, para. 0100-0103). With regards to claim 8, Chan discloses: 8. The method of claim 1, wherein executing the AI module comprises searching a collection of research data for research data relevant to the classification context in accordance with the classification protocol (see, detailed description, including, the search module 206 may receive the list of document identifiers that identify documents that are considered by the user to be relevant and/or not relevant to the desired query and/or search results, para. 0103-0109). With regards to claim 9, Chan discloses: 9.
The method of claim 1, wherein executing the AI module comprises labeling the research data identified as relevant with a code (see, detailed description, including, the search module 206 may receive the list of document identifiers that identify documents that are considered by the user to be relevant and/or not relevant to the desired query and/or search results, para. 0103-0109). With regards to claim 10, Chan discloses: 10. The method of claim 9, further comprising identifying a reason for excluding research data that are excluded as irrelevant, thereby supporting reproducibility and exclusion of duplicates (see, detailed description, including, the search module 206 may determine a similarity of each document in the search results to one or more reference documents. Similarity may be determined in any number of ways including determining distances between documents based on commonality of keywords and phrases, context, and/or the like; the feedback may include different criteria, selection of relevant documents from the search results, rejection of one or more documents of the search results, requests for different methodology for model creation, and/or the like. As a result, the search module 206 and/or the implicit input module 204 may modify or change the machine learning model based on the information from the user after search results are provided, para. 0063-0064). With regards to claim 11, Chan discloses: 11. The method of claim 1, wherein the classification context comprises one or more strings of alphanumeric characters and wherein the AI module comprises natural language processing (NLP) software (see, detailed description, including, corpus module 202 may include NLP functionality to convert, index, and/or categorize text (e.g., words and phrases) within any number of documents of the corpus, and corpus module 202 may identify keywords and phrases within all or some of the corpus (e.g., using NLP techniques).
In one example the corpus module 202 may parse keywords and phrases from any number of the documents of the corpus , para. 0054, 0079) . With regards to claim 12, Chan discloses: 12. The method of claim 1, wherein the classification action is user defined and comprises swiping, tapping, nodding, hot keys, buttons, mouse clicks, head movements, hand movements or gestures, voice recognition, brain activity, or any other user defined action or computer process associated with a customization action (see, detailed description, including, The GUI of the user system 108 may provide an option to identify any number of documents as relevant. The GUI may additionally provide an option to identify any number of documents as not relevant. Further, the GUI of the user system 108 may provide a field or other structure to identify reasons (e.g., text, radio buttons, or the like) for the document to be relevant or irrelevant , para. 0102) . With regards to claim 13, Chan discloses: 13. The method of claim 12, wherein the classification action is performed using an intermediary gadget that senses and interprets a user intention at the point of action (see, detailed description, including, user system 108 may each be or include any number of digital devices , para. 0029) . With regards to claim 14, Chan discloses: 14. 
The method of claim 1, wherein the AI module produces the output and/or the processor renders the output in accordance with one or more user-defined customizations comprising one or more of the following: metadata, such as labels, reasons, notes, tags, types, sources, authors, definitions, categories, assessments; assigning relationships, ratings, rankings, scores, measures, grades, quality, probabilities, confidence; searching, selecting, sorting, filtering, categorizing, identifying, deleting, copying, extracting, archiving, filing, deciding, coding or curating; and any other user defined customizations in their plurality (see, detailed description, including, In step 406, the search module 206 may receive a search criteria. In one example, search criteria may include search terms, questions, and selection criteria from a searcher (e.g., a user). Examples of search terms may include “Current AND Coronary AND intravascular ultrasound,” “Advantage AND intravascular ultrasound,” and “Disadvantage AND intravascular ultrasound.”, para. 0099). With regards to claim 15, Chan discloses: 15. The method of claim 1, where the AI module is trained using the received classification actions (see, detailed description, including, search module 206 may rank the output of the search results in any number of ways. In some embodiments, the search module 206 may determine a similarity of each document in the search results to one or more reference documents. Similarity may be determined in any number of ways including determining distances between documents based on commonality of keywords and phrases, context, and/or the like, and the search module 206 and/or ML model module 208 may train a machine learning model based on the inclusion and/or exclusion of documents identified by the researcher.
In some embodiments, the search module 206 and/or ML model module 208 trains the machine learning model such that those documents that are selected by the researcher as relevant are likely to be prioritized to the top of the list of search results. This machine learning model may be used to replace the default ranking algorithm to rank the future results, para. 0063-0084). With regards to claim 16, Chan discloses: 16. The method of claim 1, wherein the steps are performed iteratively (see, detailed description, including, the search module 206 may utilize the search criteria and the optional reference documents to build a machine learning model that may be stored and used by any number of subsequent users. After the search module 206 provides the search results (e.g., via the output module 210), the implicit input module 204 may receive feedback from the user. The feedback may include different criteria, selection of relevant documents from the search results, rejection of one or more documents of the search results, requests for different methodology for model creation, and/or the like. As a result, the search module 206 and/or the implicit input module 204 may modify or change the machine learning model based on the information from the user after search results are provided, para. 0064, 0083-0084). With regard to claim 19, claim 19 (a system claim) recites substantially similar limitations to claim 1 (a method claim) and is therefore rejected using the same art and rationale set forth above. With regard to claim 20, claim 20 (a system claim) recites substantially similar limitations to claim 2 (a method claim) and is therefore rejected using the same art and rationale set forth above. With regard to claim 21, claim 21 (a system claim) recites substantially similar limitations to claim 3 (a method claim) and is therefore rejected using the same art and rationale set forth above.
With regard to claim 22, claim 22 (a system claim) recites substantially similar limitations to claim 4 (a method claim) and is therefore rejected using the same art and rationale set forth above. With regard to claim 23, claim 23 (a system claim) recites substantially similar limitations to claim 5 (a method claim) and is therefore rejected using the same art and rationale set forth above. With regard to claim 24, claim 24 (a system claim) recites substantially similar limitations to claim 6 (a method claim) and is therefore rejected using the same art and rationale set forth above. With regard to claim 25, claim 25 (a system claim) recites substantially similar limitations to claim 7 (a method claim) and is therefore rejected using the same art and rationale set forth above. With regard to claim 26, claim 26 (a system claim) recites substantially similar limitations to claim 8 (a method claim) and is therefore rejected using the same art and rationale set forth above. With regard to claim 27, claim 27 (a system claim) recites substantially similar limitations to claim 9 (a method claim) and is therefore rejected using the same art and rationale set forth above. With regard to claim 28, claim 28 (a system claim) recites substantially similar limitations to claim 10 (a method claim) and is therefore rejected using the same art and rationale set forth above. With regard to claim 29, claim 29 (a system claim) recites substantially similar limitations to claim 11 (a method claim) and is therefore rejected using the same art and rationale set forth above. With regard to claim 30, claim 30 (a system claim) recites substantially similar limitations to claim 12 (a method claim) and is therefore rejected using the same art and rationale set forth above. With regard to claim 31, claim 31 (a system claim) recites substantially similar limitations to claim 13 (a method claim) and is therefore rejected using the same art and rationale set forth above. 
With regard to claim 32, claim 32 (a system claim) recites substantially similar limitations to claim 14 (a method claim) and is therefore rejected using the same art and rationale set forth above. With regard to claim 33, claim 33 (a system claim) recites substantially similar limitations to claim 15 (a method claim) and is therefore rejected using the same art and rationale set forth above. With regard to claim 34, claim 34 (a system claim) recites substantially similar limitations to claim 16 (a method claim) and is therefore rejected using the same art and rationale set forth above. With regard to claim 37, claim 37 (a method claim) recites substantially similar limitations to claim 1, 2, 3 (all a method claim) and is therefore rejected using the same art and rationale set forth above. With regard to claim 38, claim 38 (a method claim) recites substantially similar limitations to claim 4 (a method claim) and is therefore rejected using the same art and rationale set forth above. With regard to claim 39, claim 39 (a method claim) recites substantially similar limitations to claim 6 (a method claim) and is therefore rejected using the same art and rationale set forth above. With regard to claim 40, claim 40 (a method claim) recites substantially similar limitations to claim 8 (a method claim) and is therefore rejected using the same art and rationale set forth above. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. 
Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 17-18 and 35-36 are rejected under 35 U.S.C. 103 as being unpatentable over Chan in view of U.S. Patent Application Publication No. 2021/0182423 A1 to Padmanabhan. With regards to claim 17, Chan fails to explicitly disclose: 17. The method of claim 1, wherein the classification context, the classification protocol, the output, or combinations thereof are published on a blockchain.
Padmanabhan discloses : the classification protocol, the output, or combinations thereof are published on a blockchain (see, detailed description, including, triggering the event via the event listener based on changes to the metadata for the new application further includes: triggering one or more of: a business user defined process flow to execute responsive to changes to the defined metadata persisted to the blockchain; a business user defined data retrieval operation to execute responsive to changes to the defined metadata persisted to the blockchain , para. 0544). It would have been obvious to one having ordinary skill at the time the invention was filed, and having the teachings of Chan with Padmanabhan before her, to be motivated to combine the features from Padmanabhan , with Chan, including, triggering the event via the event listener based on changes to the metadata for the new application further includes: triggering one or more of: a business user defined process flow to execute responsive to changes to the defined metadata persisted to the blockchain; a business user defined data retrieval operation to execute responsive to changes to the defined metadata persisted to the blockchain , para. 0544). Therefore, a rationale to support a conclusion that a claim would have been obvious is that all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art . With regards to claim 18, Chan fails to explicitly disclose: 18. The method of claim 17, comprising providing for a compensatory exchange for publishing on the blockchain. 
Padmanabhan discloses: comprising providing for a compensatory exchange for publishing on the blockchain (see, detailed description, including, comprising providing for a compensatory exchange for publishing on the blockchain, para. 1008, and, social media platforms and apps measure their ability to make money on the basis of the amount and diversity of “user” data they have at their disposal; platform providers and app providers leverage user's data to generate revenue, para. 1011). It would have been obvious to one having ordinary skill at the time the invention was filed, and having the teachings of Chan with Padmanabhan before her, to be motivated to combine the features from Padmanabhan with Chan, including, comprising providing for a compensatory exchange for publishing on the blockchain, para. 1008, and, social media platforms and apps measure their ability to make money on the basis of the amount and diversity of “user” data they have at their disposal; platform providers and app providers leverage user's data to generate revenue, para. 1011. Therefore, a rationale to support a conclusion that a claim would have been obvious is that all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art. With regard to claim 35, claim 35 (a system claim) recites substantially similar limitations to claim 17 (a method claim) and is therefore rejected using the same art and rationale set forth above. With regard to claim 36, claim 36 (a system claim) recites substantially similar limitations to claim 18 (a method claim) and is therefore rejected using the same art and rationale set forth above. A sampling of the prior art made of record and not relied upon and considered pertinent to Applicants’ disclosure includes: U.S.
Patent No. 12,233,000 B2 to Zimmer, which discusses: The data shaping system comprises a computer implemented algorithm that uses modifiers (e.g., time available to study, educational attainment of the user, etc.) to shape data retrieved from one or more datasets for consumption by a user. A dataset may comprise text, images, video, audio, or a combination thereof. The data shaping system is configured to curate data, retrieved from selected datasets, using modifiers to shape (or assemble) an output document that is presented to the user for review. In this way, the user is provided with a curated subset of data, which is an assemblage of information about two or more topics of interest and how the two or more topics of interest are related, that has been tailored to their needs. In some implementations, the output document may be text, one or more images, audio, video, or a combination thereof.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM D. TITCOMB, whose telephone number is (571) 270-5190. The examiner can normally be reached 9:30 AM - 6:30 PM (M-F). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen C. Hong, can be reached at 571-272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WILLIAM D TITCOMB/
Primary Examiner, Art Unit 2178
3-2-2026
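For orientation, the search-and-feedback loop the rejection attributes to Chan (an initial search, ranking by similarity to reference documents, then updating the model from the user's include/exclude decisions) can be sketched as follows. This is an illustrative reconstruction, not code from Chan: the term-frequency vectors, cosine ranking, and Rocchio-style query update stand in for Chan's unspecified machine learning model, and the sample documents are invented.

```python
# Sketch of a relevance-feedback search loop (illustrative only; the Rocchio
# update substitutes for Chan's unspecified ML model).
from collections import Counter
import math

def vectorize(text):
    # Simple term-frequency vector over whitespace tokens.
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def rank(query_vec, docs):
    # Rank documents by similarity to the (possibly updated) query vector.
    return sorted(docs, key=lambda d: cosine(query_vec, vectorize(d)), reverse=True)

def feedback(query_vec, relevant, irrelevant, alpha=1.0, beta=0.75, gamma=0.25):
    # Rocchio update: move the query toward included docs, away from excluded.
    updated = Counter({t: alpha * v for t, v in query_vec.items()})
    for doc in relevant:
        for t, v in vectorize(doc).items():
            updated[t] += beta * v / len(relevant)
    for doc in irrelevant:
        for t, v in vectorize(doc).items():
            updated[t] -= gamma * v / len(irrelevant)
    return Counter({t: v for t, v in updated.items() if v > 0})

docs = [
    "coronary intravascular ultrasound advantages",
    "ultrasound imaging of coronary arteries",
    "weather forecast models",
]
q = vectorize("intravascular ultrasound")
first = rank(q, docs)                       # initial ranking
q2 = feedback(q, relevant=[first[0]], irrelevant=["weather forecast models"])
second = rank(q2, docs)                     # re-ranked after user feedback
```

Each accept/reject action tightens the query vector, so documents resembling the included results climb the ranking on the next pass, which is the iterative behavior the rejection maps onto claims 1, 15, and 16.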

Prosecution Timeline

Apr 19, 2023
Application Filed
Mar 03, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604055
Auto-reframing and multi-cam functions of video editing application
2y 5m to grant Granted Apr 14, 2026
Patent 12591441
DETERMINING SEQUENCES OF INTERACTIONS, PROCESS EXTRACTION, AND ROBOT GENERATION USING GENERATIVE ARTIFICIAL INTELLIGENCE / MACHINE LEARNING MODELS
2y 5m to grant Granted Mar 31, 2026
Patent 12591442
DETERMINING SEQUENCES OF INTERACTIONS, PROCESS EXTRACTION, AND ROBOT GENERATION USING GENERATIVE ARTIFICIAL INTELLIGENCE / MACHINE LEARNING MODELS
2y 5m to grant Granted Mar 31, 2026
Patent 12579647
EVALUATION APPARATUS, EVALUATION METHOD, AND EVALUATION PROGRAM
2y 5m to grant Granted Mar 17, 2026
Patent 12573231
CONTROLLING ROLLABLE DISPLAY DEVICES BASED ON FINGERPRINT INFORMATION AND TOUCH INFORMATION
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 98% (+14.4%)
Median Time to Grant: 2y 7m
PTA Risk: Low

Based on 619 resolved cases by this examiner. Grant probability derived from career allow rate.
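The projection figures are internally consistent with the examiner's raw counts. A quick check, assuming (my reading of the footnote, not documented on the page) that grant probability is the rounded career allow rate and that the with-interview figure simply adds the +14.4-point lift:

```python
# Reproduce the headline projections from the underlying counts, assuming
# grant probability = career allow rate and with-interview = allow rate plus
# the stated 14.4-point interview lift (both mappings are my assumptions).
granted, resolved = 516, 619
allow_rate = granted / resolved              # career allow rate
with_interview = allow_rate + 0.144          # stated +14.4% interview lift
grant_probability = round(allow_rate * 100)          # headline figure
with_interview_probability = round(with_interview * 100)
```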
