Prosecution Insights
Last updated: April 19, 2026
Application No. 18/173,202

TEACHING DEVICE, TEACHING METHOD, AND COMPUTER PROGRAM PRODUCT

Final Rejection: §101, §103
Filed: Feb 23, 2023
Examiner: SOMERS, MARC S
Art Unit: 2159
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Kabushiki Kaisha Toshiba
OA Round: 2 (Final)
Grant Probability: 65% (Moderate)
Predicted OA Rounds: 3-4
Time to Grant: 4y 0m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 65% (364 granted / 563 resolved), +9.7% vs Tech Center average
Interview Lift: +34.6% (strong), comparing resolved cases with and without an interview
Typical Timeline: 4y 0m average prosecution; 36 applications currently pending
Career History: 599 total applications across all art units
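The headline numbers above can be cross-checked with a few lines of arithmetic (a minimal sketch; the dashboard's exact rounding rules are an assumption):

```python
# Sanity-check the examiner statistics quoted above.
granted, resolved = 364, 563

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # -> 64.7%, displayed as 65%

# "+9.7% vs TC avg" implies a Tech Center baseline of roughly:
implied_tc_avg = allow_rate - 0.097
print(f"Implied TC average: {implied_tc_avg:.1%}")  # -> 55.0%
```

The 364/563 ratio rounds to the displayed 65%, and the quoted delta implies a Tech Center baseline of about 55%.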

Statute-Specific Performance

§101: 18.0% (-22.0% vs TC avg)
§103: 47.9% (+7.9% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 15.1% (-24.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 563 resolved cases.
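Since each statute line pairs the examiner's allowance rate with a delta against the Tech Center average, the implied baseline can be backed out directly (a sketch assuming the deltas are simple percentage-point differences):

```python
# Back out the implied Tech Center average behind each statute line.
stats = {
    "101": (18.0, -22.0),
    "103": (47.9, +7.9),
    "102": (10.1, -29.9),
    "112": (15.1, -24.9),
}

for statute, (rate, delta) in stats.items():
    implied = rate - delta  # examiner rate minus delta = TC baseline
    print(f"§{statute}: examiner {rate}%, implied TC avg {implied:.1f}%")
# Every line prints an implied TC average of 40.0%, consistent with a
# single Tech Center baseline behind all four statutes.
```

The fact that all four statutes imply the same 40.0% baseline suggests the deltas are measured against one common Tech Center estimate rather than per-statute averages.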

Office Action

Grounds of rejection: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The amendments were received 2/10/2026. Claims 1-12 are pending, where claims 1-12 were previously presented.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Objections

The applicant amended the claims to address the claim objections. Accordingly, the respective objections to the claims have been withdrawn.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
With regard to claim 1:

Step 2A, Prong One: The claim recites the following limitations, which are drawn towards an abstract idea: A teaching device comprising processing circuitry configured to: estimate a first estimation result from the first input data, search for a second taught estimation result taught for second input data, the second taught estimation result being associated with at least one of the second input data similar to the first input data, and a second estimation result similar to the first estimation result and estimated from the second input data, and select one selection candidate among a plurality of selection candidates including the first estimation result and the second taught estimation result, as a correction target estimation result to be used for correction of the first estimation result (recites mental process steps of evaluating and selecting a better tag/label/identity for the detected/identified object/person based on remembering other related/similar tags/labels/identities, where the new estimation/guess on the identity is better than the initial guess, e.g., guessing a person's identity, then remembering that someone else you met at a different event looks similar, and, based on other observed characteristics, deciding the identity is not your initial guess).

As seen from above, the identified limitations recite concepts associated with an abstract idea, and thus the respective claim recites a judicial exception (see MPEP 2106.04(a)) and requires further analysis as discussed below.

Step 2A, Prong Two: The following limitations have been identified as being additional elements, as discussed below.
acquire first input data (recites insignificant extra-solution activity of receiving information, see MPEP 2106.05(g));

"…using a machine learning model" (recites "apply it" limitations of using the computer as a tool (machine learning model) to implement the abstract idea, see MPEP 2106.05(f));

the second taught estimation result being searched for from a correction example database (DB) in which the second input data, the second estimation result, and the second taught estimation result are associated with each other (recites a field-of-use limitation indicating the preferred meaning of data in a database that is used when performing the judicial exception);

wherein the second input data is input to the machine learning model earlier than the first input data (recites insignificant extra-solution activity of inputting/transmitting information, see MPEP 2106.05(g), and also field-of-use limitations indicating that particular data has already been processed by the computer system, see MPEP 2106.05(h)) and is already associated with the second estimation result and the second taught estimation result (recites field-of-use limitations describing the intended relationship between data that already exists, see MPEP 2106.05(h)); and

the second taught estimation result is an estimation result obtained by correcting the second estimation result to be a taught estimation result of a correct answer (recites field-of-use limitations describing the intended meaning of the data, see MPEP 2106.05(h)).

As seen from the above discussion, the identified limitations do not integrate the judicial exception into a practical application (see MPEP 2106.04(d)). This judicial exception is not integrated into a practical application because the additional elements appear to merely recite receiving data with descriptions of the intended meaning and relationship of the data, and utilizing a computer at a high level of generality as a tool to implement the abstract idea.
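For orientation, the claim-1 flow analyzed above (acquire input, estimate with a model, search a correction-example DB for similar prior inputs, select a correction candidate) can be sketched as follows. This is illustrative only: all names, the similarity measure, and the selection policy are hypothetical, since the claim itself does not specify an implementation.

```python
from dataclasses import dataclass

@dataclass
class CorrectionExample:
    input_data: tuple       # second input data (e.g., a feature vector)
    estimation: str         # second estimation result (model's original label)
    taught_estimation: str  # corrected ("taught") label supplied earlier

def similarity(a, b):
    # Hypothetical similarity: fraction of positionally matching features.
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def propose_correction(first_input, model, correction_db, threshold=0.5):
    # 1. Estimate a first estimation result using the ML model.
    first_estimation = model(first_input)
    # 2. Search the correction-example DB for similar prior inputs and
    #    collect their taught (corrected) labels as selection candidates.
    candidates = [first_estimation]
    for ex in correction_db:
        if similarity(first_input, ex.input_data) >= threshold:
            candidates.append(ex.taught_estimation)
    # 3. Select one candidate as the correction target (here, the last
    #    matching taught label, purely as an illustrative policy).
    return candidates[-1]

db = [CorrectionExample((1, 2, 3), "cat", "lynx")]
# A similar new input picks up the previously taught correction:
print(propose_correction((1, 2, 4), model=lambda x: "cat", correction_db=db))  # lynx
```

The examiner's §101 characterization maps each of these steps to a mental process or extra-solution activity; the sketch simply makes the claimed data flow concrete.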
Step 2B: Below is the analysis of the claims:

acquire first input data (recites well-understood, routine, and conventional activity of receiving information, see MPEP 2106.05(d));

"…using a machine learning model" (recites "apply it" limitations of using the computer as a tool (machine learning model) to implement the abstract idea, see MPEP 2106.05(f));

the second taught estimation result being searched for from a correction example database (DB) in which the second input data, the second estimation result, and the second taught estimation result are associated with each other (recites a field-of-use limitation indicating the preferred meaning of data in a database that is used when performing the judicial exception, see MPEP 2106.05(h));

wherein the second input data is input to the machine learning model earlier than the first input data (recites well-understood, routine, and conventional activity of inputting/transmitting information, see MPEP 2106.05(d), and also field-of-use limitations indicating that particular data has already been processed by the computer system, see MPEP 2106.05(h)) and is already associated with the second estimation result and the second taught estimation result (recites field-of-use limitations describing the intended relationship between data that already exists, see MPEP 2106.05(h)); and

the second taught estimation result is an estimation result obtained by correcting the second estimation result to be a taught estimation result of a correct answer (recites field-of-use limitations describing the intended meaning of the data, see MPEP 2106.05(h)).

As seen from above, the respective claim elements, taken individually, do not amount to significantly more than the judicial exception.
When taken as a whole (in combination), the claim also does not amount to significantly more than the abstract idea because the additional elements appear to merely recite receiving data with descriptions of the intended meaning and relationship of the data, and utilizing a computer at a high level of generality as a tool to implement the abstract idea.

With regard to claim 2, this claim recites output a plurality of selection candidates to an output unit (recites insignificant extra-solution activity of transmitting information, which amounts to well-understood, routine, and conventional activity of transmitting information, see MPEP 2106.05(d)), and select, as the correction target estimation result, the one selection candidate among the plurality of output selection candidates, a selection input by a user being received for the one selection candidate (recites insignificant extra-solution activity of receiving information, which amounts to well-understood, routine, and conventional activity of receiving information, see MPEP 2106.05(d)).

With regard to claim 3, this claim recites wherein the output unit is a display (recites a display unit as generic hardware, such as using a computer to implement the abstract idea, see MPEP 2106.05(f)).

With regard to claim 4, this claim recites select the one selection candidate among the plurality of selection candidates, as the correction target estimation result, the one selection candidate satisfying a predetermined condition (recites mental process steps of evaluating and judging candidates based on criteria to make a decision).
With regard to claim 5, this claim recites receive a correction input by a user for the correction target estimation result (recites insignificant extra-solution activity of receiving information, which amounts to well-understood, routine, and conventional activity of receiving information, see MPEP 2106.05(d)), and generate a first taught estimation result taught for the first input data, the first taught estimation result being obtained by reflecting the received correction input in the correction target estimation result (recites insignificant extra-solution activity of storing information in memory, which amounts to well-understood, routine, and conventional activity of storing information in memory, see MPEP 2106.05(d)).

With regard to claim 6, this claim recites generate, based on at least one of the first estimation result and the second taught estimation result, a candidate estimation result different from the first estimation result and the second taught estimation result, and select, as the correction target estimation result, the one selection candidate among the plurality of selection candidates including the first estimation result, the second taught estimation result, and the candidate estimation result (recites mental process steps of evaluating/remembering other information related and similar to the input/observed data/person and considering whether those other identities represent the observed object).
With regard to claim 7, this claim recites generate one or more candidate estimation results including one or more local regions according to similarity between each of first local regions, which are the one or more local regions included in the first estimation result for the first input data, and each of second local regions, which are one or more local regions included in the second taught estimation result (recites mental process steps of evaluating features/regions of different objects/items/people/faces to make determinations as to how similar the object being evaluated is to other known objects/people/faces).

With regard to claim 8, this claim recites wherein the first input data and the second input data are image data, computer aided design (CAD) data, or sound data (recites field-of-use limitations describing particular data formats being used, see MPEP 2106.05(h)).

With regard to claim 9, this claim recites convert the CAD data or the sound data into image data and use it as the first input data and the second input data (recites technological environment limitations by reciting that a first data format, described at a high level of generality, is converted to a different computer data format, also described at a high level of generality, where such limitations relate to using the computer to save files in different formats).

With regard to claim 10, this claim recites convert the first taught estimation result into element information corresponding to the first taught estimation result included in the first input data used to derive the first taught estimation result (recites insignificant extra-solution activity of saving information, such as the tag/label, into memory, such as the metadata for an image file, which amounts to well-understood, routine, and conventional activity of saving information to memory, see MPEP 2106.05(d)).
With regard to claims 11 and 12, these claims are substantially similar to claim 1 and are rejected for similar reasons as discussed above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-8 and 10-12 are rejected under 35 U.S.C.
103 as being unpatentable over Bengio et al. [US 2015/0178596 A1] in view of Ioffe et al. [US 9,025,811], Desai et al. [US 2018/0114334 A1], and Zeiler [US 10,296,826].

With regard to claim 1, Bengio teaches a teaching device comprising processing circuitry configured to: acquire first input data (see paragraphs [0039] and [0019]; the system is able to acquire input data such as an image); and estimate a first estimation result from the first input data, using a machine learning model (see paragraphs [0019] and [0024]; the system can estimate/predict an estimation result/label/identifier for an object in the first input data).

Bengio does not appear to explicitly teach: search for a second taught estimation result taught for second input data, the second taught estimation result being associated with at least one of the second input data similar to the first input data, and a second estimation result similar to the first estimation result and estimated from the second input data, using the machine learning model, and the second taught estimation result being searched for from a correction example database (DB) in which the second input data, the second estimation result, and the second taught estimation result are associated with each other; and select one selection candidate among a plurality of selection candidates including the first estimation result and the second taught estimation result, as a correction target estimation result to be used for correction of the first estimation result, wherein the second input data is input to the machine learning model earlier than the first input data and is already associated with the second estimation result and the second taught estimation result, and the second taught estimation result is an estimation result obtained by correcting the second estimation result to be a taught estimation result of a correct answer.
Ioffe teaches search for a second taught estimation result taught for second input data, the second taught estimation result being associated with at least one of the second input data similar to the first input data, and a second estimation result similar to the first estimation result and estimated from the second input data, using the machine learning model (see col 13, line 54 through col 14, line 1; col 4, lines 56-65; and col 10, lines 27-32; the system is able to search for other labels or second estimation results for other image data that is similar to the first image data, where the system utilizes machine learning algorithms to train and determine those other labels).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the image tagging/labeling process of Bengio by providing means to find representative images of various classification categories similar to the first input image, as taught by Ioffe, in order to find similar images and utilize determinations of various tags/labels of those images as additional tags/labels, thereby increasing the efficiency of the system's labeling process: the system utilizes its own classification model to get an initial classification and then uses that information to search for other related images, so that the knowledge of the labels/tags for those similar images can be used to provide candidate tags/labels for the object's estimated/predicted tag.

Bengio in view of Ioffe teach the second taught estimation result being searched for from a correction example database (DB) in which the second input data, and the second taught estimation result is an estimation result obtained by correcting the second estimation result to be a taught estimation result of a correct answer (see Ioffe, col 13, line 54 through col 14, line 1; col 4, lines 56-65; and col 10, lines 27-32; the second taught estimation result, a corrected label, is based on user feedback).
Bengio in view of Ioffe do not appear to explicitly teach: the second taught estimation result being searched for from a correction example database (DB) in which the second input data, the second estimation result, and the second taught estimation result are associated with each other; and select one selection candidate among a plurality of selection candidates including the first estimation result and the second taught estimation result, as a correction target estimation result to be used for correction of the first estimation result, wherein the second input data is input to the machine learning model earlier than the first input data and is already associated with the second estimation result and the second taught estimation result.

Desai teaches select one selection candidate among a plurality of selection candidates.

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the image tagging/labeling process of Bengio in view of Ioffe by utilizing means for the model to offer options of tags/labels that a user can interact with, and to be able to record and use the user's selection as new training data, as taught by Desai, in order to provide means to allow users to leverage their knowledge to correct or properly identify objects of the image, in conjunction with using that observed/recorded proper label as new training data to fine-tune and adapt the respective object detection model accordingly, so that future users can have more accurate labels determined by the computer automatically.
Bengio in view of Ioffe and Desai teach select one selection candidate among a plurality of selection candidates including the first estimation result and the second taught estimation result, as a correction target estimation result to be used for correction of the first estimation result (see Ioffe, col 10, lines 27-32; see Desai, paragraphs [0072] and [0073]; see Bengio, paragraph [0019]; the system provides means to present options to the user to be able to select the appropriate/correct label, with means to utilize the selected user input as training data to update/correct/fine-tune the model).

Bengio in view of Ioffe and Desai teach that negative examples can be used by the system (see Ioffe, col 9, line 65 through col 10, line 7) but do not appear to explicitly teach: the second taught estimation result being searched for from a correction example database (DB) in which the second input data, the second estimation result, and the second taught estimation result are associated with each other; wherein the second input data is input to the machine learning model earlier than the first input data and is already associated with the second estimation result and the second taught estimation result.
Zeiler teaches wherein the second input data.

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the image tagging/labeling process of Bengio in view of Ioffe and Desai by being able to utilize negative examples as part of the training/updating of the classification process, as taught by Zeiler, in order to provide the system means not only to identify and know what concepts should be associated with a particular label/classification, but also the concepts/images/examples that should not be associated with that particular label/classification, thus helping the system reduce false positive classifications more quickly by ensuring that the system is tuned to recognize when an input is similar to the positive or taught classifications/results but not similar to the negative/incorrect results/classifications.

Bengio in view of Ioffe, Desai, and Zeiler teach the second taught estimation result being searched for from a correction example database (DB) in which the second input data, the second estimation result, and the second taught estimation result are associated with each other; wherein the second input data is input to the machine learning model earlier than the first input data and is already associated with the second estimation result and the second taught estimation result (see Zeiler, col 7, lines 6-52; see Ioffe, col 13, line 54 through col 14, line 1; col 4, lines 56-65; and col 10, lines 27-32; the respective taught result that is used by the system for respective second input data can be based on feedback/training of the system, where that information is used at a later time with respect to new input data, including when providing corrections to respective estimation predictions/results/labels).
With regard to claim 2, Bengio in view of Ioffe, Desai, and Zeiler teach output a plurality of selection candidates to an output unit, and select, as the correction target estimation result, the one selection candidate among the plurality of output selection candidates, a selection input by a user being received for the one selection candidate (see Ioffe, col 10, lines 27-32; see Desai, paragraphs [0072] and [0073]; see Bengio, paragraph [0019]; the system provides means to present options to the user to be able to select the appropriate/correct label, with means to utilize the selected user input as training data to update/correct/fine-tune the model).

With regard to claim 3, Bengio in view of Ioffe, Desai, and Zeiler teach wherein the output unit is a display (see Desai, paragraphs [0072] and [0073]; the system can output information to a display unit and allow user interactions with the computer).

With regard to claim 4, Bengio in view of Ioffe, Desai, and Zeiler teach select the one selection candidate among the plurality of selection candidates, as the correction target estimation result, the one selection candidate satisfying a predetermined condition (see Ioffe, col 13, lines 61-64; see Bengio, paragraph [0026]; the system can utilize various conditions/criteria for selecting one of the selection candidates, including highest score/similarity).
With regard to claim 5, Bengio in view of Ioffe, Desai, and Zeiler teach receive a correction input by a user for the correction target estimation result, and generate a first taught estimation result taught for the first input data, the first taught estimation result being obtained by reflecting the received correction input in the correction target estimation result (see Bengio, paragraphs [0019] and [0026]; see Ioffe, col 10, lines 27-32; see Desai, paragraphs [0072] and [0073]; the system provides means to present options to the user to be able to select the appropriate/correct label, with means to utilize the selected user input as training data to update/correct/fine-tune the model).

With regard to claim 6, Bengio in view of Ioffe, Desai, and Zeiler teach generate, based on at least one of the first estimation result and the second taught estimation result, a candidate estimation result different from the first estimation result and the second taught estimation result; and select, as the correction target estimation result, the one selection candidate among the plurality of selection candidates including the first estimation result, the second taught estimation result, and the candidate estimation result (see Ioffe, col 13, lines 54-64; col 10, lines 27-32; see Desai, paragraphs [0072] and [0073]; see Bengio, paragraphs [0025]-[0028] and [0019]; the system can utilize a plurality of similar classifications to be considered matching, not just the single best, where it can be some top number of results, and is able to present them and allow users to select the respective candidate).
With regard to claim 7, Bengio in view of Ioffe, Desai, and Zeiler teach generate one or more candidate estimation results including one or more local regions according to similarity between each of first local regions, which are the one or more local regions included in the first estimation result for the first input data, and each of second local regions, which are one or more local regions included in the second taught estimation result (see Ioffe, col 13, lines 54-64; col 10, lines 27-32; see Desai, paragraphs [0072] and [0073]; see Bengio, paragraphs [0024]-[0028] and [0019]; the system can utilize information from a first local region (first object) with similarity to second local region information in the image (second object) as means to determine the likelihood that both objects would be in the same image together).

With regard to claim 8, Bengio in view of Ioffe and Desai teach wherein the first input data and the second input data are image data, computer aided design (CAD) data, or sound data (see Bengio, paragraphs [0039] and [0019]; the system is able to acquire input data such as an image).

With regard to claim 10, Bengio in view of Ioffe, Desai, and Zeiler teach convert the first taught estimation result into element information corresponding to the first taught estimation result included in the first input data used to derive the first taught estimation result (see Ioffe, col 4, lines 27-35; col 9, lines 27-35; col 3, line 65 through col 4, line 11; the system can utilize element information of the respective classification definition/label for comparisons with respective input in order to estimate/recognize objects in the images).

With regard to claims 11 and 12, these claims are substantially similar to claim 1 and are rejected for similar reasons as discussed above.

Claim 9 is rejected under 35 U.S.C.
103 as being unpatentable over Bengio et al. [US 2015/0178596 A1] in view of Ioffe et al. [US 9,025,811], Desai et al. [US 2018/0114334 A1], and Zeiler [US 10,296,826], in further view of Yoda [US 5,384,785].

With regard to claim 9, Bengio in view of Ioffe, Desai, and Zeiler teach all the claim limitations of claims 1 and 8 as discussed above. Bengio in view of Ioffe, Desai, and Zeiler teach image data but do not appear to explicitly teach wherein the acquisition unit is configured to convert the CAD data or the sound data into image data and use it as the first input data and the second input data.

Yoda teaches wherein the acquisition unit is configured to convert the CAD data or the sound data into image data (see col 8, lines 40-41; the system can convert CAD data to image data).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the image tagging/labeling process of Bengio in view of Ioffe, Desai, and Zeiler by converting CAD data/models into an image format, as taught by Yoda, in order to allow the system to analyze many different types of visual information, including CAD data/models, by converting those visual data items into a standard format that the respective object recognition models can handle, thus allowing the respective models to handle other types of data by having that data converted into an appropriate input format.

Bengio in view of Ioffe, Desai, and Zeiler, in further view of Yoda, teach use it as the first input data and the second input data (see Yoda, col 8, lines 40-41; see Bengio, paragraphs [0039] and [0019]; see Ioffe, col 13, line 54 through col 14, line 1; col 4, lines 56-65; and col 10, lines 27-32; the system can utilize the input information, where the input information can be converted from one form into another).

Response to Arguments

Applicant's arguments (see second paragraph on page 8) have been fully considered but are not persuasive.
Applicant provided an amended title; however, that title is also not descriptive. In essence, "teaching devices to teach teaching data" does not convey much information regarding the invention, and it is stated so broadly that the title, upon initial glance, could very well relate to student education, such as classroom learning for children or even continuing education for teachers. Due to the broad and general description, the title remains objected to.

Applicant's arguments (see third paragraph on page 8) with respect to the claim objections have been fully considered and are persuasive. The objection to claim 8 has been withdrawn. The applicant amended the claims to address the claim objections. Accordingly, the respective objections to the claims have been withdrawn.

Applicant's arguments (see fourth and fifth paragraphs on page 8) with respect to the 35 USC 101 rejections regarding signals and software per se have been fully considered and are persuasive. The 35 USC 101 rejections of the respective claims with respect to software per se and signals per se have been withdrawn. The applicant amended the claims to address the respective claim rejections. Accordingly, the respective rejections of the claims have been withdrawn.

Applicant's arguments (see the last paragraph on page 8 through the top of page 10) have been fully considered but are not persuasive. The applicant argues that the claims are directed to a specific improvement related to a machine-learning teaching system that would improve the efficiency of the teaching process. The Examiner respectfully disagrees.
With regard to applicant's arguments regarding an improvement to the functioning of a computer or to any other technology or technical field, the Examiner notes that, per MPEP 2106.05(a), "[a]n important consideration in determining whether a claim improves technology is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome. McRO, 837 F.3d at 1314-15, 120 USPQ2d at 1102-03; DDR Holdings, 773 F.3d at 1259, 113 USPQ2d at 1107." (emphasis added). Additionally, it is important to note that "the judicial exception alone cannot provide the improvement" and that "the claim reflects the asserted improvement."

With respect to the broadest reasonable interpretation of the claims as currently recited, the respective claim limitations do find similar input data and a respective corrected label/estimation result, and can select a candidate as the correction; however, that is where the claim limitations end. No other actions occur after selecting one of the candidates; thus, the claims and respective arguments appear to recite merely the idea of a solution. Additionally, as noted in MPEP 2106.05(a), the judicial exception alone cannot provide the improvement, i.e., the selection of results. Therefore, applicant's arguments are not persuasive.

Applicant's arguments (see the first whole paragraph on page 10 through the last paragraph on page 12) with respect to the rejection(s) of claim(s) under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Zeiler. The applicant amended the claims to incorporate new limitations that required further search and consideration.
As illustrated in the 35 USC 103 rejections above, a new reference was found that teaches feedback for correcting data and how that data can be associated in a manner that helps train or fine-tune the machine-learning system; accordingly, when combined, the references teach or fairly suggest the claim limitations as recited.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARC S SOMERS, whose telephone number is (571) 270-3567. The examiner can normally be reached M-F 11-8 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ann Lo, can be reached at (571) 272-9767.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARC S SOMERS/
Primary Examiner, Art Unit 2159
3/11/2026

Prosecution Timeline

Feb 23, 2023
Application Filed
Nov 10, 2025
Non-Final Rejection — §101, §103
Feb 10, 2026
Response Filed
Mar 11, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579099
CONTROL LEVEL TAGGING METHOD AND SYSTEM
2y 5m to grant Granted Mar 17, 2026
Patent 12561288
METHOD AND APPARATUS TO VERIFY FILE METADATA IN A DEDUPLICATION FILESYSTEM
2y 5m to grant Granted Feb 24, 2026
Patent 12554681
SYSTEM AND METHOD OF UNDOING DATA BASED ON DATA FLOW MANAGEMENT
2y 5m to grant Granted Feb 17, 2026
Patent 12541502
METHODS AND APPARATUSES FOR IMPROVING PROCESSING EFFICIENCY IN A DISTRIBUTED SYSTEM
2y 5m to grant Granted Feb 03, 2026
Patent 12530365
SYSTEMS AND METHODS FOR A MACHINE LEARNING FRAMEWORK
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
65%
Grant Probability
99%
With Interview (+34.6%)
4y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 563 resolved cases by this examiner. Grant probability derived from career allow rate.
