Prosecution Insights
Last updated: April 18, 2026
Application No. 18/272,328

Recording Medium, Learning Model Generation Method, and Support Apparatus

Final Rejection — §101, §103
Filed
Jul 13, 2023
Examiner
BURKE, TIONNA M
Art Unit
2178
Tech Center
2100 — Computer Architecture & Software
Assignee
Anaut Inc.
OA Round
2 (Final)
54% Grant Probability (Moderate)
3-4 OA Rounds
4y 9m To Grant
73% With Interview

Examiner Intelligence

Grants 54% of resolved cases
54% Career Allow Rate (233 granted / 431 resolved; -0.9% vs TC avg)
Strong +19% interview lift
+19.3% Interview Lift (resolved cases with interview)
Typical timeline: 4y 9m Avg Prosecution (46 currently pending)
Career history: 477 Total Applications across all art units
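The headline numbers in these cards are simple ratios of the counts shown. As a rough consistency check (a sketch only; the dashboard's exact methodology is not stated, and adding the interview lift directly to the career allow rate is an assumption), they can be reproduced as:

```python
# Consistency check for the examiner-intelligence cards above.
# NOTE: the additive interview-lift combination is an assumption,
# not the dashboard's documented methodology.

granted = 233           # granted cases (from the card above)
resolved = 431          # resolved cases
interview_lift = 0.193  # reported grant-rate lift with an interview

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")

with_interview = allow_rate + interview_lift
print(f"Grant probability with interview: {with_interview:.0%}")
```

Both printed values round to the figures shown in the cards (54% and 73%).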

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§102: 18.1% (-21.9% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 431 resolved cases
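Each delta above is simply the examiner's per-statute rate minus the Tech Center average estimate. As a sketch (assuming, as all four rows imply, a flat 40.0% TC average for every statute), the comparison can be reproduced as:

```python
# Reproduce the statute-specific deltas above.
# ASSUMPTION: a flat 40.0% Tech Center average, implied by every row
# (rate minus delta equals 40.0% in all four cases).

TC_AVG = 0.400

examiner_rates = {"§101": 0.110, "§103": 0.601, "§102": 0.181, "§112": 0.075}

for statute, rate in examiner_rates.items():
    delta = rate - TC_AVG
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```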

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Applicant’s Response

In Applicant’s Response dated 12/30/25, the Applicant amended Claims 17, 18, 25, and 28-32 and argued claims previously rejected in the Office Action dated 10/2/25. Claims 17-32 are being examined. In light of the Applicant’s amendments and remarks, the 35 USC 102 rejections have been withdrawn.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 17-32 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 17, 30, and 31 recite “acquiring a real-time image of an operative field image obtained by imaging the operative field of scopic surgery”, “recognizing, in real-time, a target tissue portion included in the acquired real-time image so as to be distinguished from a blood vessel tissue portion appearing on a surface of the target tissue portion by using a learning model trained to output information regarding a target tissue when the operative field image is input”, and “displaying an image of the target tissue portion recognized in real time by superimposing the image of the target tissue portion on the real-time image of the operative field”.
The broadest reasonable interpretation of the recited limitation “recognizing, in real time, a target tissue portion included in the acquired real-time image so as to be distinguished from a blood vessel tissue portion appearing on a surface of the target tissue portion by using a learning model trained to output information regarding a target tissue when the operative field image is input” falls within the mental process grouping of abstract ideas because it covers concepts performed in the human mind, including observation, evaluation, judgment, and opinion. The “recognizing” encompasses mental observations or evaluations that can practically be performed in the human mind. For example, the claimed recognizing of target tissue encompasses observing data in a data set and performing an evaluation by comparing tissues by inputting into a trained learning model.

The limitation “acquiring a real-time image of an operative field image obtained by imaging the operative field of scopic surgery” is mere data gathering recited at a high level of generality, and thus is an insignificant extra-solution activity. See MPEP 2106.05(g) (“whether the limitation is significant”). In addition, all uses of the recited judicial exception require such data gathering, and, as such, this limitation does not impose any meaningful limits on the claim. This limitation amounts to necessary data gathering.

The limitations are recited as being performed by a computer. The computer is recited at a high level of generality. In the first limitation, “acquiring a real-time image of an operative field image obtained by imaging the operative field of scopic surgery”, the computer is used as a tool to perform the generic computer function of receiving data. See MPEP 2106.05(f).
In the other limitation, “recognizing, in real time, a target tissue portion included in the acquired real-time image so as to be distinguished from a blood vessel tissue portion appearing on a surface of the target tissue portion by using a learning model trained to output information regarding a target tissue when the operative field image is input”, the computer is used to perform an abstract idea, such that it amounts to no more than mere instructions to apply the exception using a generic computer. The limitation reciting “using a learning model” provides nothing more than mere instructions to implement an abstract idea on a generic computer. The additional elements are recited at a high level of generality. These elements amount to receiving data over a network and are well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II.

As discussed above, the recitation of a computer to perform the limitation “recognizing, in real time, a target tissue portion included in the acquired real-time image so as to be distinguished from a blood vessel tissue portion appearing on a surface of the target tissue portion by using a learning model trained to output information regarding a target tissue when the operative field image is input” amounts to no more than mere instructions to apply the exception using a generic computer component. Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 17-21, 24, and 30-32 are rejected under 35 U.S.C. 103 as being unpatentable over Tseng et al., United States Patent Publication 2021/0065451 (hereinafter “Tseng”), in view of Hansen et al., United States Patent Publication 2022/0087643 (hereinafter “Hansen”).

Claim 17: Tseng discloses: A non-transitory computer-readable recording medium storing a computer program causing a computer to execute processing comprising: acquiring an operative field image by imaging the operative field of scopic surgery (see paragraphs [0055]-[0057]). Tseng teaches acquiring preoperative images for a patient; recognizing a target tissue portion included in the acquired operative field image so as to be distinguished from a blood vessel tissue portion appearing on a surface of the target tissue portion by using a learning model trained to output information regarding a target tissue when the operative field image is input (see paragraphs [0010]-[0011], [0048]). Tseng teaches recognizing nerves and blood vessels by using an intelligent algorithm when the preoperative image is inputted into the algorithm. Tseng fails to expressly disclose a real-time image of the operative field and superimposing on the real-time image. Hansen discloses: acquiring a real-time image of an operative field by imaging the operative field of scopic surgery (see paragraphs [0027] and [0075]).
Hansen teaches acquiring real-time imaging during surgery; recognizing, in real time, a target tissue portion included in the acquired real-time image so as to be distinguished from a blood vessel tissue portion appearing on a surface of the target tissue portion by using a learning model trained to output information regarding a target tissue when the real-time image is input (see paragraphs [0116] and [0175]). Hansen teaches segmenting the image using a model to distinguish the ureter tissue from other tissue; and displaying an image of the target tissue portion recognized in real time by superimposing the image of the target tissue portion on the real-time image of the operative field (see paragraphs [0007], [0108], [0109], [0171], and [0172]). Hansen teaches displaying the real-time image and anatomical positions of the image. Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Tseng to include recognizing other tissue, such as ureter tissue, by using a model for the purpose of providing a real-time imaging tool already used in minimally invasive surgery, as taught by Hansen.

Claim 18: Tseng discloses: displaying the target tissue portion and the blood vessel tissue portion so as to be distinguishable from each other (see paragraphs [0049] and [0050]). Tseng teaches segmenting the nerve and the blood vessels and displaying the model. Tseng fails to expressly disclose a real-time image of the operative field. Hansen discloses: displaying the target tissue portion and the blood vessel tissue portion so as to be distinguishable from each other on the real-time image (see paragraphs [0116] and [0175]).
Hansen teaches segmenting the image using a model to distinguish the ureter tissue from other tissue. Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Tseng to include recognizing other tissue, such as ureter tissue, by using a model for the purpose of providing a real-time imaging tool for use in minimally invasive surgery, as taught by Hansen.

Claims 19, 20: Tseng discloses: periodically switching display and non-display of the target/blood vessel tissue portion (see figure 10 and paragraph [0048]). Tseng teaches periodically switching between tissue displays. The segmentations are done independently and can be displayed independently.

Claim 21: Tseng discloses: wherein the target tissue is a nerve tissue (see paragraph [0058]). Tseng teaches the target tissue can be nerve tissue, and the computer is caused to execute processing of recognizing the nerve tissue so as to be distinguished from a blood vessel tissue accompanying the nerve tissue by using the learning model (see paragraph [0058]). Tseng teaches distinguishing the nerve tissue and the blood vessel tissue using the algorithms and segmentations.

Claim 24: Tseng fails to disclose the target tissue being ureter tissue. Hansen discloses: wherein the target tissue is a ureter tissue, and the computer is caused to execute processing of recognizing the ureter tissue so as to be distinguished from a blood vessel tissue accompanying the ureter tissue by using the learning model (see paragraphs [0116] and [0175]). Hansen teaches segmenting the image using a model to distinguish the ureter tissue from other tissue.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Tseng to include recognizing other tissue, such as ureter tissue, by using a model for the purpose of providing a real-time imaging tool already used in minimally invasive surgery, as taught by Hansen.

Claim 30: Although Claim 30 is a method claim, it is interpreted and rejected for the same reasons as the medium of Claim 17.

Claims 31, 32: Although Claims 31 and 32 are apparatus claims, they are interpreted and rejected for the same reasons as the medium claims of Claims 17 and 24.

Claims 22 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Tseng and Hansen, in view of Masutani et al., United States Patent Publication 2005/0101857 (hereinafter “Masutani”).

Claim 22: Tseng and Hansen fail to disclose the direction of the nerve tissues. Masutani discloses: wherein the target tissue is a nerve tissue running in a first direction, and the computer is caused to execute processing of recognizing the nerve tissue running in the first direction so as to be distinguished from a nerve tissue running in a second direction different from the first direction by using the learning model (see paragraph [0041]). Masutani teaches determining the directions of the nerves and distinguishing each nerve from another. Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Tseng and Hansen to include determining the directions of the nerves and distinguishing nerves running in different directions for the purpose of efficiently tracking nerves based on images, as taught by Masutani.

Claim 23: Tseng and Hansen fail to disclose the direction of the nerve tissues.
Masutani discloses: wherein the target tissue is a nerve tissue, and the computer is caused to execute processing of recognizing the nerve tissue so as to be distinguished from a loose connective tissue running in a direction crossing the nerve tissue by using the learning model (see paragraph [0041]). Masutani teaches determining the directions of the nerves and distinguishing each nerve from another. Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Tseng and Hansen to include determining the directions of the nerves and distinguishing nerves running in crossing directions for the purpose of efficiently tracking nerves based on images, as taught by Masutani.

Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Tseng and Hansen, in view of Maynard et al., United States Patent Publication 2007/0265532 (hereinafter “Maynard”).

Claim 25: Hansen discloses: recognizing a target tissue in the real-time image by using the learning model (see paragraphs [0116] and [0175]). Hansen teaches segmenting the image using a model to distinguish the ureter tissue from other tissue. Tseng and Hansen fail to disclose determining a tissue state based on a model. Maynard discloses: recognizing a target tissue in a tense state included in the operative field image by using the learning model (see paragraphs [0002] and [0012]). Maynard teaches determining a tissue state of the tissue based on a model. Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Tseng and Hansen to include determining a tissue state using a model for the purpose of providing an accurate state of a disease, as taught by Maynard.

Claims 26-29 are rejected under 35 U.S.C.
103 as being unpatentable over Tseng and Hansen, in view of Lennartz et al., United States Patent Publication 2021/0228287 (hereinafter “Lennartz”).

Claim 26: Tseng and Hansen fail to disclose calculating confidence in the recognition of the tissue. Lennartz discloses: wherein the computer is caused to execute processing comprising: calculating a confidence of a recognition result of the learning model; and displaying the target tissue portion in a display mode according to the calculated confidence (see paragraphs [0063] and [0064]). Lennartz teaches the tissue is correctly identified and treated by calculating parameters and determining whether the identification is correct based on the calculations. Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Tseng and Hansen to include displaying tissue and treatment based on calculated parameters for the purpose of efficiently identifying hidden tissue for accurate analysis of tissues, as taught by Lennartz.

Claim 27: Tseng and Hansen fail to disclose determining and displaying hidden tissue. Lennartz discloses: wherein the computer is caused to execute processing comprising: displaying an estimated position of a target tissue portion hidden behind another object by referring to a recognition result of the learning model (see paragraphs [0006] and [0058]). Lennartz teaches determining the hidden tissues and displaying the hidden tissue. Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Tseng and Hansen to include displaying hidden tissue to the user for the purpose of efficiently identifying hidden tissue for accurate analysis of tissues, as taught by Lennartz.

Claim 28: Tseng and Hansen fail to disclose determining and displaying hidden tissue.
Lennartz discloses: wherein the computer is caused to execute processing comprising: estimating a running pattern of a target tissue by using the learning model and displaying an estimated position of a target tissue portion that does not appear in the real-time image based on the estimated running pattern of the target tissue (see paragraphs [0006], [0045], and [0056]-[0058]). Lennartz teaches determining a pattern based on light sensing or blood circulation to determine the identification of tissue and its location. Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Tseng and Hansen to include determining a pattern to identify tissue for the purpose of efficiently identifying hidden tissue for accurate analysis of tissues, as taught by Lennartz.

Claim 29: Tseng discloses: recognizing the target tissue portion from the acquired real-time image of the operative field, while excluding surface blood vessels appearing on the surface of the target tissue (see paragraphs [0009], [0047], [0048]). Tseng teaches recognizing the target tissue from the image of the operative field before the blood vessels are added to the image. Tseng and Hansen fail to disclose determining and displaying hidden tissue. Lennartz discloses: recognizing, in real time, the target tissue portion from the acquired real-time image of the operative field, while excluding surface blood vessels appearing on the surface of the target tissue (see paragraphs [0043]-[0045]). Lennartz teaches recognizing the target tissue in real time before it is combined with the blood vessel image.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Tseng and Hansen to include recognizing and displaying the targeted tissue for the purpose of efficiently identifying hidden tissue for accurate analysis of tissues, as taught by Lennartz.

Response to Arguments

Applicant's arguments filed 12/30/25 have been fully considered but they are not persuasive.

Rejections under 35 USC 101

Applicant argues: The Office Action rejects claims 1-16 under § 101 as (1) being directed to non-statutory subject matter and (2) being directed to a judicial exception. Applicant respectfully traverses the rejections.

The Examiner withdrew the 101 rejection with respect to the non-statutory subject matter.

Applicant argues: Assuming arguendo that the "recognizing a tissue portion" feature qualifies as an abstract idea, Applicant submits that claim 17 recites a practical application of this feature. Specifically, the amended feature applies the information gleaned from the "recognizing..." feature in a meaningful way, such that the resulting image provides an actionable and useful display. This application is accomplished by "superimposing the image of the target tissue portion on the real-time image of the operative field." Additionally, Applicant submits that this amended feature also provides an improvement for scopic surgeries in that this actionable and useful display allows for better accuracy during a surgery. This accuracy is especially impactful in instances of inexperience or fatigue, where a surgeon may be more susceptible to human error. For these reasons, Applicant submits that claim 17 provides a practical application of any alleged judicial exception, and thus is patentable under Step 2A, Prong Two of the Alice/Mayo guidance.

The Examiner disagrees.
The argued limitations are grouped as a mental process because the claims do not recite claim language describing how the functions are performed. The “recognizing” step uses a learning model to generally apply the abstract idea without placing any limits on how the learning model functions. Rather, these limitations only recite the outcome of “recognizing” and do not include any details about how the “recognizing” is accomplished. See MPEP 2106.05(f). The “displaying” step includes superimposing an image onto a result image. This is a data-gathering step and merely recites how the results are displayed. The “displaying” step does not include an inventive concept. Therefore, the 35 USC 101 rejections are maintained.

Rejections under 35 USC 102

Applicant argues: Tseng does not describe "acquiring a real-time image of an operative field..." of claim 17. Additionally, as the overlaid image is pre-constructed, Tseng does not describe any process of "recognizing, in real time, a target tissue portion..." of claim 1. Furthermore, as there is no acquired image of the operative field, Tseng cannot "display an image by superimposing the image of the target tissue portion on the real-time image of the operative field." As Tseng does not describe all features of claim 17, Applicant thus submits that claim 17 is novel over Tseng.

The Examiner agrees. The Examiner combines Tseng and Hansen to teach the amended limitations. Hansen teaches acquiring real-time images of the target tissue. Thus, Tseng and Hansen disclose the limitations of the claims.

Applicant argues: The Office Action alleges that Hansen illustrates "wherein the target tissue is a ureter tissue, and the computer is caused to execute processing of recognizing the ureter tissue so as to be distinguished from a blood vessel tissue accompanying the ureter tissue by using the learning model."
For the sake of brevity, even if Hansen were to describe this feature (Applicant does not admit to any such teaching), Hansen does not cure the defects of Tseng as applied to claim 17. Thus, Applicant submits that claim 24 is patentable over Tseng in view of Hansen.

The Examiner disagrees. Hansen teaches acquiring real-time imaging during surgery (see paragraphs [0027] and [0075]): a real-time open surgery imaging system that captures the images during surgery. Tseng teaches recognizing nerves and blood vessels by using an intelligent algorithm when the real-time image is inputted into the algorithm (see paragraphs [0010]-[0011], [0048]). Thus, the combination of Tseng and Hansen teaches the limitations of the claims.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIONNA M BURKE, whose telephone number is (571) 270-7259. The examiner can normally be reached M-F 8a-4p.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TIONNA M BURKE/
Examiner, Art Unit 2178
3/30/26

/STEPHEN S HONG/
Supervisory Patent Examiner, Art Unit 2178

Prosecution Timeline

Jul 13, 2023
Application Filed
Sep 30, 2025
Non-Final Rejection — §101, §103
Dec 30, 2025
Response Filed
Mar 30, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596470
GESTURE-BASED MENULESS COMMAND INTERFACE
2y 5m to grant Granted Apr 07, 2026
Patent 12591731
SYSTEM AND METHOD FOR SELECTING RELEVANT CONTENT IN AN ENHANCED VIEW MODE
2y 5m to grant Granted Mar 31, 2026
Patent 12572698
INFRASTRUCTURE METHODS AND SYSTEMS FOR EXTENDING CUSTOMER RELATIONSHIP MANAGEMENT PLATFORM
2y 5m to grant Granted Mar 10, 2026
Patent 12564152
SYSTEM AND METHOD FOR MANAGEMENT OF SENSOR DATA BASED ON HIGH-VALUE DATA MODEL
2y 5m to grant Granted Mar 03, 2026
Patent 12547823
DYNAMICALLY AND SELECTIVELY UPDATED SPREADSHEETS BASED ON KNOWLEDGE MONITORING AND NATURAL LANGUAGE PROCESSING
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
54%
Grant Probability
73%
With Interview (+19.3%)
4y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 431 resolved cases by this examiner. Grant probability derived from career allow rate.
