Prosecution Insights
Last updated: April 19, 2026
Application No. 18/505,114

INFORMATION PROCESSING APPARATUS, DIAGNOSIS SUPPORT PROCESSING DEVICE, INFORMATION PROCESSING METHOD, DIAGNOSIS SUPPORT PROCESSING METHOD, INFORMATION PROCESSING PROGRAM, AND DIAGNOSIS SUPPORT PROCESSING PROGRAM

Final Rejection §103
Filed: Nov 09, 2023
Examiner: WINDSOR, COURTNEY J
Art Unit: 2661
Tech Center: 2600 (Communications)
Assignee: Fujifilm Corporation
OA Round: 2 (Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
Grant Probability with Interview: 96%

Examiner Intelligence

Career Allow Rate: 86% (217 granted / 252 resolved; +24.1% vs TC avg, above average)
Interview Lift: +9.4% (moderate) among resolved cases with interview
Avg Prosecution: 2y 7m (typical timeline)
Career History: 284 total applications across all art units; 32 currently pending
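The headline figures on this card are simple ratios, and the check below recomputes them from the raw counts shown above. This is a minimal sketch only; the variable names are illustrative, and it assumes the "+24.1% vs TC avg" delta is measured in percentage points (the card does not say):

```python
# Recompute the examiner's headline allow rate from the counts on the card.
granted = 217    # applications granted by this examiner
resolved = 252   # total resolved applications (granted + abandoned)

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # -> 86.1%, displayed as 86%

# Assuming the "+24.1% vs TC avg" delta is in percentage points,
# the implied Tech Center average allow rate would be about 62%.
tc_avg = allow_rate - 0.241
print(f"Implied TC average: {tc_avg:.1%}")  # -> 62.0%
```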

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§103: 51.1% (+11.1% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Deltas are relative to a Tech Center average estimate; based on career data from 252 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on January 28, 2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

Claims 1-2 and 5-10 have been amended, changing the scope and contents of the claims. Claims 3-4 have been cancelled. Applicant’s amendment filed December 9, 2025 overcomes the following objection/rejection(s) from the last Office Action of September 25, 2025: rejections of the claims under 35 USC § 101.

Response to Arguments

Applicant's arguments filed December 9, 2025 have been fully considered but they are not fully persuasive. Specifically, applicant argues, “Claims 1 and 6-10, as amended, recite extracting each part included in the medical image in a segment image and associating the part with a diagnosis support processing result. Applicant submits that the cited references do not disclose this feature (Remarks, 10).” The examiner respectfully disagrees. Specifically, using claim 1 as an exemplary claim, there is no limitation explicitly claiming that the segment image is a separate image, as the applicant appears to argue. Claim 1 recites, “perform diagnosis support processing on one medical image and derive a plurality of diagnosis support processing results by extracting each part included in the one medical image in a segment image.” Under the broadest reasonable interpretation of this claim, this is read as meaning that, within a medical image, the regions associated with the diagnosis are detected (i.e., segmented). There is no requirement that a segment image is generated for each individual diagnosis result. Rather, this is read as an image segmented according to the diagnosis regions.
Further in the Remarks, the applicant notes, “the present disclosure allows a diagnosis to be more easily determined, since the diagnosis support processing results are not superimposed on a single image (Remarks, 11).” However, this is not clear from the claim itself. If the applicant intends to claim that each result is processed and output into its own individual image, this should be clarified within the claim. As seen below, the examiner relies upon Kikuchi paragraph 0059, “The presentation unit 310 creates the diagnosis assistance result 641 from the diagnosis name that is the deduction result of the first deduction unit 304 and the image finding that is the deduction result of the second deduction units 305-i (i=1, . . . , Nf) corresponding to the image finding type selected by the selection unit 309;” and further Figure 6, in which multiple diagnosis deduction results are displayed in box 641. These pieces of prior art apply under the interpretation noted above, that the multiple results are in one image, since the claim is not limited such that each result has its own respective image. Further, Applicant argues that Kikuchi and Ujiie fail to disclose “presenting, as the selection information, information indicating an acquisition source for acquiring a diagnosis support processing result for each of the plurality of diagnosis support processing results, where the acquisition source is information indicating the stored location of a diagnosis support processing result, as recited in independent claims 1 and 6-10.” Applicant’s arguments with respect to claims 1 and 6-10 as presented above have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The examiner relies upon JP ‘738 to teach this feature as noted below.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-2 and 5-10 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2020/0381115 to Kikuchi (hereinafter Kikuchi), and further in view of U.S. Publication No. 2017/0372473 to Ujiie et al. (hereinafter Ujiie) and JP2015158738 (a machine translation from Google Patents provided; hereinafter JP ‘738).
Regarding independent claim 1, Kikuchi discloses An information processing apparatus (paragraph 0001, “The disclosure of the present specification relates to an information processing apparatus, an information processing system, an information processing method, and a program.”) comprising: at least one processor (paragraph 0024, “The CPU 203 is an example of a processor.”), the at least one processor being configured to: perform diagnosis support processing on one medical image (paragraph 0039, “The image obtaining unit 301 obtains the medical image data 410-i (i=1, . . . , Ni) from the medical image DB 102 via the LAN 103;” abstract, “An information processing method includes deducing a diagnosis name derived from a medical image on the basis of an image feature amount corresponding to a value indicating a feature of a medical image”) and derive a plurality of diagnosis support processing results (paragraph 0058, “The selection unit 309 selects a predetermined number of the image finding types having high matching degrees Psn (higher than others). When the image finding type is selected, the image finding values deduced by the second deduction units 305-i corresponding to the image finding types are selected;” paragraph 0059, “The presentation unit 310 creates the diagnosis assistance result 641 from the diagnosis name that is the deduction result of the first deduction unit 304 and the image finding that is the deduction result of the second deduction units 305-i (i=1, . . .
, Nf) corresponding to the image finding type selected by the selection unit 309;” see also Figure 6, multiple diagnosis deduction results are displayed in box 641) by extracting each part included in the one medical image in a segment image and associating the part with a diagnosis support processing result (paragraph 0059, “The presentation unit 310 creates the diagnosis assistance result 641 from the diagnosis name that is the deduction result of the first deduction unit 304 and the image finding that is the deduction result of the second deduction units 305-i (i=1, . . . , Nf) corresponding to the image finding type selected by the selection unit 309;” see also Figure 6, multiple diagnosis deduction results are displayed in box 641). Kikuchi fails to explicitly disclose as further recited, however Ujiie discloses present selection information for allowing a user to select at least one diagnosis support processing result among the plurality of derived diagnosis support processing results (Fig. 17, element 52 allows for selecting the results associated with the specific region/organ; paragraph 0143, “ In FIG. 17, an example list 52 is displayed on the monitor 32, in the case the target body part is selected by a user;” paragraph 0145, “In this way, the input/output controlling function 37 c accepts the operation to select the target body part on the list 52. 
Further the input/output controlling function 37 c outputs the accepted information to the generating function 37 d.”) together with the one medical image (Figure 18); and output a diagnosis support processing result associated with the part selected by the user according to the selection information (Figure 17, as the element in 52 is selected, the list in 40 is altered only showing the results relevant to the region/organ selected), wherein the at least one processor is configured to present, as the selection information, information indicating an acquisition source for acquiring a diagnosis support processing result for each of the plurality of diagnosis support processing results (Fig. 17, element 52 allows for selecting the results associated with the specific region/organ; paragraph 0143, “ In FIG. 17, an example list 52 is displayed on the monitor 32, in the case the target body part is selected by a user;” paragraph 0145, “In this way, the input/output controlling function 37 c accepts the operation to select the target body part on the list 52. Further the input/output controlling function 37 c outputs the accepted information to the generating function 37 d.”), and Kikuchi is directed toward “An information processing method includes deducing a diagnosis name derived from a medical image on the basis of an image feature amount (abstract).” Ujiie is directed toward “A medical imaging diagnosis apparatus (abstract).” As can be easily seen by one of ordinary skill in the art at the time of filing the claimed invention, both Kikuchi and Ujiie are directed toward similar methods of endeavor of image analysis for diagnostic purposes. Further, one of ordinary skill in the art at the time of filing the claimed invention could easily conceive that image diagnostics of a patient can result in a multitude of diagnoses. For example, analyzing a chest CT can result in both cardiac diagnoses and pulmonary diagnoses. 
If a user were a cardiologist, they would be most interested in and most knowledgeable about the conditions specifically related to the heart. On the contrary, if a user were a pulmonologist, they would be most interested in and most knowledgeable about the conditions specifically related to the lungs. Presenting information unrelated to the area in question or the area of expertise can clutter the screen and present useless information. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Ujiie in order to ensure the most relevant data is displayed corresponding to the body parts in question. Kikuchi and Ujiie in the combination fail to explicitly disclose as further recited. However, JP ‘738 discloses wherein the acquisition source is information indicating the stored location of a diagnosis support processing result (page 9, “The reference destination information in FIG. 4 is a URL or an IP address, but the reference destination of the reference destination information may be on the network (LAN) of the medical system 100 as well as on the Internet (WAN). For example, a configuration in which a database of reference information exists on the network of the medical system 100 may be used;” page 21, “In the first embodiment or the second embodiment, the storage location information (URL or the like) of the reference information presented for understanding the medical report is presented as the reference destination information.”). As noted above, Kikuchi and Ujiie are directed toward similar methods of endeavor of image processing for diagnostic purposes.
Further, JP ‘738 is directed toward “provide a medical information management device capable of providing a worker of a medical institution with information about medical information (abstract).” As can be easily seen by one of ordinary skill in the art before the effective filing date of the claimed invention, Kikuchi, Ujiie and JP ‘738 are directed toward medical information processing and presentation. Further, it is well known by one of ordinary skill in the art before the effective filing date of the claimed invention that user interfaces often represent black boxes, in the sense that users aren’t always entirely aware of what is represented by buttons or other presentation mechanisms. Presenting the URL data of where information is stored allows users to understand where to find additional data, if not within the program itself. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of JP ‘738 in order to ensure a user is aware of where to find stored data, if the program stops working or they can’t access the program for some reason.

Regarding dependent claim 2, the rejection of claim 1 is incorporated herein. Additionally, Kikuchi in the combination further discloses wherein the at least one processor is configured to output each diagnosis support processing result in association with the one medical image as a diagnosis support processing result (paragraph 0036, “The diagnosis assistance button 622 is a button for performing the deduction of the diagnosis name from an image of the lesion position 631. When the diagnosis assistance button 622 is clicked by using the mouse, the deduction of the diagnosis name is performed from the image of the lesion position 631, and a diagnosis assistance result 641 is displayed together with the image finding serving as the reference information. In the example of FIG.
6, it is displayed in the diagnosis assistance result 641 that a probability of the primary lung cancer is “83%”, a probability of the metastatic lung cancer is “12%”, and a probability of the benign node is “5%” as the diagnosis deduction result”).

Regarding dependent claim 5, the rejection of claim 1 is incorporated herein. Additionally, Ujiie in the combination further discloses wherein the at least one processor is configured to: perform the diagnosis support processing on the one medical image for each type of disease and associate the type of the disease with a diagnosis support processing result (Figure 17, the results (element 40) are associated with the region in element 52; the type of disease is read as the region the disease is in); present information for selecting the type of the disease as the selection information (Fig. 17, element 52 allows for selecting the results associated with the specific region/organ; paragraph 0143, “In FIG. 17, an example list 52 is displayed on the monitor 32, in the case the target body part is selected by a user”); and output a diagnosis support processing result associated with the type of the disease selected by the user according to the selection information (Fig. 17, element 52 allows for selecting the results associated with the specific region/organ; paragraph 0143, “In FIG. 17, an example list 52 is displayed on the monitor 32, in the case the target body part is selected by a user;” paragraph 0145, “In this way, the input/output controlling function 37 c accepts the operation to select the target body part on the list 52. Further the input/output controlling function 37 c outputs the accepted information to the generating function 37 d.”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Ujiie in order to ensure multiple results can be processed on an image, and the user can select the results of importance, as opposed to reviewing all data which may not be relevant to their specialty.

Regarding independent claim 6, Kikuchi discloses A diagnosis support processing device (paragraph 0001, “The disclosure of the present specification relates to an information processing apparatus, an information processing system, an information processing method, and a program;” paragraph 0002, “A computer aided diagnosis (CAD) system has been proposed in which a medical image is analyzed by a calculator, and information is presented for assisting radiogram interpretation where a doctor observes the medical image and conducts a diagnosis.”) comprising: at least one processor (paragraph 0024, “The CPU 203 is an example of a processor.”), the at least one processor being configured to: perform diagnosis support processing on one medical image and derive a plurality of diagnosis support processing results by extracting each part included in the one medical image in a segment image and associating the part with a diagnosis support processing result (paragraph 0039, “The image obtaining unit 301 obtains the medical image data 410-i (i=1, . . . , Ni) from the medical image DB 102 via the LAN 103;” abstract, “An information processing method includes deducing a diagnosis name derived from a medical image on the basis of an image feature amount corresponding to a value indicating a feature of a medical image;” paragraph 0058, “The selection unit 309 selects a predetermined number of the image finding types having high matching degrees Psn (higher than others).
When the image finding type is selected, the image finding values deduced by the second deduction units 305-i corresponding to the image finding types are selected;” paragraph 0059, “The presentation unit 310 creates the diagnosis assistance result 641 from the diagnosis name that is the deduction result of the first deduction unit 304 and the image finding that is the deduction result of the second deduction units 305-i (i=1, . . . , Nf) corresponding to the image finding type selected by the selection unit 309;” see also Figure 6, multiple diagnosis deduction results are displayed in box 641); output each diagnosis support processing result in association with the one medical image (paragraph 0036, “The diagnosis assistance button 622 is a button for performing the deduction of the diagnosis name from an image of the lesion position 631. When the diagnosis assistance button 622 is clicked by using the mouse, the deduction of the diagnosis name is performed from the image of the lesion position 631, and a diagnosis assistance result 641 is displayed together with the image finding serving as the reference information. In the example of FIG. 6, it is displayed in the diagnosis assistance result 641 that a probability of the primary lung cancer is “83%”, a probability of the metastatic lung cancer is “12%”, and a probability of the benign node is “5%” as the diagnosis deduction result”); and Kikuchi fails to explicitly disclose as further recited. However, Ujiie discloses output selection information for allowing a user to select at least one diagnosis support processing result among the plurality of derived diagnosis support processing results in association with the one medical image (Fig. 17, element 52 allows for selecting the results associated with the specific region/organ; paragraph 0143, “ In FIG. 
17, an example list 52 is displayed on the monitor 32, in the case the target body part is selected by a user;” paragraph 0145, “In this way, the input/output controlling function 37 c accepts the operation to select the target body part on the list 52. Further the input/output controlling function 37 c outputs the accepted information to the generating function 37 d;” Figure 17, as the element in 52 is selected, the list in 40 is altered only showing the results relevant to the region/organ selected), wherein the at least one processor is configured to present, as the selection information, information indicating an acquisition source for acquiring a diagnosis support processing result for each of the plurality of diagnosis support processing results (Fig. 17, element 52 allows for selecting the results associated with the specific region/organ; paragraph 0143, “ In FIG. 17, an example list 52 is displayed on the monitor 32, in the case the target body part is selected by a user;” paragraph 0145, “In this way, the input/output controlling function 37 c accepts the operation to select the target body part on the list 52. Further the input/output controlling function 37 c outputs the accepted information to the generating function 37 d.”). Kikuchi is directed toward “An information processing method includes deducing a diagnosis name derived from a medical image on the basis of an image feature amount (abstract).” Ujiie is directed toward “A medical imaging diagnosis apparatus (abstract).” As can be easily seen by one of ordinary skill in the art at the time of filing the claimed invention, both Kikuchi and Ujiie are directed toward similar methods of endeavor of image analysis for diagnostic purposes. Further, one of ordinary skill in the art at the time of filing the claimed invention could easily conceive that image diagnostics of a patient can result in a multitude of diagnoses. 
For example, analyzing a chest CT can result in both cardiac diagnoses and pulmonary diagnoses. If a user were a cardiologist, they would be most interested in and most knowledgeable about the conditions specifically related to the heart. On the contrary, if a user were a pulmonologist, they would be most interested in and most knowledgeable about the conditions specifically related to the lungs. Presenting information unrelated to the area in question or the area of expertise can clutter the screen and present useless information. Thus, it would have been obvious to a person having ordinary skill in the art at the time the claimed invention was filed to incorporate the teaching of Ujiie in order to ensure the most relevant data is displayed corresponding to the body parts in question. Kikuchi and Ujiie in the combination fail to explicitly disclose as further recited. However, JP ‘738 discloses wherein the acquisition source is information indicating the stored location of a diagnosis support processing result (page 9, “The reference destination information in FIG. 4 is a URL or an IP address, but the reference destination of the reference destination information may be on the network (LAN) of the medical system 100 as well as on the Internet (WAN). For example, a configuration in which a database of reference information exists on the network of the medical system 100 may be used;” page 21, “In the first embodiment or the second embodiment, the storage location information (URL or the like) of the reference information presented for understanding the medical report is presented as the reference destination information.”). As noted above, Kikuchi and Ujiie are directed toward similar methods of endeavor of image processing for diagnostic purposes.
Further, JP ‘738 is directed toward “provide a medical information management device capable of providing a worker of a medical institution with information about medical information (abstract).” As can be easily seen by one of ordinary skill in the art before the effective filing date of the claimed invention, Kikuchi, Ujiie and JP ‘738 are directed toward medical information processing and presentation. Further, it is well known by one of ordinary skill in the art before the effective filing date of the claimed invention that user interfaces often represent black boxes, in the sense that users aren’t always entirely aware of what is represented by buttons or other presentation mechanisms. Presenting the URL data of where information is stored allows users to understand where to find additional data, if not within the program itself. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of JP ‘738 in order to ensure a user is aware of where to find stored data, if the program stops working or they can’t access the program for some reason.

Regarding independent claim 7, the rejection of claim 1 applies directly.
Additionally, Kikuchi discloses An information processing method executed by a computer (paragraph 0001, “The disclosure of the present specification relates to an information processing apparatus, an information processing system, an information processing method, and a program.”), the method comprising: performing diagnosis support processing on one medical image and deriving a plurality of diagnosis support processing results by extracting each part included in the one medical image in a segment image and associating the part with a diagnosis support processing result (see claim 1 analysis); presenting selection information for allowing a user to select at least one diagnosis support processing result among the plurality of derived diagnosis support processing results, together with the one medical image (see claim 1 analysis); and outputting a diagnosis support processing result associated with the part selected by the user according to the selection information (see claim 1 analysis), wherein the selection information is information indicating an acquisition source for acquiring a diagnosis support processing result for each of the plurality of diagnosis support processing results (see claim 1 analysis), and wherein the acquisition source is information indicating the stored location of a diagnosis support processing result (see claim 1 analysis).

Regarding independent claim 8, the rejection of claim 6 applies directly.
Additionally, Kikuchi discloses A diagnosis support processing method executed by a computer (paragraph 0001, “The disclosure of the present specification relates to an information processing apparatus, an information processing system, an information processing method, and a program;” abstract, “An information processing method includes deducing a diagnosis name derived from a medical image on the basis of an image feature amount corresponding to a value indicating a feature of a medical image”), the method comprising: Perform diagnosis support processing on one medical image and deriving a plurality of diagnosis support processing results by extracting each part included in the one medical image in a segment image and associating the part with a diagnosis support processing result (see claim 6 analysis); outputting each diagnosis support processing result in association with the one medical image (see claim 6 analysis); and outputting selection information for allowing a user to select at least one diagnosis support processing result among the plurality of derived diagnosis support processing results in association with the one medical image (see claim 6 analysis), wherein the selection information is information indicating an acquisition source for acquiring a diagnosis support processing result for each of the plurality of diagnosis support processing results (see claim 6 analysis), and wherein the acquisition source is information indicating the stored location of a diagnosis support processing result (see claim 6 analysis).

Regarding independent claim 9, the rejection of claim 1 applies directly.
Additionally, Kikuchi discloses A non-transitory computer readable medium storing an information processing program causing a computer to execute a process (paragraph 0001, “The disclosure of the present specification relates to an information processing apparatus, an information processing system, an information processing method, and a program;” paragraph 0024, “A random access memory (RAM) 204 temporarily stores information to be used when the CPU 203 executes the program.”) comprising: performing diagnosis support processing on one medical image and deriving a plurality of diagnosis support processing results by extracting each part included in the one medical image in a segment image and associating the part with a diagnosis support processing result (see claim 1 analysis); presenting selection information for allowing a user to select at least one diagnosis support processing result among the plurality of derived diagnosis support processing results, together with the one medical image (see claim 1 analysis); and outputting a diagnosis support processing result associated with the part selected by the user according to the selection information (see claim 1 analysis), wherein the selection information is information indicating an acquisition source for acquiring a diagnosis support processing result for each of the plurality of diagnosis support processing results (see claim 1 analysis), and wherein the acquisition source is information indicating the stored location of a diagnosis support processing result (see claim 1 analysis).

Regarding independent claim 10, the rejection of claim 6 applies directly.
Additionally, Kikuchi discloses A non-transitory computer readable medium storing a diagnosis support processing program causing a computer to execute a process (paragraph 0001, “The disclosure of the present specification relates to an information processing apparatus, an information processing system, an information processing method, and a program;” paragraph 0024, “A random access memory (RAM) 204 temporarily stores information to be used when the CPU 203 executes the program.”) comprising: Perform diagnosis support processing on one medical image and derive a plurality of diagnosis support processing results by extracting each part included in the one medical image in a segment image and associating the part with a diagnosis support processing result (see claim 6 analysis); outputting each diagnosis support processing result in association with the one medical image (see claim 6 analysis); and outputting selection information for allowing a user to select at least one diagnosis support processing result among the plurality of derived diagnosis support processing results in association with the one medical image (see claim 6 analysis), wherein the selection information is information indicating an acquisition source for acquiring a diagnosis support processing result for each of the plurality of diagnosis support processing results (see claim 6 analysis), and wherein the acquisition source is information indicating the stored location of a diagnosis support processing result (see claim 6 analysis).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Courtney J. Nelson whose telephone number is (571)272-3956. The examiner can normally be reached Monday - Friday 8:00 - 4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Villecco, can be reached at 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/COURTNEY JOAN NELSON/
Primary Examiner, Art Unit 2661

Prosecution Timeline

Nov 09, 2023
Application Filed
Sep 22, 2025
Non-Final Rejection — §103
Dec 09, 2025
Response Filed
Feb 04, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603175
METHOD AND APPARATUS FOR DETERMINING DIAGNOSIS RESULT DATA
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597188
SYSTEMS AND METHODS FOR PROCESSING ELECTRONIC IMAGES FOR PHYSIOLOGY-COMPENSATED RECONSTRUCTION
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12597494
METHOD AND APPARATUS FOR TRAINING MEDICAL IMAGE REPORT GENERATION MODEL, AND IMAGE REPORT GENERATION METHOD AND APPARATUS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12588881
PROVIDING A RESULT DATA SET
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12592016
Material-Specific Attenuation Maps for Combined Imaging Systems
Granted Mar 31, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 86%
With Interview: 96% (+9.4%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate
Based on 252 resolved cases by this examiner. Grant probability derived from career allow rate.
