Prosecution Insights
Last updated: April 19, 2026
Application No. 18/481,260

WORK SUPPORT APPARATUS, WORK SUPPORT METHOD, AND WORK SUPPORT PROGRAM

Final Rejection (§101, §103)
Filed: Oct 05, 2023
Examiner: WASEEM, HUMA
Art Unit: 3686
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Fujifilm Corporation
OA Round: 2 (Final)
Grant Probability: 17% (At Risk)
OA Rounds: 3-4
To Grant: 4y 3m
With Interview: 35%

Examiner Intelligence

Career Allow Rate: 17% (9 granted / 54 resolved; -35.3% vs TC avg)
Interview Lift: +18.4% for resolved cases with interview
Avg Prosecution: 4y 3m (typical timeline); 31 currently pending
Total Applications: 85 across all art units (career history)
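The headline figures in this section reduce to simple ratios. As a quick check (the Tech Center average below is back-derived from the stated -35.3% delta, so it is an assumption, not a figure reported here):

```python
granted, resolved = 9, 54           # career totals stated above
allow_rate = granted / resolved     # 0.1667, displayed as 17%

# "-35.3% vs TC avg" implies the Tech Center average allow rate is
# roughly allow_rate + 0.353 (assumption: the delta is in absolute points)
tc_avg_estimate = allow_rate + 0.353

print(f"career allow rate: {allow_rate:.1%}")       # 16.7%
print(f"estimated TC avg:  {tc_avg_estimate:.1%}")  # 52.0%
```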

Statute-Specific Performance

§101: 31.4% (-8.6% vs TC avg)
§103: 39.4% (-0.6% vs TC avg)
§102: 17.8% (-22.2% vs TC avg)
§112: 7.9% (-32.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 54 resolved cases

Office Action

§101 §103
DETAILED ACTION

This is responsive to amendments filed on 05/22/2025, in which claims 1-17 are presented for examination; claims 1, 3, and 16-17 have been amended.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1:

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes, it is a machine (apparatus).

Step 2A, Prong One (judicial exception): Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes, the claim falls under mental processes.
Claim 1 recites: “A work support apparatus comprising at least one processor, wherein the processor is configured to: acquire a medical image; extract, by an extraction process, a region to be a target of creation work of a medical document by a user in the medical image; derive, by using a trained model, findings of the region which has been extracted, wherein the trained model is trained using training data, the training data comprises a large number of combinations of a medical image including an abnormal shadow, information specifying a region in the medical image in which the abnormal shadow is present, and a finding of the abnormal shadow; and in response to receiving an operation by the user in which the region is designated by the user, perform control to display, in an identifiable manner, a status for the region which has been designated among a plurality of statuses related to the creation work of the medical document by the user. and use at least one of a plurality of comments on findings which are generated based on the findings of the region to create the medical document, wherein the status for the designated region is displayed in the identifiable manner for indicating the creation work is incomplete.”

All of the limitations above are abstract ideas related to mental processes (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion), with the exception of the bold and underlined limitations. The claim language pertains to analyzing a medical image (e.g., an X-ray, CT scan, etc.) and identifying the specific regions for which more examining (work) is required. Any information for a specific target (needing work) can be extracted/retrieved by writing on paper. An image can be examined (e.g., an X-ray) and any abnormality can be analyzed. Any area which is not examined can be identified by simply viewing the image (e.g.,
in an X-ray image). All of this can be done mentally or on paper (by marking the areas that need to be examined).

Step 2A, Prong Two: Evaluate whether the claim recites additional elements that integrate the exception into a practical application of the exception. NO. The claim does recite additional elements; however, they do not integrate the exception into a practical application of the exception:

acquire a medical image: adding insignificant extra-solution activity to the judicial exception (see MPEP 2106.05(g))
work support apparatus: adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f))
processor: merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f))
display: merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f))
trained model: merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f))
trained using training data: merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f))

Step 2B: Evaluate whether the claim recites additional elements that amount to an inventive concept (i.e., “significantly more”) than the recited judicial exception.
NO. As discussed previously with respect to Step 2A, Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component. Regarding the claim limitation “acquire a medical image,” the courts have recognized computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (“i. Receiving or transmitting data over a network, e.g., using the Internet to gather data,” Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information)); see MPEP 2106.05(d)(II). The same analysis applies here in Step 2B, i.e., mere instructions to apply an exception using a generic computer component cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.

Dependent claims 2-15 further narrow the abstract idea and add the additional element of a “computer.” Under Step 2A, Prong Two, the additional elements do not integrate the exception into a practical application of the exception, as they merely add the words “apply it” (or an equivalent) to the judicial exception, amount to mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). As discussed previously with respect to Step 2A, Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component, and the same analysis applies here in Step 2B.

Regarding claim 16, it is rejected under the same rationale as claim 1. Regarding claim 17, it is rejected under the same rationale as claim 1.
In addition, it adds the additional element of a “non-transitory computer-readable storage medium.” Under Step 2A, Prong Two, this additional element does not integrate the exception into a practical application of the exception, as it merely adds the words “apply it” (or an equivalent) to the judicial exception, amounts to mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). As discussed previously with respect to Step 2A, Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component; the same analysis applies here in Step 2B, i.e., such instructions cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-11 and 16-17 are rejected under 35 U.S.C.
103 as being unpatentable over MATSUMOTO et al. (US 2020/0111558 A1) in view of Ishida et al. (US 2009/0169075 A1). Regarding claim 1, MATSUMOTO teaches a work support apparatus comprising at least one processor, wherein the processor is configured to (see para 0007): acquire a medical image (para, “[0057] When the medical image is received from the image generation apparatus 1, the received medical image is stored in the image DB 5. On the basis of the header information of the received medical image, the management information is generated and stored in the image management table. When the CAD information is received from the CAD 2, the received CAD information is stored in the image DB 5, and a record of which UID agrees with the CAD information in the image management table is retrieved, and the file name, the file location, and the like of the CAD information are written in the retrieved record. Thus, the medical image and the CAD information generated from the medical image are associated with each other, and stored in the image DB 5 so as to be retrievable.”); extract, by an extraction process, a region to be a target of creation work of a medical document by a user in the medical image (para, “[0032] The medical image display system 100 is a system in which: a medical image is taken; on the basis of the medical image, abnormal shadow candidate(s) is detected; and the detection result along with the medical image is provided to an image interpreter(s).” Also, para, “[0041] After finishing the process of detecting abnormal shadow candidates with the detection algorithm, the CAD 2 generates an abnormal shadow candidate detection result (hereinafter called CAD information). The CAD information includes: positional information of the region (contour) of each detected abnormal shadow candidate; the type of each detected abnormal shadow candidate (e.g.
nodular shadows and cardiac hypertrophy); the number of the detected abnormal shadow candidates; the level of severity of each abnormal shadow candidate as a disease (e.g. the risk of death); and an abnormal shadow probability indicating a probability that the abnormal shadow candidate is an abnormal shadow. The CAD 2 attaches the generated CAD information to the header information of the medical image on which the process of detecting abnormal shadow candidates has been performed, and sends the medical image to the image DB 5 and/or the information processing apparatus 3 through the communication unit. The CAD information may be accumulated in the image DB 5 as a file separate from the corresponding medical image and associated with the corresponding medical image such that they are recognized to correspond to each other.”); derive, by using a trained model, findings of the region which has been extracted (para, “[0040] As the detection algorithm for detecting abnormal shadow candidates, an algorithm known to the public can be adopted. For example, fully convolutional networks (FCN), which is a deep learning model, can be used. Details of an FCN is described later.” Also, para, “[0041] After finishing the process of detecting abnormal shadow candidates with the detection algorithm, the CAD 2 generates an abnormal shadow candidate detection result (hereinafter called CAD information). The CAD information includes: positional information of the region (contour) of each detected abnormal shadow candidate; the type of each detected abnormal shadow candidate (e.g. nodular shadows and cardiac hypertrophy); the number of the detected abnormal shadow candidates; the level of severity of each abnormal shadow candidate as a disease (e.g. the risk of death); and an abnormal shadow probability indicating a probability that the abnormal shadow candidate is an abnormal shadow.
The CAD 2 attaches the generated CAD information to the header information of the medical image on which the process of detecting abnormal shadow candidates has been performed, and sends the medical image to the image DB 5 and/or the information processing apparatus 3 through the communication unit. The CAD information may be accumulated in the image DB 5 as a file separate from the corresponding medical image and associated with the corresponding medical image such that they are recognized to correspond to each other.”), and in response to receiving an operation by the user in which the region is designated by the user, perform control to display, in an identifiable manner, a status for the region which has been designated among a plurality of statuses related to the creation work of the medical document by the user(para, “[0069] In the process of determining the number of image interpreters, the controller 31 determines the number of image interpreters according to whether or not an abnormal shadow candidate is present. Because only medical images that are highly likely to include an abnormal shadow(s) are interpreted by a plurality of image interpreters, accuracy in image interpretation is improved, and increase of time for image interpretation is restrained.” Also para, “[0061] By performing the process using FCN, a heat map is output for each type of lesion to be diagnosed. The heat map shows probabilities indicating respective points on an image being a lesion.”) and use at least one of a plurality of comments on findings which are generated based on the findings of the region to create the medical document (para, “[0131] The viewer screen also displays an input section for the image interpreter to input findings on the specified lesion region. The input section has checkboxes for selecting a lesion type of the specified lesion region. 
When the image interpreter checks the lesion type with the operation unit 42, checkboxes for selecting findings (characteristics (e.g. small round, amorphous or indistinct, pleomorphic), categories, etc.) on the lesion region for the checked lesion type are displayed.” Also, para, “[0135] The image interpretation result information includes, as described above, information on the lesion type, the number of lesion regions determined to be the lesion, positional information of each lesion region, and findings. The image interpretation result information may also include the image interpretation completion information, the reassignment instruction information and reassignment criteria.”), wherein the status for the designated region is displayed in the identifiable manner for indicating the creation work is incomplete (para, “[0135] The image interpretation result information includes, as described above, information on the lesion type, the number of lesion regions determined to be the lesion, positional information of each lesion region, and findings. The image interpretation result information may also include the image interpretation completion information, the reassignment instruction information and reassignment criteria.” Also, see Fig. 9.) MATSUMOTO does not explicitly teach: wherein the trained model is trained using training data, the training data comprises a large number of combinations of a medical image including an abnormal shadow, information specifying a region in the medical image in which the abnormal shadow is present, and a finding of the abnormal shadow. 
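The detection scheme quoted from MATSUMOTO above (an FCN outputs a per-lesion-type heat map of probabilities, and a candidate is flagged where the probability exceeds a threshold, ¶¶[0040]-[0041], [0061]-[0062]) can be sketched roughly as follows; the function name, the bounding-box form of the positional information, and the 0.5 default threshold are illustrative assumptions, not details taken from the reference:

```python
def detect_candidate(heat_map, threshold=0.5):
    """Rough sketch of heat-map thresholding: heat_map is a 2-D list of
    per-pixel probabilities for one lesion type; pixels whose probability
    exceeds the threshold form the abnormal-shadow candidate region."""
    hits = [(y, x) for y, row in enumerate(heat_map)
                   for x, p in enumerate(row) if p > threshold]
    if not hits:
        return None  # no candidate detected for this lesion type
    ys = [y for y, _ in hits]
    xs = [x for _, x in hits]
    return {
        # crude positional information: bounding box of candidate pixels
        "bbox": (min(ys), min(xs), max(ys), max(xs)),
        # abnormal shadow probability: peak value inside the region
        "probability": max(heat_map[y][x] for y, x in hits),
    }

# toy 4x4 heat map with one high-probability spot
hm = [[0.0] * 4 for _ in range(4)]
hm[1][2] = 0.9
print(detect_candidate(hm))  # {'bbox': (1, 2, 1, 2), 'probability': 0.9}
```

Under the claim mapping above, the returned positional information and probability correspond to the "CAD information" MATSUMOTO attaches to the image header; the FCN itself is outside the scope of this sketch.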
Ishida teaches: wherein the trained model is trained using training data, the training data comprises a large number of combinations of a medical image including an abnormal shadow, information specifying a region in the medical image in which the abnormal shadow is present, and a finding of the abnormal shadow (para, “[0004] In common practice, when a discrimination device is used for pattern recognition, for example, preparation is made to get the pattern image of an abnormal shadow to be detected. Then image feature quantity including such statistical values as the average pixel value and distribution value or such geometric feature quantities as size and circularity in the image area of that abnormal shadow are inputted into the ANN as training data. Further, the ANN is made to learn in such a way that the output value close to "1" should be outputted if the pattern is similar to that of the abnormal shadow image. Likewise, using the pattern image of the shadow of a normal tissue (called the normal shadow), the ANN is made to learn in such a way that the output value close to "0" should be outputted if the pattern is similar to that of the normal shadow image. This arrangement ensures that, if the image feature quantity of the image to be detected is inputted to the aforementioned ANN, the output value of 0 through 1 is obtained from that image feature quantity. Accordingly, if this value is close to "1", it is highly likely that the shadow is abnormal; whereas, if this value is close to "0", it is highly likely that the shadow is normal.
Thus, in the conventional CAD, the abnormal shadow candidates have been detected according to the output value obtained from this method.” Para, “[0015] The invention described in Structure (3) is the image processing method described in Structure (1) or (2) wherein the aforementioned training input image includes a plurality of training feature images created by applying image processing to the training input image, in the learning step, the pixel value of the pixel of interest located at the corresponding position in each of a plurality of the training input images is inputted into the discrimination device, and in the training output image, the pixel value of the pixel corresponding to the pixel of interest is set as the learning target value for the input of the discrimination device.”) It would have been obvious for a person of ordinary skill in the art to apply abnormal shadow training teachings of Ishida into the teachings of MATSUMOTO at the time the application was filed in order to develop apparatus for detecting the candidate area for the abnormal shadow. (para, “[0003] In the medical field, this method is used to develop the apparatus for detecting a candidate area for the abnormal shadow by recognizing the pattern of the image area assumed to be the shadow (called the abnormal shadow) of a portion of lesion from the medical image obtained by examination of radiographing. This apparatus is called the CAD (Computer Aided Diagnosis Apparatus).”) Regarding claim 2, MATSUMOTO as modified by Ishida teaches the work support apparatus according to claim 1. MATSUMOTO further teaches wherein the plurality of statuses include at least one of a status in which the work is required or a status in which the work is not required (para, “[0062] A lesion is detected if a probability value in the heat map exceeds a predetermined threshold. The abnormal shadow probability is determined on the basis of a predetermined threshold and the probability value in the heat map. 
For example, if “V<(1.0+T)±2.0” holds, wherein V represents a probability value in the heat map, and T represents a threshold value being 0.5, it is determined that determination of whether or not the region is an abnormal shadow is difficult to be made. A triage level that indicates the level of urgency of the image interpretation is determined for each type of lesion. For example, the triage level for nodular shadows is set high, because the nodular shadows indicate a possibility of lung cancer and requires early diagnosis and treatment.” Also, para, “[0094] The “image interpretation completed” button B1 is operated, for example, in a case where a plurality of image interpreters has been assigned the medical image, and any of the assigned image interpreters determines that the medical image does not need to be interpreted by a plurality of image interpreters. That is, the “image interpretation completed” button B1 is operated, for example, in a case where the CAD 2 has determined that a plurality of image interpreters should interpret the medical image, but any of the assigned image interpreters determines, in actual image interpretation, that the medical image does not need to be interpreted by a plurality of image interpreters.” Note: Also, see Fig. 9 and accompanying disclosure.) Regarding claim 3, MATSUMOTO as modified by Ishida teaches the work support apparatus according to claim 1. 
MATSUMOTO further teaches wherein the plurality of statuses include two or more of a status in which the user has not confirmed the region, a status in which the user has designated the region as the target of the work and the work is incomplete, a status in which the user has designated that the region is excluded from the target of the work, and a status that the work for the region is completed (para, “[0094] The “image interpretation completed” button B1 is operated, for example, in a case where a plurality of image interpreters has been assigned the medical image, and any of the assigned image interpreters determines that the medical image does not need to be interpreted by a plurality of image interpreters. That is, the “image interpretation completed” button B1 is operated, for example, in a case where the CAD 2 has determined that a plurality of image interpreters should interpret the medical image, but any of the assigned image interpreters determines, in actual image interpretation, that the medical image does not need to be interpreted by a plurality of image interpreters.” Also, para “[0113] Each doctor-in-charge display section 201 has a checkbox 202. When an image interpreter being displayed in the doctor-in-charge display section 201 completes interpreting the medical image, the image interpreter can check his/her checkbox 202. In a case where a detection result by the CAD 2 is determined to be highly accurate, the checkbox 202 may be automatically checked (interpretation of the medical image is determined to be complete) by the setting.” Also, para, “[0062] A lesion is detected if a probability value in the heat map exceeds a predetermined threshold. The abnormal shadow probability is determined on the basis of a predetermined threshold and the probability value in the heat map. 
For example, if “V<(1.0+T)±2.0” holds, wherein V represents a probability value in the heat map, and T represents a threshold value being 0.5, it is determined that determination of whether or not the region is an abnormal shadow is difficult to be made. A triage level that indicates the level of urgency of the image interpretation is determined for each type of lesion. For example, the triage level for nodular shadows is set high, because the nodular shadows indicate a possibility of lung cancer and requires early diagnosis and treatment.” Note: Also, see Fig. 9 and accompanying disclosure.) Regarding claim 4, MATSUMOTO as modified by Ishida teaches the work support apparatus according to claim 3. MATSUMOTO further teaches wherein the processor is configured to, in a case where the status is the status in which the user has designated the region as the target of the work and the work is incomplete, perform control to display the status in an identifiable manner by adding a predetermined mark to the region (para, “[0118] In the thumbnail image g1, abnormal shadow candidate regions are enclosed by frames K. The display colors of the frames K can be different according to the types of abnormal shadow candidates. The regions enclosed by the frames K in the thumbnail image g1 are displayed in an enlarged manner as the lesion-detected region images g2, g3.” Note: Also, see Fig. 9.) Regarding claim 5, MATSUMOTO as modified by Ishida teaches the work support apparatus according to claim 3.
MATSUMOTO further teaches wherein the processor is configured to perform control to display the status in which the user has not confirmed the region and the status in which the user has designated the region as the target of the work and the work is incomplete among the plurality of statuses to be different from other statuses (para, “[0037] The image generation apparatus 1 attaches, as header information, the patient information, the examination information, a unique ID (UID) for identifying the medical image, and the like to the generated medical image, sends the medical image with the header information to the image DB 5 through the communication network N, and stores and accumulates the same in the image DB 5. The image generation apparatus 1 can send the medical image directly to the CAD 2 and the information processing apparatus 3. In a case where an apparatus that does not conform to the DICOM standard is used as the image generation apparatus 1, a DICOM conversion device (not illustrated) can be used to input the information to be attached to the medical image to the image generation apparatus 1.” Note: here, only the image with information is provided, and no region has been confirmed. Also, para 0127 teaches displaying the image to be interpreted (not confirmed area), and displaying the abnormal shadow region (designated). Also, para, “[0041] After finishing the process of detecting abnormal shadow candidates with the detection algorithm, the CAD 2 generates an abnormal shadow candidate detection result (hereinafter called CAD information). The CAD information includes: positional information of the region (contour) of each detected abnormal shadow candidate; the type of each detected abnormal shadow candidate (e.g. nodular shadows and cardiac hypertrophy); the number of the detected abnormal shadow candidates; the level of severity of each abnormal shadow candidate as a disease (e.g.
the risk of death); and an abnormal shadow probability indicating a probability that the abnormal shadow candidate is an abnormal shadow. The CAD 2 attaches the generated CAD information to the header information of the medical image on which the process of detecting abnormal shadow candidates has been performed, and sends the medical image to the image DB 5 and/or the information processing apparatus 3 through the communication unit. The CAD information may be accumulated in the image DB 5 as a file separate from the corresponding medical image and associated with the corresponding medical image such that they are recognized to correspond to each other.” Note: now the region has been detected and it is the target of work for interpretation. Also, “[0118] In the thumbnail image g1, abnormal shadow candidate regions are enclosed by frames K. The display colors of the frames K can be different according to the types of abnormal shadow candidates. The regions enclosed by the frames K in the thumbnail image g1 are displayed in an enlarged manner as the lesion-detected region images g2, g3.” Note: here, the confirmed region is indicated using frame K and color. Fig. 9 shows a checkbox that indicates whether the work has been completed or not. Also, see para 0113.) Regarding claim 6, MATSUMOTO as modified by Ishida teaches the work support apparatus according to claim 1. MATSUMOTO further teaches wherein the processor is configured to further perform control to display information regarding each of a plurality of the regions (para, “[0131] The viewer screen also displays an input section for the image interpreter to input findings on the specified lesion region. The input section has checkboxes for selecting a lesion type of the specified lesion region. When the image interpreter checks the lesion type with the operation unit 42, checkboxes for selecting findings (characteristics (e.g. small round, amorphous or indistinct, pleomorphic), categories, etc.)
on the lesion region for the checked lesion type are displayed.) Regarding claim 7, MATSUMOTO as modified by Ishida teaches the work support apparatus according to claim 6. MATSUMOTO further teaches wherein the processor is configured to perform control to display a list of the information regarding each of the plurality of regions(para, “[0131] The viewer screen also displays an input section for the image interpreter to input findings on the specified lesion region. The input section has checkboxes for selecting a lesion type of the specified lesion region. When the image interpreter checks the lesion type with the operation unit 42, checkboxes for selecting findings (characteristics (e.g. small round, amorphous or indistinct, pleomorphic), categories, etc.) on the lesion region for the checked lesion type are displayed.) Regarding claim 8, MATSUMOTO as modified by Ishida teaches the work support apparatus according to claim 7. MATSUMOTO further teaches wherein the processor is configured to perform control to display the list of the information regarding each of the plurality of regions for each of the statuses (para, “[0093] More specifically, the controller 31 determines that the completion operation is performed if image interpretation reports by all the image interpreters determined to interpret the medical image have been registered. Alternatively, the controller 31 may determine that the completion operation is performed if any of the image interpreters operates an “image interpretation completed” button B1 (shown in FIG. 9) with the operation unit 42 of the image display apparatus 4.” Also, para, “[0096] More specifically, the controller 31 determines that the instruction operation to make an instruction to reassign an image interpreter(s) is input if any of the image interpreters who has interpreted the medical image has operated a “reassign” button B2 (shown in FIG. 
9) with the operation unit 42 of the image display apparatus 4.”) Regarding claim 9, MATSUMOTO as modified by Ishida teaches the work support apparatus according to claim 1. MATSUMOTO further teaches wherein the region is a region including an abnormal shadow (para, “[0038] The CAD 2 is a computer that analyzes the medical image provided by the image generation apparatus 1, thereby performing a process of detecting abnormal shadow candidates. The CAD 2 includes: a central processing unit (CPU); a random access memory (RAM); a storage, such as a hard disk drive (HDD); and a communication unit, such as a LAN card.” Note: Also, see para 0040.) Regarding claim 10, MATSUMOTO as modified by Ishida teaches the work support apparatus according to claim 1. MATSUMOTO further teaches wherein the region is extracted from the medical image by an extraction process via a computer (para, “[0002] There is known in the medical field a computer aided diagnosis (CAD) system that automatically detects abnormal shadow candidates in medical images, and outputs the medical images with the visibility of the detected abnormal shadow candidates increased.” Also, para, “[0170] As shown in FIG. 13, the controller 31 determines whether or not an abnormal shadow candidate is present according to the CAD information (Step S401). If the controller 31 determines that an abnormal shadow candidate is present (Step S401: YES), the controller 31 extracts the abnormal shadow probability from the obtained CAD information, and determines whether or not the abnormal shadow probability is equal to or higher than a predetermined threshold (Step S402).”) Regarding claim 11, MATSUMOTO as modified by Ishida teaches the work support apparatus according to claim 1.
MATSUMOTO further teaches wherein the region is a region designated by the user (para, “[0130] When the image interpreter specifies, with the operation unit 42, a lesion (focus) region that the image interpreter has determined to be a suspected lesion in the medical image displayed on the viewer screen, the viewer screen displays a mark indicating the lesion region over the medical image.” Note: Also, see Fig. 9.)

Regarding claim 16, MATSUMOTO teaches a work support method executed by a processor provided in a work support apparatus, the method comprising: acquiring a medical image (para, “[0057] When the medical image is received from the image generation apparatus 1, the received medical image is stored in the image DB 5. On the basis of the header information of the received medical image, the management information is generated and stored in the image management table. When the CAD information is received from the CAD 2, the received CAD information is stored in the image DB 5, and a record of which UID agrees with the CAD information in the image management table is retrieved, and the file name, the file location, and the like of the CAD information are written in the retrieved record. Thus, the medical image and the CAD information generated from the medical image are associated with each other, and stored in the image DB 5 so as to be retrievable.”); extracting, by an extraction process, a region to be a target of creation work of a medical document by a user in the medical image (para, “[0032] The medical image display system 100 is a system in which: a medical image is taken; on the basis of the medical image, abnormal shadow candidate(s) is detected; and the detection result along with the medical image is provided to an image interpreter(s).” Also, para, “[0041] After finishing the process of detecting abnormal shadow candidates with the detection algorithm, the CAD 2 generates an abnormal shadow candidate detection result (hereinafter called CAD information). The CAD information includes: positional information of the region (contour) of each detected abnormal shadow candidate; the type of each detected abnormal shadow candidate (e.g. nodular shadows and cardiac hypertrophy); the number of the detected abnormal shadow candidates; the level of severity of each abnormal shadow candidate as a disease (e.g. the risk of death); and an abnormal shadow probability indicating a probability that the abnormal shadow candidate is an abnormal shadow. The CAD 2 attaches the generated CAD information to the header information of the medical image on which the process of detecting abnormal shadow candidates has been performed, and sends the medical image to the image DB 5 and/or the information processing apparatus 3 through the communication unit. The CAD information may be accumulated in the image DB 5 as a file separate from the corresponding medical image and associated with the corresponding medical image such that they are recognized to correspond to each other.”); deriving, by using a trained model, findings of the region which has been extracted (para, “[0040] As the detection algorithm for detecting abnormal shadow candidates, an algorithm known to the public can be adopted. For example, fully convolutional networks (FCN), which is a deep learning model, can be used. Details of an FCN is described later.” Also, para, “[0041] After finishing the process of detecting abnormal shadow candidates with the detection algorithm, the CAD 2 generates an abnormal shadow candidate detection result (hereinafter called CAD information). The CAD information includes: positional information of the region (contour) of each detected abnormal shadow candidate; the type of each detected abnormal shadow candidate (e.g. nodular shadows and cardiac hypertrophy); the number of the detected abnormal shadow candidates; the level of severity of each abnormal shadow candidate as a disease (e.g. the risk of death); and an abnormal shadow probability indicating a probability that the abnormal shadow candidate is an abnormal shadow. The CAD 2 attaches the generated CAD information to the header information of the medical image on which the process of detecting abnormal shadow candidates has been performed, and sends the medical image to the image DB 5 and/or the information processing apparatus 3 through the communication unit. The CAD information may be accumulated in the image DB 5 as a file separate from the corresponding medical image and associated with the corresponding medical image such that they are recognized to correspond to each other.”), and in response to receiving an operation by the user in which the region is designated by the user, performing control to display, in an identifiable manner, a status for the region which has been designated among a plurality of statuses related to the creation work of the medical document by the user (para, “[0069] In the process of determining the number of image interpreters, the controller 31 determines the number of image interpreters according to whether or not an abnormal shadow candidate is present. Because only medical images that are highly likely to include an abnormal shadow(s) are interpreted by a plurality of image interpreters, accuracy in image interpretation is improved, and increase of time for image interpretation is restrained.” Also, para, “[0061] By performing the process using FCN, a heat map is output for each type of lesion to be diagnosed. The heat map shows probabilities indicating respective points on an image being a lesion.”) and use at least one of a plurality of comments on findings which are generated based on the findings of the region to create the medical document (para, “[0131] The viewer screen also displays an input section for the image interpreter to input findings on the specified lesion region. The input section has checkboxes for selecting a lesion type of the specified lesion region. When the image interpreter checks the lesion type with the operation unit 42, checkboxes for selecting findings (characteristics (e.g. small round, amorphous or indistinct, pleomorphic), categories, etc.)
on the lesion region for the checked lesion type are displayed.” Also, para, “[0135] The image interpretation result information includes, as described above, information on the lesion type, the number of lesion regions determined to be the lesion, positional information of each lesion region, and findings. The image interpretation result information may also include the image interpretation completion information, the reassignment instruction information and reassignment criteria.”), wherein the status for the designated region is displayed in the identifiable manner for indicating the creation work is incomplete (para, “[0135] The image interpretation result information includes, as described above, information on the lesion type, the number of lesion regions determined to be the lesion, positional information of each lesion region, and findings. The image interpretation result information may also include the image interpretation completion information, the reassignment instruction information and reassignment criteria.” Also, see Fig. 9.)

MATSUMOTO does not explicitly teach: wherein the trained model is trained using training data, the training data comprises a large number of combinations of a medical image including an abnormal shadow, information specifying a region in the medical image in which the abnormal shadow is present, and a finding of the abnormal shadow.

Ishida teaches: wherein the trained model is trained using training data, the training data comprises a large number of combinations of a medical image including an abnormal shadow, information specifying a region in the medical image in which the abnormal shadow is present, and a finding of the abnormal shadow (para, “[0004] In common practice, when a discrimination device is used for pattern recognition, for example, preparation is made to get the pattern image of an abnormal shadow to be detected. Then image feature quantity including such statistical values as the average pixel value and distribution value or such geometric feature quantities as size and circularity in the image area of that abnormal shadow are inputted into the ANN as training data. Further, the ANN is made to learn in such a way that the output value close to "1" should be outputted if the pattern is similar to that of the abnormal shadow image. Likewise, using the pattern image of the shadow of a normal tissue (called the normal shadow), the ANN is made to learn in such a way that the output value close to "0" should be outputted if the pattern is similar to that of the normal shadow image. This arrangement ensures that, if the image feature quantity of the image to be detected is inputted to the aforementioned ANN, the output value of 0 through 1 is obtained from that image feature quantity. Accordingly, if this value is close to "1", it is highly likely that the shadow is abnormal; whereas, if this value is close to "0", it is highly likely that the shadow is normal. Thus, in the conventional CAD, the abnormal shadow candidates have been detected according to the output value obtained from this method.” Also, para, “[0015] The invention described in Structure (3) is the image processing method described in Structure (1) or (2) wherein the aforementioned training input image includes a plurality of training feature images created by applying image processing to the training input image, in the learning step, the pixel value of the pixel of interest located at the corresponding position in each of a plurality of the training input images is inputted into the discrimination device, and in the training output image, the pixel value of the pixel corresponding to the pixel of interest is set as the learning target value for the input of the discrimination device.”)

It would have been obvious to a person of ordinary skill in the art at the time the application was filed to apply the abnormal shadow training teachings of Ishida to the teachings of MATSUMOTO in order to develop an apparatus for detecting the candidate area for the abnormal shadow (para, “[0003] In the medical field, this method is used to develop the apparatus for detecting a candidate area for the abnormal shadow by recognizing the pattern of the image area assumed to be the shadow (called the abnormal shadow) of a portion of lesion from the medical image obtained by examination of radiographing. This apparatus is called the CAD (Computer Aided Diagnosis Apparatus).”)

Regarding claim 17, MATSUMOTO teaches a non-transitory computer-readable storage medium storing a work support program for causing a processor provided in a work support apparatus to execute (see para 0009): acquiring a medical image (para, “[0057] When the medical image is received from the image generation apparatus 1, the received medical image is stored in the image DB 5.
On the basis of the header information of the received medical image, the management information is generated and stored in the image management table. When the CAD information is received from the CAD 2, the received CAD information is stored in the image DB 5, and a record of which UID agrees with the CAD information in the image management table is retrieved, and the file name, the file location, and the like of the CAD information are written in the retrieved record. Thus, the medical image and the CAD information generated from the medical image are associated with each other, and stored in the image DB 5 so as to be retrievable.”); extracting, by an extraction process, a region to be a target of creation work of a medical document by a user in the medical image (para, “[0032] The medical image display system 100 is a system in which: a medical image is taken; on the basis of the medical image, abnormal shadow candidate(s) is detected; and the detection result along with the medical image is provided to an image interpreter(s).” Also, para, “[0041] After finishing the process of detecting abnormal shadow candidates with the detection algorithm, the CAD 2 generates an abnormal shadow candidate detection result (hereinafter called CAD information). The CAD information includes: positional information of the region (contour) of each detected abnormal shadow candidate; the type of each detected abnormal shadow candidate (e.g. nodular shadows and cardiac hypertrophy); the number of the detected abnormal shadow candidates; the level of severity of each abnormal shadow candidate as a disease (e.g. the risk of death); and an abnormal shadow probability indicating a probability that the abnormal shadow candidate is an abnormal shadow. The CAD 2 attaches the generated CAD information to the header information of the medical image on which the process of detecting abnormal shadow candidates has been performed, and sends the medical image to the image DB 5 and/or the information processing apparatus 3 through the communication unit. The CAD information may be accumulated in the image DB 5 as a file separate from the corresponding medical image and associated with the corresponding medical image such that they are recognized to correspond to each other.”); deriving, by using a trained model, findings of the region which has been extracted (para, “[0040] As the detection algorithm for detecting abnormal shadow candidates, an algorithm known to the public can be adopted. For example, fully convolutional networks (FCN), which is a deep learning model, can be used. Details of an FCN is described later.” Also, para, “[0041] After finishing the process of detecting abnormal shadow candidates with the detection algorithm, the CAD 2 generates an abnormal shadow candidate detection result (hereinafter called CAD information). The CAD information includes: positional information of the region (contour) of each detected abnormal shadow candidate; the type of each detected abnormal shadow candidate (e.g. nodular shadows and cardiac hypertrophy); the number of the detected abnormal shadow candidates; the level of severity of each abnormal shadow candidate as a disease (e.g. the risk of death); and an abnormal shadow probability indicating a probability that the abnormal shadow candidate is an abnormal shadow. The CAD 2 attaches the generated CAD information to the header information of the medical image on which the process of detecting abnormal shadow candidates has been performed, and sends the medical image to the image DB 5 and/or the information processing apparatus 3 through the communication unit. The CAD information may be accumulated in the image DB 5 as a file separate from the corresponding medical image and associated with the corresponding medical image such that they are recognized to correspond to each other.”), and in response to receiving an operation by the user in which the region is designated by the user, performing control to display, in an identifiable manner, a status for the region which has been designated among a plurality of statuses related to the creation work of the medical document by the user (para, “[0069] In the process of determining the number of image interpreters, the controller 31 determines the number of image interpreters according to whether or not an abnormal shadow candidate is present. Because only medical images that are highly likely to include an abnormal shadow(s) are interpreted by a plurality of image interpreters, accuracy in image interpretation is improved, and increase of time for image interpretation is restrained.” Also, para, “[0061] By performing the process using FCN, a heat map is output for each type of lesion to be diagnosed. The heat map shows probabilities indicating respective points on an image being a lesion.”) and use at least one of a plurality of comments on findings which are generated based on the findings of the region to create the medical document (para, “[0131] The viewer screen also displays an input section for the image interpreter to input findings on the specified lesion region. The input section has checkboxes for selecting a lesion type of the specified lesion region. When the image interpreter checks the lesion type with the operation unit 42, checkboxes for selecting findings (characteristics (e.g. small round, amorphous or indistinct, pleomorphic), categories, etc.) on the lesion region for the checked lesion type are displayed.”), wherein the status for the designated region is displayed in the identifiable manner for indicating the creation work is incomplete (para, “[0135] The image interpretation result information includes, as described above, information on the lesion type, the number of lesion regions determined to be the lesion, positional information of each lesion region, and findings. The image interpretation result information may also include the image interpretation completion information, the reassignment instruction information and reassignment criteria.” Also, see Fig. 9.)

MATSUMOTO does not explicitly teach: wherein the trained model is trained using training data, the training data comprises a large number of combinations of a medical image including an abnormal shadow, information specifying a region in the medical image in which the abnormal shadow is present, and a finding of the abnormal shadow.

Ishida teaches: wherein the trained model is trained using training data, the training data comprises a large number of combinations of a medical image including an abnormal shadow, information specifying a region in the medical image in which the abnormal shadow is present, and a finding of the abnormal shadow (para, “[0004] In common practice, when a discrimination device is used for pattern recognition, for example, preparation is made to get the pattern image of an abnormal shadow to be detected. Then image feature quantity including such statistical values as the average pixel value and distribution value or such geometric feature quantities as size and circularity in the image area of that abnormal shadow are inputted into the ANN as training data. Further, the ANN is made to learn in such a way that the output value close to "1" should be outputted if the pattern is similar to that of the abn…
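For orientation, the candidate-screening logic the rejection quotes from MATSUMOTO (paragraph [0170], Steps S401-S402), combined with the 0-to-1 discriminator output described in Ishida, can be sketched as below. This is an illustrative reconstruction, not code from either reference; the function name, the data layout, and the 0.5 threshold are all assumptions for the sketch.

```python
def is_abnormal_shadow(cad_info, threshold=0.5):
    """Sketch of MATSUMOTO Steps S401-S402: first check whether any
    abnormal shadow candidate is present in the CAD information, then
    compare the candidate's abnormal shadow probability (the 0-to-1
    discriminator output described by Ishida) against a threshold."""
    candidates = cad_info.get("candidates", [])
    if not candidates:  # Step S401: no abnormal shadow candidate present
        return False
    # Step S402: probability >= threshold -> treated as an abnormal shadow
    return any(c["probability"] >= threshold for c in candidates)

# Example with the assumed 0.5 threshold
print(is_abnormal_shadow({"candidates": [{"probability": 0.8}]}))  # True
print(is_abnormal_shadow({"candidates": [{"probability": 0.2}]}))  # False
```

The two-step structure mirrors the quoted flowchart: the presence check gates the threshold comparison, so images with no candidates are never scored.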

Prosecution Timeline

Oct 05, 2023
Application Filed
Mar 13, 2025
Non-Final Rejection — §101, §103
May 22, 2025
Response Filed
Aug 18, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12475384
SELF-SUPERVISED VISUAL-RELATIONSHIP PROBING
2y 5m to grant · Granted Nov 18, 2025
Patent 12346800
META-FEATURE TRAINING MODELS FOR MACHINE LEARNING ALGORITHMS
2y 5m to grant · Granted Jul 01, 2025
Patent 12293290
Sparse Local Connected Artificial Neural Network Architectures Involving Hybrid Local/Nonlocal Structure
2y 5m to grant · Granted May 06, 2025
Patent 12242957
DEVICE AND METHOD FOR THE GENERATION OF SYNTHETIC DATA IN GENERATIVE NETWORKS
2y 5m to grant · Granted Mar 04, 2025
Patent 12217156
COMPUTING TEMPORAL CONVOLUTION NETWORKS IN REAL TIME
2y 5m to grant · Granted Feb 04, 2025
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 17%
With Interview (+18.4%): 35%
Median Time to Grant: 4y 3m
PTA Risk: Moderate
Based on 54 resolved cases by this examiner. Grant probability derived from career allow rate.
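The headline projections appear to follow directly from the examiner's career numbers (9 grants out of 54 resolved cases). A quick sanity check, assuming the percentages are simple ratios rounded to whole points:

```python
granted, resolved = 9, 54

# Career allow rate -> headline grant probability
career_allow_rate = granted / resolved        # about 0.167
print(round(career_allow_rate * 100))         # 17

# Adding the +18.4-point interview lift reproduces the "with interview" figure
print(round(career_allow_rate * 100 + 18.4))  # 35
```

Both rounded values match the figures shown above, which suggests the projections are straight arithmetic on the career data rather than a separate model.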
