DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
The instant application claims priority to and benefit of Japanese Application No. JP 2023-033155, filed on March 3, 2023. Thus, the effective filing date of Claims 1-18 is 03/03/2023.
Information Disclosure Statement
The information disclosure statement (“IDS”) filed on 02/29/2024 was reviewed and the listed references were noted.
Drawings
The drawings, filed on 17 pages, have been considered and placed on record in the file.
Status of Claims
Claims 1-18 are currently pending.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent Claim 1 is directed to determining the presence or absence of correspondence between an image and the description of a sentence to assist in creating a document.
Step 1:
With regard to Step 1, the instant claim is directed to an apparatus, which is among the statutory categories of invention.
Step 2A – Prong 1:
With regard to Step 2A – Prong 1, for example in apparatus Claim 1, the limitations “acquire information regarding an object shown in one or more received images”, “acquire information described in one or more received sentences”, and “determine presence or absence of correspondence between the image and the sentence based on the information regarding the object and the described information”, as recited, describe an apparatus that, under its broadest reasonable interpretation, covers performance of the limitations in the mind/observation of a person inspecting one or more images and one or more sentences to determine a correlation between an image and a sentence to create a document. That is, nothing in the claim precludes the limitations from practically being performed in the mind or through the observation/judgment of a person determining whether an object within an image correlates to the information described in a finding sentence.
Additionally, the limitation “execute processing of assisting in creating a document including the image and the sentence based on the presence or absence of the correspondence”, as recited, describes an apparatus that, under its broadest reasonable interpretation, covers performance of the limitation in the mind/observation of a person creating a medical report using a general computer based on that person’s observation of the correlation between an image and a sentence. That is, nothing in the claim precludes the limitation from practically being performed in the mind or through the observation/judgment of a person creating a report that presents the correspondence of an image and a sentence.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, then it falls within the “Mental processes” grouping of abstract ideas, which includes concepts performed in the human mind, including an observation, evaluation, judgment, or opinion. Accordingly, the claim recites an abstract idea.
Step 2A – Prong 2:
The 2019 PEG defines the phrase “integration into a practical application” to require an additional element or a combination of additional elements in the claim to apply, rely on, or use the judicial exception. In the instant case, the additional elements in the claims do not apply, rely on, or use the judicial exception.
Step 2B:
Because the claims fail under Step 2A, the claims are further evaluated under Step 2B. The claims herein do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The recited processor and memory merely provide additional generic computer components, which are not considered to be significant. Accordingly, the claim is not patent eligible.
Further, with regard to dependent Claims 2-16 viewed individually, the additional elements of these claims, under their broadest reasonable interpretation, cover performance of the limitations in the mind and do not provide limitations that integrate the abstract idea into a practical application and/or amount to significantly more than the identified abstract idea. Accordingly, Claims 2-16 are rejected under 35 U.S.C. 101.
Furthermore, with regard to independent Claims 17 and 18, Claim 17 recites a method with steps corresponding to the elements of the apparatus recited in Claim 1, and Claim 18 recites a computer-readable storage medium storing a program with instructions corresponding to the steps recited in Claim 17. Therefore, the recited steps and instructions of Claims 17 and 18, respectively, are similarly rejected under 35 U.S.C. 101 in the same manner as described with respect to the analysis of Claim 1. Accordingly, Claims 17 and 18 are rejected under 35 U.S.C. 101.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 4, 14-15 and 17-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Akira et al. (JP 2016057695).
Consider Claim 1, Akira discloses “An information processing apparatus comprising: at least one processor; and at least one memory that stores a command to be executed by the at least one processor,” (Akira; Pg. 5; “the present embodiment can be realized by a computer device. The CPU 1001 mainly controls the operation of each component. The main memory 1002 stores a control program executed by the CPU” (emphasis added)) “wherein the at least one processor is configured to: acquire information regarding an object shown in one or more received images;” (Akira; Pg. 3; “On the interpretation report creation screen, the medical information (medical image data) to be interpreted received from the case information terminal 300 is displayed, and the user interprets the displayed medical image data and interprets it using a keyboard 1007 or the like described later.”) “acquire information described in one or more received sentences;” (Akira; Pg. 5, “The report sentence acquisition unit 106 acquires a report sentence (text data) that is input using the interpretation report creation screen and is the result of the user interpreting medical image data. The acquired report text is output to the description area acquisition unit 110.”) “determine presence or absence of correspondence between the image and the sentence based on the information regarding the object and the described information;” (Akira; Pg. 7; “In the present embodiment, the anatomical structure name indicated by each label of the region information is used as a keyword, and a description region is acquired using keyword matching for a report sentence.”) “and execute processing of assisting in creating a document including the image and the sentence based on the presence or absence of the correspondence.” (Akira; Pg. 7; “FIG. 7 shows an example of the presentation information displayed on the monitor 1005 in the present embodiment. 
The presentation information is displayed in the form of a message box 704.…and the report creation unit 101 saves the report text on the magnetic disk 1003 or the like and ends the report creation support process.”)
Consider Claim 4, Akira discloses “The information processing apparatus according to claim 1, wherein the processing is processing of issuing a warning in a case where the sentence corresponding to the image is not present.” (Akira; Pg. 7 “In step S306, the determination unit 112 determines the consistency between the attention area and the description area based on the attention area (gaze area).… That is, it is determined whether or not there is a description omission that is not described in the report sentence even though it is watched.” (emphasis added); Examiner notes the attention or gaze area is interpreted as the image containing the object of interest.)
Consider Claim 14, Akira discloses “The information processing apparatus according to claim 1, wherein the at least one processor is configured to:” (Akira; Pg. 5; “the present embodiment can be realized by a computer device. The CPU 1001 mainly controls the operation of each component.”) “analyze the received image or an original image serving as a creation source of the received image to acquire the information regarding the object.” (Akira; Pgs. 3-4; “The region information acquisition unit 102 acquires association information (region information) between the coordinate position of the medical image data transmitted from the case information terminal 300 and the region….The acquired association information is output to the attention area acquisition unit 108 and the description area acquisition unit 110. Such area information acquisition can be realized by applying a known segmentation method to medical image data and associating the segmentation result with the coordinate position of the image….The area information may be acquired by the area information acquisition unit 102 analyzing the medical image data to be interpreted…” (emphasis added)).
Consider Claim 15, Akira discloses “The information processing apparatus according to claim 1, wherein the image is a key image based on a medical image.” (Akira, Pg. 4; “Such area information acquisition can be realized by applying a known segmentation method to medical image data and associating the segmentation result with the coordinate position of the image….The area information may be acquired by the area information acquisition unit 102 analyzing the medical image data to be interpreted…”; Examiner notes the segmented image used for analysis in Akira is interpreted as a key image.)
Consider Claim 17, Claim 17 recites a method with steps corresponding to the elements recited in Claim 1. Therefore, the recited steps of this claim are mapped to the proposed reference in the same manner as the corresponding elements in its corresponding apparatus claim.
Consider Claim 18, Claim 18 recites a computer-readable storage medium storing a program with instructions corresponding to the elements recited in Claim 1. Therefore, the recited programming instructions of this claim are mapped to the proposed reference in the same manner as the corresponding elements in its corresponding apparatus claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
Determining the scope and contents of the prior art.
Ascertaining the differences between the prior art and the claims at issue.
Resolving the level of ordinary skill in the pertinent art.
Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Akira et al. (JP 2016057695).
Consider Claim 2, Akira discloses “The information processing apparatus according to claim 1, wherein the processing is processing of issuing a warning in a case where the image corresponding to the sentence is not present” (Akira; Pg. 9; “As described above, according to the second embodiment, it is possible to alert the doctor of omission of confirmation regarding an area that was not observed at the time of interpretation even though the doctor described in the report sentence.”; Examiner notes the user-observed area is interpreted as the image.)
Accordingly, before the effective filing date of the instant application, it would have been obvious to one of ordinary skill in the art to combine the first and second embodiments of Akira to further issue an alert to the user in the absence of an image corresponding to a sentence. One of ordinary skill in the art would be motivated to combine the first and second embodiments of Akira because combining prior art elements from different embodiments within the same field of endeavor according to known methods yields predictable results. Accordingly, the combination of the first and second embodiments of Akira discloses the invention of Claim 2.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Akira (JP 2016057695) in view of Cen (CN 113990431 with publication date of Jan. 28, 2022) and in further view of Harzig (JP 2020149682).
Consider Claim 3, Akira discloses “The information processing apparatus according to claim 1, wherein the processing is processing of issuing a warning in a case where the image corresponding to the sentence is not present” (Akira; Pg. 9; “As described above, according to the second embodiment, it is possible to alert the doctor of omission of confirmation regarding an area that was not observed at the time of interpretation even though the doctor described in the report sentence.”)
Akira does not explicitly teach “and a degree of importance of the sentence”. However, in an analogous field of endeavor, Cen teaches (Cen, Pgs. 6-7; “embodiment adopts LDA theme model set topic number is 6, the set threshold value is through each input to be generated report text dynamic calculation, taking all sentences in the maximum importance degree value of 1/3 and important degree value before 40% of the minimum value, calculating the maximum value of the two values as the threshold value extracted by the important sentence.”).
Accordingly, before the effective filing date of the instant application, it would have been obvious to one of ordinary skill in the art to combine Akira with the teachings of Cen to further identify important sentences within the user’s text inputs. One of ordinary skill in the art would be motivated to combine Akira and Cen to determine important sentences to create a concise report that highlights pertinent information for the reader. Accordingly, the combination of Akira and Cen discloses the above-recited limitations of Claim 3.
The combination of Akira and Cen does not explicitly disclose “which indicates a degree to which an image corresponding to the sentence is necessary”. However, in an analogous field of endeavor, Harzig teaches (Harzig; [0005]; “trained neural network generating the written report based on a sentence annotation model that provides an anomaly annotation ( annotation ) on a per-sentence basis, determining, by the trained neural network, an anomaly score associated with each written report”). Examiner notes that Harzig discloses (Harzig; [0049]; “Each sentence in the generated report may be analyzed to determine which sentences are associated with the anomaly. Further, in some example embodiments, an anomaly score representing a number of anomalous sentences in the report may be determined. The abnormality score may be used to automatically prioritize reports associated with images having the most abnormalities and reports associated with images that are relatively normal” (emphasis added)), and examiner has interpreted the anomaly score to indicate the importance of a sentence input. Harzig continues to teach (Harzig; [0014]; “the neural network is trained…using a plurality of patient image records associated with the text reports, the training process comprising analyzing, by the neural network, each text report based on a sentence annotation model to generate sentence-by-sentence anomaly annotations…, extracting, by the neural network, image features from each patient image record, and generating, by the neural network, an image report generation model by associating the extracted image features with the text report having sentence-by-sentence anomaly annotations.” (emphasis added)). Examiner notes Harzig’s disclosed report generation model is interpreted to extract and associate images to sentences based on the anomaly analysis of each sentence.
Accordingly, before the effective filing date of the instant application, it would have been obvious to one of ordinary skill in the art to combine Akira and Cen with the teachings of Harzig to further calculate the importance of a sentence to determine the need for a corresponding image. One of ordinary skill in the art would be motivated to combine Akira, Cen, and Harzig to efficiently group corresponding findings and images, and to prevent the additional burden on the user of finding corresponding images for important findings. Accordingly, the combination of Akira, Cen, and Harzig discloses the invention of Claim 3.
Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Akira et al. (JP 2016057695) in view of Shibuya et al. (US 2023/0253083 with filing date Feb. 3, 2023).
Consider Claim 5, Akira does not explicitly disclose “The information processing apparatus according to claim 1, wherein the processing is processing of rearranging, based on an order of one of the sentences and the images determined to have the correspondence, an order of the other.” However, in an analogous field of endeavor, Shibuya et al. teaches (Shibuya et al.; [0069]; “The key image attachment function 76b is a function that automatically attaches a medical image related to a reading order selected from the reading order list (medical image to be read) and a medical image identified as a related image by the above-mentioned related image identifying function 76a (i.e., a medical image captured by a modality different from that of the medical image to be read) to a predetermined position in the reading report.”).
Accordingly, before the effective filing date of the instant application, it would have been obvious to one of ordinary skill in the art to combine Akira with the teachings of Shibuya et al. to further arrange the images and sentences in the same order of appearance in a report. One of ordinary skill in the art would be motivated to combine Akira and Shibuya et al. so that (Shibuya et al.; [0069]; “the user does not have to attach the so-called key image to the reading report by himself/herself. Therefore, the time and effort required for attaching the key image can be reduced.”). Accordingly, the combination of Akira and Shibuya et al. discloses the invention of Claim 5.
Consider Claim 6, the combination of Akira and Shibuya et al. discloses “The information processing apparatus according to claim 5, wherein the processing is processing of rearranging an order of the images based on an order of the sentences.” (Shibuya et al.; [0069]; “The key image attachment function 76b is a function that automatically attaches a medical image related to a reading order selected from the reading order list (medical image to be read) and a medical image identified as a related image by the above-mentioned related image identifying function 76a (i.e., a medical image captured by a modality different from that of the medical image to be read) to a predetermined position in the reading report.” (emphasis added)). The proposed combination as well as the motivation for combining the Akira and Shibuya references presented in the rejection of Claim 5, apply to Claim 6 and are incorporated herein by reference. Thus, the apparatus recited in Claim 6 is met by Akira and Shibuya.
Claims 11 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Akira et al. (JP 2016057695) in view of Kota (JP 2010160590).
Consider Claim 11, Akira does not explicitly disclose “The information processing apparatus according to claim 1, wherein the at least one processor is configured to: execute the processing each time any one of an input of the image or an input of the sentence is received.”.
However, in an analogous field of endeavor, Kota teaches “The information processing apparatus according to claim 1, wherein the at least one processor:” (Kota; Pg. 3; “The processing unit 12 is configured to be able to perform various types of information processing”) “is configured to execute the processing each time any one of an input of the image or an input of the sentence is received” (Kota; [0015]; “A program of the present invention causes a computer to execute a process of generating an input screen for prompting data input for creating a medical report to be displayed on an output screen of a display unit…a process of generating the output screen for displaying at least a part of the medical report in which data by the data input is reflected…”; Examiner notes Kota teaches a display unit that displays the medical report output while the apparatus is actively receiving input. Examiner has interpreted the real-time display of the medical report as executing the processing after each sentence input is received.)
Accordingly, before the effective filing date of the instant application, it would have been obvious to one of ordinary skill in the art to combine Akira with the teachings of Kota to execute the processing of an image or sentence input every time an input is received. One of ordinary skill in the art would be motivated to combine Akira and Kota so that the user can easily view (Kota; [0006]; “an output layout while editing a medical report or the like when creating the medical report or the like.”). Accordingly, the combination of Akira and Kota discloses the invention of Claim 11.
Consider Claim 16, the combination of Akira and Kota discloses “The information processing apparatus according to claim 15, wherein the object includes at least one of an organ or a tumor,” (Akira; Pg. 4; “In this embodiment, an anatomical structure (for example, heart, liver, upper right lobe, etc.) is used as a medically divided region.”) “and the information regarding the object includes at least one of a size, a property, a disease name, a position, or a feature amount.” (Kota; [0090]; “In the following description, it is assumed that a text box ARE1 is provided in the palette area B9, and items related to the size of the tumor are displayed. To be specific, like "tumor [(text box B9)] cm", the prefix "tumor" is displayed in the front part of the text box B9, and the suffix "cm" is displayed in the rear part thereof”). The proposed combination as well as the motivation for combining the Akira and Kota references presented in the rejection of Claim 11, apply to Claim 16 and are incorporated herein by reference. Thus, the apparatus recited in Claim 16 is met by Akira and Kota.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Akira et al. (JP 2016057695) in view of Kota (JP 2010160590) and in further view of Yohei Momoki (US 2023/0005580 with filing date Sept. 11, 2022).
Consider Claim 12, although the combination of Akira and Kota discloses “The information processing apparatus according to claim 11, wherein there are provided a first mode in which the processing is executed each time any one of the input of the image or the input of the sentence is received” (Kota; [0015]; “A program of the present invention causes a computer to execute a process of generating an input screen for prompting data input for creating a medical report to be displayed on an output screen of a display unit…a process of generating the output screen for displaying at least a part of the medical report in which data by the data input is reflected), this combination does not explicitly teach “and a second mode in which the processing is executed after inputs of all the images and all the sentences are received.” However, in an analogous field of endeavor, Momoki teaches (Momoki; [0058]; “The acquisition unit 21 acquires a first medical image G1 for creating an interpretation report from the image server….Further, the acquisition unit 21 acquires, from the image server 5, a plurality of second medical images ….Further, the acquisition unit 21 acquires an interpretation report R2-i for the plurality of second medical images G2-i from the report server 7. The acquired first medical image G1 and second medical images G2-i, and the interpretation report R2-i are saved in the storage 13.”; Examiner notes that the acquisition of the first image, the plurality of second images, and the interpretation report is interpreted as acquiring all the images and sentences before executing the processing.)
Accordingly, before the effective filing date of the instant application, it would have been obvious to one of ordinary skill in the art to combine Akira and Kota with the teachings of Momoki to process and create a medical report after receiving all the images and sentences. One of ordinary skill in the art would be motivated to combine Akira, Kota and Momoki to expedite the processing time of report generation. Accordingly, the combination of Akira, Kota and Momoki discloses the invention of Claim 12.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Akira et al. (JP 2016057695) in view of Lyman et al. (US 2022/0051771).
Consider Claim 13, Akira discloses “The information processing apparatus according to claim 1, wherein the at least one processor is configured to:” (Akira; Pg. 5; “the present embodiment can be realized by a computer device. The CPU 1001 mainly controls the operation of each component.”)
Akira does not explicitly disclose “acquire a degree of certainty indicating certainty of correspondence between the information regarding the object and the described information; and determine whether or not the image and the sentence correspond to each other based on the degree of certainty.” However, in an analogous field of endeavor, Lyman teaches (Lyman, [0080]; “Confidence score data 460 can be mapped to some or all of the diagnosis data 440 for each abnormality, and/or for the scan as a whole. This can include an overall confidence score for the diagnosis, a confidence score for the binary indicator of whether or not the scan was normal, a confidence score for the location a detected abnormality, and/or confidence scores for some or all of the abnormality classifier data. This may be generated automatically by a subsystem, for example, based on the annotation author data and corresponding performance score of one or more identified users and/or subsystem attributes such as interactive interface types or medical scan image analysis functions indicated by the annotation author data”; Examiner notes the classifier data taught in Lyman is interpreted as object and description information, and as a factor used to map correspondence between an image and author annotation data to create a medical report.)
Accordingly, before the effective filing date of the instant application, it would have been obvious to one of ordinary skill in the art to combine Akira with the teachings of Lyman to further map an image and a sentence together by the correspondence between the information or classifier of the image containing an object and the information of the sentence. One of ordinary skill in the art would be motivated to combine Akira and Lyman to output an accurate pair of an image and sentence by further comparing the relationship between the information of the image and sentence. Accordingly, the combination of Akira and Lyman discloses the invention of Claim 13.
Allowable Subject Matter
Claims 7-10 are not rejected over the prior art and are objected to as being dependent upon a rejected base claim, but would be allowable if: (1) rewritten in independent form including all of the limitations of the base claim and any intervening claims; and (2) the above-described rejection of these claims under 35 U.S.C. 101 is overcome.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Annie Pham whose telephone number is (571)272-1673. The examiner can normally be reached Mon-Fri 9:00a – 5:00p.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached on (571)272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For
additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANNIE H PHAM/Examiner, Art Unit 2662
/Siamak Harandi/Primary Examiner, Art Unit 2662