DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 7/18/2024 and 12/26/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Status of Claims
Claims 1-15 are pending in this application.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 14 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent-eligible subject matter because, under the broadest reasonable interpretation of the claim when read in light of the specification and from the perspective of one skilled in the art, claim 14 covers both statutory and non-statutory embodiments and therefore embraces subject matter that is not eligible for patent protection.
As per claim 14, a “computer program product” may be interpreted as a transitory signal, which is non-statutory subject matter, unless modified by a limitation rendering it non-transitory. Page 15 of the specification as filed states, “In one example, the computer program product may be downloadable from a server, e.g., via the internet.” Thus, in at least one example, the claimed computer program product encompasses a transitory signal rather than a physical article.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4 and 8-15 are rejected under 35 U.S.C. 103 as being unpatentable over Chaudhury et al. (European Patent Application Publication EP 4099338, listed in the IDS dated 7/18/2024) in view of Tanaka et al. (Japanese Patent Application 2021184169).
As per claim 1, Chaudhury et al. discloses:
A method to manipulate a medical image acquisition system comprising an image acquisition sensor (Paragraph [0001] – Autonomous imaging), the method comprising using a large language model configured to:
receive a user input (Paragraph [0005]), wherein the user input comprises:
a medical imaging exam type (Paragraph [0070] – which body part is to be scanned);
a plurality of symptoms; and
an instruction to manipulate the medical image acquisition system (Paragraphs [0075-0076] – acquisition starts and stops and image data is evaluated); and
based on the user input (Paragraph [0071]), determine:
a first position instruction for the image acquisition sensor with respect to a subject (Paragraph [0072]); and
a first plurality of medical image acquisition system settings (Paragraph [0005] - instructions for controlling an autonomous imaging apparatus are output), wherein the first plurality of medical image acquisition system settings comprises at least image acquisition sensor settings.
Chaudhury et al. fails to disclose, but Tanaka et al. in the same field of endeavor teaches:
A language model (“Further, as a technique related to natural language processing, for example, BERT (Bidirectional Encoder Representations from Transformers) may be applied.” BERT is a type of language model);
A plurality of symptoms (“In step S21, the acquisition unit 121 according to the present embodiment acquires images captured by a plurality of imaging techniques using the imaging device 101. This imaging technique may include OCT, fundus camera, SLO, OCTA and the like. The examiner may specify a single or a plurality of symptoms to be diagnosed via the operation unit 104, and the acquisition unit 121 may select a combination of images to be acquired accordingly.”);
the first plurality of medical image acquisition system settings comprises at least image acquisition sensor settings (“Further, the instruction from the examiner to which this modification is applicable may be an instruction before shooting as well as an instruction after shooting, for example, an instruction regarding various adjustments and an instruction regarding setting of various shooting conditions”);
It would have been obvious to a person having ordinary skill in the art at the effective filing date of the invention to modify the method of Chaudhury et al. with the language model and symptom inclusion capabilities of Tanaka et al., because doing so is a case of combining prior art elements according to known methods to yield predictable results. Chaudhury et al. specifies the use of machine learning models and the specification of the type of image to be taken, but does not specify the use of a language model or the use of symptoms in image selection. Tanaka et al. teaches both of those elements, and the results of the combination would have been predictable.
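For clarity of the record, the mapping of claim 1 set forth above may be summarized as a data flow from the user input (exam type, symptoms, and manipulation instruction) to a first position instruction and a first plurality of acquisition settings. The following minimal Python sketch is the Examiner's hypothetical illustration of that flow; the function, field names, and example values are illustrative only and appear in neither Chaudhury et al. nor Tanaka et al.:

```python
from dataclasses import dataclass

@dataclass
class UserInput:
    exam_type: str          # medical imaging exam type
    symptoms: list[str]     # plurality of symptoms
    instruction: str        # instruction to manipulate the acquisition system

def determine_acquisition_plan(user_input: UserInput) -> dict:
    """Stand-in for the claimed language-model inference step.

    A real system would prompt a (large) language model with the user
    input; a fixed lookup plays that role here for illustration only.
    """
    # First position instruction for the sensor with respect to a subject.
    position = {"echocardiogram": "parasternal long-axis window"}.get(
        user_input.exam_type, "default scan position")
    # First plurality of acquisition-system settings, comprising at least
    # image-acquisition-sensor settings.
    settings = {"sensor_frequency_mhz": 3.5, "gain_db": 40, "depth_cm": 16}
    return {"position_instruction": position, "settings": settings}

plan = determine_acquisition_plan(
    UserInput("echocardiogram", ["chest pain", "dyspnea"], "start the scan"))
print(plan)
```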
As per claim 2, the combination of Chaudhury et al. and Tanaka et al. discloses all of the limitations of claim 1 above. Tanaka et al. in the combination further discloses:
conveying the first position instruction for the image acquisition sensor with respect to the subject to a user; and/or moving the imaging sensor to the first position (“Here, the motion contrast data is data showing a change between a plurality of volume data obtained by controlling the measurement light to be scanned a plurality of times in the same region (same position) of the eye to be inspected. At this time, the volume data is composed of a plurality of tomographic images obtained at different positions.”).
As per claim 3, the combination of Chaudhury et al. and Tanaka et al. discloses all of the limitations of claim 1 above. Chaudhury et al. in the combination further discloses:
responsive to the image acquisition sensor being positioned at the first position: determine an instruction to acquire a medical image; wherein the method optionally further comprises acquiring the medical image according to the instruction to acquire the medical image (Paragraph [0075] – successive image acquisition).
As per claim 4, the combination of Chaudhury et al. and Tanaka et al. discloses all of the limitations of claim 3 above. Chaudhury et al. in the combination further discloses:
responsive to receiving the acquired medical image: determine a second plurality of medical image acquisition system settings, wherein the second plurality of medical image acquisition system settings comprises at least image acquisition sensor settings; and/or determine a second position instruction for the image acquisition sensor with respect to the subject (Paragraph [0075] – successive image acquisition).
As per claim 8, the combination of Chaudhury et al. and Tanaka et al. discloses all of the limitations of claim 1 above. Tanaka et al. in the combination further discloses:
receive positioning data of the medical image acquisition sensor with respect to the subject; and wherein one or a combination of the instructions are further based on the positioning data of the medical image acquisition sensor with respect to the subject (“Here, the motion contrast data is data showing a change between a plurality of volume data obtained by controlling the measurement light to be scanned a plurality of times in the same region (same position) of the eye to be inspected. At this time, the volume data is composed of a plurality of tomographic images obtained at different positions.”).
As per claim 9, the combination of Chaudhury et al. and Tanaka et al. discloses all of the limitations of claim 1 above. Tanaka et al. in the combination further discloses:
the large language model is trained through unsupervised learning, and wherein unsupervised learning comprises: receiving an unlabeled data set comprising a plurality of instances, each instance including one or more features; and initializing the large language model with adjustable parameters; and applying an unsupervised learning algorithm to the data set using the large language model, wherein the unsupervised learning algorithm modifies the adjustable parameters of the large language model based on relationships between the features in the instances without referring to a predetermined label or outcome and wherein the modification of the adjustable parameters is performed iteratively until a stopping criterion is met (“Further, as a technique related to natural language processing, a trained model obtained by pre-learning document data by unsupervised learning may be used.” The steps enumerated are inherent to the unsupervised training of a language model).
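By way of illustration of the recited unsupervised-learning steps (an unlabeled data set of multi-feature instances, initialization of adjustable parameters, and iterative label-free updates until a stopping criterion is met), the following minimal Python sketch trains a toy bigram language model. The corpus, learning rate, and tolerance are hypothetical values selected for illustration and are not taken from the cited references:

```python
import numpy as np

# Unlabeled data set: each instance is a token sequence whose tokens are
# its features; no labels or outcomes are provided.
corpus = [
    "acquire a cardiac ultrasound image",
    "acquire a lung ultrasound image",
    "position the probe and acquire the image",
]
tokens = [s.split() for s in corpus]
vocab = sorted({t for seq in tokens for t in seq})
idx = {t: i for i, t in enumerate(vocab)}
V = len(vocab)

# Initialize the model's adjustable parameters (a bigram logit table).
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Relationships between features within each instance: adjacent-token
# pairs, used as the training signal without any predetermined label.
pairs = [(idx[a], idx[b]) for seq in tokens for a, b in zip(seq, seq[1:])]

lr, prev_loss = 0.5, float("inf")
for step in range(1000):
    loss, grad = 0.0, np.zeros_like(W)
    for a, b in pairs:
        p = softmax(W[a])
        loss -= np.log(p[b])
        p[b] -= 1.0              # gradient of cross-entropy w.r.t. logits
        grad[a] += p
    loss /= len(pairs)
    W -= lr * grad / len(pairs)  # modify the adjustable parameters
    if abs(prev_loss - loss) < 1e-6:
        break                    # stopping criterion met
    prev_loss = loss
```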
As per claim 10, the combination of Chaudhury et al. and Tanaka et al. discloses all of the limitations of claim 1 above. Tanaka et al. in the combination further discloses:
the large language model is trained through supervised learning, and wherein supervised learning comprises: receiving a labeled dataset comprising a plurality of instances, each instance comprising an input feature and an associated output feature; initializing the large language model with adjustable parameters; and applying a supervised learning algorithm to the labeled dataset using the large language model, wherein the supervised learning algorithm iteratively modifies the adjustable parameters of the large language model based on a comparison between the large language model prediction given the input feature and the associated output feature until a predetermined stopping criterion is satisfied (“The information including one may be the data labeled (annotated) with the input data as the correct answer data (for supervised learning)”; “The learning of the various trained models described above may be not only supervised learning (learning with labeled learning data) but also semi-supervised learning.” The steps enumerated are inherent to the supervised training of a language model).
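The recited supervised-learning steps (a labeled data set pairing input features with associated output features, initialization of adjustable parameters, and iterative updates driven by comparing model predictions against the labels until a predetermined stopping criterion is satisfied) may be illustrated by the following minimal Python sketch; the toy data and hyperparameters are hypothetical choices for illustration:

```python
import numpy as np

# Labeled data set: each instance pairs an input feature vector with an
# associated output feature (its label).
X = np.array([[0.0, 1.0], [1.0, 1.0], [1.0, 0.0], [0.0, 0.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])

# Initialize the model's adjustable parameters.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=2)
b = 0.0

lr, prev_loss = 0.5, float("inf")
for step in range(10_000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # model predictions
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    # The comparison between the prediction and the associated output
    # feature (cross-entropy loss) drives the iterative parameter updates.
    loss = -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
    err = p - y
    w -= lr * (X.T @ err) / len(y)
    b -= lr * err.mean()
    if abs(prev_loss - loss) < 1e-9:
        break                                # predetermined stopping criterion
    prev_loss = loss
```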
As per claim 11, the combination of Chaudhury et al. and Tanaka et al. discloses all of the limitations of claim 10 above. Tanaka et al. in the combination further discloses:
the labeled data set comprises medical imaging acquisition exams, wherein, in each instance, the input feature comprises: the user input; and optionally: the pressure, the moisture and the medical image; and the associated output comprises: the first position instruction for the image acquisition sensor with respect to a subject; the first plurality of medical image acquisition system settings; and optionally: the instruction to acquire a medical image; the second plurality of medical image acquisition system settings; the second position instruction for the image acquisition sensor with respect to the subject; the medical examination report; the pressure instruction; the gel-fluid application instruction (“The information including one may be the data labeled (annotated) with the input data as the correct answer data (for supervised learning)”; “The learning of the various trained models described above may be not only supervised learning (learning with labeled learning data) but also semi-supervised learning.” The steps enumerated are inherent to the supervised training of a language model).
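One labeled instance of the data set recited in claim 11 may be visualized as follows; every field name and value in this sketch is a hypothetical illustration of the claim language and is not drawn from the references:

```python
# One instance of the claimed labeled data set (a medical imaging
# acquisition exam). Optional fields are included for completeness.
instance = {
    "input": {
        "user_input": {
            "exam_type": "abdominal ultrasound",
            "symptoms": ["right upper quadrant pain", "nausea"],
            "instruction": "begin acquisition",
        },
        # Optional input features recited in the claim:
        "pressure": 2.1,           # probe contact pressure (arbitrary units)
        "moisture": 0.8,           # coupling-surface moisture (arbitrary units)
        "medical_image": "img_0001.dcm",
    },
    "output": {
        "first_position_instruction": "subcostal, angled toward gallbladder",
        "first_settings": {"frequency_mhz": 5.0, "gain_db": 45},
        # Optional associated outputs recited in the claim:
        "instruction_to_acquire": True,
        "second_settings": {"frequency_mhz": 3.5, "gain_db": 50},
        "second_position_instruction": "intercostal window",
        "medical_examination_report": "No sonographic abnormality detected.",
        "pressure_instruction": "increase probe pressure slightly",
        "gel_fluid_application_instruction": "apply additional gel",
    },
}
```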
As per claim 12, the combination of Chaudhury et al. and Tanaka et al. discloses all of the limitations of claim 1 above. Tanaka et al. in the combination further discloses:
the large language model is trained through reinforcement learning, and wherein reinforcement learning comprises: initializing the large language model with adjustable parameters; applying a reinforcement learning algorithm, wherein the large language model interacts with an environment, performs actions based on its current state and parameters, and receives rewards or penalties based on the performed actions; and iteratively adjusting the model parameters based on the received rewards or penalties until a predetermined stopping criterion is met (“Further, as a technique related to natural language processing, a trained model obtained by transfer learning (or fine tuning) of a trained model obtained by pre-learning may be used.” Reinforcement learning of a pre-trained language model is a form of the fine tuning (transfer learning) taught by Tanaka et al., and the steps enumerated are inherent to reinforcement learning of a language model).
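The recited reinforcement-learning steps (initialization of adjustable parameters; an agent that performs actions based on its current state and parameters, receives rewards or penalties, and is updated iteratively until a predetermined stopping criterion is met) may be illustrated by the following minimal policy-gradient sketch in Python; the toy environment and reward function are hypothetical illustrations only:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)            # initialize the adjustable parameters (logits)

def env_reward(action: int) -> float:
    # Toy one-step environment: action 1 earns a reward, action 0 a penalty.
    return 1.0 if action == 1 else -1.0

avg_reward = 0.0
for step in range(5000):
    p = np.exp(theta) / np.exp(theta).sum()  # policy from current parameters
    action = int(rng.choice(2, p=p))         # perform an action
    r = env_reward(action)                   # receive reward or penalty
    grad = -p                                # REINFORCE: grad of log pi(action)
    grad[action] += 1.0
    theta += 0.1 * r * grad                  # adjust parameters from the reward
    avg_reward = 0.99 * avg_reward + 0.01 * r
    if avg_reward > 0.95:
        break                                # predetermined stopping criterion
```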
As per claim 13, the combination of Chaudhury et al. and Tanaka et al. discloses all of the limitations of claim 12 above. Tanaka et al. in the combination further discloses:
the interaction of the large language model with the environment comprises instructing, configuring, and performing medical imaging examinations according to the method of claim 1, wherein the rewards or penalties based on the performed actions are based on a user defined performance or based on a pre-defined loss function (“Further, as a technique related to natural language processing, a trained model obtained by transfer learning (or fine tuning) of a trained model obtained by pre-learning may be used.” Reinforcement learning of a pre-trained language model is a form of the fine tuning taught by Tanaka et al., and the steps enumerated are inherent to reinforcement learning of a language model).
As per claim 14, the combination of Chaudhury et al. and Tanaka et al. discloses all of the limitations of claim 1 above. Chaudhury et al. in the combination further discloses:
A computer program product comprising instructions, which when executed by a processor, cause the processor to perform the method of claim 1 (Claim 14).
As per claim 15, the combination of Chaudhury et al. and Tanaka et al. discloses all of the limitations of claim 1 above. Chaudhury et al. in the combination further discloses:
A medical image acquisition system (Figure 5) comprising: a medical image acquisition sensor (Paragraph [0040]); a user interface (Paragraph [0071]); and a processor (Paragraph [0040]) in communication with the medical image acquisition sensor and the user interface, the processor being configured to perform the method of claim 1.
Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Chaudhury et al. (European Patent Application Publication EP 4099338, listed in the IDS dated 7/18/2024) and Tanaka et al. (Japanese Patent Application 2021184169) in view of Dalvin et al. (U.S. Patent Application Publication 2019/0239850).
As per claim 5, the combination of Chaudhury et al. and Tanaka et al. discloses all of the limitations of claim 3 above. The combination fails to disclose but Dalvin et al. in the same field of endeavor teaches:
generate a medical examination report (Paragraphs [0031],[0039], [0044] & [0057] – Exam data and its interpretation is output).
It would have been obvious to a person having ordinary skill in the art at the effective filing date of the invention to modify the method of Chaudhury et al. and Tanaka et al. with the data output capabilities of Dalvin et al., because doing so is a case of combining prior art elements according to known methods to yield predictable results. The combination specifies the targeted gathering of exam data but does not specify the output of such data; Dalvin et al. teaches this element, and the results of the combination would have been predictable.
As per claim 6, the combination of Chaudhury et al., Tanaka et al. and Dalvin et al. discloses all of the limitations of claim 5 above. Dalvin et al. in the combination further discloses:
the examination report is based on at least one of: the user input; the medical image; a medical report template; and a medical proficiency level of a potential recipient (Paragraphs [0031],[0039], [0044] & [0057] – Exam data and its interpretation is output).
Allowable Subject Matter
Claim 7 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Examiner Notes
The Examiner cites particular paragraphs in the references as applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or as disclosed by the Examiner.
Communications via Internet e-mail are at the discretion of the applicant and require written authorization. Should the Applicant wish to communicate via e-mail, including the following paragraph in their response will allow the Examiner to do so:
“Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with me concerning any subject matter of this application by electronic mail. I understand that a copy of these communications will be made of record in the application file.”
Should e-mail communication be desired, the Examiner can be reached at Edwin.Leland@USPTO.gov.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWIN S LELAND III whose telephone number is (571)270-5678. The examiner can normally be reached 8:00 - 5:00 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hai Phan can be reached at 571-272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EDWIN S LELAND III/Primary Examiner, Art Unit 2654