DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of the claim for foreign priority to application EP23155841.2, filed 2/9/2023. The certified copy of the priority document required by 37 CFR 1.55 has been received. Priority is acknowledged under 35 U.S.C. 119(a)-(d) and 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) filed 2/9/2024 has been considered and placed in the application file.
Drawing Objections
All of the figures are objected to for missing text labels describing the numbered elements. For example, Fig. 1a is missing text/descriptions for reference numerals 110, 140, 160, and 170. Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f), because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“An imaging system comprising the system according to claim 13 and a scientific imaging device, with the scientific imaging device being configured to generate the set of images” in claim 14;
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f).
Claim 7 recites “at least one of” followed by the list “a use of one or more image processing steps, one or more numerical parameters of one or more image processing steps, and one or more categorical parameters of one or more image processing steps for the image analysis workflow”. Since “and” is conjunctive, all of the listed elements must be present in order to reject the claim.
Claim 10 recites “one of spoken text and unstructured written text”. Since “and” is conjunctive, all of the listed elements must be present in order to reject the claim.
Claim 11 recites “at least one of” followed by the list “a relation between two entities, a relation between two entities being dependent on a condition, a cell fate being dependent on a condition, a cell type distribution being dependent on a condition, a two-dimensional or three-dimensional geometry being dependent on a condition, and an entity distribution of a non-numerable entity being dependent on a condition”. Since “and” is conjunctive, all of the listed elements must be present in order to reject the claim.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 10-11 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.
Claim 10 recites “wherein the method comprises processing user input to generate the formal representation of the actual hypothesis”. It is unclear if this method is executed with the first or second machine learning model or by something else entirely. For examination purposes, the examiner will assume this method is executed with the second machine learning model.
Claim 11 recites “wherein the respective formal representation represents at least one of a relation between two entities, a relation between two entities being dependent on a condition, a cell fate being dependent on a condition, a cell type distribution being dependent on a condition, a two-dimensional or three-dimensional geometry being dependent on a condition, and an entity distribution of a non-numerable entity being dependent on a condition”. It is unclear whether each recitation of “a condition” refers to the same condition or to a distinct condition for each listed relation. For examination purposes, the examiner will interpret “a condition” as “the condition”.
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
Claim 12 recites “A method for processing a set of images representing a biological process, the method comprising: inputting the set of images representing the biological process into a machine-learning model, the machine-learning model being trained, according to the method of claim 1, to perform an image analysis workflow or to generate parameters for parametrizing an image analysis workflow; processing the set of images using the image analysis workflow; and providing an output of the image analysis workflow.” Claim 12 is rejected under 35 U.S.C. 112(d) because it recites subject matter already disclosed in claim 1 at a broader scope, and is thus in improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends.
Claims 15-16 and 18 are rejected for their dependency on claim 12.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-4, 6-9, 11-18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Park (US 20240127950 A1).
Regarding claim 1, Park discloses a method for adjusting a first and a second machine-learning model (Park, paragraph [0053], "Referring to FIG. 9, the disease prediction apparatus 100 may include a receiving unit 900, a quantitative analyzing unit 910, a predicting unit 920, a first artificial intelligence model 930, and a second artificial intelligence model 940"), the method comprising:
inputting a set of images representing a biological process into the first machine-learning model, the first machine-learning model being trained to perform an image analysis workflow or to generate parameters for parametrizing an image analysis workflow (Park, paragraph [0033], "For example, the disease prediction apparatus 100 may input the medical image 310 of the learning data to the first artificial intelligence model 300 to obtain the anatomical structure 320 and train the first artificial intelligence model 300 to reduce a value of a loss function indicating a difference between the anatomical structure 320 and the separation result of the learning data"),
inputting an output of the image analysis workflow into the second machine-learning model, the second machine-learning model being trained to output a prediction of a hypothesis being evaluated using the biological process (Park, paragraph [0037], "Referring to FIG. 4, a second artificial intelligence model 400 may be an artificial neural network model to output a disease prediction result 430 in response to an input of quantitative data 410 of an anatomical structure and clinical information 420"),
calculating a loss function based on a difference between the prediction of the hypothesis being evaluated using the biological process and an actual hypothesis being evaluated using the biological process (Park, paragraph [0038], "For example, the disease prediction apparatus 100 may train the second artificial intelligence model 400 based on a loss information indicating a difference between information, of the learning data, indicating whether the disease breaks out and a disease prediction result 430 obtained by inputting the quantitative data 410 and the clinical information 420 of the learning data to the second artificial intelligence model 400"),
and adjusting the first and/or second machine-learning model based on the result of the loss function (Park, paragraph [0038], "For example, the disease prediction apparatus 100 may train the second artificial intelligence model 400 based on a loss information indicating a difference between information, of the learning data, indicating whether the disease breaks out and a disease prediction result 430 obtained by inputting the quantitative data 410 and the clinical information 420 of the learning data to the second artificial intelligence model 400").
Regarding claim 2, Park discloses the method according to claim 1, wherein the first and/or second machine-learning model are adjusted until the prediction of the hypothesis matches the actual hypothesis according to a matching criterion (Park, paragraph [0038], "For example, the disease prediction apparatus 100 may train the second artificial intelligence model 400 based on a loss information indicating a difference between information, of the learning data, indicating whether the disease breaks out and a disease prediction result 430 obtained by inputting the quantitative data 410 and the clinical information 420 of the learning data to the second artificial intelligence model 400"*).
*As additionally supported by Wikipedia, neural networks are trained to minimize the difference between their predicted and actual outputs. Thus, Park’s neural network is adjusted until its disease prediction matches the disease information corresponding to its quantitative/clinical inputs.
[media_image1.png: Wikipedia excerpt on neural network training (843 × 392, greyscale)]
Regarding claim 3, Park discloses the method according to claim 1, wherein the method is performed over a plurality of iterations using a plurality of sets of images as training input images (Park, paragraph [0033], "The disease prediction apparatus 100 may train the first artificial intelligence model 300 with supervised learning by using learning data including a dataset of the medical image 310 and a separation result (i.e., ground truth)") and a plurality of corresponding actual hypotheses for comparison with the hypotheses predicted by the second machine-learning model to train the first and/or second machine-learning model (Park, paragraph [0038], "The disease prediction apparatus 100 may train the second artificial intelligence model 400 by using learning data including a dataset of the quantitative data 410 of the anatomical structure, the clinical information 420, and information indicating whether a disease breaks out. For example, the disease prediction apparatus 100 may train the second artificial intelligence model 400 based on a loss information indicating a difference between information, of the learning data, indicating whether the disease breaks out and a disease prediction result 430 obtained by inputting the quantitative data 410 and the clinical information 420 of the learning data to the second artificial intelligence model 400").
Regarding claim 4, Park discloses the method according to claim 1, wherein the first (Park, paragraph [0033], "For example, the disease prediction apparatus 100 may input the medical image 310 of the learning data to the first artificial intelligence model 300 to obtain the anatomical structure 320 and train the first artificial intelligence model 300 to reduce a value of a loss function indicating a difference between the anatomical structure 320 and the separation result of the learning data") and second machine-learning model are adjusted and/or trained together in an end-to-end manner (Park, paragraph [0038], "For example, the disease prediction apparatus 100 may train the second artificial intelligence model 400 based on a loss information indicating a difference between information, of the learning data, indicating whether the disease breaks out and a disease prediction result 430 obtained by inputting the quantitative data 410 and the clinical information 420 of the learning data to the second artificial intelligence model 400").
Regarding claim 6, Park discloses the method according to claim 1, wherein the first machine-learning model is trained to generate parameters for parametrizing the image analysis workflow, the method comprising processing the set of images using the image analysis workflow, the image analysis workflow being parametrized based on an output of the first machine-learning model (Park, paragraph [0035], "The disease prediction apparatus 100 may identify quantitative data such as a volume of the anatomical structure 320, etc., separated through the first artificial intelligence model 300. The disease prediction apparatus 100 may identify a volume of an anatomical structure, etc., by applying various existing methods for obtaining a volume of a 3D structure, etc. The current embodiment proposes a volume as an example of quantitative data, but this is merely an example, and various information (e.g., an area, a density, etc.) identifiable from the anatomical structure may be used together as quantitative data").
Regarding claim 7, Park discloses the method according to claim 6, wherein the first machine-learning model is trained to select at least one of a use of one or more image processing steps (Park, paragraph [0033], "The disease prediction apparatus 100 may train the first artificial intelligence model 300 with supervised learning by using learning data including a dataset of the medical image 310 and a separation result (i.e., ground truth)"), one or more numerical parameters of one or more image processing steps, and one or more categorical parameters of one or more image processing steps for the image analysis workflow (Park, paragraph [0035], "The disease prediction apparatus 100 may identify quantitative data such as a volume of the anatomical structure 320, etc., separated through the first artificial intelligence model 300. The disease prediction apparatus 100 may identify a volume of an anatomical structure, etc., by applying various existing methods for obtaining a volume of a 3D structure, etc. The current embodiment proposes a volume as an example of quantitative data, but this is merely an example, and various information (e.g., an area, a density, etc.) identifiable from the anatomical structure may be used together as quantitative data").
Regarding claim 8, Park discloses the method according to claim 1, wherein the set of images or a processed version of the set of images is used as further input to the second machine-learning model (Park, paragraph [0037], "Referring to FIG. 4, a second artificial intelligence model 400 may be an artificial neural network model to output a disease prediction result 430 in response to an input of quantitative data 410 of an anatomical structure and clinical information 420.", quantitative data is a processed version of the inputted medical images).
Regarding claim 9, Park discloses the method according to claim 1, wherein the second machine-learning model is trained to output a formal representation of the prediction of the hypothesis, with the loss function being calculated based on a comparison between the formal representation of the prediction of the hypothesis and a formal representation of the actual hypothesis (Park, paragraph [0038], "For example, the disease prediction apparatus 100 may train the second artificial intelligence model 400 based on a loss information indicating a difference between information, of the learning data, indicating whether the disease breaks out and a disease prediction result 430 obtained by inputting the quantitative data 410 and the clinical information 420 of the learning data to the second artificial intelligence model 400").
Regarding claim 11, Park discloses the method according to claim 9, wherein the respective formal representation represents at least one of a relation between two entities, a relation between two entities being dependent on a condition (Park, paragraph [0038], "For example, the disease prediction apparatus 100 may train the second artificial intelligence model 400 based on a loss information indicating a difference between information, of the learning data, indicating whether the disease breaks out and a disease prediction result 430 obtained by inputting the quantitative data 410 and the clinical information 420 of the learning data to the second artificial intelligence model 400"), a cell fate being dependent on a condition, a cell type distribution being dependent on a condition (Park, paragraph [0049], "Referring to FIG. 7, the disease prediction apparatus 100 may receive clinical information required for disease prediction from the EMR system 120 and display the clinical information on a screen interface 700", a person’s fate and disease type are dependent on their clinical information), a two-dimensional or three-dimensional geometry being dependent on a condition (Park, paragraph [0046], "Referring to FIG. 6, the disease prediction apparatus 100 may display an anatomical structure separated from the medical image through the first artificial intelligence model on a screen 600. For example, the disease prediction apparatus 100 may display sagittal, coronal, and axial images of the anatomical structure or display a 3D modeling result.", a person’s anatomy changes depending on their disease/condition), and an entity distribution of a non-numerable entity being dependent on a condition (Park, paragraph [0050], "FIG. 8 is a view showing an example of an output screen of a disease prediction result according to an embodiment", a person’s anatomy is the entity distribution).
Regarding claim 12, Park discloses a method for processing a set of images representing a biological process (Park, paragraph [0027], "Referring to FIG. 2, the disease prediction apparatus 100 may receive a medical image and clinical information of a patient, in operation S200"), the method comprising:
inputting the set of images representing the biological process into a machine-learning model, the machine-learning model being trained, according to the method of claim 1, to perform an image analysis workflow or to generate parameters for parametrizing an image analysis workflow (Park, paragraph [0033], "The disease prediction apparatus 100 may train the first artificial intelligence model 300 with supervised learning by using learning data including a dataset of the medical image 310 and a separation result (i.e., ground truth)."),
processing the set of images using the image analysis workflow (Park, paragraph [0033], "For example, the disease prediction apparatus 100 may input the medical image 310 of the learning data to the first artificial intelligence model 300 to obtain the anatomical structure 320 and train the first artificial intelligence model 300 to reduce a value of a loss function indicating a difference between the anatomical structure 320 and the separation result of the learning data"),
and providing an output of the image analysis workflow (Park, paragraph [0046], "Referring to FIG. 6, the disease prediction apparatus 100 may display an anatomical structure separated from the medical image through the first artificial intelligence model on a screen 600").
Regarding claim 13, Park discloses a system comprising one or more processors and one or more storage devices (Park, paragraph [0057], "Examples of the computer-readable recording medium may include read-only memory (ROM), random access memory (RAM), compact-disc ROM (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, etc."), wherein the system is configured to perform the method according to claim 1 (Park, paragraph [0057], "The computer-readable recording medium may be distributed over computer systems connected through a network to store and execute a computer-readable code in a distributed manner").
Regarding claim 14, Park discloses an imaging system comprising the system according to claim 13 and a scientific imaging device, with the scientific imaging device being configured to generate the set of images (Park, paragraph [0023], "In an embodiment, the disease prediction apparatus 100 may receive a medical image in a digital image and communication in medicine (DICOM) form from a picture archiving and communication system (PACS) 110. The medical image may be a three-dimensional (3D) image such as a CT image, an MRI image, etc.").
Regarding claim 15, Park discloses a system comprising one or more processors and one or more storage devices (Park, paragraph [0057], "Examples of the computer-readable recording medium may include read-only memory (ROM), random access memory (RAM), compact-disc ROM (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, etc."), wherein the system is configured to perform the method according to claim 12 (Park, paragraph [0057], "The computer-readable recording medium may be distributed over computer systems connected through a network to store and execute a computer-readable code in a distributed manner").
Regarding claim 16, Park discloses an imaging system comprising the system according to claim 15 and a scientific imaging device, with the scientific imaging device being configured to generate the set of images (Park, paragraph [0023], "In an embodiment, the disease prediction apparatus 100 may receive a medical image in a digital image and communication in medicine (DICOM) form from a picture archiving and communication system (PACS)) 110. The medical image may be a three-dimensional (3D) image such as a CT image, an MRI image, etc.").
Regarding claim 17, Park discloses a non-transitory, computer-readable medium comprising a program code that, when the program code is executed on a processor, a computer, or a programmable hardware component, causes the processor, computer, or programmable hardware component to perform the method of claim 1 (Park, paragraph [0057], "The computer-readable recording medium may include all types of recording devices in which data that is readable by a computer system is stored. Examples of the computer-readable recording medium may include read-only memory (ROM), random access memory (RAM), compact-disc ROM (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, etc. The computer-readable recording medium may be distributed over computer systems connected through a network to store and execute a computer-readable code in a distributed manner").
Regarding claim 18, Park discloses a non-transitory, computer-readable medium comprising a program code that, when the program code is executed on a processor, a computer, or a programmable hardware component, causes the processor, computer, or programmable hardware component to perform the method of claim 12 (Park, paragraph [0057], "The computer-readable recording medium may include all types of recording devices in which data that is readable by a computer system is stored. Examples of the computer-readable recording medium may include read-only memory (ROM), random access memory (RAM), compact-disc ROM (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, etc. The computer-readable recording medium may be distributed over computer systems connected through a network to store and execute a computer-readable code in a distributed manner").
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Park (US 20240127950 A1) in view of Pardasani (US 20230087363 A1).
Regarding claim 5, Park discloses the method according to claim 1.
Park does not teach “wherein the first and second machine-learning models are pre-trained machine-learning models, which are adjusted in the field”.
However, Pardasani teaches wherein the first and second machine-learning models are pre-trained machine-learning models, which are adjusted in the field (Pardasani, paragraph [0088], "In an example embodiment, each of the first pre-trained model and the second pre-trained model comprises artificial intelligence (AI) based model.").
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to pre-train Park’s AI models, as taught by Pardasani.
The suggestion/motivation for doing so would have been to save time on training and reduce computational costs.
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine Park in view of Pardasani to obtain the invention as specified in claim 5.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Park (US 20240127950 A1) in view of Lee (US 20200411002 A1).
Regarding claim 10, Park discloses the method of claim 9.
Park does not teach “wherein the method comprises processing user input to generate the formal representation of the actual hypothesis, wherein the user input comprises one of spoken text and unstructured written text, the method comprising processing the user input using natural language processing, or wherein the user input comprises structured input”.
However, Lee teaches wherein the method comprises processing user input to generate the formal representation of the actual hypothesis, wherein the user input comprises one of spoken text and unstructured written text (Lee, paragraph [0058], "As an example, if the user's voice is input, a voice recognition model among the artificial intelligence models of the virtual secretary service may convert the user's voice into text"), the method comprising processing the user input using natural language processing, or wherein the user input comprises structured input (Lee, paragraph [0058], "In addition, if the user's voice is converted into the text, at least one domain related to the user's voice may be identified through the domain classifier model included in the natural language understanding model among the artificial intelligence models of the virtual secretary service").
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to implement a voice recognition model that converts user speech into text into Park’s model, as taught by Lee.
The suggestion/motivation for doing so would have been to enhance accessibility and reduce storage use when compared to images.
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine Park in view of Lee to obtain the invention as specified in claim 10.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WAYNE ZHANG whose telephone number is (571) 272-0245. The examiner can normally be reached Monday-Friday, 10:00 AM-6:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ms. Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WAYNE ZHANG/Examiner, Art Unit 2672
/SUMATI LEFKOWITZ/Supervisory Patent Examiner, Art Unit 2672