Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Response to Amendment
In the amendment dated 03/05/2026, the following occurred: Claims 1, 9-10, 16 and 18 have been amended; claims 2, 4-6, 8, 11, 13-15, 17 and 19-21 have been cancelled.
Claims 1, 3, 7, 9-10, 12, 16 and 18 are pending and have been examined.
Priority
Acknowledgment is made of the present application's status as a national stage entry under 35 U.S.C. 371 of PCT Application No. PCT/CN2023/105216 filed 06/30/2023, which claims priority to Chinese Application No. 202211667413.6 filed 12/23/2022.
Note: 35 U.S.C. § 101
Independent claims 1 and 9-10 recite the following (claim 1 being representative): “A data processing method, performed by at least one processor, comprising: for each of at least one to-be-processed object, determining to-be-trained sample data under at least two physiological indicators, and constructing a to-be-processed matrix based on a plurality of pieces of to-be-trained sample data, wherein each column in the to-be-processed matrix represents to-be-trained sample data corresponding to a same physiological feature indicator, and each row in the to-be-processed matrix corresponds to the to-be-trained sample data of each of the at least one to-be-processed object; performing a normalization process on each column in the to-be-processed matrix to obtain a to-be-spliced submatrix, and splicing the to-be-spliced submatrix to obtain a to-be-used matrix; and inputting the to-be-used matrix into a to-be-trained network model, and training the to-be-trained network model based on the to-be-used matrix until the to-be-trained network model has a minimum loss function to obtain the target network model, wherein the to-be-trained network model comprises a generative adversarial network model, a variational autoencoder model, a diffusion model, or a flow-based generation model” (emphasis added).
Step 1: The claims recite a processor that implements a target network model. The claims are directed to a method implemented with a physical processor, an electronic device comprising a physical processor that implements the method, or a non-transitory computer-readable storage medium storing instructions executable by a processor, each falling into at least one of the statutory categories of invention (Step 1: YES).
Step 2A: The independent claims detail how a specific type of training data (physiological feature data) is assembled for application to the target network model. The Specification at pg. 8, para. 2 describes how normalization is required during training-data assembly to prepare the data for use with the target network model. Further, while machine learning technology may be trained using mathematics, no mathematical concept is recited in the claim; the training is therefore considered an additional element. That training is recited in sufficient detail rather than at a high level of generality: it is enacted using the specific training data and the required normalization process, and it is implemented on the target network model such that the training improves another technology or technological field, which provides a practical application (Step 2A Prong Two; Step 2A: NO). Thus, the claim is eligible.
Stated another way, the normalization of the specific data prior to training, and the application of the specific data on the target network model, provide a practical application under Step 2A Prong Two of the subject matter eligibility analysis, since they provide an improvement to the functioning of a computer or to another technology or technological field, i.e., a technical solution to a technical problem. MPEP § 2106.05(a). As indicated in the specification at pg. 8, para. 2: “A physiological feature curve may be understood as a curve generated based on the physiological feature data. In this technical scheme, before the physiological feature data of the target object is input into the target network model, normalization needs to be performed on the physiological feature data. That is, the physiological feature data under at least one physiological indicator is processed to be in a range of 0 to 1 so that data analysis is performed on the normalized physiological feature data by using the target network model.”
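For illustration only (no such code appears in the record, and the specification does not mandate a particular formula), the processing "to be in a range of 0 to 1" described in the quoted passage is consistent with ordinary min-max scaling; a minimal sketch using hypothetical heart-rate values:

```python
import numpy as np

def normalize_to_unit_range(values):
    """Scale a 1-D array of physiological feature data into [0, 1]
    via min-max normalization (one way to read the quoted passage)."""
    values = np.asarray(values, dtype=float)
    lo, hi = values.min(), values.max()
    if hi == lo:                      # constant signal: map to zeros to avoid 0/0
        return np.zeros_like(values)
    return (values - lo) / (hi - lo)

# Hypothetical heart-rate series in beats per minute
hr = [62, 75, 88, 70]
print(normalize_to_unit_range(hr))   # every value now lies in [0, 1]
```

The scaled output, rather than the raw indicator values, would then be what the target network model receives for analysis under the scheme the specification describes.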
Additionally, and for completeness, the training and application of the target network model is specific to the training with and application of physiological feature data, such that the additional elements are not merely generally linking to the abstract idea.
The subject matter eligibility of independent claims 1 and 9-10 also applies to dependent claims 3, 7, 12, 16 and 18.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, 7, 9-10, 12, 16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Itu et al. (US 2019/0139641 A1; “Itu” herein) in view of Amatya et al. (US 2022/0301713 A1; “Amatya” herein) and Liu et al. (US 2023/0225663 A1; “Liu” herein).
Re. Claim 1, Itu teaches a data processing method, performed by at least one processor (13) (see Fig. 10 and [0093], [0095], [0097]), comprising:
acquiring physiological feature data of a target object under at least one physiological indicator (Figs. 1-2, 8, [0076] teach extracting features of interest from patient data 10, including a set of initial measurements of medical scan data 20 and/or other data 22, to generate a complete input feature data set. See specification at pg. 5, lines 17-26.);
determining a data processing type corresponding to the physiological feature data, and invoking a target network model corresponding to the data processing type, wherein the data processing type comprises a data generalization type or a data prediction type (Figs. 1-2, [0024] teach the machine-learned network is applied to output/predict physiological quantities (determining a data prediction type). See specification at pg. 6, lines 25-30.); and
processing the physiological feature data based on the target network model to obtain target physiological feature data (Figs. 1-2, [0021], [0024], [0072] teach inputting the extracted features to a machine-learned network to output associated physiological quantities / values of physiological parameters (e.g., PV loop).), wherein a process of acquiring the target network model comprises:
for each of at least one to-be-processed object, determining to-be-trained sample data under at least two physiological indicators, and constructing […] based on a plurality of pieces of to-be-trained sample data, […] (Fig. 4, [0023], [0048] teach synthetically generated scan data and/or other data is used for training the machine learning network… The synthetic data is used without patient-specific samples, but patient-specific samples may be used instead or in addition to synthetic data. [0033], [0035], [0076] teach extracting the set of features (physiological indicators) from the medical scan data and/or other data, to build the initial complete set of input features (constructing). [0036] teaches the resulting list of values for the features (plurality of pieces) is stored as part of the training database);
[…]; and
inputting the to-be-used matrix into a to-be-trained network model, and training the to-be-trained network model based on the to-be-used matrix […] to obtain the target network model (Abstract, Fig. 4, [0048] teach the synthetic data and/or actual patient examples may be used to machine train the network… The training can be iteratively improved. [0005] teaches, thus, the machine-trained estimation network was trained.), wherein the to-be-trained network model […] a generative adversarial network model, a variational autoencoder model, a diffusion model, or a flow-based generation model ([0069] teaches an exemplary neural network (the to-be-trained network model). Fig. 6, [0005] teach a generative adversarial network.),
wherein determining the data processing type corresponding to the physiological feature data, and invoking the target network model corresponding to the data processing type comprises:
receiving a data processing instruction, and acquiring a data processing manner in the data processing instruction, wherein the data processing manner comprises a data generalization manner or a data prediction manner (Figs. 1-2 teach data prediction using a machine-trained network as a method step. Fig. 6 teaches data generation method steps. [0093], [0095] teach the image processor 13 is configured to perform any of the acts… with the processing instructions (received).); and
determining the corresponding data processing type based on the data processing manner and the physiological feature data, and invoking the target network model corresponding to the data processing type ([0093], [0097] teach the image processor 13 executes the processing instructions to perform the acts / implement the processes, methods, and techniques provided. Figs. 1-2, [0024] teach the machine-learned network is applied to output/predict physiological quantities. See specification at pg. 6, lines 25-30.),
wherein in a case where the data processing manner is the data generalization manner (Figs. 6-7), processing the physiological feature data based on the target network model to obtain the target physiological feature data comprises:
determining, based on the target network model, a physiological feature curve corresponding to the physiological feature data ([0052] teaches in act 40, the medical system creates synthetic datasets for lumped parameter modeling. FIGS. 5A and 5B (determining a physiological feature curve, e.g., generic volume curve) show the training as a two-step process for generating synthetic input data with the lumped parameter model of Fig. 3. [0056] teaches using a pressure curve over time from the lumped parameter model provides the PV loop.);
determining a data floating range corresponding to the physiological feature curve to obtain a preset number of physiological feature generalization curves from the data floating range (Fig. 5B shows a generic volume curve with data points shown to cover a particular vertical range. The Examiner interprets the preset number as 1.); and
obtaining, based on the physiological feature generalization curves, at least one group of target physiological feature data corresponding to the physiological feature data ([0056]-[0057] teach in act 46, based on the values (data floating ranges) for the model parameters, the medical system models anatomy. Any mechanistic or computational modeling may be used… In the example of FIGS. 5A and 5B and the lumped model of FIG. 3, non-imaging data is used, e.g., pressure curve, left ventricular volume curve. Rather than generate scan data as one of the features in act 42, the features are values of the parameters of the anatomy model (e.g., the lumped model) or other characteristics derived from the anatomy model. This allows a rule-based approach where non-imaging data is used as input for training (physiological feature data based on the curves) and application (obtaining).),
wherein in a case where the data processing manner is the data prediction manner, processing the physiological feature data based on the target network model to obtain the target physiological feature data comprises:
determining, based on the target network model, at least one to-be-determined physiological feature curve matching the physiological feature data ([0052] teaches in act 40, the medical system creates synthetic datasets for lumped parameter modeling. FIGS. 5A and 5B (determining a to-be-determined physiological feature curve, e.g., generic volume curve) show the training as a two-step process for generating synthetic input data with the lumped parameter model of Fig. 3. [0056] teaches using a pressure curve over time from the lumped parameter model (matching, e.g., the pressure measurements) provides the PV loop.);
determining a target physiological feature curve from the at least one to-be-determined physiological feature curve according to basic attribute information corresponding to the target object (Figs. 5A, 5B, 7, [0060], [0076] teach synthetic data is generated that closely models a variety of real data / the patient-specific datasets 76 including basic measurements like height, weight, BMI, etc. of the subject (basic attribute information). Fig. 6, [0069] teach in act 64, the machine trains a quantification network (target network model) based on the generated synthetic data, e.g., the pressure curve, to infer the physiological quantity, e.g., PV loop (a target physiological feature curve).); and
determining, based on the target physiological feature curve, the target physiological feature data corresponding to the target object (Fig. 6, [0064], [0069] teach in act 64, the machine trains a quantification network based on the generated synthetic data, e.g., the pressure curve, to infer the physiological quantity (target physiological feature data), e.g., receiving the PV loop data. See additionally [0002], [0045].)
Itu does not explicitly teach the to-be-trained network model comprises a generative adversarial network model.
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the noted features of Itu, since the combination is merely simple substitution of one known element for another producing a predictable result (KSR rationale B). Since each individual element and its function are shown in the prior art, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself—that is, in the substitution of a generative adversarial network for the trained and applied neural network (or other machine learning algorithm). Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
Itu may not teach
a to-be-processed matrix… wherein each column in the to-be-processed matrix represents to-be-trained sample data corresponding to a same physiological feature indicator, and each row in the to-be-processed matrix corresponds to the to-be-trained sample data of each of the at least one to-be-processed object; or
performing a normalization process on each column in the to-be-processed matrix to obtain a to-be-spliced submatrix, and splicing the to-be-spliced submatrix to obtain a to-be-used matrix.
Amatya teaches
constructing a to-be-processed matrix based on a plurality of pieces of to-be-trained sample data (see, e.g., Fig. 6F’s MSSNG Vectors), wherein each column in the to-be-processed matrix represents to-be-trained sample data corresponding to a same physiological feature indicator (j-th gene burden), and each row in the to-be-processed matrix corresponds to the to-be-trained sample data of each of the at least one to-be-processed object (i-th subject) (Figs. 1, 8A teach forming (constructing) a vector of patient genomic data in multidimensional space (a to-be-processed matrix)… reducing the vector using a dimensionality reduction technique… and inputting the reduced vector to a machine learning model to diagnose a presence of a disease or trait. Fig. 4, [0056] also teach the individual subject vectors (the pieces) were concatenated (spliced) as rows to construct the variant burden matrix for hereditary disease risk and trait prediction… The partial display of the matrix offers a visualization of vectors as the rows of the matrix.);
performing a normalization process on each column in the to-be-processed matrix (e.g., MSSNG Vectors) to obtain a to-be-spliced submatrix (e.g., normalized and reduced MSSNG Vectors), and splicing the to-be-spliced submatrix to obtain a to-be-used matrix (e.g., a reduced MSSNG Vector) (Fig. 6F, [0068] teach the MSSNG Vectors undergo normalization prior to undergoing principal component analysis and generation of a reduced MSSNG vector for training. Further, [0056]-[0057], [0059] teach the dimensionality of the vector burden matrix is halved from 30,729 to 15,338.); and
inputting the to-be-used matrix into a to-be-trained network model, and training the to-be-trained network model based on the to-be-used matrix… to obtain the target network model (Figs. 6F, 8A, [0056]-[0057], [0059] teach reducing the dimensionality of the vector burden matrix; and inputting the reduced vector, e.g., a 7,187 x 15,338 matrix, to a machine learning model for training. Figs. 6F, 8A, [0060] teach training a classifier. See also [0062], teaching trained classification models.)
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the system and method of using artificial intelligence for physiological quantification in medical imaging of Itu to handle and manipulate data matrices for machine learning operations and to use this information as part of systems and methods for disease and trait prediction through genomic analysis as taught by Amatya, with the motivation of improving computer-aided diagnosis, computational performance (efficiency, accuracy), and machine learning technology (see Amatya at para. 0004, 0047, 0052-0053, 0060).
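Purely as an illustrative aside (the code, data, and values below are hypothetical and form no part of the record or of either reference), the claimed sequence of constructing a matrix with objects as rows and indicators as columns, normalizing each column into a "to-be-spliced submatrix," and splicing the submatrices into a "to-be-used matrix" can be sketched as follows:

```python
import numpy as np

# Hypothetical toy data: each row is a to-be-processed object (a subject);
# each column is one physiological indicator (e.g., height cm, weight kg, heart rate bpm).
to_be_processed = np.array([
    [170.0, 65.0, 62.0],
    [182.0, 90.0, 75.0],
    [158.0, 52.0, 88.0],
])

# Normalize each column independently into [0, 1]; in the claim's vocabulary,
# each normalized column is a "to-be-spliced submatrix".
submatrices = []
for j in range(to_be_processed.shape[1]):
    col = to_be_processed[:, j:j + 1]
    lo, hi = col.min(), col.max()
    submatrices.append((col - lo) / (hi - lo) if hi > lo else np.zeros_like(col))

# "Splicing" (concatenating) the submatrices yields the to-be-used matrix.
to_be_used = np.hstack(submatrices)
print(to_be_used.shape)  # (3, 3), with every column spanning [0, 1]
```

Column-wise (rather than whole-matrix) normalization matters here because each indicator has its own units and scale; normalizing per column keeps one indicator from dominating the training input.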
Itu may not teach training the to-be-trained network model based on the to-be-used matrix until the to-be-trained network model has a minimum loss function to obtain the target network model.
Liu teaches
training the to-be-trained network model based on the to-be-used matrix until the to-be-trained network model has a minimum loss function to obtain the target network model ([0013], [0018] teach in the process of training the graph convolution network in (7), the parameters in the network are iteratively updated by an Adam algorithm until the cross-entropy loss function is converged.)
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the system and method of using artificial intelligence for physiological quantification in medical imaging of Itu to perform machine learning operations and to use this information as part of a system and method for predicting multi-type ECG heart rhythms based on graph convolution as taught by Liu, with the motivation of improving computer-aided diagnosis and machine learning technology (see Liu at para. 0004-0005, 0023-0024, 0031, 0051).
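For illustration only (hypothetical code and data, not drawn from Liu or the record; Liu uses an Adam optimizer and a graph convolution network, whereas this sketch uses plain gradient descent on a logistic model), "training … until the network has a minimum loss function" can be read as iterating until the loss stops decreasing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical to-be-used matrix (already normalized to [0, 1]) and binary labels.
X = rng.random((64, 3))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)

w = np.zeros(3)
b = 0.0
prev_loss = np.inf
for step in range(10_000):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))       # sigmoid
    # Cross-entropy loss, the criterion Liu's cited passages describe minimizing
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    if prev_loss - loss < 1e-8:        # loss has (numerically) reached its minimum
        break
    prev_loss = loss
    grad = p - y
    w -= 0.5 * X.T @ grad / len(y)     # plain gradient-descent update
    b -= 0.5 * grad.mean()

print(f"stopped at step {step} with loss {loss:.4f}")
```

The stopping rule (loss improvement below a tolerance) is one conventional way to operationalize "until the loss function is converged"; fixed iteration budgets or validation-based early stopping are common alternatives.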
Re. Claim 3, Itu/Amatya/Liu teaches the method according to claim 1, wherein acquiring the physiological feature data of the target object under the at least one physiological indicator comprises:
inputting the physiological feature data of the target object under the at least one physiological indicator in at least one editing control on a target display interface (Itu [0035], [0037] teaches in act 10, the medical system extracts the set of features from the medical scan data and/or the other data… The processor performs the feature extraction with user input through a user interface (target display interface)… Under this manual approach, anatomical or other features are input, annotated, or measured… via display of a dialog (at least one editing control) that the user can edit to insert the features.); or
invoking the physiological feature data of the target object under the at least one physiological indicator from a target database, wherein the target database comprises at least one reference object and physiological feature data matching each of the at least one reference object under the at least one physiological indicator (see previous citations. See also Itu [0036].)
Re. Claim 7, Itu/Amatya/Liu teaches the method according to claim 1, wherein the at least one physiological indicator comprises at least one of height, weight, temperature, blood pressure, electrocardiogram information, or biological tissue information (Itu [0076] teaches, in acts 20 and/or 22, the set of initial measurements may be based on medical imaging (biological tissue information)… and/or basic measurements like height, weight, BMI, etc. of the subject. Additionally, Itu Figs. 5A, 8 and [0033] teach specifying an initial set of input data / features including non-invasive measurements, e.g., blood pressure, heart rate, ECG signals.)
Re. Claim 9, Itu teaches an electronic device, comprising:
at least one processor (13); and a memory (15) communicatively connected to the at least one processor (see Fig. 10, [0093], [0095]); wherein the memory stores a computer program executable by the at least one processor, and the computer program is executed by the at least one processor to cause the at least one processor to perform the following ([0093], [0097]):
acquiring physiological feature data of a target object under at least one physiological indicator (Figs. 1-2, 8, [0076] teach extracting features of interest from patient data 10, including a set of initial measurements of medical scan data 20 and/or other data 22, to generate a complete input feature data set. See specification at pg. 5, lines 17-26.);
determining a data processing type corresponding to the physiological feature data, and invoking a target network model corresponding to the data processing type, wherein the data processing type comprises a data generalization type or a data prediction type (Figs. 1-2, [0024] teach the machine-learned network is applied to output/predict physiological quantities (determining a data prediction type). See specification at pg. 6, lines 25-30.); and
processing the physiological feature data based on the target network model to obtain target physiological feature data (Figs. 1-2, [0021], [0024], [0072] teach inputting the extracted features to a machine-learned network to output associated physiological quantities / values of physiological parameters (e.g., PV loop).) wherein a process of acquiring the target network model comprises:
for each of at least one to-be-processed object, determining to-be-trained sample data under at least two physiological indicators, and constructing […] based on a plurality of pieces of to-be-trained sample data, […] (Fig. 4, [0023], [0048] teach synthetically generated scan data and/or other data is used for training the machine learning network… The synthetic data is used without patient-specific samples, but patient-specific samples may be used instead or in addition to synthetic data. [0033], [0035], [0076] teach extracting the set of features (physiological indicators) from the medical scan data and/or other data, to build the initial complete set of input features (constructing). [0036] teaches the resulting list of values for the features (plurality of pieces) is stored as part of the training database);
[…]; and
inputting the to-be-used matrix into a to-be-trained network model, and training the to-be-trained network model based on the to-be-used matrix […] to obtain the target network model (Abstract, Fig. 4, [0048] teach the synthetic data and/or actual patient examples may be used to machine train the network… The training can be iteratively improved. [0005] teaches, thus, the machine-trained estimation network was trained.), wherein the to-be-trained network model […] a generative adversarial network model, a variational autoencoder model, a diffusion model, or a flow-based generation model ([0069] teaches an exemplary neural network (the to-be-trained network model). Fig. 6, [0005] teach a generative adversarial network.),
wherein determining the data processing type corresponding to the physiological feature data, and invoking the target network model corresponding to the data processing type comprises:
receiving a data processing instruction, and acquiring a data processing manner in the data processing instruction, wherein the data processing manner comprises a data generalization manner or a data prediction manner (Figs. 1-2 teach data prediction using a machine-trained network as a method step. Fig. 6 teaches data generation method steps. [0093], [0095] teach the image processor 13 is configured to perform any of the acts… with the processing instructions (received).); and
determining the corresponding data processing type based on the data processing manner and the physiological feature data, and invoking the target network model corresponding to the data processing type ([0093], [0097] teach the image processor 13 executes the processing instructions to perform the acts / implement the processes, methods, and techniques provided. Figs. 1-2, [0024] teach the machine-learned network is applied to output/predict physiological quantities. See specification at pg. 6, lines 25-30.),
wherein in a case where the data processing manner is the data generalization manner (Figs. 6-7), processing the physiological feature data based on the target network model to obtain the target physiological feature data comprises:
determining, based on the target network model, a physiological feature curve corresponding to the physiological feature data ([0052] teaches in act 40, the medical system creates synthetic datasets for lumped parameter modeling. FIGS. 5A and 5B (determining a physiological feature curve, e.g., generic volume curve) show the training as a two-step process for generating synthetic input data with the lumped parameter model of Fig. 3. [0056] teaches using a pressure curve over time from the lumped parameter model provides the PV loop.);
determining a data floating range corresponding to the physiological feature curve to obtain a preset number of physiological feature generalization curves from the data floating range (Fig. 5B shows a generic volume curve with data points shown to cover a particular vertical range. The Examiner interprets the preset number as 1.); and
obtaining, based on the physiological feature generalization curves, at least one group of target physiological feature data corresponding to the physiological feature data ([0056]-[0057] teach in act 46, based on the values (data floating ranges) for the model parameters, the medical system models anatomy. Any mechanistic or computational modeling may be used… In the example of FIGS. 5A and 5B and the lumped model of FIG. 3, non-imaging data is used, e.g., pressure curve, left ventricular volume curve. Rather than generate scan data as one of the features in act 42, the features are values of the parameters of the anatomy model (e.g., the lumped model) or other characteristics derived from the anatomy model. This allows a rule-based approach where non-imaging data is used as input for training (physiological feature data based on the curves) and application (obtaining).),
wherein in a case where the data processing manner is the data prediction manner, processing the physiological feature data based on the target network model to obtain the target physiological feature data comprises:
determining, based on the target network model, at least one to-be-determined physiological feature curve matching the physiological feature data ([0052] teaches in act 40, the medical system creates synthetic datasets for lumped parameter modeling. FIGS. 5A and 5B (determining a to-be-determined physiological feature curve, e.g., generic volume curve) show the training as a two-step process for generating synthetic input data with the lumped parameter model of Fig. 3. [0056] teaches using a pressure curve over time from the lumped parameter model (matching, e.g., the pressure measurements) provides the PV loop.);
determining a target physiological feature curve from the at least one to-be-determined physiological feature curve according to basic attribute information corresponding to the target object (Figs. 5A, 5B, 7, [0060], [0076] teach synthetic data is generated that closely models a variety of real data / the patient-specific datasets 76 including basic measurements like height, weight, BMI, etc. of the subject (basic attribute information). Fig. 6, [0069] teach in act 64, the machine trains a quantification network (target network model) based on the generated synthetic data, e.g., the pressure curve, to infer the physiological quantity, e.g., PV loop (a target physiological feature curve).); and
determining, based on the target physiological feature curve, the target physiological feature data corresponding to the target object (Fig. 6, [0064], [0069] teach in act 64, the machine trains a quantification network based on the generated synthetic data, e.g., the pressure curve, to infer the physiological quantity (target physiological feature data), e.g., receiving the PV loop data. See additionally [0002], [0045].)
Itu does not explicitly teach the to-be-trained network model comprises a generative adversarial network model.
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the noted features of Itu, since the combination is merely simple substitution of one known element for another producing a predictable result (KSR rationale B). Since each individual element and its function are shown in the prior art, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself—that is, in the substitution of a generative adversarial network for the trained and applied neural network (or other machine learning algorithm). Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
Itu may not teach
a to-be-processed matrix… wherein each column in the to-be-processed matrix represents to-be-trained sample data corresponding to a same physiological feature indicator, and each row in the to-be-processed matrix corresponds to the to-be-trained sample data of each of the at least one to-be-processed object; or
performing a normalization process on each column in the to-be-processed matrix to obtain a to-be-spliced submatrix, and splicing the to-be-spliced submatrix to obtain a to-be-used matrix.
Amatya teaches
constructing a to-be-processed matrix based on a plurality of pieces of to-be-trained sample data (see, e.g., Fig. 6F’s MSSNG Vectors), wherein each column in the to-be-processed matrix represents to-be-trained sample data corresponding to a same physiological feature indicator (j-th gene burden), and each row in the to-be-processed matrix corresponds to the to-be-trained sample data of each of the at least one to-be-processed object (i-th subject) (Figs. 1, 8A teach forming (constructing) a vector of patient genomic data in multidimensional space (a to-be-processed matrix)… reducing the vector using a dimensionality reduction technique… and inputting the reduced vector to a machine learning model to diagnose a presence of a disease or trait. Fig. 4, [0056] also teach the individual subject vectors (the pieces) were concatenated (spliced) as rows to construct the variant burden matrix for hereditary disease risk and trait prediction… The partial display of the matrix offers a visualization of vectors as the rows of the matrix.);
performing a normalization process on each column in the to-be-processed matrix (e.g., MSSNG Vectors) to obtain a to-be-spliced submatrix (e.g., normalized and reduced MSSNG Vectors), and splicing the to-be-spliced submatrix to obtain a to-be-used matrix (e.g., a reduced MSSNG Vector) (Fig. 6F, [0068] teach the MSSNG Vectors undergo normalization prior to undergoing principal component analysis and generation of a reduced MSSNG vector for training. Further, [0056]-[0057], [0059] teach the dimensionality of the vector burden matrix is halved from 30,729 to 15,338.); and
inputting the to-be-used matrix into a to-be-trained network model, and training the to-be-trained network model based on the to-be-used matrix… to obtain the target network model (Figs. 6F, 8A, [0056]-[0057], [0059] teach reducing the dimensionality of the vector burden matrix; and inputting the reduced vector, e.g., a 7,187 x 15,338 matrix, to a machine learning model for training. Figs. 6F, 8A, [0060] teach training a classifier. See also [0062], teaching trained classification models.)
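For context, the claimed column-wise normalization and splicing operations can be sketched as follows. This is a minimal illustration of the claim language only; the array values and indicator names are hypothetical and are not drawn from Itu or Amatya:

```python
import numpy as np

# Hypothetical to-be-processed matrix: each row is one to-be-processed
# object (subject), each column is one physiological feature indicator.
X = np.array([
    [72.0, 120.0, 36.6],   # subject 1: heart rate, systolic BP, temperature
    [60.0, 110.0, 36.4],   # subject 2
    [88.0, 135.0, 37.1],   # subject 3
])

# Normalize each column (indicator) independently to zero mean, unit
# variance, producing one to-be-spliced submatrix per column.
col_mean = X.mean(axis=0)
col_std = X.std(axis=0)
submatrices = [(X[:, [j]] - col_mean[j]) / col_std[j] for j in range(X.shape[1])]

# Splice the normalized per-column submatrices back together to obtain
# the to-be-used matrix that would be input to the network model.
X_used = np.hstack(submatrices)
```

Each column of `X_used` now has zero mean and unit variance, so indicators on different scales (beats per minute, mmHg, degrees Celsius) contribute comparably during training.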
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the system and method of using artificial intelligence for physiological quantification in medical imaging of Itu to handle and manipulate data matrices for machine learning operations and to use this information as part of systems and methods for disease and trait prediction through genomic analysis as taught by Amatya, with the motivation of improving computer-aided diagnosis, computational performance (efficiency, accuracy), and machine learning technology (see Amatya at para. 0004, 0047, 0052-0053, 0060).
Itu may not teach training the to-be-trained network model based on the to-be-used matrix until the to-be-trained network model has a minimum loss function to obtain the target network model.
Liu teaches
training the to-be-trained network model based on the to-be-used matrix until the to-be-trained network model has a minimum loss function to obtain the target network model ([0013], [0018] teach in the process of training the graph convolution network in (7), the parameters in the network are iteratively updated by an Adam algorithm until the cross-entropy loss function is converged.)
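As a general illustration of iterative parameter updates continued until a cross-entropy loss function converges (a sketch of the concept only; the toy data and logistic model below are hypothetical and are not drawn from Liu):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data standing in for the to-be-used matrix
# and its labels (hypothetical).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

w = np.zeros(4)
b = 0.0
lr = 0.5
prev_loss = np.inf

# Iteratively update the parameters by gradient descent until the
# cross-entropy loss stops decreasing (the convergence criterion).
for step in range(10_000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))          # sigmoid outputs
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    if abs(prev_loss - loss) < 1e-8:                # loss has converged
        break
    prev_loss = loss
    w -= lr * (X.T @ (p - y) / len(y))              # gradient w.r.t. weights
    b -= lr * np.mean(p - y)                        # gradient w.r.t. bias
```

Liu's cited training uses the Adam optimizer rather than plain gradient descent, but the stopping condition is the same in kind: iterate until the loss function is minimized (converged), then take the resulting parameters as the target model.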
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the system and method of using artificial intelligence for physiological quantification in medical imaging of Itu to perform machine learning operations and to use this information as part of a system and method for predicting multi-type ECG heart rhythms based on graph convolution as taught by Liu, with the motivation of improving computer-aided diagnosis and machine learning technology (see Liu at para. 0004-0005, 0023-0024, 0031, 0051).
Re. Claim 10, Itu teaches a non-transitory computer-readable storage medium (15) storing computer instructions configured to, when executed, cause a processor (13) to perform the following (see Fig. 10, [0093], [0095], [0097]):
acquiring physiological feature data of a target object under at least one physiological indicator (Figs. 1-2, 8, [0076] teach extracting features of interest from patient data 10, including a set of initial measurements of medical scan data 20 and/or other data 22, to generate a complete input feature data set. See specification at pg. 5, lines 17-26.);
determining a data processing type corresponding to the physiological feature data, and invoking a target network model corresponding to the data processing type, wherein the data processing type comprises a data generalization type or a data prediction type (Figs. 1-2, [0024] teach the machine-learned network is applied to output/predict physiological quantities (determining a data prediction type). See specification at pg. 6, lines 25-30.); and
processing the physiological feature data based on the target network model to obtain target physiological feature data (Figs. 1-2, [0021], [0024], [0072] teach inputting the extracted features to a machine-learned network to output associated physiological quantities / values of physiological parameters (e.g., PV loop).) wherein a process of acquiring the target network model comprises:
for each of at least one to-be-processed object, determining to-be-trained sample data under at least two physiological indicators, and constructing […] based on a plurality of pieces of to-be-trained sample data, […] (Fig. 4, [0023], [0048] teach synthetically generated scan data and/or other data is used for training the machine learning network… The synthetic data is used without patient-specific samples, but patient-specific samples may be used instead or in addition to synthetic data. [0033], [0035], [0076] teach extracting the set of features (physiological indicators) from the medical scan data and/or other data, to build the initial complete set of input features (constructing). [0036] teaches the resulting list of values for the features (plurality of pieces) is stored as part of the training database);
[…]; and
inputting the to-be-used matrix into a to-be-trained network model, and training the to-be-trained network model based on the to-be-used matrix […] to obtain the target network model (Abstract, Fig. 4, [0048] teach the synthetic data and/or actual patient examples may be used to machine train the network… The training can be iteratively improved. [0005] teaches, thus, the machine-trained estimation network was trained.), wherein the to-be-trained network model […] a generative adversarial network model, a variational autoencoder model, a diffusion model, or a flow-based generation model ([0069] teaches an exemplary neural network (the to-be-trained network model). Fig. 6, [0005] teach a generative adversarial network.),
wherein determining the data processing type corresponding to the physiological feature data, and invoking the target network model corresponding to the data processing type comprises:
receiving a data processing instruction, and acquiring a data processing manner in the data processing instruction, wherein the data processing manner comprises a data generalization manner or a data prediction manner (Figs. 1-2 teach data prediction using a machine-trained network as a method step. Fig. 6 teaches data generation method steps. [0093], [0095] teach the image processor 13 is configured to perform any of the acts… with the processing instructions (received).); and
determining the corresponding data processing type based on the data processing manner and the physiological feature data, and invoking the target network model corresponding to the data processing type ([0093], [0097] teach the image processor 13 executes the processing instructions to perform the acts / implement the processes, methods, and techniques provided. Figs. 1-2, [0024] teach the machine-learned network is applied to output/predict physiological quantities. See specification at pg. 6, lines 25-30.),
wherein in a case where the data processing manner is the data generalization manner (Figs. 6-7), processing the physiological feature data based on the target network model to obtain the target physiological feature data comprises:
determining, based on the target network model, a physiological feature curve corresponding to the physiological feature data ([0052] teaches in act 40, the medical system creates synthetic datasets for lumped parameter modeling. FIGS. 5A and 5B (determining a physiological feature curve, e.g., generic volume curve) show the training as a two-step process for generating synthetic input data with the lumped parameter model of Fig. 3. [0056] teaches using a pressure curve over time from the lumped parameter model provides the PV loop.);
determining a data floating range corresponding to the physiological feature curve to obtain a preset number of physiological feature generalization curves from the data floating range (Fig. 5B shows a generic volume curve with data points shown to cover a particular vertical range. The Examiner interprets the preset number as 1.); and
obtaining, based on the physiological feature generalization curves, at least one group of target physiological feature data corresponding to the physiological feature data ([0056]-[0057] teach in act 46, based on the values (data floating ranges) for the model parameters, the medical system models anatomy. Any mechanistic or computational modeling may be used… In the example of FIGS. 5A and 5B and the lumped model of FIG. 3, non-imaging data is used, e.g., pressure curve, left ventricular volume curve. Rather than generate scan data as one of the features in act 42, the features are values of the parameters of the anatomy model (e.g., the lumped model) or other characteristics derived from the anatomy model. This allows a rule-based approach where non-imaging data is used as input for training (physiological feature data based on the curves) and application (obtaining).),
wherein in a case where the data processing manner is the data prediction manner, processing the physiological feature data based on the target network model to obtain the target physiological feature data comprises:
determining, based on the target network model, at least one to-be-determined physiological feature curve matching the physiological feature data ([0052] teaches in act 40, the medical system creates synthetic datasets for lumped parameter modeling. FIGS. 5A and 5B (determining a to-be-determined physiological feature curve, e.g., generic volume curve) show the training as a two-step process for generating synthetic input data with the lumped parameter model of Fig. 3. [0056] teaches using a pressure curve over time from the lumped parameter model (matching, e.g., the pressure measurements) provides the PV loop.);
determining a target physiological feature curve from the at least one to-be-determined physiological feature curve according to basic attribute information corresponding to the target object (Figs. 5A, 5B, 7, [0060], [0076] teach synthetic data is generated that closely models a variety of real data / the patient-specific datasets 76 including basic measurements like height, weight, BMI, etc. of the subject (basic attribute information). Fig. 6, [0069] teach in act 64, the machine trains a quantification network (target network model) based on the generated synthetic data, e.g., the pressure curve, to infer the physiological quantity, e.g., PV loop (a target physiological feature curve).); and
determining, based on the target physiological feature curve, the target physiological feature data corresponding to the target object (Fig. 6, [0064], [0069] teach in act 64, the machine trains a quantification network based on the generated synthetic data, e.g., the pressure curve, to infer the physiological quantity (target physiological feature data), e.g., receiving the PV loop data. See additionally [0002], [0045].)
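Purely as an illustration of the claimed dispatch between a data generalization type and a data prediction type (the function and model names below are hypothetical and are not drawn from Itu):

```python
# Hypothetical sketch: determine the data processing type from an
# instruction and invoke the corresponding target network model.
GENERALIZATION, PREDICTION = "data_generalization", "data_prediction"

MODELS = {
    # Generalization: expand each feature across a small floating range
    # (here, scale factors standing in for generalization curves).
    GENERALIZATION: lambda feats: [f * s for s in (0.95, 1.0, 1.05) for f in feats],
    # Prediction: reduce the features to a single predicted value
    # (here, the mean, standing in for a trained model's inference).
    PREDICTION: lambda feats: sum(feats) / len(feats),
}

def process(instruction: str, features: list[float]):
    """Determine the data processing type and invoke the matching model."""
    if instruction not in MODELS:
        raise ValueError(f"unknown data processing manner: {instruction}")
    return MODELS[instruction](features)
```

For example, `process(PREDICTION, [72.0, 68.0])` invokes the prediction branch and returns 70.0, while the generalization branch returns one generalized value set per scale factor.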
Itu does not explicitly teach the to-be-trained network model comprises a generative adversarial network model.
The rationale set forth above with respect to claim 1 applies equally to claim 10: the substitution of Itu's generative adversarial network for the trained and applied neural network is a simple substitution of one known element for another producing a predictable result (KSR rationale B); Amatya teaches the claimed matrix construction, column-wise normalization, and splicing, with motivation to combine found in Amatya at para. 0004, 0047, 0052-0053, 0060; and Liu teaches training the network model until it has a minimum loss function, with motivation to combine found in Liu at para. 0004-0005, 0023-0024, 0031, 0051.
Re. Claim 12, the subject matter of claim 12 is essentially defined in terms of a machine, which technically corresponds to method claim 3. Since claim 12 is analogous to claim 3, it is similarly analyzed and rejected in a manner consistent with the rejection of claim 3.
Re. Claim 16, the subject matter of claim 16 is essentially defined in terms of a machine, which technically corresponds to method claim 7. Since claim 16 is analogous to claim 7, it is similarly analyzed and rejected in a manner consistent with the rejection of claim 7.
Re. Claim 18, the subject matter of claim 18 is essentially defined in terms of a manufacture, which technically corresponds to method claim 3. Since claim 18 is analogous to claim 3, it is similarly analyzed and rejected in a manner consistent with the rejection of claim 3.
Response to Arguments
Rejections under 35 U.S.C. § 112(b)
Regarding the rejections, the Applicant has cancelled or amended the claims to obviate or overcome the previous issues of indefiniteness. The amended claims do not raise any new issues of indefiniteness.
Rejections under 35 U.S.C. § 101
Regarding the rejections, see note on Subject Matter Eligibility.
Rejections under 35 U.S.C. § 102 or § 103
Regarding the rejection of Claims 1-7 and 9-21, the Applicant has cancelled Claims 2, 4-6, 11, 13-15, 17 and 19-21, rendering the rejection of those claims moot.
Regarding the remaining claims 1, 3, 7, 9-10, 12, 16 and 18, the Examiner has considered the Applicant’s arguments but does not find them persuasive for at least the following reasons. Applicant argues:
1. “The Examiner cites Amatya to teach "constructing a to-be-processed matrix." Applicant respectfully submits that Amatya is non-analogous art… The claimed invention relates to simulating physiological feature data (e.g., continuous time-series data like ECG, blood pressure) for data augmentation. Amatya relates to genomic analysis for diagnosing hereditary diseases (Amatya, Abstract). A Person of Ordinary Skill in the Art (POSITA) developing a physiological simulation system (like Itu) would not look to genomic variant burden analysis for data pre-processing techniques, as the nature of the data (discrete genetic mutations vs. continuous physiological signals) is fundamentally different” (Remarks, pg. 13).
Re. argument 1: The Examiner respectfully submits that the references are analogous art to the claimed invention because the references are from the same field of endeavor as the claimed invention (even if one or more of the references address a different problem). The references used in the obviousness rejection are not required to be analogous art to each other. See MPEP 2141.01(a)(I).
In response to Applicant's argument that Amatya specifically is not analogous art to the claimed invention, the Examiner reasserts that Amatya is in the field of Applicant’s endeavor and may be relied upon as a basis for rejection of the claimed invention. See also In re Oetiker, 977 F.2d 1443, 24 USPQ2d 1443 (Fed. Cir. 1992). Applicant's invention pertains to a data processing method, electronic device and storage medium, and more particularly to healthcare data processing. The Examiner respectfully submits in this case that the Amatya reference was relied upon for teaching systems and methods for disease and trait prediction through genomic variants by implementing data processing methods on patient data (see Figs. 1, 4, 6F, 8A and para. 0056-0057, 0059, 0060, 0062, 0068). Thus, it is the position of the Examiner that the Amatya reference in question is in the field of the Applicant's endeavor (i.e., it relates to pre-processing patient data as in constructing a to-be-processed matrix in the manner specified in the Applicant’s claims, normalizing the matrix data, splicing sub-matrix data, inputting sub-matrix data, and training a network model using sub-matrix data), and is therefore analogous art.
2. “Amatya does not teach the claimed "performing a normalization process on each column ... to obtain a to-be-spliced submatrix, and splicing... to obtain a to-be-used matrix." Amatya describes a "variant burden matrix" (Para [0056]) containing discrete scores (0-4). While Amatya mentions normalization (Para [0068]), it is done for the purpose of Principal Component Analysis (PCA) to reduce dimensionality for a classification task. Amatya does not teach splicing normalized submatrices to construct an input for a generative model (like the claimed GAN or Diffusion model) to generate new synthetic data. Amatya's goal is diagnosis (classification), not generation (simulation)” (Remarks, pg. 13-14).
Re. argument 2: In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). The Examiner respectfully submits that, given the broadest reasonable interpretation (BRI), Itu in view of Amatya render obvious the claimed features. Amatya teaches a matrix. The vectors of the matrix (i.e., columns in the to-be-processed matrix) undergo normalization prior to undergoing PCA (i.e., to obtain a to-be-spliced submatrix) and generation of a reduced vector for training (i.e., splicing the to-be-spliced submatrix to obtain a to-be-used matrix). The input of Amatya is constructed for training a machine learning model. Itu renders obvious the use of a processed set of features as patient examples input for the training of Itu’s machine learning algorithm (Fig. 4 and para. 0048), which can be reasonably substituted with a generative adversarial network also disclosed by Itu as being machine trained (see, e.g., Fig. 6).
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., normalized submatrices (plural), generating new synthetic data) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Regardless, Itu’s Fig. 4 and para. 0048 also teach that training can be iteratively improved, and Itu’s Fig. 6 teaches training that generates synthetic samples.
3. “Liu fails to teach input data splicing…” (Remarks, pg. 14).
Re. argument 3: The Examiner accepts the Applicant’s argument that Liu splices intermediate feature vectors extracted by the neural network during the convolution process, which does not map to the splicing step of the claims as drafted, since the splicing of the claims produces an input feature vector for training a machine learning model, e.g., a generative adversarial network. Itu in view of Amatya, as previously discussed, render obvious the claimed feature.
4. “there is no motivation to combine the genomic variant matrix of Amatya with the physics-based simulation of Itu” (Remarks, pg. 14-15).
Re. argument 4: In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, the motivation to combine can be found in Amatya at para. 0004, 0047, 0052-0053, 0060.
In response to Applicant’s arguments that the combination of Itu and Amatya does not seem to result in anything operable: The Examiner respectfully submits that the principle of operation of Amatya is not changed. Per MPEP 2143.01(VI), obviousness may not be present where the principle of operation of a reference is changed in a manner that “would require a substantial reconstruction and redesign of the elements shown in [the primary reference] as well as a change in the basic principle under which the [primary reference] construction was designed to operate.” The Examiner submits that integrating the matrix operations and feature vector application of Amatya with the teachings of Itu would not require a substantial reconstruction and redesign of the elements shown in Itu, nor a change in the basic principle under which Itu’s construction was designed to operate.
Itu is relied upon for the specific teaching of artificial intelligence for physiological quantification that includes extracting features from patient data to generate a complete input feature set that is used for training the machine learning network (see Fig. 4 and para. 0023, 0033, 0035, 0048, 0076). Amatya is relied upon for teaching systems and methods for disease and trait prediction through genomic analysis, including feature extraction. Features can be extracted from patient data to form a vector of patient data in multidimensional space (see Figs. 1, 8A). The vector can be normalized and reduced using a dimensionality reduction technique to generate an input feature vector for training a machine learning model (see at least Fig. 6F). The reduced vector can be input as a feature vector to a machine learning model for training (see at least Figs. 6F, 8A and para. 0056-0057, 0059). As such, the Examiner respectfully submits that incorporating the systems and methods for disease and trait prediction of Amatya into Itu’s artificial intelligence for physiological quantification does not render the system and method of Itu inoperable.
In addition, the Examiner recognizes that obviousness is not determined by what the references expressly state but by what they would reasonably suggest to one of ordinary skill in the art, as supported by decisions in In re DeLisle, 406 F.2d 1326, 160 USPQ 806; In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); and In re Fine, 837 F.2d 1071, 1074, 5 USPQ2d 1596, 1598 (Fed. Cir. 1988) (citing In re Lalu, 747 F.2d 703, 705, 223 USPQ 1257, 1258 (Fed. Cir. 1984)). Further, it was determined in In re Lamberti, 545 F.2d 747, 192 USPQ 278 (CCPA 1976), that:
(i) obviousness does not require absolute predictability;
(ii) non-preferred embodiments of prior art must also be considered; and
(iii) the question is not express teaching of references, but what they would suggest.
Regarding the rejection of Claims 3, 7, 9-10, 12, 16 and 18, the Applicant has not offered any arguments with respect to these claims other than to reiterate the argument(s) presented for the claim(s) from which they depend or to which they are analogous. As such, the rejection of these claims is also maintained.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Abraham et al. (US 2022/0093217 A1; “Abraham” herein) for teaching the recitation in claims 2, 11, 17 of “training the to-be-trained network model based on the to-be-used matrix until the to-be-trained network model has a minimum loss function to obtain the target network model” (see Fig. 1A and para. 0114).
Nihtila et al. (US 2004/0002634 A1) for teaching a system and method for interacting with a user’s virtual physiological model via a mobile device. See, e.g., Fig. 2B, teaching a body temperature sensor.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jessica M Webb whose telephone number is (469)295-9173. The examiner can normally be reached Mon-Fri 9:00am-1:00pm CST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Morgan can be reached on (571) 272-6773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.M.W./Examiner, Art Unit 3683
/CHRISTOPHER L GILLIGAN/Primary Examiner, Art Unit 3683