DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. IB2023/055761, filed on June 5, 2023.
Response to Preliminary Amendment
In the preliminary amendment filed on November 27, 2024, claims 21-32 were cancelled. Claims 1-20 are pending.
Claim Objections
Claim 1 is objected to because of the following informalities: “the images” in p. 3, ll. 10, “of the sequence of images” in p. 3, ll. 11, “the future appearance of the wound,” in p. 3, ll. 14-15, “a future appearance of the wound” in p. 3, ll. 16-17, “the future time” in p. 3, ll. 22, “the sequence of images” in p. 3, ll. 23. These appear to be typographical errors. Appropriate correction is required. For examination purposes, the Examiner will interpret the claimed portions as “the one or more images”, “the sequence of one or more images”, “a future appearance of the wound,”, “the future appearance of the wound,”, “the corresponding future time”, “the sequence of one or more images”.
Claim 4 is objected to because of the following informalities: “the one or more predicted future images” in p. 4, ll. 1, “the first image” in p. 4, ll. 4. These appear to be typographical errors. Appropriate correction is required. For examination purposes, the Examiner will interpret the claimed portions as “the one or more predicted images”, “the first reconstructed image”.
Claim 9 is objected to because of the following informalities: “the historical wound” in p. 4, ll. 23. This appears to be a typographical error. Appropriate correction is required. For examination purposes, the Examiner will interpret the claimed portion as “the corresponding historical wound”.
Claim 13 is objected to because of the following informalities: “the images” in p. 5, ll. 13, “the sequence of images” in p. 5, ll. 14, “the sequence of images” in p. 5, ll. 16, “the future appearance of the wound,” in p. 5, ll. 16-17, “a future appearance of the wound” in p. 5, ll. 17-18, “the future time” in p. 5, ll. 22, “the sequence of images” in p. 5, ll. 23. These appear to be typographical errors. Appropriate correction is required. For examination purposes, the Examiner will interpret the claimed portions as “the one or more images”, “the sequence of one or more images”, “the sequence of one or more images”, “a future appearance of the wound,”, “the future appearance of the wound,”, “the corresponding future time”, “the sequence of one or more images”.
Claim 15 is objected to because of the following informalities: “the first image” in p. 6, ll. 4. This appears to be a typographical error. Appropriate correction is required. For examination purposes, the Examiner will interpret the claimed portion as “the first reconstructed image”.
Claim 20 is objected to because of the following informalities: “the treatment period” in p. 6, ll. 23. This appears to be a typographical error. Appropriate correction is required. For examination purposes, the Examiner will interpret the claimed portion as “a treatment period”.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “a processing unit” in claim 1, “the processing unit” in claim 12, “a processing unit” in claim 13.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1: Step 2A Prong One
Claim 1 recites, in relevant part (with the additional elements discussed below omitted):
pass the image capture data for the sequence of images, each of the one or more predicted images representative of the future appearance of the wound at a corresponding future time, the historical image data comprising one or more historical image data sets, each historical image data set of the one or more historical image data sets comprising image data for a historical sequence of images of an appearance of a corresponding historical wound, wherein a prediction time interval between the corresponding future time and a capture time of a last image of the one or more sequence of images is greater than each of the sampling time intervals, and
output the image data representing the one or more predicted images of the future appearance of the wound
These limitations, as drafted and given their broadest reasonable interpretation, but for the recitation of generic computer components, encompass managing interactions between people (including following rules or instructions), which is a subgrouping of Certain Methods of Organizing Human Activity. For example, but for the “a memory; and a processing unit having one or more processors coupled to the memory, the one or more processors configured to execute instructions that cause the processing unit to:”, “…through a machine learning model trained to generate image data representing one or more predicted images of a future appearance of the wound…”, and “…the machine learning model trained using historical image data,…” language, the “pass” function in the context of this claim encompasses a user following instructions to determine, from the image capture data, image data representing one or more predicted images of a future appearance of the wound. Similarly, but for the “a memory; and a processing unit having one or more processors coupled to the memory, the one or more processors configured to execute instructions that cause the processing unit to:” language, the “output” function in the context of this claim encompasses a user following instructions to communicate the image data representing the one or more predicted images of the future appearance of the wound. These steps could be accomplished by a person following instructions to determine prediction results from data and communicate the determination, and therefore encompass Certain Methods of Organizing Human Activity.
Claim 1: Step 2A Prong Two
This judicial exception is not integrated into a practical application because the remaining elements amount to no more than general purpose computer components programmed to perform the abstract idea, insignificant extra-solution activity, and generally linking the abstract idea to a technical environment.
Claim 1, directly or indirectly, recites the following generic computer components configured to implement the abstract idea: “a memory; and a processing unit having one or more processors coupled to the memory, the one or more processors configured to execute instructions that cause the processing unit to:”. As set forth in the 2019 Eligibility Guidance, 84 Fed. Reg. at 55, “merely including instructions to implement an abstract idea on a computer” is an example of when an abstract idea has not been integrated into a practical application.
Additionally, the claim recites “obtain image capture data for a sequence of one or more images representative of an appearance of a wound at a corresponding image capture time, each of the one or more images prior to a final image of the sequence of one or more images separated by a sampling time interval between the image and a next image” at a high degree of generality, amounting to no more than receiving or transmitting data over a network, e.g., using the Internet to gather data. See Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information). As set forth in MPEP 2106.05(d)(II), performing computer functions that are well-understood, routine, and conventional, when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity, is an example of when an abstract idea has not been integrated into a practical application.
Additionally, the claim recites “…through a machine learning model trained to generate image data representing one or more predicted images of a future appearance of the wound…” and “…the machine learning model trained using historical image data,…” at a high degree of generality, amounting to no more than generally linking the abstract idea to a particular technical environment. The recitation is also similar to adding the words “apply it” to the abstract idea. As set forth in MPEP 2106.05(f), merely reciting the words “apply it” or an equivalent is an example of when an abstract idea has not been integrated into a practical application.
Claim 1: Step 2B
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a computer configured to perform the above-identified functions amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. See Alice, 573 U.S. at 223 (“mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention.”)
Insignificant extra-solution activity, such as data gathering, has been found not to amount to significantly more than an abstract idea (see MPEP 2106.05(g)). Therefore, whether considered alone or in combination, the additional elements do not amount to significantly more than the abstract idea.
Additionally, generally linking the abstract idea to a particular technological environment does not amount to significantly more than the abstract idea (see MPEP 2106.05(h) and Affinity Labs of Texas, LLC v. DIRECTV, LLC, 838 F.3d 1253, 120 USPQ2d 1201 (Fed. Cir. 2016)).
Dependent claims 2-12 incorporate the abstract idea identified above and recite additional limitations that expand on the abstract idea. For example, claims 2-3 further describe the treatment method parameters. Similarly, claims 4-10 and 12 further describe the “apply it” recitations. Finally, claim 11 further describes the prediction time interval. Therefore, these claims recite limitations that fall into the Certain Methods of Organizing Human Activity grouping of abstract ideas.
Dependent claims 2-12 recite additional subject matter which amounts to limitations consistent with the additional elements in independent claim 1 (such as claim 4, which recites “the machine learning model” at a high degree of generality, amounting to no more than generally linking the abstract idea to a particular technical environment). The recitation is also similar to adding the words “apply it” to the abstract idea. As set forth in MPEP 2106.05(f), merely reciting the words “apply it” or an equivalent is an example of when an abstract idea has not been integrated into a practical application.
Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation and do not impose a meaningful limit to integrate the abstract idea into a practical application. The claims are not patent eligible.
Claim 13 recites functions similar to those of claim 1, but in method form, and is rejected under 35 U.S.C. 101 for the same reasons. Claim 13 does not recite the generic computer component of “a memory”; the analysis above otherwise applies.
Dependent claims 14-20 incorporate the abstract idea identified above and recite additional limitations that expand on the abstract idea. For example, claims 14-20 further describe the “apply it” recitations. Therefore, these claims recite limitations that fall into the Certain Methods of Organizing Human Activity grouping of abstract ideas.
Dependent claims 14-20 recite additional subject matter which amounts to limitations consistent with the additional elements in independent claim 1 (such as claims 14-20 reciting additional limitations that amount to “apply it” recitations). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation and do not impose a meaningful limit to integrate the abstract idea into a practical application. The claims are not patent eligible.
Therefore, whether considered alone or in combination, the additional elements do not amount to significantly more than the abstract idea.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Fan et al. (U.S. Patent Pre-Grant Publication No. 2021/0201479, hereinafter “Fan”).
As per independent claim 1, Fan discloses a system comprising:
a memory (See [0173]: Any of the machine learning systems and/or methods of the present technology may be implemented on or in communication with processors and/or memory of the various imaging systems and devices of the present disclosure); and
a processing unit having one or more processors coupled to the memory, the one or more processors (See [0173]: Any of the machine learning systems and/or methods of the present technology may be implemented on or in communication with processors and/or memory of the various imaging systems and devices of the present disclosure) configured to execute instructions that cause the processing unit to:
obtain image capture data for a sequence of one or more images (See [0116]-[0117]: The processor along with capture control module 1135, multi-aperture spectral camera, and working memory 1105 represent one means for capturing a set of spectral images and/or a sequence of images) representative of an appearance of a wound at (See [0138]: Where PPG information is included, the disclosed imaging systems provide a method to assess pathologies involving changes to tissue blood flow and pulse rate including: tissue perfusion; cardiovascular health; wounds such as ulcers; peripheral arterial disease, and respiratory health) a corresponding image capture time, each of the one or more images prior to a final image of the sequence of one or more images separated by a sampling time interval between the image and a next image (See [0138]-[0139]: A number of different images at the same wavelength corresponding to different times (PPG data), which the Examiner is interpreting to encompass a corresponding image capture time, each of the one or more images prior to a final image of the sequence of one or more images separated by a sampling time interval between the image and a next image as the last image captured would be the final image),
pass the image capture data for the sequence of images through a machine learning model trained to generate image data representing one or more predicted images of a future appearance of the wound (See [0140]-[0144]: The multispectral datacube can be analyzed as input data into a machine learning model to generate a classified mapping of the imaged tissue, which the Examiner is interpreting a machine learning model to encompass a machine learning model trained to generate image data representing one or more predicted images of a future appearance of the wound as the machine learning model can be an artificial neural network ([0141]), and the training data can include multispectral datacubes (the input) and classified mappings (the expected output) that have been labeled, for example by a clinician who has designated areas of the wound that correspond to certain clinical states ([0144])), each of the one or more predicted images representative of the future appearance of the wound at a corresponding future time (See [0140]-[0144]: Other implementations of the machine learning model can be trained to make other types of predictions, for example the likelihood of a wound healing to a particular percentage area reduction over a specified time period (e.g., at least 50% area reduction within 30 days) or wound states such as, hemostasis, inflammation, pathogen colonization, proliferation, remodeling or healthy skin categories, which the Examiner is interpreting other types of predictions to encompass each of the one or more predicted images representative of the future appearance of the wound at a corresponding future time), the machine learning model trained using historical image data, the historical image data comprising one or more historical image data sets, each historical image data set of the one or more historical image data sets comprising image data for a historical sequence of images of an appearance of a corresponding historical wound (See Table 1, [0172]: The disclosed machine learning systems can learn to determine wound healing potential through being exposed to large volumes of labeled training data, which the Examiner is interpreting the labeled training data to encompass the machine learning model trained using historical image data), wherein a prediction time interval between the corresponding future time and a capture time of a last image of the one or more sequence of images is greater than each of the sampling time intervals (See [0138]-[0139]: A number of different images at the same wavelength corresponding to different times (PPG data), which the Examiner is interpreting the final time of the PPG data to encompass a capture time of a last image of the one or more sequence of images is greater than each of the sampling time intervals as the final time of the PPG data would be greater than the earlier times), and
output the image data representing the one or more predicted images of the future appearance of the wound (See Fig. 24-25, [0161]-[0162]: This can in turn be provided to a machine learning classifier, for example a fully connected feedforward artificial neural network or the system shown in FIG. 25, in order to output a healing prediction for the imaged ulcer or other wound, which the Examiner is interpreting output a healing prediction for the imaged ulcer or other wound to encompass the claimed portion as a healing prediction of an ulcer is an example of a future appearance of a wound.)
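For clarity of record regarding the mapping of claim 1 above, the claimed data flow may be illustrated with a brief sketch. The sketch below is illustrative only: the names predict_future_images and wound_model are hypothetical and appear in neither the claims nor Fan, and the model is assumed to be any trained callable of the kind described in Fan [0140]-[0144].

    import numpy as np

    def predict_future_images(model, images, capture_times, future_times):
        # Sampling time intervals between consecutive captured images.
        sampling_intervals = np.diff(np.asarray(capture_times, dtype=float))
        last_capture = capture_times[-1]
        predictions = []
        for future_time in future_times:
            # Per the claim, the prediction time interval between the future
            # time and the capture time of the last image exceeds each of
            # the sampling time intervals.
            prediction_interval = future_time - last_capture
            assert prediction_interval > sampling_intervals.max()
            predictions.append(model(images, future_time))
        return predictions

A trained model would then be invoked as, e.g., predict_future_images(wound_model, images, [0, 2, 4], [30]) to output a predicted image of the wound at day 30.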
Claim 13 mirrors claim 1, but in a different statutory category, and is rejected for the same reasons as claim 1.
As per claim 2, Fan discloses the system of claim 1 as described above. Fan further teaches wherein the image capture data includes metadata identifying a treatment method or one or more treatment method parameters (See [0200]-[0201]: The results of a layer within the convolutional neural network can be modified by information from another source, clinical data from a subject's medical history or treatment plan (e.g., patient health metrics or clinical variables as described herein) can be used as the source of this modification, which the Examiner is interpreting treatment plan to encompass metadata identifying a treatment method or one or more treatment method parameters.)
As per claim 3, Fan discloses the system of claims 1 and 2 as described above. Fan further teaches wherein the treatment method parameters include negative-pressure wound therapy (NPWT) parameters (See [0157]-[0159]: A wound assessment system or a clinician can determine an appropriate level of wound care therapy based on the results of the machine learning algorithms, the AWC therapies include negative-pressure wound therapy.)
As per claim 4, Fan discloses the system of claim 1 as described above. Fan further teaches wherein the machine learning model is trained bi-directionally, wherein a first direction of training trains the machine learning model to generate the one or more predicted future images from the historical sequence of images and wherein a second direction of training trains the machine learning model to generate a reconstructed first image from the one or more predicted images and images in the historical sequence of images subsequent to the first reconstructed image (See [0167]-[0169]: An artificial neural network may be an adaptive system that is configured to change its structure (e.g., the connection configuration and/or weights) based on information that flows through the network during training, and the weights of the hidden layers can be considered as an encoding of meaningful patterns in the data, which the Examiner is interpreting the encoder to encompass a first direction of training trains the machine learning model to generate the one or more predicted future images from the historical sequence of images, and interpreting the decoder to encompass a second direction of training trains the machine learning model to generate a reconstructed first image from the one or more predicted images and images in the historical sequence of images subsequent to the first reconstructed image as the goal of certain autoencoders is to compress the input data with the encoder, then decompress this encoded data with the decoder such that the output is a good/perfect reconstruction of the original input data.)
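The encoder/decoder reading applied in the paragraph above may be illustrated with a minimal sketch of a single-hidden-layer autoencoder of the kind described in Fan [0167]-[0169]. The sketch is an illustrative assumption only, not Fan's disclosed implementation, and all names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    n_pixels, n_latent = 64, 8

    # Encoder and decoder weight matrices of a minimal autoencoder.
    W_enc = rng.normal(scale=0.1, size=(n_pixels, n_latent))
    W_dec = rng.normal(scale=0.1, size=(n_latent, n_pixels))

    def encode(x):
        # "First direction": compress an input image to a latent code.
        return np.tanh(x @ W_enc)

    def decode(z):
        # "Second direction": reconstruct the original image from the code.
        return z @ W_dec

    x = rng.normal(size=(1, n_pixels))                # a flattened wound image
    reconstruction = decode(encode(x))
    loss = float(np.mean((x - reconstruction) ** 2))  # reconstruction error

Training such a model drives the reconstruction error toward zero, which is the sense in which an autoencoder's output is a reconstruction of its original input.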
Claim 15 mirrors claim 4, but in a different statutory category, and is rejected for the same reasons as claim 4.
As per claim 5, Fan discloses the system of claims 1 and 4 as described above. Fan further teaches wherein layers in the machine learning model are shared by the first direction of training and the second direction of training (See [0168]-[0169]: A fully connected neural network is one in which each node in the input layer is connected to each node in the subsequent layer (the first hidden layer), each node in that first hidden layer is connected in turn to each node in the subsequent hidden layer, and so on until each node in the final hidden layer is connected to each node in the output layer.)
Claim 16 mirrors claim 5, but in a different statutory category, and is rejected for the same reasons as claim 5.
As per claim 6, Fan discloses the system of claims 1 and 4 as described above. Fan further teaches wherein:
the machine learning model comprises a second machine learning model (See [0191]-[0192]: The compressed image vector was used as an input to a second supervised machine learning algorithm);
a first machine learning model is trained prior to the second machine learning model using a first training image data set (See [0189]-[0192]: Upon extraction of the compressed image vector, the compressed image vector was used as an input to a second supervised machine learning algorithm, which the Examiner is interpreting the input to a second machine learning algorithm to encompass a first machine learning model is trained prior to the second machine learning model using a first training image data set as the identical encoder-decoder algorithm (interpreting to encompass first trained algorithm) was used for all images in the data set) that includes a first subset of images of the historical sequence of images captured during a sampling period associated with historical wound images and a second subset of images captured after the sampling period (See [0190]-[0192]: The identical encoder-decoder algorithm was used for all images in the data set, which the Examiner is interpreting all images in the data set to encompass a first training image data set that includes a first subset of images of the historical sequence of images captured during a sampling period associated with historical wound images and a second subset of images captured after the sampling period); and
the second machine learning model is constrained to include one or more layers of the first machine learning model (See [0168]-[0169]: A fully connected neural network is one in which each node in the input layer is connected to each node in the subsequent layer (the first hidden layer), each node in that first hidden layer is connected in turn to each node in the subsequent hidden layer, and so on until each node in the final hidden layer is connected to each node in the output layer.)
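The mapping of “constrained to include one or more layers of the first machine learning model” in the paragraph above corresponds to the common practice of reusing trained layers in a downstream model, as in the encoder-to-second-algorithm arrangement of Fan [0189]-[0192]. The following is a minimal sketch under that assumption; all names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)

    # First model: an encoder layer assumed to have been trained earlier.
    W_encoder = rng.normal(scale=0.1, size=(64, 8))

    def first_model_encoder(x):
        return np.tanh(x @ W_encoder)

    # Second model: constrained to include the first model's encoder layer,
    # followed by its own newly initialized task-specific layer.
    W_head = rng.normal(scale=0.1, size=(8, 2))

    def second_model(x):
        z = first_model_encoder(x)   # reused layer from the first model
        return z @ W_head            # new layer trained for the second task

    scores = second_model(rng.normal(size=(1, 64)))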
Claim 17 mirrors claim 6, but in a different statutory category, and is rejected for the same reasons as claim 6.
As per claim 7, Fan discloses the system of claims 1, 4, and 6 as described above. Fan further teaches wherein the first machine learning model is trained bi-directionally (See [0167]-[0169]: An artificial neural network may be an adaptive system that is configured to change its structure (e.g., the connection configuration and/or weights) based on information that flows through the network during training, and the weights of the hidden layers can be considered as an encoding of meaningful patterns in the data, which the Examiner is interpreting the encoder/decoder to encompass bi-directionally.)
Claim 18 mirrors claim 7, but in a different statutory category, and is rejected for the same reasons as claim 7.
As per claim 8, Fan discloses the system of claims 1, 4, and 6 as described above. Fan further teaches wherein the one or more layers comprise a final layer, penultimate layer, or one or more mid-level layers (See [0168]: A fully connected neural network is one in which each node in the input layer is connected to each node in the subsequent layer (the first hidden layer), each node in that first hidden layer is connected in turn to each node in the subsequent hidden layer, and so on until each node in the final hidden layer is connected to each node in the output layer.)
As per claim 9, Fan discloses the system of claim 1 as described above. Fan further teaches wherein:
the machine learning model comprises a second machine learning model (See [0191]-[0192]: The compressed image vector was used as an input to a second supervised machine learning algorithm);
a first machine learning model is trained prior to the second machine learning model using a first training image data set (See [0189]-[0192]: Upon extraction of the compressed image vector, the compressed image vector was used as an input to a second supervised machine learning algorithm, which the Examiner is interpreting the input to a second machine learning algorithm to encompass a first machine learning model is trained prior to the second machine learning model using a first training image data set as the identical encoder-decoder algorithm (interpreting to encompass first trained algorithm) was used for all images in the data set) that includes a first subset of images of the historical sequence of images captured during a sampling period associated with the corresponding historical wound and a second subset of images captured during the sampling period, wherein a number of images in the first subset of images is greater than the number of images in the second subset of images (See [0168]-[0170]: The dimensionality reduction allows the autoencoder neural network to learn the most salient features of the input images, where the innermost layer (or another inner layer) of the autoencoder represents a “feature reduction” version of the input, this can serve to reduce an image having, for example, approximately 1 million pixels (where each pixel value can be considered as a separate feature of the image) to a feature set of around 50 values, and the reduced-dimensionality representation of the images can be used by another machine learning model, which the Examiner is interpreting the reduced images to encompass a number of images in the first subset of images is greater than the number of images in the second subset of images); and
the second machine learning model is constrained to use one or more layers of the first machine learning model (See [0168]-[0169]: A fully connected neural network is one in which each node in the input layer is connected to each node in the subsequent layer (the first hidden layer), each node in that first hidden layer is connected in turn to each node in the subsequent hidden layer, and so on until each node in the final hidden layer is connected to each node in the output layer.)
Claim 20 mirrors claim 9, but in a different statutory category, and is rejected for the same reasons as claim 9.
As per claim 10, Fan discloses the system of claims 1 and 9 as described above. Fan further teaches wherein the first machine learning model is trained bi-directionally (See [0167]-[0169]: An artificial neural network may be an adaptive system that is configured to change its structure (e.g., the connection configuration and/or weights) based on information that flows through the network during training, and the weights of the hidden layers can be considered as an encoding of meaningful patterns in the data, which the Examiner is interpreting the encoder/decoder to encompass bi-directionally.)
As per claim 11, Fan discloses the system of claim 1 as described above. Fan further teaches wherein the prediction time interval is greater than an input time interval associated with the sequence of images (See [0011]-[0012]: Determining the predicted healing parameter over the predetermined time interval, which the Examiner is interpreting the predetermined time interval to encompass the prediction time interval is greater than an input time interval as the predetermined time interval can be 30 days (Fan, claim 4).)
As per claim 12, Fan discloses the system of claim 1 as described above. Fan further teaches wherein the machine learning model is trained using historical metadata corresponding to the historical image sequence (See [0070], [0144]: During training, an artificial neural network can be exposed to pairs in its training data and can modify its parameters to be able to predict the output of a pair when provided with the input, the training data can include multispectral datacubes (the input) and classified mappings (the expected output) that have been labeled, for example by a clinician who has designated areas of the wound that correspond to certain clinical states, and/or with healing (1) or non-healing (0) labels sometime after initial imaging of the wound when actual healing is known), and wherein the processing unit is further configured to:
obtain metadata comprising wound properties of the wound corresponding to the sequence of images, the wound properties comprising one or more of wound area, wound depth, or wound healing stage (See [0070], [0144], [0165]-[0166]: During training, an artificial neural network can be exposed to pairs in its training data and can modify its parameters to be able to predict the output of a pair when provided with the input, the training data can include multispectral datacubes (the input) and classified mappings (the expected output) that have been labeled, for example by a clinician who has designated areas of the wound that correspond to certain clinical states, and/or with healing (1) or non-healing (0) labels sometime after initial imaging of the wound when actual healing is known, which the Examiner is interpreting areas of the wound that correspond to certain clinical states to encompass the wound properties comprising one or more of wound area, wound depth, or wound healing stage);
pass the metadata through the machine learning model to generate predicted metadata for the wound at the corresponding future time (See [0141]-[0144]: These metrics can be converted into a vector representation through appropriate processing, for example through word-to-vec embeddings, a vector having binary values representing whether the patient does or does not have the patient metric (e.g., does or does not have type I diabetes), or numerical values representing a degree to which the patient has each patient metric, which the Examiner is interpreting the metrics can be converted into a vector to encompass pass the metadata through the machine learning model as the final hidden layer is connected to each node in the output layer); and
output the predicted metadata (See [0141]-[0144]: The classified mappings are the expected output, which the Examiner is interpreting to encompass the claimed portion.)
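The conversion of clinical metadata into a vector representation relied on in the mapping above (Fan [0141]-[0144]) may be illustrated with a short sketch. The metric names below are hypothetical examples, not Fan's.

    # Binary encoding of patient metrics into a fixed-length vector, of the
    # kind described in Fan [0141]-[0144]. Metric names are illustrative only.
    METRICS = ["type_1_diabetes", "smoker", "peripheral_arterial_disease"]

    def metadata_to_vector(patient_metadata):
        return [1 if patient_metadata.get(metric) else 0 for metric in METRICS]

    vector = metadata_to_vector({"smoker": True})  # -> [0, 1, 0]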
As per claim 14, Fan discloses the method of claim 13 as described above. Fan further teaches wherein the machine learning model is trained using a weighted loss that assigns a first weight to a first image that is less than a second weight assigned to a second image having a corresponding predicted future time that is later than the predicted future time corresponding to the first image (See [0141]-[0143], [0169]-[0170]: The nodes in each convolutional layer of a CNN can share weights such that the convolutional filter of a given layer is replicated across the entire width and height of the input volume (e.g., across an entire frame), reducing the overall number of trainable weights and increasing applicability of the CNN to data sets outside of the training data; the values of a layer may be pooled to reduce the number of computations in a subsequent layer (e.g., values representing certain pixels may be passed forward while others are discarded); and further along the depth of the CNN pool masks may reintroduce any discarded values to return the number of data points to the previous size. The Examiner is interpreting the weights to encompass a first weight assigned to a first image that is less than a second weight assigned to a second image; is interpreting the disclosure that the algorithm can take data from an RGB image, and optionally the subject's medical history or other clinical variables, and output a predicted healing parameter such as a conditional probability that indicates whether the DFU will respond to 30 days of standard wound care therapy ([0184]) to encompass a second image having a corresponding predicted future time that is later than the predicted future time corresponding to the first image, as a second image could predict a future time later than 30 days if the second image was taken at a later time; and is interpreting the pooling of the values of a layer to reduce the number of computations in a subsequent layer to encompass the weighted loss.)
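For clarity, the weighted-loss limitation of claim 14 may be illustrated with a brief sketch in which a per-image weight increases with the corresponding predicted future time. The linear weighting below is an illustrative assumption; neither the claims nor Fan specifies a particular weighting scheme.

    import numpy as np

    def weighted_loss(predicted, target, future_times):
        # Later predicted future times receive larger weights, so a first
        # image is assigned a smaller weight than a second image whose
        # predicted future time is later.
        future_times = np.asarray(future_times, dtype=float)
        weights = future_times / future_times.sum()
        per_image_error = np.mean((predicted - target) ** 2, axis=1)
        return float(np.sum(weights * per_image_error))

    rng = np.random.default_rng(2)
    predicted = rng.normal(size=(3, 64))   # predictions for days 7, 14, 30
    target = rng.normal(size=(3, 64))
    loss = weighted_loss(predicted, target, [7, 14, 30])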
As per claim 19, Fan discloses the method of claims 15 and 17 as described above. Fan further teaches wherein the one or more layers comprise a final layer (See [0168]: A fully connected neural network is one in which each node in the input layer is connected to each node in the subsequent layer (the first hidden layer), each node in that first hidden layer is connected in turn to each node in the subsequent hidden layer, and so on until each node in the final hidden layer is connected to each node in the output layer.)
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Min et al. (U.S. Patent No. 11,861,833) describes systems, devices, and methods configured to generate one or more assessments of plaque-based diseases from raw medical images using one or more identified features and/or quantified parameters.
Lee et al. (U.S. Patent Pre-Grant Publication No. 2016/0171683) describes an apparatus for diagnosis of a medical image that includes a storage having a predetermined size, the storage being configured to store sample frames sampled from among received frames which are received from a medical imaging device; a frame collector configured to, once a reference frame is determined, collect one or more sample frames stored in the storage; and a diagnosis component configured to provide a diagnosis for the reference frame based on diagnostic results associated with the one or more collected sample frames.
Deng et al. (“CT Image Analysis and Clinical Diagnosis of New Coronary Pneumonia Based on Improved Convolutional Neural Network”) describes an improved convolutional neural network for in-depth analysis of CT images of novel coronavirus pneumonia, using the U-Net series of deep neural networks to semantically segment the CT images to obtain a binary image with the pneumonia area as the foreground and the remaining areas as the background, providing a basis for subsequent image diagnosis.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Bennett S Erickson whose telephone number is (571)270-3690. The examiner can normally be reached Monday - Friday: 9:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Morgan can be reached at (571) 272-6773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Bennett Stephen Erickson/Primary Examiner, Art Unit 3683