Prosecution Insights
Last updated: April 19, 2026
Application No. 17/820,837

MACHINE LEARNING SYSTEMS AND METHODS FOR ASSESSMENT, HEALING PREDICTION, AND TREATMENT OF WOUNDS

Non-Final OA — §103, §112
Filed: Aug 18, 2022
Examiner: WINDSOR, COURTNEY J
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Spectral MD, Inc.
OA Round: 3 (Non-Final)

Grant Probability: 86% (Favorable)
OA Rounds: 3-4
To Grant: 2y 7m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 86% — above average (217 granted / 252 resolved; +24.1% vs TC avg)
Interview Lift: +9.4% (moderate), measured over resolved cases with interview
Avg Prosecution (typical timeline): 2y 7m; 32 applications currently pending
Total Applications (career history): 284, across all art units

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§103: 51.1% (+11.1% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Tech Center average estimates based on career data from 252 resolved cases.

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 16, 2026 has been entered.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on March 4, 2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

Claims 1 and 42-43 have been amended, changing the scope and contents of the claims. Applicant's amendment filed February 16, 2026 overcomes the following objections/rejections from the last Office Action of November 17, 2025:
- Objections to the drawings for containing color
- Previous rejections of the claims under 35 USC § 112(b)

Response to Arguments

Applicant's arguments filed February 16, 2026 have been fully considered but they are not persuasive.

Regarding claim 1, Applicant argues, “Applicant respectfully submits that Wang's description of using a sequence of wound images from various times following an initial time frame does not teach or suggest at least "the at least one scalar value corresponding to a predicted or assessed healing parameter over a predetermined time interval following generation of the image, wherein the at least one scalar value is generated on or near day 0 of therapy," as recited in Applicant's amended claim 1. Further, Applicant respectfully submits that Fan or Yang do not cure the above-noted deficiencies in the teachings of Wang. Accordingly, Applicant respectfully submits that claim 1 is not obvious in view of Yang, alone or in combination with Fan, and requests that the rejection of claim 1 under 35 U.S.C. § 103 be withdrawn (Remarks, 11-12).”

The examiner respectfully disagrees. As a first note, the examiner would like to point out the interpretation used for the amended claim language. Claim 1 has been amended to recite, “wherein the at least one scalar value is generated on or near day 0 of therapy.” The term “near” has not been defined by the specification; the closest reference to the stated terms is at Applicant's PG Publication, paragraph 0058, “on or near day 0 of therapy, rather than being deferred until over 4 weeks from the initial presentation.” In plain language, this section reads as though “near” could reference anything before 4 weeks. Thus, though this is not read as a true definition, for the sake of examination, “on or near day 0 of therapy” will be read as “before 4 weeks of therapy.” Based on that interpretation, the examiner disagrees with Applicant's arguments. Specifically, it appears the central point of Applicant's argument is that “Wang's generation of a healing progress prediction relies on images taken over the course of multiple weeks (Remarks, 11).” Plainly stated, “the course of multiple weeks” falls within the interpretation of “before 4 weeks.” Thus, Wang in fact does read on the claim limitation of “wherein the at least one scalar value is generated on or near day 0 of therapy.”

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-43 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

The term “near” in claim 1 is a relative term which renders the claim indefinite. The term “near” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Applicant's PG Publication paragraph 0058 notes, “on or near day 0 of therapy, rather than being deferred until over 4 weeks from the initial presentation.” Though this is not read as a true definition, for the sake of examination, “on or near day 0 of therapy” will be read as “before 4 weeks of therapy.”

Claims 2-43 are rejected based on their dependency on claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-8, 14-34 and 36-43 are rejected under 35 U.S.C.
103 as being unpatentable over Yang, Qian, et al., "Investigation of the performance of hyperspectral imaging by principal component analysis in the prediction of healing of diabetic foot ulcers," Journal of Imaging 4.12 (2018): 144 (hereinafter Yang), and further in view of U.S. Publication No. 2022/0240783 to Fan et al. (hereinafter Fan) and C. Wang et al., "A unified framework for automatic wound segmentation and analysis with deep convolutional neural networks," 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 2015, pp. 2415-2418, doi: 10.1109/EMBC.2015.7318881 (hereinafter Wang).

Regarding independent claim 1, Yang discloses A system for assessing or predicting wound healing (abstract, “Hyperspectral imaging (HSI) is a tool that has the potential to meet this clinical need. Due to the different absorption spectra of oxy- and deoxyhemoglobin,” … “It is concluded that HSI may be a better predictor of healing when analyzed by PCA than by SpO2.”), the system comprising:

at least one light detection element (page 3, “The HSI setup is shown in Figure 1. Illumination of the foot was via 16 × 1 W white light-emitting diodes (LEDs) (LXHL-MWEC, Lumileds™ Lighting, San Jose, CA, USA) with 8 units placed on either side of the camera. Light scattered from the foot was passed through an aperture, which controlled the amount of collected light and was focused onto a detector by a C-mount lens (f = 15 mm, f# = 2.2; Schneider).”) configured to collect light of at least a first wavelength after being reflected from a tissue region comprising a wound or portion thereof (page 3, “For the measurements taken in this study, each 3D data cube contained 2D spatial images (120 × 170 pixels) over a wavelength range from 430 nm to 750 nm (272 values). The sweep of the system moves from heel to toe and takes ~30 s to obtain an image, with an exposure time of 100 ms per row;” page 4, “Intensity hypercubes of the ulcer site were obtained for each participant, and the data were processed using SpO2 algorithms and PCA,”); and

one or more processors in communication with the at least one light detection element (page 3, “The HSI setup is shown in Figure 1. Illumination of the foot was via 16 × 1 W white light-emitting diodes (LEDs) (LXHL-MWEC, Lumileds™ Lighting, San Jose, CA, USA) with 8 units placed on either side of the camera;” see also Figure 1, “PC”) and configured to:

receive a signal from the at least one light detection element, the signal representing light of the first wavelength reflected from the tissue region (page 2, “HSI is a noninvasive technique by which images are formed at different wavelengths to produce a hypercube (x, y, λ);” Figure 1);

generate, based on the signal, an image having a plurality of pixels depicting the tissue region (page 2, “HSI is a noninvasive technique by which images are formed at different wavelengths to produce a hypercube (x, y, λ);” Figure 1);

determine, based on at least a subset of the segmented plurality of pixels, one or more optically determined tissue features of the wound or portion thereof (page 2, “A spectrum for each pixel was compared with standard tissue to determine measures of oxyhemoglobin and deoxyhemoglobin.”).

Yang fails to explicitly disclose as further recited.
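For context on the hypercube format Yang relies on, a hyperspectral cube indexes a reflectance value by two spatial coordinates and a wavelength band, so every spatial pixel carries a full spectrum that can be fed to SpO2 or PCA analysis. A minimal illustrative sketch (toy dimensions and values, not Yang's actual 120 × 170 × 272 cube):

```python
# Minimal sketch: extracting data from a hyperspectral cube stored as
# nested lists cube[y][x][band]. Dimensions and values are illustrative.

def pixel_spectrum(cube, x, y):
    """Return the full spectral vector for one spatial pixel."""
    return cube[y][x]

def band_image(cube, band):
    """Return the 2D spatial image at one wavelength band."""
    return [[pixel[band] for pixel in row] for row in cube]

# Toy 2x2 spatial grid with 3 wavelength bands per pixel.
cube = [
    [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]],
    [[0.7, 0.8, 0.9], [0.2, 0.3, 0.4]],
]

print(pixel_spectrum(cube, 1, 0))  # [0.4, 0.5, 0.6]
print(band_image(cube, 0))         # [[0.1, 0.4], [0.7, 0.2]]
```

The same cube can thus be sliced either per pixel (a spectrum, as in Yang's per-pixel comparison against standard tissue) or per band (a 2D image at one wavelength).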
However, Fan discloses automatically segment the plurality of pixels of the image into at least wound pixels and non-wound pixels (paragraph 0249, “The machine learning classifier 2310 generates per-pixel classification wherein the output includes a classification value for each [x,y] pixel location;” paragraph 0250, “During implementation of the machine learning classifier 2310, the output classification values can be used to generate tissue map 2315 to indicate pixels corresponding to tissue of various designated categories of tissue health.”).

Yang is directed toward wound healing prediction (abstract, “It is therefore valuable to compare the performance of PCA with SpO2 measurement in the prediction of wound healing”). Fan is directed toward, “Additionally, alternatives described herein are used with a variety of tissue classification applications including assessing the presence and severity of tissue conditions (abstract).” As can be seen by one of ordinary skill in the art, both Yang and Fan are directed toward similar fields of endeavor, namely tissue image analysis. Further, it was well known to one of ordinary skill in the art at the time of filing the claimed invention that machine learning techniques are more efficient and often more accurate than physicians, while also not consuming physician time. Thus, it would have been obvious to a person having ordinary skill in the art at the time the claimed invention was filed to incorporate the teaching of Fan in order to utilize machine learning techniques for tissue predictions, becoming overall more efficient and accurate, ideally leading to better patient outcomes.

Yang and Fan in the combination fail to explicitly disclose as further recited.
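The per-pixel classification Fan describes (a classification value at each [x, y] location, assembled into a tissue map) can be sketched in simplified form. This is an illustrative stand-in only; Fan uses a trained machine learning classifier, not the fixed threshold rule used here:

```python
# Illustrative sketch of per-pixel wound vs. non-wound classification in
# the spirit of Fan's tissue map: each pixel receives a class label from
# a per-pixel feature value. The threshold rule is a stand-in for a
# trained classifier.

def tissue_map(feature_grid, wound_threshold=0.5):
    """Label each pixel 'wound' or 'non-wound' from a per-pixel feature."""
    return [["wound" if v >= wound_threshold else "non-wound" for v in row]
            for row in feature_grid]

features = [[0.9, 0.1],
            [0.6, 0.4]]
print(tissue_map(features))
# [['wound', 'non-wound'], ['wound', 'non-wound']]
```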
However, Wang discloses and generate, using one or more machine learning algorithms, at least one scalar value based on the one or more optically determined features of the wound or portion thereof (page 2416, right column, “With wound images X_t1, …, X_tN ∈ [0, 1]^(h×w×3) and corresponding wound surface areas S_t1, …, S_tN in past N time frames t1, t2, …, tN, the objective is to learn a function f^H that predicts wound areas for healing date estimation: Ŝ_(tN+Δt) = f^H(tN + Δt; X_t1, …, X_tN, S_t1, …, S_tN), (8) where Δt ≥ 0 and tN + Δt is”), the at least one scalar value corresponding to a predicted or assessed healing parameter over a predetermined time interval following generation of the image (page 2416, right column, “where Δt ≥ 0 and tN + Δt is the future date for the prediction;” the predetermined time interval being read as days past the day this is evaluated), wherein the at least one scalar value is generated on or near day 0 of therapy (page 2416, right column, “where Δt ≥ 0 and tN + Δt is the future date for the prediction;” these dates can be prior to 4 weeks, and further, therapy is read as starting in that determining the current state is the first step of determining a treatment regimen).

As noted above, Yang and Fan are directed toward similar fields of endeavor, namely tissue analysis and prediction (see claim 1 analysis). Further, Wang is directed toward “an integrated system to automatically segment wound regions and analyze wound conditions in wound images (abstract).” As can be seen by one of ordinary skill in the art, Yang, Fan and Wang are all directed toward similar fields of endeavor, namely tissue image analysis. Wang allows for advances in labeling various regions of the wound, and further, more accurately estimating wound size (abstract). As noted above, Wang analyzes a function (Equation 8) in order to predict a healing date for a specific surface area, determining the healing date estimation.
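The role of Wang's function f^H — mapping past surface-area observations to a predicted future area — can be illustrated with a deliberately simplified stand-in: linear extrapolation of the past (time, area) pairs by ordinary least squares. Wang learns f^H from images as well as areas; this sketch shows only the input/output shape of the prediction:

```python
# Toy stand-in for Wang's healing predictor f^H: extrapolate a future
# wound area from past surface-area measurements via ordinary least
# squares. A linear sketch only; Wang's f^H is learned from images too.

def fit_line(ts, areas):
    """Least-squares slope and intercept for (t, area) pairs."""
    n = len(ts)
    mt = sum(ts) / n
    ma = sum(areas) / n
    slope = sum((t - mt) * (a - ma) for t, a in zip(ts, areas)) / \
            sum((t - mt) ** 2 for t in ts)
    return slope, ma - slope * mt

def predict_area(ts, areas, t_future):
    """Predicted wound area at a future time, clamped at zero."""
    slope, intercept = fit_line(ts, areas)
    return max(0.0, slope * t_future + intercept)

# Wound shrinking 1 cm^2 per week over weeks 0..3; predict week 5.
ts, areas = [0, 1, 2, 3], [10.0, 9.0, 8.0, 7.0]
print(predict_area(ts, areas, 5))  # 5.0
```

The clamp at zero reflects that a wound area cannot be negative; the time at which the prediction reaches zero corresponds to the "healing date estimation" role the Office Action quotes from Wang.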
One of ordinary skill in the art before the effective filing date would easily understand the benefits of predicting a future healing date: a physician and patient would be aware of an estimation of how long healing should take, and if the actual healing falls outside that estimation, there may be cause for concern; the estimate could also inform a clinician and user of how long a wound should be covered for infection prevention. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to incorporate the teaching of Wang in order to allow for calculating various metrics related to the wound itself for prediction of healing time, to more accurately inform patients and clinicians, and to prevent infection.

Regarding dependent claim 2, the rejection of claim 1 is incorporated herein. Additionally, Yang in the combination further discloses wherein the wound is a diabetic foot ulcer (abstract, “Diabetic foot ulcers are a major complication of diabetes and present a considerable burden for both patients and health care providers. As healing often takes many months, a method of determining which ulcers would be most likely to heal would be of great value in identifying patients who require further intervention at an early stage”).

Regarding dependent claim 3, the rejection of claim 1 is incorporated herein.
Additionally, Yang in the combination further discloses wherein the predicted or assessed healing parameter is a predicted amount of healing of the wound or portion thereof (abstract, “patients with diabetic foot ulcers at the time of presentation revealed that ulcer healing by 12 weeks could be predicted by the assessment of SpO2 calculated from these images”… “It is therefore valuable to compare the performance of PCA with SpO2 measurement in the prediction of wound healing;” page 3, “This study performed a novel investigation by comparing the performance of PCA with more widely used SpO2 measurements in predicting whether a wound will heal within 12 weeks of presentation”).

Regarding dependent claim 4, the rejection of claim 1 is incorporated herein. Additionally, Wang in the combination further discloses wherein the predicted healing parameter is a predicted percent area reduction of the wound or portion thereof (page 2417, right column, “We provide quantitative analysis of the time (weeks) to take until the wound size become 10%, 5%, and 0% of original wound area, measured by mean absolute error (MAEtime)”).

Regarding dependent claim 5, the rejection of claim 1 is incorporated herein. Additionally, Wang in the combination further discloses wherein the one or more optically determined tissue features comprise one or more dimensions of the wound (Figure 1, “Given wound images, our deep convolutional neural network model performs feature learning and wound segmentation at the same time”), the subset comprising at least the wound pixels (Figure 1, “Wound segments can be used to estimate actual wound areas (with the scale information from the ruler ticks).”). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to ensure wound dimensions are included to allow the scalar value to be as specific as possible to the wound in question, thus producing a more accurate prediction.
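The pixel-to-area conversion Wang performs with ruler ticks — an in-image scale that turns a segmented wound-pixel count into an actual surface area — reduces to simple arithmetic. A minimal sketch with illustrative numbers (not Wang's actual values):

```python
# Minimal sketch of Wang's pixel-to-area conversion: ruler ticks in the
# image give a pixels-per-centimeter scale, so a segmented wound-pixel
# count converts to an actual surface area. Numbers are illustrative.

def wound_area_cm2(wound_pixel_count, pixels_per_cm):
    """Convert a wound-pixel count to area in cm^2 via the image scale."""
    # Each pixel covers (1 / pixels_per_cm)^2 square centimeters.
    return wound_pixel_count / pixels_per_cm ** 2

# 5000 wound pixels at a scale of 50 pixels per centimeter.
print(wound_area_cm2(5000, 50))  # 2.0
```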
Regarding dependent claim 6, the rejection of claim 5 is incorporated herein. Additionally, Wang in the combination further discloses wherein the one or more dimensions of the wound comprise at least one of a length of the wound, a width of the wound, or a depth of the wound (page 2415, right column, “From wound region segments, actual wound surface areas are then calculated with the scale information from the ruler ticks in wound images;” page 2416, right column, “With the wound region Rgt segmented from the image and the number of wound pixels (i.e. |Rgt|) counted, we can estimate the actual wound area S. More specifically, ruler ticks in wound images are used to make the conversion from pixel lengths to actual lengths;” the area is read as being calculated based on length (i.e., pixel length)).

Regarding dependent claim 7, the rejection of claim 5 is incorporated herein. Additionally, Wang in the combination further discloses wherein the one or more dimensions of the wound are determined based at least in part on the wound pixels or a boundary between the wound pixels and the non-wound pixels (page 2415, right column, “From wound region segments, actual wound surface areas are then calculated with the scale information from the ruler ticks in wound images;” page 2416, right column, “With the wound region Rgt segmented from the image and the number of wound pixels (i.e. |Rgt|) counted, we can estimate the actual wound area S. More specifically, ruler ticks in wound images are used to make the conversion from pixel lengths to actual lengths”).

Regarding dependent claim 8, the rejection of claim 1 is incorporated herein. Additionally, Yang in the combination further discloses wherein the one or more optically determined tissue features comprise at least one of a perfusion, oxygenation (abstract, “Hyperspectral imaging (HSI) is a tool that has the potential to meet this clinical need. Due to the different absorption spectra of oxy- and deoxyhemoglobin, in biomedical HSI the majority of research has utilized reflectance spectra to estimate oxygen saturation (SpO2) values from peripheral tissue.”), or tissue homogeneity corresponding to the wound pixels.

Regarding dependent claim 14, the rejection of claim 1 is incorporated herein. Additionally, Wang in the combination further discloses wherein the one or more processors automatically segment the plurality of pixels using a segmentation algorithm comprising a convolutional neural network (Figure 2, “Fig. 2. The ConvNet architecture and segmentation results”). Wang notes the benefits of their system: “the system is efficient enough to process the wound image within 5 seconds on a typical laptop computer” (page 2418, left column). Wang's system utilizes a ConvNet, which was also noted to be more accurate than the SVM (see Table 1). It would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the teaching of Yang and Fan on the basis of Wang in order to perform automated segmentation to maintain a highly efficient and accurate system, in an environment where quick and accurate results are a necessity to allow for faster patient treatment.

Regarding dependent claim 15, the rejection of claim 14 is incorporated herein. Additionally, Wang in the combination further discloses wherein the segmentation algorithm is at least one of a U-Net comprising a plurality of convolutional layers and a SegNet comprising a plurality of convolutional layers (Figure 2, “Fig. 2. The ConvNet architecture and segmentation results;” the network is read as a SegNet because it generates a segmentation mask; SegNets are read as networks that perform semantic segmentation, which the ConvNet does by segmenting background and wound).

Regarding dependent claim 16, the rejection of claim 1 is incorporated herein.
Additionally, Wang in the combination further discloses wherein the at least one scalar value comprises a plurality of scalar values (Equation 8; a scalar value is generated for each area), each scalar value of the plurality of scalar values corresponding to a probability of healing of an individual pixel of the subset or of a subgroup of individual pixels of the subset (Equation 8; the healing probability is read as complete healing; page 2416, right column, “healing date estimation”).

Regarding dependent claim 17, the rejection of claim 16 is incorporated herein. Additionally, Fan in the combination further discloses wherein the one or more processors are further configured to output a visual representation of the plurality of scalar values for display to a user (Figure 14; paragraph 0162, “The machine learning model can be trained to generate a quantitative output after assessing all data collected by the amputation site analysis system. The quantitative output can be translated into an image identifying areas of the scanned tissue surface that are likely or unlikely to heal following amputation, to generate a mapping of various regions of tissue classifications, and/or to recommend a LOA.”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Fan to ensure there is a method to communicate a computed value to a user visually. Said differently, one goal of the method in Fan is “assessing the presence and severity of tissue conditions (abstract),” and this assessment is not valuable unless the user is provided with the assessment to review or read (especially if there is no other output method that can be used).

Regarding dependent claim 18, the rejection of claim 17 is incorporated herein. Additionally, Fan in the combination further discloses wherein the visual representation comprises the image having each pixel of the subset displayed with a particular visual representation selected based on the probability of healing corresponding to the pixel, wherein pixels associated with different probabilities of healing are displayed in different visual representations (Figure 14, the light and dark colors on the wound healing score correlate to the classified image result of injury; paragraph 0162, “The machine learning model can be trained to generate a quantitative output after assessing all data collected by the amputation site analysis system. The quantitative output can be translated into an image identifying areas of the scanned tissue surface that are likely or unlikely to heal following amputation, to generate a mapping of various regions of tissue classifications, and/or to recommend a LOA;” paragraph 0249, “The machine learning classifier 2310 generates per-pixel classification wherein the output includes a classification value for each [x,y] pixel location.”).

Regarding dependent claim 19, the rejection of claim 16 is incorporated herein. Additionally, Wang in the combination further discloses wherein the one or more machine learning algorithms comprise a SegNet pre-trained using a wound, burn, or ulcer image database (page 2415, right column, “In model training and evaluation, we adopted the NYU Wound Database, a large-scale dataset with over 8000 high resolution wound images and corresponding medical records (e.g., clinic visit dates and wound surface areas);” page 2417, left column, “As illustrated in Fig. 2(a), our proposed convolutional neural networks have 5 encoding layers followed by 4 decoding layers. Specifically, we used ReLU as nonlinearity function for both convolutional encoder and decoder. At the output of network, we used cross-entropy loss function, together with a L2 regularization term with regularization coefficient 10⁻⁵. We trained the model using the mini-batch Stochastic Gradient Descent (mini-batch size is 16) with Nesterov Momentum [22];” a SegNet is a type of CNN, thus using the CNN covers the use of a SegNet).

Regarding dependent claim 20, the rejection of claim 19 is incorporated herein. Additionally, Wang in the combination further discloses wherein the wound image database comprises a diabetic foot ulcer image database (page 2415, right column, “In model training and evaluation, we adopted the NYU Wound Database, a large-scale dataset with over 8000 high resolution wound images and corresponding medical records (e.g., clinic visit dates and wound surface areas)”).

Regarding dependent claim 21, the rejection of claim 19 is incorporated herein. Additionally, Yang, Fan and Wang in the combination fail to explicitly disclose wherein the wound image database comprises a burn image database. However, Official Notice is taken as to the fact that burn wounds are a common type of wound. Wang allows for the estimation of healing time of wounds (see abstract); in terms of specific wound types, Wang notes at least chronic wounds and foot ulcers. One of ordinary skill in the art would understand the benefits of including burn wounds within the database, so that the system has the ability to additionally make a prediction on the healing of burn wounds and not just chronic wounds or foot ulcers. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the invention to modify the combination of Yang, Fan and Wang to allow the database of Wang to also include burn wounds so that accurate predictions of burn wound healing time can be made, allowing the system to be applicable to more patients.

Regarding dependent claim 22, the rejection of claim 1 is incorporated herein.
Additionally, Fan in the combination further discloses wherein the predetermined time interval is 30 days (paragraph 0193, “During this training study, a large dataset of training images will be obtained on which testing of variables in specific classifier components will be evaluated with the goal of being able to achieve a sensitivity and specificity of 90% sensitivity and specificity. Standardized Amputation Healing can be determined based on the ultimate outcome of the amputation (healing or non-healing). Training can be performed via collection of data that accurately represents a population on which the classifier will eventually be used. Importantly, the classifier can only be as accurate as the methods used to identify the true status of the training data, in this case the healing or non-healing of the amputation site selected by a clinician;” see also Table 3, “healing: healing within 30 days and no need for revision”). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the invention to incorporate the teaching of Fan in order to ensure the value can be representative of healing over a variety of different time points in order to be applicable to multiple different patient scenarios.

Regarding dependent claim 23, the rejection of claim 1 is incorporated herein. Additionally, Wang in the combination further discloses wherein the one or more processors are further configured to identify at least one patient health metric value corresponding to a patient having the tissue region (Figure 14, the variety of patient data input into the classifier), and wherein the at least one scalar value is generated based on the one or more optically determined tissue features of the wound or portion thereof (Figure 1, “wound image features” coming from the wound image) and on the at least one patient health metric value (Figure 1, “Wound history”). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the invention to incorporate the teaching of Wang in order to ensure a prediction is made off of as much relevant information as possible to make a more accurate output.

Regarding dependent claim 24, the rejection of claim 23 is incorporated herein. Additionally, Fan in the combination further discloses wherein the at least one patient health metric value comprises at least one variable selected from the group consisting of demographic variables, diabetic foot ulcer history variables, compliance variables, endocrine variables, cardiovascular variables, musculoskeletal variables, nutrition variables (Figure 14, “BMI… Additional information;” BMI is read as a nutrition variable), infectious disease variables, renal variables, obstetrics or gynecology variables, drug use variables (paragraph 0080, “which in certain alternatives includes the presence of a tissue condition, the severity of the tissue condition, and/or additional information about the subject, including any of the information mentioned in this specification;” paragraph 0240, “Given a large database of training data that includes information such as sex, diabetes status, age, smoking history, etc., the disclosed ML training techniques can subset the database based upon those parameters that match the values of these metrics for the patient of interest”), other disease variables, or laboratory values. It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the invention to incorporate the teaching of Fan in order to ensure a prediction is made off of as much relevant information as possible within the patient's record to make the most accurate output.

Regarding dependent claim 25, the rejection of claim 23 is incorporated herein. Additionally, Fan in the combination further discloses wherein the at least one patient health metric value comprises one or more clinical features (Figure 14, “BMI… Additional information”). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the invention to incorporate the teaching of Fan in order to ensure a prediction is made off of as much relevant information as possible within the patient's record to make the most accurate output.

Regarding dependent claim 26, the rejection of claim 25 is incorporated herein. Additionally, Fan in the combination further discloses wherein the one or more clinical features comprise at least one feature selected from the group consisting of an age of the patient, a level of chronic kidney disease of the patient, a length of the wound on a day when the image is generated, and a width of the wound on the day when the image is generated (paragraph 0080, “which in certain alternatives includes the presence of a tissue condition, the severity of the tissue condition, and/or additional information about the subject, including any of the information mentioned in this specification;” paragraph 0240, “Given a large database of training data that includes information such as sex, diabetes status, age, smoking history, etc., the disclosed ML training techniques can subset the database based upon those parameters that match the values of these metrics for the patient of interest”).

Regarding dependent claim 27, the rejection of claim 1 is incorporated herein. Additionally, Yang in the combination further discloses wherein the first wavelength is within the range of 420 nm ± 20 nm, 525 nm ± 35 nm, 581 nm ± 20 nm, 620 nm ± 20 nm, 660 nm ± 20 nm, 726 nm ± 41 nm, 820 nm ± 20 nm, or 855 nm ± 30 nm (page 3, “For the measurements taken in this study, each 3D data cube contained 2D spatial images (120 × 170 pixels) over a wavelength range from 430 nm to 750 nm”).
Regarding dependent claim 28, the rejection of claim 1 is incorporated herein. Additionally, Yang in the combination further discloses wherein the first wavelength is within the range of 620 nm ± 20 nm, 660 nm ± 20 nm, or 420 nm ± 20 nm (page 3, “For the measurements taken in this study, each 3D data cube contained 2D spatial images (120 × 170 pixels) over a wavelength range from 430 nm to 750 nm”). Regarding dependent claim 29, the rejection of claim 28 is incorporated herein. Additionally, Yang, Fan and Wang in the combination fail to explicitly disclose wherein the one or more machine learning algorithms comprise a random forest ensemble. However, Fan discloses at paragraph 0011, “The aforementioned problems, among others, are addressed in some embodiments by the machine learning techniques (also known as artificial intelligence, computer vision, and pattern recognition techniques) of the present disclosure that combine optical microcirculatory assessment with overall patient health metrics to generate prognostic information.” Further, at paragraph 0245, Fan discloses, “In some embodiments the machine learning classifier 2310 can be an artificial neural network, for example a convolutional neural network (CNN), as discussed in more detail with respect to FIG. 23B. In other embodiments, the machine learning classifier 2310 can be another type of neural network or other machine learning classifier suitable for predicting pixel-wise classifications from supervised learning. Artificial neural networks are artificial in the sense that they are computational entities, analogous to biological neural networks in animals, but implemented by computing devices. A neural network typically includes an input layer, one or more intermediate layers, and an output layer, with each layer including a number of nodes.” Thus, Fan discloses the possibility of altering the machine learning algorithm to be a different algorithm.
Further, the examiner takes official notice that Random Forest algorithms are well known to one of ordinary skill in the art before the effective filing date of the claimed invention to reduce overfitting by averaging the predictions of the trees. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Fan to ensure the most accurate and efficient machine learning algorithm is used for the case. Regarding dependent claim 30, the rejection of claim 1 is incorporated herein. Additionally, Yang in the combination further discloses wherein the first wavelength is within the range of 726 nm ± 41 nm, 855 nm ± 30 nm, 525 nm ± 35 nm, 581 nm ± 20 nm, or 820 nm ± 20 nm (page 3, “For the measurements taken in this study, each 3D data cube contained 2D spatial images (120 × 170 pixels) over a wavelength range from 430 nm to 750 nm”). Regarding dependent claim 31, the rejection of claim 30 is incorporated herein. Additionally, Wang in the combination further discloses wherein the one or more machine learning algorithms comprise an ensemble of classifiers (page 2416, right column, “To learn function f_I, we use SVM classifiers [19] using the ConvNet features.”). Regarding dependent claim 32, the rejection of claim 1 is incorporated herein. Additionally, Fan in the combination further discloses further comprising an optical bandpass filter configured to pass light of at least the first wavelength (paragraph 0102, “For example, an 860 nm bandpass filter may be used to filter out light wavelengths that correspond to the predominant wavelength spectrum of the ambient lighting in the room, so that the acquired images correspond to reflected light that originates with the light sources in the probe 408.”).
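The official-notice rationale above regarding claim 29 — that a random forest reduces overfitting by averaging the predictions of its trees — reflects the standard variance-reduction property of bagging. The toy sketch below is not from any cited reference; it simply illustrates numerically that averaging many unbiased, high-variance predictors yields a lower-variance prediction.

```python
import random

random.seed(0)
TRUE_VALUE = 10.0

def tree_predict():
    # Stand-in for one overfit tree: unbiased, but high-variance.
    return TRUE_VALUE + random.uniform(-3, 3)

def forest_predict(n_trees=100):
    # A random forest averages its trees' predictions.
    return sum(tree_predict() for _ in range(n_trees)) / n_trees

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

singles = [tree_predict() for _ in range(500)]
forests = [forest_predict() for _ in range(500)]
# Averaging shrinks prediction variance by roughly the number of trees.
```

With independent trees, the variance of the averaged prediction is about 1/n_trees of a single tree's variance, which is the sense in which averaging "reduces overfitting."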
One of ordinary skill in the art before the effective filing date of the invention would be apprised of the fact that images of samples/wounds are often obtained in a lab or clinical setting containing ambient lighting (lamps, overhead lights, etc.). When analyzing the light properties of these samples, one often wants to focus only on the light specifically from the applicable probe, not the light from the room. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Fan to allow for a more accurate data output that is influenced only by the controlled input light (not the ambient room light). Regarding dependent claim 33, the rejection of claim 1 is incorporated herein. Additionally, Fan in the combination further discloses wherein the one or more processors are further configured to: determine, based on the signal, a reflectance intensity value at the first wavelength for each pixel of at least the subset of the segmented plurality of pixels (paragraph 0102, "To collect the images of data subset 402, the one or more cameras are also configured to acquire a selected number of images having temporal spacing between each image short enough to measure temporal variations in reflected light intensity due to motions of the tissue region that correspond to physiological events or conditions in the patient. In some cases, the data obtained from the multiple time separated images form three-dimensional data arrays, wherein the data arrays have one time and two spatial dimensions. Each pixel in the three-dimensional array can be characterized by a time domain variation in reflected light intensity.
"); and determine one or more quantitative features of the subset of the plurality of pixels based on the reflectance intensity values of each pixel of the subset (paragraph 0115, “To each pixel in the image, the signal can be smoothed with a low pass filter to extract the envelope of the noisy signal. The noisy signal can then be divided by its envelope to remove the dramatic motion spikes in the signal.”). One of ordinary skill in the art before the effective filing date of the invention would be apprised of the fact that image analysis often relies on quantitative values that are easily comparable among pixels; said differently, qualitative features are not as readily comparable (for example, whether a grey pixel should fall under a black or a white classification is unclear). Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Fan to generate a clear quantitative output to be used for processing. Regarding dependent claim 34, the rejection of claim 33 is incorporated herein.
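Fan's envelope-normalization step quoted above (paragraph 0115) — smooth each pixel's time signal with a low-pass filter, then divide the signal by the resulting envelope to suppress motion spikes — might be sketched as follows. The choice of a centered moving average as the low-pass filter is an assumption for illustration, not Fan's stated implementation.

```python
def moving_average(xs, window=5):
    """Crude low-pass filter: centered moving average."""
    half = window // 2
    out = []
    for i in range(len(xs)):
        seg = xs[max(0, i - half):i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def normalize_by_envelope(signal):
    """Divide the signal by its smoothed magnitude envelope (cf. Fan para. 0115)."""
    envelope = moving_average([abs(x) for x in signal])
    return [s / e if e else 0.0 for s, e in zip(signal, envelope)]

# A steady per-pixel reflectance trace with one motion spike at index 5.
trace = [1.0, 1.1, 0.9, 1.0, 1.1, 8.0, 1.0, 0.9, 1.1, 1.0]
flattened = normalize_by_envelope(trace)
# The spike's magnitude relative to the rest of the trace is reduced.
```

The normalized trace is then a cleaner basis for the quantitative per-pixel features the claim recites.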
Additionally, Fan in the combination further discloses wherein the one or more quantitative features of the subset of the plurality of pixels comprise one or more aggregate quantitative features of the plurality of pixels (paragraph 0225, “For example, the classifications can be displayed by associating specific values or ranges of values with one of a number of tissue categories, generating a visual representation of the category for display (for example, a color or pattern fill), and displaying pixels with the visual representation of the category in which they were classified;” paragraph 0243, “Further embodiments can be captured from varied viewpoints and separately provided to the classifier 2310, with mappings of output pixel classification scores registered after calculation;” paragraph 0141, “Additionally or alternatively, data outputs such as percentage in each category could be presented to the user.”). Regarding dependent claim 36, the rejection of claim 1 is incorporated herein.
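The percentage-per-category output that Fan mentions (paragraph 0141) is a simple aggregate over the per-pixel classifications. A minimal sketch follows; the category labels and function name are hypothetical.

```python
from collections import Counter

def category_percentages(pixel_labels):
    """Percent of pixels assigned to each tissue category (cf. Fan para. 0141)."""
    counts = Counter(pixel_labels)
    total = len(pixel_labels)
    return {label: 100.0 * n / total for label, n in counts.items()}

# Hypothetical flattened classification map of 100 pixels
labels = ["wound"] * 30 + ["peri-wound"] * 20 + ["background"] * 50
pcts = category_percentages(labels)
# pcts == {"wound": 30.0, "peri-wound": 20.0, "background": 50.0}
```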
Additionally, Fan in the combination further discloses wherein the at least one light detection element is further configured to collect light of at least a second wavelength after being reflected from the tissue region (paragraph 0012, “control the at least one light emitter to sequentially emit each of the plurality of wavelengths of light”), and wherein the one or more processors are further configured to: receive a second signal from the at least one light detection element, the second signal representing light of the second wavelength reflected from the tissue region (paragraph 0019, “imaging system to capture data representing a plurality of images of the tissue region, the data representing the plurality of images including a first subset each captured using light of a different one of a number of different wavelengths”); wherein the image is generated based at least in part on the second signal (paragraph 0019, “imaging system to capture data representing a plurality of images of the tissue region, the data representing the plurality of images including a first subset each captured using light of a different one of a number of different wavelengths”). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to incorporate the teaching of Fan in order to acquire data at a variety of wavelengths, obtaining as much relevant information as possible to predict the most accurate output. Regarding dependent claim 37, the rejection of claim 1 is incorporated herein.
Additionally, Yang in the combination further discloses a method of predicting wound healing using the system of claim 1, the method (page 3) comprising: illuminating the tissue region with light of at least the first wavelength such that the tissue region reflects at least a portion of the light to the at least one light detection element (page 2, “HSI is a noninvasive technique by which images are formed at different wavelengths to produce a hypercube (x, y, λ);” Figure 1); Wang in the combination further discloses using the system to generate the at least one scalar value (page 2416, right column, “With wound images X_t1, · · ·, X_tN ∈ [0, 1]^(h×w×3) and corresponding wound surface areas S_t1, · · ·, S_tN in past N time frames t1, t2, · · ·, tN, the objective is to learn a function f_H that predicts wound areas for healing date estimation: Ŝ_(tN+∆t) = f_H(S_(tN+∆t); X_t1, · · ·, X_tN, S_t1, · · ·, S_tN), (8) where ∆t ≥ 0 and tN + ∆t is”); and determining the predicted or assessed healing parameter over the predetermined time interval (page 2416, right column, “where ∆t ≥ 0 and tN + ∆t is the future date for the prediction;” the predetermined time interval being read as the days following the day on which this is evaluated). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to incorporate the teaching of Wang in order to allow for calculating various metrics related to the wound itself for prediction of healing time, to more accurately inform patients and clinicians, and to prevent infection. Regarding dependent claim 38, the rejection of claim 37 is incorporated herein.
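In claim 40's terms, the predicted or assessed healing parameter discussed above reduces to an expected percent area reduction over the predetermined interval. The arithmetic is straightforward; the function name below is hypothetical and the numbers are made up for illustration.

```python
def percent_area_reduction(area_day0, area_after):
    """Percent reduction of wound area over the predetermined interval."""
    if area_day0 <= 0:
        raise ValueError("initial area must be positive")
    return 100.0 * (area_day0 - area_after) / area_day0

# A wound shrinking from 4.0 cm^2 on day 0 to 1.6 cm^2 after 30 days
# has healed by 60%, exceeding the 50%-in-30-days threshold of claim 43.
reduction = percent_area_reduction(4.0, 1.6)  # 60.0
```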
Additionally, Fan in the combination further discloses wherein illuminating the tissue region comprises activating one or more light emitters configured to emit light of at least the first wavelength (paragraph 0012, “control the at least one light emitter to sequentially emit each of the plurality of wavelengths of light; receive a plurality of signals from the at least one light detection element, a first subset of the plurality of signals representing light sequentially emitted at the plurality of wavelengths reflected from the tissue region, and a second subset of the plurality of images representing light of a same wavelength reflected from the tissue region at a plurality of different times”). Fan allows for illuminating tissues at different wavelengths (paragraph 0012). Different materials reflect differently under different light, as seen at paragraph 0040, “light incident on the tissue surface scatters within the tissue as it interacts with molecular structures.” Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Fan to ensure key data is obtained under different illumination, revealing different tissue features. Regarding dependent claim 39, the rejection of claim 37 is incorporated herein. Additionally, Fan in the combination further discloses wherein illuminating the tissue region comprises exposing the tissue region to ambient light (paragraph 0102, “For example, an 860 nm bandpass filter may be used to filter out light wavelengths that correspond to the predominant wavelength spectrum of the ambient lighting in the room, so that the acquired images correspond to reflected light that originates with the light sources in the probe 408;” having to remove the specific wavelengths through filtering indicates that the tissue is exposed to ambient light in the first place).
Fan allows for illuminating tissues at different wavelengths (paragraph 0012); if this is not performed in a dark room, there is also the impact of ambient light. Different materials reflect differently under different light, as seen at paragraph 0040, “light incident on the tissue surface scatters within the tissue as it interacts with molecular structures.” Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Fan to ensure key data is obtained under different illumination, revealing different tissue features. Regarding dependent claim 40, the rejection of claim 37 is incorporated herein. Additionally, Wang in the combination further discloses wherein determining the predicted healing parameter comprises determining an expected percent area reduction of the wound or a portion thereof over the predetermined time interval (page 2417, right column, “We provide quantitative analysis of the time (weeks) to take until the wound size become 10%, 5%, and 0% of original wound area, measured by mean absolute error (MAEtime)”). Regarding dependent claim 41, the rejection of claim 37 is incorporated herein. Additionally, Yang and Fan in the combination fail to explicitly disclose further comprising: measuring one or more dimensions of the wound or a portion thereof after the predetermined time interval has elapsed following the determination of the predicted amount of healing of the wound or said portion thereof; determining an actual amount of healing of the wound or said portion thereof over the predetermined time interval; and updating at least one machine learning algorithm of the one or more machine learning algorithms by providing at least the image and the actual amount of healing of the wound or said portion thereof as training data. However, Fan discloses a methodology for training and updating a classifier for the given tasks.
Specifically at paragraph 0249, Fan discloses, “The machine learning classifier 2310 generates per-pixel classification wherein the output includes a classification value for each [x,y] pixel location. During training of the machine learning classifier 2310, these output classification values can be compared to a pre-generated tissue map 2315 and identified error rates can be fed back into the machine learning classifier 2310. As described above, a pre-generated tissue map 2315 can include a ground truth mask generated by a physician after analysis and/or treatment of the imaged tissue site. The weights of the various node connections, for example convolutional filters in a number of convolutional layers, can be learnt by the machine learning classifier 2310 through this back propagation.” From this section alone, it is clear that Fan demonstrates the use of a feedback loop to update machine learning systems. Further, Fan allows for classification of the tissues into various health categories at paragraph 0250, “During implementation of the machine learning classifier 2310, the output classification values can be used to generate tissue map 2315 to indicate pixels corresponding to tissue of various designated categories of tissue health.” Thus, Fan does disclose utilizing a feedback loop to alter the parameters of a network based on the known output and the predicted output. With respect specifically to the dimensions of the wound being used as the validation measurement, this is simply an alternative method of evaluating accuracy (i.e., comparison of actual to predicted); Fan does this utilizing classification comparison as the metric; however, if there were a desire to output the result as a percentage, then using the dimensions would be a clear alternative that one of ordinary skill in the art at the time of filing would be able to perform. Regarding dependent claim 42, the rejection of claim 37 is incorporated herein.
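The feedback loop Fan describes (paragraph 0249) compares per-pixel classifications against a physician-generated ground-truth tissue map and feeds the identified error rate back into training. The error-rate comparison itself can be sketched as below; this is a simplification (Fan back-propagates through convolutional weights, which is omitted here), and the function name is hypothetical.

```python
def pixel_error_rate(predicted_map, ground_truth_map):
    """Fraction of [x, y] locations where the classifier disagrees
    with the physician-generated tissue map (cf. Fan para. 0249)."""
    flat_pred = [label for row in predicted_map for label in row]
    flat_true = [label for row in ground_truth_map for label in row]
    assert len(flat_pred) == len(flat_true), "maps must share dimensions"
    wrong = sum(p != t for p, t in zip(flat_pred, flat_true))
    return wrong / len(flat_true)

# Toy 2x2 maps with integer tissue-category labels
predicted = [[1, 1], [0, 2]]
truth     = [[1, 1], [1, 2]]
err = pixel_error_rate(predicted, truth)  # 0.25: one of four pixels wrong
```

In Fan's scheme this error signal is what drives the weight updates; in the claim 41 variant, measured wound dimensions after the interval would supply the "actual" value being compared.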
Additionally, Fan in the combination further discloses further comprising selecting, prior to an end of the predetermined time interval (paragraph 0309, “such treatments can be monitored using a device as described herein. Some alternatives can monitor the effectiveness of therapeutic agents by evaluating the healing of tissue before, during, or after application of a particular treatment.”), between a first wound care therapy and a second wound care therapy based at least in part on the predicted or assessed healing parameter (paragraph 0066, “Alternatives described herein allow one to assess and classify in an automated or semi-automated manner tissue regions of subjects that may require amputation, and may also provide treatment recommendations;” different wounds would require different treatments (i.e., standard versus advanced); paragraph 0312, “Certain alternatives may also be used to facilitate the analysis of the treatment of chronic wounds. Chronic wound patients often receive expensive advanced treatment modalities with no measure of their efficacy. Alternatives described herein can image the chronic wound and give quantitative data to its status, including the size of the wound, the depth of the wound, the presence of wounded tissue, and the presence of healthy tissue using the aforementioned imaging techniques.”). Fan discloses at paragraph 0066, “Alternatives described herein allow one to assess and classify in an automated or semi-automated manner tissue regions of subjects that may require amputation, and may also provide treatment recommendations.” Outputting treatment recommendations saves one step for physicians who are already pressed for time. Further, it would have been obvious to a person having ordinary skill in the art before the effective filing date that there are a multitude of treatment recommendations for a disorder, each treatment having its pros and cons.
Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to incorporate the teaching of Fan to limit the determinations that must be made by the physician and to determine an optimal treatment, ideally improving patient outcomes. Regarding dependent claim 43, the rejection of claim 42 is incorporated herein. Additionally, Yang, Fan and Wang in the combination fail to explicitly disclose wherein selecting between the first wound care therapy and the second wound care therapy comprises: when the predicted amount of healing indicates that the wound or portion thereof will heal or close by greater than 50% in 30 days, indicating or applying one or more standard therapies selected from improving nutritional status, debridement to remove devitalized tissue, maintenance of granulation tissue with a dressing, therapy to address any infection that may be present, addressing a deficiency in vascular perfusion to an extremity comprising the wound or portion thereof, offloading of pressure from the wound or portion thereof, or glucose regulation; and when the predicted amount of healing indicates that the wound or portion thereof will not heal or close by greater than 50% in 30 days, indicating or applying one or more advanced care therapies selected from the group consisting of hyperbaric oxygen therapy, negative-pressure wound therapy, bioengineered skin substitutes, synthetic growth factors, extracellular matrix proteins, matrix metalloproteinase modulators, and electrical stimulation therapy. However, Fan discloses at paragraph 0066, “Alternatives described herein allow one to assess and classify in an automated or semi-automated manner tissue regions of subjects that may require amputation, and may also provide treatment recommendations;” it is well known in the art that different wounds would require different treatments (i.e., standard versus advanced).
Further, at paragraph 0312, “Certain alternatives may also be used to facilitate the analysis of the treatment of chronic wounds. Chronic wound patients often receive expensive advanced treatment modalities with no measure of their efficacy. Alternatives described herein can image the chronic wound and give quantitative data to its status, including the size of the wound, the depth of the wound, the presence of wounded tissue, and the presence of healthy tissue using the aforementioned imaging techniques.” As evidenced in Fan, advanced treatment methods are disclosed, and standard treatment methods are read as the options which are cheaper (see the comparison within Fan paragraph 0312). Additionally, paragraph 0206 of Fan notes, “For example, surgical site infections, poor nutritional status, or subject non-compliance with wound management may prevent healing of the amputation site independently from factors that the classifier measures;” this section of Fan is read as indicating that treatment of infection and improvement of nutritional status can aid in healing. The examiner takes official notice that the claimed treatment methods are known wound treatment therapies to one of ordinary skill in the art before the effective filing date of the invention. Further, based on paragraph 0066, “Alternatives described herein allow one to assess and classify in an automated or semi-automated manner tissue regions of subjects that may require amputation, and may also provide treatment recommendations;” the tissue regions of subjects relate to the treatment; said differently, the determination of a treatment correlates to the patient’s tissue status and diagnosis. As such, the 50% healing threshold would be a subjective value; for example, if a doctor wanted standard therapy at 30% predicted healing and advanced therapy otherwise, one of ordinary skill in the art would easily be able to alter the threshold.
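The point above — that the 50% figure is an adjustable design parameter rather than a structural feature — can be made concrete by expressing the threshold as a single argument. This sketch is illustrative only: the names are hypothetical, and the 50%-in-30-days split is taken from claim 43.

```python
def select_therapy(predicted_percent_healed, threshold=50.0):
    """Route to standard or advanced care based on the predicted amount
    of healing over the predetermined interval (e.g., 30 days)."""
    if predicted_percent_healed > threshold:
        return "standard"   # e.g., debridement, dressings, offloading
    return "advanced"       # e.g., hyperbaric oxygen, negative-pressure therapy

select_therapy(62.0)                  # "standard" under the claimed 50% rule
select_therapy(40.0)                  # "advanced" under the claimed 50% rule
select_therapy(40.0, threshold=30.0)  # "standard" if a clinician lowers the threshold
```

Changing the routing rule is a one-argument change, which is the sense in which the threshold is a subjective, easily altered value.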
Thus, being that Fan allows for the determination of treatments based on the healing characteristics of the patient, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to alter Yang and Fan to ensure the desired threshold between treatments is set based on the current healing stage. Claim(s) 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Yang, Fan and Wang as applied to claim 1 above, and further in view of M. Goyal, M. H. Yap, N. D. Reeves, S. Rajbhandari and J. Spragg, "Fully convolutional networks for diabetic foot ulcer segmentation," 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB, Canada, 2017, pp. 618-623, doi: 10.1109/SMC.2017.8122675 (hereinafter Goyal). Regarding dependent claim 9, the rejection of claim 1 is incorporated herein. Additionally, Yang, Fan and Wang in the combination as a whole fail to explicitly disclose wherein the one or more processors are further configured to automatically segment the non-wound pixels into peri-wound pixels and background pixels, the subset comprising at least the peri-wound pixels. However, Goyal discloses wherein the one or more processors are further configured to automatically segment the non-wound pixels into peri-wound pixels and background pixels, the subset comprising at least the peri-wound pixels (page 619, right column, “In this format, index 0 maps to black pixels represent the background, index 1 (red) represents the surrounding skin and index 2 (green) as DFU;” surrounding skin is read as peri-wound; see also Figure 1). As noted above, Yang, Fan and Wang are directed toward the similar field of endeavor of tissue analysis and prediction (see claim 1 analysis).
Further, Goyal is directed toward “a two-tier transfer learning from bigger datasets to train the Fully Convolutional Networks (FCNs) to automatically segment the ulcer and surrounding skin” (abstract). As can be seen by one of ordinary skill in the art, Yang, Fan, Wang and Goyal are all directed toward the similar field of endeavor of tissue image analysis. Further, Goyal allows for advances in using neural networks for segmentation of the ulcer (abstract). It is well known by one of ordinary skill in the art at the time of filing the claimed invention that wound diagnostics often utilize metrics from different areas of the wound. Often there are areas of interest of a wound patient beyond just the wound itself (i.e., the skin around the wound, or a certain distance from the wound, etc.). Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to incorporate the teaching of Goyal in order to ensure all relevant areas of a wound patient are analyzed and segmented for ease of viewing by a clinician. Regarding dependent claim 10, the rejection of claim 9 is incorporated herein. Additionally, Yang in the combination further discloses wherein the one or more optically determined tissue features comprise at least one of a perfusion, oxygenation (page 5, “tissue oxygenation was assessed by HSI at a site measuring 1 cm2 in an area of intact skin adjacent (typically 1–5 mm) to the edge of the ulcer and unaffected by callus.”), or tissue homogeneity corresponding to the peri-wound pixels. Claim(s) 11 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Yang further in view of Fan and Wang as applied to claim 1 above, and further in view of U.S. Patent No. 8,655,433 to Freeman et al. (hereinafter Freeman). Regarding dependent claim 11, the rejection of claim 1 is incorporated herein.
Additionally, Yang, Fan and Wang in the combination fail to explicitly disclose wherein the one or more processors are further configured to automatically segment the non-wound pixels into callus pixels and background pixels, the subset comprising at least the callus pixels. However, Freeman discloses wherein the one or more processors are further configured to automatically segment the non-wound pixels into callus pixels and background pixels, the subset comprising at least the callus pixels (column 10, line 49, “The particular color and distinct shape of features in the pseudo-color image allow discrimination between tissue types such as ulcers, callus, intact skin, hematoma, and superficial blood vessels;” pixels not marked as these types are read as background pixels). As noted above, Yang, Fan and Wang are directed toward the similar field of endeavor of tissue analysis and prediction (see claim 1 analysis). Further, Freeman is directed toward “methods and systems of hyperspectral and multispectral imaging of medical tissues. In particular, the invention is directed to new devices, tools and processes for the detection and evaluation of diseases and disorders such as, but not limited to diabetes and peripheral vascular disease, that incorporate hyperspectral or multispectral imaging” (abstract). As can be seen by one of ordinary skill in the art, Yang, Fan, Wang and Freeman are all directed toward the similar field of endeavor of tissue image analysis. Further, Freeman allows for advances in devices, tools and processes for the detection and evaluation of diseases and disorders, such as, but not limited to, diabetes and peripheral vascular disease, that incorporate hyperspectral or multispectral imaging (abstract). It is well known by one of ordinary skill in the art at the time of filing the claimed invention that wound diagnostics often utilize metrics from different areas of the wound, and also areas outside or surrounding the wound.
Different conditions present differently as related to the skin in those areas. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to incorporate the teaching of Freeman in order to allow for calculating various metrics related to the wound itself and the skin surrounding the wound for diagnosis of different conditions. Regarding dependent claim 13, the rejection of claim 11 is incorporated herein. Additionally, Freeman in the combination further discloses wherein the one or more processors are further configured to automatically segment the non-wound pixels into callus pixels, normal skin pixels, and background pixels (column 6, line 30, “differentiate diseased (e.g. tumor) and ischemic tissue from normal tissue;” column 10, line 49, “The particular color and distinct shape of features in the pseudo-color image allow discrimination between tissue types such as ulcers, callus, intact skin, hematoma, and superficial blood vessels;” pixels not marked as these types are read as background pixels). Claim(s) 12 is rejected under 35 U.S.C. 103 as being unpatentable over Yang further in view of Fan, Wang and Freeman as applied to claim 11 above, and further in view of U.S. Patent No. 11,195,281 to Schoess et al. (hereinafter Schoess). Regarding dependent claim 12, the rejection of claim 11 is incorporated herein. Additionally, Yang, Fan, Wang and Freeman in the combination fail to explicitly disclose wherein the one or more optically determined tissue features comprise the presence or absence of a callus at least partially surrounding the wound. However, Schoess discloses wherein the one or more optically determined tissue features comprise the presence or absence of a callus at least partially surrounding the wound (column 4, line 14, “FIG. 11A is an image of a foot with granulation tissue formation, callus around the perimeter of a wound and necrotic tissue;”).
As noted above, Yang, Fan, Wang and Freeman are directed toward the similar field of endeavor of tissue analysis and prediction (see claim 11 analysis). Further, Schoess is directed toward “A method for determining healing progress of a tissue disease state” (abstract). As can be seen by one of ordinary skill in the art, Yang, Fan, Wang, Freeman and Schoess are all directed toward the similar field of endeavor of tissue image analysis. It is well known by one of ordinary skill in the art at the time of filing the claimed invention that wound diagnostics often utilize metrics from different areas of the wound, and also areas outside or surrounding the wound (such as the peri-wound area, or a callus around the wound), to indicate different diagnoses. Different conditions present differently as related to the skin in those areas. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to incorporate the teaching of Schoess in order to allow for calculating various metrics related to the wound itself and the skin surrounding the wound for diagnosis of all conditions related to the wound, including the surrounding skin. Claim(s) 35 is rejected under 35 U.S.C. 103 as being unpatentable over Yang further in view of Fan and Wang as applied to claim 34 above, and further in view of WO2017074505A1 (hereinafter WO ‘505). Regarding dependent claim 35, the rejection of claim 34 is incorporated herein. Additionally, Yang, Fan and Wang in the combination as a whole fail to explicitly disclose wherein the one or more aggregate quantitative features of the subset of the plurality of pixels are selected from the group consisting of a mean of the reflectance intensity values of the pixels of the subset, a standard deviation of the reflectance intensity values of the pixels of the subset, and a median reflectance intensity value of the pixels of the subset.
However, WO ‘505 discloses wherein the one or more aggregate quantitative features of the subset of the plurality of pixels are selected from the group consisting of a mean of the reflectance intensity values of the pixels of the subset, a standard deviation of the reflectance intensity values of the pixels of the subset, and a median reflectance intensity value of the pixels of the subset (paragraph 0073, “FIG. 32 shows the reflectance spectra of all wavelengths at each excision layer. It plots the absorbance spectra of the healthy control, healthy control debrided once, mean of the burn tissue spectra at each cut, and mean of the wound bed spectra at each cut.”). As noted above, Yang, Wang and Fan are directed toward the similar field of endeavor of tissue analysis and prediction (see claim 1 analysis). Further, WO ‘505 is directed toward “apparatuses and techniques for non-invasive optical imaging that acquires a plurality of images corresponding to both different times and different frequencies. Additionally, alternatives described herein are used with a variety of tissue classification applications, including assessing the presence and severity of tissue conditions, such as burns and other wounds” (abstract). As can be seen by one of ordinary skill in the art, Yang, Fan, Wang and WO ‘505 are all directed toward the similar field of endeavor of tissue image analysis. It is well known by one of ordinary skill in the art at the time of filing the claimed invention that medical imaging at wavelengths other than ambient light alone can indicate different diagnoses. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to incorporate the teaching of WO ‘505 in order to allow for the most accurate diagnosis; had imaging not occurred in the different wavelengths, certain diagnoses could be missed, resulting in poor patient outcomes.
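The aggregate features recited in claim 35 — mean, standard deviation, and median of the subset's reflectance intensities — are standard summary statistics; Python's `statistics` module computes them directly. The intensity values below are made up for illustration.

```python
import statistics

# Hypothetical reflectance intensities for a segmented pixel subset
intensities = [0.42, 0.47, 0.44, 0.51, 0.46]

features = {
    "mean": statistics.mean(intensities),
    "stdev": statistics.stdev(intensities),   # sample standard deviation
    "median": statistics.median(intensities),
}
# features["mean"] is approximately 0.46; features["median"] == 0.46
```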
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: U.S. Patent No. 8,812,083 to Papazoglou et al. discloses, "Optical changes of tissue during wound healing measured by Near Infrared and Diffuse Reflectance Spectroscopy are shown to correlate with histologic changes" (abstract). Further, as seen in a plurality of figures, multiple features are analyzed beginning at an initial 0 days and onward (see Figures 22 and 26-27).

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Courtney J. Nelson, whose telephone number is (571) 272-3956. The examiner can normally be reached Monday through Friday, 8:00 to 4:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John Villecco, can be reached at 571-272-7319. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.
/COURTNEY JOAN NELSON/Primary Examiner, Art Unit 2661

Prosecution Timeline

Aug 18, 2022
Application Filed
Jun 04, 2025
Non-Final Rejection — §103, §112
Sep 24, 2025
Response Filed
Nov 12, 2025
Final Rejection — §103, §112
Feb 16, 2026
Request for Continued Examination
Feb 22, 2026
Response after Non-Final Action
Mar 09, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603175
METHOD AND APPARATUS FOR DETERMINING DIAGNOSIS RESULT DATA
2y 5m to grant Granted Apr 14, 2026
Patent 12597188
SYSTEMS AND METHODS FOR PROCESSING ELECTRONIC IMAGES FOR PHYSIOLOGY-COMPENSATED RECONSTRUCTION
2y 5m to grant Granted Apr 07, 2026
Patent 12597494
METHOD AND APPARATUS FOR TRAINING MEDICAL IMAGE REPORT GENERATION MODEL, AND IMAGE REPORT GENERATION METHOD AND APPARATUS
2y 5m to grant Granted Apr 07, 2026
Patent 12588881
PROVIDING A RESULT DATA SET
2y 5m to grant Granted Mar 31, 2026
Patent 12592016
Material-Specific Attenuation Maps for Combined Imaging Systems
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
86%
Grant Probability
96%
With Interview (+9.4%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 252 resolved cases by this examiner. Grant probability derived from career allow rate.
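The projection figures above reduce to simple arithmetic over the examiner's resolved-case history. A minimal sketch under that reading (the function names are assumptions; only the 217-granted-of-252-resolved counts and the 96% with-interview figure come from the page):

```python
# Sketch: deriving the dashboard's headline metrics from resolved-case
# counts. Counts are taken from the page above; function names are
# illustrative, not from any real API.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: share of resolved cases that granted."""
    return granted / resolved

def interview_lift(rate_with_interview: float, baseline: float) -> float:
    """Percentage-point lift over a baseline allow rate."""
    return rate_with_interview - baseline

career = allow_rate(217, 252)        # ~0.861, displayed as 86%
lift = interview_lift(0.96, career)  # vs. the 96% with-interview figure

print(f"Career allow rate: {career:.0%}")
print(f"Interview lift: {lift:+.1%}")
# Note: the page's +9.4% lift compares the with-interview subgroup
# against the without-interview subgroup directly; those subgroup
# counts are not shown, so the baseline here is the career rate.
```

The page's exact +9.4% figure cannot be reproduced from the displayed counts alone, which is why the baseline choice is flagged in the comments.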
