Prosecution Insights
Last updated: April 19, 2026
Application No. 18/854,082

SYSTEM AND METHOD FOR WOUND TRIAGING AND RECOMMENDATIONS FOR TREATMENTS

Non-Final OA: §101, §103
Filed: Oct 04, 2024
Examiner: ABDULLAH, AAISHA
Art Unit: 3681
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Adiuvo Diagnostics Private Limited
OA Round: 1 (Non-Final)
Grant Probability: 25% (At Risk)
OA Rounds: 1-2
To Grant: 4y 5m
With Interview: 67%

Examiner Intelligence

Career Allow Rate: 25% (grants only 25% of cases; 11 granted / 44 resolved; -27.0% vs TC avg)
Interview Lift: +41.9% (strong lift, measured over resolved cases with interview)
Typical Timeline: 4y 5m avg prosecution; 18 applications currently pending
Career History: 62 total applications across all art units

Statute-Specific Performance

§101: 38.8% (-1.2% vs TC avg)
§103: 43.6% (+3.6% vs TC avg)
§102: 2.4% (-37.6% vs TC avg)
§112: 11.3% (-28.7% vs TC avg)
Black line = Tech Center average estimate. Based on career data from 44 resolved cases.
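The interview-lift metric above can in principle be reproduced from resolved-case outcomes. The sketch below is a hypothetical computation over fabricated records; the dashboard's exact lift definition (with-interview rate versus without-interview rate, or versus the career rate) is an assumption, and the sample data is invented to total 11 grants across 44 resolved cases.

```python
# Hypothetical sketch of how examiner metrics like "Career Allow Rate" and
# "Interview Lift" could be derived from resolved-case records. The records
# and the lift definition are illustrative assumptions, not the dashboard's
# actual pipeline.
cases = (
    # (granted?, had_interview?) -- fabricated sample records
    [(True, True)] * 8 + [(False, True)] * 4       # 12 resolved with interview
    + [(True, False)] * 3 + [(False, False)] * 29  # 32 resolved without interview
)

def allow_rate(records):
    """Fraction of resolved cases that ended in a grant."""
    return sum(1 for granted, _ in records if granted) / len(records)

career = allow_rate(cases)                               # 11 / 44 = 25%
with_iv = allow_rate([c for c in cases if c[1]])         # 8 / 12
without_iv = allow_rate([c for c in cases if not c[1]])  # 3 / 32
lift = with_iv - without_iv                              # percentage-point lift (assumed definition)

print(f"career allow rate: {career:.1%}")
print(f"allow rate with interview: {with_iv:.1%}")
print(f"interview lift: {lift:+.1%}")
```

With real case data the same three-line computation would drive the "Without / With" comparison chart; only the definition of the baseline changes the headline lift number.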

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Application Status

This is the first non-final action on the merits. Claims 1-12 as originally filed on October 4, 2024 are currently pending and considered below.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on October 4, 2024 is being considered by the examiner. The submission is in compliance with the provisions of 37 CFR 1.97.

Claim Objections

Claim 6 is objected to because of the following informalities: “medicines consumed by the patient (104)”. Appropriate correction is required. For the purposes of compact prosecution, claim 6 will be interpreted as reading “medicines consumed by the patient”.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

Claims 1-7 recite a system for wound triaging and recommendation for treatments, which is within the statutory category of a machine. Claims 8-12 recite a method for wound triaging and recommendation for treatments, which is within the statutory category of a process.

Step 2A - Prong One:

Regarding Prong One of Step 2A, the claim limitations are to be analyzed to determine whether, under their broadest reasonable interpretation, they "recite" a judicial exception, or in other words, whether a judicial exception is "set forth" or "described" in the claims.
An "abstract idea" judicial exception is subject matter that falls within at least one of the following groupings: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes. Representative independent claim 1 includes limitations that recite at least one abstract idea. Specifically, independent claim 1 recites:

A wound triaging and recommendation system for wound triaging and recommendation for treatments, the wound triaging and recommendation system comprising: a hardware processor; and a memory coupled to the hardware processor, wherein the memory comprises a set of program instructions in the form of a plurality of subsystems, configured to be executed by the hardware processor, wherein the plurality of subsystems comprises:

a text and voice based conversational artificial intelligence (AI) subsystem configured to: obtain patient’s medical data comprising at least one of: history of one or more diseases of a patient, family information of the patient, symptoms of the one or more diseases in the patient, and medicines consumed by the patient (104) through a user device of the patient; and determine data associated with at least one of: genetic predisposition, the one or more diseases influencing a wound, and a state of the wound based on an effect of a current medicine that the patient consumes based on the obtained patient’s medical data using a machine learning algorithm;

a wound image analytics subsystem configured to: collect images of the wound from the patient; classify the wound of the patient into a plurality of categories comprising at least one of: granulation, necrotic, and cellulitis based on the collected images of the wound from the patient using the machine learning algorithm; and determine severity and risk category of the wound based on the classification of the wound of the patient using the machine learning algorithm;

an AI based text and image analytics subsystem configured to: obtain information associated with patient’s clinical reports from the patient; and extract key clinical parameters and changes in the key clinical parameters over time from the patient’s clinical reports by scanning the patient’s clinical reports using optical character recognition techniques; and

a patient treatment recommendation subsystem configured to: obtain at least one of: (a) the determined data associated with at least one of: the genetic predisposition, the one or more diseases influencing the wound, and the state of the wound based on the effect of the current medicine that the patient consumes, (b) the determined severity and risk category of the wound, and (c) the extracted key clinical parameters and the changes in the key clinical parameters over time from at least one of: the text and voice based conversational AI subsystem, the wound image analytics subsystem, and the AI based text and image analytics subsystem; obtain medical data and reports from other patients, wherein the medical data and reports of the other patients comprise past medical history of the other patients; and triage the wound to at least one of: identify the severity of the wound by qualifying risk score for the wound, provide wound prognostics for wound healing, and provide treatment recommendations associated with personalized therapeutic routes for healing the wound to the patient based on results outputted from at least one of: the text and voice based conversational AI subsystem, the wound image analytics subsystem, the AI based text and image analytics subsystem, and the medical data and reports from other patients.

The underlined limitations are directed to methods of organizing human activity.
The claim recites steps of obtaining patient medical data, determining data, collecting images of the wound, classifying the wound, determining severity and risk category of the wound, obtaining clinical reports information, extracting key clinical parameters, obtaining determined data, determined severity and risk, or extracted key clinical parameters, obtaining medical data and reports from other patients, and triaging the wound. These steps, under their broadest reasonable interpretation, are categorized as methods of organizing human activity, specifically associated with managing personal behavior or relationships or interactions between people (e.g., steps to determine wound triaging and personalized treatment recommendations). The claim encompasses a person following rules or instructions to receive and process data in the manner described in the abstract idea. If the claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or interactions between people but for the recitation of generic computer components, then it falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. See MPEP § 2106.04(a). The Examiner further notes that “Certain Methods of Organizing Human Activity” includes a person's interaction with a computer (see October 2019 Update: Subject Matter Eligibility at Pg. 5).

The abstract idea for Claim 8 is identical to the abstract idea for Claim 1, because the only difference between Claims 1 and 8 is that Claim 1 recites a system, whereas Claim 8 recites a method. Any limitations not identified above as part of methods of organizing human activity are deemed “additional elements” and will be discussed in further detail below. Accordingly, claims 1 and 8 recite at least one abstract idea.

Similarly, dependent claims 2-5 and 9-12 further narrow the abstract idea described in the independent claims. Claims 2 and 9 describe storing and comparing the patient’s medical data. Claims 3 and 10 further describe classifying the wound. Claims 4 and 11 further describe extracting the key clinical parameters and their changes over time. Claims 5 and 12 describe the personalized therapeutic routes. Additionally, claims 4 and 11 recite limitations that constitute an abstract idea falling under the mathematical concepts grouping, because extracting non-zero pixels and recognizing characters based on the extracted non-zero pixels using segmentation and thresholding, under their broadest reasonable interpretation, represent mathematical calculations (see MPEP 2106.04(a)(2)). These limitations only serve to further limit the abstract idea and hence are directed toward fundamentally the same abstract ideas as independent claims 1 and 8, even when considered individually and as an ordered combination.

Step 2A - Prong Two:

Regarding Prong Two of Step 2A, it must be determined whether the claim as a whole integrates the abstract idea into a practical application, i.e., whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application." In the present case, claims 1-12 as a whole do not integrate the abstract idea into a practical application because they do not impose meaningful limits on practicing the abstract idea. The additional elements, or combination of additional elements, beyond the above-noted at least one abstract idea are described as follows (where the bolded portions are the “additional limitations” while the underlined portions continue to represent the “abstract idea(s)”):
A wound triaging and recommendation system for wound triaging and recommendation for treatments, the wound triaging and recommendation system comprising: a hardware processor; and a memory coupled to the hardware processor, wherein the memory comprises a set of program instructions in the form of a plurality of subsystems, configured to be executed by the hardware processor, wherein the plurality of subsystems comprises:

a text and voice based conversational artificial intelligence (AI) subsystem configured to: obtain patient’s medical data comprising at least one of: history of one or more diseases of a patient, family information of the patient, symptoms of the one or more diseases in the patient, and medicines consumed by the patient (104) through a user device of the patient; and determine data associated with at least one of: genetic predisposition, the one or more diseases influencing a wound, and a state of the wound based on an effect of a current medicine that the patient consumes based on the obtained patient’s medical data using a machine learning algorithm;

a wound image analytics subsystem configured to: collect images of the wound from the patient; classify the wound of the patient into a plurality of categories comprising at least one of: granulation, necrotic, and cellulitis based on the collected images of the wound from the patient using the machine learning algorithm; and determine severity and risk category of the wound based on the classification of the wound of the patient using the machine learning algorithm;

an AI based text and image analytics subsystem configured to: obtain information associated with patient’s clinical reports from the patient; and extract key clinical parameters and changes in the key clinical parameters over time from the patient’s clinical reports by scanning the patient’s clinical reports using optical character recognition techniques; and

a patient treatment recommendation subsystem configured to: obtain at least one of: (a) the determined data associated with at least one of: the genetic predisposition, the one or more diseases influencing the wound, and the state of the wound based on the effect of the current medicine that the patient consumes, (b) the determined severity and risk category of the wound, and (c) the extracted key clinical parameters and the changes in the key clinical parameters over time from at least one of: the text and voice based conversational AI subsystem, the wound image analytics subsystem, and the AI based text and image analytics subsystem; obtain medical data and reports from other patients, wherein the medical data and reports of the other patients comprise past medical history of the other patients; and triage the wound to at least one of: identify the severity of the wound by qualifying risk score for the wound, provide wound prognostics for wound healing, and provide treatment recommendations associated with personalized therapeutic routes for healing the wound to the patient based on results outputted from at least one of: the text and voice based conversational AI subsystem, the wound image analytics subsystem, the AI based text and image analytics subsystem, and the medical data and reports from other patients.

The claim recites the additional elements of a wound triaging and recommendation system, memory, processor, user device, machine learning, text and voice based conversational AI subsystem, wound image analytics subsystem, AI based text and image analytics subsystem, patient treatment recommendation subsystem, and optical character recognition techniques that implement the identified abstract idea.
The wound triaging and recommendation system, memory, processor, user device, machine learning, text and voice based conversational AI subsystem, wound image analytics subsystem, AI based text and image analytics subsystem, and patient treatment recommendation subsystem are not described by the applicant and are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (i.e., merely invoking the computer structure as a tool used to execute the limitations, MPEP 2106.05(f)). The optical character recognition techniques are recited at a high level of generality such that they generally link the use of a judicial exception to a particular technological environment or field of use, and thus do not integrate the judicial exception into a practical application.

Dependent claims 6 and 7 recite additional element(s) beyond those already recited in the independent claims that implement the identified abstract idea. Claim 6 recites an explainable artificial intelligence framework. Claim 7 recites at least one of a mobile phone, camera, specialized multi-spectral, and hyperspectral in one or more wavelengths. However, these elements do not integrate the abstract idea into a practical application because: the explainable artificial intelligence framework represents mere instructions to apply the abstract idea on a computer (i.e., merely invoking the AI as a tool used to execute the limitations); and the mobile phone, camera, specialized multi-spectral, and hyperspectral in one or more wavelengths generally link the use of a judicial exception to a particular technological environment or field of use.

Accordingly, the claims as a whole do not integrate the abstract idea into a practical application, as they do not impose any meaningful limits on practicing the abstract idea.
Step 2B:

Regarding Step 2B, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. When viewed as a whole, claims 1-12 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claims recite processes that are abstract, and simply implementing those processes on a computer is not enough to qualify as "significantly more." As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a wound triaging and recommendation system, memory, processor, user device, machine learning, text and voice based conversational AI subsystem, wound image analytics subsystem, AI based text and image analytics subsystem, and patient treatment recommendation subsystem to perform the noted steps amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept ("significantly more"). In addition, the optical character recognition techniques generally link the use of a judicial exception to a particular technological environment or field of use, and thus do not amount to significantly more than the judicial exception.

Dependent claims 6 and 7 recite additional element(s) beyond those already recited in the independent claims that implement the identified abstract idea. Claim 6 recites an explainable artificial intelligence framework. Claim 7 recites at least one of a mobile phone, camera, specialized multi-spectral, and hyperspectral in one or more wavelengths.
However, these elements are not deemed significantly more than the abstract idea because: the explainable artificial intelligence framework represents mere instructions to apply the abstract idea on a computer (i.e., merely invoking the AI as a tool used to execute the limitations); and the mobile phone, camera, specialized multi-spectral, and hyperspectral in one or more wavelengths generally link the use of a judicial exception to a particular technological environment or field of use.

Therefore, claims 1-12 are rejected under 35 USC §101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5, 7-10 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Fan (US 2020/0193597 A1) in view of Wang (US 2024/0005501 A1) and further in view of Shluzas (US 2022/0301666 A1).

Regarding claim 1, Fan teaches:

A wound triaging and recommendation system for wound triaging and recommendation for treatments, the wound triaging and recommendation system comprising: a hardware processor; and a memory coupled to the hardware processor, wherein the memory comprises a set of program instructions in the form of a plurality of subsystems, configured to be executed by the hardware processor, wherein the plurality of subsystems comprises: (e.g. see [0234])

[…] obtain patient’s medical data comprising at least one of: history of one or more diseases of a patient, family information of the patient, symptoms of the one or more diseases in the patient, and medicines consumed by the patient (104) […]; and (obtaining patient medical data comprising the history of diseases and medicines consumed, such as “Prior DFU” (diabetic foot ulcer), “Diabetes Mellitus Medications”, “Steroids” and “Current or prior medical diagnosis”, e.g. see [0163], Table 1)

determine data associated with at least one of: genetic predisposition, the one or more diseases influencing a wound, and a state of the wound based on an effect of a current medicine that the patient consumes based on the obtained patient’s medical data using a machine learning algorithm; (feeding the obtained patient medical data into a machine learning algorithm to determine the state of the wound and disease (i.e. diabetes) influence; regarding healing of the DFU, “Predictions based on patient medical history alone were approximately 76% accurate…When combining these medical variables with image data we observed an increase in prediction accuracy to approximately 78%.”, e.g. see [0164]; “supervised machine learning algorithm, using the compressed image vector and patient clinical variables as inputs to predict DFU healing”, e.g. see [0192])

a wound image analytics subsystem configured to: collect images of the wound from the patient; (collecting images of a wound using a mobile device, e.g. see [0096], [0108]; “Color photography images (RGB images) were used as input data…Images were captured by a clinician using a portable digital camera.”, e.g. see [0188])

classify the wound of the patient […] based on the collected images of the wound from the patient using the machine learning algorithm; and (using a machine learning algorithm, a convolutional neural network (CNN)/U-net, to segment and classify wound images, e.g. see [0224]-[0226])

determine severity and risk category of the wound based on the classification of the wound of the patient using the machine learning algorithm; (“diagnosing the healing potential of DFUs, burns, and other wounds”, e.g. see [0058]; using the “machine learning model” to predict “the likelihood of a wound healing to a particular percentage area reduction over a specified time period (e.g., at least 50% area reduction within 30 days)”, e.g. see [0144])

an AI based text and image analytics subsystem configured to: obtain information associated with patient’s clinical reports from the patient; and extract key clinical parameters and changes in the key clinical parameters over time from the patient’s clinical reports […]; and (“From each subject, a set of clinical data (e.g., clinical variables or health metric values) was also obtained including their medical history, prior wounds, and blood work.”, e.g. see [0188]; “Patient metrics can include textual information or medical history or aspects thereof describing characteristics of the patient or the patient's health status”, e.g. see [0144]; extracting and utilizing longitudinal clinical parameters, e.g. “Healing rates and times of prior DFUs” and Hemoglobin A1C%, creatinine clearance and white blood cell counts over time, to feed into the machine learning model, e.g. see [0163], Table 1; “These metrics can be converted into a vector representation through appropriate processing, for example through word-to-vec embeddings, a vector having binary values representing whether the patient does or does not have the patient metric”, e.g. see [0164])

a patient treatment recommendation subsystem configured to: obtain at least one of: (a) the determined data associated with at least one of: the genetic predisposition, the one or more diseases influencing the wound, and the state of the wound based on the effect of the current medicine that the patient consumes, (b) the determined severity and risk category of the wound, and (c) the extracted key clinical parameters and the changes in the key clinical parameters over time from at least one of: the text and voice based conversational AI subsystem, the wound image analytics subsystem, and the AI based text and image analytics subsystem; (“the 1D representation of the image data can be concatenated with the vector representation of the patient metrics. This concatenated value can then be provided as an input into a fully connected neural network” (the system concatenates and obtains the outputs of the image analytics subsystem and the extracted clinical parameters into a unified recommendation engine), e.g. see [0165])

obtain medical data and reports from other patients, wherein the medical data and reports of the other patients comprise past medical history of the other patients; and (“The database of DFU images contained 29 individual images of diabetic foot ulcers obtained from 15 subjects. For each image, the true PAR measured at day 30 was known.”, e.g. see [0175])

triage the wound to at least one of: identify the severity of the wound by qualifying risk score for the wound, provide wound prognostics for wound healing, and (“generate, using one or more machine learning algorithms, at least one scalar value…corresponding to a predicted or assessed healing parameter over a predetermined time interval”, e.g. see [0007]; “A variety of healing parameters may be predicted by the present technology. By way of non-limiting example, some predicted healing parameters may include…(2) a percentage likelihood that the ulcer will heal to greater than 50% area reduction (or another threshold percentage, as desired according to clinical standards) within a period of 30 days (or another time period, as desired according to clinical standards); or (3) a prediction regarding the actual area reduction that is expected within 30 days (or another time period, as desired according to clinical standards) due to healing of the ulcer.”, e.g. see [0160])

provide treatment recommendations associated with personalized therapeutic routes for healing the wound to the patient (“In various embodiments, a wound assessment system or a clinician can determine an appropriate level of wound care therapy based on the results of the machine learning algorithms disclosed herein.”, e.g. see [0157]; “determining the predicted healing parameter comprises…selecting between a standard wound care therapy and an advanced wound care therapy”, e.g. see [0012]; “a standard wound care regimen, SOC therapy can include one or more of: optimization of nutritional status; debridement by any means to remove devitalized tissue; maintenance of a clean, moist bed of granulation tissue with appropriate moist dressings…”, e.g. see [0158]; “Advanced Wound Care (AWC) therapies…include, but are not limited to, any one or more of: hyperbaric oxygen therapy; negative-pressure wound therapy; bioengineered skin substitutes; synthetic growth factors...”, e.g. see [0159])

based on results outputted from at least one of: the text and voice based conversational AI subsystem, the wound image analytics subsystem, the AI based text and image analytics subsystem, and the medical data and reports from other patients. (“the 1D representation of the image data can be concatenated with the vector representation of the patient metrics [from the text analytics subsystem]. This concatenated value can then be provided as an input into a fully connected neural network, which outputs a predicted healing parameter.”, e.g. see [0165])

Fan does not teach: classify the wound of the patient into a plurality of categories comprising at least one of: granulation, necrotic, and cellulitis.

However, Wang, in the analogous art of machine learning for assessing wounds (e.g. see [0006]), teaches: classify the wound of the patient into a plurality of categories comprising at least one of: granulation, necrotic, and cellulitis (“a deep learning model that has been trained to classify pixels…between a plurality of classes comprising a plurality of classes associated with different types of wound tissue”, e.g. see [0007]; these classes include “granulation tissue”, “collagen” and “clot”, e.g. see [0061]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Fan to include classifying the wound of the patient into a plurality of categories comprising at least one of granulation, necrotic, and cellulitis, as taught by Wang, for the purposes of “provid[ing] a new strategy for assessing wounds” that yields “richer and less variable clinically relevant information” (Wang [0005]).

Fan and Wang do not teach: a text and voice based conversational artificial intelligence (AI) subsystem configured to: obtain patient’s medical data through a user device of the patient; scanning the patient’s clinical reports using optical character recognition techniques.

However, Shluzas, in the analogous art of machine learning and artificial intelligence for medical diagnostics (e.g. see [0012]), teaches: a text and voice based conversational artificial intelligence (AI) subsystem configured to: obtain patient’s medical data through a user device of the patient (a “patient monitoring system” utilizing “machine learning algorithms” and a mobile user device equipped with “voice-driven data entry via automatic speech recognition” to capture clinical data, patient history, symptoms and medications, e.g. see [0012], [0081]; “incorporate Deep Neural Network (DNN) support in the ASR (automatic speech recognition) to improve speech recognition”, e.g. see [0129]); and scanning the patient’s clinical reports using optical character recognition techniques (“software and hardware capable of reading patient information…optical character recognition (OCR)…or other data entry methods may be employed”; the system uses OCR to scan patient documents to extract the patient’s medical information for entry into the electronic health record (EHR) system and AI/ML system, e.g. see [0130], [0137], [0195]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Fan and Wang to include a text and voice based conversational artificial intelligence subsystem configured to obtain patient’s medical data through a user device of the patient and scanning the patient’s clinical reports using optical character recognition techniques, as taught by Shluzas, for the purposes of streamlining data capture, providing continuity across disconnected groups and reducing the chance of human error (Shluzas [0006]).

Regarding claim 2, Fan, Wang and Shluzas teach the system of claim 1 as described above.
Fan does not teach: store the obtained patient’s medical data; and compare the stored patient’s medical data with predetermined medical data to determine the data associated with at least one of: the genetic predisposition, the one or more diseases influencing the wound, and the state of the wound using the machine learning algorithm.

However, Shluzas in the analogous art teaches: store the obtained patient’s medical data; and (“As treatment occurs, physiological and treatment data is recorded by the PMSU (physiological monitor sensor unit) and the HMD (head-mounted display) and stored on the data tag.”, e.g. see [0136]; the data is stored on “internal non-volatile storage”, e.g. see [0134], [0151]) compare the stored patient’s medical data with predetermined medical data to determine the data associated with at least one of: the genetic predisposition, the one or more diseases influencing the wound, and the state of the wound using the machine learning algorithm (“a machine learning algorithm…that combines and analyzes human clinical data to compare data inputs with baseline data for establishing pertinent patient information (changes to a patient's physiological and neurological status…)”, e.g. see [0012]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Fan and Wang to include storing the obtained patient’s medical data and comparing the stored patient’s medical data with predetermined medical data to determine the data associated with at least one of the genetic predisposition, the one or more diseases influencing the wound, and the state of the wound using the machine learning algorithm, as taught by Shluzas, for the purposes of “continuously captur[ing] treatment or condition data” and “identify[ing] patients whose condition is worsening” (Shluzas [0011]-[0012]).

Regarding claim 3, Fan, Wang and Shluzas teach the system of claim 1 as described above.
Fan does not teach: wherein in classifying the wound of the patient, the wound image analytics subsystem is configured to: analyze the wound from the images of the wound collected from the patient; and compare the collected images of the wound with pre-classified images associated with the wound to classify the wound of the patient into the plurality of categories using the machine learning algorithm

However, Wang in the analogous art teaches: wherein in classifying the wound of the patient, the wound image analytics subsystem is configured to: analyze the wound from the images of the wound collected from the patient; and (“analysing one or more optical coherence tomography images of the wound using a deep learning model”, e.g. see [0007])

compare the collected images of the wound with pre-classified images associated with the wound to classify the wound of the patient into the plurality of categories using the machine learning algorithm (“The deep learning model may have been trained using a plurality of training optical coherence tomography images, wherein areas of each training image showing visual features indicative of the presence of the different types of wound tissues are labelled accordingly. The labels associated with the training images may be referred to as "ground truth labels".”, e.g. see [0012])

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Fan to include analyzing the wound from the images of the wound collected from the patient and comparing the collected images of the wound with pre-classified images associated with the wound to classify the wound of the patient into the plurality of categories using the machine learning algorithm as taught by Wang, for the purposes of removing human “subjectivity” and “increased accuracy” (Wang [0078]).

Regarding claim 5, Fan, Wang and Shluzas teach the system of claim 1 as described above.
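The compare-and-classify step cited from Wang against claim 3 reduces to matching a new wound image's features against pre-classified (labeled) exemplars. Wang is cited for a trained deep learning model; the nearest-centroid sketch below is a deliberately minimal stand-in for that comparison, with hypothetical feature values and category names.

```python
# Minimal stand-in for "compare collected wound images with
# pre-classified images": a nearest-centroid match over feature
# vectors. The cited reference uses a trained deep learning model;
# this sketch only illustrates the compare-and-classify step.
# Feature values and category names are hypothetical.
import math

# Pre-classified exemplars: category -> feature vectors (e.g. redness,
# exudate level, tissue granularity), already labeled by clinicians.
EXEMPLARS = {
    "granulating": [[0.2, 0.1, 0.9], [0.3, 0.2, 0.8]],
    "necrotic":    [[0.8, 0.7, 0.1], [0.9, 0.6, 0.2]],
}

def centroid(vectors):
    """Mean feature vector of a category's exemplars."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(features):
    """Assign the category whose exemplar centroid is closest."""
    best, best_dist = None, math.inf
    for category, vectors in EXEMPLARS.items():
        dist = math.dist(features, centroid(vectors))
        if dist < best_dist:
            best, best_dist = category, dist
    return best

print(classify([0.85, 0.65, 0.15]))  # -> "necrotic"
```

Swapping the centroid match for a CNN changes only the distance computation; the pipeline shape (collected image in, category out via comparison with labeled data) is what the claim recites.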
Fan further teaches: wherein the personalized therapeutic routes provided by the patient treatment recommendation subsystem comprise at least one of: drugs, diet, lifestyle changes, antibiotics, topicals, wound dressing, negative wound pressure therapy, hyperbaric oxygen therapy, and wound debridement (“a standard wound care regimen, SOC therapy can include one or more of: optimization of nutritional status (i.e. diet); debridement by any means to remove devitalized tissue; maintenance of a clean, moist bed of granulation tissue with appropriate moist dressings…”, e.g. see [0158]; “Advanced Wound Care (AWC) therapies…include, but are not limited to, any one or more of: hyperbaric oxygen therapy; negative-pressure wound therapy; bioengineered skin substitutes; synthetic growth factors...”, e.g. see [0159])

Regarding claim 7, Fan, Wang and Shluzas teach the system of claim 1 as described above.

Fan further teaches: wherein the images of the wound are captured using at least one of: a mobile phone, a camera, a specialized multi-spectral, and hyperspectral in one or more wavelengths comprising at least one of: ultraviolet (UV), and visible infrared (IR) (“images may be captured with a monochrome, RGB, and/or infrared imaging device such as those included in commercially available mobile devices”, e.g. see [0059]; “a multispectral multi-aperture imaging system…implemented as a set of multibandpass filters 905 that are attachable over a multi-aperture camera 915 of a mobile device 910…such as smartphones…having two openings leading to two image sensor regions”, e.g. see [0107])

Claim 8 recites substantially similar limitations as those already addressed in claim 1, and, as such is rejected for similar reasons as given above.

Claim 9 recites substantially similar limitations as those already addressed in claim 2, and, as such is rejected for similar reasons as given above.
Claim 10 recites substantially similar limitations as those already addressed in claim 3, and, as such is rejected for similar reasons as given above.

Claim 12 recites substantially similar limitations as those already addressed in claim 5, and, as such is rejected for similar reasons as given above.

Claims 4 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Fan, Wang and Shluzas in further view of AK (WO 2022/124865 A1).

Regarding claim 4, Fan, Wang and Shluzas teach the system of claim 1 as described above.

Fan does not teach: wherein in extracting the key clinical parameters and the changes in the key clinical parameters over time, the AI based text and image analytics subsystem is configured to: obtain the information associated with the patient’s clinical reports from the patient as at least one of: an image and a text;

However, Shluzas in the analogous art teaches: wherein in extracting the key clinical parameters and the changes in the key clinical parameters over time, the AI based text and image analytics subsystem is configured to: obtain the information associated with the patient’s clinical reports from the patient as at least one of: an image and a text; (“automatic object detection from image and video data”, e.g. see [0018])

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Fan and Wang to include obtaining the information associated with the patient’s clinical reports from the patient as at least one of an image and a text as taught by Shluzas, for the purposes of “continuously captur[ing] treatment or condition data” (Shluzas [0011]).
Fan, Wang and Shluzas do not teach: pre-process at least one of: the image and the text to extract non-zero pixels; recognize one or more characters based on the extracted non-zero pixels using segmentation, thresholding and AI based classification; and post-process the one or more characters to return at least one of: exact text and alphanumeric data comprising one or more numbers related to the key clinical parameters

However, AK in the analogous art of utilizing artificial intelligence, machine learning and computer vision pipelines to perform character recognition (e.g. see [0133], [0232]) teaches: pre-process at least one of: the image and the text to extract non-zero pixels; (“the preprocessing includes…transforming the primary image into a binary image exhibiting two color values as zero and non-zero.”, e.g. see [0023]-[0025]; “each pixel of the secondary image has a zero color value or non-zero color value, and the boundary criteria is satisfied when a corresponding pixel has the non-zero color value”, e.g. see [0060])

recognize one or more characters based on the extracted non-zero pixels using segmentation, thresholding and AI based classification; and (“Image Processing Modules 310 may include modules related to image pre-processing such as cropping, image splitting (i.e. segmentation), resizing, color to gray scale, thresholding, contour detection…”, e.g. see [0138]; passing the reduced non-zero pixel dataset to train an “AI-Computer vision model (i.e. decision tree architecture, machine learning model or deep learning architecture) for image classification”, e.g. see [0133])

post-process the one or more characters to return at least one of: exact text and alphanumeric data comprising one or more numbers related to the key clinical parameters (“FIG. 13b illustrates a scenario of character recognition based on contour detection.
There may be many scenarios where-in characters need to be recognized for automation like on medicine bottles - expiry date, reading number plate, street sign etc. Character Recognition based on present subject matter's contour detection will be faster because of pre-processing and feature detection”, e.g. see [0232])

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Fan, Wang and Shluzas to include pre-processing at least one of the image and the text to extract non-zero pixels, recognizing one or more characters based on the extracted non-zero pixels using segmentation, thresholding and AI based classification and post-processing the one or more characters to return at least one of exact text and alphanumeric data comprising one or more numbers related to the key clinical parameters as taught by AK, for the purposes of reducing the “input feature set” which “reduces complexity, improves training and inferencing time” (AK [0179]) and allowing for the character recognition to be faster (AK [0232]).

Claim 11 recites substantially similar limitations as those already addressed in claim 4, and, as such is rejected for similar reasons as given above.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Fan, Wang and Shluzas in further view of Sarp (“The Enlightening Role of Explainable Artificial Intelligence in Chronic Wound Classification”, Electronics, 2021).

Regarding claim 6, Fan, Wang and Shluzas teach the system of claim 1 as described above.

Fan, Wang and Shluzas teach the wound triaging and the treatment recommendations outputted by the patient treatment recommendation subsystem as described above.
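The three-stage character recognition pipeline cited from AK against claim 4 (pre-process into zero/non-zero pixels, segment and classify the non-zero pixel groups, post-process into exact text) can be sketched end to end. The lookup-table "classifier" below is only a stand-in for the AI-based classification the claim recites, and the glyph patterns are hypothetical.

```python
# Sketch of the claim 4 / claim 11 pipeline: binarize, segment at
# blank columns, classify each glyph, join into text. The glyph
# lookup table stands in for the AI-based classifier AK describes;
# the bitmap patterns are hypothetical.

THRESHOLD = 128

def binarize(gray_image):
    """Pre-process: map grayscale pixels to zero / non-zero (1)."""
    return [[1 if px >= THRESHOLD else 0 for px in row] for row in gray_image]

def segment_columns(binary):
    """Segment: split glyphs wherever an all-zero pixel column occurs."""
    glyphs, current = [], []
    for x in range(len(binary[0])):
        column = tuple(row[x] for row in binary)
        if any(column):
            current.append(column)
        elif current:
            glyphs.append(tuple(current))
            current = []
    if current:
        glyphs.append(tuple(current))
    return glyphs

# Stand-in for an AI classifier: known glyph bitmaps -> characters.
# A "1" here is a single column of three on-pixels (hypothetical).
KNOWN_GLYPHS = {((1, 1, 1),): "1"}

def recognize(gray_image):
    """Run the full pipeline; post-process returns the exact text."""
    glyphs = segment_columns(binarize(gray_image))
    return "".join(KNOWN_GLYPHS.get(g, "?") for g in glyphs)

# A 3x5 grayscale strip with three bright strokes separated by gaps.
print(recognize([[200, 0, 200, 0, 200]] * 3))  # -> "111"
```

The post-processing stage here is trivially the join; a real system would also validate alphanumeric fields (dates, lab values) against expected formats before returning the key clinical parameters.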
Fan, Wang and Shluzas do not teach: further comprising an explainable artificial intelligence (AI) framework, which enables physician, doctors and the patient to understand the recommendations outputted

However, Sarp in the analogous art of artificial intelligence for wound classification (e.g. see abstract) teaches: further comprising an explainable artificial intelligence (AI) framework, which enables physician, doctors and the patient to understand the recommendations outputted (implementing an “Explainable Artificial Intelligence (XAI)” framework; provide an explanation to “extract additional knowledge that can also be interpreted by non-data-science experts, such as medical scientists and physicians”, e.g. see abstract; feeding the classified chronic wound images into the model to generate an explanation; the “proposed model forms a hybrid XAI framework through the use of LIME and heatmap proposals”, e.g. see pgs. 7-8 para. 5; “With this information related to model rationale, the clinician can decide to trust the model or not.”, e.g. see pg. 7 para. 4; “helps caregivers make a decision and support their decision with a visual explanation”, e.g. see pg. 9 para. 3)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Fan, Wang and Shluzas to include an explainable artificial intelligence (AI) framework, which enables physician, doctors and the patient to understand the recommendations outputted as taught by Sarp, for the purposes of “help[ing] users decide when to trust or not to trust their predictions” (Sarp pg. 7 para. 3) and “better steer the treatment approach” (Sarp pg. 7 para. 4).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Reference DiMaio (US 2017/0079530 A1) discloses reflective mode multi-spectral time-resolved optical imaging methods and apparatuses for tissue classification.
Reference Dhawan (US 2010/0042004 A1) discloses multi-spectral imaging and analysis of skin lesions and biological tissues.

Reference You (US 2022/0398739 A1) discloses automatically recognizing wound boundary based on artificial intelligence and generating three-dimensional wound model.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaisha Abdullah whose telephone number is (571)272-5668. The examiner can normally be reached Monday through Friday 8:00 am - 5:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Peter Choi can be reached on (469) 295-9171. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.

For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.A./
/PETER H CHOI/
Supervisory Patent Examiner, Art Unit 3681
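The explainability mechanism cited from Sarp against claim 6 pairs a classifier with a visual explanation a clinician can inspect. Sarp's framework uses LIME and CNN-derived heatmaps; the sketch below substitutes a much simpler occlusion-sensitivity map (zero each pixel in turn and record the score drop) over a toy scoring function, purely to illustrate the heatmap half of the idea.

```python
# Simplified stand-in for the heatmap side of an XAI framework:
# occlusion sensitivity. Each pixel is zeroed in turn and the drop
# in the model's score is recorded, yielding a saliency map showing
# which regions drove the prediction. The "model" here is a toy
# scoring function; the cited work uses LIME and CNN heatmaps.

def occlusion_heatmap(image, score_fn):
    """Return per-pixel score drops when that pixel is occluded."""
    base = score_fn(image)
    heatmap = []
    for y, row in enumerate(image):
        heat_row = []
        for x in range(len(row)):
            occluded = [list(r) for r in image]  # copy, then mask one pixel
            occluded[y][x] = 0
            heat_row.append(base - score_fn(occluded))
        heatmap.append(heat_row)
    return heatmap

# Toy model: "wound severity" score is just the mean pixel intensity.
def toy_score(img):
    flat = [px for row in img for px in row]
    return sum(flat) / len(flat)

image = [[0, 9], [0, 9]]
print(occlusion_heatmap(image, toy_score))
# The bright pixels carry all the score, so they get the highest heat.
```

Overlaying such a map on the wound photograph is what lets a non-data-science user see which tissue regions the model relied on, which is the trust-calibration function the rejection quotes from Sarp.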

Prosecution Timeline

Oct 04, 2024
Application Filed
Mar 13, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12451247
USER INTERFACE FOR MANAGING A MULTIPLE DIAGNOSTIC ENGINE ENVIRONMENT
2y 5m to grant Granted Oct 21, 2025
Patent 12406768
SYSTEM AND METHOD FOR COLLECTION AND MANAGEMENT OF DATA FROM MANAGED AND UNMANAGED DEVICES
2y 5m to grant Granted Sep 02, 2025
Patent 12394511
Methods And Systems For Remote Analysis Of Medical Image Records
2y 5m to grant Granted Aug 19, 2025
Patent 12249425
INSULIN TITRATION ALGORITHM BASED ON PATIENT PROFILE
2y 5m to grant Granted Mar 11, 2025
Patent 12211624
METHODS AND SYSTEMS OF PREDICTING PPE NEEDS
2y 5m to grant Granted Jan 28, 2025
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
25%
Grant Probability
67%
With Interview (+41.9%)
4y 5m
Median Time to Grant
Low
PTA Risk
Based on 44 resolved cases by this examiner. Grant probability derived from career allow rate.
