DETAILED ACTION
Notices to Applicant
This communication is a final rejection. Claims 1-20, as filed 11/20/2025, are currently pending and have been considered below.
Priority is generally acknowledged as shown on the filing receipt with the earliest priority date being 08/09/2023.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon and the rationale supporting the rejection would be the same under either status.
Claim Objections
Claim 1 is objected to because of the following informality: the claim contains a typographical error, “point-of-case”. For purposes of examination, this limitation will be treated as reciting “point-of-care”.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1
The claims recite subject matter within a statutory category as a process, machine, and/or article of manufacture, and recite the following:
1. A system for optimizing wound healing in a patient, the system comprising:
a mobile device configured to capture an image of the wound, wherein the wound comprise an area and a tissue type (apply the abstract idea with a computer);
artificial intelligence configured to analyze the image using color information of the wound image to standardize the wound area of the wound and the tissue type of the wound (abstract idea – mental process and mathematical process);
a platform for receiving the artificial intelligence analyzed image and for receiving point of care wound assessment information, the platform electronically combining the standardized wound image data with the point-of-care wound assessment information (apply the abstract idea with a computer);
wherein the image standardization and the point-of-care wound assessment information are combined to produce a dressing guideline with evidence-based standards of care for treating the wound (abstract idea – mental process); and
wherein the dressing guideline is modified after a predetermined time period using machine learning based on whether the wound area has improved by at least a predetermined percentage threshold to identify patient and wound characteristics in combination with the dressing guideline to optimize wound healing (abstract idea – mental process);
wherein the artificial intelligence comprises machine-learning algorithms configured to determine wound-area measurements and to classify wound tissue type based on analysis of the color composition of the wound image (abstract idea – mental process).
2. The system of claim 1, wherein the point-of-care wound assessment information is selected from one or more of wound type, wound stage, wound depth, drainage amount, presence of purulent drainage, peri-wound characteristics, presence of undermining, and presence of tunneling (abstract idea – mental process and applying the idea with a computer).
3. The system of claim 1, wherein the digital device is selected from one or more of a smart phone, a smart watch, a tablet, a laptop computer, a personal digital assistant, a pair of smart glasses, a virtual reality viewing device, a digital camera, and a digital scanning device (apply the abstract idea with a computer).
4. The system of claim 1, wherein the wound image is uploaded to a mobile application or computer for analyzing using artificial intelligence (apply the abstract idea with a computer).
5. The system of claim 1, wherein the standardization of the wound and tissue type is determined in part by the color of the wound image (abstract idea – mental process).
6. The system of claim 1, wherein the tissue type is selected from granulation, slough, eschar, or combinations thereof (abstract idea – mental process).
7. The system of claim 1, wherein the point-of-care wound assessment information are selected from one or more of patient allergies, patient health or immune problems, topography of the body part on which the wound lies, color of skin surrounding the wound, wound depth, wound stage, drainage amount, peri-wound characteristics, presence of tunneling, and presence of undermining (abstract idea – mental process).
8. The system of claim 1, wherein the wound dressing guideline comprises a type of wound dressing and wound treatment methods (abstract idea – mental process).
9. The system of claim 8, wherein the wound dressing guideline includes physical dressing information, chemical dressing information, geometrical dressing information, optical dressing information, electrical dressing information, number of layers, porosity of a layer, thickness of a dressing, adsorbing capacity, water penetration capacity, water vapor penetration capacity, gas penetration capacity, thickness, material, material form, pharmacological or healing enhancing additives, color, local absence of dressing, adhesive, or combinations thereof (abstract idea – mental process).
10. The system of claim 1, wherein the wound dressing guideline includes predictors of non-healing by wound type (abstract idea – mental process).
11. The system of claim 1, wherein the predetermined time period is two weeks (abstract idea – mental process).
12. The system of claim 1, wherein an improvement of at least about 25% in the wound area automatically triggers a wound guideline for the wound, while an improvement of less than about 25% or deterioration in the wound area automatically triggers a non-healing wound guideline (abstract idea – mental process).
Claims 1-12 are analyzed in detail but the same reasoning applies to claims 13-20.
Step 2A Prong One
The broadest reasonable interpretation of these steps includes certain methods of organizing human activity, such as the process of a clinician monitoring a patient’s healing. The steps also include mental processes, such as the determinations and analyses that a clinician could perform mentally while caring for a patient. For example, the clinician could look at the wound, consider the location and type of wound, apply a simple AI/ML algorithm like linear regression, and output a treatment. No details are given for the AI analysis that go beyond what a human could practicably do in the mind or using a computer performing generic functions.
Dependent claims recite additional subject matter which further narrows or defines the abstract idea embodied in the claims as shown in the analysis above. For example, claims 5-12 give additional aspects of the determinations that can be performed practicably in the human mind or with recitation of generic computer components.
Step 2A Prong Two
This judicial exception is not integrated into a practical application. In particular, the additional elements (i.e., the elements other than the abstract idea per se) do not integrate the abstract idea into a practical application because they:
amount to mere instructions to apply an exception. For example, the mobile device configured to capture an image of the wound and the platform for receiving the artificial intelligence analyzed image amount to invoking computers as a tool to perform the abstract idea (see the published application [0009] and [0037] and MPEP 2106.05(f));
add insignificant extra-solution activity to the abstract idea. For example, capturing an image of a wound amounts to mere data gathering and selecting a particular data source or type of data to be manipulated (see MPEP 2106.05(g)), and generally links the abstract idea to a particular technological environment or field of use such as cameras and AI (see MPEP 2106.05(h)).
Dependent claims recite additional subject matter which amounts to limitations consistent with the additional elements in the independent claims. For example, claims 3 and 4 apply the abstract idea using computers as tools. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation and do not impose a meaningful limit to integrate the abstract idea into a practical application.
Step 2B
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply an exception, add insignificant extra-solution activity to the abstract idea, and generally link the abstract idea to a particular technological environment or field of use. Additionally, the additional limitations, other than the abstract idea per se, amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields. For example, the mobile device configured to capture an image of the wound and the platform for receiving the artificial intelligence analyzed image amount to receiving or transmitting data over a network, Symantec, MPEP 2106.05(d)(II)(i); performing repetitive calculations, Flook, MPEP 2106.05(d)(II)(ii); electronic recordkeeping, Alice Corp., MPEP 2106.05(d)(II)(iii); and/or storing and retrieving information in memory, Versata Dev. Group, MPEP 2106.05(d)(II)(iv).
Dependent claims recite additional subject matter which, as discussed above with respect to integration of the abstract idea into a practical application, amounts to invoking computers as a tool to perform the abstract idea and to limitations consistent with the additional elements in the independent claims. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-10 and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Barakat-Johnson (Barakat-Johnson M, Jones A, Burger M, Leong T, Frotjold A, Randall S, Kim B, Fethney J, Coyer F. Reshaping wound care: Evaluation of an artificial intelligence app to improve wound assessment and management amid the COVID-19 pandemic. Int Wound J. 2022 Oct;19(6):1561-1577) in view of Ramachandram (Ramachandram D, Ramirez-GarciaLuna J, Fraser R, Martínez-Jiménez M, Arriaga-Caballero J, Allport J; Fully Automated Wound Tissue Segmentation Using Deep Learning on Mobile Devices: Cohort Study; JMIR Mhealth Uhealth 2022;10(4):e36977) and Fan (US20210201479A1).
Regarding claim 1, Barakat-Johnson discloses: A system for optimizing wound healing in a patient, the system comprising:
--a mobile device configured to capture an image of the wound, wherein the wound comprises an area and a tissue type (“place sticker for wound reference, capture image” in FIGURE 3; “The TA app was available for any smartphone and Android device with an integrated camera,” page 1564);
--artificial intelligence configured to analyze the image using color information of the wound image (“The TA app is designed to facilitate patient wound care delivery using artificial intelligence‐based technology to support clinical decision‐making. By capturing an image of the wound, the TA app analyses its dimensions and perimeters, surface area and tissue composition and presents augmented visual images (Figure 2),” page 1564; FIGURE 2 shows a wound image with color);
--a platform for receiving the artificial intelligence analyzed image and for receiving point of care wound assessment information, the platform electronically combining the standardized wound image data with the point-of-care wound assessment information (FIGURES 2 and 3; FIGURE 3 shows a clinician entering wound characteristics which is combined with “algorithms and clinical decision-support tools to assist in determining the best treatment options, the type of wound products to use, the tracking of the wound healing progression,” on page 1563; this shows a combination of point-of-care information entered by the clinician and AI-derived image metrics like wound area and tissue composition);
--wherein the image standardization and the point-of-care wound assessment information are combined to produce a dressing guideline with evidence-based standards of care for treating the wound (“based on your wound documentation the following product properties are relevant” in FIGURE 3; Analysis of wound shown in FIGURE 3 includes color-coded tissue composition shown in FIGURE 2; “type of wound products to use,” page 1563); and
--wherein the dressing guideline is modified after a predetermined time period using machine learning …to identify patient and wound characteristics in combination with the dressing guideline to optimize wound healing (measurements were taken every week, page 1568, leading to updated dressing guidelines; this is further supported by the “view wound tracking overtime” and healing rate report in FIGURE 3).
[Image: media_image1.png (greyscale)]
Barakat-Johnson does not expressly disclose but Ramachandram teaches that the color information of the wound image is used to standardize the wound area of the wound and the tissue type of the wound (“These tissues are present in an open wound in various color spectra when observed through a conventional imaging sensor. Epithelial tissue is observed as being pinkish or white regions that migrate from the wound margin with minimal exudate,” page 2; “4 different tissue types (epithelial, granulation, slough, and eschar) within the wound bed were independently labeled,” Abstract; “The tissue segmentation network, AutoTissue, produces a dense prediction of 4 wound tissue types (epithelial, granulation, slough, and eschar) when present within the detected wound bed,” page 4; “slough is observed as soft, yellow glutinous,” page 2; “The models developed could automatically detect the location of a wound in an image, delineate the accurate boundaries of the wound, determine if any of the 4 types of tissue are present within the wound bed, and finally compute their relative proportions for reporting,” page 2).
--wherein the artificial intelligence comprises machine-learning algorithms configured to determine wound-area measurements and to classify wound tissue type based on analysis of the color composition of the wound image (AutoTrace Wound Segmentation Model is a “deep convolutional encoder-decoder neural network,” page 7; wound area measurement and wound tissue segmentation on page 3 using “dynamic color thresholding” on page 4; various color-based techniques on pages 3 and 4).
One of ordinary skill in the art would have been motivated before the effective filing date to expand the wound classification of Barakat-Johnson to include the tissue type classification and color analysis of Ramachandram because these are the “four major tissue types present in chronic wounds” (page 2) and properly assessing the wound with these tissue types will help “improv[e] treatment selection,” (page 2).
Barakat-Johnson does not expressly disclose using wound healing percentage thresholds in the wound analyses. Fan teaches: based on whether the wound area has improved by at least a predetermined percentage threshold (“the wound is assessed to determine if it is healing (e.g., percent area reduction of greater than 50%). If the wound is not healing sufficiently, the treatment is supplemented with one or more advanced wound management therapies,” [0056]; “when the predicted healing parameter indicates that the wound, preferably a DFU, will heal or close by greater than 50% in 30 days, indicating or applying one or more standard therapies,” [0012]).
One of ordinary skill in the art would have been motivated before the effective filing date to expand the wound classification of Barakat-Johnson and Ramachandram to include the clinical decision support based on healing percentage thresholds of Fan because this would improve disease prevention by, for example, directing clinicians to use standard or advanced wound management techniques (Fan [0004] and [0057]).
Regarding claim 2, Barakat-Johnson discloses: wherein the point-of-care wound assessment information is selected from one or more of wound type, wound stage, wound depth, drainage amount, presence of purulent drainage, peri-wound characteristics, presence of undermining, and presence of tunneling (the Input Wound Data screen in FIGURE 3 shows wound type, depth description, and more).
[Image: media_image2.png (greyscale)]
Regarding claim 3, Barakat-Johnson discloses: wherein the digital device is selected from one or more of a smart phone, a smart watch, a tablet, a laptop computer, a personal digital assistant, a pair of smart glasses, a virtual reality viewing device, a digital camera, and a digital scanning device (smartphone on page 1563).
Regarding claim 4, Barakat-Johnson discloses: wherein the wound image is uploaded to a mobile application or computer for analyzing using artificial intelligence (“The TA app is a cloud‐based application to measure, analyse and treat wounds. The TA app is designed to facilitate patient wound care delivery using artificial intelligence‐based technology to support clinical decision‐making. By capturing an image of the wound, the TA app analyses its dimensions and perimeters, surface area and tissue composition and presents augmented visual images (Figure 2),” page 1564).
Regarding claim 5, Barakat-Johnson discloses: wherein the standardization of the wound and tissue type is determined in part by the color of the wound image (colors in FIGURE 2 and in the pie chart “View analysis” in FIGURE 3).
Regarding claim 6, Barakat-Johnson does not expressly disclose but Ramachandram teaches: wherein the tissue type is selected from granulation, slough, eschar, or combinations thereof (“4 different tissue types (epithelial, granulation, slough, and eschar) within the wound bed were independently labeled,” Abstract; “The tissue segmentation network, AutoTissue, produces a dense prediction of 4 wound tissue types (epithelial, granulation, slough, and eschar) when present within the detected wound bed,” page 4).
The motivation to combine is the same as in claim 1.
Regarding claim 7, Barakat-Johnson discloses: wherein the point-of-care wound assessment information are selected from one or more of patient allergies, patient health or immune problems, topography of the body part on which the wound lies, color of skin surrounding the wound, wound depth, wound stage, drainage amount, peri-wound characteristics, presence of tunneling, and presence of undermining (FIGURE 3 includes Periwound assessment, Wound Site/Location, Depth description, and other assessments).
Regarding claim 8, Barakat-Johnson discloses: wherein the wound dressing guideline comprises a type of wound dressing and wound treatment methods (PRODUCTS and CONSIDERATIONS in FIGURE 3).
[Image: media_image3.png (greyscale)]
Regarding claim 9, Barakat-Johnson discloses: wherein the wound dressing guideline includes physical dressing information, chemical dressing information, geometrical dressing information, optical dressing information, electrical dressing information, number of layers, porosity of a layer, thickness of a dressing, adsorbing capacity, water penetration capacity, water vapor penetration capacity, gas penetration capacity, thickness, material, material form, pharmacological or healing enhancing additives, color, local absence of dressing, adhesive, or combinations thereof (“self-adhesive fixation” in FIGURE 3).
Regarding claim 10, Barakat-Johnson does not expressly disclose but Ramachandram teaches: wherein the wound dressing guideline includes predictors of non-healing by wound type (“healing risk prediction, identification on nonhealing wounds, adjustment of treatment options,” page 16; “For example, the PUSH score [6] was proposed for pressure injuries and consists of three parameters: length×width, exudate amount (none, light, moderate, and heavy), and tissue type (necrotic tissue, slough, granulation tissue, epithelial tissue, and closed). Each parameter was scored, and the sum of the 3 scores yielded a total wound status score, which helped classify wound severity and identify nonhealing wounds,” page 3).
One of ordinary skill in the art would have been motivated before the effective filing date to expand the wound classification of Barakat-Johnson, Ramachandram, and Fan to include the predictors of Ramachandram because identifying non-healing wounds earlier would “ultimately lead to improved healing rates for chronic wounds,” (page 16).
Regarding claims 13-20, the claims are substantially similar to claims 1-8 (respectively) and are rejected with the same reasoning.
Claims 11 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Barakat-Johnson in view of Ramachandram, Fan, and Harries (R. L. Harries, D. C. Bosanquet, and K. G. Harding, Wound Bed Preparation: TIME for an Update, International Wound Journal 13 Suppl 3, no. Suppl 3 (2016)).
Regarding claim 11, Barakat-Johnson discloses weekly check-ups and Fan discloses check-ups every 30 days, but not every two weeks. Harries teaches: wherein the predetermined time period is two weeks (“A 20–40% reduction in wound area after 2 and 4 weeks of treatment has been shown to be a reliable predictor of healing,” page 11).
One of ordinary skill in the art would have been motivated before the effective filing date to expand the wound classification of Barakat-Johnson, Ramachandram, and Fan to include the two-week follow-up assessment of Harries because it would require fewer clinical resources than the weekly check-ups of Barakat-Johnson while maintaining efficacy of treatment (page 11). The Examiner additionally notes that “where the general conditions of a claim are disclosed in the prior art, it is not inventive to discover the optimum or workable ranges by routine experimentation,” MPEP 2144.05(II).
Regarding claim 12, Barakat-Johnson does not expressly disclose but Harries teaches: wherein an improvement of at least about 25% in the wound area automatically triggers a wound guideline for the wound, while an improvement of less than about 25% or deterioration in the wound area automatically triggers a non-healing wound guideline (“A 20–40% reduction in wound area after 2 and 4 weeks of treatment has been shown to be a reliable predictor of healing,” page 11).
Barakat-Johnson suggests that any reduction is seen as an improvement (TABLE 4). Harries’ stricter criterion of a 20-40% reduction would allow the system to label as non-healing some patients whose minor reductions would otherwise have been seen as healing in Barakat-Johnson. One of ordinary skill in the art would have been motivated before the effective filing date to expand the wound classification of Barakat-Johnson, Ramachandram, and Fan to include the wound healing/non-healing classification of Harries because this would give the system a “reliable predictor of healing” that would allow for fine-tuned recommendations.
Response to Arguments
Applicant's arguments filed 11/20/2025 have been fully considered and are discussed below.
Regarding the subject matter ineligibility rejections, Applicant argues that the claimed invention is not directed to an abstract idea (Step 2A Prong One) because the Examiner abstracts away too many elements, such as a mobile device configured to capture an image of the wound, AI to analyze the image and standardize the wound area and tissue type, generating an evidence-based dressing guideline, and other features. Remarks pages 7-8. The Examiner disagrees because the existence of non-abstract elements does not necessarily mean that the claimed invention is not directed to an abstract idea. As described above in greater detail, the Examiner considers the abstract and non-abstract elements of the claims, and most of these non-abstract elements amount to, for example, merely applying the abstract idea with generic computer equipment. Applicant argues that a human cannot capture a wound image and perform analyses on the image because doing so requires steps that a human cannot perform, like “programmatically segment pixel regions corresponding to wound area,” Remarks page 8, but this is a level of detail not found in the claims or specification. The term “pixel” is not found anywhere in the specification. That Applicant omitted the actual detail of image capture and analysis is further evidence that these features are not inventive in the instant case. Instead, the limitations amount to a high-level recitation of abstract ideas being implemented on computers performing generic functions.
Applicant’s analogy to McRO on page 9 is not persuasive because the claims in that case contained a number of rules for 3D animation that were not mental processes and succeeded in automating a process that was not previously automated. That stands in contrast to wound analysis (i.e., looking at a wound, analyzing color and margins, and applying structured decision-making to make a treatment decision), which can occur altogether without computers. Similarly, the instant invention claims functional results of analysis without claiming the technical means to achieve them, which is distinct from McRO’s 3D modeling rules. These and other analogies are not persuasive because the claimed invention recites abstract ideas in the form of, e.g., mathematical concepts like color composition analysis, percentage threshold computation, and wound-area measurement algorithms, and mental processes like tissue classification by color, treatment recommendation, and treatment modification based on measured outcomes.
Applicant argues that the claimed invention integrates any abstract idea into a practical application (Step 2A Prong Two) because it amounts to a particular machine or transformation. Remarks pages 11-12. This is not persuasive because MPEP 2106.05(b) requires not merely that hardware is recited but that it meaningfully limit the claim. A generic mobile device (i.e., a smartphone), a generic AI/ML system, and a generic data platform do not satisfy this requirement. The claims do not limit the mobile device to any particular sensor type, the AI to any specific architecture, or the platform to any specialized clinical software. The hardware is recited at the level of “apply it with a computer”. The components “cooperat[ing] to form a multi-stage image-processing and clinical decision engine,” Remarks page 12, describes a logic arrangement rather than any limitation on the hardware itself.
Applicant argues that the claimed invention improves wound care technology by providing standardized wound assessment and using AI-driven guideline refinement. Remarks page 12. This is not persuasive because the improvement is in wound care decision-making rather than any particular computer technology. The claims do not actually treat a wound. They merely give a clinician clinical decision support which can then be used to guide treatment. Improving clinical decision-making is an improvement to the abstract idea itself.
Applicant argues that the image capture is not mere data gathering “but is instead computational medical decision support, producing real clinical outputs.” Remarks page 12. The Examiner disagrees because collecting data from multiple sources, analyzing the data, and outputting results was held to be an abstract idea in Electric Power Group v. Alstom. See MPEP 2106.04(a)(2)(III)(A).
Applicant argues that the claimed invention amounts to significantly more than any abstract idea (Step 2B). Remarks pages 13-15. Applicant first argues that “the use of machine learning to dynamically modify dressing guidelines is not conventional” and that “absent evidence demonstrating that such an ML-driven, pixel-standardized, evidence-based would-care optimization system was routine, the Patent Office cannot meet its burden under Berkheimer.” Remarks pages 13-14. This argument conflates obviousness with eligibility. As the Supreme Court emphasizes: “[t]he ‘novelty’ of any element or steps in a process, or even of the process itself, is of no relevance in determining whether the subject matter of a claim falls within the § 101 categories of possibly patentable subject matter.” Diehr, 450 U.S. at 188-89 (emphasis added). The Federal Circuit further guides that “[e]ligibility and novelty are separate inquiries.” Two-Way Media Ltd. v. Comcast Cable Commc’ns, LLC, 874 F.3d 1329, 1340 (Fed. Cir. 2017). The Examiner is not required to provide evidence that every limitation of the claim and the combination overall are well-understood, routine, and conventional. This evidentiary requirement applies only to additional elements that have not been held to be well-understood, routine, and conventional by courts. There are no such elements in the above eligibility analysis.
Regarding the prior art rejections, Applicant’s remarks about anticipation are moot in light of the rejections based on Fan described above. Regarding obviousness, Applicant argues that one of ordinary skill in the art would not have been motivated to combine Barakat-Johnson and Ramachandram. This is not persuasive because both references are in the same field and address the same problems. Ramachandram’s models were even designed for mobile deployment (“The model is small and fast enough to enable real-time inference on mobile devices,” page 6). Applicant’s assertion on Remarks page 20 that Ramachandram is “limited to producing segmentation masks” and “does not generate uniform wound metrics suitable for treatment protocols” fails to recognize that Ramachandram expressly teaches wound-area measurements (page 3), tissue type classification (page 3), and tissue proportion computation (page 4). Other arguments regarding the failure of the combination to disclose wound guidelines based on percentage thresholds are moot in light of the teachings of Fan described above.
Conclusion
Prior art that is considered pertinent but is not relied upon for any rejection includes Chairat (Chairat S, Chaichulee S, Dissaneewate T, Wangkulangkul P, Kongpanichakul L. AI-Assisted Assessment of Wound Tissue with Automatic Color and Measurement Calibration on Images Taken with a Smartphone. Healthcare. 2023; 11(2):273. https://doi.org/10.3390/healthcare11020273), which discloses another AI-assisted wound assessment mobile app that includes techniques for color calibration (Abstract).
Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office Action (See MPEP 706.07(a)). Accordingly, THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA BLANCHETTE whose telephone number is (571)272-2299. The examiner can normally be reached on Monday - Thursday 7:30AM - 6:00PM, EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shahid Merchant, can be reached on (571) 270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSHUA B BLANCHETTE/ Primary Examiner, Art Unit 3624