Prosecution Insights
Last updated: April 19, 2026
Application No. 18/312,407

Three-Dimensional Wound Reconstruction using Images and Depth Maps

Status: Final Rejection (§103)
Filed: May 04, 2023
Examiner: LE, JOHNNY TRAN
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Fabri Sciences Inc.
OA Round: 2 (Final)

Grant Probability: 67% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
Grant Probability with Interview: 0%

Examiner Intelligence

Career Allow Rate: 67% (2 granted / 3 resolved), +4.7% vs TC avg (above average)
Interview Lift: -66.7% (grant rate with vs. without an interview, among this examiner's resolved cases with an interview; minimal sample)
Typical Timeline: 2y 9m average prosecution (32 applications currently pending)
Career History: 35 total applications across all art units

Statute-Specific Performance

§101: 6.1% (-33.9% vs TC avg)
§103: 65.9% (+25.9% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)

Comparison baseline is the Tech Center average estimate. Based on career data from 3 resolved cases.
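For readers who want to sanity-check these figures, the deltas appear to be simple differences against the Tech Center baseline. The sketch below reproduces the arithmetic; the 62% baseline and the formulas are assumptions, since the page does not publish its methodology.

```python
# Sketch of how the dashboard's comparison metrics appear to be derived.
# The TC baseline (0.62) is inferred from the displayed +4.7% delta and
# is an assumption; the page does not publish its methodology.

career_allow_rate = 2 / 3          # 2 granted / 3 resolved, ~66.7%
tc_avg_allow_rate = 0.62           # hypothetical Tech Center 2600 baseline

lift_vs_tc = career_allow_rate - tc_avg_allow_rate        # ~ +4.7 points

# Interview lift: allow rate among resolved cases that had an interview,
# minus the allow rate among those that did not.
rate_with_interview = 0.0          # 0% "With Interview" shown on the page
rate_without_interview = 2 / 3
interview_lift = rate_with_interview - rate_without_interview  # -66.7 points

print(f"allow rate {career_allow_rate:.1%}, vs TC avg {lift_vs_tc:+.1%}")
print(f"interview lift {interview_lift:+.1%}")
```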

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

1. This action is in response to the amendment filed on 09/19/2025. Claims 1, 16, 19, and 20 have been amended. Claims 1-14, 16-18, and 20 remain rejected, but claims 15-17 will be objected to and claim 19 will be indicated allowable.

Response to Arguments

2. Applicant's arguments with respect to claim 1, filed on 09/19/2025, concern the rejection under 35 U.S.C. § 103 and contend that the prior art does not teach "a three-dimensional reconstruction of the wound healing, wherein the three-dimensional reconstruction of the wound healing represents an appearance of a body part while the wound heals or after the wound has fully healed." Elements of claim 16 were removed and added to claim 1. This argument has been considered, but it is moot due to new grounds of rejection under Pedersen.

3. Applicant's arguments with respect to claim 19, filed on 09/19/2025, concern the rejection under 35 U.S.C. § 103 and contend that the prior art does not teach elements such as a "pixel area" that "corresponds to an area of an infrared-sensitive pixel within a depth sensor used to capture the depth map". Elements of claim 15 were added to claim 19, while claim 15 remains unaltered. This argument has been considered; claim 15 (as well as its dependent claims 16 and 17) will be objected to because it still depends from claim 1, and claim 19 will be considered allowable. Accordingly, the prior art references Broaddus, Yoo, Mesher, Wissel, and Yuda have been withdrawn.

4. Applicant's arguments with respect to claim 20, filed on 09/19/2025, concern the rejection under 35 U.S.C. § 103 and contend that the prior art does not teach to "denoise the wound mask to remove artifacts generated as a result of using the machine-learned model for segmentation to generate the wound mask". Elements of claim 9 were added to claim 20, while claim 9 remains unaltered. This argument has been considered, but it is moot due to new grounds of rejection under Kennett.

5. Regarding the arguments to claims 2-8, 10-14, and 17-18: these claims directly or indirectly depend on independent claims 1, 19, and 20, respectively. Applicant does not argue anything beyond independent claims 1, 19, and 20 (as well as the combined material from claims 9, 15, and 16). The limitations in those claims, in combination, were mostly established previously as explained, but claim 17 is now objected to as mentioned above.

Claim Rejections - 35 USC § 103

6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
7. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

8. Claims 1-2, 5, 7, 13, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Edlund et al. (US 20240249406 A1) in view of J. M. Juszczyk et al., "Wound 3D Geometrical Feature Estimation Using Poisson Reconstruction," IEEE Access, vol. 9, pp. 7894-7907, 2021, doi: 10.1109/ACCESS.2020.3035125 (hereinafter Juszczyk), and Pedersen et al. (US 20170076446 A1).

9. Regarding claim 1, Edlund teaches a method comprising: receiving, by a computing device, an image that includes a wound ([0055] reciting "Such image-processing vendors may offer one or more additional software packages that may be used for processing specific aspects of captured wound images."); receiving, by the computing device, a depth map that includes the wound ([0057] reciting "Wound images captured by the image capture device 302 may be used by the wound imaging and diagnostic application to determine a depth map associated with each captured wound image…"; [0079] reciting "…may apply a trained machine learning model, such as a wound detection model, in order to find all regions within the digital image where possible wounds are located."); aligning, by the computing device, the image with the depth map ([0016] reciting "The executable instructions may configure a processor to receive a digital image, receive a depth map associated with the digital image, determine a bounding box of the wound site by processing the digital image with a first trained neural network, determine a wound mask by processing the digital image with a second trained neural network, determine a wound boundary from the wound mask, determine whether the wound boundary is aligned within a camera frame, and generate a wound map from the depth data, wound mask, and wound boundary."); and determining, by the computing device based on the identified region of the image that corresponds to the wound, a region of the depth map that corresponds to the wound ([0057] reciting "Wound images captured by the image capture device 302 may be used by the wound imaging and diagnostic application to determine a depth map associated with each captured wound image…").

10. Edlund does not explicitly teach generating, by the computing device, a three-dimensional reconstruction of the wound based on the region of the depth map that corresponds to the wound; applying, by the computing device, one or more colorations to the three-dimensional reconstruction of the wound based on one or more colorations in the identified region of the image that corresponds to the wound; generating, by the computing device, a three-dimensional reconstruction of the wound healing, wherein the three-dimensional reconstruction of the wound healing represents an appearance of a body part while the wound heals or after the wound has fully healed; and displaying, by the computing device on a display, the three-dimensional reconstruction of the wound healing.
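For technical orientation only (this is not part of the prosecution record): the disputed limitation, generating a colorized three-dimensional reconstruction from the wound region of a depth map, reduces to back-projecting the masked depth pixels through the camera intrinsics and attaching per-pixel RGB values. A minimal sketch, assuming a pinhole camera model, a pre-aligned RGB image and depth map, and hypothetical names:

```python
import numpy as np

def reconstruct_wound(rgb, depth, mask, fx, fy, cx, cy):
    """Back-project masked depth pixels to a colored 3D point cloud.

    rgb:   (H, W, 3) uint8 image aligned to the depth map
    depth: (H, W) float32 depth in meters
    mask:  (H, W) bool wound mask
    fx, fy, cx, cy: pinhole intrinsics of the depth sensor (assumed known)
    """
    v, u = np.nonzero(mask)            # pixel rows/cols inside the wound
    z = depth[v, u]
    x = (u - cx) * z / fx              # pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1)   # (N, 3) wound surface points
    colors = rgb[v, u]                       # per-point RGB coloration
    return points, colors
```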
11. Juszczyk teaches generating, by the computing device, a three-dimensional reconstruction of the wound based on the region of the depth map that corresponds to the wound (see Figures 8 and 11); applying, by the computing device, one or more colorations to the three-dimensional reconstruction of the wound based on one or more colorations in the identified region of the image that corresponds to the wound (see Figures 8 and 11; [Figure 11] reciting "An exemplary result of wound reconstruction, Light red wireframe represents SPR reconstruction, dark red – MC reconstruction, and green – expert."); generating, by the computing device, a three-dimensional reconstruction of the wound, wherein the three-dimensional reconstruction of the wound (see Figures 8 and 11) represents an appearance of a body part ([Section VI] reciting "Applicability to wounds located on body parts exhibiting a high curvature (i.e. limb) can be considered as an important advantage of this approach."); and displaying, by the computing device on a display, the three-dimensional reconstruction of the wound ([Section VI] reciting "In return, a virtual 3D skin is displayed and 3D wound measures are assessed.").

[Figures 8 and 11 of Juszczyk, reproduced in the original action as media_image1.png and media_image2.png, are omitted here.]

12. It would have been obvious to one of ordinary skill, before the effective filing date of the claimed invention, to have modified the method taught by Edlund to incorporate the teachings of Juszczyk: providing a colorized three-dimensional reconstruction of the region of the wound based on the depth map, instead of a three-dimensional depth map itself as taught by Edlund ([0059] recited), as well as displaying the reconstruction based on the type of body part. Doing so would provide methods featuring higher accuracy, as stated by Juszczyk ([Section VII] recited).

13. Edlund in view of Juszczyk does not explicitly teach generating, by the computing device, a three-dimensional reconstruction of the wound healing, wherein the three-dimensional reconstruction of the wound healing represents an appearance of a body part while the wound heals or after the wound has fully healed; and displaying, by the computing device on a display, the three-dimensional reconstruction of the wound healing.

14. Pedersen teaches generating, by the computing device, a three-dimensional reconstruction of the wound healing, wherein the three-dimensional reconstruction of the wound healing represents an appearance of a body part while the wound heals or after the wound has fully healed, and displaying, by the computing device on a display, the three-dimensional reconstruction of the wound healing ([0006] reciting "The present teachings provide patients with diabetes and chronic foot ulcers an easy-to-use and affordable tool to monitor the healing of their foot ulcers via a healing score; at the same time, the patient's physician can review the wound image data to determine whether intervention is warranted. The system is also applicable for patients with venous leg ulcers. The system and method also includes correcting the wound area if the image was acquired at an angle relative to normal incidence."; [0007] reciting "In accordance with one aspect, the present teachings provide a method for assessing wound.
In one or more embodiments, the method of these teachings includes capturing an image of a body part including the wound area, analyzing the image to extract a boundary of the wound area, performing color segmentation within the boundary, wherein the wound area is divided into a plurality of segments, each segment being associated with a color indicating a healing condition of the segment and evaluating the wound area."; [0045] reciting "Component configured to compute a healing score. The Healing Score is an important element of communicating in a simple fashion the healing status of the patient's wound. The Healing Score is a weighted sum of factors, such as: wound area; weekly change in wound area; wound texture; relative size and shapes of the healing, inflamed and necrotic regions within the wound boundary, and possibly the skin color around the wound. The weighing factors are determined from expert clinical input.").

15. It would have been obvious to one of ordinary skill, before the effective filing date of the claimed invention, to have modified the method taught by Edlund in view of Juszczyk to incorporate the teachings of Pedersen: combining the three-dimensional wound reconstructions taught by Edlund in view of Juszczyk with the wound healing scores calculated from the wounds taught by Pedersen. Doing so would provide patients with diabetes and chronic foot ulcers an easy-to-use and affordable tool to monitor the healing of conditions such as foot ulcers, as stated by Pedersen ([0006] recited).

16. Regarding claim 2, Edlund in view of Juszczyk and Pedersen teaches the method of claim 1 (see the claim 1 rejection above), further comprising displaying, by the computing device on a display (Edlund; [0091] reciting "FIG. 15C shows an example of how the user interface may display the wound boundary calculated at step 20e of FIG. 7 to the user on the touchscreen 308. As shown in FIG. 15C, the calculated wound boundary may be displayed as an outline 1508 overlaid on the wound."), the colorized three-dimensional reconstruction of the wound (Juszczyk; [Section VI] reciting "In return, a virtual 3D skin is displayed and 3D wound measures are assessed."; see Figures 8 and 11 above).

17. Regarding claim 5, Edlund in view of Juszczyk and Pedersen teaches the method of claim 1 (see the claim 1 rejection above), wherein the image is captured in red-green-blue (RGB) color space, and wherein the one or more colorations are applied using RGB values (Juszczyk; [Section III] reciting "The setup allows the following data to be acquired: color photo…Each record also includes manual wound delineations prepared by a physician in 2D performed on the color image and in 3D performed on the depth point cloud."; [Figure 8] reciting "An exemplary mesh result of the (a) reconstructed wound surface (red), (b) reconstructed virtual skin surface (blue), (c) reconstructed wound volume (gray) based on the depth data. The green wireframe represents expert delineated wound region in 3D.").
18. Regarding claim 7, Edlund in view of Juszczyk and Pedersen teaches the method of claim 1 (see the claim 1 rejection above), further comprising: modifying, by the computing device, a resolution of the image so the resolution matches an input resolution of the machine-learned model for wound identification; or modifying, by the computing device, an aspect ratio of the image so the aspect ratio matches an input aspect ratio of the machine-learned model for wound identification (Edlund; [0080] reciting "For example, the scaled image may be a lower resolution version of the digital image. The scaled image should be of sufficient resolution for the trained machine learning model to distinguish between regions on the scaled image containing a wound and regions on the scaled image not containing a wound.").

19. Regarding claim 13, Edlund in view of Juszczyk and Pedersen teaches the method of claim 1 (see the claim 1 rejection above), further comprising generating, by the computing device, a mesh based on the three-dimensional reconstruction, wherein the mesh represents a surface profile of the wound (Juszczyk; [Figure 7] reciting "A wound volume mesh reconstruction steps: (a) - 2D ROI, (b) - point selection, (c) - normal vectors computation, (d) - Poisson Surface Reconstruction, (e) - wound (red) and skin (blue) mesh selection, (f) - wound volume reconstruction."; [Figure 8] reciting "An exemplary mesh result of the (a) reconstructed wound surface (red), (b) reconstructed virtual skin surface (blue), (c) reconstructed wound volume (gray) based on the depth data.").

20. Regarding claim 18, Edlund in view of Juszczyk and Pedersen teaches the method of claim 1 (see the claim 1 rejection above), further comprising: providing, by the computing device, the three-dimensional reconstruction of the wound to a user; or storing, by the computing device, the three-dimensional reconstruction of the wound within a memory such that the three-dimensional reconstruction of the wound is later accessible for analysis (Edlund; [0067] reciting "For example, the camera data module 510 may store images captured from the image capture device 202, depth maps captured from the image capture device 202, intrinsic data such as image metadata from the image capture device 202, and wound masks and predictions from the machine learning processor 514.").

21. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Edlund et al. (US 20240249406 A1) in view of Juszczyk and Pedersen et al. (US 20170076446 A1) as applied to claim 1 above, and further in view of Nikitidis et al. (US 20230252662 A1).

22. Regarding claim 3, Edlund in view of Juszczyk and Pedersen teaches the method of claim 1 (see the claim 1 rejection above), wherein the image was captured by a camera of a mobile computing device ([0057] reciting "As also previously mentioned, the image capture device 202 may include a three-dimensional camera connected to the mobile device 110, which may also be used to capture wound images that may be used by the wound imaging and diagnostic application to automatically determine one or more wound dimensions and upload the dimensions to the proper data fields in the wound imaging application."), and wherein the depth map was captured by a … of a mobile computing device.
23. Edlund in view of Juszczyk and Pedersen does not explicitly teach wherein the depth map was captured by a depth sensor of a mobile computing device.

24. Nikitidis teaches wherein the depth map was captured by a depth sensor of a mobile computing device ([0198] reciting "Users of mobile devices 102 which have depth sensors may be requested, by the mobile app, to enable the depth sensors to be used with the mobile application and to provide 2D images and depth maps to the application.").

25. It would have been obvious to one of ordinary skill, before the effective filing date of the claimed invention, to have modified the method taught by Edlund in view of Juszczyk and Pedersen to incorporate the teachings of Nikitidis: providing a depth sensor within an image capture device ([0196] of Nikitidis) to capture and produce depth maps, instead of only using the same image capture device that is used for capturing images as taught by Edlund in view of Juszczyk and Pedersen. Doing so would give the ability to store the captured data in a training database to train a network at a later time, as stated by Nikitidis.

26. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Edlund et al. (US 20240249406 A1) in view of Juszczyk, Pedersen et al. (US 20170076446 A1), and Nikitidis et al. (US 20230252662 A1) as applied to claims 1 and 3 above, and further in view of Lee et al. (US 20220148259 A1).

27. Regarding claim 4, Edlund in view of Juszczyk, Pedersen, and Nikitidis teaches the method of claim 3 (see the claims 1 and 3 rejections above), wherein the camera of the mobile computing device comprises a front-facing camera of the mobile computing device (Edlund; [0090] reciting "For example, the image capture device 202 may be the front-facing camera assembly of a smartphone."), and wherein the depth sensor (Nikitidis; see the claim 3 rejection for "depth sensor") of the mobile computing device comprises: one or more infrared emitters configured to project an array of infrared signals into a surrounding environment (Edlund; [0059] reciting "…and the second camera module 318 may be an infrared camera suitable for detecting and converting infrared light into an electronic signal. The light emitting module 320 may be configured to emit multiple light rays, such as multiple rays of infrared light, towards an object to be detected."); an array of infrared-sensitive pixels configured to detect reflections of the array of infrared signals from objects in the surrounding environment (Edlund; [0059] reciting "Thus, when the multiple light rays are emitted onto the object to be detected, the distance from the light emitting module 320 and various points on the object to be detected may be calculated by measuring the spacing between the points of light from the multiple light rays projected onto the surface of the object to be detected"); and a … configured to determine depth based on the infrared signals detected by the array of infrared-sensitive pixels (Edlund; [0059] reciting "…and a three-dimensional depth map of the surface of the object to be detected may be generated").

28. Edlund in view of Juszczyk, Pedersen, and Nikitidis does not explicitly teach a controller configured to determine depth based on the infrared signals detected by the array of infrared-sensitive pixels.
29. Lee teaches a controller configured to determine depth based on the infrared signals detected by the array of infrared-sensitive pixels ([0216] reciting "in order to obtain depth information of at least one object (subject) included in the tele 2D image, the controller 240 activates the infrared camera 230, as shown in FIG. 12, turns on only some infrared light emitting devices corresponding to pixel areas of the tele 2D image among all the infrared light emitting devices in the light emitting unit 220 and turns off the rest of the infrared light emitting devices.").

30. It would have been obvious to one of ordinary skill, before the effective filing date of the claimed invention, to have modified the method taught by Edlund in view of Juszczyk, Pedersen, and Nikitidis to incorporate the teachings of Lee: providing a controller to activate any infrared scanners from the teachings of Edlund in view of Juszczyk, Pedersen, and Nikitidis. Doing so may increase the pulse width for the light quantity of some infrared light emitting devices and adjust the strength of the pulse width to become weaker than a preset reference strength, as stated by Lee ([0219] recited).

31. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Edlund et al. (US 20240249406 A1) in view of Juszczyk and Pedersen et al. (US 20170076446 A1) as applied to claim 1 above, and further in view of Asama et al. (US 20200074691 A1).

32. Regarding claim 6, Edlund in view of Juszczyk and Pedersen teaches the method of claim 1 (see the claim 1 rejection above), but does not explicitly teach cropping, by the computing device, the image or the depth map such that the image and the depth map have the same dimensions; or downscaling, by the computing device, the image or the depth map such that the image and the depth map have the same resolution.

33. Asama teaches cropping, by the computing device, the image or the depth map such that the image and the depth map have the same dimensions; or downscaling, by the computing device, the image or the depth map such that the image and the depth map have the same resolution ([0050] reciting "At block 506, the images are downscaled, via a trainable vision scaler (TVS), in response to detecting that a resolution of the received image exceeds a predetermined threshold resolution"; [0052] reciting "At block 510, the processed image information is upsampled, via a micro upscaler, and the upsampled processed image information is output. Thus, if the images were downscaled at block 504, then the processed image information may be upsampled to generate images, depth maps, or feature maps with the same resolution as the processed image information of block 508.").

34. It would have been obvious to one of ordinary skill, before the effective filing date of the claimed invention, to have modified the method taught by Edlund in view of Juszczyk and Pedersen to incorporate the teachings of Asama: providing a way to downscale the image and depth map from Edlund in view of Juszczyk and Pedersen to the same resolution, as opposed to Edlund's fixed scaling (Edlund; [0080] reciting "For example, after the digital image is scaled at step 30a and padded at step 30b, the size of the padded image may be a square having dimensions of 320 pixels by 320 pixels. In some examples, the width of the scaled image and the length of the scaled image should be in multiples of 32 pixels. For example, the size of the scaled image may be 256 pixels by 256 pixels, 320 pixels by 320 pixels, or 512 pixels by 512 pixels."). Doing so would provide a way to recover missing color information, as stated by Asama ([0051] recited).
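Again for orientation only: the claim 6 operations (cropping or downscaling so the image and depth map share dimensions and resolution) amount to a few lines of array manipulation. A minimal sketch assuming NumPy arrays and nearest-neighbor sampling, chosen here because interpolating depth across a wound boundary would blend foreground and background depths; the function names are hypothetical:

```python
import numpy as np

def match_dimensions(image, depth):
    """Crop the image and depth map to their common overlap (same dimensions)."""
    h = min(image.shape[0], depth.shape[0])
    w = min(image.shape[1], depth.shape[1])
    return image[:h, :w], depth[:h, :w]

def downscale_nearest(array, target_hw):
    """Nearest-neighbor downscale to target (H, W), e.g. to match resolutions."""
    th, tw = target_hw
    rows = np.arange(th) * array.shape[0] // th   # source row per target row
    cols = np.arange(tw) * array.shape[1] // tw   # source col per target col
    return array[np.ix_(rows, cols)]
```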
35. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Edlund et al. (US 20240249406 A1) in view of Juszczyk and Pedersen et al. (US 20170076446 A1) as applied to claim 1 above, and further in view of Jones et al. (US 20140168212 A1).

36. Regarding claim 8, Edlund in view of Juszczyk and Pedersen teaches the method of claim 1 (see the claim 1 rejection above), but does not explicitly teach transforming, by the computing device, the image to negate optical aberrations resulting from one or more imaging optics of a camera used to capture the image.

37. Jones teaches transforming, by the computing device, the image to negate optical aberrations resulting from one or more imaging optics of a camera used to capture the image ([0046] reciting "Prior to identifying features 114 and/or obtaining 2D locations 216, the images may be pre-processed to remove distortion and/or optical aberrations caused by the lens and/or sensor of camera 102. As described above, features 114 may then be identified and tracked across the images using a number of feature-detection and/or optical-flow estimation techniques and/or user input from the user of the portable electronic device.").

38. It would have been obvious to one of ordinary skill, before the effective filing date of the claimed invention, to have modified the method taught by Edlund in view of Juszczyk and Pedersen to incorporate the teachings of Jones: providing a way to negate optical aberrations in the images taken and provided by Edlund in view of Juszczyk and Pedersen. Doing so would remove the distortion as well, as stated by Jones ([0046] recited).

39. Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Edlund et al. (US 20240249406 A1) in view of Juszczyk and Pedersen et al. (US 20170076446 A1) as applied to claim 1 above, and further in view of Kennett et al. (US 20200389672 A1).

40. Regarding claim 9, Edlund in view of Juszczyk and Pedersen teaches the method of claim 1 (see the claim 1 rejection above), further comprising: generating, by the computing device using a machine-learned model for segmentation, a wound mask for the image based on the identified region of the image that corresponds to the wound (Edlund; [0069] reciting "The machine learning processor 514 may detect wounds on the wound images, generate a wound mask corresponding with the detected wounds, and calculate predictions for the detected wounds. For example, the machine learning processor 514 may include a segmentation model for detecting a wound boundary and/or a wound mask from an input image, and an object detection model for detecting wound objects."); and …, by the computing device, the wound mask (Edlund; [0079] reciting "At step 20e, after the wound mask has been generated, a wound boundary can be calculated based on the wound mask. For example, the wound boundary may be where the wound mask transitions from regions where the value of the pixels of the binary mask is 1 to regions where the value of the pixels is 0. At step 20f, a wound map may be generated by removing portions of the depth map outside of the wound boundary. In some examples, the wound map may be generated by removing portions of the depth map corresponding to regions of the wound mask where the value of the pixels is 0.").
41. Although Edlund in view of Juszczyk and Pedersen may teach denoising, by the computing device, the wound mask (Edlund; [0079], quoted above), the limitation is further taught by Kennett.

42. Kennett teaches denoising, by the computing device, the wound mask ([0049] reciting "As shown in FIG. 3A, the denoising system 202 may apply a denoising model to the compressed video frame 302 based information from the segmentation mask 306 to generate a repaired video frame 310 in which one or more compression artifacts have been removed from the decompressed video frame 302.").

43. It would have been obvious to one of ordinary skill, before the effective filing date of the claimed invention, to have modified the method taught by Edlund in view of Juszczyk and Pedersen to incorporate the teachings of Kennett: providing a method that gives a clearer indication of denoising a type of mask, where the mask can be the wound mask taught by Edlund in view of Juszczyk and Pedersen. Doing so would incorporate a denoising system that can be trained to remove compression artifacts in a variety of ways, as stated by Kennett ([0035] recited).

44. Regarding claim 10, Edlund in view of Juszczyk, Pedersen, and Kennett teaches the method of claim 9 (see the claims 1 and 9 rejections above), further comprising applying, by the computing device, the wound mask to the depth map in order to isolate portions of the depth map related to the wound (Edlund; [0117] reciting "By removing the outlier pixels, the depth prediction system 106 can ascertain and implement an accurate depth range to utilize across the multiple sources as well as standardize depth data from the same source.").
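A side note on what claims 9-10 describe operationally: clean the binary wound mask, then use it to isolate wound depths. The record maps the denoising step to Kennett's learned model; the sketch below substitutes a plain morphological opening as a stand-in (an assumption, not the method of record) so the two steps are concrete:

```python
import numpy as np
from scipy import ndimage  # morphology stands in for Kennett's learned denoiser

def denoise_and_apply_mask(wound_mask, depth):
    """Remove small segmentation artifacts, then isolate wound depths.

    Binary opening deletes isolated false-positive pixels that
    segmentation models commonly leave behind; everything outside the
    cleaned mask is set to NaN so later steps see only wound depths.
    """
    cleaned = ndimage.binary_opening(wound_mask, iterations=2)
    wound_depth = np.where(cleaned, depth, np.nan)
    return cleaned, wound_depth
```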
45. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Edlund et al. (US 20240249406 A1) in view of Juszczyk, Pedersen et al. (US 20170076446 A1), and Kennett et al. (US 20200389672 A1) as applied to claims 1 and 9-10 above, and further in view of Wang et al. (US 20230260211 A1) and Liu et al. (US 20220130018 A1).

46. Regarding claim 11, Edlund in view of Juszczyk, Pedersen, and Kennett teaches the method of claim 10 (see the claims 1 and 9-10 rejections above), further comprising denoising (Kennett; see the claim 9 rejection regarding "denoising"), by the computing device, the portions of the depth map related to the wound, wherein denoising the portions of the depth map related to the wound comprises eliminating or adjusting depth values (Edlund; [0079], quoted above).

47. Edlund in view of Juszczyk, Pedersen, and Kennett does not explicitly teach eliminating or adjusting depth values that are outside of a range of depths from a lower threshold depth to an upper threshold depth.

48. Wang teaches denoising, by the computing device, the portions of the depth map related to the wound, wherein denoising the portions of the depth map ([0027] reciting "during the filtering process, denoising process is performed in the two-dimensional space to realize image denoising and obtain the target depth map.") related to the wound comprises eliminating or adjusting depth values that fall outside preset thresholds ([0038] reciting "By comparing the gradient values of each pixel in the 2D depth map in the first direction and the second direction with the preset thresholds respectively, each pixel whose gradient values in the first direction and the second direction in the 2D depth map are greater than preset thresholds, is deleted, thereby realizing filtering of pixels of the 2D depth map, i.e., realizing the denoising of the 2D depth image and obtaining the target depth image.").

49. It would have been obvious to one of ordinary skill, before the effective filing date of the claimed invention, to have modified the method taught by Edlund in view of Juszczyk, Pedersen, and Kennett to incorporate the teachings of Wang: limiting the denoising to a certain threshold, as the denoising methods taught by Edlund in view of Juszczyk, Pedersen, and Kennett do not provide a threshold value. Doing so would improve the accuracy of filtering pixels or values, as stated by Wang ([0039] recited).

50. Edlund in view of Juszczyk, Pedersen, Kennett, and Wang does not explicitly teach eliminating or adjusting depth values that are outside of a range of depths from a lower threshold depth to an upper threshold depth.

51. Liu teaches eliminating or adjusting depth values that are outside of a range of depths from a lower threshold depth to an upper threshold depth ([0043] reciting "Specifically, the first predetermined difference range should have a lower threshold of difference and an upper threshold of difference, wherein the lower threshold of difference cannot be too large, for example, the typical value of which is 1 (assuming that the pixel value is represented as an 8-bit depth value of 0-255)").

52. It would have been obvious to one of ordinary skill, before the effective filing date of the claimed invention, to have modified the method taught by Edlund, Juszczyk, Pedersen, Kennett, and Wang to incorporate the teachings of Liu: providing an upper and lower threshold value for depth from specifically Edlund and using the threshold teachings of specifically Wang. Doing so would avoid difficulty in detecting false contours, as stated by Liu ([0043] recited).
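Operationally, the claim 11 limitation is a range clamp on the isolated depth values. A minimal sketch with illustrative thresholds; the names and limits are hypothetical, not taken from the cited art:

```python
import numpy as np

def clamp_depth_range(wound_depth, z_min=0.10, z_max=0.60):
    """Drop depth values outside [z_min, z_max] meters as sensor noise.

    Values below the lower threshold or above the upper threshold are
    eliminated (set to NaN); valid readings pass through unchanged.
    """
    out_of_range = (wound_depth < z_min) | (wound_depth > z_max)
    return np.where(out_of_range, np.nan, wound_depth)
```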
53. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Edlund et al. (US 20240249406 A1) in view of Juszczyk, Pedersen et al. (US 20170076446 A1), and Kennett et al. (US 20200389672 A1) as applied to claims 1 and 9-10 above, and further in view of Ramachandra et al. (US 20150371393 A1) and Luo et al. (US 20230113902 A1).

54. Regarding claim 12, Edlund in view of Juszczyk, Pedersen, and Kennett teaches the method of claim 10 (see the claims 1 and 9-10 rejections above), and although Edlund in view of Juszczyk, Pedersen, and Kennett may teach identifying, by the computing device, one or more missing depth values within the portions of the depth map related to the wound, and replacing, by the computing device, the missing depth values using nearest-neighbor interpolation or inpainting (Kennett; [0013] reciting "The video enhancement system may then apply a denoising model to the decoded video frame to remove one or more compression artifacts introduced to the digital content during a compression and decompression process. Once the decoded video frame is denoised, the video enhancement system may further refine the decoded video frame by interpolating pixels, up-sampling, or otherwise increasing the pixel resolution for the repaired video frame prior to displaying an output video frame via a graphical user interface of a display device."), Ramachandra and Luo further teach these limitations.

55. Ramachandra teaches identifying, by the computing device, one or more missing depth values within the portions of the depth map related to the wound ([0049] reciting "In the modified depth map image 240, missing depth values identified based on the pixel by pixel comparison may be populated and included in the depth map. For example, depth information corresponding to the depth map image 210 and/or the intermediate depth map image 230 may be modified to include the missing depth values. To populate a particular missing depth value corresponding to a particular pixel of the depth map").

56. It would have been obvious to one of ordinary skill, before the effective filing date of the claimed invention, to have modified the method taught by Edlund in view of Juszczyk, Pedersen, and Kennett to incorporate the teachings of Ramachandra: providing a clearer way to identify whether any depth values are missing from the wound map of Edlund in view of Juszczyk, Pedersen, and Kennett. Doing so would enable improved processing, as stated by Ramachandra ([0020] recited).

57. Luo further teaches the limitation "replacing, by the computing device, the missing depth values using nearest-neighbor interpolation or inpainting" ([0062] reciting "sorting the original depth values of all non-edge pixels in a connected region of the pixel that needs to be replaced in descending order, and using a median of a sort result in the descending order as the inpainted depth value of the pixel that needs to be replaced").

58. It would have been obvious to one of ordinary skill, before the effective filing date of the claimed invention, to have modified the method taught by Edlund in view of Juszczyk, Pedersen, Kennett, and Ramachandra to incorporate the teachings of Luo: providing a clearer inpainting method for replacing the missing depth values or pixels, from the teachings of specifically Edlund, that were identified by the teachings of specifically Ramachandra. Doing so would provide a greater depth characterizing a farther visual distance of a pixel, as stated by Luo ([0071] recited).
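The claim 12 limitation, replacing missing depth values by nearest-neighbor interpolation or inpainting, has a well-known implementation via a distance transform. A minimal sketch assuming SciPy and NaN-coded gaps; the function name is hypothetical:

```python
import numpy as np
from scipy import ndimage

def inpaint_missing_depths(wound_depth):
    """Fill NaN depth values with the nearest valid depth value.

    The Euclidean distance transform over the missing-value mask returns,
    for every pixel, the coordinates of the closest non-missing pixel;
    indexing with those coordinates is nearest-neighbor inpainting.
    """
    missing = np.isnan(wound_depth)
    nearest = ndimage.distance_transform_edt(
        missing, return_distances=False, return_indices=True)
    return wound_depth[tuple(nearest)]
```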
59. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Edlund et al. (US 20240249406 A1) in view of Juszczyk and Pedersen et al. (US 20170076446 A1) as applied to claims 1 and 13 above, and further in view of Loss et al. (US 20150213572 A1) and Zaides et al. (US 20230281891 A1).

60. Regarding claim 14, Edlund in view of Juszczyk and Pedersen teaches the method of claim 13 (see the claims 1 and 13 rejections above), further comprising determining, by the computing device, a surface area of the wound, wherein determining the surface area of the wound comprises the following (Juszczyk; [Section V.B] reciting "Comparing the numerical methods of estimating the wound surface area with the wound surface outlined by an expert in the depth point cloud"):

61. Edlund in view of Juszczyk and Pedersen does not explicitly teach computing, by the computing device for each face of the mesh, a cross-product of two component vectors of the face; and summing, by the computing device, magnitudes of the computed cross-products.

62. Loss teaches computing, by the computing device for each face of the mesh, a cross-product of two component vectors of the face ([0057] reciting "The walls (e.g., walls of structures (e.g., buildings)) may be removed, by the texturized 3D segment model module 78, based on computation of the normal direction of the 3D triangles of the meshes. In this regard, the texturized 3D segment model module 78 may analyze and compute the cross-product between two vectors formed by connecting one of the triangle vertices to the other two triangle vertices of the meshes.").

63. It would have been obvious to one of ordinary skill, before the effective filing date of the claimed invention, to have modified the method taught by Edlund in view of Juszczyk and Pedersen to incorporate the teachings of Loss: providing a calculation of a cross-product for each face of the mesh described by specifically Juszczyk, supplying the mesh details missing from specifically Edlund. Doing so would allow the meshes to have different methodologies without departing from the scope of the invention, as stated by Loss ([0057] recited).

64. Edlund in view of Juszczyk, Pedersen, and Loss does not explicitly teach summing, by the computing device, magnitudes of the computed cross-products.
65. Zaides teaches summing, by the computing device, magnitudes of the computed cross-products ([0048] reciting "Sum the cross-product outputs and normalize: a proxy of a normal plane through the origins.").

66. It would have been obvious to one of ordinary skill, before the effective filing date of the claimed invention, to have modified the method taught by Edlund, Juszczyk, Pedersen, and Loss to incorporate the teachings of Zaides: summing the cross-product values calculated per specifically Loss over the mesh from specifically Juszczyk. Doing so would filter out outlier points, as stated by Zaides ([0043] recited).
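For orientation, the claim 14 computation that paragraphs 60-66 assemble from Loss and Zaides is the standard triangle-mesh surface area. A minimal sketch assuming a NumPy vertex/face representation; note the claim recites summing the cross-product magnitudes, while each triangle's area is half of its cross-product magnitude, so a factor of 1/2 appears below:

```python
import numpy as np

def mesh_surface_area(vertices, faces):
    """Surface area of a triangle mesh.

    vertices: (V, 3) float array of 3D points
    faces:    (F, 3) int array of vertex indices per triangle

    For each face, take the cross-product of two edge vectors; its
    magnitude is the parallelogram area, so each triangle contributes
    half of it to the total surface area.
    """
    tri = vertices[faces]                 # (F, 3, 3) corner coordinates
    e1 = tri[:, 1] - tri[:, 0]            # first edge vector per face
    e2 = tri[:, 2] - tri[:, 0]            # second edge vector per face
    cross = np.cross(e1, e2)              # (F, 3) face normals (unnormalized)
    return 0.5 * np.linalg.norm(cross, axis=1).sum()
```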
67. Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Edlund et al. (US 20240249406 A1) in view of Nikitidis et al. (US 20230252662 A1), Kennett et al. (US 20200389672 A1), and Juszczyk.

68. Regarding claim 20, Edlund teaches a device comprising ([Abstract] reciting "A system for measuring a wound site including an image capture device, a touchscreen, a vibration motor, and a processor."): a camera configured to capture an image ([Abstract], quoted above); a … configured to capture a depth map ([0057] reciting "Wound images captured by the image capture device 302 may be used by the wound imaging and diagnostic application to determine a depth map associated with each captured wound image…"; [0079] reciting "…may apply a trained machine learning model, such as a wound detection model, in order to find all regions within the digital image where possible wounds are located."); and a computing device configured to: receive the image, wherein the image includes a wound ([0055] reciting "Such image-processing vendors may offer one or more additional software packages that may be used for processing specific aspects of captured wound images."); receive the depth map, wherein the depth map includes the wound ([0057] and [0079], quoted above); identify, by applying a machine-learned model for wound identification, a region of the image that corresponds to the wound ([0057], quoted above; [0080] reciting "For example, the scaled image may be a lower resolution version of the digital image. The scaled image should be of sufficient resolution for the trained machine learning model to distinguish between regions on the scaled image containing a wound and regions on the scaled image not containing a wound."); align the image with the depth map ([0016] reciting "The executable instructions may configure a processor to receive a digital image, receive a depth map associated with the digital image, determine a bounding box of the wound site by processing the digital image with a first trained neural network, determine a wound mask by processing the digital image with a second trained neural network, determine a wound boundary from the wound mask, determine whether the wound boundary is aligned within a camera frame, and generate a wound map from the depth data, wound mask, and wound boundary."); generate, using a machine-learned model for segmentation, a wound mask for the image based on the identified region of the image that corresponds to the wound ([0069] reciting "The machine learning processor 514 may detect wounds on the wound images, generate a wound mask corresponding with the detected wounds, and calculate predictions for the detected wounds. For example, the machine learning processor 514 may include a segmentation model for detecting a wound boundary and/or a wound mask from an input image, and an object detection model for detecting wound objects."); and determine, based on the identified region of the image that corresponds to the wound, a region of the depth map that corresponds to the wound ([0057], quoted above).

69. Edlund does not explicitly teach a depth sensor configured to capture a depth map; denoise the wound mask to remove artifacts generated as a result of using the machine-learned model for segmentation to generate the wound mask; generate a three-dimensional reconstruction of the wound based on the region of the depth map that corresponds to the wound; and apply one or more colorations to the three-dimensional reconstruction of the wound based on one or more colorations in the identified region of the image that corresponds to the wound.

70. Nikitidis teaches a depth sensor configured to capture a depth map ([0198] reciting "Users of mobile devices 102 which have depth sensors may be requested, by the mobile app, to enable the depth sensors to be used with the mobile application and to provide 2D images and depth maps to the application.").

71. It would have been obvious to one of ordinary skill, before the effective filing date of the claimed invention, to have modified the device taught by Edlund to incorporate the teachings of Nikitidis: providing a depth sensor within an image capture device ([0196] of Nikitidis) to capture and produce depth maps, instead of only using the same image capture device that is used for capturing images as taught by Edlund. Doing so would give the ability to store the captured data in a training database to train a network at a later time, as stated by Nikitidis.
72. Edlund in view of Nikitidis does not explicitly teach to denoise the wound mask to remove artifacts generated as a result of using the machine-learned model for segmentation to generate the wound mask; generate a three-dimensional reconstruction of the wound based on the region of the depth map that corresponds to the wound; and apply one or more colorations to the three-dimensional reconstruction of the wound based on one or more colorations in the identified region of the image that corresponds to the wound.

73. Kennett teaches to denoise the wound mask to remove artifacts generated as a result of using the machine-learned model for segmentation to generate the wound mask ([0049] reciting "As shown in FIG. 3A, the denoising system 202 may apply a denoising model to the compressed video frame 302 based information from the segmentation mask 306 to generate a repaired video frame 310 in which one or more compression artifacts have been removed from the decompressed video frame 302.").

74. It would have been obvious to one of ordinary skill, before the effective filing date of the claimed invention, to have modified the device taught by Edlund in view of Nikitidis to incorporate the teachings of Kennett: providing a method that gives an indication of denoising a type of mask to remove specific artifacts, where the mask can be the wound mask described by Edlund in view of Nikitidis. Doing so would incorporate a denoising system that can be trained to remove compression artifacts in a variety of ways, as stated by Kennett ([0035] recited).

75. Edlund in view of Nikitidis and Kennett does not explicitly teach to generate a three-dimensional reconstruction of the wound based on the region of the depth map that corresponds to the wound; and apply one or more colorations to the three-dimensional reconstruction of the wound based on one or more colorations in the identified region of the image that corresponds to the wound.

76. Juszczyk teaches to generate a three-dimensional reconstruction of the wound based on the region of the depth map that corresponds to the wound (see Figures 8 and 11); and apply one or more colorations to the three-dimensional reconstruction of the wound based on one or more colorations in the identified region of the image that corresponds to the wound (see Figures 8 and 11; [Figure 11] reciting "An exemplary result of wound reconstruction, Light red wireframe represents SPR reconstruction, dark red – MC reconstruction, and green – expert.").

[Figures 8 and 11 of Juszczyk, reproduced in the original action, are omitted here.]

77. It would have been obvious to one of ordinary skill, before the effective filing date of the claimed invention, to have modified the device taught by Edlund in view of Nikitidis and Kennett to incorporate the teachings of Juszczyk: providing a colorized three-dimensional reconstruction of the region of the wound based on the depth map, instead of a three-dimensional depth map itself as taught by specifically Edlund ([0059] recited), as well as displaying the reconstruction based on the type of body part. Doing so would provide methods featuring higher accuracy, as stated by Juszczyk ([Section VII] recited).

Prosecution Timeline

May 04, 2023
Application Filed
May 09, 2025
Non-Final Rejection — §103
Sep 11, 2025
Interview Requested
Sep 17, 2025
Examiner Interview Summary
Sep 17, 2025
Applicant Interview (Telephonic)
Sep 19, 2025
Response Filed
Nov 14, 2025
Final Rejection — §103 (current)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview: 0% (-66.7%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
