DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Claims 1, 8, 13, and 15 have been amended.
Claims 1-20 remain pending and are considered below.
The objection to the drawings has been withdrawn.
Response to Arguments
Applicant's arguments filed on December 12, 2025 have been fully considered but they are not persuasive.
On page 2 of the “Remarks,” applicant asserts: “Prodanovic does not disclose or suggest the previously claimed subject matter of displaying pose templates on the display of the mobile device. As such, Prodanovic does not teach the previously submitted claims.
To advance prosecution, exemplary claim 1 has been amended to clarify the distinctions with the cited art by reciting subject matter directed to the pose template comprising body positions displayed by the mobile device. As described above, Prodanovic does not disclose displaying a template comprising body positions. Prodanovic, in combination with Pederson and DiMaio does not teach the instant claims”
Response: Examiner respectfully disagrees with applicant’s argument. Under the broadest reasonable interpretation (BRI) consistent with the specification, a pose template is a visual representation of a body orientation used to guide the capture of images. The claim does not require a specific graphical format, avatar overlay, or particular UI structure. Rather, “a pose template” reasonably encompasses any displayed guidance that assists a user in positioning a patient or camera to obtain standardized images. Prodanovic explicitly discloses such displayed body positions. In para [0225], Prodanovic discloses “A body position component 410 provides a human body in a characteristic position to allow easy identification and location of particular anatomical regions”. This paragraph describes a displayed representation of the human body in a specific orientation, which serves as a reference for image capture and mapping. Such a displayed orientation constitutes a pose template comprising a body position, as now recited in amended claim 1. Prodanovic further discloses in para [0214], “Additionally, there is predefined set of different camera positions for the whole body (A, P, R, L, 45.degree. R, 45.degree. L)”. These predefined positions correspond to anterior, posterior, left, right, and angled body orientations, which are exactly the type of body positions used as pose templates for capturing images. Additionally, Prodanovic discloses in para [0211], “Via UI controls (i.e. buttons) on the form for: rotate left/right, up/down, move (left, right, up, down), zoom in/out”. The user interacts with the displayed body position component, confirming that the body positions are presented on a display and manipulated through the user interface. Furthermore, in mobile medical imaging and clinical photography, standardized capture protocols are commonly implemented using on-screen positioning instructions, capture-angle guides, or anatomical view references to ensure reproducibility of patient images.
Thus, the amendment does not distinguish over Prodanovic.
Pedersen teaches the mobile imaging system and automated recognition. In para [0132], Pedersen discloses “a mobile communication system 280 includes a processor 250 and one or more memories 260”. Pedersen further discloses in paras [0035]-[0037] “boundary detection … color segmentation… and a trained classifier for identifying wound regions”. Applicant’s argument is not persuasive because Prodanovic discloses displaying body positions that function as pose templates, and Pedersen teaches the mobile imaging and recognition components.
Therefore, amended claim 1 remains obvious over Pedersen et al. in view of Prodanovic and further in view of DiMaio.
The dependent claims stand or fall with independent claim 1 and remain rejected for the same reasons discussed above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Pedersen et al. (US 20150119721 A1) in view of Prodanovic (US 20120197657 A1), and further in view of DiMaio et al. (US 20140316235 A1), hereinafter “Di”.
Regarding claim 1, Pedersen et al. teaches the system comprising: a mobile device comprising at least one processor and a display (see para [0128]; “view his/her wound on the LCD display of the smartphone camera”, see also para [0132]; “a mobile communication system 280 includes a processor 250 and one or more memories 260. In the embodiment shown in FIG. 14, a camera 265, where the camera as an objective lens 267, can also supply the physiological indicators signal to the mobile communication device 280”); at least one camera communicatively coupled to the mobile device (see para [0132]; “a camera 265, where the camera as an objective lens 267, can also supply the physiological indicators signal to the mobile communication device 280”); and one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the at least one processor (see para [0147]; “A tangible machine readable medium can be used to store software and data that, when executed by a computing device, causes the computing device to perform a method(s) as may be recited”), receiving inputs by a user to obtain the plurality of images (see para [0136]; “one or more processors 155 are operatively connected to an input component 160, which could receive the images transmitted by the handheld portable electronic/communication device”), recognizing burns and burn locations on the patient in the plurality of images (see para [0137]; “the one or more processors 155 to receive the image from the handheld portable electronic device, extract a boundary of the wound area, perform color segmentation within the boundary of the wound area, wherein the wound area is divided into a plurality of segments, each segment being associated with a color indicating a healing condition of the segment and evaluate the wound area”). However, Pedersen et al. 
does not teach a system for creating a burn chart charting burns of a patient, perform a method of creating the burn chart, the method comprising: displaying one or more pose templates on the display of the mobile device, wherein the one or more pose templates comprise one or more body positions to assist in obtaining one or more patient poses in a plurality of images, obtaining the plurality of images by the at least one camera when each pose template is displayed on the display.
In the same field of endeavor, Prodanovic teaches performing a method of creating the burn chart (see para [0201]; “Visual representation of the dermatologic problems (allowing medical providers to see multiple problems at the same time)”, see also para [0220]; “If multiple problems and/or orders are present on the same location, then the system is able to display all of them”. Note: body map chart), the method comprising: displaying one or more pose templates on the display of the mobile device, wherein the one or more pose templates comprise one or more body positions to assist in obtaining one or more patient poses in a plurality of images (see para [0225]; “A body position component 410 provides a human body in a characteristic position to allow easy identification and location of particular anatomical region”, para [0214]; “there is predefined set of different camera positions for the whole body (A, P, R, L, 45.degree. R, 45.degree. L)”, see also para [0211]; “Via UI controls (i.e. buttons) on the form for: rotate left/right, up/down, move (left, right, up, down), zoom in/out”. Note: these displayed body positions used during capture correspond to pose templates); obtaining the plurality of images by the at least one camera when each pose template is displayed on the display (see para [0214]; “there is predefined set of different camera positions for the whole body (A, P, R, L, 45.degree. R, 45.degree. L). It can also be possible to focus at a specific body part (e.g. feet, ears, hands, etc.)”, see also para [0225]; “A body position component 410 provides a human body in a characteristic position to allow easy identification and location of particular anatomical regions”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the wound assessment method of Pedersen et al., which provides a convenient, quantitative mechanism for diabetic foot ulcer assessment, with the displayed body positions and predefined camera positions of Prodanovic in order to allow easy identification and location of particular anatomical regions and visual representation of multiple dermatologic problems at the same time (see paras [0201] and [0225]).
However, the combination of Pedersen et al. and Prodanovic does not teach a system for creating a burn chart charting burns of a patient and combining the plurality of images to create a burn chart.
Di teaches a system for creating a burn chart charting burns of a patient and combining the plurality of images to create a burn chart (see para [0154]; “FIG. 4 illustrates another example of how some devices described herein can calculate % TBSA burned. This figure shows a mosaic technique, wherein several pictures are added together to calculate a % TBSA burned”, see also para [0157]; “mosaic portions 211 and 212 may be some of the images used to estimate the surface area afflicted with tissue condition such as a burn”, and para [0161]; “Both the Rule of Nines and the Lund-Browder Chart are just example estimations that can be used for calculating total body surface area (TBSA)”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the wound assessment method of Pedersen et al., as combined with the displayed body positions of Prodanovic, with Di’s techniques for non-invasive optical imaging assessing the presence and severity of tissue conditions such as burns and other wounds in order to allow the un-imaged leg surface to be estimated (see para [0154]).
Regarding claim 2, the rejection of claim 1 is incorporated herein.
Di in the combination further teaches wherein the burn chart is an enhanced Lund and Browder chart, and wherein the method further comprises determining a percentage of a total body of the patient that is burned (see para [0159]; “There are other formulas that can be used to estimate the surface area of the various parts of a subject. For example FIG. 5 shows the Rule of Nines and Lund-Browder Charts. For example, illustration 500 shows the Rule of Nines, wherein the head and neck, and arm are each estimated to be 9% of total body surface area. For example, the total surface area of arm 501 can be estimated to be 9% of the total body surface area of the illustrated person under the Rule of Nines”).
Regarding claim 3, the rejection of claim 2 is incorporated herein.
Prodanovic in the combination further teaches labeling the enhanced Lund and Browder chart with the burn score, patient information, and a treatment regimen (see para [0220]; “The system is able to display Physical Exam (PEx) Findings (i.e., descriptive elaboration of the patient skin lesions), Problem, and Biopsy and Treatment sites on the Body mapping component 152”).
Di in the combination further teaches wherein the method further comprises: determining a burn score for the patient based on the percentage (see para [0163]; “For example, in burns, fatality rates increase with increasing % TBSA burned”).
Regarding claim 4, the rejection of claim 1 is incorporated herein.
Di in the combination further teaches wherein the method further comprises: generating a multispectral image; and determining burn severity based on the multispectral image (see para [0011]; “Multispectral Imaging (MSI), measures the reflectance of select wavelengths of visible and near-infrared light from the surface of a burn… These light-tissue interactions produce unique reflectance signatures captured by MSI that can be used to classify burn severity”).
Regarding claim 5, the rejection of claim 1 is incorporated herein.
Pedersen in the combination further teaches wherein automatically recognizing the burn locations on the patient comprises utilizing one or more machine learning algorithms to: recognize skin of the patient; and recognize the burn locations on the skin of the patient (see para [0035]; “A more accurate method may be used for wound boundary detection based on skills and insight by experienced wound clinicians. For this purpose, machine learning methods, such as the Support Vector Machine, may be used to train the wound analysis system to learn about the essential features about the wound. [0036] (iii) Component configured for color image segmentation. The color segmentation method is instrumental in determining the healing state of the wound where red indicates healing, yellow indicates inflamed, and black indicates necrotic”).
Regarding claim 6, the rejection of claim 5 is incorporated herein.
Pedersen in the combination further teaches wherein the one or more machine learning algorithms are further configured to perform: recognizing a background in the plurality of images; recognizing distractors in the plurality of images; and classifying the distractors in the plurality of images (see para [0078]; “In object recognition field, three major tasks needed to be solved to achieve the best recognition performance: 1) find the best representation to distinguish the object and background, 2) find the most efficient object search method and 3) design the most effective machine learning based classifier to determine whether a representation belongs to the object category or not”).
Regarding claim 7, the rejection of claim 1 is incorporated herein.
Prodanovic in the combination further teaches wherein the method further comprises: detecting depth by the at least one camera; generating a point cloud model of the patient (see para [0214]; “there is predefined set of different camera positions for the whole body (A, P, R, L, 45.degree. R, 45.degree. L). It can also be possible to focus at a specific body part (e.g. feet, ears, hands, etc.)”, see also para [0225]; “A body position component 410 provides a human body in a characteristic position to allow easy identification and location of particular anatomical regions”).
Di in the combination further teaches generating a three-dimensional model of the patient by fitting a mesh to the point cloud model of the patient (see para [0151]; “Once the three-dimensional body model is created, the classified tissue regions can be projected onto areas of the three-dimensional body model”).
Regarding claim 8, the scope of claim 8 is fully incorporated in claim 1, and the rejection of claim 1 is equally applicable here.
Regarding claim 9, the rejection of claim 8 is incorporated herein.
Pedersen et al. in the combination further teaches recognizing the burn locations (see para [0075]; “The above disclosed method mainly classifies the wound locations into three categories: 1) wound in the middle of the foot, 2) wound at the edge of the foot without toe-amputation and 3) wound at the edge of the foot with toe-amputation”), and classifying the burn locations and the severity of the burns by one or more machine learning algorithms trained on images of burns (see para [0076]; “analyzing the image includes using a trained classifier and, in the system of these teachings, the image analysis component is configured to use a trained classifier. [0077] A machine learning based solutions has been developed in which the wound boundary determination is an object recognition task since it is claimed that the machine learning (ML) is currently the only known way to develop computer vision systems that are robust and easily reusable in different environments. Herein below, the term "wound recognition" is used as the equivalent expression of "wound boundary determination", since both have the same goal”).
Di in the combination further teaches wherein the method further comprises recognizing severity of the burns (see Abstract; “Additionally, alternatives described herein are used with a variety of tissue classification applications, including assessing the presence and severity of tissue conditions, such as burns and other wounds”).
Regarding claim 10, the rejection of claim 8 is incorporated herein.
Prodanovic in the combination further teaches labeling the burn chart with the burn score, patient information, and a treatment regimen (see para [0220]; “The system is able to display Physical Exam (PEx) Findings (i.e., descriptive elaboration of the patient skin lesions), Problem, and Biopsy and Treatment sites on the Body mapping component 152”).
Di in the combination further teaches wherein the method further comprises: determining a percentage of area of the skin that is burned; determining a burn score for the patient based on the percentage (see para [0149]; “the % TBSA burned may be estimated by generating a first count that is the sum of all the pixels classified as burned in all the images, generating a second count that is the sum of all the pixels of the subject in all the images, and dividing the first count by the second count. For example, to calculate the % TBSA that is third degree burned, the system may count the pixels of regions 222, 230, and 236, and divide that total by the total number of pixels of all surfaces of the subject 250 by counting and adding the total pixels of the subject 250 found in each of images 212, 214, 216, and 218”).
Regarding claim 11, the rejection of claim 8 is incorporated herein.
Pedersen et al. in the combination further teaches wherein the display and the one or more cameras are integrated into the mobile device (see paras [0130] and [0132], disclosing a smartphone with a display and an integrated camera).
Regarding claim 12, the rejection of claim 8 is incorporated herein.
Di in the combination further teaches wherein the one or more cameras comprises a red-green-blue camera (see para [0547]; “FIG. 67B illustrates an RGB real image”), a multispectral camera, and a light detection and ranging camera (see para [0562]; “the multispectral images described herein can be captured, in some embodiments, by a fiber optic cable having both light emitters and a light detector at the same end of a probe. The light emitters can be capable of emitting around 1000 different wavelengths of light between 400 nm and 1100 nm to provide for a smooth range of illumination of the subject at different wavelengths”).
Regarding claim 13, the rejection of claim 8 is incorporated herein.
Di in the combination further teaches wherein the method further comprises displaying the burn chart as an anterior pose and a posterior pose (see Fig. 3 and Fig. 5, para [0159]; “Under the Rule of Nines, each leg and each of the anterior and posterior surfaces of the trunk are estimated to be 18% of the total body surface area”).
Regarding claim 14, the rejection of claim 8 is incorporated herein.
Di in the combination further teaches wherein the anterior pose and the posterior pose are displayed as two-dimension poses (see Fig. 3 and Fig. 5, disclosing two-dimensional poses).
Regarding claim 15, the scope of claim 15 is fully incorporated in claim 1, and the rejection of claim 1 is equally applicable here. Additionally,
Pedersen in the combination further teaches classifying the burn locations and the burn severity (see para [0076]; “analyzing the image includes using a trained classifier and, in the system of these teachings, the image analysis component is configured to use a trained classifier”, see also para [0075]; “method mainly classifies the wound locations into three categories: 1) wound in the middle of the foot, 2) wound at the edge of the foot without toe-amputation and 3) wound at the edge of the foot with toe-amputation”. Note: the wound analysis method disclosed includes categorizing wound positions relative to anatomical regions of the foot).
Prodanovic in the combination further teaches labeling the burn chart with the burn locations and the burn severity (see para [0220]; the Body mapping component 152 displays Physical Exam (PEx) findings and biopsy/treatment sites at specific anatomical locations using visual markers, and stores this information for patient records. This teaching of associating observed conditions with anatomical locations supports classifying and documenting lesion locations, which reasonably corresponds to the claimed classification and labeling of burn locations on a burn chart).
Di in the combination further teaches recognizing burn severity of the burns on the patient in the plurality of images (see Abstract; “with a variety of tissue classification applications, including assessing the presence and severity of tissue conditions, such as burns and other wounds”).
Regarding claim 16, the rejection of claim 15 is incorporated herein.
Prodanovic in the combination further teaches labeling the burn chart with patient information and a treatment regimen (see para [0220]; “The system is able to display Physical Exam (PEx) Findings (i.e., descriptive elaboration of the patient skin lesions), Problem, and Biopsy and Treatment sites on the Body mapping component 152”).
Regarding claim 17, the rejection of claim 15 is incorporated herein.
Di in the combination further teaches wherein the burn chart is an enhanced Lund and Browder chart (see para [0162]; “the age of the subject may be effectively used in an estimation of relative percentage of body surface area using a Lund-Browder Chart”), and the enhanced Lund and Browder chart is displayed as a posterior pose and an anterior pose (see Fig. 3 and Fig. 5, para [0159]; “Under the Rule of Nines, each leg and each of the anterior and posterior surfaces of the trunk are estimated to be 18% of the total body surface area”).
Regarding claim 18, the rejection of claim 15 is incorporated herein.
Pedersen et al. in the combination further teaches recognizing the burn locations (see para [0075]; “The above disclosed method mainly classifies the wound locations into three categories: 1) wound in the middle of the foot, 2) wound at the edge of the foot without toe-amputation and 3) wound at the edge of the foot with toe-amputation”), and classifying the burn locations and the severity of the burns by one or more machine learning algorithms trained on images of burns (see para [0076]; “analyzing the image includes using a trained classifier and, in the system of these teachings, the image analysis component is configured to use a trained classifier. [0077] A machine learning based solutions has been developed in which the wound boundary determination is an object recognition task since it is claimed that the machine learning (ML) is currently the only known way to develop computer vision systems that are robust and easily reusable in different environments. Herein below, the term "wound recognition" is used as the equivalent expression of "wound boundary determination", since both have the same goal”).
Di in the combination further teaches wherein the method further comprises recognizing severity of the burns (see Abstract; “Additionally, alternatives described herein are used with a variety of tissue classification applications, including assessing the presence and severity of tissue conditions, such as burns and other wounds”).
Regarding claim 19, the rejection of claim 15 is incorporated herein.
Di in the combination further teaches further comprising: recognizing skin of the patient (see para [0086]; “FIG. 44 illustrates example burn injured skin”); recognizing the burn locations on the skin of the patient (see para [0296]; “FIG. 24 illustrates the location of burn injuries on dorsum of the pig”); and determining a percentage of the skin that is burned (see para [0162]; “Accordingly, the imaging techniques described herein can provide a more accurate % TBSA burned calculation than relying only on these charts as is conventionally done”).
Regarding claim 20, the rejection of claim 15 is incorporated herein.
Pedersen et al. in the combination further teaches further comprising: recognizing distractors in the plurality of images; and classifying the distractors in the plurality of images (see para [0078]; “In object recognition field, three major tasks needed to be solved to achieve the best recognition performance: 1) find the best representation to distinguish the object and background, 2) find the most efficient object search method and 3) design the most effective machine learning based classifier to determine whether a representation belongs to the object category or not”).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WINTA GEBRESLASSIE whose telephone number is (571)272-3475. The examiner can normally be reached Monday-Friday, 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached at 571-270-5180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WINTA GEBRESLASSIE/ Examiner, Art Unit 2677
/ANDREW W BEE/ Supervisory Patent Examiner, Art Unit 2677