DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because a “computer program product comprising computer program code” is not one of the four statutory categories and is considered to be software per se.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 2, 10, 11, and 13-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Serval et al. (US 2024/0065554 A1, Feb. 29, 2024) (hereinafter “Serval”).
Regarding claims 1, 15, and 18: Serval discloses a computer-implemented method of monitoring human skin, the method comprising: recording, by a camera and a depth sensor, a plurality of 3D-surface images of a skin surface, each 3D-surface image being taken from a different angle and a different distance with respect to the skin surface and comprising a plurality of pixels with depth information and RGB-color information (fig. 5A, steps 502-504, [0094]; [0037] - where images are 3D acquired with the RGB stereo camera); detecting, by a computer system, skin surface features in the 3D-surface images (fig. 5A, step 506, [0098]); determining, by the computer system, respective angular orientation and distance of the 3D-surface images, using the skin surface features and the depth information (fig. 5A, step 508, [0099]); generating, by the computer system, a 6D-model of the skin surface, using the 3D-surface images and their respective angular orientation and distance, the 6D-model of the skin surface comprising a plurality of surface data points, each surface data point comprising 3D-coordinates and RGB-color information ([0080] - 3D map ("model"), constructed according to step 508, is combined with imaging data including RGB data, [0085] - visible light features combined with the 3D map/model where visible light images are RGB [0037], [0040]-[0043], [0080]); generating, by the computer system, a dermatological evaluation of the 6D-model of the skin surface ([0084], [0117]-[0118], [0127]-[0128]); and rendering, by the computer system, the dermatological evaluation on a user interface ([0070], [0084], [0121]).
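For illustration of the claimed 6D-model data structure (a plurality of surface data points, each comprising 3D-coordinates and RGB-color information), a minimal sketch follows; the representation, names, and values are assumptions made solely for illustration and are not taken from Serval or from the claims.

# Minimal sketch (illustrative assumption, not Serval's implementation): a
# 6D-model as a collection of surface data points, each point holding
# 3D-coordinates and RGB-color information, i.e. (x, y, z, r, g, b).
import numpy as np

def build_6d_model(points_xyz, colors_rgb):
    """Stack N x 3 coordinates and N x 3 RGB values into an N x 6 array."""
    assert points_xyz.shape == colors_rgb.shape
    return np.hstack([points_xyz, colors_rgb])

# Example with three surface data points (coordinates in meters, colors 0-255):
xyz = np.array([[0.00, 0.00, 1.20], [0.01, 0.00, 1.21], [0.00, 0.01, 1.22]])
rgb = np.array([[210.0, 170.0, 150.0], [208.0, 168.0, 149.0], [205.0, 160.0, 140.0]])
model_6d = build_6d_model(xyz, rgb)   # shape (3, 6)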
Regarding claim 2: Serval discloses the method of claim 1, wherein detecting the skin surface features comprises the computer system determining from the 3D-surface images a body part, and using an initial coordinate system and respective feature outlines, associated with the body part, for detecting the skin surface features in the 3D-surface images ([0099]-[0100], [0124], [0127]).
Regarding claim 10: Serval discloses the method of claim 1, further comprising the computer system using the 6D-model of the skin surface to determine skin surface characteristics, the skin surface characteristics including at least one of: wrinkles, wrinkle dimensions, pores, pore dimensions, milia, nodules, nodule dimensions, nodule shapes, macula, macula dimensions, macula shapes, papule, papule dimensions, papule shapes, plaque, plaque dimensions, plaque shape, pustules, pustule dimensions, blisters, blister dimensions, wheal, wheal dimensions, wheal shapes, comedo, erosions, erosion dimensions, erosion shapes, ulcers, ulcer dimensions, ulcer shapes, crust, scale, scale types, rhagade, or atrophy; and to generate the dermatological evaluation using the skin surface characteristics ([0061], [0124], [0133]).
Regarding claim 11: Serval discloses the method of claim 1, further comprising the computer system, upon completion of a first dermatological evaluation for a first 6D-model of the skin surface, storing in a data storage the first 6D-model of the skin surface and the first dermatological evaluation (fig. 3, [0080]-[0082]); upon receiving a second 6D-model of the skin surface, identifying in the data storage the first dermatological evaluation, by comparing the second 6D-model of the skin surface to the first 6D-model of the skin surface (fig. 3, [0080]-[0082]); upon completion of a second dermatological evaluation for the second 6D-model of the skin surface, generating a tracking report by comparing the second dermatological evaluation to the first dermatological evaluation (fig. 3, [0082]-[0084]); and rendering the tracking report on the user interface ([0084]).
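For illustration of the claimed storing, matching, and tracking of evaluations, a minimal sketch follows; the data layout and matching criterion are assumptions of the sketch and are not details taken from Serval.

# Minimal sketch (illustrative assumption): matching a newly received 6D-model
# against stored 6D-models and reporting changes between two dermatological
# evaluations of the same skin surface.
import numpy as np

def match_stored_model(new_model, stored_records):
    """Return the key of the stored record whose 6D-model coordinates lie
    closest to the new model (mean nearest-point distance over the first N points)."""
    best_key, best_dist = None, float("inf")
    for key, (old_model, _evaluation) in stored_records.items():
        n = min(len(new_model), len(old_model))
        dist = np.mean(np.linalg.norm(new_model[:n, :3] - old_model[:n, :3], axis=1))
        if dist < best_dist:
            best_key, best_dist = key, dist
    return best_key

def tracking_report(first_evaluation, second_evaluation):
    """Report the per-characteristic change between two evaluations."""
    keys = set(first_evaluation) | set(second_evaluation)
    return {k: second_evaluation.get(k, 0) - first_evaluation.get(k, 0) for k in keys}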
Regarding claim 13: Serval discloses the method of claim 1, wherein generating the dermatological evaluation comprises an electronic device of the computer system transmitting the 6D-model of the skin surface via communication network to a processing system of the computer system, the processing system generating the dermatological evaluation using the 6D-model of the skin surface received from the electronic device, and the processing system transmitting the dermatological evaluation to the electronic device ([0123] - "...when the skin analysis is performed via a computing device communicatively coupled to the smart mirror, the quantification result is transmitted to one or more of the smart device, mobile device, and any other connected device communicatively coupled to the smart mirror"; fig. 1B; the analysis being performed by a computer "communicatively coupled" to the smart mirror is an implicit disclosure of the claim limitations).
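For illustration of transmitting the 6D-model to a processing system over a communication network and receiving the dermatological evaluation back, a minimal sketch follows; the endpoint URL, payload format, and use of the requests library are hypothetical assumptions of the sketch and are not disclosed by Serval.

# Minimal sketch (illustrative assumption; the URL and payload format are
# hypothetical): an electronic device sending a 6D-model over a network and
# receiving a dermatological evaluation from a remote processing system.
import requests

def request_remote_evaluation(model_6d, server_url="https://example.invalid/evaluate"):
    """POST the 6D-model as JSON and return the evaluation produced remotely."""
    payload = {"surface_points": model_6d.tolist()}   # N x 6 rows: x, y, z, r, g, b
    response = requests.post(server_url, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()   # e.g. {"findings": [...], "summary": "..."}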
Regarding claim 14: Serval discloses a computer system comprising a camera, a depth sensor, a user interface, and one or more processors (figs. 1A and 1B), the one or more processors being configured to perform the following steps: controlling the camera and the depth sensor to record a plurality of 3D-surface images of a skin surface, each 3D-surface image being taken from a different angle and a different distance with respect to the skin surface and comprising a plurality of pixels with depth information and RGB-color information (fig. 5A, steps 502-504, [0094]; [0037] - where images are 3D acquired with the RGB stereo camera); detecting skin surface features in the 3D-surface images (fig. 5A, step 506, [0098]); determining respective angular orientation and distance of the 3D-surface images, using the skin surface features and the depth information (fig. 5A, step 508, [0099]); generating a 6D-model of the skin surface, using the 3D-surface images and their respective angular orientation and distance, the 6D-model comprising a plurality of surface data points, each surface data point comprising 3D-coordinates and RGB-color information ([0080] - 3D map ("model") is combined with imaging data including RGB data, [0085] - visible light features combined with the 3D map/model where visible light images are RGB [0037], [0040]-[0043], [0080]); generating a dermatological evaluation of the 6D-model of the skin surface ([0084], [0117]-[0118], [0127]-[0128]); and rendering the dermatological evaluation on the user interface ([0070], [0084], [0121]).
Regarding claim 15: Serval discloses the computer system of claim 14, wherein the one or more processors are configured to perform a method according to one of the claims 1 to 12 (see rejection of claim 1 above).
Regarding claim 16: Serval discloses the computer system of claim 14, comprising an electronic device and a processing system, the electronic device including at least one of the one or more processors configured to transmit the 6D-model of the skin surface via communication network to the processing system, and the processing system, including at least another one of the one or more processors configured to generate the dermatological evaluation, using the 6D-model of the skin surface received from the electronic device, and to transmit the dermatological evaluation to the electronic device ([0123] - "...when the skin analysis is performed via a computing device communicatively coupled to the smart mirror, the quantification result is transmitted to one or more of the smart device, mobile device, and any other connected device communicatively coupled to the smart mirror"; fig. 1B; the analysis being performed by a computer "communicatively coupled" to the smart mirror is an implicit disclosure of the claim limitations).
Regarding claim 17: Serval discloses a computer program product comprising computer program code configured to control a processor of an electronic device comprising a camera, a depth sensor, and a user interface connected to the processor ([0210]), such that the processor performs the following steps: control the camera and the depth sensor to record a plurality of 3D-surface images of a skin surface of a human skin, each 3D-surface image being taken from a different angle and a different distance with respect to the skin surface and comprising a plurality of pixels with depth information and RGB-color information (fig. 5A, steps 502-504, [0094]; [0037] - where images are 3D acquired with the RGB stereo camera); detect skin surface features in the 3D-surface images (fig. 5A, step 506, [0098]); determine respective angular orientation and distance of the 3D-surface images, using the skin surface features and the depth information (fig. 5A, step 508, [0099]); generate a 6D-model of the skin surface, using the 3D-surface images and their respective angular orientation and distance, the 6D-model of the skin surface comprising a plurality of surface data points, each surface data point comprising 3D-coordinates and RGB-color information ([0080] - 3D map ("model") is combined with imaging data including RGB data, [0085] - visible light features combined with the 3D map/model where visible light images are RGB [0037], [0040]-[0043], [0080]); transmit the 6D-model of the skin surface to a computerized processing system ([0081]-[0084], [0123]; fig. 1B); receive from the computerized processing system a dermatological evaluation of the 6D-model of the skin surface ([0084], [0117]-[0118], [0127]-[0128]); and render the dermatological evaluation on the user interface ([0070], [0084], [0121]).
Regarding claim 18: Serval discloses the computer program product of claim 17, wherein the computer program code is further configured to control the processor to perform a method according to one of the claims 1 to 12 (see rejection of claim 1 above).
Regarding claim 19: Serval discloses a computer program product comprising computer program code configured to control one or more processors of a computerized processing system ([0210]) to perform the following steps: receiving from an electronic device a 6D-model of a skin surface of a person, the 6D-model of the skin surface comprising a plurality of surface data points, each surface data point comprising 3D-coordinates and RGB-color information ([0080]-[0084], [0099], [0123], [0204]; fig. 1B); generating a dermatological evaluation of the 6D-model of the skin surface of the person ([0080]-[0084], [0099], [0123], [0204]; fig. 1B); and transmitting the dermatological evaluation to the electronic device ([0080]-[0084], [0099], [0123], [0204]; fig. 1B).
Regarding claim 20: Serval discloses the computer program product of claim 19, wherein the computer program code is further configured to control the one or more processors to perform a method according to one of the claims 5 to 12 (see rejections of at least claims 10-11 above).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 3 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Serval in view of Rubbert et al. (US 2002/0006217, Jan. 17, 2002) (hereinafter “Rubbert”).
Regarding claim 3: Serval discloses the method of claim 2, including the generation of a 3D model from image data ([0099]) but is generally silent on the details of the process of creating a model from images.
Rubbert, in the same problem solving area of 3D model generation, discloses a process of generating a 3D model from a plurality of 3D surface images including selecting from the plurality of 3D-surface images a 3D-reference image, the 3D-reference image having a coordinate system with an angular orientation comparatively closest to the initial coordinate system, and generating the 6D-model of the skin surface using the 3D-reference image (fig. 36; [0232] - images acquired at different angles need to be registered to one another to generate 3D model, [0236]-[0238] - registration is based on reference frame corresponding to initial coordinate system). Rubbert further teaches that this method of registration yields a highly accurate 3D model without requiring or using pre-knowledge of the spatial relationship between the (image) frames ([0021]).
It would have been prima facie obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to implement the 3D model generation of Serval in the manner taught by Rubbert in order to gain the advantages of a highly accurate 3D reconstruction without the need for prior knowledge of the spatial relationship between frames.
Regarding claim 4: Serval and Rubbert teach the method of claim 3, wherein generating the 6D-model of the skin surface comprises the computer system determining adjacent 3D-surface images, starting with the 3D-reference image, whereby two adjacent 3D-surface images have a comparatively closest angular orientation and distance to each other, and rotating and translating adjacent 3D-surface images towards the 3D-reference image, starting with the 3D-surface image adjacent to the 3D-reference image, to match their respective angular orientation and distance (Rubbert - [0244]-[0246]).
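For illustration of selecting a 3D-reference image and rotating and translating the other 3D-surface images toward it, a minimal sketch follows; it uses the Kabsch algorithm on assumed feature correspondences and registers each image directly to the reference rather than chaining through adjacent images, which are simplifications made for illustration only and are not taken from Serval or Rubbert.

# Minimal sketch (illustrative assumption): rigid registration of a 3D-surface
# image toward a 3D-reference image from matched feature points (Kabsch
# algorithm); feature correspondences are assumed to be already established
# from the detected skin surface features.
import numpy as np

def rigid_align(src_pts, ref_pts):
    """Return rotation R and translation t such that R @ src + t ~ ref."""
    src_c, ref_c = src_pts.mean(axis=0), ref_pts.mean(axis=0)
    H = (src_pts - src_c).T @ (ref_pts - ref_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ref_c - R @ src_c
    return R, t

def register_to_reference(images, reference_index):
    """Rotate and translate each image's points into the reference frame."""
    aligned = {reference_index: images[reference_index]["points"]}
    for i, img in enumerate(images):
        if i == reference_index:
            continue
        R, t = rigid_align(img["features"], images[reference_index]["features"])
        aligned[i] = (R @ img["points"].T).T + t
    return aligned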
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Serval in view of Shin et al. (US 2022/0020494 A1, Jan. 20, 2022) (hereinafter “Shin”).
Regarding claim 5: Serval discloses the method of claim 1, but is silent on wherein generating the dermatological evaluation comprises the computer system including in the dermatological evaluation a probabilistic ranking of dermatological diagnoses related to skin diseases, skin issues, and/or skin types.
Shin, in the same field of endeavor, discloses generating a dermatological evaluation comprising a probabilistic ranking of dermatological diagnoses related to skin diseases, skin issues, and/or skin types ([0079]).
It would have been prima facie obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to provide a probabilistic ranking of dermatological diagnoses as taught by Shin in order to provide the user with an indication of the confidence of the diagnosis.
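For illustration of a probabilistic ranking of dermatological diagnoses, a minimal sketch follows; the labels, scores, and softmax normalization are assumptions of the sketch and are not taken from Shin.

# Minimal sketch (illustrative assumption): converting raw classifier scores
# into a probabilistic ranking of dermatological diagnoses, highest first.
import numpy as np

def rank_diagnoses(scores):
    """Softmax-normalize a {diagnosis: score} mapping into (diagnosis, probability) pairs."""
    labels = list(scores)
    logits = np.array([scores[k] for k in labels], dtype=float)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return sorted(zip(labels, probs), key=lambda kv: kv[1], reverse=True)

# Example: rank_diagnoses({"eczema": 2.1, "psoriasis": 0.4, "healthy skin": -1.0})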
Claims 6-9 are rejected under 35 U.S.C. 103 as being unpatentable over Serval.
Regarding claim 6: Serval discloses the method of claim 1, further comprising the computer system generating the dermatological evaluation using the 6D-model of the skin surface as input to a neural network, trained to identify skin diseases, skin issues, and/or skin types, using a plurality of 6D-models of skin surfaces, which 6D-models of skin surfaces comprise a plurality of surface data points with 3D-coordinates and RGB-color information ([0054]-[0055], [0077]-[0080], [0115] – it is noted that any neural network must be trained on the same type/format of data that is to be used to classify, i.e. in order to learn to classify the features of a 6D model the training data must be 6D models).
While Serval does not explicitly state that the training data sets “comprising features corresponding to various skin conditions” are from a plurality of people, it is considered prima facie obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to generate the training data from a plurality of people in order to obtain data representative of a large number of possible skin types and skin conditions and thereby more effectively train the neural network.
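For illustration of a neural network that accepts a 6D-model (an unordered set of surface data points with 3D-coordinates and RGB-color information) as input, a minimal sketch follows; the architecture and the use of PyTorch are assumptions of the sketch and are not disclosed by Serval.

# Minimal sketch (illustrative assumption, not Serval's network): a classifier
# operating directly on a 6D-model, i.e. an unordered set of (x, y, z, r, g, b)
# surface data points.
import torch
import torch.nn as nn

class PointSkinClassifier(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.per_point = nn.Sequential(nn.Linear(6, 64), nn.ReLU(),
                                       nn.Linear(64, 128), nn.ReLU())
        self.head = nn.Linear(128, num_classes)

    def forward(self, points):             # points: (batch, N, 6)
        feats = self.per_point(points)     # per-point features: (batch, N, 128)
        pooled = feats.max(dim=1).values   # order-invariant pooling over points
        return self.head(pooled)           # class logits: (batch, num_classes)

# Example: PointSkinClassifier()(torch.rand(2, 1024, 6)) yields (2, 5) logits.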
Regarding claim 7: Serval discloses generating a map of the skin by projecting the 6D model onto a 2D map and generating the dermatological evaluation based on the projection by using the projection map as input to a neural network trained to identify skin diseases, skin issues, and/or skin types ([0104], [0054], [0115]; figs. 5A-5B; it is noted that any neural network must be trained on the same type/format of data that is to be used to classify). Serval discloses that the projection is converted to the LAB color space (from RGB) because the LAB color space provides advantages for quantification; however, a person having ordinary skill in the art prior to the effective filing date would have been well aware that more than one color space could be used to perform this function, and it would have been prima facie obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to simplify the process of the projection by leaving the projection in the original RGB color space rather than converting to LAB.
While Serval does not explicitly state that the training data sets “comprising features corresponding to various skin conditions” are from a plurality of people, it is considered prima facie obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to generate the training data from a plurality of people in order to obtain data representative of a large number of possible skin types and skin conditions and thereby more effectively train the neural network.
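For illustration of projecting the 6D-model onto a 2D map while leaving the color values in the original RGB color space, as discussed above for claim 7, a minimal sketch follows; the orthographic projection and rasterization scheme are assumptions of the sketch and are not Serval's projection.

# Minimal sketch (illustrative assumption): rasterizing the (x, y, z, r, g, b)
# surface data points of a 6D-model into a 2D RGB map by a simple orthographic
# projection onto the x-y plane, without conversion to the LAB color space.
import numpy as np

def project_to_map(model_6d, resolution=256):
    """Return a resolution x resolution x 3 RGB image built from the 6D-model."""
    xy = model_6d[:, :2]
    rgb = model_6d[:, 3:6]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    uv = ((xy - lo) / np.maximum(hi - lo, 1e-9) * (resolution - 1)).astype(int)
    image = np.zeros((resolution, resolution, 3))
    image[uv[:, 1], uv[:, 0]] = rgb    # later points overwrite earlier ones
    return image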
Regarding claim 8: Serval discloses the method of claim 1, further comprising the computer system generating one or more 6D-sub-models of the skin surface, by extracting one or more areas of the 6D-model of the skin surface, each of the 6D-sub-models of the skin surface comprising a plurality of surface data points of the respective area, each surface data point comprising 3D-coordinates and RGB-color information ([0075]-[0078], where the method of fig. 3 uses the maps described with respect to step 508 - [0080], [0099]); and generating the dermatological evaluation using the one or more 6D-sub-models of the skin surface as input to a neural network, trained to identify skin diseases, skin issues, and/or skin types, using a plurality of one or more of the 6D-sub-models of skin surfaces, which one or more 6D-sub-models of skin surfaces comprise a plurality of surface data points of the respective area with 3D-coordinates and RGB-color information ([0054]-[0055], [0077]-[0080] – it is noted that any neural network must be trained on the same type/format of data that is to be used to classify, i.e. in order to learn to classify the features of a 6D model the training data must be 6D models).
While Serval does not explicitly state that the training data sets “comprising features corresponding to various skin conditions” are from a plurality of people, it is considered prima facie obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to generate the training data from a plurality of people in order to obtain data representative of a large number of possible skin types and skin conditions and thereby more effectively train the neural network.
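For illustration of extracting a 6D-sub-model as an area of the 6D-model, a minimal sketch follows; the axis-aligned region test is an assumption made for illustration and is not taken from Serval.

# Minimal sketch (illustrative assumption): a 6D-sub-model as the subset of
# surface data points whose 3D-coordinates fall inside a chosen axis-aligned
# region of the skin surface; the RGB-color information is carried along.
import numpy as np

def extract_sub_model(model_6d, lower, upper):
    """Keep the rows of the N x 6 model whose (x, y, z) lie within [lower, upper]."""
    xyz = model_6d[:, :3]
    inside = np.all((xyz >= lower) & (xyz <= upper), axis=1)
    return model_6d[inside]

# Example: extract_sub_model(model_6d, np.array([0.0, 0.0, 1.0]), np.array([0.05, 0.05, 1.5]))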
Regarding claim 9: Serval discloses the method of claim 1, further comprising the computer system generating at least one of: one or more 6D-sub-models of the skin surface or a 5D-map of the skin surface, the one or more 6D-sub-models of the skin surface being generated by extracting one or more areas of the 6D-model of the skin surface, each of the 6D-sub-models of the skin surface (20) comprising a plurality of surface data points of the respective area, each surface data point comprising 3D-coordinates and RGB-color information, and the 5D-map of the skin surface being generated by applying a projection to the 6D-model of the skin surface, the 5D-map of the skin surface comprising a plurality of map data points, each map data point comprising 2D-coordinates and RGB-color information ([0075]-[0078], where the method of fig. 3 uses the maps described with respect to step 508 - [0080], [0099]); and generating the dermatological evaluation using at least one of: the 6D-model of the skin surface, the one or more 6D-sub-models of the skin surface, or the 5D-map of the skin surface, as input to a neural network, trained to identify skin diseases, skin issues, and/or skin types, using at least one of: a plurality of 6D-models of skin surfaces from a plurality of people, a plurality of one or more 6D-sub-models of skin surfaces, or a plurality of 5D-maps of skin surfaces ([0054]-[0055], [0077]-[0080] – it is noted that any neural network must be trained on the same type/format of data that is to be used to classify, i.e. in order to learn to classify the features of a 6D model the training data must be 6D models).
While Serval does not explicitly state that the training data sets “comprising features corresponding to various skin conditions” are from a plurality of people, it is considered prima facie obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to generate the training data from a plurality of people in order to obtain data representative of a large number of possible skin types and skin conditions and thereby more effectively train the neural network.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Serval in view of Thomas (US 2016/0314585 A1, Oct. 27, 2016) (hereinafter “Thomas”).
Regarding claim 12: Serval discloses the method of claim 1, but is silent on wherein generating the dermatological evaluation comprises the computer system rendering the 6D-model of the skin surface on a display, receiving evaluation data from a user via a user interface, and generating the dermatological evaluation using the evaluation data.
Thomas, in the same field of endeavor, discloses a method of performing skin evaluation where the skin features may be determined automatically or based on receiving evaluation data regarding a displayed patient image from a user via a user interface ([0033]).
It would have been prima facie obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to modify the method of Serval to allow manual or partially manual annotation as disclosed by Thomas in order to allow the user to control which features are designated for analysis.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAROLYN A PEHLKE whose telephone number is (571)270-3484. The examiner can normally be reached 9:00am - 5:00pm (Central Time), Monday - Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Koharski, can be reached at (571) 272-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CAROLYN A PEHLKE/Primary Examiner, Art Unit 3799