DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1-18, as originally filed, are currently pending and have been considered below.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “image obtaining unit” in claim 17.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 17 and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 17 recites the limitation "the apparatus" in line 1 of the claim. There is insufficient antecedent basis for this limitation in the claim.
Claim 17 recites the limitation "said image obtaining device" in line 3 of the claim. There is insufficient antecedent basis for this limitation in the claim.
Claim 17 recites the limitation "at least one color image" in line 4 of the claim. There is insufficient antecedent basis for this limitation in the claim.
Regarding claim 17, the phrase "preferably" renders the claim indefinite because it is unclear whether the limitation(s) following the phrase are part of the claimed invention. See MPEP § 2173.05(d).
Claim 18 recites the limitation "said image processing device" in line 1 of the claim. There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 5-10, 12, and 16-18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Jeong et al., U.S. Publication No. 2019/0295728, hereinafter, “Jeong”.
As per claim 1, Jeong discloses a method of determining a cosmetic skin attribute of a person (Jeong, ¶0001, a customized cosmetics provision system and an operating method thereof, and more particularly, to a customized cosmetics provision system for providing customized cosmetics appropriate for a skin condition of a user by diagnosing the skin condition of the user), the method comprising the steps of:
a) obtaining at least one color image comprising at least one portion of skin of the person (Jeong, ¶0077-0079, The photographing unit 12 may photograph a region required for analysis of a user's skin condition. For example, the photographing unit 12 may photograph a user's skin region, a user's entire face region, and the like. The measurement unit 13 may measure data required for analysis of a skin condition. For example, the measurement unit 13 may measure light reflected from a skin region, or skin color, or moisture content of skin, or pore depth, or ambient temperature/humidity around the diagnosis device 10, or the like. The measurement unit 13 may include an optical sensor, an imaging sensor, a moisture measuring sensor, a temperature sensor, and the like);
b) analyzing the at least one color image to obtain a disorder value of a certain color (Jeong, ¶0097, The photographing unit 12 of the diagnosis device 10 may photograph a skin region to obtain a skin image. The analysis unit 15 of the diagnosis device 10 may separate the obtained image into a plurality of skin color planes which are R-plane, G-plane and B-plane to diagnose a skin condition; Jeong, ¶0098, According to one embodiment, the analysis unit 15 may calculate an average gradation of a skin image by extracting data of each separated color plane. Meanwhile, the storage unit 17 of the diagnosis device 10 may store a plurality of gradations and a gradation table in which skin condition information corresponding to each gradation is mapped. The analysis unit 15 of the diagnosis device 10 may obtain a gradation matching the calculated average gradation from the stored gradation table to diagnose a skin condition of a user; Jeong, ¶0099, the analysis unit 15 of the diagnosis device 10 of may extract only one color plane of a plurality of separated color planes (R-plane, G-plane and B-plane), and may calculate a variance of gray levels of the extracted color plane. Meanwhile, the storage unit 17 of the diagnosis device 10 may store an age curve mapping a gray level variance and a skin age. Therefore, the analysis unit 15 may obtain the skin age corresponding to the calculated gray level variance from the age curve to diagnose a skin condition); and
c) determining the cosmetic skin attribute of the at least one portion of skin of the person based on the disorder value (Jeong, ¶0098, The analysis unit 15 of the diagnosis device 10 may obtain a gradation matching the calculated average gradation from the stored gradation table to diagnose a skin condition of a user; Jeong, ¶0099, the analysis unit 15 may obtain the skin age corresponding to the calculated gray level variance from the age curve to diagnose a skin condition).
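The per-plane analysis Jeong describes in ¶0097-¶0099 can be sketched as a short computation. The following is an illustrative reconstruction of that disclosure, not Jeong's code; the function name, the use of NumPy, and the test image are assumptions:

```python
import numpy as np

def analyze_skin_image(rgb):
    """Split an RGB skin image into R/G/B planes, compute an average
    gradation across the planes (cf. Jeong, ¶0098), and compute the
    gray-level variance of a single extracted plane (cf. Jeong, ¶0099)."""
    r_plane, g_plane, b_plane = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Average gradation; per ¶0098, a stored gradation table would map
    # this value to a skin-condition entry.
    avg_gradation = float(np.mean([r_plane.mean(), g_plane.mean(), b_plane.mean()]))
    # Gray-level variance of one plane; per ¶0099, a stored age curve
    # would map this variance to a skin age.
    r_variance = float(np.var(r_plane))
    return avg_gradation, r_variance

# Hypothetical uniform 4x4 test image
img = np.zeros((4, 4, 3), dtype=float)
img[..., 0] = 100.0  # R plane
img[..., 1] = 60.0   # G plane
img[..., 2] = 20.0   # B plane
avg, var = analyze_skin_image(img)
```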
As per claim 5, Jeong discloses the method of claim 1, wherein the disorder value of the certain color is the disorder value of red color (Jeong, ¶0097, The photographing unit 12 of the diagnosis device 10 may photograph a skin region to obtain a skin image. The analysis unit 15 of the diagnosis device 10 may separate the obtained image into a plurality of skin color planes which are R-plane, G-plane and B-plane to diagnose a skin condition).
As per claim 6, Jeong discloses the method of claim 1, wherein at least one color image is at least one color channel image (Jeong, ¶0097, The photographing unit 12 of the diagnosis device 10 may photograph a skin region to obtain a skin image. The analysis unit 15 of the diagnosis device 10 may separate the obtained image into a plurality of skin color planes which are R-plane, G-plane and B-plane to diagnose a skin condition; Jeong, ¶0098, According to one embodiment, the analysis unit 15 may calculate an average gradation of a skin image by extracting data of each separated color plane. Meanwhile, the storage unit 17 of the diagnosis device 10 may store a plurality of gradations and a gradation table in which skin condition information corresponding to each gradation is mapped. The analysis unit 15 of the diagnosis device 10 may obtain a gradation matching the calculated average gradation from the stored gradation table to diagnose a skin condition of a user).
As per claim 7, Jeong discloses the method of claim 6, wherein the at least one color channel image is an image in a color system selected from the group consisting of L*a*b* color system, RGB color system, HSL/HSV color system, and CMYK color system (Jeong, ¶0100, the analysis unit 15 of the diagnosis device 10 may convert separated color planes (R-plane, G-plane and B-plane) into standard RGB ... The analysis unit 15 may diagnose a skin condition such as a skin type using the modeled R, G and B components; Jeong, ¶0109, The analysis unit 15 may apply the obtained L, a, and b to the predetermined formula).
As per claim 8, Jeong discloses the method of claim 7, wherein the at least one color channel image is an image channel in L*a*b* color system (Jeong, ¶0109, the storage unit 17 of the diagnosis device 10 may store at least one formula related to diagnosis of a skin condition. The photographing unit 12 or the measurement unit 13 of the diagnosis device 10 may obtain data related to the skin condition, and the analysis unit 15 may apply the obtained data to the predetermined formula to diagnose the skin condition. For example, the measurement unit 13 of the diagnosis device 10 may measure light reflectivity of a skin region, a mucosa region or a connective tissue region. The analysis unit 15 may obtain L representing brightness through the measured light reflectivity, a representing a closer color of red and green, and b representing a closer color of yellow and blue. The analysis unit 15 may apply the obtained L, a, and b to the predetermined formula. The predetermined formula may be (Lmax−L)×a, (Lmax−L)×b, but it is merely illustrative. The analysis unit 15 may apply the obtained L, a, and b to the predetermined formula to diagnose a skin condition such as skin color and colors of red spots).
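The formula Jeong quotes in ¶0109 is simple enough to state directly. The function names and the Lmax = 100 CIELAB ceiling below are assumptions, and Jeong itself labels the formula "merely illustrative":

```python
def redness_index(L, a, L_max=100.0):
    # (Lmax - L) x a: darker regions (lower L) with a higher a* value
    # (redder) score higher, cf. Jeong's diagnosis of red-spot colors.
    return (L_max - L) * a

def yellowness_index(L, b, L_max=100.0):
    # (Lmax - L) x b: the analogous score on the b* (yellow-blue) axis.
    return (L_max - L) * b
```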
As per claim 9, Jeong discloses the method of claim 1, wherein the step (b) is conducted by the following steps:
1) Select a region of interest (ROI) on the at least one color channel image (Jeong, ¶0077, The photographing unit 12 may photograph a region required for analysis of a user's skin condition. For example, the photographing unit 12 may photograph a user's skin region, a user's entire face region, and the like);
2) Defining a plurality of tiles across the ROI (Jeong, ¶0097, The photographing unit 12 of the diagnosis device 10 may photograph a skin region to obtain a skin image. The analysis unit 15 of the diagnosis device 10 may separate the obtained image into a plurality of skin color planes which are R-plane, G-plane and B-plane to diagnose a skin condition);
3) Calculate an average intensity value of the certain color for each tile (Jeong, ¶0098, According to one embodiment, the analysis unit 15 may calculate an average gradation of a skin image by extracting data of each separated color plane. Meanwhile, the storage unit 17 of the diagnosis device 10 may store a plurality of gradations and a gradation table in which skin condition information corresponding to each gradation is mapped. The analysis unit 15 of the diagnosis device 10 may obtain a gradation matching the calculated average gradation from the stored gradation table to diagnose a skin condition of a user);
4) Calculate a gradient of the average intensity value between adjacent tiles by the following equation: |Ii,j - Ii+1,j| + |Ii,j - Ii,j+1| wherein Ii,j is an average intensity value of a tile at a position (i, j) calculated in the above step (3), Ii+1,j is an average intensity value of a tile at a position (i+1,j) calculated in the above step (3), Ii,j+1 is an average intensity value of a tile at a position (i,j+1) calculated in the above step (3) (Jeong, ¶0098, According to one embodiment, the analysis unit 15 may calculate an average gradation of a skin image by extracting data of each separated color plane. Meanwhile, the storage unit 17 of the diagnosis device 10 may store a plurality of gradations and a gradation table in which skin condition information corresponding to each gradation is mapped. The analysis unit 15 of the diagnosis device 10 may obtain a gradation matching the calculated average gradation from the stored gradation table to diagnose a skin condition of a user; Jeong, ¶0102, the light-emitting unit 14 of the diagnosis device 10 irradiates excitation light to a skin region with a predetermined area, and the measurement unit 13 of the diagnosis device 10 may detect magnetic fluorescence generated by the irradiated excitation light. The analysis unit 15 of the diagnosis device 10 may generate a fluorescence image of a skin using the detected magnetic fluorescence, and may analyze a distribution pattern of fluorescence intensity through the generated fluorescence image of the skin to diagnose the skin condition of the user);
5) Calculate a disorder value by the following equation: (∑i,j |Ii,j - Ii+1,j| + ∑i,j |Ii,j - Ii,j+1|) / SROI
wherein SROI is the total number of tiles within the ROI (Jeong, ¶0098, According to one embodiment, the analysis unit 15 may calculate an average gradation of a skin image by extracting data of each separated color plane. Meanwhile, the storage unit 17 of the diagnosis device 10 may store a plurality of gradations and a gradation table in which skin condition information corresponding to each gradation is mapped. The analysis unit 15 of the diagnosis device 10 may obtain a gradation matching the calculated average gradation from the stored gradation table to diagnose a skin condition of a user).
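The five steps recited in claim 9 can be sketched as a short computation. The following is a reading of the claim language itself, not anything disclosed by Jeong; the tile size and the NumPy reshape-based tiling are assumptions:

```python
import numpy as np

def disorder_value(channel, tile):
    """Steps 2-5 of claim 9: tile the ROI, average the intensity per tile,
    sum the absolute gradients between vertically and horizontally adjacent
    tiles, and normalize by S_ROI, the total number of tiles in the ROI."""
    h, w = channel.shape
    nrows, ncols = h // tile, w // tile
    # Step 3: average intensity I[i, j] of each tile.
    I = channel[:nrows * tile, :ncols * tile].reshape(
        nrows, tile, ncols, tile).mean(axis=(1, 3))
    # Step 4: |I[i,j] - I[i+1,j]| and |I[i,j] - I[i,j+1]| over adjacent tiles.
    vertical = np.abs(I[:-1, :] - I[1:, :]).sum()
    horizontal = np.abs(I[:, :-1] - I[:, 1:]).sum()
    # Step 5: divide the summed gradients by S_ROI.
    return float((vertical + horizontal) / (nrows * ncols))
```

Under this reading, a uniform ROI yields a disorder value of zero, while strong tile-to-tile contrast in the chosen color channel drives the value up.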
As per claim 10, Jeong discloses the method of claim 9, wherein the step (4) is conducted for all tiles within ROI (Jeong, ¶0097, The photographing unit 12 of the diagnosis device 10 may photograph a skin region to obtain a skin image. The analysis unit 15 of the diagnosis device 10 may separate the obtained image into a plurality of skin color planes which are R-plane, G-plane and B-plane to diagnose a skin condition; Jeong, ¶0098, According to one embodiment, the analysis unit 15 may calculate an average gradation of a skin image by extracting data of each separated color plane. Meanwhile, the storage unit 17 of the diagnosis device 10 may store a plurality of gradations and a gradation table in which skin condition information corresponding to each gradation is mapped. The analysis unit 15 of the diagnosis device 10 may obtain a gradation matching the calculated average gradation from the stored gradation table to diagnose a skin condition of a user).
As per claim 12, Jeong discloses the method according to claim 1, wherein the cosmetic skin attribute is selected from the group consisting of: skin age, skin topography, skin tone, skin pigmentation, skin pores, skin inflammation, skin hydration, skin sebum level, acne, moles, skin radiance, skin shine, skin dullness, and skin barrier, forecast of the cosmetic skin attribute in future, and mixtures thereof (Jeong, ¶0018, A diagnosis device for providing customized cosmetics according to an embodiment of the present invention may include a measurement unit for measuring a skin condition, an analysis unit for diagnosing the measured skin condition, a control unit for recommending customized cosmetics based on a diagnosis result of the skin condition, and a communication unit for transmitting information on the recommended customized cosmetics; Jeong, ¶0019, The skin condition may include at least one of skin color, moisture content in skin, sebum content in skin, elasticity, wrinkles, presence of pigmentation, amount of pores, keratin, skin texture, sensitivity, skin type, and skin trouble).
As per claim 16, Jeong discloses the method according to claim 6, wherein the at least one color channel image is a a-image; wherein the disordered value is a a-disordered value (Jeong, ¶0018, A diagnosis device for providing customized cosmetics according to an embodiment of the present invention may include a measurement unit for measuring a skin condition, an analysis unit for diagnosing the measured skin condition, a control unit for recommending customized cosmetics based on a diagnosis result of the skin condition, and a communication unit for transmitting information on the recommended customized cosmetics; Jeong, ¶0078, The measurement unit 13 may measure data required for analysis of a skin condition. For example, the measurement unit 13 may measure light reflected from a skin region, or skin color, or moisture content of skin, or pore depth, or ambient temperature/humidity around the diagnosis device 10, or the like; Jeong, ¶0087, The storage unit 17 may store data required for diagnosing a skin condition. For example, the storage unit 17 may store a table in which colors of cosmetics corresponding to a measurement value are mapped, a table in which the number of wrinkles on skin and the age of the skin are mapped, and the like; Jeong, ¶0107, the measurement unit 13 of the diagnosis device 10 may photograph a cross-polarized image and a parallel-polarized image of skin, and may obtain a surface reflected light image corresponding to a difference between the photographed cross-polarized image and the parallel-polarized image to diagnose a skin condition such as a glossy index).
As per claim 17, Jeong discloses a system for determining a cosmetic skin attribute of a person (Jeong, ¶0001, a customized cosmetics provision system and an operating method thereof, and more particularly, to a customized cosmetics provision system for providing customized cosmetics appropriate for a skin condition of a user by diagnosing the skin condition of the user), the apparatus comprising:
an image obtaining unit for obtaining at least one image comprising at least one portion of skin of the person (Jeong, ¶0077-0079, The photographing unit 12 may photograph a region required for analysis of a user's skin condition. For example, the photographing unit 12 may photograph a user's skin region, a user's entire face region, and the like. The measurement unit 13 may measure data required for analysis of a skin condition. For example, the measurement unit 13 may measure light reflected from a skin region, or skin color, or moisture content of skin, or pore depth, or ambient temperature/humidity around the diagnosis device 10, or the like. The measurement unit 13 may include an optical sensor, an imaging sensor, a moisture measuring sensor, a temperature sensor, and the like),
wherein preferably said imaging obtaining device comprises a non-transitory computer readable storage medium configured to store the obtained at least one color image (Jeong, ¶0342, The present invention described above may be implemented as computer-readable codes in a medium on which a program is recorded. The computer-readable medium includes all kinds of recording devices in which computer-readable data is stored);
an image processing unit coupled with said imaging obtaining unit for analyzing the obtained at least one image to obtain a disordered value (Jeong, ¶0097, The photographing unit 12 of the diagnosis device 10 may photograph a skin region to obtain a skin image. The analysis unit 15 of the diagnosis device 10 may separate the obtained image into a plurality of skin color planes which are R-plane, G-plane and B-plane to diagnose a skin condition; Jeong, ¶0098, According to one embodiment, the analysis unit 15 may calculate an average gradation of a skin image by extracting data of each separated color plane. Meanwhile, the storage unit 17 of the diagnosis device 10 may store a plurality of gradations and a gradation table in which skin condition information corresponding to each gradation is mapped. The analysis unit 15 of the diagnosis device 10 may obtain a gradation matching the calculated average gradation from the stored gradation table to diagnose a skin condition of a user; Jeong, ¶0099, the analysis unit 15 of the diagnosis device 10 of may extract only one color plane of a plurality of separated color planes (R-plane, G-plane and B-plane), and may calculate a variance of gray levels of the extracted color plane. Meanwhile, the storage unit 17 of the diagnosis device 10 may store an age curve mapping a gray level variance and a skin age. Therefore, the analysis unit 15 may obtain the skin age corresponding to the calculated gray level variance from the age curve to diagnose a skin condition) and
determining the cosmetic skin attribute of the at least one portion of skin of the person based on the disordered value (Jeong, ¶0098, The analysis unit 15 of the diagnosis device 10 may obtain a gradation matching the calculated average gradation from the stored gradation table to diagnose a skin condition of a user; Jeong, ¶0099, the analysis unit 15 may obtain the skin age corresponding to the calculated gray level variance from the age curve to diagnose a skin condition),
a display generating unit coupled with the image processing unit for generating a display to display content data describing the determined cosmetic skin attribute (Jeong, ¶0022, The display unit may further display a diagnosis result of the skin condition).
As per claim 18, Jeong discloses the system of claim 17 wherein said image processing device comprises a processor with computer-executable instructions (Jeong, ¶0342, The present invention described above may be implemented as computer-readable codes in a medium on which a program is recorded. The computer-readable medium includes all kinds of recording devices in which computer-readable data is stored … In addition, the computer may include a control unit of a diagnosis device, a control unit of a skin management server, or a control unit of a manufacturing apparatus).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2-4 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Jeong et al., U.S. Publication No. 2019/0295728, hereinafter, “Jeong” as applied to claim 1 above, and further in view of Yoo, Korean Patent Publication No. KR 20220078231A, hereinafter, “Yoo”.
As per claim 2, Jeong discloses the method of claim 1, wherein the disorder value is based on at least one of the followings:
edges of the certain color (Jeong, ¶0118, the photographing unit 12 of the diagnosis device 10 may obtain a skin image by photographing a skin region with a white light or the like, and the analysis unit 15 may detect an edge line of a portion having a higher density than a predetermined threshold value by processing a color signal having a predetermined wavelength among color signals included in the obtained skin image to obtain a pigmentation region. The analysis unit 15 of the diagnosis device 10 may diagnose a skin condition of a user through the obtained pigmentation region).
Jeong does not explicitly disclose the following limitations as further recited however Yoo discloses wherein the disorder value is based on at least one of the followings:
total lengths of the edges of the certain color; a ratio of the longest radius to the shortest radius, wherein both radii are measured from the same center of the certain color; discrepancy of a tile of the certain color; and mixtures thereof (Yoo, ¶0065, the skin diagnosis device can detect an outline from an edge image using a predetermined outline detection algorithm (S430). A contour is a curve connecting all consecutive points with the same intensity and is a useful tool for shape analysis and object detection and recognition; Yoo, ¶0066, the skin diagnosis device can produce a quantified value based on the detected edge and outline(S440) ... the skin diagnosis device can calculate the number of detected edges and contours, and calculate a numerical value or score quantifying the degree of wrinkles based on the number of detected edges and contours).
It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to combine the teachings of Yoo with Jeong because they are in the same field of endeavor. One skilled in the art would have been motivated to include the calculation of the length of the edges as taught by Yoo for the color edge line of Jeong in order to obtain an accurate region for diagnosis (Yoo, ¶0065).
As per claim 3, Jeong and Yoo disclose the method of claim 2, wherein the disorder value is based on at least one of the followings:
total lengths of the edges of the certain color; a ratio of the longest radius to the shortest radius, wherein both radii are measured from the same center of the certain color; and mixtures thereof (Yoo, ¶0065, the skin diagnosis device can detect an outline from an edge image using a predetermined outline detection algorithm (S430). A contour is a curve connecting all consecutive points with the same intensity and is a useful tool for shape analysis and object detection and recognition; Yoo, ¶0066, the skin diagnosis device can produce a quantified value based on the detected edge and outline(S440) ... the skin diagnosis device can calculate the number of detected edges and contours, and calculate a numerical value or score quantifying the degree of wrinkles based on the number of detected edges and contours).
As per claim 4, Jeong and Yoo disclose the method of claim 3, wherein the disorder value is based on a ratio of the longest radius to the shortest radius, wherein both radii are measured from the same center of the certain color (Jeong, ¶0121, The photographing unit 12 of the diagnosis device 10 may photograph a fluorescence image of a skin region. The analysis unit 15 of the diagnosis device 10 may extract a bright-white fluorescence point from the fluorescence image, and may recognize a distribution of the extracted bright-white fluorescence point as a sebum distribution of the skin. Therefore, the analysis unit 15 may obtain an area ratio or intensity of the bright-white fluorescence point, etc. to diagnose a skin condition such as a sebum distribution, sebum content, and the like of a user).
As per claim 11, Jeong discloses the method of claim 1, but does not explicitly disclose the following limitation as further recited however Yoo discloses wherein, prior to the step (b), the at least one color channel image is filtered by using: Smoothing filter and/or frequency filter (Yoo, ¶0056, the skin diagnosis device can remove noise edges by blurring the image using a Gaussian filter to improve the performance of edge detection).
It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to combine the teachings of Yoo with Jeong because they are in the same field of endeavor. One skilled in the art would have been motivated to include the Gaussian / smoothing filter as taught by Yoo in the system of Jeong in order to provide a means to remove edge noise to improve edge detection (Yoo, ¶0056).
Claim(s) 13-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Jeong et al., U.S. Publication No. 2019/0295728, hereinafter, “Jeong” as applied to claim 1 above, and further in view of Jiang et al., U.S. Publication No. 2021/0012493, hereinafter, “Jiang”.
As per claim 13, Jeong discloses the method of claim 12, but does not explicitly disclose the following limitations as further recited; however, Jiang discloses
wherein the cosmetic skin attribute is generated as a value indicative of a condition of the cosmetic skin attribute of the at least one portion of skin of the person relative to a defined population of people, and is generated as a function of disorder value of at least one image defined by F(Disorder Value), wherein said function is determined by a model established upon a training dataset wherein the training dataset comprises: (i) a plurality of images of the defined population of people, wherein each of the plurality of images comprises facial skin of a person in the defined population of people (Jiang, ¶0044, training techniques of convolutional neural networks (CNN) are expanded. In accordance with an example there is derived a method to fit with a nature of this grading problem: a regression task with integer-only labels. Therefore, in accordance with an example, a system is designed to have one regression objective and another auxiliary classification objective during training. In accordance with an example, added are gender prediction and ethnicity prediction as two extra auxiliary tasks. Experiments on these tasks show improved performance with introducing these tasks. In addition, in accordance with an example, unlike many other works on medical imaging, the model is trained and tested on a selfie dataset consisting of facial images taken by mobile devices and the model demonstrates that this end-to-end model works accurately on selfies in the wild; Jiang, ¶0045, an original dataset consists of 5971 images collected from 1051 subjects of five different ethnicities, where three images had been captured by mobile phones for each subject: from the frontal and two profile views);
(ii) an associated class definition based on the cosmetic skin attribute (Jiang, ¶0045, Each subject was assigned an integer score from 0 to 5 using GEA standard [3], by three dermatologists based on their expert assessment on corresponding images. For this scoring model, a dataset of 1877 frontal images are used. Ground truth is defined as the majority score of the scores of the three dermatologists).
It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to combine the teachings of Jiang and Jeong because they are in the same field of endeavor. One skilled in the art would have been motivated to include the training dataset as taught by Jiang in the system of Jeong in order to provide an alternate means to diagnose skin conditions (Jiang, ¶0045).
As per claim 14, Jeong discloses the method of claim 12, but does not explicitly disclose the following limitations as further recited; however, Jiang discloses wherein the cosmetic skin attribute is generated as a function of the disorder value in combination with basal skin color at the tile defined by F(Disorder Value, Basal Skin Color) (Jiang, ¶0069, the CNN model is also configured to output a score as well as a gender and ethnicity vector for each of the k augmented images that are processed; Jiang, ¶0070, mask accuracy is tested by comparing to the mask calculated using ground truth coordinates of acne lesions. For example, a mask is outputted by aggregating all circles centered at the coordinates of acne lesions).
It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to combine the teachings of Jiang and Jeong because they are in the same field of endeavor. One skilled in the art would have been motivated to include the training dataset as taught by Jiang in the system of Jeong in order to provide an alternate means to diagnose skin conditions (Jiang, ¶0045).
As per claim 15, Jeong and Jiang disclose the method of claim 13, wherein the model is a regression model or a classification model; wherein said model is preferably a classification model, more preferably a machine learning classification model, most preferably a machine learning random forest classification model or Gradient Boosting classification model (Jiang, ¶0069, the CNN model is also configured to output a score as well as a gender and ethnicity vector for each of the k augmented images that are processed).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TRACY MANGIALASCHI whose telephone number is (571)270-5189. The examiner can normally be reached M-F, 9:30AM TO 6:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TRACY MANGIALASCHI/Primary Examiner, Art Unit 2668