Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-2, 6-7, 12-13, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over KOSHINO et al. (JP 2018105792 A) in view of ZHANG et al. (CN 112634156 A) and DESCHAINTRE et al. (Single-Image SVBRDF Capture with a Rendering-Aware Deep Network).
As per claim 1, Koshino teaches the claimed “subject material determination method,” comprising: “acquiring a first image to be processed and a second image to be processed which comprise a subject to be recognized and under a same visual angle, wherein the first image to be processed is determined in response to a flash lamp being turned on” (Koshino, page 2 - The measurement object 14 is irradiated with light emitted from the light source 10. Then, an image of the measurement object 14 is taken by the imaging means 12. At that time, in the illustrated example, the angle of the light emitted from the light source 10 to the measurement object 14 is changed by changing the position of the light source 10. Specifically, as shown in FIG. 1, the position where the angle formed between the direction of the irradiated light and the measurement target surface 14a of the measurement target 14 is 0 degrees is set as the initial position of the light source 10 (S11). Thereafter, the light source 10 is moved horizontally while maintaining a constant distance from the measurement object 14) (Note: Koshino’s changing of light source 10’s angle suggests the claimed different light sources including a flash light, see Zhang, Abstract - The invention claims a method for collecting reflection parameter of image estimation material based on portable device. The method comprises the following steps: shooting the material environment light picture and the flash light picture); “determining a highlight center point of the first image to be processed, and determining material information to be fused according to the highlight center point and the first image to be processed” (Koshino, page 8 - This makes it possible to extract pixels having extremely different luminance values from the captured image as compared to the surrounding images. Then, the number and area of bright spots of each difference image are calculated (S37). 
And the number and area of this calculated bright spot can be made into the pearl feeling of cosmetics; Zhang, page 4 - 3. The pixel-by-pixel calibration to the direction of the camera and light source, distance and real irradiance… Because the flashlight is not likely to accurately fall in the center of the image, so the centroid of the most bright point of the selected image is the XY coordinate origin of the flashlight); “determining color information according to the second image to be processed” (Koshino, page 6 - the surface on which the cosmetic is applied is photographed while changing the angle of light that irradiates the surface, a plurality of images are acquired, and RGB values are acquired from each image); and “determining target material information of the subject to be recognized according to the material information to be fused and the color information” (Koshino, page 7 - the method of the present embodiment emphasizes both a glossy feeling and a pearly feeling, especially a cosmetic material in which a plurality of textures are combined. Or, it can be suitably used for cosmetics, for example, lipsticks, whose products are evaluated mainly by a glossy feeling and a pearly feeling). As noted, instead of changing the angle of the light source as in Koshino, flash-light and normal-light conditions can be implemented, as claimed, to provide the different light-source conditions (Zhang, Abstract - The invention claims a method for collecting reflection parameter of image estimation material based on portable device. The method comprises the following steps: shooting the material environment light picture and the flash light picture; see also Deschaintre, Abstract - our network is capable of recovering per-pixel normal, diffuse albedo, specular albedo and specular roughness from a single picture of a flat surface lit by a hand-held flash).
Thus, it would have been obvious, in view of Zhang and Deschaintre, to configure Koshino’s method as claimed by using a flash light to implement the different light conditions on the captured images. The purpose is to detect the visual properties of the cosmetic materials.
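For illustration of the centroid step that Zhang is cited for (taking the centroid of the brightest pixels as the flashlight's XY origin because the highlight rarely falls at the exact image center), a minimal hypothetical sketch follows. This is not code from any cited reference; the function name, the percentile cutoff, and the `top_fraction` parameter are assumptions made solely for illustration.

```python
import numpy as np

def highlight_centroid(gray, top_fraction=0.01):
    """Return the (row, col) centroid of the brightest pixels in a
    grayscale image, used as the flash-light XY origin (per Zhang's
    characterization; names and cutoff are illustrative only)."""
    # Sort all pixel values and pick a brightness cutoff so that roughly
    # the top `top_fraction` of pixels are kept as "the most bright point".
    flat = np.sort(gray, axis=None)
    cutoff = flat[int((1.0 - top_fraction) * (flat.size - 1))]
    # Centroid of every pixel at or above the cutoff.
    rows, cols = np.nonzero(gray >= cutoff)
    return rows.mean(), cols.mean()
```

For example, an image whose only bright pixels form a small block centered off-center would yield that block's center as the flashlight origin, even though it is not the image center.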
Claim 2 adds into claim 1 “wherein the acquiring a first image to be processed and a second image to be processed which comprise a subject to be recognized and under a same visual angle, comprises: when a material determination control is detected to have been triggered, activating the flash lamp and photographing the first image to be processed of the subject to be recognized being coated onto a target sphere; and when the flash lamp is turned off, photographing the second image to be processed of the subject to be recognized being coated onto the target sphere, wherein the first image to be processed and the second image to be processed are photographed based on the same visual angle” (Koshino, page 2 - The measurement object 14 is irradiated with light emitted from the light source 10. Then, an image of the measurement object 14 is taken by the imaging means 12. At that time, in the illustrated example, the angle of the light emitted from the light source 10 to the measurement object 14 is changed by changing the position of the light source 10. Specifically, as shown in FIG. 1, the position where the angle formed between the direction of the irradiated light and the measurement target surface 14a of the measurement target 14 is 0 degrees is set as the initial position of the light source 10 (S11). Thereafter, the light source 10 is moved horizontally while maintaining a constant distance from the measurement object 14… In the first texture measuring method of the present invention, a surface on which a cosmetic is applied is photographed while changing the angle of light that irradiates the surface, a plurality of images are acquired, and an RGB value is acquired from each image).
Claim 6 adds into claim 1 wherein the determining material information to be fused according to the highlight center point and the first image to be processed, comprises: “determining a target normal map of the first image to be processed” (Zhang, Abstract - The method comprises the following steps: shooting the material environment light picture and the flash light picture; estimating the roughness of the material according to the ambient light pattern; the mirror reflection coefficient; the diffuse reflection coefficient and the normal mapping); and “processing the target normal map and information of the highlight center point based on a pre-trained parameter generation model to acquire a material parameter to be fused” (Zhang, Abstract - The invention can be used for conveniently estimating the SVBRDF parameter of the material) (see also Deschaintre, page 2, column 2 - we introduce a method to recover spatially-varying diffuse, specular and normal maps from a single image captured under flash lighting… We introduce a rendering loss that evaluates how well a prediction reproduces the appearance of a ground-truth material sample; page 9, column 1 - We found it to perform particularly well on materials exhibiting bold large-scale features, where the normal maps capture sharp and complex geometric shapes from the photographed surfaces). Thus, it would have been obvious, in view of Zhang and Deschaintre, to configure Koshino’s method as claimed by estimating a normal map from the image lighted by a flash light for use in acquiring the material parameter. The purpose is to detect the visual properties of the cosmetic materials.
Claim 7 adds into claim 1 wherein the determining color information according to the second image to be processed, comprises: “determining the color information of the subject to be recognized according to a Red-Green-Blue (RGB) value of a pixel point of the subject to be recognized in the second image to be processed” (Koshino, page 2 - In the first texture measuring method of the present invention, a surface on which a cosmetic is applied is photographed while changing the angle of light that irradiates the surface, a plurality of images are acquired, and an RGB value is acquired from each image).
Claims 12-13, 15, and 19 claim a device and a storage medium based on the method of claims 1-2 and 6-7; therefore, they are rejected under a similar rationale.
Claims 3-5 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over KOSHINO et al. (JP 2018105792 A) in view of ZHANG et al. (CN 112634156 A) and DESCHAINTRE et al. (Single-Image SVBRDF Capture with a Rendering-Aware Deep Network), and further in view of SAWAKI et al. (JP 2015211357 A).
Claim 3 adds into claim 1 “wherein the determining a highlight center point of the first image to be processed, comprises: dividing the first image to be processed into at least one to-be-processed sub-region based on a preset region size; determining a brightness value corresponding to each to-be-processed sub-region, and taking a to-be-processed sub-region with a highest brightness value as a target sub-region; and taking a center point of the target sub-region as the highlight center point” which Koshino does not explicitly teach. However, Koshino’s areas of bright spots (Koshino, page 8 - This makes it possible to extract pixels having extremely different luminance values from the captured image as compared to the surrounding images. Then, the number and area of bright spots of each difference image are calculated (S37). And the number and area of this calculated bright spot can be made into the pearl feeling of cosmetics) suggest the claimed “determining a brightness value corresponding to each to-be-processed sub-region, and taking a to-be-processed sub-region with a highest brightness value as a target sub-region; and taking a center point of the target sub-region as the highlight center point” (e.g., Zhang, page 4 - 3. The pixel-by-pixel calibration to the direction of the camera and light source, distance and real irradiance… Because the flashlight is not likely to accurately fall in the center of the image, so the centroid of the most bright point of the selected image is the XY coordinate origin of the flashlight; Deschaintre, page 10, column 1 - we find it particularly impressive that the low roughness level was apparently resolved based on the small highlight cues on the center tile and the edges of the outer tiles. For most materials, the specular albedo is resolved as monochrome, as it should be.
Similar globally consistent behavior can be seen across the result set: cues from sparsely observed specular highlights often inform the specularity across the entire material); furthermore, Sawaki teaches the claimed “dividing the first image to be processed into at least one to-be-processed sub-region based on a preset region size” (Sawaki, page 9 - the determination unit 142 irradiates from the plurality of areas 102a to 102i obtained by dividing the captured image 102 so that the filter processing unit 143 and the recognition processing unit 15 perform processing on the irradiation area of the captured image 102… When there are a plurality of areas having an average luminance value equal to or greater than a predetermined threshold, the determination unit 142 may determine, for example, an area having the highest average luminance value among the plurality of areas as an irradiation area). Thus, it would have been obvious, in view of Zhang, Deschaintre, and Sawaki, to configure Koshino’s method as claimed by dividing the first image to be processed into at least one to-be-processed sub-region based on a preset region size. The purpose is to detect the visual properties of the cosmetic materials applied on the pre-determined regions of the captured image.
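For illustration of the claim-3 step as this rejection characterizes it (and as Sawaki's areas 102a to 102i suggest), a minimal hypothetical sketch follows: tile the flash image into sub-regions of a preset size, take the sub-region with the highest mean brightness, and use that sub-region's center as the highlight center point. All names and the default region size are assumptions for illustration, not code from any cited reference.

```python
import numpy as np

def highlight_center(gray, region=8):
    """Tile `gray` into non-overlapping region x region blocks, pick the
    block with the highest mean brightness, and return its center point
    as (row, col). Illustrative sketch only."""
    best, center = -1.0, None
    h, w = gray.shape
    for r0 in range(0, h - region + 1, region):
        for c0 in range(0, w - region + 1, region):
            mean = gray[r0:r0 + region, c0:c0 + region].mean()
            if mean > best:  # keep the brightest sub-region seen so far
                best = mean
                center = (r0 + region // 2, c0 + region // 2)
    return center
```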
Claim 4 adds into claim 3 “wherein the dividing the first image to be processed into at least one to-be-processed sub-region based on a preset region size, comprises: dividing a first to-be-processed region into the at least one to-be-processed sub-region according to the preset region adjustment size by taking a pixel point as a region adjustment step” (Sawaki, page 9 - the determination unit 142 irradiates from the plurality of areas 102a to 102i obtained by dividing the captured image 102 so that the filter processing unit 143 and the recognition processing unit 15 perform processing on the irradiation area of the captured image 102… When there are a plurality of areas having an average luminance value equal to or greater than a predetermined threshold, the determination unit 142 may determine, for example, an area having the highest average luminance value among the plurality of areas as an irradiation area) (Note: Sawaki’s preset size of the examined areas is arbitrary and adjustable depending on the characteristics of the material (e.g., page 8 - The determination unit 142 divides the captured image 102 output from the image buffer unit 141 into a predetermined number of areas. In the example of FIG. 12, the determination unit 142 divides the captured image 102 into nine 3 × 3 areas 102a to 102i)). Thus, it would have been obvious, in view of Zhang, Deschaintre, and Sawaki, to configure Koshino’s method as claimed by dividing the first image to be processed into at least one to-be-processed sub-region based on an adjustable preset region size. The purpose is to detect the visual properties of the cosmetic materials applied on the pre-determined regions of the captured image.
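The claim-4 variant, as characterized, slides the preset-size window by a single-pixel step rather than tiling, so the sub-region can be positioned anywhere in the image. A minimal hypothetical sketch (illustrative only; names and defaults are assumptions, not from any cited reference):

```python
import numpy as np

def highlight_center_sliding(gray, region=8):
    """Slide a region x region window one pixel at a time, pick the
    placement with the highest mean brightness, and return its center
    as (row, col). Illustrative sketch only."""
    h, w = gray.shape
    best, center = -1.0, None
    for r0 in range(h - region + 1):        # one-pixel step in rows
        for c0 in range(w - region + 1):    # one-pixel step in columns
            mean = gray[r0:r0 + region, c0:c0 + region].mean()
            if mean > best:
                best, center = mean, (r0 + region // 2, c0 + region // 2)
    return center
```

Unlike the tiled version, this locates a bright block exactly even when it does not align with a fixed grid.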
Claim 5 adds into claim 3 “determining a weight value corresponding to each pixel point in each to-be-processed sub-region based on a Poisson distribution” which is obvious as a mathematical model in which the number k (i.e., the weight) of pixels whose brightness value exceeds a threshold within a large image area is estimated by a Poisson distribution Pr(X = k) (i.e., the number of successful events (e.g., pixels with the highest brightness values) occurring in a fixed large interval (e.g., an area containing a large number of pixels) when these events happen independently at a constant average rate); “determining a region brightness value of each to-be-processed sub-region based on the weight value and a pixel brightness value corresponding to each pixel point; and taking a to-be-processed sub-region corresponding to a highest region brightness value as the target sub-region” (Sawaki, page 9 - the determination unit 142 irradiates from the plurality of areas 102a to 102i obtained by dividing the captured image 102 so that the filter processing unit 143 and the recognition processing unit 15 perform processing on the irradiation area of the captured image 102… When there are a plurality of areas having an average luminance value equal to or greater than a predetermined threshold, the determination unit 142 may determine, for example, an area having the highest average luminance value among the plurality of areas as an irradiation area). Thus, it would have been obvious, in view of Zhang, Deschaintre, and Sawaki, to configure Koshino’s method as claimed by using a Poisson distribution as the mathematical model for estimating the number of brightest pixels occurring within a large area. The purpose is to approximately detect the visual properties of the cosmetic materials applied on the pre-determined regions of the captured image.
Claims 16-18 claim a device based on the method of claims 3-5; therefore, they are rejected under a similar rationale.
Claims 8-10 and 20-22 are rejected under 35 U.S.C. 103 as being unpatentable over KOSHINO et al. (JP 2018105792 A) in view of ZHANG et al. (CN 112634156 A) and DESCHAINTRE et al. (Single-Image SVBRDF Capture with a Rendering-Aware Deep Network), and further in view of CHENG et al. (CN 109472655 A).
Claim 8 adds into claim 1 “displaying an object corresponding to the subject to be recognized on a target display interface, and taking the object corresponding to the subject to be recognized as an object to be tried” which Deschaintre suggests in Figure 14, which displays different objects corresponding to their respective subjects (see also Cheng, page 19 - the first client, installed in the test device, used for loading the test data, received after the target trial data object for request of the trial, for image acquisition; said server is further used for, from the first client collected image data in the target image content identification, determining the position of the target image content; said first client terminal is further used for trial data object to the target according to the target position of the image content corresponding to material information for rendering display). Thus, it would have been obvious, in view of Zhang, Deschaintre, and Cheng, to configure Koshino’s method as claimed by using the object corresponding to the subject to be recognized as an object to be tried. The purpose is to approximately detect the visual properties of the cosmetic materials applied on the pre-determined regions of the captured image.
Claim 9 adds into claim 8 “acquiring a face image corresponding to a target audience and retrieving the target material information corresponding to a triggered target trial object, when an instruction of object trial is detected; determining target rendered material information based on the face image and the target material information; and adding a target effect to the face image based on the target rendered material information to acquire a target effect image” (Cheng, page 21 - the first client is installed in the test device, used for loading the test data, receiving the request of the target data object after trial, collecting human face image, and from the collected image data for positions of the eyes and cheek edge position identification, trial data object to the target at the positions of the eyes and cheeks edge position corresponding to the material information for rendering display). Thus, it would have been obvious, in view of Zhang, Deschaintre, and Cheng, to configure Koshino’s method as claimed by applying the cosmetic material on a pre-determined region corresponding to the human face to be recognized as an object to be tried. The purpose is to approximately detect the visual properties of the cosmetic materials applied on the pre-determined regions of the captured image.
Claim 10 adds into claim 9 “wherein the determining target rendered material information based on the face image and the target material information” (Cheng, page 21 - the first client is installed in the test device, used for loading the test data, receiving the request of the target data object after trial, collecting human face image, and from the collected image data for positions of the eyes and cheek edge position identification, trial data object to the target at the positions of the eyes and cheeks edge position corresponding to the material information for rendering display), comprises: “determining to-be-used light brightness based on the face image; and adjusting the target material information according to the to-be-used light brightness to acquire the target rendered material information” (Koshino, page 2 - The measurement object 14 is irradiated with light emitted from the light source 10. Then, an image of the measurement object 14 is taken by the imaging means 12. At that time, in the illustrated example, the angle of the light emitted from the light source 10 to the measurement object 14 is changed by changing the position of the light source 10. Specifically, as shown in FIG. 1, the position where the angle formed between the direction of the irradiated light and the measurement target surface 14a of the measurement target 14 is 0 degrees is set as the initial position of the light source 10 (S11). Thereafter, the light source 10 is moved horizontally while maintaining a constant distance from the measurement object 14… In the first texture measuring method of the present invention, a surface on which a cosmetic is applied is photographed while changing the angle of light that irradiates the surface, a plurality of images are acquired, and an RGB value is acquired from each image). 
Thus, it would have been obvious, in view of Zhang, Deschaintre, and Cheng, to configure Koshino’s method as claimed by applying the cosmetic material on a pre-determined region corresponding to the human face to be recognized as an object to be tried. The purpose is to approximately detect the visual properties of the cosmetic materials applied on the pre-determined regions of the captured image.
Claims 20-22 claim a device based on the method of claims 8-10; therefore, they are rejected under a similar rationale.
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 13 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim(s) does/do not fall within at least one of the four categories of patent eligible subject matter because the claimed “storage medium” can be a carrier wave embodying signals.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHU K NGUYEN whose telephone number is (571) 272-7645. The examiner can normally be reached M-F, 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel F. Hajnik can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PHU K NGUYEN/Primary Examiner, Art Unit 2616