Prosecution Insights
Last updated: April 19, 2026
Application No. 18/594,659

SYSTEM AND METHOD FOR MEASURING SURFACE FEATURES ON SKIN

Non-Final OA: §101, §103, §112

Filed: Mar 04, 2024
Examiner: DU, HAIXIA
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: The Gillette Company LLC
OA Round: 1 (Non-Final)

Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% — above average (477 granted / 553 resolved, +24.3% vs TC avg)
Interview Lift: +18.0% — strong (resolved cases with interview)
Avg Prosecution: 2y 6m (22 currently pending)
Total Applications: 575 (career history, across all art units)
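The headline examiner figures reduce to simple ratios over the 553 resolved cases. Below is a minimal sketch of that arithmetic; the back-calculated Tech Center average is an assumption about how the "+24.3% vs TC avg" delta is defined (a percentage-point difference), since the page does not state it.

```python
# Career allow rate from the counts shown above: 477 granted of 553 resolved.
granted = 477
resolved = 553
allow_rate = granted / resolved              # ~0.863, displayed as 86%

# Assumption: the "+24.3% vs TC avg" delta is a percentage-point difference,
# so the implied Tech Center average allow rate is roughly allow_rate - 0.243.
tc_avg_estimate = allow_rate - 0.243         # ~0.620

print(f"Career allow rate: {allow_rate:.1%}")
print(f"Implied TC average: {tc_avg_estimate:.1%}")
```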

Statute-Specific Performance

§101: 10.0% (-30.0% vs TC avg)
§103: 50.1% (+10.1% vs TC avg)
§102: 8.4% (-31.6% vs TC avg)
§112: 20.2% (-19.8% vs TC avg)

Tech Center average is shown as an estimate; based on career data from 553 resolved cases.
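Each statute line pairs what is presumably this examiner's rejection rate with a delta against the Tech Center average. A short sketch, assuming the delta is a simple percentage-point difference (examiner rate minus TC average), that backs out the implied TC averages from the figures above:

```python
# Examiner rate and delta vs Tech Center average, per statute (percentage points),
# copied from the cards above.
stats = {
    "§101": (10.0, -30.0),
    "§103": (50.1, +10.1),
    "§102": (8.4, -31.6),
    "§112": (20.2, -19.8),
}

for statute, (examiner_rate, delta) in stats.items():
    tc_avg = examiner_rate - delta  # assumed: delta = examiner rate minus TC average
    print(f"{statute}: examiner {examiner_rate:.1f}%, implied TC average {tc_avg:.1f}%")
```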

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-22 are present for examination.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 17-20 and 22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. MPEP 2106 III provides a flowchart for the subject matter eligibility test for products and processes. The claim analysis following the flowchart is as follows.

Regarding claim 17, it recites: A method comprising: receive, at a processor, 3D calibration data and a plurality of 3D bitmap images of a surface over a respective plurality of frames; automatically determine, with the processor, whether a surface feature in the 3D bitmap image for each frame is in focus; automatically determine, with the processor, a 3D model of the surface features based on the 3D calibration data and one or more of the 3D bitmap images where the surface feature is in focus; automatically determine, with the processor, a value of one or more parameters of the surface feature that is in focus based on the 3D model for the plurality of frames; and automatically calculate, with the processor, a characteristic value of the one or more parameters of the surface feature over the plurality of frames; and store, with the processor, the calculated characteristic value of the one or more parameters of the surface feature and an identifier that indicates the surface feature.

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes. It recites a method, which is a process.

Step 2A, Prong One: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes. The "determine … whether a surface feature in the 3D bitmap image for each frame is in focus" step can be performed as a mental process because a person can look at the 3D bitmap image and determine whether a surface feature in it is in focus. The "determine … a 3D model of the surface features based on the 3D calibration data and one or more of the 3D bitmap images where the surface feature is in focus" step can be performed as a mental process with the simple aid of pen and paper because a person can draw a 3D model based on the 3D calibration data and 3D bitmap images. The "determine … a value of one or more parameters of the surface feature that is in focus based on the 3D model for the plurality of frames" step can be performed mentally because a person can look at the 3D model to determine values of parameters of the surface features. The "calculate … a characteristic value of the one or more parameters of the surface feature over the plurality of frames" step can be performed as a mathematical concept because it expressly recites mathematical calculations, or as a mental process when the calculations can be done mentally. The claim recites a "processor" to perform these abstract ideas. However, the processor is recited at such a high level, without any details, that it can only be considered a generic computer component. According to MPEP 2106.04(a)(2) III.C, a claim that requires a computer may still recite a mental process. 
Here, the mental processes are merely performed using a processor, as part of a generic computer, in a computer environment, or as a tool. Therefore, they are still mental processes.

Step 2A, Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No. The "receive" step and the "store" step are additional elements, but they can be considered insignificant extra-solution activity, such as data gathering (the receive step) and simply storing/outputting results (the store step). Even if the processor recited in the claim can be considered an additional element, it is still a generic computer component without any detailed structure. Therefore, these additional elements cannot integrate the abstract ideas into a practical application. Therefore, this judicial exception is not integrated into a practical application because the additional elements are either insignificant extra-solution activity or generic computer components.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No. As discussed above, the additional elements are either insignificant extra-solution activity or generic computer components. Therefore, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Therefore, claim 17 is not eligible subject matter under 35 USC 101.

Regarding claim 18, it depends from claim 17 and further recites wherein the automatically determining the 3D model of the surface is based on the 3D calibration data and the 3D bitmap image for a first frame of the plurality of frames; wherein the automatically determining the value of the one or more parameters of the surface feature based on the 3D model is for the first frame of the plurality of frames; and wherein the method further comprises: automatically determine, with the processor, an updated 3D model of the surface based on the 3D calibration data and the 3D bitmap image for each of a next frame after the first frame where the surface feature is in focus; and automatically determine, with the processor, the value of the one or more parameters of the surface feature based on the updated 3D model for each of the next frame after the first frame. The determining steps remain mental processes; merely limiting the 3D bitmap image to a first frame does not change that. The step "determine … an updated 3D model based on the 3D calibration data and the 3D bitmap image for each of a next frame after the first frame where the surface feature is in focus" can be performed mentally with the simple aid of pen and paper, similar to the "determine … 3D model" step discussed above with respect to claim 17. The step "determine … the value of the one or more parameters of the surface feature based on the updated 3D model for each of the next frame after the first frame" can be performed as a mental process because a person can determine parameter values based on a 3D model. The claim does not integrate these abstract ideas into a practical application or amount to significantly more, for reasons similar to those discussed above with respect to claim 17. Therefore, claim 18 is not eligible subject matter under 35 USC 101. 
Regarding claim 19, it depends from claim 17 and further recites "where the surface is a skin surface and the surface feature is a hair." It simply further limits the data without introducing additional elements that can integrate the abstract ideas into a practical application or amount to significantly more, for reasons similar to those discussed above with respect to claim 17. Therefore, claim 19 is not eligible subject matter under 35 USC 101.

Regarding claim 20, it depends from claim 17 and further recites "the automatically determining the value of the parameter of the surface feature in the 3D model comprises automatically identifying a location of the surface feature in the 3D model for the first frame; and wherein the automatically determining the value of the parameter of the surface feature in the updated 3D model comprises automatically identifying a location of the surface feature in the updated 3D model for each of the next frame after the first frame." The claim recites two identifying steps, which can be performed as mental processes because a person can look at the 3D model and the updated 3D model to identify locations of surface features. Therefore, no additional elements are recited that can integrate the abstract ideas of claim 20 into a practical application or amount to significantly more. Therefore, claim 20 is not eligible subject matter under 35 USC 101.

Regarding claim 22, it recites limitations similar to those of claim 17, further limiting the surface to a skin surface and the surface feature to a hair. Limiting the input data does not introduce any additional elements that can integrate the abstract ideas into a practical application or amount to significantly more. Therefore, claim 22 is not eligible subject matter under 35 USC 101.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 7-10, 17-20, and 22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 7 recites the limitations "the first image data" and "the second image data" in line 2. There is insufficient antecedent basis for these limitations in the claim. Claims 8-10 depend from claim 7 but fail to cure the deficiencies of claim 7.

Regarding claim 17, it recites "determine … a value of one or more parameters of the surface feature that is in focus" and "calculate … a characteristic value of the one or more parameters of the surface feature". It is not clear whether the characteristic value can be the same as or different than the determined value, and, if they are different, what the difference between these two values is. Claims 18-20 depend from claim 17 but fail to cure the deficiencies of claim 17. Claim 22 recites similar limitations discussed above with respect to claim 17. 
In addition, claim 18 recites “the 3D model of the surface” while its parent claim 17 recites “a 3D model of the surface features”. There is insufficient antecedent basis for the limitation “the 3D model of the surface” in the claim. Claim 20 depends from claim 18 but fails to cure the deficiencies of claim 18. In addition, claim 20 recites “the automatically determining the value of the parameter of the surface feature in the 3D model”. There is insufficient antecedent basis for the limitation. Therefore, claims 7-10, 17-20, and 22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1, 3-9, and 14-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sun (Sun et al., Scheimpflug Camera-Based Stereo-Digital Image Correlation for Full-Field 3D Deformation Measurement) in view of Chinese Patent Publication No. CN 106604015 A to Yang. Regarding claim 1, Sun discloses A system (Sun, Figure 3, showing a system) comprising: a plurality of cameras (Sun, Figure 3, showing two cameras); a plurality of optical elements configured to receive light from an area of a surface having one or more features and further configured to direct the light to the plurality of cameras (Sun, p. 5, col. 1, Sec. 4.1, 2nd para., disclosing two cameras to record the images of the specimen surface, equipped with Kowa lenses, 3rd para.-col. 2, 1st para., disclosing the specimen has a suitable speckle pattern, and two LED lamps, indicating the lenses can correspond to a plurality of optical elements configured to receive light from the specimen surface corresponding to an area of a surface having the suitable speckle pattern as one or more features and further configured to direct the light to the plurality of camera); at least one processor communicatively coupled with the plurality of cameras; and at least one memory including one or more sequences of instructions, the at least one memory and the one or more sequences of instructions configured to, with the at least one processor (Sun, p. 5, col. 
2, 1st para., disclosing a specially tailored experimental data processing software, indicating the system should have at least one processor communicatively coupled with the cameras and at least one memory including the software as one or more sequences of instruction), cause the system to perform at least the following, determine 3D calibration data of the plurality of cameras (Sun, p. 6, col. 1, Sec. 4.2, 1st para., disclosing determining calibration data including intrinsic and extrinsic parameters using the stepwise stereo camera calibration technique. Because the calibration is for the stereo camera, the calibration data can be considered as 3D calibration data of the two cameras as the plurality of cameras); automatically receive image data of the area in focus from the plurality of cameras over a plurality of frames (Sun, p. 5, col. 1, Sec. 4.1, 2nd para., disclosing two cameras simultaneously record the images of the specimen surface, 3rd para., disclosing adjusting the cameras and lenses to obtain entirely focused images with small distortions, p. 7, col. 1, 1st para., disclosing stereo images are simultaneously captured for each position of the specimen, p. 8, col. 1, Sec. 4.4, 1st para., disclosing a series of image pairs are recorded over a series of load, indicating the recorded series of image pairs over a series of load can correspond to the image data of the specimen as the area in focus from the two cameras as the plurality of cameras over a series of load corresponding to a plurality of frames); automatically determine a 3D image for each of the plurality of frames based on the image data for each of the plurality of frames (Sun, p. 5, col. 1, Sec. 4.1, 2nd para., disclosing two cameras simultaneously record the images of the specimen surface, 3rd para., disclosing adjusting the cameras and lenses to obtain entirely focused images with small distortions, p. 7, col. 1, 1st para., disclosing stereo images are simultaneously captured for each position of the specimen, p. 8, col. 1, Sec. 4.4, 1st para., disclosing a series of image pairs are recorded over a series of load, indicating the stereo images can correspond to a 3D image for each of the plurality of frames determined based on the recorded images for each of the plurality of frames); and store the 3D calibration data and the 3D images over the plurality of frames in the memory (Sun, p. 6, col. 1, Sec. 4.2, 1st para., disclosing determining the calibration data including the intrinsic and extrinsic parameters, 2nd para., disclosing verifying the calibration results by reconstructing the 3D coordinates and structure of checkerboards via the rig with the calibrated parameters, indicating the determined calibration data including the intrinsic and extrinsic parameters must have been stored in the memory for the verification stage; p. 8, col. 1, Sec. 4.4, disclosing a series of image pairs are recorded as the deformed configuration, indicating the series of image pairs can correspond to the 3D images over a plurality of frames being recorded corresponding to be stored in the memory). However, Sun does not expressly disclose the 3D image to be 3D bitmap image. On the other hand, Yang discloses a plurality of cameras (Yang, Translation, para. [0009], a terminal having two rear cameras), at least one processor communicatively coupled with the plurality of cameras (Yang, Translation, para. [0009], a terminal having two rear cameras, para. 
[0097], disclosing the terminal includes at least a transceiver and a processor); and at least one memory including one or more sequences of instructions, the at least one memory and the one or more sequences of instructions configured to, with the at least one processor (Yang, Translation, para. [0097], disclosing the terminal may further include a memory, para. [0110], disclosing the method can be implemented by a computer program, which can be stored in a computer-readable storage medium such as a memory), cause the system to perform at least the following, determine a 3D bitmap image for each of the plurality of frames based on the image data for each of the plurality of frames (Yang, Translation, para. [0011], disclosing based on the two acquired images, generate a stereo image and a stereo bitmap corresponding to the stereo image, para. [0055], disclosing the stereoscopic bitmap as 3D bitmap, indicating the stereo bitmap can correspond to a 3D bitmap image for each of the two acquired images as the plurality of frames determined based on the two acquired images corresponding to the image data for each of the plurality of frames). Before the invention was effectively filed, it would have been obvious for a person skilled in the art to combine Sun and Yang. The suggestion/motivation would have been to provide image processing that can present actual location of each scene to the user, as suggested by Yang (see Yang, Translation, para. [0007]). Regarding claim 3, Sun in view of Yang discloses the system of claim 1, wherein the plurality of cameras define a respective plurality of image planes (Sun, FIGURE 1 and FIGURE 2, showing two cameras define respective two image planes); wherein the plurality of optical elements define a respective plurality of optical planes (Sun, p. 5, col. 1, Sec. 4.1, 2nd para., disclosing two cameras to record the images of the specimen surface, equipped with Kowa lenses, FIGURE 1, showing a lens plane for an imaging sensor which has an image plane, indicating the Kowa lenses corresponding to a plurality of optical elements can define a respective plurality of lens planes) and wherein the plurality of optical elements are configured such that each optical plane intersects at least one of the image planes within a plane of focus (Sun, p. 5, col. 1, Sec. 4.1, 2nd para., disclosing two cameras to record the images of the specimen surface, equipped with Kowa lenses, FIGURE 1, showing a lens plane for an imaging sensor which has an image plane, and the lens plane and image plane intersects at the object plane, p. 2, col. 1, Sec. 2.1, 1st para., disclosing the object plane as the plane that is in focus, indicating the Kowa lenses corresponding to a plurality of optical elements can configured such that each lens plane as optical plane intersects at least one of the image planes within the object plane as a plane of focus). Regarding claim 4, Sun in view of Yang discloses the system of claim 3, wherein the plane of focus is aligned with the surface (Sun, FIGURE 1, showing an object plane, p. 5, col. 1, Sec. 4.1, 2nd para., disclosing two cameras to record the images of the specimen surface, equipped with Kowa lenses, 3rd para.-col. 2, 1st para., disclosing the specimen has a suitable speckle pattern, indicating the specimen corresponding to the surface can be the object and therefore the object plane can be the plane of focus aligned with the surface). 
Regarding claim 5, Sun in view of Yang discloses the system of claim 1, wherein the plurality of optical elements are configured to reduce a first angular spread of light received from the area of the surface to a second angular spread of light incident on the plurality of cameras, wherein the second angular spread is less than the first angular spread (Sun, FIGGURE 1, showing the angular spread area on the object plane is larger than the angular spread area on the image plane when the lines passing through the lens, indicating the lens as optical elements are configured to reduce the first angular spread of light received from the area of the surface (corresponding to the area between the lens and the object plane bounded by the dashed lines) to a second angular spread of light incident on the plurality of cameras (corresponding to the area between the lens and the image sensor bounded by the dashed lines), and the second angular spread is less than the first angular spread). Regarding claim 6, Sun in view of Yang discloses the system of claim 5, wherein the plurality of cameras are spaced apart by a first distance that is less than a second distance to space the plurality of cameras to receive light having the first angular spread without the plurality of optical elements (Sun, FIGURE 1, showing the first angular spread between the object plane and the lens, and the second angular spread between the lens and the image sensor, FIGURE 3, showing two cameras spaced apart with a first distance. To receive a larger light having a larger angular spread (such as the first angular spread) without the optical elements, the cameras have to be spaced apart by a second distance that is larger than the first distance which allows the cameras to receive light having the (smaller) second angular spread). Regarding claim 7, Sun in view of Yang discloses the system of claim 1, wherein the system is a contactless system that is configured to receive the first image data and the second image data of the area of the surface without making contact with the surface (Sun, FIGURE 3, showing two cameras and the specimen is placed at a distance from the cameras, indicating the system is a contactless system that can receive the first image data and the second image data of the specimen corresponding to the area of the surface without making contact with the surface). Regarding claim 8, Sun in view of Yang discloses the system of claim 7, further comprising a housing defining an opening, wherein the plurality of cameras are positioned within the housing (Sun, FIGURE 3, showing a housing having two cameras, the apparatus having the specimen, and two LED lamps, the housing defines an opening); wherein the plurality of optical elements are positioned within the housing between the opening and the plurality of cameras and wherein the plurality of optical elements are configured to receive light through the opening from the area (Sun, FIGURE 3, showing a housing having two cameras, and two LED lamps, the housing defines an opening between the cameras and the specimen, the Kowa lenses as the optical elements are positioned within the housing between the opening and the cameras, and the lenses as the optical elements are configured to receive light through the opening from the specimen area). 
Regarding claim 9, Sun in view of Yang discloses the system of claim 8, wherein the housing is configured to be positioned at a distance from the area of the surface that is greater than a minimum distance threshold and less than a maximum distance threshold (Sun, FIGURE 3, showing a housing having two cameras, and two LED lamps, indicating the housing having the cameras is positioned at a distance to the specimen corresponding to the area of the surface, the distance is greater than 0 (minimum distance threshold) and less than the distance + 1 microns (a maximum distance threshold). Regarding claim 14, it recites similar limitations of claim 1 but in a method form. The rationale of claim 1 rejection is applied to reject claim 14. Regarding claim 15, Sun in view of Yang discloses the method of claim 14, further comprising: receive, at the processor, second image data of the area of the surface from the camera system (Sun, p. 6, col. 1, Sec. 4.2, 1st para., disclosing calibrating the cameras before the specimen testing, and during calibration, images of the checker-board under different orientations and positions are captured, 2nd para., disclosing verifying the calibration results using image pairs, indicating the obtained image pairs can correspond to the second image data of the area of the surface from the camera system received by the processor for verifying calibration); automatically determine, with the processor, whether the second image data is in focus with the surface (Sun, p. 5, col. 1, Sec. 4.1, last para., disclosing in order to obtain entirely focused images, the cameras and lenses are adjusted, p. 6, col. 1, Sec. 4.2, 1st para., disclosing calibrating the cameras before the specimen testing, and during calibration, images of the checker-board under different orientations and positions are captured to adjust the intrinsic and extrinsic parameters, 2nd para., disclosing verifying the calibration results using image pairs, indicating the image pairs can correspond to the second image data, and calibration verification will determine whether the image pairs are in focus with the surface); and wherein the automatic receiving of the first image data over the plurality of frames is based on the second image data being in focus (Sun, p. 5, col. 1, Sec. 4.1, last para., disclosing in order to obtain entirely focused images, the cameras and lenses are adjusted, p. 6, col. 1, Sec. 4.2, 1st para., disclosing calibrating the cameras before the specimen testing, and during calibration, images of the checker-board under different orientations and positions are captured to adjust the intrinsic and extrinsic parameters, 2nd para., disclosing verifying the calibration results using image pairs, indicating the entirely focused images as the first image data over the plurality of frames for specimen testing can be obtained after the calibration being verified (the image pairs are in focus)). Regarding claim 16, Sun in view of Yang discloses the method of claim 14, wherein the 3D calibration data is determined by capturing, with the plurality of cameras, image data of an object with a predetermined geometry at a plurality of separations between the plurality of cameras and the object (Sun, p. 6, col. 1, Sec. 
4.2, 1st para., disclosing calibrating the cameras before the specimen testing, and during calibration, images of the checker-board under different orientations and positions are captured to adjust the intrinsic and extrinsic parameters, the checker board can correspond to an object with a predetermined geometry and the images of the checker-board can correspond to the image data of an object with a predetermined geometry at different orientations and positions corresponding to a plurality of separations between the cameras and the object). Claim(s) 2 and 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sun in view of Yang, and further in view of US Patent Publication No. 20160324586 A1 to Zingaretti et al. Regarding claim 21, it recites similar limitations recited in claim 1 but in a method form and further requires the surface having one or more features is a skin surface having one or more hairs. The rationale of claim 1 rejection is applied to claim 21 but Sun or Yang does not expressly disclose a skin surface having one or more hairs. On the other hand, Zingaretti discloses the surface is a skin surface (Zingaretti, para. [0056], discloses using a stereo camera pair to obtain image data regarding the position and orientation of objects of interest (e.g., hair follicles, wrinkle lines, tattoos, moles, etc.) on the skin surface). Before the invention was effectively filed, it would have been obvious for a person skilled in the art to combine Sun in view of Yang with Zingaretti. The suggestion/motivation would have been to determine and track objects on the skin surface, as suggested by Zingaretti (see Zingaretti, para. [0056]). Regarding claim 2, Sun in view of Yang discloses the system of claim 1. However, Sun or Yang does not expressly disclose wherein the surface is a skin surface. On the other hand, Zingaretti discloses the surface is a skin surface (Zingaretti, para. [0056], discloses using a stereo camera pair to obtain image data regarding the position and orientation of objects of interest (e.g., hair follicles, wrinkle lines, tattoos, moles, etc.) on the skin surface). Before the invention was effectively filed, it would have been obvious for a person skilled in the art to combine Sun in view of Yang with Zingaretti. The suggestion/motivation would have been to determine and track objects on the skin surface, as suggested by Zingaretti (see Zingaretti, para. [0056]). Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sun in view of Yang as applied to claim 1 above, and further in view of US Patent Publication No. 20220087406 A1 to Kosecoff. Regarding claim 11, Sun in view of Yang discloses the system of claim 1, further comprising a radiation source configured to output a radiation signal to illuminate the surface features (Sun, FIGURE 3,, showing two LED lamps that can be a radiation source configured to output a radiation signal to illuminate the specimen thus illuminate the surface features). However, Sun or Yang does not expressly disclose wherein an absorption of the radiation signal in the surface feature is different from the surface. On the other hand, Kosecoff discloses an absorption of the radiation signal in the surface feature is different from the surface (Kosecoff, para. 
[0073], disclosing relating the absorption of light of a certain wavelength to a skin or hair condition, skin and hair conditions related to hair density, tone, and dryness can be identified by measuring the absorption of light, and skin and hair diagnosis can be made based on images from a camera, indicating the skin and the hair as the surface feature can have different absorption of the light as the radiation signal). Before the invention was effectively filed, it would have been obvious for a person skilled in the art to combine Sun in view of Yang with Kosecoff. The suggestion/motivation would have been to provide treatment based on skin and hair conditions, as suggested by Kosecoff (see Kosecoff, para. [00073]). Claim(s) 12 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Sun, Yang, and Kosecoff as applied to claim 11 above, and further in view of US Patent Publication No. 20080154247 A1 to Dallarosa et al. Regarding claim 12, the combination of Sun, Yang, and Kosecoff discloses the system of claim 11, wherein the surface feature is a hair and the surface is a skin surface (Kosecoff, para. [0073], disclosing skin and hair diagnosis based on the absorption of light of a certain wavelength, indicating the surface feature can be a hair and the surface can be a skin surface). Before the invention was effectively filed, it would have been obvious for a person skilled in the art to combine Sun in view of Yang with Kosecoff. The suggestion/motivation would have been to provide treatment based on skin and hair conditions, as suggested by Kosecoff (see Kosecoff, para. [00073]). However, Sun, Yang, or Kosecoff does not expressly disclose wherein the radiation signal has a wavelength range such that the absorption of the radiation signal at the skin surface is greater than the absorption of the radiation signal at the hair. On the other hand, Dallarosa discloses the radiation signal has a wavelength range such that the absorption of the radiation signal at the skin surface is greater than the absorption of the radiation signal at the hair (Dallarosa, para. [0066], disclosing under illumination by a diagnostic energy source, hair absorbs less energy than areas of skin). Before the invention was effectively filed, it would have been obvious for a person skilled in the art to combine Dallarosa into the combination of Sun, Yang, and Kosecoff. The suggestion/motivation would have been to detect hair in a region of skin not dependent upon the color of the hair, as suggested by Dallarosa (see Dallarosa, para. [0066]). Regarding claim 13, the combination of Sun, Yang, Kosecoff, and Dallarosa discloses the system of claim 12, wherein the wavelength of the radiation signal is within a range comprising at least one of a first range between about 500 nm and about 560 nm and a second range between about 1400 nm and about 1550 nm (Kosecoff, para. [0054], disclosing the energy in a range of wavelengths from about 200 nm to about 2000 nm, Dallarosa, para. [0066], disclosing the laser energy at a non-selective wavelength between about 1200 nm and about 1400 nm, the about 1400nm is within the second range between about 1400 nm and about 1550 nm). Before the invention was effectively filed, it would have been obvious for a person skilled in the art to combine Sun in view of Yang with Kosecoff. The suggestion/motivation would have been to provide treatment based on skin and hair conditions, as suggested by Kosecoff (see Kosecoff, para. [00073]). 
Before the invention was effectively filed, it would have been obvious for a person skilled in the art to combine Dallarosa into the combination of Sun, Yang, and Kosecoff. The suggestion/motivation would have been to detect hair in a region of skin not dependent upon the color of the hair, as suggested by Dallarosa (see Dallarosa, para. [0066]). Claim(s) 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sun in view of US Patent No. 10473454 B1 to Ding and Yang. Regarding claim 17, Sun discloses A method comprising: receive, at a processor, 3D calibration data and a plurality of 3D images of a surface over a respective plurality of frames (Sun, p. 5, col. 2, 1st para., disclosing a specially tailored experimental data processing software, indicating there is a processor to execute the software; p. 6, col. 1, Sec. 4.2, 2nd para., disclosing to further verify the calibration results, the 3D coordinates and structure of 20 arbitrarily placed checkerboards are reconstructed via the rig with the calibrated parameters and simultaneously obtained image pairs, indicating the calibration parameters can correspond to 3D calibration data, and the image pairs of the 20 arbitrarily placed checkerboards can correspond to the plurality of 3D images of a surface over a respective plurality of frames, and are received by the processor for calibration verification); automatically determine, with the processor, a 3D model of the surface features based on the 3D calibration data and one or more of the 3D images where the surface feature is in focus (Sun, p. 5, col. 1, last para., disclosing to obtain entirely focused images, the cameras and lenses are adjusted, col. 2, 1st para., disclosing a specially tailored experimental data processing software, indicating there is a processor to execute the software; p. 6, col. 1, Sec. 4.2, 2nd para., disclosing to further verify the calibration results, the 3D coordinates and structure of 20 arbitrarily placed checkerboards are reconstructed via the rig with the calibrated parameters and simultaneously obtained image pairs, indicating the reconstructed 3D coordinates and structure of checkerboard can correspond to a 3D model of the checkerboard corresponding to surface features, which are determined based on the calibration parameters as the 3D calibration data and the image pairs as the one or more of the 3D images where the surface feature is in focus (after calibration to obtain entirely focused images); automatically determine, with the processor, a value of one or more parameters of the surface feature that is in focus based on the 3D model for the plurality of frames (Sun, p. 5, col. 2, 1st para., disclosing a specially tailored experimental data processing software, indicating there is a processor to execute the software; p. 6, col. 1, Sec. 
4.2, 2nd para., disclosing to further verify the calibration results, the 3D coordinates and structure of 20 arbitrarily placed checkerboards are reconstructed via the rig with the calibrated parameters and simultaneously obtained image pairs, last para., disclosing the reconstructed 3D checkerboard points and the fitted plane while the error distribution of the reconstructed checkerboard points is presented, and the deviations of the reconstructed points are approximately symmetrical about the center of the fitted plane, indicating the deviations can correspond to a value of the reconstructed points as one or more parameters of the surface features of the checkerboard that is in focus determined based on the reconstructed 3D points as the 3D model for the plurality of frames); and automatically calculate, with the processor, a characteristic value of the one or more parameters of the surface feature over the plurality of frames (Sun, p. 5, col. 2, 1st para., disclosing a specially tailored experimental data processing software, indicating there is a processor to execute the software; p. 6, col. 1, Sec. 4.2, 2nd para., disclosing to further verify the calibration results, the 3D coordinates and structure of 20 arbitrarily placed checkerboards are reconstructed via the rig with the calibrated parameters and simultaneously obtained image pairs, last para., disclosing the reconstructed 3D checkerboard points and the fitted plane while the error distribution of the reconstructed checkerboard points is presented, and the deviations of the reconstructed points are approximately symmetrical about the center of the fitted plane, and the maximum deviation of the reconstructed points is determined, indicating maximum deviation can correspond to a characteristic value of the checkerboards points corresponding to one or more parameters of the surface feature over the plurality of frames); and store, with the processor, the calculated characteristic value of the one or more parameters of the surface feature and an identifier that indicates the surface feature (Sun, p. 5, col. 2, 1st para., disclosing a specially tailored experimental data processing software, indicating there is a processor to execute the software; p. 6, col. 1, Sec. 4.2, 2nd para., disclosing to further verify the calibration results, the 3D coordinates and structure of 20 arbitrarily placed checkerboards are reconstructed via the rig with the calibrated parameters and simultaneously obtained image pairs, last para., disclosing the reconstructed 3D checkerboard points and the fitted plane while the error distribution of the reconstructed checkerboard points is presented, and the deviations of the reconstructed points are approximately symmetrical about the center of the fitted plane, and the maximum deviation of the reconstructed points is determined, FIGURE 5 shows the reconstructed 3D checkerboard points and the fitted 3D plane and the error distribution of the reconstructed pointes with respect to the fitted plane. 
Although Sun does not expressly disclose storing these values including the maximum deviation as the calculated characteristic value and an identifier indicating the surface feature, Before the invention was effectively filed, it would have been obvious for a person skilled in the art to modify Sun to store these values, because storing results is a well-known practice and storing the results for the calibration verification would yield of predictable results for documenting the verification results for future reference). However, Sun does not expressly disclose the 3D images are 3D bitmap images and automatically determine, with the processor, whether a surface feature in the 3D bitmap image for each frame is in focus. On the other hand Ding discloses automatically determine, with the processor, whether a surface feature in the 3D image for each frame is in focus (Ding, col. 13, lines 25-43, disclosing images 402-412 of features at various imaging planes captured with a 35x objective lens with a NA of 0.875 and a depth of field smaller than he feature height, in-focus portions of the images have different characteristic than out-of-focus portions, indicating the images 402-412 can correspond to the 3D image for each frame, and the in-focus portions can correspond to a surface feature that is determined as in focus based on the differences between the in-focus portion and the out-of-focus portions). Before the invention was effectively filed, it would have been obvious for a person skilled in the art to combine Sun and Ding. The suggestion/motivation would have been to provide image-based measurements of surface height, as suggested by Ding (see Ding, col. 1, lines 18-20). However, Sun or Ding does not expressly disclose the 3D images are 3D bitmap images. On the other hand, Yang discloses at least one processor (Yang, Translation, para. [0097], disclosing the terminal includes at least a transceiver and a processor); and 3D bitmap images (Yang, Translation, para. [0011], disclosing based on the two acquired images, generate a stereo image and a stereo bitmap corresponding to the stereo image, para. [0055], disclosing the stereoscopic bitmap as 3D bitmap, indicating the stereo bitmaps can correspond to 3D bitmap images). Before the invention was effectively filed, it would have been obvious for a person skilled in the art to combine Sun in view of Ding with Yang. The suggestion/motivation would have been to provide image processing that can present actual location of each scene to the user, as suggested by Yang (see Yang, Translation, para. [0007]). Claim(s) 19 and 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Sun, Ding, and Yang, and further in view of Zingaretti. Regarding claim 22, it recites similar limitations of claim 17 but further requires that the surface is a skin surface and the surface feature is a hair. The rationale of claim 17 rejection is applied to reject claim 22, but Sun, Ding, or Yang does not expressly disclose a skin surface and a hair. On the other hand, Zingaretti discloses the surface is a skin surface and the surface feature is a hair (Zingaretti, para. [0056], discloses using a stereo camera pair to obtain image data regarding the position and orientation of objects of interest (e.g., hair follicles, wrinkle lines, tattoos, moles, etc.) on the skin surface). Before the invention was effectively filed, it would have been obvious for a person skilled in the art to combine the combination of Sun, Ding, and Yang with Zingaretti. 
The suggestion/motivation would have been to determine and track objects on the skin surface, as suggested by Zingaretti (see Zingaretti, para. [0056]).

Regarding claim 19, the combination of Sun, Ding, and Yang discloses the method of claim 17. However, neither Sun, Ding, nor Yang expressly discloses where the surface is a skin surface and the surface feature is a hair. On the other hand, Zingaretti discloses that the surface is a skin surface and the surface feature is a hair (Zingaretti, para. [0056], discloses using a stereo camera pair to obtain image data regarding the position and orientation of objects of interest (e.g., hair follicles, wrinkle lines, tattoos, moles, etc.) on the skin surface). Before the invention was effectively filed, it would have been obvious for a person skilled in the art to combine the combination of Sun, Ding, and Yang with Zingaretti. The suggestion/motivation would have been to determine and track objects on the skin surface, as suggested by Zingaretti (see Zingaretti, para. [0056]).

Allowable Subject Matter

Claims 10, 18, and 20 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

Regarding claim 10, none of the prior art references discloses the system of claim 9, wherein the minimum distance threshold is about 400 microns and the maximum distance threshold is about 800 microns.

Regarding claim 18, none of the prior art references discloses wherein the automatically determining the 3D model of the surface is based on the 3D calibration data and the 3D bitmap image for a first frame of the plurality of frames; wherein the automatically determining the value of the one or more parameters of the surface feature based on the 3D model is for the first frame of the plurality of frames; and wherein the method further comprises: automatically determine, with the processor, an updated 3D model of the surface based on the 3D calibration data and the 3D bitmap image for each of a next frame after the first frame where the surface feature is in focus; and automatically determine, with the processor, the value of the one or more parameters of the surface feature based on the updated 3D model for each of the next frame after the first frame. Claim 20 depends from claim 18 with additional limitations.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAIXIA DU, whose telephone number is (571) 270-5646. The examiner can normally be reached Monday - Friday, 8:00 am-4:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. 
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /HAIXIA DU/Primary Examiner, Art Unit 2611
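For orientation, the method of claim 17, which the §101, §112, and §103 rejections above all address, recites a linear processing pipeline. The sketch below restates those recited steps in code; every helper, threshold, and data shape here is a hypothetical placeholder for illustration, not the applicant's disclosed implementation or any cited reference's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    feature_id: str
    characteristic_value: float

def is_in_focus(frame):
    # Placeholder focus test, e.g. a sharpness metric over the feature region.
    return frame.get("sharpness", 0.0) > 0.5

def build_3d_model(calibration, frame, previous=None):
    # Placeholder "3D model" built from calibration data and an in-focus frame.
    return {"calibration": calibration, "frame": frame, "previous": previous}

def extract_parameter(model):
    # Placeholder parameter of the feature, e.g. a hair diameter in the model.
    return model["frame"].get("diameter", 0.0)

def measure_surface_feature(calibration, frames, feature_id="feature-001"):
    """Restates the steps recited in claim 17 as a single pass over the frames."""
    values = []
    model = None
    for frame in frames:  # receive 3D calibration data and 3D bitmap images (frames)
        if not is_in_focus(frame):            # determine whether the feature is in focus
            continue
        model = build_3d_model(calibration, frame, previous=model)   # determine a 3D model
        values.append(extract_parameter(model))   # determine a parameter value per frame
    # Calculate a characteristic value over the plurality of frames
    # (a mean is assumed here; the claim does not specify the aggregation).
    characteristic = sum(values) / len(values) if values else float("nan")
    # Store the characteristic value together with an identifier of the feature.
    return Measurement(feature_id=feature_id, characteristic_value=characteristic)

if __name__ == "__main__":
    toy_frames = [{"sharpness": 0.9, "diameter": 0.07}, {"sharpness": 0.2, "diameter": 0.05}]
    print(measure_surface_feature(calibration={"baseline_mm": 12.0}, frames=toy_frames))
```

Under this reading, the out-of-focus second frame contributes nothing and only in-focus frames feed the characteristic value, which is the behavior the claim language appears to require.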

Prosecution Timeline

Mar 04, 2024: Application Filed
Dec 03, 2025: Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602857: GENERATING IMAGE DATA (2y 5m to grant; granted Apr 14, 2026)
Patent 12597204: MODEL GENERATING DEVICE, MODEL GENERATING SYSTEM, MODEL GENERATING METHOD, AND PROGRAM (2y 5m to grant; granted Apr 07, 2026)
Patent 12573137: System and Method for Unsupervised and Autonomous 4D Dynamic Scene and Objects Interpretation, Segmentation, 3D Reconstruction, and Streaming (2y 5m to grant; granted Mar 10, 2026)
Patent 12561882: IMAGE RENDERING METHOD AND APPARATUS (2y 5m to grant; granted Feb 24, 2026)
Patent 12555304: RAY TRACING USING INDICATIONS OF RE-ENTRY POINTS IN A HIERARCHICAL ACCELERATION STRUCTURE (2y 5m to grant; granted Feb 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview (+18.0%): 99%
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 553 resolved cases by this examiner. Grant probability derived from career allow rate.
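The projection card implies a simple combination of the career allow rate and the interview lift. Below is a minimal sketch of one plausible way to reproduce the displayed 86% / 99% pair; the additive lift and the 99% cap are assumptions for illustration, not a documented formula.

```python
def projected_grant_probability(base_rate, interview_lift, with_interview, cap=0.99):
    """Combine a base grant probability with an interview lift in percentage points.

    Additive lift with a cap is an assumption; the page does not state its model.
    """
    p = base_rate + (interview_lift if with_interview else 0.0)
    return min(p, cap)

base = 477 / 553  # career allow rate, ~0.86
print(f"{projected_grant_probability(base, 0.18, with_interview=False):.0%}")  # 86%
print(f"{projected_grant_probability(base, 0.18, with_interview=True):.0%}")   # 99%
```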
