Prosecution Insights
Last updated: April 19, 2026
Application No. 17/904,214

GESTURE RECOGNITION

Current OA: Non-Final, §103 (Round 5)
Filed: Aug 12, 2022
Examiner: HANSEN, CONNOR LEVI
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: TRINAMIX GMBH
Grant Probability: 75% (Favorable); 99% with interview
Predicted OA Rounds: 5-6
Predicted Time to Grant: 2y 10m

Examiner Intelligence

Career Allow Rate: 75% (21 granted / 28 resolved; +13.0% vs Tech Center average — above average)
Interview Lift: +29.2% on resolved cases with an interview
Avg Prosecution: 2y 10m; 32 applications currently pending
Career History: 60 total applications across all art units

Statute-Specific Performance

§101: 19.1% (-20.9% vs TC avg)
§102: 16.8% (-23.2% vs TC avg)
§103: 39.9% (-0.1% vs TC avg)
§112: 23.7% (-16.3% vs TC avg)

Based on career data from 28 resolved cases; Tech Center averages are estimates.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/23/2026 has been entered.

Response to Arguments

Applicant's arguments with respect to claims 1-18 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

On page 10, Applicant argues that Romano's depth determination is separate from the reflection beam profile evaluation. Examiner notes that Romano does not teach that depth determination is separate from the reflection beam profile evaluation. Specifically, columns 4 and 5, lines 44-67 and 1-39, respectively, teach performing a beam profile evaluation for light patterns projected on a volume containing a hand to perform segmentation based on identifying predefined material properties associated with the beam profiles. Column 6, lines 38-47 further teaches that depth mapping is performed with respect to the detected reflected light patterns of the hand. Therefore, depth determination is not a separate process but directly relies on the beam profile evaluation that identifies the object. As indicated previously, Romano does not teach comparing reflection beam profiles to predefined profiles. However, the combination of Romano in view of Fourre provides teachings which satisfy the limitation of amended claim 1. See analysis of claim 1 below for additional details.
Claim Interpretation

Claims 8, 9, 15, and 16 contain Markush groupings (see MPEP § 2117 and § 2173.05(h)). As previously indicated, the claims will be interpreted in the disjunctive form. The interpretation under 112(f) for the limitations "illumination source" and "evaluation device" will be maintained in this Non-Final Office action.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 7, 8, 12, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Romano et al. (US 10,049,460 B2) (hereinafter, Romano) in view of Fourre et al. (US 20200057000 A1) (hereinafter, Fourre) and further in view of Sun et al. (US 20140254919 A1) (hereinafter, Sun).

Regarding claim 1, Romano teaches a detector (Romano, see Fig. 3) for gesture detection comprising: at least one illumination source (Romano, light source 313) configured for projecting at least one illumination pattern comprising a plurality of illumination features on at least one area comprising at least one object (Romano, "the transmitter 311 includes a light source 313 (e.g., a laser light source) followed by one or more optical elements 314.1,314.2, which encode a pattern onto light from the light source 313. The transmitter 311 emits the light onto which the light pattern is encoded into a volume 315 (e.g., a scene). 
As objects (e.g., object 316) in the volume 315 reflect the light pattern, the reflected light pattern acquired various characteristics. Example characteristics of the reflected light pattern include: collimation angle, intensity variations, polarization, and one or more coherence characteristics.”, column 6, lines 48-61, see Fig. 3), wherein the object comprises at least partially at least one human hand (Romano, “FIG. 4 is a conceptual diagram of a light pattern projected onto a user's hand 410.”, column 7, lines 11-12, see Fig. 4); at least one optical sensor (Romano, receiver 312) having at least one light-sensitive area, wherein the optical sensor is configured for determining at least one image of the area, wherein the image comprises a plurality of reflection features generated from the area in response to illumination by the illumination features (Romano, “The receiver 312 (e.g., a camera having a charge coupled display detector, a complementary metal oxide semiconductor detector) captures reflected light from objects in the volume 315 and analyzes the reflected light pattern. Using signatures of the light pattern reflected from objects in the volume 315, the receiver 312 determines whether an object in the volume 315 is an object of interest.”, column 6, lines 61-67, “In the example shown in FIG. 4, the light pattern includes multiple stripes parallel to, or nearly parallel to, each other.” The receiver captures an image of the reflected light pattern that contains multiple stripes.); and at least one evaluation device (Romano, receiver 312), wherein the evaluation device is configured for determining at least one reflection beam profile of each of the reflection features (Romano, “Examples of the predefined optical structure encoded onto the light beam include: collimation angle of a light profile, intensity in the light beam profile, uniformity in the light profile, and coherence of the light source. 
The detected reflected light is analyzed and the object is segmented according to at least one light reflective characteristic of the object.”, column 2, lines 23-29, “Hence, light from the light pattern scatted by the hand's 420 skin changes one or more characteristics of the light pattern at points along a strip of the pattern projected onto the hand 420. Based on the change in the one or more characteristics of the light pattern from scattering by the hand 420, a processor or other device identifies the hand 420.”, column 7, lines 51-57, “Example characteristics of the reflected light pattern include: reflected pattern light width, intensity profile change (e.g. expansion) of the reflected light (e.g., a change in a Gaussian cross-sectional intensity profile of the light pattern)…”, column 8, lines 21-35, the camera can be predefined for beam profile analysis. The characteristics to-be-analyzed from the reflected light pattern includes intensity beam profiles, for various points along the strip.), determining at least one depth map of the area by determining at least one depth information for each of the reflection features based on the reflection beam profile for each of the reflection features (Romano, “the receiver 312 also obtains depth information for various points in the volume 315 that reflect the light pattern.”, column 7, lines 8-10), and determining at least one material property of the object by evaluating the reflection beam profile of one or more of the reflection features (Romano, “In some embodiments, the projected light pattern has specific predefined or known properties according to various depths or fields of vision. For example, such as a user's hand, includes a specific set of surface properties. Example properties of the user's hand include roughness (e.g. scattering), transparency, diffusion properties, absorbance, and specularity. 
Because of the properties of the user's hand, the light pattern reflected by the user's hand has a specific and unique signature that differs from signatures of the light pattern reflected by other objects in the volume.”, column 5, lines 1-11, Known surface properties of the hand are used to define signatures, which are evaluated based on the beam profile for various points on each strip.), wherein the evaluation device is configured for finding the object within the image by identifying the reflection features which were generated by illuminating biological tissue (Romano, “Objects, such as the hand 420, in the volume are identified or segmented according to unique light characteristics reflected from surfaces of various objects ( e.g., skin of the hand 420).”, column 7, lines 48-51), wherein a reflection feature is identified as having been generated by illuminating biological tissue when the reflection beam profile fulfills at least one predetermined or predefined criterion (Romano, “Based on the change in the one or more characteristics of the light pattern from scattering by the hand 420, a processor or other device identifies the hand 420.”, column 7, lines 51-57, When a change in the beam profile of the reflected light pattern is observed, the hand can be distinguished.), wherein the predetermined or predefined criterion is or comprises at least one predetermined or predefined value and/or threshold and/or threshold range associated with the determined at least one material property, and is identified as background otherwise (Romano, “In the example of FIG. 
4, a change in a segment of the light pattern reflected by the hand 420 divides the volume into a "skin" zone that includes the hand 420 and a "background" zone including objects or material other than the hand 420.”, column 7, lines 60-64), wherein the evaluation device is configured for segmenting the image of the area by using at least one segmentation algorithm, (Romano, “the receiver 312 performs a segmentation process on the reflected light pattern”, column 7, lines 1-2), wherein the segmentation algorithm is configured for segmenting the image of the area based at least in part on the at least one material property (Romano, “the object is segmented based on an intensity profile change of the detected reflected light. The intensity profile change may be a change in a local uniform reflected profile of the pattern in the detected reflected light or may be a speckle of the pattern in the detected reflected light.”, column 2, lines 35-39), wherein the evaluation device is configured for determining a position and/or orientation of the object in space based on the segmented image and the depth map (Romano, “In some embodiments, the processor… may combine the depth map and the two-dimensional images into three-dimensional images of the user's hands 1412”, column 13, lines 44-56, see Fig. 14). Romano does not teach said evaluating including comparing the reflection beam profile of each of the reflection features with at least one predetermined and/or prerecorded and/or predefined beam profile. 
However, Fourre teaches said evaluating including comparing the reflection beam profile of each of the reflection features with at least one predetermined and/or prerecorded and/or predefined beam profile (Fourre, “In all cases, a light source 6 is arranged in such a way as to emit light rays in the propagation medium 2 in the direction of the surface 3 to illuminate the location intended to receive the object 5… Preferably, the light source 6 emits light in the form of a non-collimated beam, with a light cone having a certain wealth of angles.”, pg. 3, paragraph 0048, lines 1-4, “The determination of whether there is a match between object 5 and authentic human skin is done based on the spatial light distribution on the image acquired by the imager 4. To do so, that spatial distribution is compared to an expected spatial distribution for authentic human skin, using a light intensity profile. To compare the spatial distribution, a light intensity profile based on a distance from the light source 6, as depicted in FIGS. 6, 7, 8, is determined from the acquired image. The light intensity profile may be determined from multiple images acquired. This light intensity profile is representative of the spatial light distribution on the image acquired. A characteristic derived from the intensity profile is compared to at least one baseline characteristic representative of a light intensity profile corresponding to a spatial distribution expected for authentic human skin.”, pg. 8, paragraphs 0094 and 0095, see Figs. 6-8, Reflection features are projected to a scene and imaged to authenticate human skin. A light intensity profile (i.e., beam profile) is determined from the captured image of the reflection features. Characteristics are extracted from the beam profile and compared to baseline characteristics representative of authentic human skin. The baseline characteristics correspond to a predefined representation of a beam profile associated with authentic human skin. 
Thus, comparing the captured beam profile characteristics to the baseline characteristics satisfies the limitations of comparing the beam profile with a predefined or predetermined beam profile.). Romano teaches detecting reflected light from images taken of an object in a volume, including analyzing multiple characteristics of beam profiles to identify those which correspond to surface properties of skin (Romano, columns 4 and 5, lines 44-67 and 1-20, respectively, see Figs. 4 and 14). Romano does not teach comparing beam profiles with predetermined and/or prerecorded and/or predefined beam profiles. Fourre teaches authenticating human skin from images by comparing reflection beam profiles to predetermined, prerecorded and/or predefined beam profiles (see above). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to have modified the beam profile analysis of Romano to include comparison to predefined beam profiles as taught by Fourre (Fourre, pg. 8, paragraphs 0094 and 0095, see Figs. 6-8). The motivation for doing so would be to simplify beam profile evaluation by replacing multi-characteristic analysis with a direct comparison against reference profiles, thereby reducing the computation load of the system. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Romano with Fourre to obtain the invention as specified above. 
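As a minimal sketch of the profile-comparison step described above — matching a measured light-intensity profile against a baseline expected for authentic human skin — assuming sampled 1-D intensity profiles; the function name, tolerance value, and Gaussian baseline are illustrative and not taken from Fourre:

```python
import math

def matches_skin_profile(measured, baseline, tolerance=0.15):
    """Compare a measured reflection beam profile against a predefined
    (prerecorded) baseline profile, e.g. one expected for human skin.

    Both profiles are 1-D intensity samples taken at the same positions.
    Profiles are peak-normalized so the test ignores overall gain, then
    accepted when the RMS deviation stays within the tolerance.
    """
    m_max, b_max = max(measured), max(baseline)
    m = [v / m_max for v in measured]
    b = [v / b_max for v in baseline]
    rms = math.sqrt(sum((x - y) ** 2 for x, y in zip(m, b)) / len(m))
    return rms <= tolerance

# A Gaussian-like spot profile, as reflected by a scattering surface.
xs = [-2 + 4 * i / 49 for i in range(50)]
skin = [math.exp(-x * x) for x in xs]

print(matches_skin_profile([0.9 * v for v in skin], skin))  # True: same shape
print(matches_skin_profile([1.0] * 50, skin))               # False: flat profile
```

Peak-normalizing before comparison mirrors the idea that the match is on the shape of the spatial light distribution, not on absolute intensity.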
Romano in view of Fourre does not teach wherein the segmentation algorithm is configured for segmenting the image of the area based at least in part on the at least one material property and the at least one depth information for each of the reflection features, wherein the reflection features identified as to be generated by illuminating biological tissue are used as seed points and the reflection features identified as background are used as background seed points for the segmentation algorithm. However, Sun teaches wherein the segmentation algorithm is configured for segmenting the image of the area based at least in part on the at least one material property and the at least one depth information for each of the reflection features, wherein the reflection features identified as to be generated by illuminating biological tissue are used as seed points and the reflection features identified as background are used as background seed points for the segmentation algorithm (“The image processing device and method according to one or more embodiments may construct a background model based on a depth map of a successive 3D image, determine an initial seed point, segment a moving object by performing region growing, identify and track the segmented moving object, and extract a foreground moving object. Here, the moving object may refer to an object that dynamically moves in the successive 3D image. For example, the moving object may include objects associated with a human being, an animal, and other moving entities.”, pg. 3, paragraphs 0042 and 0043, “When a depth between a background depth at a point corresponding to the pixel p, and the pixel p is greater than or equal to a predetermined depth, the image processing device may select the pixel p as the initial seed point.”, pg. 
4, paragraph 0068, lines 1-4, Seed points are determined from the difference between foreground and background pixels, and region growing from these seed points segments foreground moving objects.). Romano in view of Fourre teaches segmenting foreground and background regions based on a material property of reflection features (Romano, "An identified change in a characteristic of a segment of the reflected light pattern segments (e.g., divides) the volume into two or more sections or zones. In the example of FIG. 4, a change in a segment of the light pattern reflected by the hand 420 divides the volume into a "skin" zone that includes the hand 420 and a "background" zone including objects or material other than the hand 420.", column 7, lines 58-64). Romano in view of Fourre further teaches determining depth information for each of the reflection features (Romano, "In some embodiments, the receiver 312 also obtains depth information for various points in the volume 315 that reflect the light pattern.", column 7, lines 8-10) but does not teach segmenting the image based on both the material property and depth information. Sun teaches segmenting images based on depth information of foreground and background regions (see above). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to have modified the segmentation of Romano in view of Fourre to consider depth information for foreground and background regions, as taught by Sun (Sun, pg. 3, paragraphs 0042 and 0043, pg. 4, paragraph 0068, lines 1-4), thereby segmenting foreground and background regions in the image based on both the material property and depth of reflection features. The motivation for doing so would have been to cross-validate seed point positions, thereby increasing the accuracy of segmentation. 
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Romano in view of Fourre with Sun to obtain the invention as specified in claim 1.

Regarding claim 2, Romano in view of Fourre and further in view of Sun teaches the detector according to claim 1, wherein the evaluation device is configured for identifying image coordinates of palm and finger in the segmented image (Romano, "According to one embodiment, the light pattern is specifically designed to track movement of the hand's 410 digits in bi-dimensional video data (e.g., video images from a conventional video camera). More specifically, the light pattern is designed to enable detection and tracking of digits (i.e., fingers and thumb) of the hand 410 as well as a palm of the hand 410 from the bidimensional video data according to a detected predefined signature of the light pattern reflected by the object", column 7, lines 22-30, tracking of the hand inherently requires the identification of image coordinates for each of the digits and palm), wherein the evaluation device is configured for determining at least one three-dimensional finger vector considering image coordinates of palm and finger and the depth map (Romano, "the processor is further configured to determine a depth map of the user's hands 1412 or of the gesturing object and may combine the depth map and the two-dimensional images into three-dimensional images of the user's hands 1412… three-dimensional images of the user's hands 1410 are superimposed into the synthetic scene while preserving three-dimensional attributes of the user's hands 1410", column 13, lines 47-56). 
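The seed-point, region-growing segmentation relied on for claim 1 — foreground seeds taken from reflection features whose beam profile indicated biological tissue, grown over depth-consistent neighbors — can be sketched as follows. The data, the 4-neighborhood, and the depth-step threshold are hypothetical, not drawn from Sun:

```python
from collections import deque

def grow_region(depth, seeds, max_step=1.0):
    """Grow a foreground region from seed pixels over a 2-D depth map.

    depth: list of lists of depth values.
    seeds: iterable of (row, col) foreground seed points (e.g., reflection
           features whose beam profile matched biological tissue).
    max_step: largest depth difference allowed between 4-neighbors.
    Returns the set of (row, col) pixels assigned to the foreground.
    """
    rows, cols = len(depth), len(depth[0])
    region = set(seeds)
    queue = deque(region)
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                if abs(depth[nr][nc] - depth[r][c]) <= max_step:
                    region.add((nr, nc))
                    queue.append((nr, nc))
    return region

# A hand at depth ~10 against a background at depth ~50: a single skin
# seed floods the connected near-depth patch and stops at the depth jump.
depth = [
    [10, 10, 50, 50],
    [10, 11, 50, 50],
    [50, 50, 50, 50],
]
print(sorted(grow_region(depth, [(0, 0)])))
# [(0, 0), (0, 1), (1, 0), (1, 1)]
```

In this reading, the material-property test selects the seeds while the depth map bounds the growth, which is how the two criteria can cross-validate each other.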
Regarding claim 3, Romano in view of Fourre and further in view of Sun teaches the detector according to claim 1, wherein the evaluation device is configured for determining at least one hand pose or gesture from the position and/or the orientation of the object in space (Romano, "FIG. 14 a conceptual diagram of using hand gestures to interact with a virtual reality environment.", column 13, lines 27-28, see Fig. 14).

Regarding claim 8, Romano in view of Fourre and further in view of Sun teaches the detector according to claim 1, wherein the evaluation device is configured for determining the depth information for each of the reflection features by one or more of the following techniques: selected from the group consisting of depth-from-photon-ratio, structured light, beam profile analysis (Romano, "the receiver 312 also obtains depth information for various points in the volume 315 that reflect the light pattern.", column 7, lines 8-10, depth information for various points can be obtained while processing the reflected light patterns), time-of-flight, shape-from-motion, depth-from-focus, triangulation, depth-from-defocus, and stereo sensors.

Regarding claim 12, Romano in view of Fourre and further in view of Sun teaches the detector according to claim 1, wherein the illumination source is configured for generating the at least one illumination pattern in the near infrared region (NIR) (Romano, "is block diagram of one embodiment of a system 1200 for segmenting an object in a volume using a light beam. In the embodiment shown by FIG. 12, the system 1200 includes an infrared (IR) illuminator 1202 configured to illuminate the volume with a light pattern,", column 12, lines 16-21).

Claim 14 corresponds to claim 1, reciting a method of using the detector according to claim 1. As indicated in the analysis of claim 1, Romano in view of Fourre and further in view of Sun teaches the structures and functions according to claim 1. 
Therefore, claim 14 is rejected for the same reason of obviousness as claim 1.

Regarding claim 15, Romano in view of Fourre and further in view of Sun teaches a method of using the detector according to claim 1, the method comprising using the detector for a purpose selected from the group consisting of: Driver monitoring; in-cabin surveillance; gesture tracking (Romano, "FIG. 14 a conceptual diagram of using hand gestures to interact with a virtual reality environment.", column 13, lines 27-28); a security application; a safety application; a human-machine interface application (Romano, "The user 1410 wears a near eye display 1430 on which the device 1400 may be mounted, such as explained above in conjunction with FIG. 13. The device 1400 may include an illuminator (e.g., laser transmitter) 1420 and a capturing unit (e.g., camera) 1415 having a field of view wide enough to capture images of a volume surrounding the user.", column 13, lines 28-34); an information technology application; an agriculture application; a crop protection application; a medical application; a maintenance application; and a cosmetics application.

Claims 4, 5, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Romano et al. (US 10,049,460 B2) (hereinafter, Romano) in view of Fourre et al. (US 20200057000 A1) and further in view of Sun et al. (US 20140254919 A1) and Li et al. ("Lazy Snapping", ACM Transactions on Graphics (ToG), 2004) (hereinafter, Li).

Regarding claim 4, Romano in view of Fourre and further in view of Sun teaches the detector according to claim 1. Romano in view of Fourre and further in view of Sun does not teach wherein the segmentation algorithm is further based on energy or cost functions. However, Li teaches wherein the segmentation algorithm is further based on energy or cost functions (Li, "Our system adopts a novel interactive graph cut algorithm", pg. 
304, section 2.1, 2nd paragraph, lines 5-8, "The foreground seeds F, the background seeds B, and the uncertain region U are defined similarly as in Section 2.2, except that now these nodes are small regions instead of pixels.", pg. 305, section 2.3, 3rd paragraph, lines 1-3). Romano in view of Fourre and further in view of Sun teaches determining a material property and depth information for each reflection feature and performing segmentation using threshold-based region growing from foreground and background region seed points (Sun, "When a depth between a background depth at a point corresponding to the pixel p, and the pixel p is greater than or equal to a predetermined depth, the image processing device may select the pixel p as the initial seed point.", pg. 4, paragraph 0068, lines 1-4). Li teaches segmenting foreground and background using seed points as part of a global optimization graph cut algorithm (see above). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the segmentation of Romano in view of Fourre and further in view of Sun to be based on a cost function, such as the graph cut algorithm of Li (Li, pg. 304, section 2.1, 2nd paragraph, lines 5-8). The motivation for doing so would have been to produce a higher quality cutout (segmentation) in less time than existing cutout tools (as suggested by Li, "In this paper, we have developed an interactive image cutout system that is easy to learn, produces better quality cutouts in less time than existing image cutout tools.", pg. 308, section 5, 3rd paragraph, lines 1-3). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. 
Therefore, it would have been obvious to combine the teachings of Romano in view of Fourre and further in view of Sun with Li to obtain the invention as specified in claim 4.

Regarding claim 5, Romano in view of Fourre and further in view of Sun teaches the detector according to claim 1. Romano in view of Fourre and further in view of Sun does not teach wherein segmentation of the image is further based on color homogeneity and edge indicators, wherein the seed points constitute edge- and color homogeneity criterions. However, Li teaches wherein segmentation of the image is further based on color homogeneity and edge indicators, wherein the seed points constitute edge- and color homogeneity criterions (Li, "Our system adopts a novel interactive graph cut algorithm to optimize the object boundary, by maximizing both the color similarity inside the object and the gradient along the boundary.", pg. 304, section 2.1, 2nd paragraph, lines 5-8, "In Equation (1), E1 encodes the color similarity of a node, indicating if it belongs to the foreground or background… We use E2 to represent the energy due to the gradient along the object boundary... E2 is a penalty term when adjacent nodes are assigned with different labels. The more similar the colors of the two nodes are, the larger E2 is, and thus the less likely the edge is on the object boundary.", pg. 304 and 305, section 2.2, The seed point nodes of a graph cut algorithm are defined based on similar colors and edge indicators.) Romano in view of Fourre and further in view of Sun teaches determining a material property and depth information for each reflection feature and performing segmentation using threshold-based region growing from foreground and background region seed points (Sun, "When a depth between a background depth at a point corresponding to the pixel p, and the pixel p is greater than or equal to a predetermined depth, the image processing device may select the pixel p as the initial seed point.", pg. 
4, paragraph 0068, lines 1-4). Li teaches segmenting foreground and background by defining seed points as part of a global optimization graph cut algorithm, which considers color homogeneity and edge indicators (see above). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the segmentation of Romano in view of Fourre and further in view of Sun to be based on the graph cut algorithm of Li (Li, pg. 304, section 2.1, 2nd paragraph, lines 5-8), thereby including color homogeneity and edge indicators as criteria for seed point selection. The motivation for doing so would have been to produce a higher quality cutout (segmentation) in less time than existing cutout tools (as suggested by Li, "In this paper, we have developed an interactive image cutout system that is easy to learn, produces better quality cutouts in less time than existing image cutout tools.", pg. 308, section 5, 3rd paragraph, lines 1-3). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Romano in view of Fourre and further in view of Sun with Li to obtain the invention as specified in claim 5.

Regarding claim 16, Romano in view of Fourre and further in view of Sun teaches the detector according to claim 1. Romano in view of Fourre and further in view of Sun does not teach wherein the segmentation algorithm is based on energy or cost functions selected from the group consisting of graph cut, level-set, fast marching, Markov random field approaches, and combinations thereof. 
However, Li teaches wherein the segmentation algorithm is based on energy or cost functions selected from the group consisting of graph cut, level-set, fast marching, Markov random field approaches, and combinations thereof (Li, "Our system adopts a novel interactive graph cut algorithm", pg. 304, section 2.1, 2nd paragraph, lines 5-8, "The foreground seeds F, the background seeds B, and the uncertain region U are defined similarly as in Section 2.2, except that now these nodes are small regions instead of pixels.", pg. 305, section 2.3, 3rd paragraph, lines 1-3). Romano in view of Fourre and further in view of Sun teaches determining a material property and depth information for each reflection feature and performing segmentation using threshold-based region growing from foreground and background region seed points (Sun, "When a depth between a background depth at a point corresponding to the pixel p, and the pixel p is greater than or equal to a predetermined depth, the image processing device may select the pixel p as the initial seed point.", pg. 4, paragraph 0068, lines 1-4). Li teaches segmenting foreground and background using seed points as part of a global optimization graph cut algorithm (see above). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the segmentation of Romano in view of Fourre and further in view of Sun to be based on the graph cut algorithm of Li (Li, pg. 304, section 2.1, 2nd paragraph, lines 5-8). The motivation for doing so would have been to produce a higher quality cutout (segmentation) in less time than existing cutout tools (as suggested by Li, "In this paper, we have developed an interactive image cutout system that is easy to learn, produces better quality cutouts in less time than existing image cutout tools.", pg. 308, section 5, 3rd paragraph, lines 1-3). 
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Romano in view of Fourre and further in view of Sun with Li to obtain the invention as specified in claim 16.

Claims 9-11 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Romano et al. (US 10,049,460 B2) in view of Fourre et al. (US 20200057000 A1) and further in view of Sun et al. (US 20140254919 A1) and Eberspach et al. (WO 2018/091649 A1), (hereinafter, Eberspach).

Regarding claim 9, Romano in view of Fourre and further in view of Sun teaches the detector according to claim 1, but does not teach wherein the evaluation device is configured for determining the depth information for each of the reflection features by using depth-from-photon-ratio technique, wherein the evaluation device is configured for determining at least one first area and at least one second area of a beam profile of at least one of the reflection features, wherein the evaluation device is configured for integrating the first area and the second area, wherein the evaluation device is configured to derive a quotient Q by one or more techniques selected from the group consisting of dividing the integrated first area and the integrated second area, dividing multiples of the integrated first area and the integrated second area, and dividing linear combinations of the integrated first area and the integrated second area.
However, Eberspach teaches wherein the evaluation device is configured for determining the depth information for each of the reflection features by using depth-from-photon-ratio technique, wherein the evaluation device is configured for determining at least one first area and at least one second area of a beam profile of at least one of the reflection features (Eberspach, “A1 and A2 are areas of at least one beam profile at the position of the sensor element”, pg. 16, lines 5-6), wherein the evaluation device is configured for integrating the first area and the second area (see equation on pg. 16), wherein the evaluation device is configured to derive a quotient Q by one or more techniques selected from the group consisting of dividing the integrated first area and the integrated second area, dividing multiples of the integrated first area and the integrated second area, and dividing linear combinations of the integrated first area and the integrated second area (Eberspach, “The evaluation device may be configured for deriving the combined signal Q by one or more of dividing the sum signal and the center signal, dividing multiples of the sum signal and the center signal, dividing linear combinations of the sum signal and the center signal.”, pg. 15 and 16, lines 41-42 and 1, respectively). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to have modified the detector as taught by Romano in view of Fourre and further in view of Sun to use the depth-from-photon-ratio technique of Eberspach (Eberspach, pg. 15 and 16, lines 41-42 and 1, respectively, pg. 16, lines 5-6, see equation on pg. 16) as the reflection characteristic of Romano in view of Fourre and further in view of Sun. 
The motivation for doing so would have been to generate longitudinal coordinates for an object of interest to assist in object tracking and/or recognition (as suggested by Eberspach, “By comparing the center signal and the sum signal, thus, an item of information on the size of the light spot generated by the light beam and, thus, on the longitudinal coordinate of the object may be generated.”, pg. 9, lines 9-12). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Romano in view of Fourre and further in view of Sun with Eberspach to obtain the invention as specified in claim 9.

Regarding claim 10, Romano in view of Fourre and further in view of Sun and Eberspach teach the detector according to claim 9, wherein the first area of the reflection beam profile comprises essentially edge information of the reflection beam profile and the second area of the reflection beam profile comprises essentially center information of the reflection beam profile (Eberspach, “The center signal may be a sensor signal comprising essentially center information of the beam profile. The sum signal may be a signal comprising essentially edge information of the beam profile.”, pg. 16, lines 8-10), and/or wherein the first area of the reflection beam profile comprises essentially information about a left part of the reflection beam profile and the second area of the reflection beam profile comprises essentially information about a right part of the reflection beam profile.
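As an editor's toy sketch of the depth-from-photon-ratio idea discussed above (integrate two areas of a reflection feature's beam profile and divide), not the actual Eberspach implementation: the discrete profiles, the Chebyshev-radius split into a center area and an edge area, and all numeric values below are illustrative assumptions. The point it shows is the cue the quotient encodes: a wider, more diffuse spot pushes more energy into the edge area, so Q grows with spot size.

```python
def quotient_q(profile, center_radius=1):
    # Depth-from-photon-ratio sketch (illustrative): split the discrete beam
    # profile E(x, y) into a center area A2 (within center_radius of the peak,
    # Chebyshev distance) and an edge area A1 (everything else), integrate
    # (sum) each area, then take Q = sum_A1 E / sum_A2 E.
    h, w = len(profile), len(profile[0])
    py, px = max(((y, x) for y in range(h) for x in range(w)),
                 key=lambda p: profile[p[0]][p[1]])  # peak location
    center = edge = 0.0
    for y in range(h):
        for x in range(w):
            if max(abs(y - py), abs(x - px)) <= center_radius:
                center += profile[y][x]
            else:
                edge += profile[y][x]
    return edge / center

narrow = [[0, 0, 0, 0, 0],          # hypothetical sharp spot: energy at the center
          [0, 1, 2, 1, 0],
          [0, 2, 9, 2, 0],
          [0, 1, 2, 1, 0],
          [0, 0, 0, 0, 0]]
wide =   [[1, 1, 1, 1, 1],          # hypothetical diffuse spot: energy in the edges
          [1, 2, 3, 2, 1],
          [1, 3, 5, 3, 1],
          [1, 2, 3, 2, 1],
          [1, 1, 1, 1, 1]]
print(quotient_q(narrow) < quotient_q(wide))  # → True
```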
Regarding claim 11, Romano in view of Fourre and further in view of Sun and Eberspach teach the detector according to claim 9, wherein the evaluation device is configured for deriving the quotient Q by [equation image: Q given as a ratio of integrals of the reflection beam profile E(x, y) over the areas A1 and A2] wherein x and y are transversal coordinates, A1 and A2 are the first and second area of the reflection beam profile, respectively, and E(x, y) denotes the reflection beam profile (Eberspach, see pg. 16 equation, “wherein x and y are transversal coordinates, A1 and A2 are areas of at least one beam profile at the position of the sensor element, and E(x, y, zo) denotes the beam profile given at the object distance zo.”, pg. 16, lines 5-7).

Regarding claim 13, Romano in view of Fourre and further in view of Sun teaches the detector according to claim 1, but does not teach wherein the optical sensor comprises at least one CMOS sensor. However, Eberspach teaches an optical sensor that comprises at least one CMOS sensor (Eberspach, “the optical sensors may be or may comprise at least one… CMOS sensor element”, pg. 12, lines 8-10). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to have modified the detector as taught by Romano in view of Fourre and further in view of Sun to include wherein the optical sensor comprises at least one CMOS sensor as taught by Eberspach (pg. 12, lines 8-10). The motivation for doing so would have been to select a conventional sensor to support an energy-efficient system. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Romano in view of Fourre and further in view of Sun with Eberspach to obtain the invention as specified in claim 13.

Claim 18 is rejected under 35 U.S.C.
103 as being unpatentable over Romano et al. (US 10,049,460 B2) in view of Fourre et al. (US 20200057000 A1) and further in view of Sun et al. (US 20140254919 A1) and Kollias et al. (US 20040146290 A1), (hereinafter Kollias).

Regarding claim 18, Romano in view of Fourre and further in view of Sun teaches the detector according to claim 1. Romano in view of Fourre and further in view of Sun does not teach wherein the evaluation device is further configured for applying at least one material-dependent image filter to the image to determine the at least one material property of the object. However, Kollias teaches wherein the evaluation device is further configured for applying at least one material-dependent image filter to the image to determine the at least one material property of the object (Kollias, “In another aspect, the invention features a method of photographing the skin of a person comprising: (i) illuminating the skin with at least one light source, wherein the light source either emits substantially only light having a wavelength from about 380 to about 430 nm or emits light through a filter that emits substantially only light having a wavelength from about 380 to about 430 nm, and (ii) capturing the image of such illuminated skin with a camera; wherein the light entering the camera is also filtered with a long pass filter, wherein the long pass filter substantially eliminates light having a wavelength below about 400 nm.”, pg. 1, paragraph 0010; images taken of skin are captured by implementing a filter that captures images at wavelengths which correspond to the material of skin.). Romano in view of Fourre and further in view of Sun teaches using a camera to evaluate reflection features to determine a material property of an imaged object (Romano, columns 4 and 5, lines 44-67 and 1-20, respectively). Romano in view of Fourre and further in view of Sun does not teach applying a material-dependent image filter.
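As an editor's sketch of the wavelength-based filtering the Kollias quote describes: a source emitting only roughly 380-430 nm combined with a roughly 400 nm long-pass filter at the camera leaves an effective passband of roughly 400-430 nm. The function name and the treatment of the bounds as a simple inclusive interval are illustrative assumptions; only the numeric values come from the quoted passage.

```python
def reaches_camera(wavelength_nm, source_band=(380.0, 430.0), long_pass_cutoff=400.0):
    # Kollias-style material-dependent filtering (sketch): light reaches the
    # camera only if the source emits it (within source_band) AND it passes
    # the long-pass filter (at or above long_pass_cutoff). The surviving
    # ~400-430 nm band is the one chosen for imaging skin.
    lo, hi = source_band
    return lo <= wavelength_nm <= hi and wavelength_nm >= long_pass_cutoff

# 390 nm is emitted but blocked by the long-pass filter; 450 nm is never emitted.
print([w for w in (390, 405, 420, 450) if reaches_camera(w)])  # → [405, 420]
```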
Kollias teaches capturing images through a material-dependent image filter (see above). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the camera of Romano in view of Fourre and further in view of Sun to include capturing images using a material-dependent filter as taught by Kollias (Kollias, pg. 1, paragraph 0010). The motivation for doing so would have been to configure the camera to capture distinct material characteristics corresponding to skin, thereby improving the evaluation of reflection features. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Romano in view of Fourre and further in view of Sun with Kollias to obtain the invention as specified in claim 18.

Allowable Subject Matter

Claims 7 and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CONNOR LEVI HANSEN whose telephone number is (703)756-5533. The examiner can normally be reached Monday-Friday 9:00-5:00 (ET). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached on (571) 272-3638.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /CONNOR L HANSEN/Examiner, Art Unit 2672 /SUMATI LEFKOWITZ/Supervisory Patent Examiner, Art Unit 2672

Prosecution Timeline

Aug 12, 2022: Application Filed
Nov 06, 2024: Non-Final Rejection — §103
Jan 22, 2025: Response Filed
Feb 28, 2025: Final Rejection — §103
Mar 31, 2025: Applicant Interview (Telephonic)
Mar 31, 2025: Examiner Interview Summary
Apr 02, 2025: Response after Non-Final Action
May 06, 2025: Request for Continued Examination
May 08, 2025: Response after Non-Final Action
May 22, 2025: Non-Final Rejection — §103
Aug 17, 2025: Interview Requested
Aug 20, 2025: Applicant Interview (Telephonic)
Aug 20, 2025: Examiner Interview Summary
Aug 25, 2025: Response Filed
Oct 17, 2025: Final Rejection — §103
Dec 15, 2025: Interview Requested
Dec 22, 2025: Examiner Interview (Telephonic)
Dec 22, 2025: Examiner Interview Summary
Jan 23, 2026: Request for Continued Examination
Feb 04, 2026: Response after Non-Final Action
Feb 18, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530785: TRACKING DEVICE, TRACKING METHOD, AND RECORDING MEDIUM (granted Jan 20, 2026; 2y 5m to grant)
Patent 12524984: HISTOGRAM OF GRADIENT GENERATION (granted Jan 13, 2026; 2y 5m to grant)
Patent 12518363: IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, IMAGE PROCESSING SYSTEM, AND STORAGE MEDIUM WITH PIECEWISE LINEAR FUNCTION FOR TONE CONVERSION ON IMAGE (granted Jan 06, 2026; 2y 5m to grant)
Patent 12499648: IMAGE PROCESSING APPARATUS, IMAGE CAPTURING APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM FOR DETECTING SUBJECT IN CAPTURED IMAGE (granted Dec 16, 2025; 2y 5m to grant)
Patent 12482257: REDUCING ENVIRONMENTAL INTERFERENCE FROM IMAGES (granted Nov 25, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 75%
With Interview: 99% (+29.2%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 28 resolved cases by this examiner. Grant probability derived from career allow rate.
