Prosecution Insights
Last updated: April 19, 2026
Application No. 18/573,988

SIGNAL PROCESSING DEVICE AND SIGNAL PROCESSING METHOD

Status: Non-Final OA (§103)
Filed: Dec 22, 2023
Examiner: THOMAS, SOUMYA
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Sony Group Corporation
OA Round: 1 (Non-Final)
Grant Probability: 100% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 100% (2 granted / 2 resolved; +38.0% vs TC avg; above average)
Interview Lift: +0.0% (minimal; based on resolved cases with interview)
Typical Timeline: 2y 9m average prosecution; 17 applications currently pending
Career History: 19 total applications across all art units

Statute-Specific Performance

§101: 6.8% (-33.2% vs TC avg)
§103: 64.4% (+24.4% vs TC avg)
§102: 13.6% (-26.4% vs TC avg)
§112: 11.9% (-28.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 2 resolved cases.

Office Action

Non-Final Rejection — §103 (mailed Dec 17, 2025)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Specification

The disclosure is objected to because it contains an embedded hyperlink and/or other form of browser-executable code (see paragraph [0142]). Applicant is required to delete the embedded hyperlink and/or other form of browser-executable code; references to websites should be limited to the top-level domain name without any prefix such as http:// or other browser-executable code. See MPEP § 608.01.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: an acquisition unit in Claim 1; a transparent subject determination unit in Claims 1-6; an output unit in Claims 1, 7, 9, 15, 17, and 18; a candidate prediction unit in Claims 7, 12, 15, and 18; a calculation unit in Claim 10; a storage unit in Claims 11 and 14; a predictor in Claims 12, 13, 18, and 19; and a control unit in Claim 14.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

Claims 1-3, 10, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kienzler et al. (US Pub No 2018/0372849), hereinafter Kienzler, in view of Chaudhry (US Pub No 2016/0356887), hereinafter Chaudhry, further in view of Klank et al. (Klank, Ulrich, et al., “Transparent Object Detection and Reconstruction on a Mobile Platform”, 2011 IEEE International Conference on Robotics and Automation, 2011), hereinafter Klank, and further in view of Grossmann et al. (WO 2013/148308 A1), hereinafter Grossmann.

As to Claim 1, Kienzler teaches a signal processing device (see Fig. 1, optoelectronic sensor 10) comprising: an acquisition unit (see Fig. 1, “Accumulate” 30, which is a memory (see paragraph [0038])) that acquires histogram data of a flight time of irradiation light to a subject (see paragraph [0038], “Very roughly, individual times of flight are first collected in a memory 30. This can already be done in a combined manner, for example in a histogram having a bin width that is selected while taking account of the desired resolution and of the memory requirement;” and paragraph [0001], “an individual time of flight measurement unit for determining an individual time of flight of a light signal from the sensor to the object”).
Kienzler fails to teach a transparent subject determination unit that determines transparency on a basis of peak information indicated by the histogram data. However, Chaudhry teaches a transparent subject determination unit (see Fig. 1, face detection system 140), and teaches that histogram data collected by a ranging sensor can be used to determine whether or not the subject is transparent (see paragraph [0020], “According to various implementations of the invention, a nearest cluster in histogram 200 is deemed to be those range measurements associated with transparent surface 160, and a second nearest cluster in histogram 200 is deemed to be those range measurements associated with target 110”). Chaudhry is combinable with Kienzler as both are from the analogous field of determining transparency through optical sensors. Thus, it would have been obvious to one of ordinary skill in the art to combine the teachings of Chaudhry with Kienzler. The motivation for doing so would be to more accurately detect objects located behind a transparent surface. Chaudhry teaches in paragraphs [0002], [0003], and [0004], “For example, the target may be on another side of a transparent storefront in a retail environment, behind a windshield or other window in a vehicle checkpoint environment, or behind some other transparent surface in another environment as would be appreciated. In such environments, the acquisition system may receive return signals from the transparent surface, from material on the transparent surface, from the target, from other objects, or any combination thereof. Determining which of these return signals corresponds to measurements of a range to the target, as opposed to the transparent surface, etc., is difficult. What is needed is an improved system and method for determining range to a target located behind a transparent surface.” Thus, it would have been obvious to combine the teachings of Kienzler with Chaudhry.

Kienzler fails to teach an output unit that outputs three-dimensional coordinates of the subject calculated on a basis of histogram data. However, Chaudhry teaches an output unit (see Fig. 1, Camera 130, which displays images) that outputs three-dimensional images of a subject behind a transparent object, calculated on a basis of histogram data (see paragraph [0013], “In some implementations of the invention, target acquisition system 100 comprises a face detection system 140. In some implementations of the invention, face detection system 140 detects a face (or other target) in the scene, and attempts to obtain a three-dimensional image (i.e., a collection of three-dimensional measurements) of the face based on the range and Doppler velocity measurements from lidar 120”, and see paragraph [0021], “According to various implementations of the invention, once a cluster of bins in histogram 200 is deemed to be associated with target 110, range measurements outside this cluster may be filtered as extraneous and in some implementations, ignored.”). Thus, the histogram is used to identify ranges associated with a target, and these ranges are used to obtain a three-dimensional image.
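Editor's note: the clustering idea quoted from Chaudhry's paragraph [0020] is easy to picture in code. Below is a minimal sketch of that idea, assuming a simple gap-based clustering over histogram bins; the function name, bin width, and thresholds are invented for illustration and do not come from any cited reference.

```python
import numpy as np

def classify_tof_histogram(histogram, bin_width_m=0.05,
                           min_count=10, gap_bins=3):
    """Minimal sketch: split the occupied bins of a ToF range histogram
    into clusters, then (following the idea quoted from Chaudhry [0020])
    treat the nearest cluster as a transparent surface and the
    second-nearest cluster as the target behind it."""
    histogram = np.asarray(histogram)
    occupied = np.flatnonzero(histogram >= min_count)
    if occupied.size == 0:
        return {"transparent_surface_m": None, "target_m": None}
    # Start a new cluster wherever consecutive occupied bins are more
    # than gap_bins apart.
    splits = np.flatnonzero(np.diff(occupied) > gap_bins) + 1
    clusters = np.split(occupied, splits)
    ranges_m = [float(np.mean(c)) * bin_width_m for c in clusters]
    if len(ranges_m) == 1:
        # A single return cluster: no evidence of a transparent surface.
        return {"transparent_surface_m": None, "target_m": ranges_m[0]}
    return {"transparent_surface_m": ranges_m[0], "target_m": ranges_m[1]}
```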
Both Chaudhry and Kienzler fail to teach that transparency is determined from three-dimensional coordinates. Additionally, Chaudhry and Kienzler fail to teach that the three-dimensional coordinates of the subject are corrected on a basis of a transparent subject determination result of the transparent subject determination unit. However, Klank teaches a method in which a ToF sensor (see Fig. 2, pg. 5972) is used to obtain three-dimensional data, which can be used to determine transparency (see Section I, subsection A, page 5972, “The method then processes every candidate and checks whether it has the characteristic of a transparent objects when comparing the two views. In order to perform this check, we first establish 2D image correspondences by applying a perspectively invariant matching in the intensity channels [17] for the respective candidate. In the next step the algorithm ascertains whether a candidate is a transparent object or not by checking for inconsistencies in its 2D and 3D points when comparing the two views”). Klank also teaches that three-dimensional coordinates can be corrected to create a reconstruction of transparent objects when a transparent object is detected (see Abstract, pg. 5971, “In this paper we propose a novel approach to detect and reconstruct transparent objects…If their line of sight did not pass a transparent object or suffered any other major defect, this prediction will highly correspond to the actual measured 3D points of the second view. Otherwise, if a detectable error occurs, we approximate a more exact point to point matching and reconstruct the original shape by triangulating the points in the stereo setup.”). Klank is combinable with both Chaudhry and Kienzler, because all three are from the analogous art of using ranging sensors to measure distances. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Klank with the teachings of Kienzler and Chaudhry. The motivation for doing so would be to allow for more robust detection of transparent objects in a variety of settings, which will allow robots to more easily detect and manipulate transparent objects (see Section I, Introduction, page 5972, “Our approach provides a foundation to expand available object recognition systems by transparency, leading to more robustness in the robot’s environment perception. Apart from that, we took the household-robot as the use case for the actual system. Accordingly, our method is supposed to enable a robot not only to detect but also to manipulate transparent objects, which requires a reconstruction.”).

Kienzler, Chaudhry, and Klank all fail to teach that color information is corrected. However, Grossmann teaches distortion caused by a lens (a transparent object), and that three-dimensional coordinates with corresponding color information can be output (see page 15, lines 8-11, “The color image processing block 1150 may rectify the color image so that the lens distortion is removed and so that the image plane of the rectified color image is parallel to the image plane of the stereo depth map”). Grossmann is combinable with Kienzler, Chaudhry, and Klank since all come from the analogous art of image analysis. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Grossmann with the teachings of Kienzler, Chaudhry, and Klank. The motivation for doing so would be to improve the alignment of a depth map and image. Grossmann teaches on page 1, lines 1-20, “Depth maps and images together may constitute the primary input of many applications, such as video surveillance, video games (e.g. the Microsoft Kinect), hand gesture interpretation and other applications that take input unobtrusively from an un-instrumented user…In some cases, the design of such applications may be easier if the depth map and image are registered or aligned, in the sense that the depth map is, or made to appear to be, produced by a depth sensor that is placed at the same physical location as the imaging sensor that produced the image. In practice, however, the depth map and the image are often produced by different sensors and consequently may be imaged from distinct physical locations. Fortunately, it may be possible to warp a depth map or image in such a way that it appears nearly as seen from a different center of projection.” Thus, it would have been obvious to combine the teachings of Grossmann, Kienzler, Chaudhry, and Klank in order to obtain the invention as taught in Claim 1.

As to Claim 2, Kienzler in view of Chaudhry, Klank, and Grossmann teaches that transparency is determined on a basis of whether one peak is observed or a plurality of peaks is observed in the histogram data (see Chaudhry, paragraph [0020], “According to various implementations of the invention, a nearest cluster in histogram 200 is deemed to be those range measurements associated with transparent surface 160, and a second nearest cluster in histogram 200 is deemed to be those range measurements associated with target 110”).

As to Claim 3, Kienzler in view of Chaudhry, Klank, and Grossmann teaches that when a plurality of peaks is observed in the histogram data, the transparent subject determination unit determines whether the plurality of peaks is due to the transparent subject or an object boundary (see Kienzler, paragraph [0046], “A plurality of measurement peaks can occur in a measurement period, for example with semi-transparent objects or edge impingements” and Chaudhry, paragraph [0020], “According to various implementations of the invention, a nearest cluster in histogram 200 is deemed to be those range measurements associated with transparent surface 160, and a second nearest cluster in histogram 200 is deemed to be those range measurements associated with target 110”).
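Editor's note: the one-peak-versus-many test discussed for Claims 2 and 3 can be sketched in a few lines. The snippet below is a hypothetical illustration using SciPy's peak finder; the thresholds are assumptions, and telling a transparent subject apart from an edge impingement (Kienzler [0046]) needs the per-pixel analysis discussed under Claim 4 later in this action.

```python
import numpy as np
from scipy.signal import find_peaks

def count_histogram_peaks(histogram, min_height=10, min_separation_bins=4):
    """Count significant peaks in a ToF histogram. One peak suggests an
    ordinary opaque return; several peaks may indicate a transparent
    subject in front of the target, or an object edge."""
    peaks, _ = find_peaks(np.asarray(histogram), height=min_height,
                          distance=min_separation_bins)
    return len(peaks)

# Example: a glass pane returning at bin 10 and a subject behind it at bin 30.
hist = np.zeros(64)
hist[10], hist[30] = 40, 55
print(count_histogram_peaks(hist))  # -> 2
```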
As to Claim 10, Kienzler in view of Chaudhry, Klank, and Grossmann teaches a calculation unit (see Fig. 1, evaluation circuits 28, 30, 32, and paragraph [0039]) that calculates the three-dimensional coordinates of the subject on a basis of the histogram data (see Chaudhry [0013]) and a camera posture of a ranging sensor (see Klank, Section I, Subsection A, page 5972, “Secondly the robot performs a movement within a certain range which provides pose parameters that can be used for a 3D point transformation between the first and the second view on the scene. These parameters are acquired from the operating system ROS by comparing the two robot positions for each view which originate from an AMCL driven self-localization supported by two laser sensors. To ensure that the candidates remain in the field of view of the ToF camera, every candidate has an approximated world coordinate pose attached to it such that the platform can focus this pose and run the last step” and Section II, page 5973, “The only movement is performed by our platform such that the ToF-camera position changes.” Thus, the two different positions (or postures) of the ToF sensor are used to calculate a 3D point transformation) that generates the histogram data (see Kienzler [0005]).

As to Claim 20, Claim 20 recites a method that describes the same process executed by the signal processing device claimed in Claim 1. Therefore, the rejection and rationale are analogous to those made for Claim 1.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Kienzler et al. (US Pub No 2018/0372849), hereinafter Kienzler, in view of Chaudhry (US Pub No 2016/0356887), hereinafter Chaudhry, further in view of Klank et al. (Klank, Ulrich, et al., “Transparent Object Detection and Reconstruction on a Mobile Platform”, 2011 IEEE International Conference on Robotics and Automation, 2011), hereinafter Klank, further in view of Grossmann et al. (WO 2013/148308 A1), hereinafter Grossmann, and further in view of Price et al. (US Pub No 2021/0027479), hereinafter Price.

As to Claim 6, Kienzler in view of Chaudhry, Klank, and Grossmann fails to teach wherein the transparent subject determination unit determines whether the subject is the transparent subject on a basis of the peak information, the three-dimensional coordinates of the subject, and a thermo image of the subject. However, Price teaches that a thermal image can be used to identify a transparent object (see paragraph [0002], “The processor may be configured to detect image discrepancies between the visible light image and the thermal image and, based on the detected image discrepancies, determine a presence of a transparent object in the scene”). Price is combinable with Kienzler, Chaudhry, Klank, and Grossmann, because all are from the analogous field of image analysis. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Price with the teachings of Kienzler, Chaudhry, Klank, and Grossmann. The motivation for doing so would be to create more accurate depth maps of transparent objects, which are difficult to image with IR light.
Price teaches in paragraph [0015], “Although numerous imaging devices and technologies exist through which images may be rendered for two and three-dimensional imaging, some objects remain difficult to detect depending on the imaging technology employed. In particular, transparent objects such as windows may not be detectable and/or discernable using known imaging approaches. Visible light (VL) images, infra-red (IR) images, thermal images, and depth images generated via time-of-flight data may exclude or provide inaccurate or undiscernible representations of transparent objects. For example, a depth map created for a scene that includes a window may only produce a void area or inaccurate information in the map where the window is located. When image data is subsequently processed into a surface reconstruction, a transparent object may not be properly represented. Windows are specular reflectors for light, and are lossy reflectors in the IR range. Thus, when rendering a depth image, depth values for locations of transparent objects may be given as void values. Hence, detecting and rendering transparent objects such as windows and glass has been problematic in imaging fields.” Thus, it would have been obvious to combine the teachings of Price with the teachings of Kienzler, Chaudhry, Klank, and Grossmann in order to obtain the invention as claimed in Claim 6.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Kienzler et al. (US Pub No 2018/0372849), hereinafter Kienzler, in view of Chaudhry (US Pub No 2016/0356887), hereinafter Chaudhry, further in view of Klank et al. (Klank, Ulrich, et al., “Transparent Object Detection and Reconstruction on a Mobile Platform”, 2011 IEEE International Conference on Robotics and Automation, 2011), hereinafter Klank, further in view of Grossmann et al. (WO 2013/148308 A1), hereinafter Grossmann, and further in view of Tanaka et al. (Tanaka, Kenichiro, et al., “Recovering Transparent Shape from Time-of-Flight Distortion”, 2016), hereinafter Tanaka.

As to Claim 15, Kienzler in view of Chaudhry, Klank, and Grossmann fails to explicitly teach a candidate prediction unit that predicts a candidate for refraction and incidence information of light of the subject on a basis of the peak information, the three-dimensional coordinates of the subject, and the transparent subject determination result, wherein the output unit outputs three-dimensional coordinates of refraction and incidence information selected out of candidates for the refraction and incidence information as the three-dimensional coordinates of the subject with corrected three-dimensional coordinates. However, Tanaka teaches that a Time-of-Flight sensor can be used to determine three-dimensional coordinates of a transparent subject (see Abstract, page 4387, “This paper presents a method for recovering shape and normal of a transparent object from a single viewpoint using a Time-of-Flight (ToF) camera”), that the refraction and incidence directions can be obtained (see Section 3.2, page 4389, paragraph 3, “Based on this, the refractive ray direction v2 can be obtained from the hypothesized front surface point f to back surface point”), and that corrected 3D coordinates can be output (see Section 3.2, page 4390, paragraph 3, “If the assumed front depth t is correct, two normals np(t) and nd(t) should coincide; therefore, the estimation problem can be casted as an optimization problem as

  t̂ = argmin_t Σ_{c∈C} ‖ n_{p,c}(t_c) − n_{d,c}(t) ‖²

where t is a vector listing tc for all pixels, C is a set of all pixels, tc is the hypothesized front depth of pixel c, and np,c and nd,c are the surface normal computed from the refractive path and that from hypothesized shape at pixel c, respectively”). Tanaka is combinable with Kienzler, Chaudhry, Klank, and Grossmann since all are from the analogous field of determining transparency through ranging sensors. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tanaka with the teachings of Kienzler, Chaudhry, Klank, and Grossmann. The motivation for doing so would be to reconstruct scenes with multiple refractions. Tanaka teaches on page 4387, paragraph 4, “Unlike previous single viewpoint approaches that are restricted to a scene with a single refraction, or requiring a number of light sources to illuminate the scene, the proposed method is able to recover a scene with two refraction surfaces from a single view point”. Thus, it would have been obvious to combine the teachings of Tanaka with the teachings of Kienzler, Chaudhry, Klank, and Grossmann in order to obtain the invention as claimed in Claim 15.
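Editor's note: the "refractive ray direction" Tanaka reasons about is ordinary Snell's-law geometry. The sketch below implements the standard vector form of Snell's law; it is textbook optics offered for context, not code from the Tanaka paper, and the function name and refractive indices are our assumptions.

```python
import numpy as np

def refract(incident, normal, n1=1.0, n2=1.5):
    """Vector form of Snell's law: direction of the refracted ray.
    `incident` and `normal` are unit vectors, with `normal` pointing
    toward the incident side; n1/n2 are refractive indices (1.5 ~ glass).
    Returns None on total internal reflection."""
    eta = n1 / n2
    cos_i = -float(np.dot(normal, incident))
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None  # total internal reflection: no refracted ray
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * np.asarray(incident) + (eta * cos_i - cos_t) * np.asarray(normal)
```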
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Kienzler et al. (US Pub No 2018/0372849), hereinafter Kienzler, in view of Chaudhry (US Pub No 2016/0356887), hereinafter Chaudhry, further in view of Klank et al. (Klank, Ulrich, et al., “Transparent Object Detection and Reconstruction on a Mobile Platform”, 2011 IEEE International Conference on Robotics and Automation, 2011), hereinafter Klank, further in view of Grossmann et al. (WO 2013/148308 A1), hereinafter Grossmann, and further in view of Song et al. (Song, Seonjong, et al., “Depth Reconstruction of Translucent Objects from a Single Time-of-Flight Camera using Deep Residual Networks”, 2018), hereinafter Song.

As to Claim 18, Kienzler in view of Chaudhry, Klank, and Grossmann teaches a candidate prediction unit (see Klank, Fig. 2, operating system ROS) that predicts a positional shift of the three-dimensional coordinates of the subject (see Klank, Subsection V, paragraph 3, page 5976, “In the next step the algorithm ascertains whether a candidate is a transparent object or not by checking for inconsistencies in its 2D and 3D points when comparing the two views. If the check is positive, a 3D reconstruction is carried out and the new 3D points are transformed into a suitable form for later grasping or path planning algorithms.”), wherein the output unit outputs the three-dimensional coordinates of the subject with the positional shift corrected by the candidate prediction unit (see page 5977, reconstructed ToF point cloud, Fig. 11). Klank fails to teach a predictor that learns the positional shift of the three-dimensional coordinates due to refraction of light of the subject by a neural network. However, Song teaches a predictor (see Section 3.2, page 7, “We use RMSProp [30] with the momentum of 0.5 and a smooth L1 loss as the objective function”, where the objective function is the predictor) derived from a neural network that can learn to correct depth distortion in order to create three-dimensional reconstructions (see Section 1: Introduction, page 2, “In this paper, we propose a learning-based approach to compensating the depth distortion in translucent objects using a single time-of-flight camera. We utilize both the foreground depth map and background depth map to correct the depth distortion as inputs and recover the correct depth map for the translucent object… More specifically, we develop deep convolutional networks for recovering translucent objects from depth maps”). Song is combinable with Kienzler, Chaudhry, Klank, and Grossmann since all are from the analogous field of image analysis. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Song with the teachings of Kienzler, Chaudhry, Klank, and Grossmann. The motivation for doing so would be to more accurately determine the depth of transparent objects despite complex light interactions. Song teaches in Section I, page 1, “However, the appearance of translucent object is determined by the complex light interactions associated with the light refraction and transmission. Consequently, when we capture the translucent object using a commercial depth camera, the resultant depth map presents significant errors.” Thus, it would have been obvious to combine the teachings of Song with Kienzler, Chaudhry, Klank, and Grossmann in order to obtain the invention as claimed in Claim 18.
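Editor's note: to make the Song citation concrete, here is a toy residual depth-correction network in the spirit of the quoted passages (foreground and background depth maps in, corrected depth out, trainable with a smooth L1 loss). It assumes PyTorch; the layer sizes and overall architecture are our invention, not the network from the paper.

```python
import torch
import torch.nn as nn

class DepthResidualCorrector(nn.Module):
    """Toy residual network mapping distorted ToF depth of a translucent
    object to corrected depth. Inputs: foreground + background depth maps,
    stacked as two channels."""
    def __init__(self, channels=32):
        super().__init__()
        self.stem = nn.Conv2d(2, channels, 3, padding=1)
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.head = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, depth_fg_bg):              # (B, 2, H, W)
        x = torch.relu(self.stem(depth_fg_bg))
        x = torch.relu(x + self.block(x))        # residual connection
        # Predict a correction added to the raw foreground depth channel.
        return depth_fg_bg[:, :1] + self.head(x)

# Training would pair this with a smooth L1 objective, as the quoted
# passage describes: loss_fn = nn.SmoothL1Loss()
```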
Allowable Subject Matter

Claim 4 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Kienzler in view of Chaudhry, Klank, and Grossmann fails to disclose determining that the plurality of peaks is due to the transparent subject in a case where the plurality of peaks is uniformly detected in a plurality of pixels, and determining that the plurality of peaks is due to the object boundary in a case where there is a bias in pixels in which the plurality of peaks is detected. Fritz et al. (Fritz, Mario, “An Additive Latent Feature Model for Transparent Object Recognition”, 2009) discloses how edges of transparent objects may be detected from histogram data, but the histogram data is not obtained from a time-of-flight sensor. Additionally, Fritz fails to disclose how the uniformity of the peaks or bias in the peaks can be used to distinguish between a boundary and a transparent object.
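Editor's note: the uniformity-versus-bias distinction that makes Claim 4 allowable can be sketched briefly. This is a hypothetical illustration of the claimed logic as we read it, with an invented threshold: multiple peaks appearing uniformly across a pixel region point to a transparent subject, while multiple peaks confined to a biased subset of pixels point to an object boundary.

```python
import numpy as np

def classify_multi_peak_cause(multi_peak_mask, uniform_fraction=0.8):
    """`multi_peak_mask` is a boolean H x W array marking pixels whose
    ToF histograms contain more than one peak. If nearly all pixels in
    the region show multiple peaks, attribute them to a transparent
    subject; otherwise attribute them to an object boundary."""
    fraction = float(np.mean(multi_peak_mask))
    return ("transparent subject" if fraction >= uniform_fraction
            else "object boundary")
```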
Claim 5 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Kienzler in view of Chaudhry, Klank, and Grossmann fails to disclose that the transparent subject determination unit solves a boundary identification problem. Klank teaches that the boundaries of transparent objects may be identified and emphasized, but fails to disclose that the boundaries correspond to a plurality of peaks.

Claims 7-9 and 11-14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. As to Claim 7, Kienzler in view of Chaudhry, Klank, and Grossmann fails to disclose predicting a candidate for color information on the basis of the peak information, the three-dimensional coordinates of the subject, and the transparent subject determination result. Rahim et al. (Rahim, Jamal Ahmed, “Colored Transparent Object Matting from a Single Image Using Deep Learning”, 2019) discloses a method in which colored transparent media can be segmented and separated from a background image. However, this can only be done on a two-dimensional image, and no peak information from a time-of-flight device is used.

Claims 16-17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. As to Claim 16, Kienzler in view of Chaudhry, Klank, Grossmann, and Tanaka fails to explicitly disclose that a plurality of pairs of refraction/incidence candidates is predicted. As to Claim 17, Kienzler in view of Chaudhry, Klank, Grossmann, and Tanaka fails to explicitly disclose that an output unit selects three-dimensional coordinates of a refraction and incidence information candidate having a likelihood larger than a first threshold out of the candidates for refraction and incidence information of the subject, and outputs the three-dimensional coordinates as the three-dimensional coordinates of the subject.

Claim 19 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. As to Claim 19, Kienzler in view of Chaudhry, Klank, Grossmann, and Song teaches a predictor that can predict the positional shift of three-dimensional coordinates using a camera posture as an input. However, Song fails to teach that the histogram data itself is also an input into the predictor.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Yang (US Pub No 2017/0366737) discloses a Time-of-Flight sensor which outputs a histogram. The time-of-flight data can be used to determine whether a subject is transparent.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SOUMYA THOMAS, whose telephone number is (571) 272-8639. The examiner can normally be reached M-F 8:30-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S.T./ Examiner, Art Unit 2664
/JENNIFER MEHMOOD/ Supervisory Patent Examiner, Art Unit 2664

Prosecution Timeline

Dec 22, 2023 — Application Filed
Dec 17, 2025 — Non-Final Rejection, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 100%
With Interview: 99% (+0.0%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 2 resolved cases by this examiner. Grant probability is derived from the career allow rate.
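One caveat worth making explicit, since the page itself flags the two-case sample: a raw allow rate from two resolved cases is a very noisy estimator. As a rough back-of-the-envelope illustration (our own note, not part of this page's model, whose internals are not disclosed), a Wilson score interval shows how wide the plausible range behind "100%" really is.

```python
from math import sqrt

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / trials
    denom = 1 + z ** 2 / trials
    center = p + z ** 2 / (2 * trials)
    margin = z * sqrt(p * (1 - p) / trials + z ** 2 / (4 * trials ** 2))
    return (center - margin) / denom, (center + margin) / denom

# 2 grants out of 2 resolved cases displays as "100%", but the data are
# consistent with a long-run allow rate anywhere from ~34% to 100%:
print(wilson_interval(2, 2))  # ~ (0.34, 1.0)
```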
