Prosecution Insights
Last updated: April 19, 2026
Application No. 18/593,255

DISAMBIGUATION OF VISUAL REPLICAS FROM DIRECT VISUAL REPRESENTATIONS OF A TARGET OBJECT

Non-Final OA (§101, §102, §103)
Filed: Mar 01, 2024
Examiner: FATIMA, UROOJ
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)

Grant Probability: 100% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Grants 100% — above average
Career Allow Rate: 100% (1 granted / 1 resolved; +38.0% vs TC avg)
Interview Lift: +100.0% across resolved cases with an interview (strong)
Typical Timeline: 2y 9m average prosecution; 16 applications currently pending
Career History: 17 total applications across all art units
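For readers who want to sanity-check the interview figure, the lift is just the relative change in allowance rate between resolved cases with and without an interview. A minimal Python sketch under hypothetical counts (the helper names and the example numbers are illustrative assumptions, not part of any USPTO dataset):

    def allowance_rate(granted: int, resolved: int) -> float:
        """Share of resolved applications that were granted."""
        return granted / resolved if resolved else 0.0

    def interview_lift(rate_with: float, rate_without: float) -> float:
        """Relative change in allowance rate between cases with and without
        an interview; +1.0 (i.e., +100%) means twice the allowance rate."""
        if rate_without == 0.0:
            return float("inf")  # any allowance with an interview is an unbounded lift
        return (rate_with - rate_without) / rate_without

    # Hypothetical counts for illustration (this report rests on 1 resolved case).
    with_rate = allowance_rate(granted=2, resolved=2)     # 100%
    without_rate = allowance_rate(granted=1, resolved=2)  # 50%
    print(f"Interview lift: {interview_lift(with_rate, without_rate):+.1%}")  # +100.0%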

Statute-Specific Performance

§101: 24.6% (-15.4% vs TC avg)
§103: 41.5% (+1.5% vs TC avg)
§102: 12.3% (-27.7% vs TC avg)
§112: 20.0% (-20.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 1 resolved case
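The "vs TC avg" deltas above are reproducible by simple subtraction. A minimal sketch, assuming a flat Tech Center average estimate of 40.0% for every statute (an assumption, though it is consistent with all four deltas shown):

    # Examiner per-statute rejection rates from the report (percent).
    examiner_rates = {"101": 24.6, "103": 41.5, "102": 12.3, "112": 20.0}
    TC_AVG_ESTIMATE = 40.0  # assumed flat Tech Center estimate; matches all four deltas

    for statute, rate in examiner_rates.items():
        delta = rate - TC_AVG_ESTIMATE
        print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")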

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Currently pending Claim(s): 1-20.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claim 19: “means for obtaining an image frame captured by a camera” refers to figures 3A-3C “In an aspect, a means for obtaining the image frame at 710 includes processor(s) 342 or 384 or 394, data bus 308 or 382 or 392, receiver 312 or 322 or 352 or 362, sensor(s) 344, visual replica disambiguation component 348 or 388 or 398, etc., of FIG. 3A-3C.” (Application Pub, paragraph [0128]). “means for detecting a plurality of visual detections of a target object within the image frame” refers to figures 3A-3C “a means for obtaining the detection at 730 includes processor(s) 342 or 384 or 394, visual replica disambiguation component 348 or 388 or 398, etc., of FIG. 3A-3C.” (Application Pub, paragraph [0129]). “means for obtaining disambiguation information” refers to figures 3A-3C “a means for obtaining the disambiguation information at 730 includes processor(s) 342 or 384 or 394, data bus 308 or 382 or 392, receiver 312 or 322 or 352 or 362, sensor(s) 344, visual replica disambiguation component 348 or 388 or 398, etc., of FIG. 3A-3C.” (Application Pub, paragraph [0130]). “means for disambiguating the direct visual representation of the target object from the one or more visual replicas of the target object” refers to figures 3A-3C “a means for performing the disambiguation at 740 includes processor(s) 342 or 384 or 394, visual replica disambiguation component 348 or 388 or 398, etc., of FIG. 3A-3C.” (Application Pub, paragraph [0131]). Claim 20 further limits the “means for obtaining disambiguation information” and “means for disambiguating” in claim 19 to the use of behavioral, RF, DT-based, and/or sound emission information.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., abstract idea - mental process and mathematical calculation) without significantly more. Step (1): Are the claims directed to a process, machine, manufacture, or composition of matter; Step (2A) Prong One: Are the claims directed to a judicially recognized exception, i.e., a law of nature, a natural phenomenon, or an abstract idea; Prong Two: If the claims are directed to a judicial exception under Prong One, then is the judicial exception integrated into a practical application; Step (2B): If the claims are directed to a judicial exception and do not integrate the judicial exception, do the claims provide an inventive concept.

Step 1: Claim 1 recites a method. Therefore, the claim is directed to the statutory category of a process. Step 2A, Prong One: Claim 1 recites: “detecting a plurality of visual detections of a target object within the image frame, wherein the plurality of visual detections comprises a direct visual representation of the target object and one or more visual replicas of the target object;”.
Under its broadest reasonable interpretation in light of the specification, the limitation encompasses a mental process of detecting a visual representation of the target object and the visual replica from an image, which is practically capable of being performed in the human mind with the assistance of pen and paper. “obtaining disambiguation information that is based on data separate from the image frame”. Under its broadest reasonable interpretation in light of the specification, the limitation recites a mathematical calculation and falls into the mathematical concepts grouping of abstract ideas. Obtaining disambiguation information is simply collecting and analyzing the data, which is practically capable of being performed in the human mind with the assistance of pen and paper. “disambiguating the direct visual representation of the target object from the one or more visual replicas of the target object based on the disambiguation information”. Under its broadest reasonable interpretation in light of the specification, the limitation encompasses a mental process of disambiguating a visual representation of the target object and the visual replica from an image, which is practically capable of being performed in the human mind with the assistance of pen and paper.

Prong Two: This judicial exception is not integrated into a practical application. The additional elements of “A method of operating a device, comprising: obtaining an image frame captured by a camera” amount to no more than mere necessary data gathering and applying because, under the broadest reasonable interpretation, the claim simply uses generic hardware to perform the abstract idea. Thus, they are insignificant extra-solution activity. Even when viewed in combination, these additional elements do not integrate the abstract idea into a practical application, and the claim is thus directed to the abstract idea.

Step (2B): Claim 1 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitations “A method of operating a device, comprising: obtaining an image frame captured by a camera” amount to no more than mere data gathering with general purpose hardware and provide no inventive concept. These elements, individually and in combination, are well-understood, routine, conventional activity. As such, the claim is ineligible.

Step 1: Claims 2-4 recite a method. Therefore, the claims are directed to the statutory category of a process. Step 2A, Prong One: Claims 2-4 merely narrow the previously recited abstract idea limitations. For the reasons described above, this judicial exception is not meaningfully integrated into a practical application, nor does it amount to significantly more than the abstract idea. The claims recite limitations similar to those described for the independent claims above and do not provide anything more than the mental process and mathematical calculation that are practically capable of being performed in the human mind with the assistance of pen and paper. Prong Two: These judicial exceptions are not integrated into a practical application, nor do the claims include additional elements that are sufficient to amount to significantly more. Thus, the claims are ineligible.

Step 1: Claim 5 recites a method. Therefore, the claim is directed to the statutory category of a process.
Step 2A, Prong One: Claim 5 recites: “geometric relationships comprise symmetry of movement of the two candidate visual detections along a plane that is perpendicular to the reflection plane across the sequence of image frames”. Under its broadest reasonable interpretation in light of the specification, the limitation recites a mathematical calculation and falls into the mathematical concepts grouping of abstract ideas. Determining the geometric relationship is simply analyzing the symmetry of movement of the two candidate visual detections along a plane, which is practically capable of being performed in the human mind with the assistance of pen and paper. “wherein the one or more geometric relationships comprise equality of distance between each of the two candidate visual detections and the reflection plane across the sequence of image frames”. Under its broadest reasonable interpretation in light of the specification, the limitation recites a mathematical calculation and falls into the mathematical concepts grouping of abstract ideas. Determining the geometric relationship is simply analyzing the distance of the two candidate visual detections and the plane, which is practically capable of being performed in the human mind with the assistance of pen and paper. “disambiguating the direct visual representation of the target object from the one or more visual replicas of the target object based on the disambiguation information”. Under its broadest reasonable interpretation in light of the specification, the limitation encompasses a mental process of disambiguating a visual representation of the target object and the visual replica from an image, which is practically capable of being performed in the human mind with the assistance of pen and paper. Prong Two: These judicial exceptions are not integrated into a practical application, nor does the claim include additional elements that are sufficient to amount to significantly more. Thus, the claim is ineligible.

Step 1: Claims 6-15 recite a method. Therefore, the claims are directed to the statutory category of a process. Step 2A, Prong One: Claims 6-15 merely narrow the previously recited abstract idea limitations. For the reasons described above, this judicial exception is not meaningfully integrated into a practical application, nor does it amount to significantly more than the abstract idea. The claims recite limitations similar to those described for the independent claims above and do not provide anything more than the mental process and mathematical calculation that are practically capable of being performed in the human mind with the assistance of pen and paper. Prong Two: These judicial exceptions are not integrated into a practical application, nor do the claims include additional elements that are sufficient to amount to significantly more. Thus, the claims are ineligible.

Step 1: Claim 16 recites a device. Therefore, the claim is directed to the statutory category of a machine. Step 2A, Prong One: Claim 16 recites: “detect a plurality of visual detections of a target object within the image frame, wherein the plurality of visual detections comprises a direct visual representation of the target object and one or more visual replicas of the target object;”. Under its broadest reasonable interpretation in light of the specification, the limitation encompasses a mental process of detecting a visual representation of the target object and the visual replica from an image, which is practically capable of being performed in the human mind with the assistance of pen and paper.
“obtain disambiguation information that is based on data separate from the image frame”. Under its broadest reasonable interpretation in light of the specification, the limitation recites a mathematical calculation and falls into the mathematical concepts grouping of abstract ideas. Obtaining disambiguation information is simply collecting and analyzing the data, which is practically capable of being performed in the human mind with the assistance of pen and paper. “disambiguate the direct visual representation of the target object from the one or more visual replicas of the target object based on the disambiguation information.”. Under its broadest reasonable interpretation in light of the specification, the limitation encompasses a mental process of disambiguating a visual representation of the target object and the visual replica from an image, which is practically capable of being performed in the human mind with the assistance of pen and paper. Prong Two: This judicial exception is not integrated into a practical application. The additional elements of “one or more memories;” and “one or more transceivers;” and “one or more processors communicatively coupled to the one or more memories and the one or more transceivers, the one or more processors, either alone or in combination” and “obtain an image frame captured by a camera” amount to no more than mere necessary data gathering and applying because, under the broadest reasonable interpretation, the claim simply uses generic hardware to perform the abstract idea. Thus, they are insignificant extra-solution activity. Even when viewed in combination, these additional elements do not integrate the abstract idea into a practical application, and the claim is thus directed to the abstract idea. Step (2B): Claim 16 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitations “one or more memories;” and “one or more transceivers;” and “one or more processors communicatively coupled to the one or more memories and the one or more transceivers, the one or more processors, either alone or in combination” and “obtain an image frame captured by a camera” amount to no more than mere data gathering with general purpose hardware and provide no inventive concept. These elements, individually and in combination, are well-understood, routine, conventional activity. As such, the claim is ineligible.

Step 1: Claims 17, 18, and 20 recite a device. Therefore, the claims are directed to the statutory category of a machine. Step 2A, Prong One: Claims 17, 18, and 20 merely narrow the previously recited abstract idea limitations. For the reasons described above, this judicial exception is not meaningfully integrated into a practical application, nor does it amount to significantly more than the abstract idea. The claims recite limitations similar to those described for the independent claims above and do not provide anything more than the mental process and mathematical calculation that are practically capable of being performed in the human mind with the assistance of pen and paper. Prong Two: These judicial exceptions are not integrated into a practical application, nor do the claims include additional elements that are sufficient to amount to significantly more. Thus, the claims are ineligible.

Step 1: Claim 19 recites a device. Therefore, the claim is directed to the statutory category of a machine.
Step 2A, Prong One: Claim 19 recites: “means for detecting a plurality of visual detections of a target object within the image frame, wherein the plurality of visual detections comprises a direct visual representation of the target object and one or more visual replicas of the target object;”. Under its broadest reasonable interpretation in light of the specification, the limitation encompasses a mental process of detecting a visual representation of the target object and the visual replica from an image, which is practically capable of being performed in the human mind with the assistance of pen and paper. “means for obtaining disambiguation information that is based on data separate from the image frame”. Under its broadest reasonable interpretation in light of the specification, the limitation recites a mathematical calculation and falls into the mathematical concepts grouping of abstract ideas. Obtaining disambiguation information is simply collecting and analyzing the data, which is practically capable of being performed in the human mind with the assistance of pen and paper. “means for disambiguating the direct visual representation of the target object from the one or more visual replicas of the target object based on the disambiguation information.”. Under its broadest reasonable interpretation in light of the specification, the limitation encompasses a mental process of disambiguating a visual representation of the target object and the visual replica from an image, which is practically capable of being performed in the human mind with the assistance of pen and paper. Prong Two: This judicial exception is not integrated into a practical application. The additional elements of “means for obtaining an image frame captured by a camera;” amount to no more than mere necessary data gathering and applying because, under the broadest reasonable interpretation, the claim simply uses generic hardware to perform the abstract idea. Thus, they are insignificant extra-solution activity. Even when viewed in combination, these additional elements do not integrate the abstract idea into a practical application, and the claim is thus directed to the abstract idea. Step (2B): Claim 19 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitations “means for obtaining an image frame captured by a camera;” amount to no more than mere data gathering with general purpose hardware and provide no inventive concept. These elements, individually and in combination, are well-understood, routine, conventional activity. As such, the claim is ineligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-5 and 12-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Park et al. (“Identifying reflected images from object detector in indoor environment utilizing depth information,” IEEE Robotics and Automation Letters 6.2 (2020): 635-642) (hereinafter, “Park”).
Regarding claim 1, Park discloses a method of operating a device, comprising: obtaining an image frame captured by a camera (Introduction, page 635, [right column first full paragraph] "manually annotated 9234 frames of total dataset and performed person detection using YOLO v3 [6] pre-trained on MS COCO dataset [7]."; Method, page 637 [left column paragraph 1] “Virtual object image reflected by the mirror reaches the camera through the light path reflected by the mirror (C-B-D). The camera regards the object as being located on a straight line (A-B-D), so it appears as if it is behind a mirror.”); [Figure 3 of Park reproduced here] detecting a plurality of visual detections of a target object within the image frame (the visual detections equate to the real image of the person marked with a green box or a green point and the virtual image reflected by the mirror marked with a red box or a red point; see Figure 4 and Figure 11), wherein the plurality of visual detections comprises a direct visual representation of the target object and one or more visual replicas of the target object (Figure 4, Method, page 637 [right column paragraph 1] "The real image of the person is marked with a green box, and the virtual image reflected by the mirror is marked with a red box."; Figure 11, Page 640, [left column last paragraph continuing on to right column paragraph 1] “The center point of the real person is represented by a green point, and the center point of the reflected person image is represented by a red point”); [Figures 4 and 11 of Park reproduced here] obtaining disambiguation information that is based on data separate from the image frame (Abstract, page 635 "compares the geometric relationship between the 3D spatial information of the detected object and its surrounding environment where the object locates."; Related Works, page 636 [right column paragraph 2] "in the proposed method it is not necessary to detect the mirror region itself to recognize whether the detected object in the indoor environment is a reflective image or not. Instead, the proposed method compares the geometric relationship between the detected object and the layout of the surrounding indoor environment."; Method, page 637 [right column paragraph 2] “depth camera world coordinates provided with depth information. It is then used to compare positional relationship with reference surface in world coordinate…walls are found, that is a layout of an indoor scene from a depth image. Geometric information is easily extracted from depth images, which helps to get the layout of an indoor scene. Once the scene layout is obtained, planes corresponding to the wall can be obtained as well as normal vector and center point… Space between the detected wall plane and camera can be regarded as interior space. As a result, bounding boxes locate outside this interior space can be considered as reflected virtual images.”); and disambiguating the direct visual representation of the target object from the one or more visual replicas of the target object based on the disambiguation information (Method, page 637 [left column paragraph 1] "we can find a reference surface to which the planar reflecting surface belongs, we can distinguish actual and reflected images by examining whether the object's 3D coordinate is behind or in front of the reference surface.").
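As an aside for readers unfamiliar with Park's approach, the test the examiner relies on (a detection is a reflection when its 3D coordinate falls on the far side of the reference wall plane from the camera) reduces to a signed-distance comparison. A minimal Python sketch under assumed plane parameters; this illustrates the cited passage, not Park's actual implementation:

    import numpy as np

    def signed_distance(point, plane_normal, plane_offset):
        """Signed distance from a 3D point to the plane n·x + d = 0."""
        return float(plane_normal @ point + plane_offset) / float(np.linalg.norm(plane_normal))

    def is_reflection(detection_xyz, camera_xyz, wall_normal, wall_offset):
        """True when the detection sits on the far side of the reference wall
        from the camera, i.e., outside the interior space (a reflected image)."""
        cam_side = signed_distance(camera_xyz, wall_normal, wall_offset)
        det_side = signed_distance(detection_xyz, wall_normal, wall_offset)
        return cam_side * det_side < 0  # opposite sides of the reference surface

    # Assumed geometry: camera at the origin, reference wall plane x = 3.
    wall_n, wall_d = np.array([-1.0, 0.0, 0.0]), 3.0
    camera = np.zeros(3)
    print(is_reflection(np.array([2.0, 0.5, 1.0]), camera, wall_n, wall_d))  # False: real
    print(is_reflection(np.array([4.5, 0.5, 1.0]), camera, wall_n, wall_d))  # True: replica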
Regarding claim 2, which incorporates claim 1, Park discloses wherein the disambiguation information comprises (Related Works, page 636 [right column paragraph 2] "in the proposed method it is not necessary to detect the mirror region itself to recognize whether the detected object in the indoor environment is a reflective image or not. Instead, the proposed method compares the geometric relationship between the detected object and the layout of the surrounding indoor environment."): behavioral information associated with the target object detected within a sequence of image frames (Method, page 637 [right column paragraph 2] “depth camera world coordinates provided with depth information. It is then used to compare positional relationship with reference surface in world coordinate…walls are found, that is a layout of an indoor scene from a depth image. Geometric information is easily extracted from depth images, which helps to get the layout of an indoor scene. Once the scene layout is obtained, planes corresponding to the wall can be obtained as well as normal vector and center point… Space between the detected wall plane and camera can be regarded as interior space. As a result, bounding boxes locate outside this interior space can be considered as reflected virtual images.”), or radio frequency (RF) information associated with the target object (Note that the claim requires only one of behavioral information associated with the target object detected within a sequence of image frames, or radio frequency (RF) information associated with the target object, or digital twin (DT)-based information that characterizes an environment associated with the image frame, or one or more sound emissions from the target object, or any combination thereof.), or digital twin (DT)-based information that characterizes an environment associated with the image frame (Related Works, page 636 [right column paragraph 2] “the proposed method reformulates the problem of discriminating the mirror reflection image as the problem of detecting interior layout, not the problem of finding the mirror region by utilizing 3D depth information to detect the interior layout."; Page 640, [left column last paragraph continuing on to right column paragraph 1] “of distinguishing mirror reflection image through all the steps is shown in Fig 11. The green boxes are detected person with the baseline human detector. Fig 11(b) shows estimated layout plane by plane detection with HAC, and Fig 11(c) is reconstructed layout plane in 3D space. The center point of the real person is represented by a green point, and the center point… of the reflected person image is represented by a red point. In the 3D reconstructed point cloud, the center point of the real person (green point) is in front of the wall plane (purple plane).”), or one or more sound emissions from the target object (Note that the claim requires only one of behavioral information associated with the target object detected within a sequence of image frames, or radio frequency (RF) information associated with the target object, or digital twin (DT)-based information that characterizes an environment associated with the image frame, or one or more sound emissions from the target object, or any combination thereof.), or any combination thereof.
Regarding claim 3, which incorporates claim 2, Park discloses wherein the disambiguation information comprises the behavioral information associated with the target object detected within a sequence of image frames (Related Works, page 636 [right column paragraph 2] "in the proposed method it is not necessary to detect the mirror region itself to recognize whether the detected object in the indoor environment is a reflective image or not. Instead, the proposed method compares the geometric relationship between the detected object and the layout of the surrounding indoor environment."; Method, page 637 [right column paragraph 2] “depth camera world coordinates provided with depth information. It is then used to compare positional relationship with reference surface in world coordinate…walls are found, that is a layout of an indoor scene from a depth image. Geometric information is easily extracted from depth images, which helps to get the layout of an indoor scene. Once the scene layout is obtained, planes corresponding to the wall can be obtained as well as normal vector and center point… Space between the detected wall plane and camera can be regarded as interior space. As a result, bounding boxes locate outside this interior space can be considered as reflected virtual images.”).

Regarding claim 4, which incorporates claim 3, Park discloses wherein the behavioral information is associated with one or more geometric relationships of a candidate triplet that comprises two candidate visual detections of the target object and a reflection plane (Figure 3; Method, page 637 [left column paragraph 1] “The situation in which an object is being reflected by a planar reflecting surface is depicted in Fig 3.”; Method, page 637 [left column last paragraph continuing on to right column paragraph 1] “the situation in which a person image is reflected by a mirror in an indoor environment…In the point cloud in Fig. 4(c), reflected image of person is located virtually behind the wall. In this manner, the reflected image can be filtered by detecting a wall…”). [Figure 3 of Park reproduced here]

Regarding claim 5, which incorporates claim 4, Park discloses wherein the one or more geometric relationships comprise symmetry of movement of the two candidate visual detections along a plane that is perpendicular to the reflection plane across the sequence of image frames (Note that the claim recites these geometric relationships in the alternative; only one, or a combination thereof, needs to be shown.), or wherein the one or more geometric relationships comprise equality of distance between each of the two candidate visual detections and the reflection plane across the sequence of image frames (Figure 3; Method, page 637 [left column paragraph 1] "reflective surface of interest is assumed to be planar for clarity of the problem definition. We focus on a clue in 3D depth information to solve the problem. The situation in which an object is being reflected by a planar reflecting surface is depicted in Fig 3."), or a combination thereof.
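The claim 5 geometry can likewise be made concrete. A minimal sketch of the equality-of-distance relationship across a sequence of frames, with hypothetical tracks and tolerance; none of this code comes from the application or from Park:

    import numpy as np

    def plane_distance(p, n, d):
        """Unsigned distance from point p to the plane n·x + d = 0."""
        return abs(float(n @ p + d)) / float(np.linalg.norm(n))

    def equal_distance_score(track_a, track_b, n, d, tol=0.05):
        """Fraction of frames in which two candidate detections lie at (nearly)
        equal distances from a candidate reflection plane; a score near 1.0
        supports pairing them as direct view + mirror replica."""
        hits = [abs(plane_distance(a, n, d) - plane_distance(b, n, d)) < tol
                for a, b in zip(track_a, track_b)]
        return sum(hits) / len(hits)

    # Hypothetical 5-frame track and its exact reflection across the plane x = 3.
    n, d = np.array([1.0, 0.0, 0.0]), -3.0
    track = [np.array([2.0 + 0.1 * t, 0.5, 1.0]) for t in range(5)]
    mirror = [p - 2 * ((n @ p + d) / (n @ n)) * n for p in track]  # reflect each point
    print(equal_distance_score(track, mirror, n, d))  # 1.0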
Regarding claim 12, which incorporates claim 2, Park discloses wherein the disambiguation information comprises the DT-based information that characterizes the environment associated with the image frame (Related Works, page 636 [right column paragraph 2] “the proposed method reformulates the problem of discriminating the mirror reflection image as the problem of detecting interior layout, not the problem of finding the mirror region by utilizing 3D depth information to detect the interior layout."; Method, page 637 [right column paragraph 2] "is comparing extracted plane parameters of detected walls with 3D coordinates of the bounding box of person candidates. Space between the detected wall plane and camera can be regarded as interior space. As a result, bounding boxes locate outside this interior space can be considered as reflected virtual images.").

Regarding claim 13, which incorporates claim 12, Park discloses wherein the DT-based information comprises (Related Works, page 636 [right column paragraph 2] “the proposed method reformulates the problem of discriminating the mirror reflection image as the problem of detecting interior layout, not the problem of finding the mirror region by utilizing 3D depth information to detect the interior layout."): a two-dimensional (2D) map or model (Note that the claim recites these limitations in the alternative; only one needs to be shown.), or a three-dimensional (3D) map or model (Method, page 637 [left column paragraph 1] "in this work, reflective surface of interest is assumed to be planar for clarity of the problem definition. We focus on a clue in 3D depth information to solve the problem. The situation in which an object is being reflected by a planar reflecting surface is depicted in Fig 3."), or object-specific information (Method, page 637 [right column paragraph 3] "We estimated the layout of indoor space using semantic segmentation and plane detection algorithms. To detect indoor layout, which is composed of planar surfaces, we find planes from input images. Hierarchical agglomerative clustering (HAC) [20] is used to detect plane segments in a 3D point cloud generated from a depth image."), or radio environmental map (REM) database information (Note that the claim recites these limitations in the alternative; only one needs to be shown.), or any combination thereof.

Regarding claim 14, which incorporates claim 12, Park discloses determining a virtual position estimate associated with a respective one of the plurality of visual detections (Method, page 637 [right column paragraph 2] "is comparing extracted plane parameters of detected walls with 3D coordinates of the bounding box of person candidates. Space between the detected wall plane and camera can be regarded as interior space. As a result, bounding boxes locate outside this interior space can be considered as reflected virtual images.").
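For the claim 14 "virtual position estimate," the usual geometric reading is a mirror reflection across the reflecting plane, which can then be tested against a candidate region from a map as in claim 15 below. A minimal sketch under an assumed axis-aligned room model; the mirror parameters and region are illustrative assumptions only:

    import numpy as np

    def reflect_across_plane(p, n, d):
        """Mirror point p across the plane n·x + d = 0 (the virtual position)."""
        return p - 2.0 * ((n @ p + d) / (n @ n)) * n

    def inside_region(p, box_min, box_max):
        """True if p lies within an axis-aligned candidate region from the map."""
        return bool(np.all(p >= box_min) and np.all(p <= box_max))

    # Assumed DT-style model: a 6 x 4 x 3 m room with a mirror on the wall x = 6.
    box_min, box_max = np.zeros(3), np.array([6.0, 4.0, 3.0])
    mirror_n, mirror_d = np.array([1.0, 0.0, 0.0]), -6.0

    detection = np.array([8.0, 2.0, 1.5])   # apparent position "behind" the wall
    virtual = reflect_across_plane(detection, mirror_n, mirror_d)
    print(inside_region(detection, box_min, box_max))  # False: exclude as direct view
    print(inside_region(virtual, box_min, box_max))    # True: mirrored position is plausible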
Regarding claim 15, which incorporates claim 14, Park discloses wherein the respective visual detection is excluded from being a candidate for the direct visual representation of the target object based on the virtual position estimate being outside of a target object candidate region defined by the DT-based information (Method, page 637 [left column paragraph 1] "Virtual object image reflected by the mirror reaches the camera through the light path reflected by the mirror (C-B-D). The camera regards the object as being located on a straight line (A-B-D), so it appears as if it is behind a mirror. The depth image of the scene makes this point more apparent. Distance between virtual object image and camera ( A D - ) is farther than distance between reflective surface and camera ( B D - ). Therefore, if we can find a reference surface to which the planar reflecting surface belongs, we can distinguish actual and reflected images by examining whether the object's 3D coordinate is behind or in front of the reference surface."), or wherein the respective visual detection is included as a candidate for the direct visual representation of the target object based on the virtual position estimate being inside of the target object candidate region defined by the DT-based information (Method, page 637 [left column paragraph 1] "Virtual object image reflected by the mirror reaches the camera through the light path reflected by the mirror (C-B-D). The camera regards the object as being located on a straight line (A-B-D), so it appears as if it is behind a mirror. The depth image of the scene makes this point more apparent. Distance between virtual object image and camera ( A D - ) is farther than distance between reflective surface and camera ( B D - ). Therefore, if we can find a reference surface to which the planar reflecting surface belongs, we can distinguish actual and reflected images by examining whether the object's 3D coordinate is behind or in front of the reference surface.").

Regarding claim 16, Park discloses a device, comprising: one or more memories (RAM of 16 GB); one or more transceivers (Kinect V2); and one or more processors (CPU) communicatively coupled to the one or more memories and the one or more transceivers, the one or more processors, either alone or in combination (Experiment, page 639 [right column paragraph 1] “implemented in Python/C++ on Ubuntu 18.04 with a 3.60 GHz CPU, RAM of 16 GB, and Nvidia GTX 1080Ti. U-net for per-pixel layout estimation is trained on processed NYU Depth V2 dataset. The training and validation dataset contain 1159 and 290 images. Our method is validated on the Living-lab dataset which contains 9234 RGB images, depth images collected from Kinect V2, and annotation on human regions.”), configured to: obtain an image frame captured by a camera (Introduction, page 635, [right column first full paragraph] "manually annotated 9234 frames of total dataset and performed person detection using YOLO v3 [6] pre-trained on MS COCO dataset [7]."; Method, page 637 [left column paragraph 1] “Virtual object image reflected by the mirror reaches the camera through the light path reflected by the mirror (C-B-D).
The camera regards the object as being located on a straight line (A-B-D), so it appears as if it is behind a mirror.”; Figure 3); detect a plurality of visual detections of a target object within the image frame, wherein the plurality of visual detections comprises a direct visual representation of the target object and one or more visual replicas of the target object (Figure 4, Method, page 637 [right column paragraph 1] "The real image of the person is marked with a green box, and the virtual image reflected by the mirror is marked with a red box."; Figure 11, Page 640, [left column last paragraph continuing on to right column paragraph 1] “The center point of the real person is represented by a green point, and the center point of the reflected person image is represented by a red point”); obtain disambiguation information that is based on data separate from the image frame (Abstract, page 635 "compares the geometric relationship between the 3D spatial information of the detected object and its surrounding environment where the object locates."; Related Works, page 636 [right column paragraph 2] "in the proposed method it is not necessary to detect the mirror region itself to recognize whether the detected object in the indoor environment is a reflective image or not. Instead, the proposed method compares the geometric relationship between the detected object and the layout of the surrounding indoor environment."; Method, page 637 [right column paragraph 2] “depth camera world coordinates provided with depth information. It is then used to compare positional relationship with reference surface in world coordinate…walls are found, that is a layout of an indoor scene from a depth image. Geometric information is easily extracted from depth images, which helps to get the layout of an indoor scene. Once the scene layout is obtained, planes corresponding to the wall can be obtained as well as normal vector and center point… Space between the detected wall plane and camera can be regarded as interior space. As a result, bounding boxes locate outside this interior space can be considered as reflected virtual images.”); and disambiguate the direct visual representation of the target object from the one or more visual replicas of the target object based on the disambiguation information (Method, page 637 [left column paragraph 1] "we can find a reference surface to which the planar reflecting surface belongs, we can distinguish actual and reflected images by examining whether the object's 3D coordinate is behind or in front of the reference surface.").

Regarding claim 17 (drawn to a device): claim 17 is rejected on the same basis as claim 2, and the analysis presented above for claim 2 is equally applicable to claim 17; all other limitations similar to those of claim 2 are not repeated herein, but are incorporated by reference.

Regarding claim 18, which incorporates claim 17, Park discloses wherein the disambiguation information comprises the behavioral information associated with the target object detected within a sequence of image frames (Abstract, page 635 "The proposed method compares the geometric relationship between the 3D spatial information of the detected object and its surrounding environment where the object locates."; Related Works, page 636 [right column paragraph 2] "in the proposed method it is not necessary to detect the mirror region itself to recognize whether the detected object in the indoor environment is a reflective image or not.
Instead, the proposed method compares the geometric relationship between the detected object and the layout of the surrounding indoor environment."; Method, page 637 [right column paragraph 2] “depth camera world coordinates provided with depth information. It is then used to compare positional relationship with reference surface in world coordinate…walls are found, that is a layout of an indoor scene from a depth image. Geometric information is easily extracted from depth images, which helps to get the layout of an indoor scene. Once the scene layout is obtained, planes corresponding to the wall can be obtained as well as normal vector and center point… Space between the detected wall plane and camera can be regarded as interior space. As a result, bounding boxes locate outside this interior space can be considered as reflected virtual images.”), or wherein the disambiguation information comprises the RF information associated with the target object (Note that the claim requires only one of behavioral information associated with the target object detected within a sequence of image frames, or radio frequency (RF) information associated with the target object, or digital twin (DT)-based information that characterizes an environment associated with the image frame, or one or more sound emissions from the target object, or any combination thereof.), or wherein the disambiguation information comprises the DT-based information that characterizes the environment associated with the image frame (Related Works, page 636 right column paragraph 2 “the proposed method reformulates the problem of discriminating the mirror reflection image as the problem of detecting interior layout, not the problem of finding the mirror region by utilizing 3D depth information to detect the interior layout."; Method, page 637 right column paragraph 2 "is comparing extracted plane parameters of detected walls with 3D coordinates of the bounding box of person candidates. Space between the detected wall plane and camera can be regarded as interior space. As a result, bounding boxes locate outside this interior space can be considered as reflected virtual images."), or any combination thereof. Regarding claim 19, Park discloses a device, comprising: means for obtaining an image frame captured by a camera (Introduction, page 635, [right column first full paragraph] "manually annotated 9234 frames of total dataset and performed person detection using YOLO v3 [6] pre-trained on MS COCO dataset [7]. "; Method, page 637 [left column paragraph 1] “Virtual object image reflected by the mirror reaches the camera through the light path reflected by the mirror (C-B-D). 
The camera regards the object as being located on a straight line (A-B-D), so it appears as if it is behind a mirror.”; Figure 3); means for detecting a plurality of visual detections of a target object within the image frame, wherein the plurality of visual detections comprises a direct visual representation of the target object and one or more visual replicas of the target object (Figure 4, Method, page 637 [right column paragraph 1] "The real image of the person is marked with a green box, and the virtual image reflected by the mirror is marked with a red box."; Figure 11, Page 640, [left column last paragraph continuing on to right column paragraph 1] “The center point of the real person is represented by a green point, and the center point of the reflected person image is represented by a red point”); means for obtaining disambiguation information that is based on data separate from the image frame (Abstract, page 635 "compares the geometric relationship between the 3D spatial information of the detected object and its surrounding environment where the object locates."; Related Works, page 636 [right column paragraph 2] "in the proposed method it is not necessary to detect the mirror region itself to recognize whether the detected object in the indoor environment is a reflective image or not. Instead, the proposed method compares the geometric relationship between the detected object and the layout of the surrounding indoor environment."; Method, page 637 [right column paragraph 2] “depth camera world coordinates provided with depth information. It is then used to compare positional relationship with reference surface in world coordinate…walls are found, that is a layout of an indoor scene from a depth image. Geometric information is easily extracted from depth images, which helps to get the layout of an indoor scene. Once the scene layout is obtained, planes corresponding to the wall can be obtained as well as normal vector and center point… Space between the detected wall plane and camera can be regarded as interior space. As a result, bounding boxes locate outside this interior space can be considered as reflected virtual images.”); and means for disambiguating the direct visual representation of the target object from the one or more visual replicas of the target object based on the disambiguation information (Method, page 637 [left column paragraph 1] "we can find a reference surface to which the planar reflecting surface belongs, we can distinguish actual and reflected images by examining whether the object's 3D coordinate is behind or in front of the reference surface.").

Regarding claim 20 (drawn to a device): claim 20 is rejected on the same basis as claim 2, and the analysis presented above for claim 2 is equally applicable to claim 20; all other limitations similar to those of claim 2 are not repeated herein, but are incorporated by reference.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Park et al. (“Identifying reflected images from object detector in indoor environment utilizing depth information,” IEEE Robotics and Automation Letters 6.2 (2020): 635-642) (hereinafter, “Park”) in view of Fischer et al. (US 9,483,708 B2) (hereinafter, “Fischer”).

Regarding claim 6, which incorporates claim 4, Park fails to teach wherein the disambiguation of the direct visual representation of the target object from the one or more visual replicas assigns, to the candidate triplet, a likelihood that the two candidate visual detections are a pairing of the direct visual representation of the target object and a respective visual replica of the target object. Fischer teaches wherein the disambiguation of the direct visual representation of the target object from the one or more visual replicas assigns, to the candidate triplet, a likelihood that the two candidate visual detections are a pairing of the direct visual representation of the target object and a respective visual replica of the target object (Column 4 [lines 45-46] “a reflection location of a recognized object reflection is ascertained as a reflection feature”; Column 4 [lines 52-57] ““reflection location” is understood as the location at which the object reflection is actually located relative to the motor vehicle and/or to the camera. Alternatively or additionally, it is also conceivable to understand the “reflection location” as a region of the image in which the likeness of the object reflection is situated.”; Column 4 [lines 63-68] “region for an evaluation in order to avoid errors that would occur if the object reflections were classified as objects. This is the case, for example, when an additional vehicle is traveling slowly in front of the own vehicle. In this case two motor vehicles following one another might incorrectly be recognized. It is advantageous in such a case if the object reflection of the preceding vehicle is discarded, so that safety-critical systems with regard to the own vehicle can be supplied with correct information.”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Park’s reference to include wherein the disambiguation of the direct visual representation of the target object from the one or more visual replicas assigns, to the candidate triplet, a likelihood that the two candidate visual detections are a pairing of the direct visual representation of the target object and a respective visual replica of the target object, as taught by the Fischer reference. The motivation for doing so would have been to improve the evaluation of the camera and avoid errors that would occur if the object reflection were classified as the object, as suggested by Fischer (see Fischer Column 4 [lines 60-65]).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Fischer with Park to obtain the invention specified in claim 6.

Claims 7-10 are rejected under 35 U.S.C. 103 as being unpatentable over Park et al. (“Identifying reflected images from object detector in indoor environment utilizing depth information,” IEEE Robotics and Automation Letters 6.2 (2020): 635-642) (hereinafter, “Park”) in view of Park et al. (US 2022/0022056 A1) (hereinafter, “Park 056”).

Regarding claim 7, which incorporates claim 2, Park fails to teach wherein the disambiguation information comprises the RF information associated with the target object. Park 056 teaches wherein the disambiguation information comprises the RF information associated with the target object (Paragraph [0128] “the transmitter base station(s) transmit RF sensing signals and the receiver base station(s) receive/measure the RF sensing signals. FIG. 13 is a diagram 1300 illustrating an example RF sensing stage, according to aspects of the disclosure.”; Paragraph [0113] “FIGS. 9A to 9C illustrate these various types of radar. Specifically, FIG. 9A is a diagram 900 illustrating a monostatic radar scenario, FIG. 9B is a diagram 930 illustrating a bistatic radar scenario, and FIG. 9C is a diagram 950 illustrating a multistatic radar scenario. In FIG. 9A, the transmitter and receiver are co-located. This is the typical use case for traditional, or conventional, radar.”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Park’s reference to include wherein the disambiguation information comprises the RF information associated with the target object, as taught by the Park 056 reference. The motivation for doing so would have been that the higher frequency of RF signals provides a more accurate range detection, as suggested by Park 056 (see Park 056, Paragraph [0100]). Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Park 056 with Park to obtain the invention specified in claim 7.

Regarding claim 8, which incorporates claim 7, Park fails to teach wherein the RF information is based on one or more RF transmissions from a RF antenna associated with the target object or one or more reflections of RF signals off of the target object based on one or more RF for sensing (RF-S) signals. Park 056 teaches wherein the RF information is based on one or more RF transmissions from a RF antenna associated with the target object (Paragraph [0039] “The term “base station” may refer to a single physical transmission-reception point (TRP) or to multiple physical TRPs ... where the term “base station” refers to a single physical TRP, the physical TRP may be an antenna of the base station corresponding to a cell (or several cell sectors) of the base station.”; Paragraph [0114] “The base station may have transmitted multiple RF sensing signals in different directions, some of which followed the direct path and others of which followed the reflected path.
Alternatively, the base station may have transmitted a single RF sensing signal in a broad enough beam that a portion of the RF sensing signal followed the direct path and a portion of the RF sensing signal followed the reflected path.”; Paragraph [0115] “The UE can measure the ToAs of the RF sensing signals received directly from the base station and the ToAs of the RF sensing signals reflected from the target object to determine the distance, and possibly direction, to the target object.”) or one or more reflections of RF signals off of the target object based on one or more RF for sensing (RF-S) signals (Paragraph [0129] "the transmitter base station transmits downlink RF sensing signals to the receiver base stations, but some of the RF sensing signals reflect off various target objects (five in the example of FIG. 13). The receiver base stations can measure the ToAs of the RF sensing signals received directly from the transmitter base station (illustrated as the solid lines) and the ToAs of the RF sensing signals reflected from the target objects (illustrated as the dashed lines). That is, the solid lines represents RF sensing signals that followed the LOS paths between the transmitter base station and the receiver base stations, and the dashed lines represent the RF sensing signals that followed NLOS paths between the transmitter base station and the receiver base stations due to reflecting off the target objects."). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Park’s reference to include wherein the RF information is based on one or more RF transmissions from a RF antenna associated with the target object or one or more reflections of RF signals off of the target object based on one or more RF for sensing (RF-S) signals, as taught by the Park 056 reference. The motivation for doing so would have been that the higher frequency of RF signals provides a more accurate range detection, as suggested by Park 056 (see Park 056, Paragraph [0100]). Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Park 056 with Park to obtain the invention specified in claim 8.

Regarding claim 9, which incorporates claim 8, Park fails to teach wherein the one or more RF transmissions, the one or more RF-S signals, or both, are requested by the device. Park 056 teaches wherein the one or more RF transmissions, the one or more RF-S signals, or both, are requested by the device (Paragraph [0074] "The satellite signal receivers 330 and 370 may request information and operations as appropriate from the other systems, and, at least in some cases, perform calculations to determine locations of the UE 302 and the base station 304, respectively, using measurements obtained by any suitable satellite positioning system algorithm."). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Park’s reference to include wherein the one or more RF transmissions, the one or more RF-S signals, or both, are requested by the device, as taught by the Park 056 reference. The motivation for doing so would have been that the higher frequency of RF signals provides a more accurate range detection, as suggested by Park 056 (see Park 056, Paragraph [0100]).
Regarding claim 9, which incorporates claim 8, Park fails to teach wherein the one or more RF transmissions, the one or more RF-S signals, or both, are requested by the device.

Park 056 teaches wherein the one or more RF transmissions, the one or more RF-S signals, or both, are requested by the device (Paragraph [0074]: "The satellite signal receivers 330 and 370 may request information and operations as appropriate from the other systems, and, at least in some cases, perform calculations to determine locations of the UE 302 and the base station 304, respectively, using measurements obtained by any suitable satellite positioning system algorithm.").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the Park reference to include wherein the one or more RF transmissions, the one or more RF-S signals, or both, are requested by the device, as taught by the Park 056 reference. The motivation for doing so would have been that the higher frequency of RF signals provides more accurate range detection, as suggested by Park 056 (see Park 056, Paragraph [0100]). Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Park 056 with Park to obtain the invention specified in claim 9.

Regarding claim 10, which incorporates claim 9, Park fails to teach wherein the one or more RF transmissions, the one or more RF-S signals, or both, are periodic.

Park 056 teaches wherein the one or more RF transmissions, the one or more RF-S signals, or both, are periodic (Paragraph [0125]: "periodic resources may be configured and assigned to occur with some periodicity (e.g., every minute, every hour, once a day, once a week, etc.). For example, if periodic resources are configured, a single coordination stage among the involved base stations may be followed by periodic transmissions of RF sensing signals.").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the Park reference to include wherein the one or more RF transmissions, the one or more RF-S signals, or both, are periodic, as taught by the Park 056 reference. The motivation for doing so would have been that the higher frequency of RF signals provides more accurate range detection, as suggested by Park 056 (see Park 056, Paragraph [0100]). Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Park 056 with Park to obtain the invention specified in claim 10.
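The periodic resource allocation quoted from Park 056 Paragraph [0125] amounts to a single coordination stage followed by transmissions at a fixed period. A minimal sketch of such a schedule, with hypothetical field names and times of the editor's choosing:

```python
# Minimal sketch of periodic RF-S resource scheduling as described in the
# quoted Paragraph [0125]; the config fields and times are hypothetical.
from dataclasses import dataclass
from itertools import islice
from typing import Iterator

@dataclass
class PeriodicSensingConfig:
    start_s: float   # time of the single coordination stage
    period_s: float  # e.g. 60.0 for every minute, 3600.0 for every hour

def sensing_occasions(cfg: PeriodicSensingConfig) -> Iterator[float]:
    """Yield transmission times for RF sensing signals, indefinitely."""
    t = cfg.start_s
    while True:
        yield t
        t += cfg.period_s

# First three occasions of a once-a-minute schedule:
cfg = PeriodicSensingConfig(start_s=0.0, period_s=60.0)
print(list(islice(sensing_occasions(cfg), 3)))  # [0.0, 60.0, 120.0]
```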
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Park et al. ("Identifying reflected images from object detector in indoor environment utilizing depth information." IEEE Robotics and Automation Letters 6.2 (2020): 635-642) (hereinafter, "Park") in view of Park et al. (US 2022/0022056 A1) (hereinafter, "Park 056"), further in view of Wu et al. (US 11,340,345 B2) (hereinafter, "Wu").

Regarding claim 11, which incorporates claim 7, Park fails to teach determining a position estimate of the target object based on the RF information, wherein the disambiguation of the direct visual representation of the target object from the one or more visual replicas is based on a bipartite algorithm that matches the direct visual representation of the target object to the position estimate.

Park 056 teaches determining a position estimate of the target object based on the RF information (Paragraph [0102]: "use cases of RF sensing include health monitoring use cases, such as heartbeat detection, respiration rate monitoring, and the like, gesture recognition use cases, such as human activity recognition, keystroke detection, sign language recognition, and the like, contextual information acquisition use cases, such as location detection and/or tracking, direction finding, range estimation").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the Park reference to include determining a position estimate of the target object based on the RF information, as taught by the Park 056 reference. The motivation for doing so would have been that the higher frequency of RF signals provides more accurate range detection, as suggested by Park 056 (see Park 056, Paragraph [0100]). Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results.

However, Park and Park 056 fail to teach wherein the disambiguation of the direct visual representation of the target object from the one or more visual replicas is based on a bipartite algorithm that matches the direct visual representation of the target object to the position estimate.

Wu teaches wherein the disambiguation of the direct visual representation of the target object from the one or more visual replicas is based on a bipartite algorithm that matches the direct visual representation of the target object to the position estimate (Column 50, lines 8-19: "The spatial resolution is promoted by digital beamforming based on MVDR and a novel object detection approach is proposed to tackle the near-far-effect and measurement noise. A robust algorithm is disclosed based on k-means clustering that can accurately and robustly determine the number of users and estimate their respective locations. As such, a continuous tracking of multiple trajectories can be achieved by a novel algorithm using weighted bipartite graph matching.").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Park in view of Park 056 to include wherein the disambiguation of the direct visual representation of the target object from the one or more visual replicas is based on a bipartite algorithm that matches the direct visual representation of the target object to the position estimate, as taught by the Wu reference. The motivation for doing so would have been to continuously track multiple objects, as suggested by Wu (see Wu, Column 49, lines 53-58). Further, one skilled in the art could have combined the elements described above by known methods with no change to their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Wu with Park and Park 056 to obtain the invention specified in claim 11.
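The claim 11 limitation and the Wu passage quoted above both describe weighted bipartite matching between two sets: visual detections (the direct view plus any mirror replicas) on one side, and RF-derived position estimates on the other. The following is a minimal sketch of such a matcher; it uses SciPy's Hungarian-algorithm solver as a stand-in for whatever matcher the references actually employ, and all names and coordinates are hypothetical.

```python
# Minimal sketch of weighted bipartite matching between visual detections
# and RF position estimates; names are hypothetical and the Hungarian
# solver from SciPy stands in for the matcher used by the references.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_detections_to_rf(detections_xy, rf_estimates_xy):
    """detections_xy: (N, 2) world-frame positions of camera detections
    (the direct view plus any mirror replicas). rf_estimates_xy: (M, 2)
    positions estimated from RF sensing. Returns (detection, rf) index
    pairs that minimize total Euclidean distance."""
    # Cost matrix: pairwise distance between every detection/estimate pair.
    cost = np.linalg.norm(
        detections_xy[:, None, :] - rf_estimates_xy[None, :, :], axis=-1)
    det_idx, rf_idx = linear_sum_assignment(cost)  # rectangular matrices OK
    return list(zip(det_idx.tolist(), rf_idx.tolist()))

# One person seen twice (real view and mirror replica) but sensed once by RF:
detections = np.array([[1.0, 2.0], [5.0, 2.0]])
rf_positions = np.array([[1.1, 2.1]])
print(match_detections_to_rf(detections, rf_positions))  # [(0, 0)]
```

In this toy example the detection matched to an RF position is taken as the direct visual representation, and the unmatched detection is flagged as a replica, which is the intuition behind matching detections to position estimates rather than classifying each detection in isolation.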
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Llorca et al. (WO 2021/152208 A1) discloses a method for detecting a target object by transmitting a plurality of packets and receiving a signal comprising information regarding the target object; the information is then used to detect and localize the target objects. Ondeng et al. ("Disambiguation of Visual Representations." 2023 IEEE AFRICON. IEEE, 2023) discloses a method to disambiguate features by projecting the image regions to the word embedding space and then using the category labels corresponding to the image regions to further improve an alignment process. Yang et al. ("On solving mirror reflection in lidar sensing." IEEE/ASME Transactions on Mechatronics 16.2 (2010): 255-265) discloses a method to distinguish between mirror images and true objects using a framework that detects and tracks mirrors from LIDAR information.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to UROOJ FATIMA, whose telephone number is (571) 272-2096. The examiner can normally be reached M-F 8:00-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henok Shiferaw, can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/UROOJ FATIMA/
Examiner, Art Unit 2676

/Henok Shiferaw/
Supervisory Patent Examiner, Art Unit 2676

Prosecution Timeline

Mar 01, 2024
Application Filed
Jan 23, 2026
Non-Final Rejection — §101, §102, §103 (current)

Prosecution Projections

1-2
Expected OA Rounds
100%
Grant Probability
99%
With Interview (+100.0%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
