Prosecution Insights
Last updated: April 19, 2026
Application No. 18/935,322

ROBOT AND METHOD FOR CALCULATING DISTANCE TO OBJECT

Status: Non-Final OA (§102)
Filed: Nov 01, 2024
Examiner: SMITH-STEWART, DEMETRA R
Art Unit: 3661
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 90% (Favorable)
OA Rounds: 1-2
To Grant: 2y 5m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 90% (above average; 654 granted / 728 resolved; +37.8% vs TC avg)
Interview Lift: +8.1% (moderate lift, measured across resolved cases with interview)
Avg Prosecution: 2y 5m (typical timeline; 33 applications currently pending)
Total Applications: 761 (career history, across all art units)

Statute-Specific Performance

§101: 13.3% (-26.7% vs TC avg)
§103: 24.4% (-15.6% vs TC avg)
§102: 49.9% (+9.9% vs TC avg)
§112: 4.9% (-35.1% vs TC avg)
"vs TC avg" = difference from the Tech Center average estimate • Based on career data from 728 resolved cases

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office Action is in response to the application filed on November 1, 2024. Claims 1-20 are pending. Claims 1 and 10 are independent.

Priority

Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

Information Disclosure Statement

The information disclosure statements (IDSs) submitted on November 1, 2024 and October 3, 2025 have been considered. The submission is in compliance with the provisions of 37 CFR 1.97. The Forms PTO-1449 are signed and attached hereto.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Publication No. 2022/0066456 to Ebrahim Afrouzi et al. (hereinafter “Ebrahim Afrouzi”). Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ebrahim Afrouzi.

With respect to independent claims 1 and 10, Ebrahim Afrouzi discloses:
a two-dimensional (2D) camera (see paragraph [0027]: improve the accuracy of mobile robot navigation with a monocular camera.);
a one-dimensional (1D) distance sensor (see paragraph [0596]: structured light, such as a laser light, may be used to infer the distance to objects within the environment);
a driving module configured to move the robot (see paragraph [0782]: movement services may include services that require the robot to move. For example, the user may ask the robot to bring them a coke and the robot may drive to the kitchen to obtain the coke and deliver it to a location of the user.);
memory storing one or more computer programs (see paragraph [0238]: a memory storing instructions that when executed by the processor effectuates robotic operations); and
one or more processors communicatively coupled to the memory (see paragraph [0238]: a processor, a memory storing instructions that when executed by the processor effectuates robotic operations);
wherein the one or more computer programs include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the robot to:
obtain a 2D image by controlling the 2D camera (see paragraph [0628]: Depending on the geometry of a point measurement sensor with respect to a camera, there may be objects at near distances that do not show up within the FOV and 2D image of the camera.);
calculate relative depths of actual regions indicated by pixels in the 2D image, based on the obtained 2D image (see paragraphs [0955] and [1332]: at least one infrared line laser positioned at a downward angle relative to a horizontal plane coupled with at least one camera may be used to determine the depth of multiple points across the uneven surfaces from captured images of the line laser projected onto the uneven surfaces of the object. The position of the laser line (or feature of a structured light pattern) in the image may be detected by finding pixels with intensity above a threshold. Each pixel of the image may represent a particular square size on the floor plane, the particular square size depending on the resolution. In some embodiments, the color depth value of each pixel may correspond to a height of the floor plane relative to a ground zero plane.);
obtain a reference distance to a point to which a laser output from the 1D distance sensor is irradiated (see paragraph [0663]: Glass materials can be deposited from a chemical vapor, where the chemical composition is varied during the process such that the required index gradient is obtained. Another example is neutron irradiation can be used to generate spatially varying refractive index modifications in certain boron-rich glasses. If the used fabrication method allows for precise control of the radial index variation, the performance of a GRIN lens may be high, with only weak spherical aberrations similar to those of aspheric lenses. Besides, some fabrication techniques allow for cheap mass production. In embodiments, refractive index changes based on radial distance for a GRIN lens.);
determine a distance from the robot to an object in the 2D image based on the obtained reference distance and a relative depth of a reference point corresponding to the point to which the laser is irradiated among the pixels in the 2D image (see paragraph [1046]: the processor localizes the robot with position coordinate q=(x, y) and momentum coordinate p=(p.sub.x, p.sub.y). For simplification, the mass of the robot is 1.0, the earth is assumed to be planar, and q is a position with reference to some arbitrary point and distance.); and
control the driving module to travel based on the determined distance to the object (see paragraph [1513]: capturing, by a wheel encoder of the robot, movement data indicative of movement of the robot; capturing, by a LIDAR disposed on the robot, LIDAR data as the robot performs work within the workspace, wherein the LIDAR data is indicative of distances from the LIDAR to objects and perimeters immediately surrounding the robot; comparing, by the processor of the robot, at least one object from the captured images to objects in an object dictionary.).

With respect to dependent claims 2 and 11, Ebrahim Afrouzi discloses wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the robot to: identify a region of the object within the 2D image (see paragraph [0589]: The processor may also track optical flow, structure from motion, pixel entropy in different zones, and how pixel groups or edges, objects, blobs move up and down in the image or video stream.); and determine the distance from the robot to the object based on relative depths of pixels within the identified region of the object, the obtained reference distance, and the relative depth of the reference point (see paragraph [0929]: two laser emitters, an image sensor and an image processor are used to measure depth. The laser emitters project light points onto an object which is captured by the image sensor. The image processor extracts the distance between the projected light points and compares the distance to a preconfigured table (or inputs the values into a formula with outputs approximating such a table) that relates distances between light points with depth to the object onto which the light points are projected.).

With respect to dependent claims 3 and 12, Ebrahim Afrouzi discloses wherein the 1D distance sensor is configured to emit the laser perpendicular to a sensor plane of the 2D camera (see paragraph [0929]: the camera may be positioned parallel to a horizontal plane (upon which the robot translates) and the IR illuminator may be positioned at an angle with respect to the horizontal plane or both the camera and IR illuminator are positioned at angle with respect to the horizontal plane. Different types of lasers may be used, including but not limited to edge emitting lasers and surface emitting lasers. In edge emitting lasers the light emitted is parallel to the wafer surface and propagates from a cleaved edge. With surface emitting lasers, light is emitted perpendicular to the wafer surface.).

With respect to dependent claims 4 and 13, Ebrahim Afrouzi discloses wherein the 1D distance sensor and the 2D camera are positioned toward a front of the robot (see paragraph [0589]: a robot may include a camera with a frontal field of view.).

With respect to dependent claims 5 and 14, Ebrahim Afrouzi discloses wherein: the 1D distance sensor comprises at least one of a position sensitive device (PSD) sensor or a time of flight (TOF) sensor; and the 2D camera comprises a color image-generating camera (see paragraphs [0499] and [0857]: color images provide a lot of additional information that may help in identifying objects. Floor sensors may be infrared (IR) sensors, ultrasonic sensors, laser sensors, time-of-flight (TOF) sensors, distance sensors, 3D or 2D range finders, 3D or 2D depth cameras, etc. For example, the floor sensor positioned on the front of the robot in FIG. 3 may be an IR sensor while the floor sensors positioned on the sides of the robot may be TOF sensors.).

With respect to dependent claims 6 and 15, Ebrahim Afrouzi discloses wherein the robot comprises: two or more 1D distance sensors, and wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the robot to: obtain reference distances to points to which lasers output from the two or more 1D distance sensors are irradiated; and determine a distance from the robot to objects within the 2D image based on relative depths of two or more reference points corresponding to the two or more 1D distance sensors and the obtained reference distances (see paragraph [0904]: Initial estimation of a transformation function to align the newly read data to the fixed reference may be iteratively revised in order to produce minimized distances from the newly read data to the fixed reference. A point to point distance metric minimization technique may be used such that it may best align each value in the new readings to its match found in the prior readings of the fixed reference. One point to point distance metric minimization technique that may be used estimates the combination of rotation and translation using a root mean square. The process may be iterated to transform the newly read values using the obtained information. These methods may be used independently or may be combined to improve accuracy. In one embodiment, the adjustment applied to overlapping depths within the area of overlap may be applied to other depths beyond the identified area of overlap, wherein the new depths within the overlapping area may be considered ground truth when making the adjustment.).

With respect to dependent claims 7 and 16, Ebrahim Afrouzi discloses wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the robot to: identify a type of the object (see paragraph [0561]: the processor of the robot may identify objects based on possible objects available within its environment (e.g., home or supermarket). In one instance, a training session may be provided through an application of a communication device or the web to label some objects around the house.); generate a map indicating a position and type of the object, based on the determined distance to the object and the identified type of the object (see paragraphs [0391] and [0392]: the processor may localize an object. The object localization may comprise a location of the object falling within a FOV of an image sensor and observed by the image sensor (or depth sensor or other type of sensor) in a local or global map frame of reference. In some embodiments, the processor locally localizes the object with respect to a position of the robot. In local object localization, the processor determines a distance or geometrical position of the object in relation to the robot. In some embodiments, the processor globally localizes the object with respect to the frame of reference of the environment. An object is identified when the processor identifies the object in an image of a stream of images (or video) captured by an image sensor of the robot. In some embodiments, upon identifying the object the processor has not yet determined a distance of the object, a classification of the object, or distinguished the object in any way.); and travel based on the generated map (see paragraph [0938]: online navigation uses a real-time local map, such as the LIDAR local map, in conjunction with a global map of the environment for more intelligent path planning. In some cases, the global map may be used to plan a global movement path and while executing the global movement path, the processor may create a real-time local map using fresh LIDAR scans.).

With respect to dependent claims 8 and 17, Ebrahim Afrouzi discloses wherein the robot further comprises a motion sensor (see paragraph [1223]: Examples of sensors include, but are not limited to (which is not to suggest that any other described component of the robotic cleaning device is required in all embodiments), floor sensors, debris sensors, obstacle sensors, cliff sensors, acoustic sensors, cameras, optical sensors, distance sensors, motion sensors), and wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the robot to correct a position of the reference point within the 2D image based on a sensor value of the motion sensor (see paragraph [0335]: images may not be accurately connected when connected based on the measured movement of the robot as the actual trajectory of the robot may not be the same as the intended trajectory of the robot. In some embodiments, the processor may localize the robot and correct the position and orientation of the robot. One example includes three images captured by an image sensor of the robot during navigation with the same points in each image. Based on the intended trajectory of the robot, the same points are expected to be positioned in particular locations. However, the actual trajectory may result in captured images with the same points positioned in unexpected locations. Based on localization of the robot during navigation, the processor may correct the position and orientation of the robot, resulting in captured images with the locations of the same points aligning with their expected locations given the correction in position and orientation of the robot.).

With respect to dependent claims 9 and 18, Ebrahim Afrouzi discloses wherein the one or more computer programs further include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the robot to input the 2D image as input data to an artificial intelligence model and execute the artificial intelligence model to calculate relative depths corresponding to the pixels in the 2D image (see paragraphs [0444] and [0524]: An example of a neural network receives images from various cameras positioned on a robot and various layers of the network extract Fourier descriptors, Harr descriptors, ORB, Canny features, etc. In another example, two neural networks each receive images from cameras as input. One network outputs depth while the other extracts features such as edges. A neural network (deep or shallow) may be taught to recognize features or extract depth to the recognized features, recognize objects or extract depth to the recognized objects, or identify scenes in images or extract depth to the identified depths in the images. In embodiments, pixels of an image may be fed into the input layer of the network and the outputs of the first layer may indicate the presence of low-level features in the image, such as lines and edges.).

With respect to dependent claim 19, Ebrahim Afrouzi discloses executing a first artificial intelligence model to calculate the relative depths corresponding to the pixels in the 2D image; and executing the first artificial intelligence model to generate a depth image representing the calculated relative depths (see paragraph [0924]: depth may be inferred based on the position and/or geometry of the projected IR light in the image captured. For instance, some embodiments may infer map geometry (or features thereof) with a trained convolutional neural network configured to infer such geometries from raw data from a plurality of sensor poses. Some embodiments may apply a multi-stage convolutional neural network in which initial stages in a pipeline of models are trained on (and are configured to infer) a coarser-grained spatial map corresponding to raw sensor data of a two-or-three-dimensional scene and then later stages in the pipeline are trained on (and are configured to infer) finer-grained residual difference between the coarser-grained spatial map and the two-or-three-dimensional scene.).

With respect to dependent claim 20, Ebrahim Afrouzi discloses executing a second artificial intelligence model to identify a type of the object and a region of the object within the 2D image (see paragraph [1150]: The traversability algorithm allows the robot to securely work around dynamic and static obstacles (e.g., people, pets, hazards, etc.). In some embodiments, the traversability algorithm may identify dynamic obstacles (e.g., people, bikes, pets, etc.). In some embodiments, the traversability algorithm may identify dynamic obstacles (e.g., a person) in an image of the environment and determine their average distance and velocity and direction of their movement. In some embodiments, an algorithm may be trained in advance through a neural network to identify areas with high chances of being traversable and areas with low chances of being traversable.).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEMETRA R SMITH-STEWART whose telephone number is (571)270-3965. The examiner can normally be reached 10am - 6pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Peter Nolan, can be reached at 571-270-7016. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DEMETRA R SMITH-STEWART/
Examiner, Art Unit 3661

/PETER D NOLAN/
Supervisory Patent Examiner, Art Unit 3661
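For orientation, the independent claims recite recovering a metric distance to an object by combining relative (scale-free) pixel depths estimated from the 2D image with a single metric reference distance measured by the 1D laser sensor at a known pixel. The snippet below is a minimal sketch of that scale-recovery idea only; it is not the applicant's or the reference's implementation, and the function name, the NumPy depth map, and the median aggregation over the object region are assumptions made for illustration.

```python
import numpy as np


def estimate_object_distance(
    relative_depth: np.ndarray,        # HxW relative depths from a monocular depth model (unknown scale)
    reference_pixel: tuple[int, int],  # (row, col) where the 1D laser spot falls in the 2D image
    reference_distance_m: float,       # metric distance reported by the 1D distance sensor
    object_mask: np.ndarray,           # HxW boolean mask of the identified object region
) -> float:
    """Hypothetical sketch: scale the relative depth map with one laser measurement,
    then read off a metric distance for the object's pixels."""
    r, c = reference_pixel
    rel_at_reference = relative_depth[r, c]
    if rel_at_reference <= 0:
        raise ValueError("Relative depth at the reference point must be positive.")

    # Per-image scale factor recovered from the single 1D measurement.
    scale = reference_distance_m / rel_at_reference

    # Aggregate relative depth over the object's pixels (median is robust to mask noise).
    object_rel_depth = np.median(relative_depth[object_mask])

    return float(scale * object_rel_depth)


if __name__ == "__main__":
    # Toy example: 4x4 relative depth map, laser spot at pixel (2, 2), laser reads 1.5 m.
    rel = np.array([
        [0.9, 0.9, 1.0, 1.0],
        [0.9, 2.0, 2.0, 1.0],
        [1.0, 2.0, 2.0, 1.0],
        [1.0, 1.0, 1.0, 1.0],
    ])
    mask = rel > 1.5  # pretend the "object" is the block of larger relative depths
    # scale = 1.5 / 2.0 = 0.75; median relative depth over the object = 2.0; 0.75 * 2.0 = 1.5 m
    print(estimate_object_distance(rel, (2, 2), 1.5, mask))
```

Under the same assumptions, dependent claims 6 and 15 (two or more 1D sensors) would amount to estimating the scale factor from several (reference pixel, reference distance) pairs, for example by averaging the per-pair ratios, and dependent claims 8 and 17 would shift the reference pixel to compensate for motion measured between the laser reading and the image capture.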

Prosecution Timeline

Nov 01, 2024: Application Filed
Jan 24, 2026: Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603011
LANDING GUIDANCE FOR AIR VEHICLES USING NEXT GENERATION CELLULAR NETWORKS
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12596368
SYSTEMS AND TECHNIQUES FOR FIELD-OF-VIEW IMPROVEMENTS IN AUTONOMOUS TRUCKING SYSTEMS
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12591240
MULTI-CHANNEL SENSOR SIMULATION FOR AUTONOMOUS CONTROL SYSTEMS
Granted Mar 31, 2026 • 2y 5m to grant
Patent 12583581
COMMERCIAL SUPERSONIC AIRCRAFT AND ASSOCIATED SYSTEMS AND METHODS
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12583404
OPERATOR-CUSTOMIZED VEHICLE CONTROL
Granted Mar 24, 2026 • 2y 5m to grant
Study what changed to get these applications past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview: 98% (+8.1%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 728 resolved cases by this examiner. Grant probability derived from career allow rate.
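As a sanity check on that stated derivation, the headline figures follow from the examiner's career counts with ordinary ratio arithmetic; treating the interview lift as a simple additive percentage-point adjustment is an assumption made for this sketch.

```python
# Hypothetical reconstruction of the dashboard figures from the career counts above.
granted, resolved = 654, 728
allow_rate = granted / resolved               # 0.898... -> reported as a 90% career allow rate
interview_lift = 0.081                        # +8.1 percentage points observed with an interview
with_interview = allow_rate + interview_lift  # 0.979... -> reported as 98% with interview
print(f"{allow_rate:.1%}  {with_interview:.1%}")  # 89.8%  97.9%
```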
