DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Application Status
This Final Action is in response to applicant’s amendment filed 21 January 2026. Claims 1-11, 13-16, 21-23, and 25-26 are pending and examined. Claims 1-4, 9-11, 13-16, and 21-23 are currently amended, claims 12, 17-20, and 24 are cancelled, and claims 25-26 are new.
Response to Arguments
Applicant’s amendments/arguments with respect to the rejection under 35 USC 112(b) as set forth in the previous Office Action have been fully considered and are persuasive. As such, the rejection as previously presented has been withdrawn. However, applicant’s amendment raises a new rejection under 35 USC 112(b), addressed below.
Applicant’s arguments with respect to the rejection under 35 U.S.C. § 103 have been fully considered but are moot because the new ground of rejection does not rely on any reference(s) applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant’s amendments/arguments with respect to the rejection under 35 USC 101 as being directed to an abstract idea without significantly more have been carefully considered and are persuasive. As such, the rejection under 35 USC 101 has been withdrawn.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-11, 13-16, 21-23, and 25-26 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (US 20190265734 A1) in view of Derhy et al. (US 20210190497 A1), and further in view of Dai et al. (US 20230409040 A1).
With respect to claim 1, Liu discloses wherein the operations comprise obtaining a first image captured by a camera at a first location of the property and a second image captured by the camera at a second, different location of the property (see at least [0010-0013], [0035-0037], and [0079-0084]); detecting feature points at positions within the first image and the second image (see at least [0010-0013] and [0035-0037]), the feature points including first feature points in the first image and second feature points in the second image (see at least [0010-0013] and [0035-0037]); comparing the positions of the first feature points in the first image to positions of the second feature points in the second image (see at least [0010-0013] and [0035-0037]); obtaining data indicating at least the first location and the second location at the property (see at least [0010-0013] and [0035-0037]); comparing at least the first location and the second location (see at least [0010-0013] and [0035-0037]); and, using results of a) the comparison of the position of the feature points in the first image and the second image and b) the comparison of at least the first and the second location, generating the depth data for the feature points for use by the robot navigating the property, wherein generating the pose estimation uses the depth data (see at least [0010-0013], [0035-0037], [0091-0098], and [0103-0113]).
With respect to claim 2, Liu discloses wherein generating the depth data for the feature points uses the scale factor (see at least [0010-0013] and [0086-0090]).
However, Liu does not specifically disclose wherein generating the depth data for the feature points uses an epipolar process.
Derhy teaches wherein generating the depth data for the feature points uses an epipolar process (see at least [0005], [0028], and [0059]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liu, with a reasonable expectation of success, to incorporate the teachings of Derhy wherein generating the depth data for the feature points uses an epipolar process. This would be done to provide a useful reference point for determining/updating the position data regarding a robot navigating an environment (see Derhy para 0047).
With respect to claim 3, Liu discloses wherein: the scale factor maps camera units to real world units for the property (see at least [0010-0013] and [0086-0090]).
With respect to claim 4, Liu discloses generating the scale factor using a change between a first location at which the first image was captured and a second location at which the second image was captured (see at least [0010-0013] and [0086-0090]), at least the first location and the second location (see at least [0010-0013] and [0086-0090]).
With respect to claim 5, Liu does not specifically disclose generating the scale factor using an amount of overlap between the first image and the second image.
Derhy teaches generating the scale factor using an amount of overlap between the first image and the second image (see at least [0005], [0007], [0032], and [claim 13]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liu, with a reasonable expectation of success, to incorporate the teachings of Derhy of generating the scale factor using an amount of overlap between the first image and the second image. This would be done to provide a useful reference point for determining/updating the position data regarding a robot navigating an environment (see Derhy para 0047).
With respect to claim 6, Liu discloses determining whether a difference between a first location at which the first image was captured and a second location at which the second image was captured satisfies a difference threshold (see at least [0010-0013], [0069], [0086-0090], and [0130]), wherein generating the depth data for the feature points is responsive to determining that the difference between the first location at which the first image was captured and the second location at which the second image was captured satisfies the difference threshold (see at least [0010-0013], [0069], [0086-0090], [0113], and [0130]).
With respect to claim 7, Liu discloses wherein generating the depth data for the feature points comprises generating depth data that indicates a relationship between the first feature points of the first image and the second feature points of the second image (see at least [0010-0013], [0069], [0086-0090], [0113], and [0130-0132]).
With respect to claim 8, Liu discloses providing the depth data to the robot to cause the robot to use the depth data for navigation at the property (see at least [0036]).
With respect to claim 9, Liu discloses one or more non-transitory computer storage media encoded with instructions that, when executed by one or more computers (see at least [0050]), cause the one or more computers to perform operations comprising: obtaining, from a robot, an image at a location of a property (see at least [0010-0013] and [0035-0037]); obtaining data indicating the location (see at least [0010-0013], [0066], and [0094]); accessing depth data for the key frame (see at least [0010-0013] and [0086-0090]).
However, Liu does not specifically disclose selecting a key frame from one or more key frames for the property using the data indicating the location and the one or more key frames; comparing, for each of at least one feature point in the key frame, a position of the respective feature point in the image to a position of the respective feature point in the key frame; generating a pose estimation for the robot using the scale factor and a result of the comparison, for the at least one feature point in the key frame, of the position of the respective feature point in the image to the position of the respective feature point in the key frame.
Derhy teaches selecting a key frame from one or more key frames for the property using the data indicating the location and the one or more key frames (see at least [0047-0066]); comparing, for each of at least one feature point in the key frame, a position of the respective feature point in the image to a position of the respective feature point in the key frame (see at least [0047-0066]); generating a pose estimation for the robot using depth data for the key frame and results of the comparison, for the at least one of one or more feature points in the key frame, of the position of the feature point from the feature points in the image to the position of the respective feature point in the key frame (see at least [0032-0033], [0045], and [0047-0066]); and causing an update to a pose of the robot using the pose estimation (see at least [0032-0033], [0043-0045], and [0047-0066]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liu, with a reasonable expectation of success, to incorporate the teachings of Derhy of selecting a key frame from one or more key frames for the property using data for the image and the one or more key frames; comparing, for at least one of one or more feature points in the key frame, a position of a feature point from the feature points in the image to a position of the respective feature point in the key frame; generating a pose estimation for the robot using depth data for the key frame and results of the comparison, for the at least one of one or more feature points in the key frame, of the position of the feature point from the feature points in the image to the position of the respective feature point in the key frame; and causing an update to a pose of the robot using the pose estimation. This would be done to provide a useful reference point for determining/updating the position data regarding a robot navigating an environment (see Derhy para 0047).
Liu as modified by Derhy does not specifically disclose after accessing the depth data, determining a scale factor using the depth data for the key frame; transmitting, to a component in the robot, an instruction for the robot to: update a pose of the robot using the pose estimation; and navigate through the property using the updated pose.
Dai teaches after accessing the depth data, determining a scale factor using the depth data for the key frame (see at least [0021-0025], [0039], [0049-0050], [0087-0088], and [0095-0096]); transmitting, to a component in the robot, an instruction for the robot to: update a pose of the robot using the pose estimation (see at least [0031], [0057], and [0093]); and navigate through the property using the updated pose (see at least [0031], [0057], and [0093]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liu as modified by Derhy, with a reasonable expectation of success, to incorporate the teachings of Dai wherein after accessing the depth data, determining a scale factor using the depth data for the key frame; transmitting, to a component in the robot, an instruction for the robot to: update a pose of the robot using the pose estimation; and navigate through the property using the updated pose. This would be done to further improve navigating an environment with obstacles of different characteristics (see Dai para 0002).
With respect to claim 10, Liu does not specifically disclose wherein comparing, for each of the at least one feature point in the key frame, the position of the respective feature point in the image to the position of the respective feature point in the key frame uses an epipolar process.
Derhy teaches wherein comparing, for each of the at least one feature point in the key frame, the position of the respective feature point in the image to the position of the respective feature point in the key frame uses an epipolar process (see at least [0005], [0028], and [0059]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liu, with a reasonable expectation of success, to incorporate the teachings of Derhy wherein comparing, for each of the at least one feature point in the key frame, the position of the respective feature point in the image to the position of the respective feature point in the key frame uses an epipolar process. This would be done to provide a useful reference point for determining/updating the position data regarding a robot navigating an environment (see Derhy para 0047).
With respect to claim 11, Liu discloses wherein determining the scale factor comprises determining the scale factor using a key frame location at the property at which a camera captured the key frame and the location at the property for the image (see at least [0010-0013] and [0086-0090]).
With respect to claim 13, Liu does not specifically disclose wherein transmitting the instruction for the robot to update the pose of the robot uses the pose estimation and an expected pose of the robot.
Derhy teaches wherein transmitting the instruction for the robot to update the pose of the robot uses the pose estimation and an expected pose of the robot (see at least [0071-0072] and [0076]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liu, with a reasonable expectation of success, to incorporate the teachings of Derhy wherein transmitting the instruction for the robot to update the pose of the robot uses the pose estimation and an expected pose of the robot. This would be done to provide a useful reference point for determining/updating the position data regarding a robot navigating an environment (see Derhy para 0047).
With respect to claim 14, Liu does not specifically disclose obtaining, using the data indicating the location, the one or more key frames and depth data for the one or more key frames.
Derhy teaches obtaining, using the data indicating the location, the one or more key frames and depth data for the one or more key frames (see at least [0058]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liu, with a reasonable expectation of success, to incorporate the teachings of Derhy of obtaining, using the data indicating the location, the one or more key frames and depth data for the one or more key frames. This would be done to provide a useful reference point for determining/updating the position data regarding a robot navigating an environment (see Derhy para 0047).
With respect to claim 15, Liu does not specifically disclose wherein selecting the key frame from the one or more key frames for the property using data indicating the location and the one or more key frames uses a result of a comparison of first feature points of the image to second feature points of at least one of the one or more key frames.
Derhy teaches wherein selecting the key frame from the one or more key frames for the property using data indicating the location and the one or more key frames uses a result of a comparison of first feature points of the image to second feature points of at least one of the one or more key frames (see at least [0045] and [0060]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liu, with a reasonable expectation of success, to incorporate the teachings of Derhy wherein selecting the key frame from the one or more key frames for the property using data indicating the location and the one or more key frames uses a result of a comparison of first feature points of the image to second feature points of at least one of the one or more key frames. This would be done to provide a useful reference point for determining/updating the position data regarding a robot navigating an environment (see Derhy para 0047).
With respect to claim 16, Liu does not specifically disclose wherein selecting the key frame from the one or more key frames for the property using the data indicating the location and the one or more key frames uses the location at the property for the image and at least one location of a respective key frame from the one or more key frames.
Derhy teaches wherein selecting the key frame from the one or more key frames for the property using the data indicating the location and the one or more key frames uses the location at the property for the image and at least one location of a respective key frame from the one or more key frames (see at least [0047]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liu, with a reasonable expectation of success, to incorporate the teachings of Derhy wherein selecting the key frame from the one or more key frames for the property using the data indicating the location and the one or more key frames uses the location at the property for the image and at least one location of a respective key frame from the one or more key frames. This would be done to provide a useful reference point for determining/updating the position data regarding a robot navigating an environment (see Derhy para 0047).
With respect to claims 21, 22, and 23, they are method claims that recite substantially the same limitations as the respective computer storage media claims 9, 10, and 11. As such, claims 21, 22, and 23 are rejected for substantially the same reasons given for the respective computer storage media claims 9, 10, and 11 and are incorporated herein.
With respect to claim 25, Liu does not specifically disclose wherein selecting the key frame from the one or more key frames for the property using the data indicating the location and the one or more key frames uses the location at the property for the image and at least one location of a respective key frame from the one or more key frames.
Derhy teaches wherein selecting the key frame from the one or more key frames for the property using the data indicating the location and the one or more key frames uses the location at the property for the image and at least one location of a respective key frame from the one or more key frames (see at least [0032-0033], [0043-0045], and [0047-0066]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liu, with a reasonable expectation of success, to incorporate the teachings of Derhy wherein selecting the key frame from the one or more key frames for the property using the data indicating the location and the one or more key frames uses the location at the property for the image and at least one location of a respective key frame from the one or more key frames. This would be done to provide a useful reference point for determining/updating the position data regarding a robot navigating an environment (see Derhy para 0047).
With respect to claim 26, this is a system claim that recites substantially the same limitations as the respective computer storage media claim 9. As such, claim 26 is rejected for substantially the same reasons given for the respective computer storage media claim 9 and is incorporated herein.
Conclusion
Applicant’s amendment necessitated the new ground of rejection presented in the office action. Accordingly, THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Inquiry
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDALLA A KHALED whose telephone number is (571)272-9174. The examiner can normally be reached Monday-Thursday 8:00 AM-5:00 PM, every other Friday 8:00 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Faris Almatrahi can be reached on (313) 446-4821. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ABDALLA A KHALED/Examiner, Art Unit 3667