Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Response to Amendment
The RCE filed 12 has been entered. Claims 1-26 remain pending in the application. This communication is a Non-Final Office Action on the merits. The Information Disclosure Statement (IDS) filed on 12/31/2025 has been acknowledged by the Office.
Response to Arguments
Applicant argues that the amendment traverses the Double Patenting rejection, but the arguments are not persuasive in view of the cited references. Applicant’s remarks are confined to simply arguing that the independent claims of the present application are patentably distinct from the claims of U.S. Patent No. 11,340,620 in view of Buibas, and do not provide detailed context on how the claim language distinguishes the subject matter from the previously cited references. In the absence of such detail, the Claim Interpretation section below provides context for how the Examiner is interpreting the amended claim language.
Claim Interpretation
Independent claims 1 and 14 now include amended limitations that recite: “instructing display, via a graphical user interface, of a first scene of the environment as viewed from a first position on the robot located at a location within the environment”. Then, the second scene of the environment is defined as being “viewed from a second position on the robot located at the location”.
The Examiner asserts that this claim language supports the interpretation that the first portion of the image data and the second portion of the image data “are associated with a same orientation of a body of the robot”. In other words, to provide the second scene on the graphical user interface, the robot does not physically move or change orientation; the views represent what the robot itself “sees” from two different image sensors mounted on the robot. The distinction introduced by this claim amendment appears to be that the image sensors are provided on different portions of the robot body, and that each corresponding view must be as if viewed from that position on the robot body.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-26 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-30 of Seifert et al., U.S. Patent No. 11,340,620, hereinafter Seifert, in view of Buibas et al., hereinafter Buibas (US 10,603,794 B2). Although the claims at issue are not identical, they are not patentably distinct from each other because the reference claims render the scope of the current claim limitations obvious, as seen below:
Regarding claims 1 and 14, the claims of Seifert disclose a computer-implemented method that, when executed by data processing hardware of an operator device in communication with a robot, causes the data processing hardware to perform operations, and a system, comprising: receiving image data, the image data corresponding to an environment of the robot (see at least claims 1 and 14: “receiving image data from at least one image sensor, the image data corresponding to an environment about a robot”).
Seifert specifically recites “at least one image sensor”, implying that more than one can be used, and Seifert explicitly discloses numerous individual cameras in claims 2 and 15. However, Seifert does not explicitly teach in claims 1 and 14 receiving the image data from a first image sensor and a second image sensor.
Instead, Buibas, whose invention pertains to a robotic camera system, teaches in at least Col 3, Line 35 multiple implementations of a 360 degree camera system, including single camera systems but also multiple camera configurations, such as in FIG. 3C.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the at least one image sensor for collecting image data of the environment of a robot of Seifert with the 360 degree camera system of Buibas in order to "capture 360 degree image information in higher resolution" (Buibas Col 9, Line 39). The modification of Seifert to include two image sensors would have produced an expected result for capturing image data in the environment of a robot. The exact configuration of cameras is, as described by Buibas, not limited to the configuration of FIG. 3C of Buibas, and one of ordinary skill in the art would have been able to equip the system of Seifert with any configuration of cameras that provides 360 degrees of environment information.
In view of the modification, it can be understood that the “at least one image sensor” of Seifert is modified to incorporate the “first and second image sensors” of the amended claims, and Seifert then teaches
instructing display, via a graphical user interface, of a first scene of the environment as viewed from a first position on the robot located at a location within the environment based on a first portion of the image data from the first image sensor, wherein a view of the robot displayed via the graphical user interface corresponds to the first scene (see at least Claims 1 and 14: “a graphical user interface for display on a screen of the operator device, the graphical user interface configured to: display a scene of the environment about the robot based on the image data”. See also claims 10 and 23: “a field of view of the environment about the robot in a direction away from a current scene”, wherein the current scene is the first scene. The current scene is displayed in the graphical user interface and comprises a view of the robot from a first position within the environment.);
receiving, via the graphical user interface, an input indicating a request to rotate the view of the robot in a particular direction (see at least claims 10 and 23: “the graphical user interface is further configured to: receive a rotation input to rotate a field of view of the environment about the robot in a direction away from a current scene displayed in the graphical user interface; and display a preview scene by rotating the field of view of the environment about the robot in the direction away from the current scene.” See also claim 13, which establishes that the direction is a particular direction, defined as a direction that “simulates the robot executing a turning maneuver in the direction away from the current scene and toward the preview scene.”); and
Seifert and Buibas do not explicitly teach
based on receiving the input, instructing display, via the graphical user interface, of a second scene of the environment as viewed from a second position on the robot located at the location based on a second portion of the image data from the second image sensor, wherein the view of the robot displayed via the graphical user interface corresponds to the second scene based on rotating the view of the robot in the particular direction, wherein the first scene based on the first portion of the image data from the first image sensor and the second scene based on the second portion of the image data from the second image sensor are associated with a same orientation of a body of the robot.
Instead Seifert teaches in at least claims 11-13 and 24-26: “the graphical user interface is configured to display the preview scene without requiring physical movement by the robot…the graphical user interface is configured to receive the rotation input in response to receiving an input indication indicating selection of a rotation graphic displayed in the graphical user interface…the rotation of the field of view of the environment about the robot in the direction away from the current scene simulates the robot executing a turning maneuver in the direction away from the current scene and toward the preview scene.” Wherein the preview scene is the second scene, and specifically represents a view from a second position on the robot since the image sensors are capturing side or back angles of the robot.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the robot with multiple image sensors and a rotation input of Seifert and Buibas with the multiple different embodiments for a preview scene of Seifert in order to implement, as a matter of design choice, the disclosed embodiments for providing a preview scene in tandem. Although Seifert discloses claims 11-13 and 24-26 separately, the steps of each claim are not mutually exclusive, and combining the steps would produce an expected result of providing a preview scene that does not require physical movement of a robot in response to a rotation input. This claim combination reveals that the claims of the instant application involve interfering subject matter, and one of ordinary skill in the art would have found the claim combination obvious before the effective filing date of the claimed invention.
Regarding claims 8 and 21, modified Seifert teaches the method of claim 1 and the system of claim 14, and Seifert further teaches that
the input comprises a rotation input corresponding to the view of the robot, and wherein the rotation input is based on an input indication received via the graphical user interface (see at least claims 12-13 and 25-26: “the graphical user interface is configured to receive the rotation input in response to receiving an input indication indicating selection of a rotation graphic displayed in the graphical user interface…the rotation of the field of view of the environment about the robot in the direction away from the current scene simulates the robot executing a turning maneuver in the direction away from the current scene and toward the preview scene.”).
Regarding claims 10 and 23, modified Seifert teaches the method of claim 1 and the system of claim 14, and Seifert further teaches that
rotating the view of the robot simulates the robot executing a turning maneuver in a direction away from the first scene and toward the second scene (see at least claims 13 and 26: “the rotation of the field of view of the environment about the robot in the direction away from the current scene simulates the robot executing a turning maneuver in the direction away from the current scene and toward the preview scene.”).
Regarding claims 11 and 24, modified Seifert teaches the method of claim 1 and the system of claim 14, and Seifert further teaches that the first scene comprises:
a forward scene of the environment relative to the robot;
a left scene of the environment relative to the robot;
a right scene of the environment relative to the robot;
an aft scene of the environment relative to the robot; or
a top-down scene of the robot (see at least claims 2 and 15: “a forward scene of the environment based on the image data, the image data captured by a forward-left camera and a forward-right camera disposed on the robot; a left scene of the environment based on the image data, the image data captured by a left camera disposed on the robot; a right scene of the environment based on the image data, the image data captured by a right camera disposed on the robot; an aft scene of the environment based on the image data, the aft scene captured by an aft camera disposed on the robot; or a top-down scene of the robot based on the image data, the image data captured by a payload camera, the forward-left camera, the forward-right camera, the left camera, the right camera, and the aft camera”).
Regarding claims 12 and 25, modified Seifert teaches the method of claim 1 and the system of claim 14, and Seifert further teaches that
the robot comprises a quadruped robot (see at least claims 1 and 14: “A quadruped robot”).
Regarding claims 13 and 26, modified Seifert teaches the method of claim 1 and the system of claim 14, and Seifert further teaches in claims 9 and 22 that “the at least one image sensor is disposed on the robot”, but does not explicitly teach that the first image sensor and the second image sensor are disposed on the robot.
Instead, Buibas, whose invention pertains to a robotic camera system, teaches in at least Col 3, Line 35 multiple implementations of a 360 degree camera system, including single camera systems but also multiple camera configurations, such as in FIG. 3C.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the at least one image sensor for collecting image data of the environment of a robot of Seifert with the 360 degree camera system of Buibas in order to "capture 360 degree image information in higher resolution" (Buibas Col 9, Line 39).
In view of the modification to include the first and second image sensors, Seifert then teaches
the operator device is in communication with the first image sensor and the second image sensor via a network (see at least claims 9 and 22: “the operator device is in communication with the image sensor via a network”, wherein the modification provides for the first and second image sensors).
Claims 2-7, 9, 15-20, and 22 are rejected for their dependency on a rejected base claim.
Allowable Subject Matter
Claims 1-26 are considered allowable over the prior art, but are not currently allowed in view of the Double Patenting rejection.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Dairon Estevez whose telephone number is (703)756-4552. The examiner can normally be reached M-F 8:00AM - 4:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Khoi Tran can be reached at (571) 272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/D.E./Examiner, Art Unit 3656 /KHOI H TRAN/Supervisory Patent Examiner, Art Unit 3656