Prosecution Insights
Last updated: April 19, 2026
Application No. 18/612,435

TELEPRESENCE SYSTEM

Final Rejection §103

Filed: Mar 21, 2024
Examiner: PANDE, ASHUTOSH
Art Unit: 3668
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Disney Enterprises Inc.
OA Round: 2 (Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
Grant Probability With Interview: 80%

Examiner Intelligence

Career Allow Rate: 71% (5 granted / 7 resolved; +19.4% vs TC avg; above average)
Interview Lift: +8.3% across resolved cases with interview (moderate)
Typical Timeline: 3y 0m average prosecution
Career History: 39 total applications across all art units, 32 currently pending
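These headline figures reduce to simple arithmetic over the examiner's resolved cases. A minimal sketch of the derivation in Python; the `ExaminerStats` class and the additive-lift assumption are illustrative, not the dashboard's actual model:

```python
from dataclasses import dataclass

@dataclass
class ExaminerStats:
    granted: int           # resolved cases that issued as patents
    resolved: int          # granted + abandoned
    interview_lift: float  # observed bump in allow rate when an interview was held

    @property
    def allow_rate(self) -> float:
        """Career allowance rate: granted / resolved (here 5 / 7, about 71%)."""
        return self.granted / self.resolved

    def grant_probability(self, with_interview: bool = False) -> float:
        """Assume the baseline tracks the career allow rate and an
        interview adds the historical lift, capped at 100%."""
        p = self.allow_rate + (self.interview_lift if with_interview else 0.0)
        return min(p, 1.0)

stats = ExaminerStats(granted=5, resolved=7, interview_lift=0.083)
print(f"{stats.allow_rate:.0%}")               # 71%
print(f"{stats.grant_probability(True):.0%}")  # ~80% with interview
```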

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§103: 65.8% (+25.8% vs TC avg)
§102: 9.3% (-30.7% vs TC avg)
§112: 12.4% (-27.6% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 7 resolved cases
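The per-statute deltas follow the same arithmetic. Notably, every printed delta implies the same flat 40% Tech Center baseline estimate. A quick check, with the rates hard-coded from the list above:

```python
# Reproduce the "vs TC avg" deltas; the printed figures imply a flat
# 40% Tech Center baseline for every statute.
examiner_rates = {"§101": 12.4, "§103": 65.8, "§102": 9.3, "§112": 12.4}
TC_AVG = 40.0  # baseline implied by the printed deltas

for statute, rate in examiner_rates.items():
    delta = rate - TC_AVG
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```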

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office Action is in response to the amendments filed on 11/21/2025. Claims 1, 2, 4, 5, 12-14 and 18-20 are amended. Claims 1-20 are presently pending and examined.

Response to Arguments

Claim Objection: Applicant's amendments and accompanying arguments, see remarks, filed 11/21/2025, with respect to the informality in Claim 1 have been fully considered. The claim objection has been withdrawn.

112(b) Rejection: Applicant's amendments and accompanying arguments, see remarks, filed 11/21/2025, with respect to the 112(b) rejections have been fully considered and are persuasive. The 112(b) rejection has been withdrawn.

Prior Art Rejection: Applicant's amendments and accompanying arguments, see remarks, filed 11/21/2025, with respect to the rejections of claims 1-20 under 103 have been fully considered and are persuasive. Therefore, the rejections under 35 U.S.C. 102 and 103 have been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Pavel Jurik et al., US 2020/0224854 ("Jurik").

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 4-16, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sutherland in view of Norman P. Jouppi et al., US 6,292,713 B1 ("Jouppi"), and further in view of Pavel Jurik et al., US 2020/0224854 ("Jurik").
As per Claim 1, Sutherland discloses a telepresence system comprising:

an interactive device including an actuator and located at a first location (see at least [0011] telepresence robot according to the present invention is of a modular design and includes a series of connectible modules);

a user device located at a second location separate from the first location, wherein the user device is configured to receive a user input (see at least [0107] This application integrates with the video and audio capabilities of the remote host platform device including desktop, laptop, and other tablet computers as well as smartphones, web-enabled game consoles and televisions, and [0108] FIG. 10 illustrates an embodiment of some of the controls a remote connected user of the virtual presence robot is able to access from a smartphone or tablet);

the actuator communicatively coupled to the user device and the interactive device (see at least [Fig. 20] Motor controller and Neck motor, [0015] controller code running in the base which interpret and execute standardized motion and control commands for both the omni-wheeled base and application-specific motors, [0015] core application code running on the head device responding to both remote communications from the user and local navigation priorities and robotic systems health reporting demands, and [0120] FIG. 20 illustrates the communication paths between a remote PC or tablet via the Internet WiFi, 3G or 4G network to the robot. It also illustrates the communication within the robotic structure to the various motor drives).

Sutherland does not explicitly disclose: a user device located at a second location separate from the first location, wherein the user device is configured to receive a user input; the actuator comprising a light emitter configured to output a light and to automatically change the outputted light between a collimated configuration and a diffuse configuration at the first location based on the user input at the second location.

Jouppi teaches a user device located at a second location separate from the first location, wherein the user device is configured to receive a user input (see at least [Col. 1, line 45-46] The user station is responsive to a user and communicates information to and from the user, and [Col. 3, line 21-24] In FIG. 1, a robotic telepresence system 40 has a user station 50 at a first geographic location 52 and a robot 60 at a second geographic location 62. The user station 50 is responsive to a user and communicates information to and receives information from the user).

Jouppi also teaches the actuator comprising a light emitter configured to output a light and to automatically change the outputted light between a collimated configuration and a diffuse configuration at the first location based on the user input at the second location (see at least [Col. 4, line 23-28] In response to user commands to turn the camera array 82, the control computer 80 activates the motor 118 which turns the shaft 116 with the camera array 82. In this way the user is provided with a way of using the robot's high resolution camera 104 to look around, and [Col. 16, line 43-45] A move_robot procedure 676 that sends signals to move the robot in response to the joystick 73; alternately, the signals are sent in response to the mouse 72).
Thus, Sutherland discloses a telepresence interactive device, a user device, and an actuator, and Jouppi teaches the user device at a second geographic location, communicatively connected to the robot at a first location. As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Sutherland with the remote control approach as taught by Jouppi, with a reasonable expectation of success, to provide a three dimensional representation of the user transmitted from the user station (Col. 1, line 48-50).

Jurik teaches the actuator comprising a light emitter configured to output a light and to automatically change the outputted light between a collimated configuration and a diffuse configuration at the first location based on the user input at the second location (see at least [0042] Each LED may have a collimating optical system 58 which serves to collimate and direct the light beam through the dichroic filters 152 and 154, [0042] Focusing and harmonizing optics 60 may include optical elements selected from, but not restricted to, optical diffuser, holographic diffuser, non-Gaussian diffuser, … or other optical means of homogenizing or mixing light as is well known in the art, and [0049] Luminaires with automated and remotely controllable functionality are well known in the entertainment and architectural lighting markets. Such products are commonly used in theatres, television studios, concerts, theme parks, nightclubs and other venues. … Many products provide control over other parameters such as the intensity, color, focus, beam size, beam shape and beam pattern).

Thus, Sutherland discloses a telepresence interactive device, a user device, and an actuator, and Jurik teaches a light emitter that can provide collimated and diffused light. As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Sutherland with the light emitter taught by Jurik, with a reasonable expectation of success, to have a system where stray light and aberrations are well controlled ([0004]).

As per Claim 2, Sutherland discloses wherein the light emitter includes one or more of a laser, a light-emitting diode, a fluorescent light, an incandescent light, an infrared light source, an ultraviolet light source, and is configured to identify an area or object within the first location by illumination (see at least Sutherland [0061] The mid-section sub-assembly 14 also includes a laser pointer apparatus 32 which in some applications can also function as a laser pattern emitter for 3D object reconstruction. As mentioned earlier, an infrared lighting apparatus may also be included in the mid-section sub-assembly with the appropriate infrared-sensitive camera 36 so that the virtual presence robot can navigate in relative darkness, and [0104] the laser pointer then projects point clouds on the actual objects or areas indicated by the tourists, with a dot representing each tourist's selection).
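To make the limitation disputed in Claim 1 concrete, here is a minimal sketch of a light emitter that toggles between a collimated and a diffuse output when a command arrives from the remote user device. Every name below is hypothetical; nothing is drawn from Sutherland, Jouppi, or Jurik:

```python
from enum import Enum

class BeamMode(Enum):
    COLLIMATED = "collimated"  # narrow pointing beam (laser-pointer-like)
    DIFFUSE = "diffuse"        # homogenized flood illumination

class LightEmitter:
    """Hypothetical actuator on the interactive device (first location)."""
    def __init__(self) -> None:
        self.mode = BeamMode.DIFFUSE

    def set_mode(self, mode: BeamMode) -> None:
        # In hardware this might drive a motorized diffuser or
        # collimating optic; here it only records the state.
        self.mode = mode

def handle_user_input(emitter: LightEmitter, command: dict) -> None:
    """Runs on the interactive device; `command` arrives from the user
    device at the second location (transport layer omitted)."""
    if command.get("type") == "toggle_beam":
        emitter.set_mode(
            BeamMode.COLLIMATED if emitter.mode is BeamMode.DIFFUSE
            else BeamMode.DIFFUSE
        )

emitter = LightEmitter()
handle_user_input(emitter, {"type": "toggle_beam"})
print(emitter.mode)  # BeamMode.COLLIMATED
```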
As per Claim 4, Sutherland discloses the telepresence system of claim 1, wherein the actuator further comprises one or more of a virtual output to the user device, a mechanical pointer, or a fluid emitter (see at least [0061] The mid-section sub-assembly 14 also includes a laser pointer apparatus 32 which in some applications can also function as a laser pattern emitter for 3D object reconstruction, and [0068] Looking more closely at FIGS. 11 and 12, the modified mid-section module 14 a includes opposed robotic arms 37 and 39. These driven arms include pivoting joints (generally corresponding to shoulder, elbow and wrist pivoting) and a gripping function).

As per Claim 5, Sutherland discloses the telepresence system of claim 1, wherein the actuator further comprises an actuator control configured to change a position of the actuator relative to the interactive device (see at least [0068] Looking more closely at FIGS. 11 and 12, the modified mid-section module 14 a includes opposed robotic arms 37 and 39. These driven arms include pivoting joints (generally corresponding to shoulder, elbow and wrist pivoting) and a gripping function (grippers 41) to perform various functions for manipulations of objects about the robot by a remote operator or, in the example noted above, to open doors or perform other autonomous actions).

As per Claim 6, Sutherland discloses the telepresence system of claim 5, wherein the actuator control comprises a first pivot configured to move about a first axis and a second pivot configured to move about a second axis orthogonal to the first axis (see at least [0041] driven tilt mechanism 24 enables the third party head device to face down, straight ahead, or up at varying angles to exhibit emotion, change the field of view of the camera (assuming the given third party device includes a camera) and in conjunction with the wheels 20 to provide left/right motion, establishing eye contact between the remote user who is facing out of the screen of the third party head device and the person or persons in proximity to the virtual presence robot).

As per Claim 7, Sutherland does not disclose that the first location comprises at least one of a content production set, a remote scouting location, or a conference room. Jouppi teaches the first location comprising a conference room (see at least [Col. 2, line 3-5] FIG. 4 is a diagram illustrating the use of the robot of FIG. 2 in a conference room with the overlay of the user's head on a background image, [Col. 2, line 6-9] FIG. 5A is a diagram of the display of the robot of FIG. 2 in the conference room of FIG. 4 illustrating the display of a texture map of a front view of user's head onto a front display, and [Col. 5, line 11-14] As shown in FIG. 4, the robot 60 provides a telepresence for a user at a remote user station at a meeting in a conference room 120. The conference room 120 is decorated with a plain side wall 122 opposite a vertically striped side wall 124). Thus, Sutherland discloses a telepresence interactive device, a user device, and an actuator, and Jouppi teaches the first location as being a conference room.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Sutherland with the use of the equipment in a conference room as taught by Jouppi, with a reasonable expectation of success, to provide a three dimensional representation of the user and audio from the user at the local environment (Col. 3, line 26-28).

As per Claim 8, Sutherland discloses wherein the interactive device further comprises a mobility module configured to move the interactive device in the first location (see at least [0070] Referring back to FIG. 1, the base sub-assembly 18 is illustrated as an omni-wheeled device capable of moving in any direction, and [0096] Smooth, life-like motion is possible with this combination of sensors).

As per Claim 9, Sutherland discloses wherein the interactive device further comprises one or more sensors configured to detect an obstacle, and the mobility module is configured to avoid the obstacle (see at least [0071] Between each pair of omni-wheels 20 is located at least one, and in the preferred embodiment two, ultrasonic sensors 38. The ultrasonic sensors provide feedback necessary for the embedded motion controller to avoid obstacles, and [0097] In conjunction with ultrasonic sensor data from the base, the DSP also calculates potential collisions and other hazards (such as the top of a flight of stairs, for example) based on ideal trajectory information from the third party head device).

As per Claim 10, Sutherland discloses wherein the mobility module is configured to move the interactive device while in contact with a surface in the first location, or through air in the first location (see at least [0067] The omni-wheeled base enables the virtual presence robot to move with the precise curvature of the door. By adding appropriate autonomous navigation algorithms, the virtual presence robot can be called via a patient call button to any room in a hospital).

As per Claim 11, Sutherland discloses:

a base (see at least [0038] base sub-assembly 18);

a mobility module coupled to the base and configured to move the interactive device in the first location (see at least [0070] Fig. 1, the base sub-assembly 18 is illustrated as an omni-wheeled device capable of moving in any direction. The base sub-assembly includes three omni-wheels 20 mounted at 60 degrees to each other forming a triangle);

a support structure extending from the base (see at least [0053] The mid-section 14 is also designed to be available in a variety of configurations);

an actuator control coupled to the support structure (see at least [Fig. 20] Motor controller and Neck motor); wherein

the actuator is coupled to, and independently moveable relative to, the support structure by the actuator control (see at least [0052] the most basic of which would not include the tilt mechanism 24, and [0059] the entire camera-dome apparatus on a swinging and possibly pivoting horizontal axle such that when the robot is moving up or down wheelchair ramps, the horizon would remain level);

the mobility module and the actuator control are communicatively coupled to the user device and configured to receive the user input (see at least [0120] FIG. 20 illustrates the communication paths between a remote PC or tablet via the Internet WiFi, 3G or 4G network to the robot. It also illustrates the communication within the robotic structure to the various motor drives, the various ultrasonic sensors, the panoramic camera and processing provided within the robot);

the mobility module is configured to move the interactive device based on the user input (see at least [0043] In some cases, a more complex tilt mechanism may be deployed to enable the head to pivot from side to side or extend outwards to project over a bed or wheelchair);

the actuator control is configured to actuate the actuator based on the user input (see at least Fig. 20, Fig. 21, and [0015] core application code running on the head device responding to both remote communications from the user and local navigation priorities and robotic systems health reporting demands).

As per Claim 12, Sutherland discloses a method of interacting with a remote environment (see at least [0120] FIG. 20 illustrates the communication paths between a remote PC or tablet via the Internet WiFi, 3G or 4G network to the robot. It also illustrates the communication within the robotic structure to the various motor drives, the various ultrasonic sensors, the panoramic camera and processing provided within the robot).

Sutherland does not disclose: communicatively coupling at least one user device in the remote environment and an interactive device in a local environment; receiving a user command at the at least one user device; transmitting the user command to the interactive device; actuating an actuator of the interactive device, the actuator comprising a light emitter, to output a light, and to automatically change the outputted light between a collimated configuration and a diffuse configuration in the remote environment based on the user command.

Jouppi teaches communicatively coupling at least one user device in the remote environment and an interactive device in a local environment (see at least [Col. 3, line 21-23] In FIG. 1, a robotic telepresence system 40 has a user station 50 at a first geographic location 52 and a robot 60 at a second geographic location 62, and [Col. 3, line 43-47] The robot 60 is coupled to the communications medium 74 via a wireless transmitter/receiver 76 on the robot 60 and at least one corresponding wireless transmitter/receiver base station 78 that is placed sufficiently near the robot 60 to transmit and receive signals as the robot 60 moves); receiving a user command at the at least one user device (see at least [Col. 3, line 23-25] The user station 50 is responsive to a user and communicates information to and receives information from the user); transmitting the user command to the interactive device (see at least [Col. 3, line 25-28] The robot 60 is responsive to commands from the user station 50 and provides a three dimensional representation of the user and audio from the user which is transmitted by the user station 50); and actuating an actuator of the interactive device, the actuator comprising a light emitter, to output a light, and to automatically change the outputted light between a collimated configuration and a diffuse configuration in the remote environment based on the user command (see at least [Col. 4, line 23-28] In response to user commands to turn the camera array 82, the control computer 80 activates the motor 118 which turns the shaft 116 with the camera array 82. In this way the user is provided with a way of using the robot's high resolution camera 104 to look around, and [Col. 16, line 43-45] A move_robot procedure 676 that sends signals to move the robot in response to the joystick 73; alternately, the signals are sent in response to the mouse 72).

Thus, Sutherland discloses a telepresence interactive device, a user device, and an actuator, and Jouppi teaches the communicative coupling of an interactive device at a first local environment and a user device at a second remote geographic location. As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Sutherland with the remote control approach as taught by Jouppi, with a reasonable expectation of success, to provide a three dimensional representation of the user and audio from the user at the local environment (Col. 3, line 26-28).

Jurik teaches actuating an actuator of the interactive device, the actuator comprising a light emitter, to output a light, and to automatically change the outputted light between a collimated configuration and a diffuse configuration in the remote environment based on the user command (see at least [0042] Each LED may have a collimating optical system 58 which serves to collimate and direct the light beam through the dichroic filters 152 and 154, [0042] Focusing and harmonizing optics 60 may include optical elements selected from, but not restricted to, optical diffuser, holographic diffuser, non-Gaussian diffuser, … or other optical means of homogenizing or mixing light as is well known in the art, and [0049] Luminaires with automated and remotely controllable functionality are well known in the entertainment and architectural lighting markets. Such products are commonly used in theatres, television studios, concerts, theme parks, nightclubs and other venues. … Many products provide control over other parameters such as the intensity, color, focus, beam size, beam shape and beam pattern).

Thus, Sutherland discloses a telepresence interactive device, a user device, and an actuator, and Jurik teaches a light emitter that can provide collimated and diffused light. As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Sutherland with the light emitter taught by Jurik, with a reasonable expectation of success, to have a system where stray light and aberrations are well controlled ([0004]).
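The Claim 12 method steps (couple, receive, transmit, actuate) map onto a simple command pipeline. A toy sketch follows, with an in-memory queue standing in for the WiFi/3G/4G link that Sutherland's FIG. 20 describes; all function names are invented for illustration:

```python
# Toy dispatch loop for the Claim 12 method steps: receive a user
# command remotely, transmit it, and actuate the interactive device.
# No real robot API is implied.
import json
import queue

link: "queue.Queue[str]" = queue.Queue()  # stand-in for the wireless link

def user_device_send(command: dict) -> None:
    """Remote environment: serialize and transmit the user command."""
    link.put(json.dumps(command))

def interactive_device_poll(handlers: dict) -> None:
    """Local environment: route each received command to a subsystem."""
    while not link.empty():
        command = json.loads(link.get())
        handlers[command["target"]](command)

handlers = {
    "mobility": lambda c: print("move:", c["vector"]),
    "light":    lambda c: print("beam mode:", c["mode"]),
}
user_device_send({"target": "mobility", "vector": [1.0, 0.0]})
user_device_send({"target": "light", "mode": "diffuse"})
interactive_device_poll(handlers)
```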
As per Claim 13, Sutherland discloses wherein the actuator comprises a light emitter including one or more of a laser, a light-emitting diode, a fluorescent light, an incandescent light, an infrared light, or an ultraviolet light (see at least Sutherland [0013] a mid-section module typically incorporating an embedded laser pointer and a 360 degree 2D or 3D camera apparatus, plus numerous application-specific options), and the method further comprises identifying an area or an object within the local environment by illumination (see at least [0104] the laser pointer then projects point clouds on the actual objects or areas indicated by the tourists, with a dot representing each tourist's selection).

As per Claim 14, Sutherland discloses wherein generating the physical output in the local environment comprises providing a virtual output to the user device, actuating a mechanical pointer, or emitting a fluid (see at least [0061] The mid-section sub-assembly 14 also includes a laser pointer apparatus 32 which in some applications can also function as a laser pattern emitter for 3D object reconstruction, and [0068] Looking more closely at FIGS. 11 and 12, the modified mid-section module 14 a includes opposed robotic arms 37 and 39. These driven arms include pivoting joints (generally corresponding to shoulder, elbow and wrist pivoting) and a gripping function (grippers 41)).

As per Claim 15, Sutherland discloses further comprising changing, via an actuator control of the actuator, a position of the actuator relative to the interactive device (see at least [0068] Looking more closely at FIGS. 11 and 12, the modified mid-section module 14 a includes opposed robotic arms 37 and 39. These driven arms include pivoting joints (generally corresponding to shoulder, elbow and wrist pivoting) and a gripping function (grippers 41) to perform various functions for manipulations of objects about the robot by a remote operator or, in the example noted above, to open doors or perform other autonomous actions).

As per Claim 16, Sutherland discloses further comprising changing, via an actuator control of the actuator, a position of the actuator relative to the interactive device by rotating at least one of a first pivot of the actuator control about a first axis or a second pivot of the actuator control about a second axis orthogonal to the first axis (see at least [0041] driven tilt mechanism 24 enables the third party head device to face down, straight ahead, or up at varying angles to exhibit emotion, change the field of view of the camera (assuming the given third party device includes a camera) and in conjunction with the wheels 20 to provide left/right motion, establishing eye contact between the remote user who is facing out of the screen of the third party head device and the person or persons in proximity to the virtual presence robot).

As per Claim 19, Sutherland discloses further comprising generating a physical output in the remote environment based on the user command (see at least [0104] the laser pointer then projects point clouds on the actual objects or areas indicated by the tourists, with a dot representing each tourist's selection, and [0110] if the user pushes the iPhone™ away from their body by extending their arms out straight or walks forward with the iPhone™, the virtual presence robot will move forward, in the direction of the iPhone™ motion. If the user then spins left or right, the virtual presence robot will rotate similarly. The screen on the iPhone™ will show the resulting motion which, depending on network lag and the speed of the motion, will lag behind the user's motion), wherein actuating the actuator comprises directing the physical output toward an object or a person within the local environment based on the user command (see at least [0045] enables the robot to gracefully move out of the path of people or other robots moving towards it, [0068] Looking more closely at FIGS. 11 and 12, the modified mid-section module 14 a includes opposed robotic arms 37 and 39. These driven arms include pivoting joints (generally corresponding to shoulder, elbow and wrist pivoting) and a gripping function (grippers 41) to perform various functions for manipulations of objects about the robot by a remote operator or, in the example noted above, to open doors or perform other autonomous actions, and [0111] the user can then tap anywhere in either field of view 94 or 98 and the virtual presence robot will autonomously move and/or rotate to a physical location as close as possible and directly facing the selected target).

As per Claim 20, Sutherland discloses:

a base (see at least [0038] base sub-assembly 18);

a mobility module coupled to the base and configured to move the interactive device in the first location (see at least [0070] Fig. 1, the base sub-assembly 18 is illustrated as an omni-wheeled device capable of moving in any direction. The base sub-assembly includes three omni-wheels 20 mounted at 60 degrees to each other forming a triangle);

a support structure extending from the base (see at least [0053] The mid-section 14 is also designed to be available in a variety of configurations);

an actuator comprising a light emitter (see at least [0100] In FIG. 8, an integrated head sub-assembly 72 containing a laser pointer apparatus 76);

an actuator control coupled to the support structure and the actuator (see at least [Fig. 20] Motor controller and Neck motor, and [0015] controller code running in the base which interpret and execute standardized motion and control commands for both the omni-wheeled base and application-specific motors); wherein

the actuator is coupled to, and independently moveable relative to, the support structure by the actuator control (see at least [0052] the most basic of which would not include the tilt mechanism 24, and [0059] the entire camera-dome apparatus on a swinging and possibly pivoting horizontal axle such that when the robot is moving up or down wheelchair ramps, the horizon would remain level);

the mobility module and the actuator control are communicatively coupled to the user device and configured to receive the user input from the user device (see at least [0120] FIG. 20 illustrates the communication paths between a remote PC or tablet via the Internet WiFi, 3G or 4G network to the robot. It also illustrates the communication within the robotic structure to the various motor drives, the various ultrasonic sensors, the panoramic camera and processing provided within the robot);

the mobility module is configured to move the interactive device based on the user input (see at least [0043] In some cases, a more complex tilt mechanism may be deployed to enable the head to pivot from side to side or extend outwards to project over a bed or wheelchair);

the actuator control is configured to actuate the actuator based on the user input (see at least Fig. 20, Fig. 21, and [0015] core application code running on the head device responding to both remote communications from the user and local navigation priorities and robotic systems health reporting demands).

Sutherland does not disclose that the light emitter is configured to output a light and to automatically change the outputted light between a collimated configuration and a diffuse configuration.

Jurik teaches the light emitter is configured to output a light and to automatically change the outputted light between a collimated configuration and a diffuse configuration (see at least [0042] Each LED may have a collimating optical system 58 which serves to collimate and direct the light beam through the dichroic filters 152 and 154, [0042] Focusing and harmonizing optics 60 may include optical elements selected from, but not restricted to, optical diffuser, holographic diffuser, non-Gaussian diffuser, … or other optical means of homogenizing or mixing light as is well known in the art, and [0049] Luminaires with automated and remotely controllable functionality are well known in the entertainment and architectural lighting markets. Such products are commonly used in theatres, television studios, concerts, theme parks, nightclubs and other venues. … Many products provide control over other parameters such as the intensity, color, focus, beam size, beam shape and beam pattern).

Thus, Sutherland discloses a telepresence interactive device, a user device, and an actuator, and Jurik teaches a light emitter that can provide collimated and diffused light. As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Sutherland with the light emitter taught by Jurik, with a reasonable expectation of success, to have a system where stray light and aberrations are well controlled ([0004]).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Sutherland, Jouppi, and Jurik as applied to Claim 1, and further in view of the Thorlabs website NPL ("Thorlabs").

As per Claim 3, Sutherland discloses a telepresence system wherein the actuator comprises a light emitter including a laser pointer. Sutherland does not disclose wherein the light emitter further comprises: an objective lens; and an image lens. Thorlabs teaches wherein the light emitter further comprises: an objective lens; and an image lens (Fig. 1.2). Thus, Sutherland discloses a telepresence interactive device wherein the actuator comprises a light emitter including a laser pointer, and Thorlabs teaches the use of a laser beam expander in production sets and laser shows. As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Sutherland with the beam expander as taught by Thorlabs, with a reasonable expectation of success, to output a collimated beam that is not inverted when compared with the input beam.
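For context on the Thorlabs combination: a two-lens (objective plus image lens) beam expander is governed by a single ratio of focal lengths, and a Galilean layout (negative objective lens) expands the beam without inverting it. A back-of-envelope sketch with invented numbers, not taken from the cited Thorlabs figure:

```python
# Standard relations for a two-lens beam expander: the expansion ratio
# is |f_image / f_objective|; the output beam is wider by that ratio
# and its divergence is smaller by the same ratio.
def beam_expander(f_objective_mm: float, f_image_mm: float,
                  d_in_mm: float, div_in_mrad: float):
    m = abs(f_image_mm / f_objective_mm)    # expansion ratio
    return m, d_in_mm * m, div_in_mrad / m  # ratio, output dia., divergence

m, d_out, div_out = beam_expander(-10.0, 50.0, 1.0, 1.2)
print(m, d_out, div_out)  # 5x expansion: 5 mm beam, 0.24 mrad divergence
```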
Claims 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Sutherland, Jouppi, and Jurik as applied to Claim 12, and further in view of Gyorgy Csaba et al., IEEE International Symposium on Logistics and Industrial Informatics, 2011 ("Csaba").

As per Claim 17, Sutherland does not disclose further comprising calibrating the interactive device by aligning a physical location of the actuator with an interactive control feature. Csaba teaches further comprising calibrating the interactive device by aligning a physical location of the actuator with an interactive control feature (see at least [Page 251, Col. 2] First step was the sensor calibration. It is first step was the measuring of the farthest point, when the laser light projected to the ground. The measured result have to match to the calculated result (9), and [Page 252, Col. 1] When the measurement ended, we knew the (t1, t2, ...tn) states, and we knew the distance between the vehicle and the object (dt1, dt2, ...dtn) and the laser position (p1, p2, ...pn). Knowledge of these measurements the distance from the object can be determined, depending on the location of the laser).

Thus, Sutherland discloses a telepresence interactive device wherein the actuator comprises a light emitter including a laser pointer, and Csaba teaches calibration of the mobile robot for operation in an unknown environment. As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Sutherland with the calibration as taught by Csaba, with a reasonable expectation of success, to plan the path of the robot with the help of building a local map.

As per Claim 18, Sutherland discloses further comprising generating a physical output in the remote environment based on the user command (see at least [0104] the laser pointer then projects point clouds on the actual objects or areas indicated by the tourists, with a dot representing each tourist's selection, and [0110] if the user pushes the iPhone™ away from their body by extending their arms out straight or walks forward with the iPhone™, the virtual presence robot will move forward, in the direction of the iPhone™ motion. If the user then spins left or right, the virtual presence robot will rotate similarly. The screen on the iPhone™ will show the resulting motion which, depending on network lag and the speed of the motion, will lag behind the user's motion).

Sutherland does not disclose calibrating the interactive device by detecting a location of the physical output within the local environment and adjusting a location of an interactive control feature to the location of the physical output. Csaba teaches further comprising calibrating the interactive device by detecting a location of the physical output within the local environment and adjusting a location of an interactive control feature to the location of the physical output (see at least [Page 251, Col. 2 - Page 252, Col. 1] The farthest point and the vehicle distance was 1900 mm, at current settings. We repeated the measurement, when the distance between the vehicle and the object was 1700 mm, 1500 mm, ...500 mm. During this time we noted down the laser position on the image. When the measurement ended, we knew the (t1, t2, ...tn) states, and we knew the distance between the vehicle and the object (dt1, dt2, ...dtn) and the laser position (p1, p2, ...pn). Knowledge of these measurements the distance from the object can be determined, depending on the location of the laser. The Fig 11 shows this. Where unit of pi is percent. We matched the regression line to the dti and pi point. And we got a function, that shows the relation of distance and laser position (10)).

Thus, Sutherland discloses a telepresence interactive device wherein the actuator comprises a light emitter including a laser pointer and remotely generating a physical output (laser point cloud and motion), and Csaba teaches calibration of the mobile robot for operation in an unknown environment. As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Sutherland with the calibration as taught by Csaba, with a reasonable expectation of success, to plan the path of the robot with the help of building a local map.
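Csaba's calibration amounts to fitting a regression line that maps the laser dot's position in the image (in percent of frame) to the known object distance, then evaluating that line at run time. A minimal sketch; the sample measurements below are invented for illustration, not Csaba's actual data:

```python
# Fit distance = slope * laser_position + intercept from calibration
# measurements taken at known distances, per Csaba's procedure.
# Requires Python 3.10+ for statistics.linear_regression.
from statistics import linear_regression

distances_mm = [1900, 1700, 1500, 1300, 1100, 900, 700, 500]
laser_pos_pct = [12.0, 18.5, 26.0, 34.5, 44.0, 55.0, 68.0, 83.0]

slope, intercept = linear_regression(laser_pos_pct, distances_mm)

def distance_from_laser(pos_pct: float) -> float:
    """Estimate object distance from the laser dot's image position."""
    return slope * pos_pct + intercept

print(round(distance_from_laser(30.0)))  # interpolated distance in mm
```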
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASHUTOSH PANDE whose telephone number is (571) 272-6269. The examiner can normally be reached Monday-Friday, 9:00 AM-5:00 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Fadey Jabr, can be reached at (571) 272-1516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.P./Examiner, Art Unit 3668
/Fadey S. Jabr/Supervisory Patent Examiner, Art Unit 3668
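The reply-deadline boilerplate above reduces to date arithmetic: a three-month shortened statutory period, extendable month by month under 37 CFR 1.136(a), capped at six months from mailing. A simplified sketch (the two-month advisory-action interaction is not modeled; `python-dateutil` is a third-party package):

```python
# Simplified reply-deadline arithmetic for this final action.
# pip install python-dateutil
from datetime import date
from dateutil.relativedelta import relativedelta

mailed = date(2026, 2, 17)  # Final Rejection mailing date (see timeline)
statutory_max = mailed + relativedelta(months=6)  # hard six-month ceiling

for months in range(3, 7):  # 3-month SSP plus up to 3 months of extension
    due = min(mailed + relativedelta(months=months), statutory_max)
    extensions = months - 3
    print(f"reply by {due} with {extensions} month(s) of extension")
```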

Prosecution Timeline

Mar 21, 2024
Application Filed
Aug 21, 2025
Non-Final Rejection — §103
Nov 21, 2025
Response Filed
Feb 17, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12564136: MOWER, MOWING SYSTEM, AND DRIVE CONTROL METHOD (granted Mar 03, 2026; 2y 5m to grant)
Patent 12567328: CONTEXT-BASED IDENTIFICATION OF VEHICLE CONNECTIVITY (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get these applications past this examiner, based on the 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 71%
With Interview: 80% (+8.3%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
