Prosecution Insights
Last updated: April 19, 2026
Application No. 18/163,564

Automated Control Activation System with Machine Learning-Enabled Camera

Status: Non-Final OA (§101, §102, §103)
Filed: Feb 02, 2023
Examiner: CHOI, ALICIA M
Art Unit: 2117
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (275 granted / 349 resolved; +23.8% vs TC avg) — grants above average
Interview Lift: +29.2% for resolved cases with interview — strong
Typical Timeline: 2y 7m avg prosecution; 26 currently pending
Career History: 375 total applications across all art units

Statute-Specific Performance

§101: 16.8% (-23.2% vs TC avg)
§103: 39.7% (-0.3% vs TC avg)
§102: 20.2% (-19.8% vs TC avg)
§112: 17.3% (-22.7% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 349 resolved cases.

Office Action

Rejections: §101, §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending, of which claims 1, 9, and 13 are independent claims.

Information Disclosure Statement

The references cited in the information disclosure statement (IDS) submitted on February 2, 2023 have been considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 13-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter.

Claim 13 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because the broadest reasonable interpretation of the “…the computer program product comprising a computer-readable storage medium…” encompasses signals per se. The specification discloses that “Persistent storage 113 is any form of non-volatile storage for computers that is now known or to be developed in the future. The non-volatility of this storage means that the stored data is maintained regardless of whether power is being supplied to computer 101 and/or directly to persistent storage 113. Persistent storage 113 may be a read only memory (ROM), but typically at least a portion of the persistent storage allows writing of data, deletion of data, and re-writing of data. Some familiar forms of persistent storage include magnetic disks and solid-state storage devices. Operating system 122 may take several forms, such as various known proprietary operating systems or open-source Portable Operating System Interface-type operating systems that employ a kernel. The external system control activation code included in block 200 includes at least some of the computer code involved in performing the inventive methods.” (see Paragraph 0022 of the published Specification).

A claim whose BRI covers both statutory and non-statutory embodiments embraces subject matter that is not eligible for patent protection and therefore is directed to non-statutory subject matter. See MPEP 2106.03(II). Thus, the broadest reasonable interpretation of the “computer readable storage medium” in view of the specification encompasses non-statutory subject matter that is unpatentable under 35 USC 101. In view of the dependencies to a rejected base claim, claims 14-20 are also rejected.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1 and 9 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Caron L’ecuyer et al. (US Patent Publication No. 2020/0086493 A1) (“Caron L’ecuyer”).
Regarding independent claim 1, Caron L’ecuyer teaches:

A computer-implemented method for changing state of an external system control, the computer-implemented method comprising:

Caron L’ecuyer: Abstract (“A method for operating a vision guided robot arm system comprising a robot arm provided with an end effector at a distal end thereof, a display, an image sensor and a controller, the method comprising: receiving from the sensor image an initial image of an area comprising at least one object and displaying the initial image on the display; determining an object of interest amongst the at least one object and identifying the object of interest within the initial image; determining a potential action related to the object of interest and providing a user with an identification of the potential action; receiving a confirmation of the object of interest and the potential action from the user; and automatically moving the robot arm so as to position the end effector of the robot arm at a predefined position relative to the object of interest.”)

identifying, by an automated control activation system, utilizing a machine learning-enabled camera, an orientation of a control corresponding to an external system;

Caron L’ecuyer: Paragraph [0064] (“The robot arm 12 is a motorized mechanical arm comprising at least two links or arm body parts connected together by joints allowing rotational motion and/or translational displacement of the arm body parts. The distal end of a robot arm 12 is usually terminated by an end effector such as a gripper or a tool, designed to interact with the environment.”)

Caron L’ecuyer: Paragraph [0067] (“In a further embodiment, the vision guiding system 14 is integral with the robot arm 12.”)

Caron L’ecuyer: Paragraph [0068] (“As described above, the vision guiding system 14 comprises an image sensor device 16. The image sensor device 16 comprises at least one image sensor for imaging an area comprising an object with which the robot arm 12 will interact.”)

Caron L’ecuyer: Paragraph [0077] (“In one embodiment, the “granulometry” or “size grading” method based on deep learning and/or deep neural network may be used. This approach consists first in understanding the surroundings and the context of the user and then detecting the most probable objects that can be found. Using this approach, the vision sensor device first takes at least one image of the close surroundings of the user and the controller can then determine where the user is, e.g. if the user is in a living room, a kitchen, a bedroom, etc. Then, still using the deep learning approach, the controller will analyze the image(s) and detect and identify the objects that are usually found in such surroundings.”)

Caron L’ecuyer: Paragraph [0085] (“In one embodiment, a low computational cost and fast deep neural network based object tracking approach may be used to identify the offset between the target object and the position and orientation of the camera to make corrections.”)

moving, by the automated control activation system, utilizing the machine learning-enabled camera, an actuator head to align with the orientation of the control corresponding to the external system; and

Caron L’ecuyer: Paragraph [0072] (“The controller 18 is configured for controlling the configuration of the robot arm, i.e. controlling the relative position and orientation of the different arm body parts in order to position the end effector of the robot arm 12 at a desired position and/or orient the end effector according to a desired orientation.”)

Caron L’ecuyer: Paragraph [0074] (“In another embodiment, the controller 18 is configured for automatically controlling the robot arm 12. For example, the controller 18 may be configured for determining the target position and/or orientation of the end effector and control the robot arm to bring the end effector into the desired position and/or orientation. In this automated mode, a user may input a desired action to be performed and the controller 18 is configured for controlling the robot arm 12 to achieve the inputted action.”)

Caron L’ecuyer: Paragraph [0075] (“As described in further detail below, the controller 18 is further configured for displaying images or videos of an area of interest taken by the image sensor device 16 on the display 20. The area of interest comprises at least one object including a target object with which the end effector of the robot arm 12 is to interact. The controller 18 is further configured for automatically determining a target object within the area of interest and highlighting or identifying the determined object within the image displayed on the display.”)

[Based on the images or videos from the image sensor device, the control of the orientation or position of the end effector according to a desired/target position and/or orientation reads on “moving, by the automated control activation system, utilizing the machine learning-enabled camera, an actuator head to align with the orientation of the control corresponding to the external system”.]

changing, by the automated control activation system, utilizing the actuator head, a current state of the control corresponding to the external system to a user-desired state.

Caron L’ecuyer: Paragraph [0003] (“Assistive robotic or robot arms have been introduced in the recent years inter alia to help handicapped persons having upper body limitations in accomplishing everyday tasks such as opening a door, drinking water, handling the television's remote controller or simply pushing elevator buttons.”)

Caron L’ecuyer: Paragraph [0113] (“In one embodiment, once the end effector of the robot arm 12 has been positioned near the target object, the action may be performed automatically by the vision guiding system 14. For example, when the target object is a glass, the vision guiding system 14 controls the robot arm 12 for the end effector to grasp the glass.”)

Caron L’ecuyer: Paragraph [0117] (“In an embodiment in which the vision guiding system operates in an automated mode for controlling the robot arm 12, the determined and confirmed action may be followed by at least another action. For example, if the target object is a bottle and the associated action is grabbing the bottle with the gripper of the robot arm 12, the vision guiding system 14 may autonomously control the robot arm 12 to bring the bottle close to the user's lips so that the user may drink. In another example, the robot arm 12 is controlled so as to pull or push the door handle to open the door after the door handle was turned. In a further example, the robot arm 12 retracts to a compact position after a button of an elevator was pushed.”)
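For concreteness, the claim-1 method that the examiner maps above reduces to a three-step perceive-align-actuate loop: locate the control and its orientation with the ML-enabled camera, servo the actuator head into alignment, then drive the control to the desired state. The following is a minimal sketch of that loop, under stated assumptions only; the camera, detector, and arm interfaces and every function name are hypothetical stand-ins, not the applicant's code and not anything disclosed in Caron L'ecuyer.

# Hypothetical sketch of the claim-1 loop: identify a control's orientation with an
# ML-enabled camera, align an actuator head with it, then change the control's state.
# All classes, methods, and objects below are illustrative, not any party's real API.
from dataclasses import dataclass

@dataclass
class ControlPose:
    x: float    # position of the control in the camera frame (meters)
    y: float
    z: float
    yaw: float  # in-plane orientation of the control face (radians)

def identify_control(camera, detector) -> ControlPose:
    """Step 1: run the vision model on a camera frame to locate the control."""
    frame = camera.capture()
    detection = detector.predict(frame)  # e.g., a deep-network object detector
    return ControlPose(*detection.position, detection.yaw)

def align_actuator(arm, pose: ControlPose, tolerance: float = 0.005) -> None:
    """Step 2: servo the actuator head until it is aligned with the control."""
    while arm.offset_from(pose) > tolerance:  # closed-loop visual servoing
        arm.step_toward(pose)

def change_state(arm, desired_state: str) -> None:
    """Step 3: actuate the control (press, flip, turn) into the desired state."""
    arm.actuate(desired_state)

def activate(camera, detector, arm, desired_state: str) -> None:
    pose = identify_control(camera, detector)
    align_actuator(arm, pose)
    change_state(arm, desired_state)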
Regarding independent claim 9, Caron L’ecuyer teaches:

An automated control activation system for changing state of an external system control, the automated control activation system comprising:

Caron L’ecuyer: Abstract (“A method for operating a vision guided robot arm system comprising a robot arm provided with an end effector at a distal end thereof, a display, an image sensor and a controller, the method comprising: receiving from the sensor image an initial image of an area comprising at least one object and displaying the initial image on the display; determining an object of interest amongst the at least one object and identifying the object of interest within the initial image; determining a potential action related to the object of interest and providing a user with an identification of the potential action; receiving a confirmation of the object of interest and the potential action from the user; and automatically moving the robot arm so as to position the end effector of the robot arm at a predefined position relative to the object of interest.”)

a communication fabric; a storage device connected to the communication fabric, wherein the storage device stores program instructions; and a processor connected to the communication fabric, wherein the processor executes the program instructions to:

Caron L’ecuyer: Paragraph [0095] and FIG. 1 (“In one embodiment, the controller 18 comprises a processing unit, a communication unit for receiving/transmitting data, and a memory having stored thereon statements and instructions to be executed by the processor.”)

identify, utilizing a machine learning-enabled camera, an orientation of a control corresponding to an external system;

Caron L’ecuyer: Paragraph [0064] (“The robot arm 12 is a motorized mechanical arm comprising at least two links or arm body parts connected together by joints allowing rotational motion and/or translational displacement of the arm body parts. The distal end of a robot arm 12 is usually terminated by an end effector such as a gripper or a tool, designed to interact with the environment.”)

Caron L’ecuyer: Paragraph [0067] (“In a further embodiment, the vision guiding system 14 is integral with the robot arm 12.”)

Caron L’ecuyer: Paragraph [0068] (“As described above, the vision guiding system 14 comprises an image sensor device 16. The image sensor device 16 comprises at least one image sensor for imaging an area comprising an object with which the robot arm 12 will interact.”)

Caron L’ecuyer: Paragraph [0077] (“In one embodiment, the “granulometry” or “size grading” method based on deep learning and/or deep neural network may be used. This approach consists first in understanding the surroundings and the context of the user and then detecting the most probable objects that can be found. Using this approach, the vision sensor device first takes at least one image of the close surroundings of the user and the controller can then determine where the user is, e.g. if the user is in a living room, a kitchen, a bedroom, etc. Then, still using the deep learning approach, the controller will analyze the image(s) and detect and identify the objects that are usually found in such surroundings.”)

Caron L’ecuyer: Paragraph [0085] (“In one embodiment, a low computational cost and fast deep neural network based object tracking approach may be used to identify the offset between the target object and the position and orientation of the camera to make corrections.”)

move, utilizing the machine learning-enabled camera, an actuator head to align with the orientation of the control corresponding to the external system; and

Caron L’ecuyer: Paragraph [0072] (“The controller 18 is configured for controlling the configuration of the robot arm, i.e. controlling the relative position and orientation of the different arm body parts in order to position the end effector of the robot arm 12 at a desired position and/or orient the end effector according to a desired orientation.”)

Caron L’ecuyer: Paragraph [0074] (“In another embodiment, the controller 18 is configured for automatically controlling the robot arm 12. For example, the controller 18 may be configured for determining the target position and/or orientation of the end effector and control the robot arm to bring the end effector into the desired position and/or orientation. In this automated mode, a user may input a desired action to be performed and the controller 18 is configured for controlling the robot arm 12 to achieve the inputted action.”)

Caron L’ecuyer: Paragraph [0075] (“As described in further detail below, the controller 18 is further configured for displaying images or videos of an area of interest taken by the image sensor device 16 on the display 20. The area of interest comprises at least one object including a target object with which the end effector of the robot arm 12 is to interact. The controller 18 is further configured for automatically determining a target object within the area of interest and highlighting or identifying the determined object within the image displayed on the display.”)

[Based on the images or videos from the image sensor device, the control of the orientation or position of the end effector according to a desired/target position and/or orientation reads on “move, utilizing the machine learning-enabled camera, an actuator head to align with the orientation of the control corresponding to the external system”.]

change, utilizing the actuator head, a current state of the control corresponding to the external system to a user-desired state.

Caron L’ecuyer: Paragraph [0003] (“Assistive robotic or robot arms have been introduced in the recent years inter alia to help handicapped persons having upper body limitations in accomplishing everyday tasks such as opening a door, drinking water, handling the television's remote controller or simply pushing elevator buttons.”)

Caron L’ecuyer: Paragraph [0113] (“In one embodiment, once the end effector of the robot arm 12 has been positioned near the target object, the action may be performed automatically by the vision guiding system 14. For example, when the target object is a glass, the vision guiding system 14 controls the robot arm 12 for the end effector to grasp the glass.”)

Caron L’ecuyer: Paragraph [0117] (“In an embodiment in which the vision guiding system operates in an automated mode for controlling the robot arm 12, the determined and confirmed action may be followed by at least another action. For example, if the target object is a bottle and the associated action is grabbing the bottle with the gripper of the robot arm 12, the vision guiding system 14 may autonomously control the robot arm 12 to bring the bottle close to the user's lips so that the user may drink. In another example, the robot arm 12 is controlled so as to pull or push the door handle to open the door after the door handle was turned. In a further example, the robot arm 12 retracts to a compact position after a button of an elevator was pushed.”)

It is noted that any citations to specific paragraphs or figures in the prior art references and any interpretation of the reference should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See MPEP 2123.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-4 and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Caron L’ecuyer in view of Pittman (US Patent Publication No. 2019/0371196 A1) (“Pittman”).

Regarding claim 2, Caron L’ecuyer teaches all the claimed features of claim 1, from which claim 2 depends. Caron L’ecuyer further teaches:

The computer-implemented method of claim 1, further comprising: receiving, by the automated control activation system, a command signal to change the current state of the control corresponding to the external system to the user-desired state from a user device via a network;

Caron L’ecuyer: Paragraph [0073] (“In one embodiment, the controller 18 is adapted to receive commands from the user interface 22 and control the robot arm 12 according to the received commands. The user interface 22 may be any adequate device allowing a user to input commands such as a mechanical interface, an audio interface, a visual interface, etc. For example, the user interface 22 may be a touch screen interface, a joystick pointer, a voice command interface, an eye movement interface, an electromyography interface, a brain-computer interface, or the like. For example, the commands inputted by the user may be indicative of an action to be performed or the motion to be followed by the end effector of the robot arm 12, such as move forward, move backward, move right, move left, etc. In this embodiment, the robot arm 12 is said to be operated in a teleoperation mode.”)

Caron L’ecuyer: Paragraph [0104] (“At step 58, a potential action to be executed and associated with the target object is determined and provided to the user. As described above, any adequate method for providing the user with the associated action may be used.”)

Caron L’ecuyer: Paragraph [0105] (“If more than one objects have been determined from the image, then a corresponding action may be determined for each object and provided to the user at step 58. For example, for an object being a glass, the associated action may be grabbing the glass. For an object being a door handle, the associated action may be turning the door handle. For an object being a button, the associated action may be pushing the button, etc.”)

Caron L’ecuyer: Paragraph [0107] (“… the user selects one of the objects as being the target object and confirms that the associated action would be performed. In this case, the identification of the target object and the confirmation of its associated action are received at step 60.”)

Although Caron L’ecuyer teaches “capturing, by the automated control activation system, an image of the control corresponding to the external system utilizing the machine learning-enabled camera of the automated control activation system…” (See Paragraphs [0064], [0067], [0068], [0072], [0074], [0075], [0077], and [0085]), Caron L’ecuyer does not expressly teach “determining, by the automated control activation system, whether the user device is an extended reality-enabled user device; and … in response to the automated control activation system determining that the user device is not an extended reality-enabled user device”. However, Pittman teaches a heavy equipment simulation system and methods of operating the same. Pittman teaches:

determining, by the automated control activation system, whether the user device is an extended reality-enabled user device; and

Pittman: Paragraph [0025] (“In addition, the VR HMD has the ability to track the user's head movement and position in 3D space, and this also includes a way of determining if the user is actually wearing the HMD or not.”)

capturing, by the automated control activation system, an image of the control corresponding to the external system utilizing the machine learning-enabled camera of the automated control activation system in response to the automated control activation system determining that the user device is not an extended reality-enabled user device.

Pittman: Paragraph [0025] (“If for any reason, at any time, the user decides to take off the HMD, the standard camera and functionality is restored,…”)

Pittman: Paragraph [0051] (“If the user removes the headset at any time during the simulation the VR camera will be disabled and the standard cameras will be enabled, and the simulation will continue, but the rendering will switch from the VR headset to the standard screens.”)

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Caron L’ecuyer and Pittman before them, to determine, by the automated control activation system, whether the user device is an extended reality-enabled user device, and to capture, by the automated control activation system, an image of the control corresponding to the external system utilizing the machine learning-enabled camera of the automated control activation system in response to the automated control activation system determining that the user device is not an extended reality-enabled user device, because the references are in the same field of endeavor as the claimed invention and they are focused on control of equipment. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make this modification because it would enable the user of Caron L’ecuyer either to use voice activation as taught therein or to perform the image capturing using VR technology. (Pittman, Paragraphs [0008] and [0025].)
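The determining/capturing/retrieving limitations of claims 2-4 amount to a capability check with two possible image sources. A minimal sketch under the same caveats as above (every name is a hypothetical stand-in; neither Pittman nor the application discloses this code):

# Hypothetical sketch of the claimed branch: on receiving a command, check whether
# the user device is extended-reality (XR) enabled; if so, retrieve the control's
# image from that device over the network, otherwise capture it with the system's
# own ML-enabled camera. Names are illustrative, not any party's actual API.
def image_of_control(user_device, network, onboard_camera):
    if user_device.reports_capability("xr"):        # XR-enabled user device?
        return network.fetch_image(user_device.id)  # claim 3: retrieve from device
    return onboard_camera.capture()                 # claim 2: capture locally

def handle_command(command, user_device, network, onboard_camera, detector, arm):
    image = image_of_control(user_device, network, onboard_camera)
    pose = detector.predict(image)                  # claim 4: analyze the image to
    arm.align_and_actuate(pose, command.desired_state)  # find the control's orientation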
Regarding claim 3, Caron L’ecuyer and Pittman teach all the claimed features of claim 2, from which claim 3 depends. Pittman further teaches:

The computer-implemented method of claim 2, further comprising: retrieving, by the automated control activation system, the image of the control corresponding to the external system from the extended reality-enabled user device via the network in response to the automated control activation system determining that the user device is an extended reality-enabled user device.

Pittman: Paragraph [0032] (“The heavy equipment simulation system 10 also includes the VR headset unit 20 that is connected to the computer processing device 54 over USB and to the graphics card 62 over HDMI. A specific example of a suitable VR headset unit 20 is the Oculus Rift™ but it is to be understood that the teachings herein can be modified for other presently known or future Head Mounted Displays. The VR headset unit 20 also interacts with the positional tracking sensors 66 over infrared using outside-in tracking to send specific locations of the HMD to the software 48. The heavy equipment simulation system 10 contains a hand tracking component 46 which is mounted on the VR headset unit 20 but communicates directly with the computer processing device 54 over USB.”)

Pittman: Paragraph [0036] (“The VR camera viewpoint is used to render images of the virtual environment on the on the VR head mounted display 42.”)

Pittman: Paragraph [0010] (“…a virtual reality (VR) headset unit adapted to be worn by the user, and a control system. The method includes the control system performing the steps of generating the virtual environment including the simulated heavy equipment vehicle, operating the simulated heavy equipment vehicle within the virtual environment based on the first input signals received from the heavy equipment vehicle control assembly, operating the motion actuation system to adjust the orientation of the support frame with respect to the ground surface based on movements of the simulated heavy equipment vehicle within the virtual environment, rendering the virtual environment including the simulated heavy equipment vehicle on the VR head mounted display, and rendering the virtual environment including the simulated heavy equipment vehicle on the display device assembly.”)

The motivation to combine Caron L’ecuyer and Pittman as provided in claim 2 is incorporated herein.

Regarding claim 4, Caron L’ecuyer and Pittman teach all the claimed features of claim 3, from which claim 4 depends. Pittman further teaches:

The computer-implemented method of claim 3, further comprising: performing, by the automated control activation system, utilizing the machine learning-enabled camera, an analysis of the image of the control corresponding to the external system to identify the orientation of the control.

Pittman: Paragraph [0010] [As described in claim 3.]

Pittman: Paragraph [0009] (“The VR head mounted display is configured to display the virtual environment including the simulated heavy equipment vehicle. The position sensor is configured to detect an orientation of the VR headset unit and generate second input signals based on the detected orientation of the VR headset unit. The control system includes a processor coupled to a memory device. The processor is programmed to generate the virtual environment including the simulated heavy equipment vehicle, operate the simulated heavy equipment vehicle within the virtual environment based on the first input signals received from the heavy equipment vehicle control assembly, operate the motion actuation system to adjust the orientation of the support frame with respect to the ground surface based on movements of the simulated heavy equipment vehicle within the virtual environment, render the virtual environment including the simulated heavy equipment vehicle on the VR head mounted display, and render the virtual environment including the simulated heavy equipment vehicle on the display device assembly.”)

Pittman: Paragraph [0029] (“The motion actuation system 16 includes a plurality of actuators 36 that are coupled to the support frame 12 for adjusting an orientation of the support frame 12 with respect to a ground surface. The motion actuation system 16 may be operated to adjust the orientation of the support frame 12 with respect to the ground surface based on movements of the simulated heavy equipment vehicle 26 within the virtual environment 24.”)

The motivation to combine Caron L’ecuyer and Pittman as provided in claim 2 is incorporated herein.

Regarding claim 10, Caron L’ecuyer teaches all the claimed features of claim 9, from which claim 10 depends. Caron L’ecuyer further teaches:

The automated control activation system of claim 9, wherein the processor further executes the program instructions to: receive a command signal to change the current state of the control corresponding to the external system to the user-desired state from a user device via a network;

Caron L’ecuyer: Paragraph [0073] (“In one embodiment, the controller 18 is adapted to receive commands from the user interface 22 and control the robot arm 12 according to the received commands. The user interface 22 may be any adequate device allowing a user to input commands such as a mechanical interface, an audio interface, a visual interface, etc. For example, the user interface 22 may be a touch screen interface, a joystick pointer, a voice command interface, an eye movement interface, an electromyography interface, a brain-computer interface, or the like. For example, the commands inputted by the user may be indicative of an action to be performed or the motion to be followed by the end effector of the robot arm 12, such as move forward, move backward, move right, move left, etc. In this embodiment, the robot arm 12 is said to be operated in a teleoperation mode.”)

Caron L’ecuyer: Paragraph [0104] (“At step 58, a potential action to be executed and associated with the target object is determined and provided to the user. As described above, any adequate method for providing the user with the associated action may be used.”)

Caron L’ecuyer: Paragraph [0105] (“If more than one objects have been determined from the image, then a corresponding action may be determined for each object and provided to the user at step 58. For example, for an object being a glass, the associated action may be grabbing the glass. For an object being a door handle, the associated action may be turning the door handle. For an object being a button, the associated action may be pushing the button, etc.”)

Caron L’ecuyer: Paragraph [0107] (“… the user selects one of the objects as being the target object and confirms that the associated action would be performed. In this case, the identification of the target object and the confirmation of its associated action are received at step 60.”)

Although Caron L’ecuyer teaches “capture an image of the control corresponding to the external system utilizing the machine learning-enabled camera of the automated control activation system …” (See Paragraphs [0064], [0067], [0068], [0072], [0074], [0075], [0077], and [0085]), Caron L’ecuyer does not expressly teach “determine whether the user device is an extended reality-enabled user device; and … in response to determining that the user device is not an extended reality-enabled user device”. However, Pittman teaches a heavy equipment simulation system and methods of operating the same. Pittman teaches:

determine whether the user device is an extended reality-enabled user device; and

Pittman: Paragraph [0025] (“In addition, the VR HMD has the ability to track the user's head movement and position in 3D space, and this also includes a way of determining if the user is actually wearing the HMD or not.”)

capture an image of the control corresponding to the external system utilizing the machine learning-enabled camera of the automated control activation system in response to determining that the user device is not an extended reality-enabled user device.

Pittman: Paragraph [0025] (“If for any reason, at any time, the user decides to take off the HMD, the standard camera and functionality is restored,…”)

Pittman: Paragraph [0051] (“If the user removes the headset at any time during the simulation the VR camera will be disabled and the standard cameras will be enabled, and the simulation will continue, but the rendering will switch from the VR headset to the standard screens.”)

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Caron L’ecuyer and Pittman before them, to determine whether the user device is an extended reality-enabled user device and to capture an image of the control corresponding to the external system utilizing the machine learning-enabled camera of the automated control activation system in response to determining that the user device is not an extended reality-enabled user device, because the references are in the same field of endeavor as the claimed invention and they are focused on control of equipment. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make this modification because it would enable the user of Caron L’ecuyer either to use voice activation as taught therein or to perform the image capturing using VR technology. (Pittman, Paragraphs [0008] and [0025].)

Regarding claim 11, Caron L’ecuyer and Pittman teach all the claimed features of claim 10, from which claim 11 depends. Pittman further teaches:

The automated control activation system of claim 10, wherein the processor further executes the program instructions to: retrieve the image of the control corresponding to the external system from the extended reality-enabled user device via the network in response to determining that the user device is the extended reality-enabled user device.

Pittman: Paragraph [0032] (“The heavy equipment simulation system 10 also includes the VR headset unit 20 that is connected to the computer processing device 54 over USB and to the graphics card 62 over HDMI.
A specific example of a suitable VR headset unit 20 is the Oculus Rift™ but it is to be understood that the teachings herein can be modified for other presently known or future Head Mounted Displays. The VR headset unit 20 also interacts with the positional tracking sensors 66 over infrared using outside-in tracking to send specific locations of the HMD to the software 48. The heavy equipment simulation system 10 contains a hand tracking component 46 which is mounted on the VR headset unit 20 but communicates directly with the computer processing device 54 over USB.”)

Pittman: Paragraph [0036] (“The VR camera viewpoint is used to render images of the virtual environment on the on the VR head mounted display 42.”)

Pittman: Paragraph [0010] (“…a virtual reality (VR) headset unit adapted to be worn by the user, and a control system. The method includes the control system performing the steps of generating the virtual environment including the simulated heavy equipment vehicle, operating the simulated heavy equipment vehicle within the virtual environment based on the first input signals received from the heavy equipment vehicle control assembly, operating the motion actuation system to adjust the orientation of the support frame with respect to the ground surface based on movements of the simulated heavy equipment vehicle within the virtual environment, rendering the virtual environment including the simulated heavy equipment vehicle on the VR head mounted display, and rendering the virtual environment including the simulated heavy equipment vehicle on the display device assembly.”)

The motivation to combine Caron L’ecuyer and Pittman as provided in claim 10 is incorporated herein.

Regarding claim 12, Caron L’ecuyer and Pittman teach all the claimed features of claim 11, from which claim 12 depends. Pittman further teaches:

The automated control activation system of claim 11, wherein the processor further executes the program instructions to: perform, utilizing the machine learning-enabled camera, an analysis of the image of the control corresponding to the external system to identify the orientation of the control.

Pittman: Paragraph [0010] [As described in claim 11.]

Pittman: Paragraph [0009] (“The VR head mounted display is configured to display the virtual environment including the simulated heavy equipment vehicle. The position sensor is configured to detect an orientation of the VR headset unit and generate second input signals based on the detected orientation of the VR headset unit. The control system includes a processor coupled to a memory device. The processor is programmed to generate the virtual environment including the simulated heavy equipment vehicle, operate the simulated heavy equipment vehicle within the virtual environment based on the first input signals received from the heavy equipment vehicle control assembly, operate the motion actuation system to adjust the orientation of the support frame with respect to the ground surface based on movements of the simulated heavy equipment vehicle within the virtual environment, render the virtual environment including the simulated heavy equipment vehicle on the VR head mounted display, and render the virtual environment including the simulated heavy equipment vehicle on the display device assembly.”)

Pittman: Paragraph [0029] (“The motion actuation system 16 includes a plurality of actuators 36 that are coupled to the support frame 12 for adjusting an orientation of the support frame 12 with respect to a ground surface. The motion actuation system 16 may be operated to adjust the orientation of the support frame 12 with respect to the ground surface based on movements of the simulated heavy equipment vehicle 26 within the virtual environment 24.”)

The motivation to combine Caron L’ecuyer and Pittman as provided in claim 10 is incorporated herein.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Caron L’ecuyer in view of Abrams (US Patent No. 12,808,715 B1) (“Abrams”).

Regarding independent claim 13, Caron L’ecuyer teaches:

… identifying, by the automated control activation system, utilizing a machine learning-enabled camera, an orientation of a control corresponding to an external system;

Caron L’ecuyer: Paragraph [0064] (“The robot arm 12 is a motorized mechanical arm comprising at least two links or arm body parts connected together by joints allowing rotational motion and/or translational displacement of the arm body parts. The distal end of a robot arm 12 is usually terminated by an end effector such as a gripper or a tool, designed to interact with the environment.”)

Caron L’ecuyer: Paragraph [0067] (“In a further embodiment, the vision guiding system 14 is integral with the robot arm 12.”)

Caron L’ecuyer: Paragraph [0068] (“As described above, the vision guiding system 14 comprises an image sensor device 16. The image sensor device 16 comprises at least one image sensor for imaging an area comprising an object with which the robot arm 12 will interact.”)

Caron L’ecuyer: Paragraph [0077] (“In one embodiment, the “granulometry” or “size grading” method based on deep learning and/or deep neural network may be used. This approach consists first in understanding the surroundings and the context of the user and then detecting the most probable objects that can be found. Using this approach, the vision sensor device first takes at least one image of the close surroundings of the user and the controller can then determine where the user is, e.g. if the user is in a living room, a kitchen, a bedroom, etc. Then, still using the deep learning approach, the controller will analyze the image(s) and detect and identify the objects that are usually found in such surroundings.”)

Caron L’ecuyer: Paragraph [0085] (“In one embodiment, a low computational cost and fast deep neural network based object tracking approach may be used to identify the offset between the target object and the position and orientation of the camera to make corrections.”)

moving, by the automated control activation system, utilizing the machine learning-enabled camera, an actuator head to align with the orientation of the control corresponding to the external system; and

Caron L’ecuyer: Paragraph [0072] (“The controller 18 is configured for controlling the configuration of the robot arm, i.e. controlling the relative position and orientation of the different arm body parts in order to position the end effector of the robot arm 12 at a desired position and/or orient the end effector according to a desired orientation.”)

Caron L’ecuyer: Paragraph [0074] (“In another embodiment, the controller 18 is configured for automatically controlling the robot arm 12. For example, the controller 18 may be configured for determining the target position and/or orientation of the end effector and control the robot arm to bring the end effector into the desired position and/or orientation. In this automated mode, a user may input a desired action to be performed and the controller 18 is configured for controlling the robot arm 12 to achieve the inputted action.”)

Caron L’ecuyer: Paragraph [0075] (“As described in further detail below, the controller 18 is further configured for displaying images or videos of an area of interest taken by the image sensor device 16 on the display 20. The area of interest comprises at least one object including a target object with which the end effector of the robot arm 12 is to interact. The controller 18 is further configured for automatically determining a target object within the area of interest and highlighting or identifying the determined object within the image displayed on the display.”)

[Based on the images or videos from the image sensor device, the control of the orientation or position of the end effector according to a desired/target position and/or orientation reads on “moving, by the automated control activation system, utilizing the machine learning-enabled camera, an actuator head to align with the orientation of the control corresponding to the external system”.]

changing, by the automated control activation system, utilizing the actuator head, a current state of the control corresponding to the external system to a user-desired state.

Caron L’ecuyer: Paragraph [0003] (“Assistive robotic or robot arms have been introduced in the recent years inter alia to help handicapped persons having upper body limitations in accomplishing everyday tasks such as opening a door, drinking water, handling the television's remote controller or simply pushing elevator buttons.”)

Caron L’ecuyer: Paragraph [0113] (“In one embodiment, once the end effector of the robot arm 12 has been positioned near the target object, the action may be performed automatically by the vision guiding system 14. For example, when the target object is a glass, the vision guiding system 14 controls the robot arm 12 for the end effector to grasp the glass.”)

Caron L’ecuyer: Paragraph [0117] (“In an embodiment in which the vision guiding system operates in an automated mode for controlling the robot arm 12, the determined and confirmed action may be followed by at least another action. For example, if the target object is a bottle and the associated action is grabbing the bottle with the gripper of the robot arm 12, the vision guiding system 14 may autonomously control the robot arm 12 to bring the bottle close to the user's lips so that the user may drink. In another example, the robot arm 12 is controlled so as to pull or push the door handle to open the door after the door handle was turned. In a further example, the robot arm 12 retracts to a compact position after a button of an elevator was pushed.”)

Caron L’ecuyer does not expressly teach a computer program product for changing state of an external system control, the computer program product comprising a computer-readable storage medium having program instructions embodied therewith. However, Abrams describes allowing robots to perform tasks for users.
Abrams teaches:

A computer program product for changing state of an external system control, the computer program product comprising a computer-readable storage medium having program instructions embodied therewith, the program instructions executable by a computer of an automated control activation system to cause the automated control activation system to perform a method of:

Abrams: Column 17, lines 28-60 (“Aspects of the systems and methods provided herein, such as the computer system 701, may be embodied in programming. Various aspects of the technology may be thought of as “products” or “articles of manufacture” typically in the form of machine (or processor) executable code and/or associated data that is carried on or embodied in a type of machine readable medium. Machine-executable code can be stored on an electronic storage unit, such memory (e.g., read-only memory, random-access memory, flash memory) or a hard disk. “Storage” type media can include any or all of the tangible memory of the computers, processors or the like, or associated modules thereof, such as various semiconductor memories, tape drives, disk drives and the like, which may provide non-transitory storage at any time for the software programming. All or portions of the software may at times be communicated through the Internet or various other telecommunication networks. Such communications, for example, may enable loading of the software from one computer or processor into another, for example, from a management server or host computer into the computer platform of an application server. Thus, another type of media that may bear the software elements includes optical, electrical and electromagnetic waves, such as used across physical interfaces between local devices, through wired and optical landline networks and over various air-links. The physical elements that carry such waves, such as wired or wireless links, optical links or the like, also may be considered as media bearing the software. As used herein, unless restricted to non-transitory, tangible “storage” media, terms such as computer or machine “readable medium” refer to any medium that participates in providing instructions to a processor for execution.”)

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Caron L’ecuyer and Abrams before them, to include a computer program product for changing state of an external system control, the computer program product comprising a computer-readable storage medium having program instructions embodied therewith, because the references are in the same field of endeavor as the claimed invention and they are focused on control of equipment. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make this modification because it would enable the user of Caron L’ecuyer to implement the configuration described therein into an electro-mechanical machine that may be controlled by a computer program or electronic circuitry and a non-transitory computer-readable storage media encoded with a computer program including instructions executable by at least one processor. (Abrams, Column 1, lines 31-44, and Claim 1.)

Claims 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Caron L’ecuyer in view of Abrams and further in view of Pittman.

Regarding claim 14, Caron L’ecuyer and Abrams teach all the claimed features of claim 13, from which claim 14 depends. Caron L’ecuyer further teaches:

The computer program product of claim 13, further comprising: receiving, by the automated control activation system, a command signal to change the current state of the control corresponding to the external system to the user-desired state from a user device via a network;

Caron L’ecuyer: Paragraph [0073] (“In one embodiment, the controller 18 is adapted to receive commands from the user interface 22 and control the robot arm 12 according to the received commands. The user interface 22 may be any adequate device allowing a user to input commands such as a mechanical interface, an audio interface, a visual interface, etc. For example, the user interface 22 may be a touch screen interface, a joystick pointer, a voice command interface, an eye movement interface, an electromyography interface, a brain-computer interface, or the like. For example, the commands inputted by the user may be indicative of an action to be performed or the motion to be followed by the end effector of the robot arm 12, such as move forward, move backward, move right, move left, etc. In this embodiment, the robot arm 12 is said to be operated in a teleoperation mode.”)

Caron L’ecuyer: Paragraph [0104] (“At step 58, a potential action to be executed and associated with the target object is determined and provided to the user. As described above, any adequate method for providing the user with the associated action may be used.”)

Caron L’ecuyer: Paragraph [0105] (“If more than one objects have been determined from the image, then a corresponding action may be determined for each object and provided to the user at step 58. For example, for an object being a glass, the associated action may be grabbing the glass. For an object being a door handle, the associated action may be turning the door handle. For an object being a button, the associated action may be pushing the button, etc.”)

Caron L’ecuyer: Paragraph [0107] (“… the user selects one of the objects as being the target object and confirms that the associated action would be performed. In this case, the identification of the target object and the confirmation of its associated action are received at step 60.”)

Although Caron L’ecuyer teaches “capturing, by the automated control activation system, an image of the control corresponding to the external system utilizing the machine learning-enabled camera of the automated control activation system…” (See Paragraphs [0064], [0067], [0068], [0072], [0074], [0075], [0077], and [0085]), Caron L’ecuyer and Abrams do not expressly teach “determining, by the automated control activation system, whether the user device is an extended reality-enabled user device; and … in response to the automated control activation system determining that the user device is not an extended reality-enabled user device”. However, Pittman teaches a heavy equipment simulation system and methods of operating the same. Pittman teaches:

determining, by the automated control activation system, whether the user device is an extended reality-enabled user device; and

Pittman: Paragraph [0025] (“In addition, the VR HMD has the ability to track the user's head movement and position in 3D space, and this also includes a way of determining if the user is actually wearing the HMD or not.”)

capturing, by the automated control activation system, an image of the control corresponding to the external system utilizing the machine learning-enabled camera of the automated control activation system in response to the automated control activation system determining that the user device is not an extended reality-enabled user device.

Pittman: Paragraph [0025] (“If for any reason, at any time, the user decides to take off the HMD, the standard camera and functionality is restored,…”)

Pittman: Paragraph [0051] (“If the user removes the headset at any time during the simulation the VR camera will be disabled and the standard cameras will be enabled, and the simulation will continue, but the rendering will switch from the VR headset to the standard screens.”)

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Caron L’ecuyer, Abrams, and Pittman before them, to determine, by the automated control activation system, whether the user device is an extended reality-enabled user device, and to capture, by the automated control activation system, an image of the control corresponding to the external system utilizing the machine learning-enabled camera of the automated control activation system in response to the automated control activation system determining that the user device is not an extended reality-enabled user device, because the references are in the same field of endeavor as the claimed invention and they are focused on control of equipment. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make this modification because it would enable the user of Caron L’ecuyer and Abrams either to use voice activation as taught therein or to perform the image capturing using VR technology. (Pittman, Paragraphs [0008] and [0025].)

Regarding claim 15, Caron L’ecuyer, Abrams, and Pittman teach all the claimed features of claim 14, from which claim 15 depends. Pittman further teaches:

The computer program product of claim 14, further comprising: retrieving, by the automated control activation system, the image of the control corresponding to the external system from the extended reality-enabled user device via the network in response to the automated control activation system determining that the user device is an extended reality-enabled user device.

Pittman: Paragraph [0032] (“The heavy equipment simulation system 10 also includes the VR headset unit 20 that is connected to the computer processing device 54 over USB and to the graphics card 62 over HDMI. A specific example of a suitable VR headset unit 20 is the Oculus Rift™ but it is to be understood that the teachings herein can be modified for other presently known or future Head Mounted Displays. The VR headset unit 20 also interacts with the positional tracking sensors 66 over infrared using outside-in tracking to send specific locations of the HMD to the software 48. The heavy equipment simulation system 10 contains a hand tracking component 46 which is mounted on the VR headset unit 20 but communicates directly with the computer processing device 54 over USB.”)

Pittman: Paragraph [0036] (“The VR camera viewpoint is used to render images of the virtual environment on the on the VR head mounted display 42.”)

Pittman: Paragraph [0010] (“…a virtual reality (VR) headset unit adapted to be worn by the user, and a control system. The method includes the control system performing the steps of generating the virtual environment including the simulated heavy equipment vehicle, operating the simulated heavy equipment vehicle within the virtual environment based on the first input signals received from the heavy equipment vehicle control assembly, operating the motion actuation system to adjust the orientation of the support frame with respect to the ground surface based on movements of the simulated heavy equipment vehicle within the virtual environment, rendering the virtual environment including the simulated heavy equipment vehicle on the VR head mounted display, and rendering the virtual environment including the simulated heavy equipment vehicle on the display device assembly.”)

The motivation to combine Caron L’ecuyer, Abrams, and Pittman as provided in claim 14 is incorporated herein.

Regarding claim 16, Caron L’ecuyer, Abrams, and Pittman teach all the claimed features of claim 15, from which claim 16 depends. Pittman further teaches:

The computer program product of claim 15, further comprising: performing, by the automated control activation system, utilizing the machine learning-enabled camera, an analysis of the image of the control corresponding to the external system to identify the orientation of the control.

Pittman: Paragraph [0010] [As described in claim 3.]

Pittman: Paragraph [0009] (“The VR head mounted display is configured to display the virtual environment including the simulated heavy equipment vehicle. The position sensor is configured to detect an orientation of the VR headset unit and generate second input signals based on the detected orientation of the VR headset unit. The control system includes a processor coupled to a memory device. The processor is programmed to generate the virtual environment including the simulated heavy equipment vehicle, operate the simulated heavy equipment vehicle within the virtual environment based on the first input signals received from the heavy equipment vehicle control assembly, operate the motion actuation system to adjust the orientation of the support frame with respect to the ground surface based on movements of the simulated heavy equipment vehicle within the virtual environment, render the virtual environment including the simulated heavy equipment vehicle on the VR head mounted display, and render the virtual environment including the simulated heavy equipment vehicle on the display device assembly.”)

Pittman: Paragraph [0029] (“The motion actuation system 16 includes a plurality of actuators 36 that are coupled to the support frame 12 for adjusting an orientation of the support frame 12 with respect to a ground surface. The motion actuation system 16 may be operated to adjust the orientation of the support frame 12 with respect to the ground surface based on movements of the simulated heavy equipment vehicle 26 within the virtual environment 24.”)

The motivation to combine Caron L’ecuyer, Abrams, and Pittman as provided in claim 14 is incorporated herein.
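Claims 4, 12, and 16 each recite analyzing a captured image to identify the control's orientation. As one deliberately simplified illustration of such an analysis, the sketch below estimates a control's in-plane orientation from a single image, with classical OpenCV operations standing in for the learned model that the claims attribute to the ML-enabled camera; the function name and the assumption that the control dominates the frame are hypothetical, not from the application or the cited art.

# Hypothetical sketch of the image-analysis step recited in claims 4, 12, and 16:
# estimate the in-plane orientation of a control (e.g., a rocker switch) from one
# image. Classical OpenCV calls stand in for the learned model; the pipeline shape
# (segment, find the control, read off its orientation) is the point.
import cv2
import numpy as np

def control_orientation_deg(image_bgr: np.ndarray) -> float:
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError("no control candidate found in image")
    largest = max(contours, key=cv2.contourArea)      # assume control dominates view
    (_, _), (_, _), angle = cv2.minAreaRect(largest)  # rotated bounding-box angle
    return angle                                      # degrees, per OpenCV convention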
It is noted that any citations to specific paragraphs or figures in the prior art references, and any interpretation of the references, should not be considered limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See MPEP 2123.

Allowable Subject Matter

Claims 5-8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Provided that the non-statutory subject matter rejection under 35 USC 101 is overcome, claims 17-20 are also objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

As allowable subject matter has been indicated, applicant's reply must either comply with all formal requirements or specifically traverse each requirement not complied with. See 37 CFR 1.111(b) and MPEP § 707.07(a).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

US Patent Publication No. 2020/0055191 A1 to Nahum describes a robot system including an articulated robot (e.g., a SCARA robot) and a supplementary metrology position coordinates system. The articulated robot includes first and second arm portions, first and second rotary joints, a motion control system, and position sensors. The first arm portion is mounted to the first rotary joint at a proximal end of the first arm portion. The first rotary joint has a rotary axis aligned along a z-axis direction such that the first arm portion moves about the first rotary joint in an x-y plane that is perpendicular to the z axis. The second rotary joint is located at a distal end of the first arm portion and has its rotary axis nominally aligned along the z-axis direction. The second arm portion is mounted to the second rotary joint at a proximal end of the second arm portion and moves about the second rotary joint in an x-y plane that is nominally perpendicular to the z axis. The motion control system is configured to control an end tool position of an end tool (e.g., … a camera, etc., as part of an end tool configuration that is coupled proximate to a distal end of the second arm portion). The end tool position is controlled with a level of accuracy defined as a robot accuracy, based at least in part on sensing and controlling the angular positions of the first and second arm portions about the first and second rotary joints, respectively, using the position sensors (e.g., rotary encoders) included in the articulated robot.

US Patent Publication No. 2010/0014005 A1 to Yano et al. describes a remote control system including a television set that receives a television manipulation signal corresponding to a manipulation display on a television screen and that transmits encoded sound source data related to the manipulation signal; and a remote controller that transmits the manipulation signal to the television set and that decodes the sound source data received from the television set and outputs the decoded sound source data in the form of audio.
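The Nahum summary above reduces the end-tool position to the two joint angles read off the rotary encoders, which is the standard two-link planar (SCARA) forward-kinematics relationship. A hedged sketch of that relationship follows; the link lengths and all names are assumptions for illustration, not values from the reference.

```python
import math

# Two-link planar forward kinematics consistent with Nahum's description:
# both rotary axes along z, both arm portions moving in the x-y plane.
def end_tool_xy(theta1: float, theta2: float, l1: float = 0.4, l2: float = 0.3):
    """End-tool (x, y) from encoder joint angles (radians); lengths in meters."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

print(end_tool_xy(math.radians(30), math.radians(45)))
```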
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALICIA M. CHOI, whose telephone number is (571) 272-1473. The examiner can normally be reached Monday - Friday, 7:30 am to 5:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Robert Fennema, can be reached at 571-272-2748. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALICIA M. CHOI/
Primary Patent Examiner, Art Unit 2117

Prosecution Timeline

Feb 02, 2023
Application Filed
Oct 26, 2023
Response after Non-Final Action
Feb 23, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601519
LEARNING DEVICE AND INFERENCE DEVICE FOR STATE OF AIR CONDITIONING SYSTEM
2y 5m to grant Granted Apr 14, 2026
Patent 12595926
CONTROLLER AND METHOD FOR MANAGING A FLOW UNIT
2y 5m to grant Granted Apr 07, 2026
Patent 12590721
BUILDING MANAGEMENT SYSTEM WITH PARTICULATE SENSING
2y 5m to grant Granted Mar 31, 2026
Patent 12584648
BUILDING MANAGEMENT SYSTEM WITH CLEAN AIR AND INFECTION REDUCTION FEATURES
2y 5m to grant Granted Mar 24, 2026
Patent 12584650
DISTRIBUTED ZONE CONTROL SYSTEM
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
79%
Grant Probability
99%
With Interview (+29.2%)
2y 7m
Median Time to Grant
Low
PTA Risk
Based on 349 resolved cases by this examiner. Grant probability derived from career allow rate.
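Per the derivation stated above, the grant probability is the career allow rate applied directly, which is a single division. A minimal sketch, with the granted count assumed (chosen only so the rate rounds to the displayed 79%):

```python
# Assumed granted count; only the 349 resolved-case total is given above.
granted, resolved = 276, 349
print(f"{granted / resolved:.1%}")  # -> 79.1%, shown as 79%
```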
