DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 21, 27-28, and 33-35 are rejected under 35 U.S.C. 103 as being unpatentable over Faulkner (US 2020/0371673 A1) in view of Leibel (US 2017/0140570 A1).
As to claim 21, Faulkner teaches a method for processing gaze input data to control a computing device, the method comprising:
identifying two or more computing devices associated with a shared computing engine, wherein the two or more computing devices can operate together and share resources (see at least [0065] “The system 100 can be configured to provide a collaborative environment that facilitates the communication between two or more computing devices.”; [0101] “In addition to functionality that brings focus to a rendering of a selected object, the techniques disclosed herein can provide other functionality based on the gaze gesture. … the system can share the reconfigured user interface with other participants of a communication session. Thus, in addition to displaying contextually relevant menu options based on an object type, a system can also select and display contextually relevant menu options based on a state of available functions.”; [0162] “devices 1110 may include one or more computing devices that operate in a cluster or other grouped configuration to share resources, balance load, increase performance, provide fail-over support or redundancy, or for other purposes.”);
receiving gaze input data corresponding to one or more users, from a first computing device of the two or more computing devices, the gaze input data being accessible via a second computing device of the two or more computing devices (see at least [0088] “the system can select an object in response to determining that a gaze target meets one or more criteria with respect to that object.”; [0101] “The system can share the reconfigured user interface with other participants of a communication session.”; [0105] “the user selects the virtual object 110B using a gaze gesture that causes the system 10 define the gaze target 112.”);
determining, based on the gaze input data, an action (see at least [0088] “the system can select an object in response to determining that a gaze target meets one or more criteria with respect to that object.”; [0105] “the user selects the virtual object 110B using a gaze gesture that causes the system 10 define the gaze target 112.” – note action is object selection or interface update);
updating the state information based on the determined action (see at least [0101] “In addition to functionality that brings focus to a rendering of a selected object, the techniques disclosed herein can provide other functionality based on the gaze gesture. … the system can share the reconfigured user interface with other participants of a communication session. Thus, in addition to displaying contextually relevant menu options based on an object type, a system can also select and display contextually relevant menu options based on a state of available functions.”; [0162] “devices 1110 may include one or more computing devices that operate in a cluster or other grouped configuration to share resources, balance load, increase performance, provide fail-over support or redundancy, or for other purposes.” – note state information is data that describes the current condition of a system, program, or component at a specific point in time. It includes the contents of memory and any other attributes that define its status and behavior which can change based on inputs and operations); and
adapting the second computing device to perform the determined action (see at least [0088] “the system can select an object in response to determining that a gaze target meets one or more criteria with respect to that object.” and [0171] “the output module 1132 transmits communication data 1139(1) to client computing device 1106(1), and transmits communication data 1139(2) to client computing device 1106(2), and transmits communication data 1139(3) to client computing device 1106(3), etc. The communication data 1139 transmitted to the client computing devices can be the same or can be different”).
Faulkner does not directly teach a shared computing engine that receives and maintains state information from each computing device, updating that shared state information based on a determined action, and adapting a second device based on the updated shared state information.
Leibel teaches identifying two or more computing devices associated with a shared computing engine, wherein the shared computing engine is configured to receive state information from each computing device of the two or more computing devices (see at least [0141] “in case of gaming, computing device 1300 may be the same as the game server computer that arbitrates the game and stays in communication with one or more game client computers, such as computing devices 1440A-N.”; [0171] “a central renderer 1310 of computing device 1300 … detecting/receiving a request or an indication for processing a viewpoint-agnostic workload relating to a graphics scene from one or more satellite/client renderers at one or more satellite/client computers, such as satellite renderers 1450A-N of satellite computers 1440A-N.” – note multiple computing devices (clients/satellites) associated with a shared computing engine (central renderer) and that the central renderer receives workload/state information from each client device);
receiving gaze input data corresponding to one or more users via the shared computing engine (see at least [0152] “Once the viewpoint-agnostic tasks are performed by centralized engine 1405, the processing data may then be forwarded on to … satellite renderers 1450A-N at computing devices 1440A-N.”; [0171] “a central renderer 1310 of computing device 1300 … detecting/receiving a request or an indication for processing a viewpoint-agnostic workload relating to a graphics scene from one or more satellite/client renderers at one or more satellite/client computers, such as satellite renderers 1450A-N of satellite computers 1440A-N.” – note that while Leibel does not use the specific term “gaze input data,” it teaches receiving client input/viewpoint data from one computing device and distributing processed data to other devices via the shared engine. Gaze input is a known form of viewpoint-based user input; substituting the gaze input data of Faulkner for Leibel's other user input would have been a predictable and routine variation);
updating the state information received by the shared computing engine, and adapting the second computing device based on the updated state information (see at least [0152] “Once the viewpoint-agnostic tasks are performed by centralized engine 1405, the processing data may then be forwarded on to compilation/presentation logic 1407 to compile the processing data into viewpoint-agnostic data that is presentable or sharable with satellite renderer 1450A-N.”; [0171] “The viewpoint-agnostic workload is evaluated for further processing at block 1605… and the scene may be capable of being delivered at the one or more satellite computers with their own specific viewpoints corresponding to their users.”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Faulkner’s collaborative gaze-based system to employ the centralized shared-engine architecture of Leibel in order to synchronize shared state information among multiple devices. One skilled in the art would have been motivated to combine these teachings because Leibel’s centralized renderer approach provides well-known benefits—improved consistency, reduced latency, and coordinated state management—for systems involving multiple devices interacting with shared data or environments. Applying Leibel’s shared computing engine to Faulkner’s collaborative multi-device gaze-control system would have predictably allowed all participant devices to maintain a consistent shared state reflecting gaze-driven actions, thereby enhancing synchronization and collaboration. The combination merely substitutes one known distributed-system control technique (Leibel’s shared engine) into another known multi-device gaze-control environment (Faulkner), yielding no unexpected result.
As to claim 33, Faulkner teaches a system for processing gaze input data to control a computing device, the system comprising:
a processor; and memory storing instructions that, when executed by the processor, cause the system to perform a set of operations (see at least [0194] “the computing device 1300 includes one or more logic devices and one or more computer memory devices storing instructions executable by the logic devices to implement the functionality disclosed herein. In particular, a controller 1318 can include one or more processing units 1320”), the set of operations comprising:
identifying two or more computing devices associated with a shared computing engine, wherein the two or more computing devices can operate together and share resources (see at least [0065] “The system 100 can be configured to provide a collaborative environment that facilitates the communication between two or more computing devices.”; [0101] “In addition to functionality that brings focus to a rendering of a selected object, the techniques disclosed herein can provide other functionality based on the gaze gesture. … the system can share the reconfigured user interface with other participants of a communication session. Thus, in addition to displaying contextually relevant menu options based on an object type, a system can also select and display contextually relevant menu options based on a state of available functions.”; [0162] “devices 1110 may include one or more computing devices that operate in a cluster or other grouped configuration to share resources, balance load, increase performance, provide fail-over support or redundancy, or for other purposes.”);
receiving gaze input data corresponding to one or more users, from a first computing device of the two or more computing devices, the gaze input data being accessible via a second computing device of the two or more computing devices (see at least [0088] “the system can select an object in response to determining that a gaze target meets one or more criteria with respect to that object.”; [0101] “The system can share the reconfigured user interface with other participants of a communication session.”; [0105] “the user selects the virtual object 110B using a gaze gesture that causes the system 10 define the gaze target 112.”);
determining, based on the gaze input data, an action (see at least [0088] “the system can select an object in response to determining that a gaze target meets one or more criteria with respect to that object.”; [0105] “the user selects the virtual object 110B using a gaze gesture that causes the system 10 define the gaze target 112.” – note action is object selection or interface update);
updating the state information based on the determined action (see at least [0101] “In addition to functionality that brings focus to a rendering of a selected object, the techniques disclosed herein can provide other functionality based on the gaze gesture. … the system can share the reconfigured user interface with other participants of a communication session. Thus, in addition to displaying contextually relevant menu options based on an object type, a system can also select and display contextually relevant menu options based on a state of available functions.”; [0162] “devices 1110 may include one or more computing devices that operate in a cluster or other grouped configuration to share resources, balance load, increase performance, provide fail-over support or redundancy, or for other purposes.” – note state information is data that describes the current condition of a system, program, or component at a specific point in time. It includes the contents of memory and any other attributes that define its status and behavior which can change based on inputs and operations); and
adapting the second computing device to perform the determined action (see at least [0088] “the system can select an object in response to determining that a gaze target meets one or more criteria with respect to that object.” and [0171] “the output module 1132 transmits communication data 1139(1) to client computing device 1106(1), and transmits communication data 1139(2) to client computing device 1106(2), and transmits communication data 1139(3) to client computing device 1106(3), etc. The communication data 1139 transmitted to the client computing devices can be the same or can be different”).
Faulkner does not directly teach a shared computing engine that receives and maintains state information from each computing device, updating that shared state information based on a determined action, and adapting a second device based on the updated shared state information.
Leibel teaches identifying two or more computing devices associated with a shared computing engine, wherein the shared computing engine is configured to receive state information from each computing device of the two or more computing devices (see at least [0141] “in case of gaming, computing device 1300 may be the same as the game server computer that arbitrates the game and stays in communication with one or more game client computers, such as computing devices 1440A-N.”; [0171] “a central renderer 1310 of computing device 1300 … detecting/receiving a request or an indication for processing a viewpoint-agnostic workload relating to a graphics scene from one or more satellite/client renderers at one or more satellite/client computers, such as satellite renderers 1450A-N of satellite computers 1440A-N.” – note multiple computing devices (clients/satellites) associated with a shared computing engine (central renderer) and that the central renderer receives workload/state information from each client device);
receiving gaze input data corresponding to one or more users via the shared computing engine (see at least [0152] “Once the viewpoint-agnostic tasks are performed by centralized engine 1405, the processing data may then be forwarded on to … satellite renderers 1450A-N at computing devices 1440A-N.”; [0171] “a central renderer 1310 of computing device 1300 … detecting/receiving a request or an indication for processing a viewpoint-agnostic workload relating to a graphics scene from one or more satellite/client renderers at one or more satellite/client computers, such as satellite renderers 1450A-N of satellite computers 1440A-N.” – note that while Leibel does not use the specific term “gaze input data,” it teaches receiving client input/viewpoint data from one computing device and distributing processed data to other devices via the shared engine. Gaze input is a known form of viewpoint-based user input; substituting the gaze input data of Faulkner for Leibel's other user input would have been a predictable and routine variation);
updating the state information received by the shared computing engine, and adapting the second computing device based on the updated state information (see at least [0152] “Once the viewpoint-agnostic tasks are performed by centralized engine 1405, the processing data may then be forwarded on to compilation/presentation logic 1407 to compile the processing data into viewpoint-agnostic data that is presentable or sharable with satellite renderer 1450A-N.”; [0171] “The viewpoint-agnostic workload is evaluated for further processing at block 1605… and the scene may be capable of being delivered at the one or more satellite computers with their own specific viewpoints corresponding to their users.”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Faulkner’s collaborative gaze-based system to employ the centralized shared-engine architecture of Leibel in order to synchronize shared state information among multiple devices. One skilled in the art would have been motivated to combine these teachings because Leibel’s centralized renderer approach provides well-known benefits—improved consistency, reduced latency, and coordinated state management—for systems involving multiple devices interacting with shared data or environments. Applying Leibel’s shared computing engine to Faulkner’s collaborative multi-device gaze-control system would have predictably allowed all participant devices to maintain a consistent shared state reflecting gaze-driven actions, thereby enhancing synchronization and collaboration. The combination merely substitutes one known distributed-system control technique (Leibel’s shared engine) into another known multi-device gaze-control environment (Faulkner), yielding no unexpected result.
As to claim 27, the combination of Faulkner and Leibel teach the method of claim 21 (see above rejection), wherein: the determined action comprises at least one of copying, pasting, or cutting an element from the first computing device or the second computing device, the updating the state information received by the shared computing engine comprises storing an indication of the element being the at least one of copied, pasted, or cut, and the adapting the second computing device to perform the determined action comprises causing the element to be the at least one of copied, pasted, or cut based on the gaze input data received from the first computing device (see Faulkner [0065] “A collaborative environment can be in any suitable communication session format including but not limited to .. multi-user editing sessions”; [0088] “the system can select an object in response to determining that a gaze target meets one or more criteria with respect to that object.”; [0101] “In addition to functionality that brings focus to a rendering of a selected object, the techniques disclosed herein can provide other functionality based on the gaze gesture. … the system can share the reconfigured user interface with other participants of a communication session. Thus, in addition to displaying contextually relevant menu options based on an object type, a system can also select and display contextually relevant menu options based on a state of available functions.”; [0105] “the user selects the virtual object 110B using a gaze gesture that causes the system 10 define the gaze target 112.”; [0106] “When the data defining the object has been, the application or one of the modules providing the specialized tools can save the updated data defining the object and distribute the updated data to one or more users based on their roles and/or permissions.” – note multi-user editing would include copying, pasting, or cutting. Further, Faulkner’s “contextually relevant menu options” and “save the updated data … and distribute …” teach selecting object operations (which would naturally include cut/copy/paste) and propagating the result to other users/devices. This maps to copy/paste/cut operations triggered via gaze and stored/distributed in shared state.).
As to claim 28, Faulkner and Leibel teach the method of claim 21 (see above rejection), wherein: the one or more users include a primary user participating in a videoconference on the first computing device, the determined action comprises identifying a participant in the videoconference at whom the primary user is gazing on the first computing device, and the adapting the second computing device to perform the determined action comprises, based on the gaze input data received from the first computing device, causing information corresponding to the identified participant to be displayed on the second computing device (see Faulkner at least [0088] “the system can select an object in response to determining that a gaze target meets one or more criteria with respect to that object.”; [0101] “the system can share the reconfigured user interface with other participants of a communication session.”; [0105] “the user selects the virtual object 110B using a gaze gesture that causes the system 10 define the gaze target 112.”; and Leibel [0171] “a central renderer 1310 … detecting/receiving … processing a viewpoint-agnostic workload … the scene may be capable of being delivered at the one or more satellite computers with their own specific viewpoints corresponding to their users.”).
As to claim 34, Faulkner and Leibel teach the system of claim 33 (see above rejection), wherein: the determined action comprises at least one of copying, pasting, or cutting an element from the first computing device or the second computing device, the updating the state information received by the shared computing engine comprises storing an indication of the element being the at least one of copied, pasted, or cut, and the adapting the second computing device to perform the determined action comprises causing the element to be the at least one of copied, pasted, or cut (see Faulkner [0065] “A collaborative environment can be in any suitable communication session format including but not limited to .. multi-user editing sessions”; [0101] “In addition to functionality that brings focus to a rendering of a selected object, the techniques disclosed herein can provide other functionality based on the gaze gesture. … the system can share the reconfigured user interface with other participants of a communication session. Thus, in addition to displaying contextually relevant menu options based on an object type, a system can also select and display contextually relevant menu options based on a state of available functions.”; [0106] “When the data defining the object has been, the application or one of the modules providing the specialized tools can save the updated data defining the object and distribute the updated data to one or more users based on their roles and/or permissions.” – note multi-user editing would include copying, pasting, or cutting. Further, Faulkner’s “contextually relevant menu options” and “save the updated data … and distribute …” teach selecting object operations (which would naturally include cut/copy/paste) and propagating the result to other users/devices. This maps to copy/paste/cut operations triggered via gaze and stored/distributed in shared state.).
As to claim 35, Faulkner and Leibel teach the system of claim 33 (see above rejection), wherein: the one or more users include a primary user participating in a videoconference on the first computing device, the determined action comprises identifying a participant in the videoconference at whom the primary user is gazing on the first computing device, and the adapting the second computing device to perform the determined action comprises causing information corresponding to the identified participant to be displayed on the second computing device (see Faulkner at least [0101] “the system can share the reconfigured user interface with other participants of a communication session.”; [0105] “the user selects the virtual object 110B using a gaze gesture that causes the system 10 define the gaze target 112.”; and Leibel [0171] “a central renderer 1310 … detecting/receiving … processing a viewpoint-agnostic workload … the scene may be capable of being delivered at the one or more satellite computers with their own specific viewpoints corresponding to their users.”).
Claims 22-25 and 36-39 are rejected under 35 U.S.C. 103 as being unpatentable over Faulkner (US 2020/0371673 A1) in view of Leibel (US 2017/0140570 A1), further in view of Chu et al. (US 11,449,149 B2).
As to claim 22, the combination of Faulkner and Leibel teach the method of claim 21 (see above rejection), wherein: the updating the state information received by the shared computing engine comprises storing an indication of a selection, and the adapting the second computing device to perform the determined action comprises causing a selection on the second computing device (see Faulkner [0105] “the user selects the virtual object 110B using a gaze gesture that causes the system 10 define the gaze target 112.”; [0106] “In response the selection, the system can change the mode of an application.”, “When the data defining the object has been, the application or one of the modules providing the specialized tools can save the updated data defining the object and distribute the updated data to one or more users based on their roles and/or permissions.”; and Leibel [0141] “computing device 1300 may be the same as the game server computer that arbitrates the game and stays in communication with one or more game client computers, such as computing devices 1440A-N.”; [0152] “Once the viewpoint-agnostic tasks are performed by centralized engine 1405, the processing data may then be forwarded on to … satellite renderers 1450A-N.”; [0171] “a central renderer 1310 of computing device 1300 … detecting/receiving a request or an indication for processing a viewpoint-agnostic workload relating to a graphics scene from one or more satellite/client renderers…”).
Faulkner and Leibel do not directly teach application selection.
Chu teaches wherein: the determined action comprises selecting an application on the second computing device based on the gaze input data received from the first computing device; and the adapting the second computing device to perform the determined action comprises causing the application to be selected on the second computing device (see at least col. 7 lines 38-58 “the music application icon can be rendered in the lenses 206 above the second computing device.”, and col. 7 line 59 – col. 8 line 20 “the user 202 can adjust their gaze and/or the direction of the computerized glasses 204 more towards the first computing device or the second computing device. In response, the automated assistant can detect the adjustment of the gaze and/or facing direction of the user and cause the first instance or the second instance of the music application icon to provide feedback that one has been selected.”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Faulkner’s gaze-selection system to use Leibel’s centralized/shared engine for storing and forwarding selection state so that selection made at one device is stored and propagated to other devices. Chu teaches that gaze selection and simple confirmation can cause a specific remote device to be responsive and run an application. The person having ordinary skill in the art would have been motivated to combine these teachings to provide predictable multi-device behavior (consistent UI state across devices, ability to target which device runs an application), because centralized state and device-activation provides improved synchronization, user clarity, and known practical benefits (resource coordination, consistent UX). The combination yields no unexpected result — it is the predictable application of known techniques.
As to claim 23, the combination of Faulkner, Leibel and Chu teach the method of claim 22 (see above rejection), wherein the adapting the second computing device to perform the determined action further comprises causing a brightness on the first computing device to be reduced to prioritize the second computing device on which the application was selected (see Chu at least col. 7 line 59 – col. 8 line 20 “each instance of the music application icon can be rendered in a way that indicates to the user that a particular device has not been selected …the music application icon can be “grayed out,” blurry, blinking, and/or otherwise have one or more features that indicate that one of the devices should be selected by the user. …, when the user 202 directs their gaze .., the first instance of the music application can icon blink, shake, become idle, no long be grayed out, no longer be blurry, ... In this way, the user 202 can receive feedback that they have selected a particular device”).
As to claim 24, the combination of Faulkner, Leibel and Chu teach the method of claim 22 (see above rejection), further comprising: identifying a background process being executed on the second computing device; and interrupting the background process being executed on the second computing device to prioritize the selected application (see Faulkner [0081] “the display areas of the objects are arranged according to a priority level of the object displayed in each display area. In this example, the first display area 121A, the third display area 121C, and the fourth display area 121D are arranged from left to right indicating a higher priority for the objects that are displayed on the left portion of the user interface 103 versus objects that are displayed on the right portion of the user interface 103.”; and Chu at least col. 7 line 59 – col. 8 line 20 “each instance of the music application icon can be rendered in a way that indicates to the user that a particular device has not been selected …the music application icon can be “grayed out,” blurry, blinking, and/or otherwise have one or more features that indicate that one of the devices should be selected by the user. …, when the user 202 directs their gaze .., the first instance of the music application can icon blink, shake, become idle, no long be grayed out, no longer be blurry, ... In this way, the user 202 can receive feedback that they have selected a particular device”).
As to claim 25, the combination of Faulkner, Leibel and Chu teach the method of claim 22 (see above rejection), wherein: the one or more users comprise a presenter to whom the gaze data corresponds, and the adapting the second computing device to perform the determined action comprises causing the selected application to be presented on the second computing device based on the gaze input data received from the first computing device (see Faulkner at least [0088] “the system can select an object in response to determining that a gaze target meets one or more criteria with respect to that object.”; [0101] “The system can share the reconfigured user interface with other participants of a communication session.”; [0105] “the user selects the virtual object 110B using a gaze gesture that causes the system 10 define the gaze target 112.”; and Leibel [0171] “the scene may be capable of being delivered at the one or more satellite computers with their own specific viewpoints corresponding to their users.”).
As to claim 36, the combination of Faulkner and Leibel teach the system of claim 33 (see above rejection), wherein: the updating the state information received by the shared computing engine comprises storing an indication of a selection, and the adapting the second computing device to perform the determined action comprises causing a selection on the second computing device (see Faulkner [0105] “the user selects the virtual object 110B using a gaze gesture that causes the system 10 define the gaze target 112.”; [0106] “In response the selection, the system can change the mode of an application.”, “When the data defining the object has been, the application or one of the modules providing the specialized tools can save the updated data defining the object and distribute the updated data to one or more users based on their roles and/or permissions.”; and Leibel [0141] “computing device 1300 may be the same as the game server computer that arbitrates the game and stays in communication with one or more game client computers, such as computing devices 1440A-N.”; [0152] “Once the viewpoint-agnostic tasks are performed by centralized engine 1405, the processing data may then be forwarded on to … satellite renderers 1450A-N.”; [0171] “a central renderer 1310 of computing device 1300 … detecting/receiving a request or an indication for processing a viewpoint-agnostic workload relating to a graphics scene from one or more satellite/client renderers…”).
Faulkner and Leibel do not directly teach application selection.
Chu teaches wherein: the determined action comprises selecting an application on the second computing device; and the adapting the second computing device to perform the determined action comprises causing the application to be selected on the second computing device (see at least col. 7 lines 38-58 “the music application icon can be rendered in the lenses 206 above the second computing device.”, and col. 7 line 59 – col. 8 line 20 “the user 202 can adjust their gaze and/or the direction of the computerized glasses 204 more towards the first computing device or the second computing device. In response, the automated assistant can detect the adjustment of the gaze and/or facing direction of the user and cause the first instance or the second instance of the music application icon to provide feedback that one has been selected.”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Faulkner’s gaze-selection system to use Leibel’s centralized/shared engine for storing and forwarding selection state so that selection made at one device is stored and propagated to other devices. Chu teaches that gaze selection and simple confirmation can cause a specific remote device to be responsive and run an application. The person having ordinary skill in the art would have been motivated to combine these teachings to provide predictable multi-device behavior (consistent UI state across devices, ability to target which device runs an application), because centralized state and device-activation provides improved synchronization, user clarity, and known practical benefits (resource coordination, consistent UX). The combination yields no unexpected result — it is the predictable application of known techniques.
As to claim 37, the combination of Faulkner, Leibel and Chu teach the system of claim 36 (see above rejection), wherein the adapting the second computing device to perform the determined action further comprises causing a brightness on the first computing device to be reduced to prioritize the second computing device on which the application was selected (see Chu at least col. 7 line 59 – col. 8 line 20 “each instance of the music application icon can be rendered in a way that indicates to the user that a particular device has not been selected …the music application icon can be “grayed out,” blurry, blinking, and/or otherwise have one or more features that indicate that one of the devices should be selected by the user. …, when the user 202 directs their gaze .., the first instance of the music application can icon blink, shake, become idle, no long be grayed out, no longer be blurry, ... In this way, the user 202 can receive feedback that they have selected a particular device”).
As to claim 38, the combination of Faulkner, Leibel and Chu teach the system of claim 36 (see above rejection), wherein the set of operations further comprise: identifying a background process being executed on the second computing device; and interrupting the background process being executed on the second computing device to prioritize the selected application (see Faulkner [0081] “the display areas of the objects are arranged according to a priority level of the object displayed in each display area. In this example, the first display area 121A, the third display area 121C, and the fourth display area 121D are arranged from left to right indicating a higher priority for the objects that are displayed on the left portion of the user interface 103 versus objects that are displayed on the right portion of the user interface 103.”; and Chu at least col. 7 line 59 – col. 8 line 20 “each instance of the music application icon can be rendered in a way that indicates to the user that a particular device has not been selected …the music application icon can be “grayed out,” blurry, blinking, and/or otherwise have one or more features that indicate that one of the devices should be selected by the user. …, when the user 202 directs their gaze .., the first instance of the music application can icon blink, shake, become idle, no long be grayed out, no longer be blurry, ... In this way, the user 202 can receive feedback that they have selected a particular device”).
As to claim 39, the combination of Faulkner, Leibel and Chu teach the system of claim 36 (see above rejection), wherein: the one or more users comprise a presenter to whom the gaze data corresponds, and the adapting the second computing device to perform the determined action comprises causing the selected application to be presented on the second computing device (see Faulkner at least [0101] “The system can share the reconfigured user interface with other participants of a communication session.”; and Leibel [0171] “the scene may be capable of being delivered at the one or more satellite computers with their own specific viewpoints corresponding to their users.”).
Claims 26 and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Faulkner (US 2020/0371673 A1) in view of Leibel (US 2017/0140570 A1), further in view of Chu et al. (US 11,449,149 B2), and further in view of Dow et al. (US 2018/0081431 A1).
As to claim 26, the combination of Faulkner, Leibel and Chu teach the method of claim 22 (see above rejection).
Faulkner, Leibel and Chu do not directly teach the one or more users comprise a plurality of users to whom the gaze data corresponds, and the determined action comprises selecting the application on the second computing device by identifying that the application is being viewed by a majority of the plurality of users.
Dow teaches the one or more users comprise a plurality of users to whom the gaze data corresponds, and the determined action based on the gaze input data received from the first computing device comprises selecting the application on the second computing device by identifying that the application is being viewed by a majority of the plurality of users (see Dow at least Claim 1. “A method, executed by a gaze detection system comprising a gaze tracker, a display device, and a computing device integrated into a housing of the display device system, the method comprising: providing, by the display device, an initial visualization; monitoring, by the gaze tracker, one or more of a plurality of users approaching the display device; capturing, by the gaze tracker, gaze direction data from the plurality of users viewing the initial visualization, wherein the gaze direction data comprises an eye motion, a head position, and a head direction of each of the plurality of users, wherein the gaze tracker comprises one or more non-contact, non-invasive optical sensors that receive and sense infrared light reflected from at least one eye of each of the plurality of users to capture the gaze direction data; processing, by a processor of the computing device, the gaze direction data to determine multiple points of interest for the plurality of users; determining, by the processor, a common region of the multiple points of interest to acquire supplementary information for all of the plurality of users; and providing, by the display device, the supplementary information with the initial visualization based on the common region.” – point of interest can be an application).
It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the gaze input data of each user as taught by Dow with Faulkner, Leibel and Chu in order to determine a point of interest (see Dow at least Abstract). Further rationale to support a conclusion that the claim would have been obvious is that all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods, and the combination yields nothing more than predictable results to one of ordinary skill in the art.
As to claim 40, the combination of Faulkner, Leibel and Chu teach the system of claim 36 (see above rejection).
Faulkner, Leibel and Chu do not directly teach the one or more users comprise a plurality of users to whom the gaze data corresponds, and the determined action comprises selecting the application on the second computing device by identifying that the application is being viewed by a majority of the plurality of users.
Dow teaches the one or more users comprise a plurality of users to whom the gaze data corresponds, and the determined action comprises selecting the application on the second computing device by identifying that the application is being viewed by a majority of the plurality of users (see Dow at least Claim 1. “A method, executed by a gaze detection system comprising a gaze tracker, a display device, and a computing device integrated into a housing of the display device system, the method comprising: providing, by the display device, an initial visualization; monitoring, by the gaze tracker, one or more of a plurality of users approaching the display device; capturing, by the gaze tracker, gaze direction data from the plurality of users viewing the initial visualization, wherein the gaze direction data comprises an eye motion, a head position, and a head direction of each of the plurality of users, wherein the gaze tracker comprises one or more non-contact, non-invasive optical sensors that receive and sense infrared light reflected from at least one eye of each of the plurality of users to capture the gaze direction data; processing, by a processor of the computing device, the gaze direction data to determine multiple points of interest for the plurality of users; determining, by the processor, a common region of the multiple points of interest to acquire supplementary information for all of the plurality of users; and providing, by the display device, the supplementary information with the initial visualization based on the common region.” – point of interest can be an application).
It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the gaze input data of each user as taught by Dow with Faulkner, Leibel and Chu in order to determine a point of interest (see Dow at least Abstract). Further rationale to support a conclusion that the claim would have been obvious is that all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods, and the combination yields nothing more than predictable results to one of ordinary skill in the art.
Claim 29 is rejected under 35 U.S.C. 103 as being unpatentable over Faulkner (US 2020/0371673 A1) in view of Leibel (US 2017/0140570 A1), further in view of Dow et al. (US 2018/0081431 A1).
As to claim 29, Faulkner teaches a method for processing gaze input data to control a computing device, the method comprising:
identifying a first computing device and a second computing device both associated with a shared computing engine, wherein the first and second computing devices can operate together and share resources (see at least [0065] “The system 100 can be configured to provide a collaborative environment that facilitates the communication between two or more computing devices.”; [0101] “In addition to functionality that brings focus to a rendering of a selected object, the techniques disclosed herein can provide other functionality based on the gaze gesture. … the system can share the reconfigured user interface with other participants of a communication session. Thus, in addition to displaying contextually relevant menu options based on an object type, a system can also select and display contextually relevant menu options based on a state of available functions.”; [0162] “devices 1110 may include one or more computing devices that operate in a cluster or other grouped configuration to share resources, balance load, increase performance, provide fail-over support or redundancy, or for other purposes.”);
receiving gaze input data corresponding to a plurality of users, from the first computing device, the gaze input data being accessible via the second computing device (see at least [0088] “the system can select an object in response to determining that a gaze target meets one or more criteria with respect to that object.”; [0101] “The system can share the reconfigured user interface with other participants of a communication session.”; [0105] “the user selects the virtual object 110B using a gaze gesture that causes the system 10 define the gaze target 112.”);
determining, based on the gaze input data, an action (see at least [0088] “the system can select an object in response to determining that a gaze target meets one or more criteria with respect to that object.”; [0105] “the user selects the virtual object 110B using a gaze gesture that causes the system 10 define the gaze target 112.” – note action is object selection or interface update);
updating the state information based on the determined action (see at least [0101] “In addition to functionality that brings focus to a rendering of a selected object, the techniques disclosed herein can provide other functionality based on the gaze gesture. … the system can share the reconfigured user interface with other participants of a communication session. Thus, in addition to displaying contextually relevant menu options based on an object type, a system can also select and display contextually relevant menu options based on a state of available functions.”; [0162] “devices 1110 may include one or more computing devices that operate in a cluster or other grouped configuration to share resources, balance load, increase performance, provide fail-over support or redundancy, or for other purposes.” – note state information is data that describes the current condition of a system, program, or component at a specific point in time. It includes the contents of memory and any other attributes that define its status and behavior which can change based on inputs and operations); and
adapting the second computing device to perform the determined action (see at least [0088] “the system can select an object in response to determining that a gaze target meets one or more criteria with respect to that object.” and [0171] “the output module 1132 transmits communication data 1139(1) to client computing device 1106(1), and transmits communication data 1139(2) to client computing device 1106(2), and transmits communication data 1139(3) to client computing device 1106(3), etc. The communication data 1139 transmitted to the client computing devices can be the same or can be different”).
Faulkner does not directly teach a shared computing engine that receives and maintains state information from each computing device, updating that shared state information based on a determined action, adapting a second device based on the updated shared state information, and identifying at what a majority of the plurality of users are looking, based on the gaze input data.
Leibel teaches identifying a first computing device and a second computing device both associated with a shared computing engine, wherein the shared computing engine is configured to receive state information from each of the first computing device and the second computing device (see at least [0141] “in case of gaming, computing device 1300 may be the same as the game server computer that arbitrates the game and stays in communication with one or more game client computers, such as computing devices 1440A-N.”; [0171] “a central renderer 1310 of computing device 1300 … detecting/receiving a request or an indication for processing a viewpoint-agnostic workload relating to a graphics scene from one or more satellite/client renderers at one or more satellite/client computers, such as satellite renderers 1450A-N of satellite computers 1440A-N.” – note multiple computing devices (clients/satellites) associated with a shared computing engine (central renderer) and that the central renderer receives workload/state information from each client device);
receiving gaze input data corresponding to a plurality of users via the shared computing engine (see at least [0152] “Once the viewpoint-agnostic tasks are performed by centralized engine 1405, the processing data may then be forwarded on to … satellite renderers 1450A-N at computing devices 1440A-N.”; [0171] “a central renderer 1310 of computing device 1300 … detecting/receiving a request or an indication for processing a viewpoint-agnostic workload relating to a graphics scene from one or more satellite/client renderers at one or more satellite/client computers, such as satellite renderers 1450A-N of satellite computers 1440A-N.” – note that while Leibel does not use the specific term “gaze input data,” it teaches receiving client input/viewpoint data from one computing device and distributing processed data to other devices via the shared engine. Gaze input is a known form of viewpoint-based user input; substituting the gaze input data of Faulkner for Leibel's other user input would have been a predictable and routine variation);
updating the state information received by the shared computing engine, and adapting the second computing device based on the updated state information (see at least [0152] “Once the viewpoint-agnostic tasks are performed by centralized engine 1405, the processing data may then be forwarded on to compilation/presentation logic 1407 to compile the processing data into viewpoint-agnostic data that is presentable or sharable with satellite renderer 1450A-N.”; [0171] “The viewpoint-agnostic workload is evaluated for further processing at block 1605… and the scene may be capable of being delivered at the one or more satellite computers with their own specific viewpoints corresponding to their users.”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Faulkner’s collaborative gaze-based system to employ the centralized shared-engine architecture of Leibel in order to synchronize shared state information among multiple devices. One skilled in the art would have been motivated to combine these teachings because Leibel’s centralized renderer approach provides well-known benefits—improved consistency, reduced latency, and coordinated state management—for systems involving multiple devices interacting with shared data or environments. Applying Leibel’s shared computing engine to Faulkner’s collaborative multi-device gaze-control system would have predictably allowed all participant devices to maintain a consistent shared state reflecting gaze-driven actions, thereby enhancing synchronization and collaboration. The combination merely substitutes one known distributed-system control technique (Leibel’s shared engine) into another known multi-device gaze-control environment (Faulkner), yielding no unexpected result.
Faulkner and Leibel do not directly teach identifying at what a majority of the plurality of users are looking, based on the gaze input data.
Dow teaches identifying at what a majority of the plurality of users are looking, based on the gaze input data (see at least Claim 1. “A method, executed by a gaze detection system comprising a gaze tracker, a display device, and a computing device integrated into a housing of the display device system, the method comprising: providing, by the display device, an initial visualization; monitoring, by the gaze tracker, one or more of a plurality of users approaching the display device; capturing, by the gaze tracker, gaze direction data from the plurality of users viewing the initial visualization, wherein the gaze direction data comprises an eye motion, a head position, and a head direction of each of the plurality of users, wherein the gaze tracker comprises one or more non-contact, non-invasive optical sensors that receive and sense infrared light reflected from at least one eye of each of the plurality of users to capture the gaze direction data; processing, by a processor of the computing device, the gaze direction data to determine multiple points of interest for the plurality of users; determining, by the processor, a common region of the multiple points of interest to acquire supplementary information for all of the plurality of users; and providing, by the display device, the supplementary information with the initial visualization based on the common region.” – note multiple points of interest).
It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the gaze input data of each user as taught by Dow with Faulkner and Leibel in order to determine a point of interest (see Dow at least Abstract). Further rationale to support a conclusion that the claim would have been obvious is that all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods, and the combination yields nothing more than predictable results to one of ordinary skill in the art.
Claims 30-32 are rejected under 35 U.S.C. 103 as being unpatentable over Faulkner (US 2020/0371673 A1) in view of Leibel (US 2017/0140570 A1), further in view of Dow et al. (US 2018/0081431 A1), and further in view of Chu et al. (US 11,449,149 B2).
As to claim 30, the combination of Faulkner, Leibel and Dow teach the method of claim 29 (see above rejection), wherein: the gaze input data indicates that a majority of the plurality of users are gazing at an application on the second computing device (see Faulkner at least [0088] “the system can select an object in response to determining that a gaze target meets one or more criteria with respect to that object.”, [0101], [0105]-[0106]; Leibel [0152], [0171]; and Dow at least Claim 1. “A method, executed by a gaze detection system comprising a gaze tracker, a display device, and a computing device integrated into a housing of the display device system, the method comprising: providing, by the display device, an initial visualization; monitoring, by the gaze tracker, one or more of a plurality of users approaching the display device; capturing, by the gaze tracker, gaze direction data from the plurality of users viewing the initial visualization, wherein the gaze direction data comprises an eye motion, a head position, and a head direction of each of the plurality of users, wherein the gaze tracker comprises one or more non-contact, non-invasive optical sensors that receive and sense infrared light reflected from at least one eye of each of the plurality of users to capture the gaze direction data; processing, by a processor of the computing device, the gaze direction data to determine multiple points of interest for the plurality of users; determining, by the processor, a common region of the multiple points of interest to acquire supplementary information for all of the plurality of users; and providing, by the display device, the supplementary information with the initial visualization based on the common region.” – point of interest can be an application).
Faulkner, Leibel and Dow do not directly teach application selection.
Chu teaches wherein the determined action comprises selecting the application on the second computing device (see at least col. 7 lines 38-58 “the music application icon can be rendered in the lenses 206 above the second computing device.”, and col. 7 line 59 – col. 8 line 20 “the user 202 can adjust their gaze and/or the direction of the computerized glasses 204 more towards the first computing device or the second computing device. In response, the automated assistant can detect the adjustment of the gaze and/or facing direction of the user and cause the first instance or the second instance of the music application icon to provide feedback that one has been selected.”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Faulkner’s gaze-selection system to use Leibel’s centralized shared engine for storing and forwarding selection state, so that a selection made at one device is stored and propagated to the other devices. Chu teaches that a gaze selection and a simple confirmation can cause a specific remote device to become responsive and run an application. A person having ordinary skill in the art would have been motivated to combine these teachings to obtain predictable multi-device behavior (consistent UI state across devices and the ability to target which device runs an application), because centralized state and device activation provide improved synchronization, user clarity, and known practical benefits such as resource coordination and a consistent user experience. The combination yields no unexpected result; it is the predictable application of known techniques.
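For purposes of illustration only, the following sketch (hypothetical Python; not drawn from Faulkner, Leibel, or Chu, and the class name SharedEngine and its methods are invented for this example) shows the kind of predictable behavior described above: a centralized engine stores a gaze-driven selection made at one device and forwards it to the other registered devices so that all devices reflect a consistent state.

class SharedEngine:
    """Minimal shared state holder for a group of cooperating devices."""

    def __init__(self):
        self.devices = {}          # device_id -> callback invoked on a state change
        self.selected_app = None   # current shared selection state

    def register(self, device_id, on_update):
        self.devices[device_id] = on_update

    def select_application(self, source_device, app_name):
        """Record a selection made at one device and propagate it to the others."""
        self.selected_app = app_name
        for device_id, notify in self.devices.items():
            if device_id != source_device:
                notify(app_name)

engine = SharedEngine()
engine.register("device-1", lambda app: print(f"device-1 sees selection: {app}"))
engine.register("device-2", lambda app: print(f"device-2 sees selection: {app}"))
engine.select_application("device-1", "music-player")  # only device-2 is notified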
As to claim 31, the combination of Faulkner, Leibel, Dow and Chu teaches the method of claim 30 (see above rejection), further comprising: identifying a background process being executed on the second computing device; and interrupting the background process being executed on the second computing device to free up one or more computational resources of the second computing device for the selected application (see Faulkner [0081] “the display areas of the objects are arranged according to a priority level of the object displayed in each display area. In this example, the first display area 121A, the third display area 121C, and the fourth display area 121D are arranged from left to right indicating a higher priority for the objects that are displayed on the left portion of the user interface 103 versus objects that are displayed on the right portion of the user interface 103.”; and Chu at least col. 7 line 59 – col. 8 line 20 “each instance of the music application icon can be rendered in a way that indicates to the user that a particular device has not been selected …the music application icon can be “grayed out,” blurry, blinking, and/or otherwise have one or more features that indicate that one of the devices should be selected by the user. …, when the user 202 directs their gaze .., the first instance of the music application icon can blink, shake, become idle, no longer be grayed out, no longer be blurry, ... In this way, the user 202 can receive feedback that they have selected a particular device”).
As to claim 32, the combination of Faulkner, Leibel, Dow and Chu teaches the method of claim 31 (see above rejection), wherein the adapting the second computing device to perform the determined action further comprises causing a brightness on the first computing device to be reduced to prioritize the second computing device on which the application was selected (see Chu at least col. 7 line 59 – col. 8 line 20 “each instance of the music application icon can be rendered in a way that indicates to the user that a particular device has not been selected …the music application icon can be “grayed out,” blurry, blinking, and/or otherwise have one or more features that indicate that one of the devices should be selected by the user. …, when the user 202 directs their gaze .., the first instance of the music application icon can blink, shake, become idle, no longer be grayed out, no longer be blurry, ... In this way, the user 202 can receive feedback that they have selected a particular device”).
Response to Arguments
Applicant’s arguments, filed 2/17/2026, with respect to Double Patenting have been fully considered; the Double Patenting rejection is withdrawn in view of the Terminal Disclaimer filed and approved.
Applicant's arguments filed 2/17/2026, with respect to 35 USC § 103 Rejections have been fully considered but they are not persuasive.
Applicant argues –
“Applicant respectfully traverses for the reason set forth below.
A. Independent Claim 21
In rejecting independent claim 21, the Office Action alleges that it would have been obvious to employ viewpoint data, as described in Leibel, as a form of user input (Office Action at page 11). This is incorrect.
Leibel describes a method for centralized renderings of graphics workloads (Leibel, abstract). Leibel further describes that the centralized rendering involves offloading a portion of rendering tasks to a centralized computing device to alleviate some of the rendering burden on other computing devices (Id.). This is accomplished by choosing "viewpoint-agnostic" rendering tasks to be performed on the centralized computing device and then distributing the "viewpoint-agnostic" data to the other computing devices (Id.).
Thus, Leibel's disclosure recognizes that certain viewpoint-agnostic rendering functionality is redundant and can be performed by a "central computer," whereas other rendering functionality is viewpoint-specific and is performed remotely by "satellite computers" (Leibel, FIG. 16, block 1611). Page 12 of the Office Action concludes that it would have been obvious to use Leibel's shared rendering functionality to implement "gaze-driven actions," stating as follows:
environments. Applying Leibel's shared computing engine to Faulkner's collaborative multi-device gaze-control system would have predictably allowed all participant devices to maintain a consistent shared state reflecting gaze-driven actions, thereby enhancing synchronization and collaboration. The combination merely substitutes one known distributed-system control technique (Leibel's shared engine) into another known multi-device gaze-control environment (Faulkner), yielding no unexpected result.
Applicant respectfully disagrees. Leibel is not analogous art. MPEP 2141.01(a) states as follows:
In order for a reference to be proper for use in an obviousness rejection under 35 U.S.C. 103, the reference must be analogous art to the claimed invention. In re Bigio, 381 F.3d 1320, 1325, 72 USPQ2d 1209, 1212 (Fed. Cir. 2004). A reference is analogous art to the claimed invention if: (1) the reference is from the same field of endeavor as the claimed invention (even if it addresses a different problem); or (2) the reference is reasonably pertinent to the problem faced by the inventor (even if it is not in the same field of endeavor as the claimed invention).
Here, Leibel is neither from the same field of endeavor as the claimed invention, nor is Leibel reasonably pertinent to the problem solved by Applicant's invention. Leibel is directed to rendering techniques for computer graphics. Those skilled in the art recognize that rendering of computer graphics is fundamentally a matter of controlling the output of a computing device. Here, Applicant's invention employs gaze data as an input to the computing device. Said another way, Applicant's invention enables a user to employ their gaze as a means to provide input to the computing device, thus controlling the actions of the computing device with their gaze. Leibel's teachings relating to shared rendering of image output by a computing device is clearly not from the same endeavor as Applicant's invention.
Furthermore, Leibel is not pertinent to the problem solved by Applicant's claimed invention. Leibel's teachings are entirely limited to a description of how to distribute the computational load of rendering computations across multiple computing devices. Nothing in Leibel even remotely suggests any plausible way for the different viewpoints of the "satellite computers" to be employed as a form of user input for controlling a computing device. Said another way, Leibel simply teaches how to control multiple computing devices to output frames of a scene to a user according to a particular viewpoint, without even the faintest suggestion that the viewpoint is employed in some way to control the computing device.
Since Leibel is from a different field of endeavor as the claimed invention and is not reasonably pertinent to the problem of using gaze inputs to control a computing device, Leibel is not analogous art. Therefore, Leibel cannot be used in a § 103 rejection against the claims of this application. Accordingly, for at least this reason, Applicant respectfully requests that the Office withdraw this rejection and allow independent claim 21.
B. Dependent Claims 23, 32, and 37
Dependent claim 23 is further distinguishable from the cited references. Claim 23 recites "causing a brightness on the first computing device to be reduced" (emphasis added). In rejecting claim 23, the Office Action relies on teachings in Chu relating to an icon being grayed out, blurred, or blinking (Office Action at page 22). However, those skilled in the art understand that reducing brightness is different than graying out, blurring, or blinking of an icon. In fact, an icon can be grayed out, blurred, or made to blink without changing display brightness, and display brightness can be changed without graying out, blurring, or blinking of icons.
Thus, even assuming the Office Action accurately characterizes the teachings in Chu, the alleged teachings are inadequate to sustain the rejection. Chu simply does not teach or suggest at least "causing a brightness on the first computing device to be reduced," as recited by dependent claim 23 (emphasis added). Claims 32 and 37 are distinguishable over the cited references for at least similar reasons as claim 23.
C. Dependent Claims 24, 31, and 38
Dependent claim 24 is further distinguishable from the cited references. Claim 24 recites "interrupting the background process being executed on the second computing device" (emphasis added). In rejecting claim 24, the Office Action relies on teachings in Chu relating to an icon being grayed out, blurred, or blinking (Office Action at pages 22-23). However, those skilled in the art understand that graying out, blurring, or blinking of an icon is different than interrupting a background process. In fact, an icon can be grayed out, blurred, or made to blink without interrupting a background process, and a background process can be interrupted without graying out, blurring, or blinking of icons.
Thus, even assuming the Office Action accurately characterizes the teachings in Chu, the alleged teachings are inadequate to sustain the rejection. Chu simply does not teach or suggest at least "interrupting the background process being executed on the second computing device," as recited by dependent claim 24 (emphasis added). Claims 31 and 38 are distinguishable over the cited references for at least similar reasons as claim 24.
D. Independent Claim 29
Independent claim 29 is further distinguishable from the cited references. Claim 29 recites:
identifying at what a majority of the plurality of users are looking, based on the gaze input data;
determining, based on at what the majority of the plurality of users are looking, an action ...
(emphasis added). In rejecting claim 29, the Office Action relies on teachings in Dow relating to gaze direction of a plurality of users (Office Action at pages 35-36). However, the teachings cited in Dow do not include any mention of a majority of the users.
Thus, these teachings in Dow are inadequate to sustain the rejection. Dow simply does not teach or suggest at least "determining, based on at what the majority of the plurality of users are looking, an action," as recited by dependent claim 29 (emphasis added).
E. Remaining Claims
Although of different scope, independent claims 29 and 33 are allowable for at least similar reasons as discussed above with respect to claim 29. Claims 22-28 depend from independent claim 21, claims 30-32 depend from independent claim 29, and claims 34-40 depend from independent claim 33. These dependent claims are allowable as depending from their respective allowable base claims. These dependent claims are also allowable for their own recited features which, in combination with those recited in their respective base claims, are not taught or suggested by the cited references.”
Examiner disagrees –
A. In response to applicant's argument with respect to independent claim 21 that Leibel is nonanalogous art because it relates to graphics rendering, it has been held that a prior art reference must either be in the field of the inventor’s endeavor or, if not, then be reasonably pertinent to the particular problem with which the inventor was concerned, in order to be relied upon as a basis for rejection of the claimed invention. See In re Oetiker, 977 F.2d 1443, 24 USPQ2d 1443 (Fed. Cir. 1992). In this case, the claims address coordinating state information among multiple computing devices, not merely gaze input. Leibel teaches a centralized computing architecture coordinating processing and information exchange across multiple devices and therefore is reasonably pertinent to the problem addressed by the claims.
B. In response to applicant's argument with respect to dependent claims 23, 32, and 37 that Chu does not teach reducing brightness - Under the broadest reasonable interpretation, reducing brightness encompasses reducing the visual intensity or prominence of interface elements displayed by the computing device. Chu teaches visually de-emphasizing icons (e.g., grayed out or blurred), which inherently reduces perceived brightness relative to other interface elements.
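For purposes of illustration only, the following sketch (hypothetical Python; not drawn from Chu, and dim_color is an invented helper) shows one way an interface element's rendered color can be scaled toward black, which is one mechanism by which a grayed-out or otherwise de-emphasized icon presents a lower visual intensity than surrounding elements.

def dim_color(rgb: tuple[int, int, int], factor: float = 0.4) -> tuple[int, ...]:
    """Scale each color channel toward black; a factor below 1.0 reduces intensity."""
    return tuple(max(0, min(255, round(c * factor))) for c in rgb)

print(dim_color((200, 180, 60)))  # (80, 72, 24), a visibly dimmer rendering of the same hue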
C. In response to applicant's argument with respect to dependent claims 24, 31, and 38 that the references do not teach interrupting background processes - When activating a prioritized application on a computing device, operating systems routinely suspend or interrupt background tasks in order to allocate system resources to the active application. Implementing such resource management represents a predictable system behavior when activating applications within the combined system.
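For purposes of illustration only, the following sketch (hypothetical Python; not drawn from any cited reference, and launch_selected_application is an invented helper) shows the kind of routine resource management described above: a lower-priority background process is suspended before the selected application is started, freeing computational resources for it. The SIGSTOP mechanism assumed here is POSIX-specific.

import os
import signal
import subprocess

def launch_selected_application(background_pid: int, app_cmd: list[str]) -> subprocess.Popen:
    """Suspend (interrupt) a background process, then start the selected application.

    SIGSTOP pauses the background process without terminating it; it can later be
    resumed with SIGCONT once the selected application no longer needs the resources.
    """
    os.kill(background_pid, signal.SIGSTOP)   # interrupt the background process
    return subprocess.Popen(app_cmd)          # launch the gaze-selected application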
D. In response to applicant's argument with respect to independent claim 29 that Dow does not teach selecting actions based on a majority of users - Dow teaches capturing gaze data from multiple users and determining shared gaze regions. Determining an action based on the gaze direction of the majority of users represents a predictable implementation of aggregating multiple gaze inputs.
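For purposes of illustration only, the following sketch (hypothetical Python; not drawn from Dow, and majority_gaze_target is an invented helper) shows a straightforward way to aggregate per-user gaze targets and act only when more than half of the tracked users are looking at the same target.

from collections import Counter
from typing import Optional

def majority_gaze_target(per_user_targets: dict[str, str]) -> Optional[str]:
    """Return the target gazed at by a strict majority of users, or None if there is none."""
    if not per_user_targets:
        return None
    target, votes = Counter(per_user_targets.values()).most_common(1)[0]
    return target if votes > len(per_user_targets) / 2 else None

targets = {"user-1": "app-window", "user-2": "app-window", "user-3": "toolbar"}
print(majority_gaze_target(targets))  # app-window (2 of 3 users constitute a majority)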
E. Although of different scope, independent claims 29 and 33 remain rejected for at least similar reasons as discussed above. Claims 22-28 depend from independent claim 21, claims 30-32 depend from independent claim 29, and claims 34-40 depend from independent claim 33. These dependent claims are rejected at least as depending from their respective rejected base claims. These dependent claims are also rejected for their own recited features which, in combination with those recited in their respective base claims, are taught or suggested by the cited references (see above rejections).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JENNIFER L ZUBAJLO whose telephone number is (571)270-1551. The examiner can normally be reached Monday - Thursday 10 am - 8 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KE XIAO can be reached at 571-272-7776. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JENNIFER L ZUBAJLO/Examiner, Art Unit 2627 3/16/2026
/KE XIAO/Supervisory Patent Examiner, Art Unit 2627