DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/23/2026 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 3-4, 8-14, and 25 is/are rejected under 35 U.S.C. 103 as being unpatentable over Oostergo et al. (US 11863905 B1) in view of Swierk et al. (US 20220232189 A1).
Regarding claim 1, Oostergo discloses a system (figs. 1-3), comprising:
a processor (202 of fig. 2, Col. 8, lines 47-60, as illustrated in FIG. 2, the remote computing resource(s) 112 may include multiple components, such as one or more processors 202 and memory 204. The memory 204 may include computer-readable media (“CRM”) and/or computer-readable storage media (“CRSM”), which may be any available physical media accessible by the processor 202 to execute instructions stored on the memory 204. In one basic implementation, CRSM may include random access memory (“RAM”) and Flash memory. In other implementations, CRSM may include, but is not limited to, read-only memory (“ROM”), electrically erasable programmable read-only memory (“EEPROM”), or any other medium which can be used to store the desired data/information and that can be accessed by the processor(s) 202); and
a memory coupled to the processor (204 of fig. 2), the memory having program instructions stored thereon that, upon execution by the processor (Col. 8, lines 47-60, as illustrated in FIG. 2, the remote computing resource(s) 112 may include multiple components, such as one or more processors 202 and memory 204. The memory 204 may include computer-readable media (“CRM”) and/or computer-readable storage media (“CRSM”), which may be any available physical media accessible by the processor 202 to execute instructions stored on the memory 204. In one basic implementation, CRSM may include random access memory (“RAM”) and Flash memory. In other implementations, CRSM may include, but is not limited to, read-only memory (“ROM”), electrically erasable programmable read-only memory (“EEPROM”), or any other medium which can be used to store the desired data/information and that can be accessed by the processor(s) 202), cause the system to:
obtain context information of a remote conferencing application (Col. 3, lines 1-14, Detection of the user within the environment may also be determined based on facial recognition techniques applied to an image or video that is captured by a camera within the environment and that depicts the user; Col. 22, lines 47-54, 500 of fig. 5, an example process 500 of generating and presenting an application 122 that depicts devices 108 within an environment 102 that may be activated and controlled by a user 104 within the environment 102 via the application 122) that enables a remote conferencing session in a conference room (Col. 4, lines 25-30, the user 104 may facilitate a video conference with one or more remote locations (e.g., a different environment 102, room, building, etc.) in which other users 104 that are participants of the video conference are present/located; a video conference may involve the reception and transmission of audio-video signals between users 104/devices 108 at different locations, which may allow for video communications between users 104 at different locations in real-time (or near real-time)),
wherein the context information of the remote conferencing application (122 of fig. 4A) comprises at least one of: a meeting start, a meeting end (Col. 9, line 60-Col. 10, line 19), a start of a remote user on video (Col. 4, lines 25-30, the user 104 may facilitate a video conference with one or more remote locations (e.g., a different environment 102, room, building, etc.) in which other users 104 that are participants of the video conference are present/located; a video conference may involve the reception and transmission of audio-video signals between users 104/devices 108 at different locations, which may allow for video communications between users 104 at different locations in real-time (or near real-time); Col. 10, lines 4-10, environments 102 at which those participants will be attending the meeting, provided that one or more participants will be participating from at least one location remote from the environment 102), an end of a remote user on video (104 of fig. 1, end of a remote user on video), a start of a slide presentation (see claim 1 of Oostergo; presentation of the application would obviously encompass a start of a slide presentation), or an end of a slide presentation (see claim 1 of Oostergo; presentation of the application would obviously encompass an end of a slide presentation); and
modify controls for one or more objects in the conference room based, at least in part, on the context information of the remote conferencing application (MODIFIED APPLICATION 232 of figs. 4A and 4B; Col. 23, lines 44-51, FIG. 6 illustrates a flow diagram of an example process 600 of presenting and updating an application 122 that depicts devices 108 within an environment that may be activated and controlled by a user 104 within the environment 102 via the application 122. Moreover, the following actions described with respect to FIG. 6 may be performed by the remote computing resource(s) 112, as illustrated with respect to FIGS. 1-4; Col. 24, lines 32-42, generating a modified application based on the selection and the device state, the application 122 may be dynamically updated to include a first selectable control to adjust a brightness level of the lights 322 and a second selectable control to turn off the lights 322, resulting in a modified application 232, and the modified application 232 with the dynamically updated selectable controls may then be presented to the user 104 via his/her user device 106),
wherein the one or more objects in the conference room comprise at least one of: lights of the conference room, a thermostat in the conference room, or a window blind in the conference room (108 of fig. 1, the device(s) 108; Col. 2, lines 19-50; Col. 3, line 60-Col. 4, line 20, Examples of the devices 108 may include video conferencing devices used to facilitate video conferences within the environment 102, and such video conferencing devices may include, or be associated with, displays, microphones, speakers, etc. The devices 108 or components may also include lights within the environment 102, a thermostat for maintaining/adjusting the temperature within the environment 102, blinds or curtains within the environment 102, or any other smart device that is configured to be communicatively coupled to other devices via one or more networks 110).
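By way of illustration only, and not as part of Oostergo's or Swierk's disclosures or of the claims as filed, the claimed context information and conference-room object controls could be modeled as in the following minimal Python sketch; every name in it is hypothetical:

from dataclasses import dataclass
from enum import Enum, auto

class ContextEvent(Enum):
    # The claimed context information of the remote conferencing application.
    MEETING_START = auto()
    MEETING_END = auto()
    REMOTE_USER_VIDEO_START = auto()
    REMOTE_USER_VIDEO_END = auto()
    SLIDE_PRESENTATION_START = auto()
    SLIDE_PRESENTATION_END = auto()

@dataclass
class RoomObjectControls:
    # Controls for the claimed conference-room objects (lights, thermostat, blinds).
    light_level: float           # 0.0 (off) through 1.0 (full brightness)
    thermostat_celsius: float    # target room temperature
    blinds_open_fraction: float  # 0.0 (closed) through 1.0 (fully open)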
It is noted that Oostergo is silent about an Information Handling System (IHS); automatically modify the controls for the one or more objects in the conference room; and wherein to automatically modify the controls for the one or more objects in the conference room, the program instructions, upon execution by the processor, further cause the IHS to: apply at least the context information of the remote conferencing application to a machine learning model; determine control settings for the one or more objects in the conference room from the machine learning model; and set the controls for the one or more objects in the conference room to the determined control settings.
Swierk teaches an Information Handling System (IHS) (figs. 1 and 2, [0061] As described herein, the ICCSMS 244 may optimize various settings for peripheral devices used in the capture of media samples played during user sessions for a multimedia multi-user collaboration application as well as receive input from various sensors and user inputs to apply, to various video frames, an appropriate number of AV processing instruction modules. The intelligent collaboration contextual session management system ICCSMS may modify media capture settings, AV processing instructions applied to such captured media samples, or the type of processor used to perform such AV processing instructions in order to optimize performance of the multimedia multi-user collaboration application on one or more information handling systems in an embodiment);
automatically modify the controls for the one or more objects in the conference room ([0016, 0043, 0089, 0091] the modification of the controls is automatically performed by the intelligent collaboration contextual session management system 144 (ICCSMS), [0139] some AV processing instruction adjustments may be automatically selected based on the image received from the video camera, see also the process of fig. 8);
wherein to automatically modify the controls for the one or more objects in the conference room, the program instructions, upon execution by the processor ([0016] for modifying the controls; [0043] The ICCSMS 144 in an embodiment may select changes to or modify various settings of various AV processing instruction modules among plural sets of media samples received from a transmitting information handling system during a video conference call in another embodiment; [0089] Each of the AV processing instruction modules (e.g., 443-1, 443-2, and 443-n) in an embodiment may be sets of algorithms or code instructions executed via the operating system (OS), using one of the processors of the information handling system 400 for modification of video data or audio data relating to streaming video conferencing applications; [0091] additional modification; [0139] some AV processing instruction adjustments may be automatically selected based on the image received from the video camera and the lighting or color vectors detected in the image captured may automatically cause the AV processing instruction adjustments to be sent to the ICCSMS neural network indicating how to visually transform the video frame), further cause the IHS to:
apply at least the context information of the remote conferencing application to a machine learning model ([0017-0019, 0048-0049, and 0133-0134] apply a machine learning model);
determine control settings for the one or more objects in the conference room from the machine learning model ([0043] The ICCSMS 144 in an embodiment may train and operate a neural network to determine optimized settings (e.g., media capture instructions) at a transmitting information handling system for audio or video capture, settings for execution of various AV processing instructions (e.g., AV processing instruction adjustments) on audio or video samples captured using those settings, or settings (e.g., offload instructions) for the type of processor used to perform such executions, for example. In another example embodiment, the ICCSMS 144 may operate to determine optimized settings at a receiving information handling system (e.g., 100) for execution of various AV processing instructions (e.g., AV processing instruction adjustments) on media samples (e.g., audio samples, video samples, or a combination of both) received from a transmitting information handling system during reprocessing and decoding of such media samples, or settings (e.g., offload instructions) for the type of process used to perform such executions. The ICCSMS 144 in an embodiment may select changes to or modify various settings of various AV processing instruction modules among plural sets of media samples received from a transmitting information handling system during a video conference call in another embodiment); and
set the controls for the one or more objects in the conference room to the determined control settings ([0043] The ICCSMS 144 in an embodiment may train and operate a neural network to determine optimized settings (e.g., media capture instructions) at a transmitting information handling system for audio or video capture, settings for execution of various AV processing instructions (e.g., AV processing instruction adjustments) on audio or video samples captured using those settings, or settings (e.g., offload instructions) for the type of processor used to perform such executions, for example).
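Purely as an illustrative sketch, and not as a characterization of how Swierk's ICCSMS is actually implemented, the three claimed steps could be expressed as follows, reusing the hypothetical ContextEvent and RoomObjectControls types sketched above and assuming a scikit-learn-style multi-output model and a hypothetical device_bus gateway object:

def automatically_modify_controls(event, room_state, model, device_bus):
    # 1. Apply at least the context information of the remote conferencing
    #    application (here, the event) to a machine learning model.
    features = [[event.value, *room_state]]
    # 2. Determine control settings for the one or more objects in the
    #    conference room from the machine learning model.
    light, temp, blinds = model.predict(features)[0]
    settings = RoomObjectControls(light_level=light,
                                  thermostat_celsius=temp,
                                  blinds_open_fraction=blinds)
    # 3. Set the controls for the one or more objects in the conference room
    #    to the determined control settings, via the hypothetical device bus.
    device_bus.apply(settings)
    return settings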
Taking the teachings of Oostergo and Swierk together as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system (figs. 1-4) of Oostergo with the Information Handling System (IHS) (fig. 2) and the automatic modification of controls ([0016]) of Swierk, in order to facilitate communication between the various applications and devices of the information handling system and thereby enable or improve the video conference experience for the user.
Regarding claim 3, Oostergo and Swierk teach the IHS of claim 1, Oostergo further teaches wherein the one or more objects in the conference room comprise one or more peripheral devices disposed in the conference room (Col. 2, lines 19-50, Col. 3, line 60-Col. 4, line 20, Examples of the devices 108 may include video conferencing devices used to facilitate video conferences within the environment 102, and such video conferencing devices may include, or be associated with, displays, microphones, speakers, etc. The devices 108 or components may also include lights within the environment 102, a thermostat for maintaining/adjusting the temperature within the environment 102, blinds or curtains within the environment 102, or any other smart device that is configured to be communicatively coupled to other devices via one or more networks 110).
Regarding claim 4, Oostergo and Swierk teach the IHS of claim 3, Oostergo further teaches wherein the one or more peripheral devices comprise at least one of: a shared display, a shared microphone, a shared speaker, or a shared camera (Col. 2, lines 19-50, Col. 3, line 60-Col. 4, line 20, Examples of the devices 108 may include video conferencing devices used to facilitate video conferences within the environment 102, and such video conferencing devices may include, or be associated with, displays, microphones, speakers, etc. The devices 108 or components may also include lights within the environment 102, a thermostat for maintaining/adjusting the temperature within the environment 102, blinds or curtains within the environment 102, or any other smart device that is configured to be communicatively coupled to other devices via one or more networks 110).
Regarding claim 8, Oostergo and Swierk teach the IHS of claim 1, Swierk further teaches wherein to apply at least the context information of the remote conferencing application to a machine learning model, the program instructions, upon execution by the processor, further cause the IHS to: apply the context information of the remote conferencing application and other context information to the machine learning model ([0017-0019, and 0043], [0044] In an embodiment in which the ICCSMS 144 operates to train a neural network, the information handling system 100 may represent the transmitting information handling system, the receiving information handling system, both of these, or an information handling system located remotely from both the transmitting and receiving information handling systems. The ICCSMS 144 in each of these embodiments may gather various input values from a plurality of information handling systems executing the MMCA 140 over time in order to determine settings for each of the plurality of information handling systems to decrease processing burden at each information handling system).
Regarding claim 9, Oostergo and Swierk teach the IHS of claim 8, Oostergo further teaches wherein the other context information comprises at least one of: time of the day, or number of people in the conference room (Col. 9, line 60-Col. 10, line 10, time start and end; Col. 10, lines 20-23, determine users).
Regarding claim 10, Oostergo and Swierk teach the IHS of claim 1, Oostergo further teaches wherein the context information of the remote conferencing application is received from an on-the-box agent of a video bar or a host IHS that is executing the remote conferencing application (122 of figs. 2, 3, and 4A-4B. See also Swierk [0016, 0020, 0025] for executing the remote application; in such an embodiment, each of the plurality of transmitting and receiving information handling systems participating within a user session of the multimedia multi-user collaboration application 140 may incorporate an agent or API for the ICCSMS 144).
Regarding claim 11, Oostergo and Swierk teach the IHS of claim 1, Swierk further teaches wherein the context information of the remote conferencing application is requested by an application integrator module from the remote conferencing application via an application programming interface (API) of the remote conferencing application ([0025] In such an embodiment, each of the plurality of transmitting and receiving information handling systems participating within a user session of the multimedia multi-user collaboration application 140 may incorporate an agent or API for the ICCSMS 144, [0036, 0082, and 0083]).
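For illustration only, a request for context information by an application integrator module via an API of the remote conferencing application might resemble the following sketch; the endpoint path and response fields are assumptions and are not drawn from Swierk or from any actual conferencing product:

import json
import urllib.request

def fetch_context_info(base_url: str, session_id: str) -> dict:
    # Hypothetical application-integrator request; the conferencing
    # application is assumed to expose a read-only session-context endpoint.
    url = f"{base_url}/sessions/{session_id}/context"
    with urllib.request.urlopen(url) as response:
        return json.load(response)  # e.g., {"event": "MEETING_START", "participants": 6}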
Regarding claim 12, Oostergo and Swierk teach the IHS of claim 1, Oostergo further teaches wherein the remote conferencing application enables a client IHS to participate in one or more aspects of the remote conference session (Col. 2, lines 15-18, As a result, the individual may conduct a live video meeting with individuals in a remote location, which may replicate or simulate the individuals all being within the same environment/room for the meeting; Col. 5, lines 25-30, For instance, using one or more devices 108 within the environment 102, the user 104 may facilitate a video conference with one or more remote locations (e.g., a different environment 102, room, building, etc.) in which other users 104 that are participants of the video conference are present/located).
Regarding claim 13, Oostergo and Swierk teach the IHS of claim 12, Oostergo further teaches wherein the one or more aspects comprises: receive video, transmit video, receive audio, transmit audio, or share an electronic file (Col. 5, lines 25-50).
Regarding claim 14, Oostergo and Swierk teach the IHS of claim 12, Oostergo further teaches wherein the remote conferencing application allows a participant to record one or more aspects of the remote conferencing meeting (122 of figs. 2, 4A and 4B, Col. 8, lines 7-38).
Regarding claim 25, Oostergo and Swierk teach the IHS of claim 1, and Oostergo further teaches wherein the context information of the remote conferencing application further comprises at least one of: a meeting start, a meeting end (Col. 9, line 60-Col. 10, line 19), an end of a remote user on video (104 of fig. 1, end of a remote user on video; Col. 4, lines 25-30, the user 104 may facilitate a video conference with one or more remote locations (e.g., a different environment 102, room, building, etc.) in which other users 104 that are participants of the video conference are present/located; a video conference may involve the reception and transmission of audio-video signals between users 104/devices 108 at different locations, which may allow for video communications between users 104 at different locations in real-time (or near real-time); Col. 10, lines 4-10, environments 102 at which those participants will be attending the meeting, provided that one or more participants will be participating from at least one location remote from the environment 102), a start of a slide presentation (see claim 1 of Oostergo; presentation of the application would obviously encompass a start of a slide presentation), or an end of a slide presentation (see claim 1 of Oostergo; presentation of the application would obviously encompass an end of a slide presentation).
Claim(s) 15, 18, and 21-24 is/are rejected under 35 U.S.C. 103 as being unpatentable over Oostergo et al. (US 11863905 B1) in view of Swierk et al. (US 20220232189 A1) as applied to claim 1 above, and further in view of Alrod et al. (US 20170324933 A1).
Regarding claims 15 and 18, Oostergo discloses a system (figs. 1-3), comprising:
a processor (202 of fig. 2, Col. 8, lines 47-60, as illustrated in FIG. 2, the remote computing resource(s) 112 may include multiple components, such as one or more processors 202 and memory 204. The memory 204 may include computer-readable media (“CRM”) and/or computer-readable storage media (“CRSM”), which may be any available physical media accessible by the processor 202 to execute instructions stored on the memory 204. In one basic implementation, CRSM may include random access memory (“RAM”) and Flash memory. In other implementations, CRSM may include, but is not limited to, read-only memory (“ROM”), electrically erasable programmable read-only memory (“EEPROM”), or any other medium which can be used to store the desired data/information and that can be accessed by the processor(s) 202); and
a memory coupled to the processor (204 of fig. 2), the memory having program instructions stored thereon that, upon execution by the processor (Col. 8, lines 47-60, as illustrated in FIG. 2, the remote computing resource(s) 112 may include multiple components, such as one or more processors 202 and memory 204. The memory 204 may include computer-readable media (“CRM”) and/or computer-readable storage media (“CRSM”), which may be any available physical media accessible by the processor 202 to execute instructions stored on the memory 204. In one basic implementation, CRSM may include random access memory (“RAM”) and Flash memory. In other implementations, CRSM may include, but is not limited to, read-only memory (“ROM”), electrically erasable programmable read-only memory (“EEPROM”), or any other medium which can be used to store the desired data/information and that can be accessed by the processor(s) 202), cause the system to:
obtain context information of a remote conferencing application that enables a remote conferencing session in a conference room (Col. 3, lines 1-14, Detection of the user within the environment may also be determined based on facial recognition techniques applied to an image or video that is captured by a camera within the environment and that depicts the user; Col. 22, lines 47-54, 500 of fig. 5, an example process 500 of generating and presenting an application 122 that depicts devices 108 within an environment 102 that may be activated and controlled by a user 104 within the environment 102 via the application 122),
wherein the context information of the remote conferencing application comprises at least one of: a meeting start, a meeting end, a start of a remote user on video, an end of a remote user on video, a start of a slide presentation, or an end of a slide presentation (Col 23, lines 36-43, the user 104 has entered the environment 102); and
modify controls for one or more objects in the conference room based, at least in part, on the context information of the remote conferencing application (MODIFIED APPLICATION 232 of figs. 4A and 4B; Col. 23, lines 44-51, FIG. 6 illustrates a flow diagram of an example process 600 of presenting and updating an application 122 that depicts devices 108 within an environment that may be activated and controlled by a user 104 within the environment 102 via the application 122. Moreover, the following actions described with respect to FIG. 6 may be performed by the remote computing resource(s) 112, as illustrated with respect to FIGS. 1-4; Col. 24, lines 32-42, generating a modified application based on the selection and the device state, the application 122 may be dynamically updated to include a first selectable control to adjust a brightness level of the lights 322 and a second selectable control to turn off the lights 322, resulting in a modified application 232, and the modified application 232 with the dynamically updated selectable controls may then be presented to the user 104 via his/her user device 106),
wherein the one or more objects in the conference room comprise at least one of: lights of the conference room, a thermostat in the conference room, or a window blind in the conference room (108 of fig. 1, the device(s) 108, Col. 2, lines 19-50, Col. 3, line 60-Col. 4, line 20, Examples of the devices 108 may include video conferencing devices used to facilitate video conferences within the environment 102, and such video conferencing devices may include, or be associated with, displays, microphones, speakers, etc. The devices 108 or components may also include lights within the environment 102, a thermostat for maintaining/adjusting the temperature within the environment 102, blinds or curtains within the environment 102, or any other smart device that is configured to be communicatively coupled to other devices via one or more networks 110).
It is noted that Oostergo is silent about an Information Handling System (IHS); automatically modify the controls for the one or more objects in the conference room; and wherein to automatically modify the controls for the one or more objects in the conference room, the program instructions, upon execution by the processor, further cause the IHS to: apply at least the context information of the remote conferencing application to a machine learning model; determine control settings for the one or more objects in the conference room from the machine learning model; and set the controls for the one or more objects in the conference room to the determined control settings.
Swierk teaches an Information Handling System (IHS) (figs. 1 and 2, [0061] As described herein, the ICCSMS 244 may optimize various settings for peripheral devices used in the capture of media samples played during user sessions for a multimedia multi-user collaboration application as well as receive input from various sensors and user inputs to apply, to various video frames, an appropriate number of AV processing instruction modules. The intelligent collaboration contextual session management system ICCSMS may modify media capture settings, AV processing instructions applied to such captured media samples, or the type of processor used to perform such AV processing instructions in order to optimize performance of the multimedia multi-user collaboration application on one or more information handling systems in an embodiment);
automatically modify the controls for the one or more objects in the conference room ([0016, 0043, 0089, 0091] the modification of the controls is automatically performed by the intelligent collaboration contextual session management system 144 (ICCSMS), [0139] some AV processing instruction adjustments may be automatically selected based on the image received from the video camera, see also the process of fig. 8),
wherein to automatically modify the controls for the one or more objects in the conference room, the program instructions, upon execution by the processor ([0016] for modifying the controls; [0043] The ICCSMS 144 in an embodiment may select changes to or modify various settings of various AV processing instruction modules among plural sets of media samples received from a transmitting information handling system during a video conference call in another embodiment; [0089] Each of the AV processing instruction modules (e.g., 443-1, 443-2, and 443-n) in an embodiment may be sets of algorithms or code instructions executed via the operating system (OS), using one of the processors of the information handling system 400 for modification of video data or audio data relating to streaming video conferencing applications; [0091] additional modification; [0139] some AV processing instruction adjustments may be automatically selected based on the image received from the video camera and the lighting or color vectors detected in the image captured may automatically cause the AV processing instruction adjustments to be sent to the ICCSMS neural network indicating how to visually transform the video frame), further cause the IHS to:
apply at least the context information of the remote conferencing application to a machine learning model ([0017-0019, 0048-0049, and 0133-0134] apply a machine learning model);
determine control settings for the one or more objects in the conference room from the machine learning model ([0043] The ICCSMS 144 in an embodiment may train and operate a neural network to determine optimized settings (e.g., media capture instructions) at a transmitting information handling system for audio or video capture, settings for execution of various AV processing instructions (e.g., AV processing instruction adjustments) on audio or video samples captured using those settings, or settings (e.g., offload instructions) for the type of processor used to perform such executions, for example. In another example embodiment, the ICCSMS 144 may operate to determine optimized settings at a receiving information handling system (e.g., 100) for execution of various AV processing instructions (e.g., AV processing instruction adjustments) on media samples (e.g., audio samples, video samples, or a combination of both) received from a transmitting information handling system during reprocessing and decoding of such media samples, or settings (e.g., offload instructions) for the type of process used to perform such executions. The ICCSMS 144 in an embodiment may select changes to or modify various settings of various AV processing instruction modules among plural sets of media samples received from a transmitting information handling system during a video conference call in another embodiment); and
set the controls for the one or more objects in the conference room to the determined control settings ([0043] The ICCSMS 144 in an embodiment may train and operate a neural network to determine optimized settings (e.g., media capture instructions) at a transmitting information handling system for audio or video capture, settings for execution of various AV processing instructions (e.g., AV processing instruction adjustments) on audio or video samples captured using those settings, or settings (e.g., offload instructions) for the type of processor used to perform such executions, for example).
Taking the teachings of Oostergo and Swierk together as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system (figs. 1-4) of Oostergo with the Information Handling System (IHS) (fig. 2) and the automatic modification of controls ([0016]) of Swierk, in order to facilitate communication between the various applications and devices of the information handling system and thereby enable or improve the video conference experience for the user.
It is further noted that Oostergo and Swierk are silent about applying the context information of the remote conferencing application to a rule table and determining control settings for the one or more objects in the conference room from the rule table.
Alrod teaches apply the context information of the remote conferencing application to a rule table ([0027] the control unit can create automatically a dynamic lighting experience by analyzing the meeting content and identifying the sources of attention, or focal points or points of interest, to selectively provide them with different illumination than other objects that are not of interest; [0028] the system and method can enable a videoconferencing system to auto-adjust the lighting to enhance remote and local user experience in many ways, including spotlighting areas of attention, such as an active speaker or whiteboard. The use of a time threshold and rules typically cause the processor to perform only a few adjustments throughout the entire meeting that will provide an optimal set of lighting conditions according to a predetermined definition or set of criteria; [0089] a rule set regarding optimal lighting configuration for selected locations in the meeting area. For example, a lookup table can then be generated mapping lighting element state and setting against lighting effects at different locations in the meeting area. The lookup table can be used to produce selected lighting effects based on focal point determination) and
determine control settings for the one or more objects in the conference room from the rule table ([0088 and 0089] a lookup table can then be generated mapping lighting element state and setting against lighting effects at different locations in the meeting area. The lookup table can be used to produce selected lighting effects based on focal point determination, [0096] the imaging controller 256 determines a time interval since a last change by the lighting controller of a lighting configuration or setting in the meeting area).
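For illustration only, the lookup-table approach described by Alrod ([0089]) suggests a rule table of the following general shape, again reusing the hypothetical types sketched above; the specific entries are invented:

# Hypothetical rule table mapping a context event to predetermined settings.
RULE_TABLE = {
    ContextEvent.SLIDE_PRESENTATION_START: RoomObjectControls(0.2, 21.0, 0.0),
    ContextEvent.SLIDE_PRESENTATION_END:   RoomObjectControls(0.8, 21.0, 0.5),
    ContextEvent.MEETING_END:              RoomObjectControls(0.0, 19.0, 1.0),
}

def settings_from_rule_table(event, default):
    # Determine control settings for the one or more objects in the
    # conference room from the rule table, falling back to a default.
    return RULE_TABLE.get(event, default)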
Taking the teachings of Oostergo, Swierk, and Alrod together as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Alrod's rule table for conference-room component settings into the settings of Oostergo, in order to improve performance, achieve ease of implementation, and reduce cost ([0116] of Alrod).
Regarding claims 21 and 23, Oostergo, Swierk, and Alrod teach the hardware memory device of claim 15, wherein the context information of the remote conferencing application is received from an on-the-box agent of a video bar or a host IHS that is executing the remote conferencing application (122 of figs. 2, 3, and 4A-4B. See also Swierk [0016, 0020, 0025] for executing the remote application; in such an embodiment, each of the plurality of transmitting and receiving information handling systems participating within a user session of the multimedia multi-user collaboration application 140 may incorporate an agent or API for the ICCSMS 144).
Regarding claims 22 and 24, Oostergo, Swierk, and Alrod teach the hardware memory device of claim 15, Swierk further teaches wherein the context information of the remote conferencing application is requested by an application integrator module from the remote conferencing application via an application programming interface (API) of the remote conferencing application ([0025] In such an embodiment, each of the plurality of transmitting and receiving information handling systems participating within a user session of the multimedia multi-user collaboration application 140 may incorporate an agent or API for the ICCSMS 144, [0036, 0082, and 0083]).
Response to Arguments
Applicant's arguments filed 01/23/2026 have been fully considered but they are not persuasive.
The applicant argues that Oostergo does not teach "wherein the context information of the remote conferencing application comprises at least one of: a meeting start, a meeting end, a start of a remote user on video, an end of a remote user on video, a start of a slide presentation, or an end of a slide presentation" in claims 1 and 25. It is noted that new claim 25 recites the same limitations as claim 1, except for "a start of a remote user on video".
The examiner disagrees with the applicant. It is submitted that Oostergo teaches obtaining context information of a remote conferencing application (Col. 3, lines 1-14, Detection of the user within the environment may also be determined based on facial recognition techniques applied to an image or video that is captured by a camera within the environment and that depicts the user; Col. 22, lines 47-54, 500 of fig. 5, an example process 500 of generating and presenting an application 122 that depicts devices 108 within an environment 102 that may be activated and controlled by a user 104 within the environment 102 via the application 122) that enables a remote conferencing session in a conference room (Col. 4, lines 25-30, the user 104 may facilitate a video conference with one or more remote locations (e.g., a different environment 102, room, building, etc.) in which other users 104 that are participants of the video conference are present/located; a video conference may involve the reception and transmission of audio-video signals between users 104/devices 108 at different locations, which may allow for video communications between users 104 at different locations in real-time (or near real-time)), wherein the context information of the remote conferencing application (122 of fig. 4A) comprises at least one of: a meeting start, a meeting end (Col. 9, line 60-Col. 10, line 19), a start of a remote user on video (Col. 4, lines 25-30, the user 104 may facilitate a video conference with one or more remote locations (e.g., a different environment 102, room, building, etc.) in which other users 104 that are participants of the video conference are present/located; a video conference may involve the reception and transmission of audio-video signals between users 104/devices 108 at different locations, which may allow for video communications between users 104 at different locations in real-time (or near real-time); Col. 10, lines 4-10, environments 102 at which those participants will be attending the meeting, provided that one or more participants will be participating from at least one location remote from the environment 102), an end of a remote user on video (104 of fig. 1, end of a remote user on video), a start of a slide presentation (see claim 1 of Oostergo; presentation of the application would obviously encompass a start of a slide presentation), or an end of a slide presentation (see claim 1 of Oostergo; presentation of the application would obviously encompass an end of a slide presentation).
The applicant further argues that the cited art does not teach "automatically modify controls for one or more objects in the conference room"; in other words, the applicant contends that the controls for one or more objects in the conference room are not automatically modified (or even simply modified, for that matter) based on the context information (claim 1, lines 11-12).
The examiner disagrees with the applicant. It is submitted that Oostergo teaches modifying controls for one or more objects in the conference room (Col. 21, lines 10-37, the selectable controls that are depicted within the modified application 232 and that are presented to the user 104 may include a selectable control to adjust the volume 306 of one or more speakers associated with the video conferencing device(s) 302).
Swierk teaches automatically modify the controls for the one or more objects in the conference room ([0016, 0043, 0089, 0091] the modification of the controls is automatically performed by the intelligent collaboration contextual session management system 144 (ICCSMS); [0139] some AV processing instruction adjustments may be automatically selected based on the image received from the video camera; see also the process of fig. 8); wherein to automatically modify the controls for the one or more objects in the conference room, the program instructions, upon execution by the processor ([0016] for modifying the controls; [0043] The ICCSMS 144 in an embodiment may select changes to or modify various settings of various AV processing instruction modules; [0089] for modification of video data or audio data relating to streaming video conferencing applications; [0091] additional modification; [0139] automatically cause the AV processing instruction adjustments). Therefore, it would have been obvious to one of ordinary skill in the art to incorporate Swierk's automatic modification of controls into the modified application and the remote conferencing application of Oostergo, so as to automatically control the devices within the conference room and thereby improve the video conferencing experience.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Davis et al. (US 10958457 B1) discloses that, when the service provider system 350 determines that video will be used for a meeting, device settings may be determined for various devices that can affect the lighting conditions in a room.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUNG T VO whose telephone number is (571)272-7340. The examiner can normally be reached Monday-Friday 6:30 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton, can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
TUNG T. VO
Primary Examiner
Art Unit 2425
/TUNG T VO/Primary Examiner, Art Unit 2425