DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
1. Claim 8 recites “a computer system, the computer system comprising: one or more processors, one or more computer-readable memories, one or more computer-readable tangible storage medium, and program instructions stored on at least one of the one or more tangible storage medium for execution by at least one of the one or more processors via at least one of the one or more memories.”
Claim 15 recites “A computer program product, the computer program product comprising: one or more computer-readable tangible storage medium and program instructions stored on at least one of the one or more tangible storage medium…”
Applicant's specification states in paragraph [0015]: “A computer readable storage medium, as that term is used in the present disclosure, is not to be construed as storage in the form of transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide, light pulses passing through a fiber optic cable, electrical signals communicated through a wire, and/or other transmission media. As will be understood by those of skill in the art, data is typically moved at some occasional points in time during normal operations of a storage device, such as during access, de-fragmentation or garbage collection, but this does not render the storage device as transitory because the data is not transitory while it is stored.” Thus, the computer storage media excludes signals per se.
On the basis of Applicant's specification, the Examiner finds that the claimed storage medium would constitute a non-transitory medium.
Claim Rejections - 35 USC § 102
2. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
3. Claim(s) 1, 4, 5, 8, 11, 12, 15, 18, 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Palamadai et al. (US 2024/0097925).
Regarding claim 1, Palamadai teaches a processor-implemented method, the method comprising: initiating a video call; determining one or more sets of user preferences for each of one or more users; identifying visual characteristics of a main video stream on the video call (see fig. 1-5, 7, ¶ 0062, 0087, 0100-0101. The audio-visual management component can automatically control and adapt the way respective participants look and sound to others based on their individual preferences and the context of the session by automatically applying various audio-visual filters to the data streams. The LSM component can also employ ML and AI techniques to learn user preferences regarding preferred audio-visual parameters and settings and apply these preferences in future video conferencing sessions in which the users participate. The interface adaptation component can also automatically determine, control and adapt the display configuration of respective participants' graphical user interfaces (GUIs) based on user preference, meeting context and contextual events that occur over the course of a session. The GUI of respective participants in a video conferencing session can comprise separate window or thumbnail views of the other participants, including live video feeds, static image content and/or pre-recorded video content. The adaptation component can selectively choose what view to display and when based on the context of the session and user preferences.
The interface adaptation component can arrange the views to include a view of whoever is currently speaking in a primary position on the display screen, wherein the interface adaptation component can also adjust the size of the respective window views as a function of relevancy and priority throughout the duration of the session, and wherein the interface adaptation component can control increasing the size of a view of the current speaker relative to rendered views of other participants.); and adjusting one or more visual characteristics of the main video stream according to the one or more sets of user preferences corresponding to each user to create a custom video stream for each user (see fig. 1-5, 7, ¶ 0062, 0087, 0100-0101. The interface adaptation component can also automatically determine, control and adapt the display configuration of respective participants' graphical user interfaces (GUIs) based on user preference, meeting context and contextual events that occur over the course of a session. The GUI of respective participants in a video conferencing session can comprise separate window or thumbnail views of the other participants, including live video feeds, static image content and/or pre-recorded video content. The adaptation component can selectively choose what view to display and when based on the context of the session and user preferences, wherein the interface adaptation component can also adjust the size of the respective window views as a function of relevancy and priority throughout the duration of the session, and wherein the interface adaptation component can control increasing the size of a view of the current speaker relative to rendered views of other participants.).
Under a broad reasonable interpretation of the claimed invention, the visual characteristics can be the adaptation of the user interface to provide different viewing arrangements based on user preferences. Therefore, consistent with the inventive concept of the claimed invention, the system makes changes to the display for each user based on user preferences.
Regarding claim 4, Palamadai teaches the method of claim 1, wherein the determining is performed by a process of machine learning (see fig. 1-5, 7, ¶ 0101. The artificial intelligence component can also employ ML and AI techniques to learn user preferences regarding preferred audio-visual parameters and settings and apply these preferences in future video conferencing sessions in which the users participate).
Regarding claim 5, Palamadai teaches the method of claim 1, further comprising: selecting one or more points of focus in the main video stream; and adjusting the relative size of the point of focus (see fig. 1-5, 7, ¶ 0062, 0087, 0100-0101. The interface adaptation component can also automatically determine, control and adapt the display configuration of respective participants' graphical user interfaces (GUIs) based on user preference, meeting context and contextual events that occur over the course of a session. The GUI of respective participants in a video conferencing session can comprise separate window or thumbnail views of the other participants, including live video feeds, static image content and/or pre-recorded video content. The adaptation component can selectively choose what view to display and when based on the context of the session and user preferences, wherein the interface adaptation component can also adjust the size of the respective window views as a function of relevancy and priority throughout the duration of the session, and wherein the interface adaptation component can control increasing the size of a view of the current speaker relative to rendered views of other participants.).
Regarding claim 8, Palamadai teaches a computer system, the computer system comprising: one or more processors, one or more computer-readable memories, one or more computer-readable tangible storage medium, and program instructions stored on at least one of the one or more tangible storage medium for execution by at least one of the one or more processors via at least one of the one or more memories, wherein the computer system is capable of performing a method comprising: initiating a video call; determining one or more sets of user preferences for each of one or more users; identifying visual characteristics of a main video stream on the video call (see fig. 1-5, 7, ¶ 0062, 0087, 0100-0101. The audio-visual management component can automatically control and adapt the way respective participants look and sound to others based on their individual preferences and the context of the session by automatically applying various audio-visual filters to the data streams. The LSM component can also employ ML and AI techniques to learn user preferences regarding preferred audio-visual parameters and settings and apply these preferences in future video conferencing sessions in which the users participate. The interface adaptation component can also automatically determine, control and adapt the display configuration of respective participants' graphical user interfaces (GUIs) based on user preference, meeting context and contextual events that occur over the course of a session. The GUI of respective participants in a video conferencing session can comprise separate window or thumbnail views of the other participants, including live video feeds, static image content and/or pre-recorded video content. The adaptation component can selectively choose what view to display and when based on the context of the session and user preferences.
The interface adaptation component can arrange the views to include a view of whoever is currently speaking in a primary position on the display screen, wherein the interface adaptation component can also adjust the size of the respective window views as a function of relevancy and priority throughout the duration of the session, and wherein the interface adaptation component can control increasing the size of a view of the current speaker relative to rendered views of other participants.); and adjusting one or more visual characteristics of the main video stream according to the one or more sets of user preferences corresponding to each user to create a custom video stream for each user (see fig. 1-5, 7, ¶ 0062, 0087, 0100-0101. The interface adaptation component can also automatically determine, control and adapt the display configuration of respective participants' graphical user interfaces (GUIs) based on user preference, meeting context and contextual events that occur over the course of a session. The GUI of respective participants in a video conferencing session can comprise separate window or thumbnail views of the other participants, including live video feeds, static image content and/or pre-recorded video content. The adaptation component can selectively choose what view to display and when based on the context of the session and user preferences, wherein the interface adaptation component can also adjust the size of the respective window views as a function of relevancy and priority throughout the duration of the session, and wherein the interface adaptation component can control increasing the size of a view of the current speaker relative to rendered views of other participants.).
Under a broad reasonable interpretation of the claimed invention, the visual characteristics can be the adaptation of the user interface to provide different viewing arrangements based on user preferences. Therefore, consistent with the inventive concept of the claimed invention, the system makes changes to the display for each user based on user preferences.
Regarding claim 11, Palamadai teaches the computer system of claim 8, wherein the determining is performed by a process of machine learning (see fig. 1-5, 7, ¶ 0101. The artificial intelligence component can also employ ML and AI techniques to learn user preferences regarding preferred audio-visual parameters and settings and apply these preferences in future video conferencing sessions in which the users participate).
Regarding claim 12, Palamadai teaches the computer system of claim 8, further comprising: selecting one or more points of focus in the main video stream; and adjusting the relative size of the point of focus (see fig. 1-5, 7, ¶ 0062, 0087, 0100-0101. The interface adaptation component can also automatically determine, control and adapt the display configuration of respective participants' graphical user interfaces (GUIs) based on user preference, meeting context and contextual events that occur over the course of a session. The GUI of respective participants in a video conferencing session can comprise separate window or thumbnail views of the other participants, including live video feeds, static image content and/or pre-recorded video content. The adaptation component can selectively choose what view to display and when based on the context of the session and user preferences, wherein the interface adaptation component can also adjust the size of the respective window views as a function of relevancy and priority throughout the duration of the session, and wherein the interface adaptation component can control increasing the size of a view of the current speaker relative to rendered views of other participants.).
Regarding claim 15, Palamadai teaches a computer program product, the computer program product comprising: one or more computer-readable tangible storage medium and program instructions stored on at least one of the one or more tangible storage medium, the program instructions executable by a processor capable of performing a method, the method comprising: initiating a video call; determining one or more sets of user preferences for each of one or more users; identifying visual characteristics of a main video stream on the video call (see fig. 1-5, 7, ¶ 0062, 0087, 0100-0101. The audio-visual management component can automatically control and adapt the way respective participants look and sound to others based on their individual preferences and the context of the session by automatically applying various audio-visual filters to the data streams. The LSM component can also employ ML and AI techniques to learn user preferences regarding preferred audio-visual parameters and settings and apply these preferences in future video conferencing sessions in which the users participate. The interface adaptation component can also automatically determine, control and adapt the display configuration of respective participants' graphical user interfaces (GUIs) based on user preference, meeting context and contextual events that occur over the course of a session. The GUI of respective participants in a video conferencing session can comprise separate window or thumbnail views of the other participants, including live video feeds, static image content and/or pre-recorded video content. The adaptation component can selectively choose what view to display and when based on the context of the session and user preferences.
The interface adaptation component can arrange the views to include a view of whoever is currently speaking in a primary position on the display screen, wherein the interface adaptation component can also adjust the size of the respective window views as a function of relevancy and priority throughout the duration of the session, and wherein the interface adaptation component can control increasing the size of a view of the current speaker relative to rendered views of other participants.); and adjusting one or more visual characteristics of the main video stream according to the one or more sets of user preferences corresponding to each user to create a custom video stream for each user (see fig. 1-5, 7, ¶ 0062, 0087, 0100-0101. The interface adaptation component can also automatically determine, control and adapt the display configuration of respective participants' graphical user interfaces (GUIs) based on user preference, meeting context and contextual events that occur over the course of a session. The GUI of respective participants in a video conferencing session can comprise separate window or thumbnail views of the other participants, including live video feeds, static image content and/or pre-recorded video content. The adaptation component can selectively choose what view to display and when based on the context of the session and user preferences, wherein the interface adaptation component can also adjust the size of the respective window views as a function of relevancy and priority throughout the duration of the session, and wherein the interface adaptation component can control increasing the size of a view of the current speaker relative to rendered views of other participants.).
Under a broad reasonable interpretation of the claimed invention, the visual characteristics can be the adaptation of the user interface to provide different viewing arrangements based on user preferences. Therefore, consistent with the inventive concept of the claimed invention, the system makes changes to the display for each user based on user preferences.
Regarding claim 18, Palamadai teaches the computer program product of claim 15, wherein the determining is performed by a process of machine learning (see fig. 1-5, 7, ¶ 0101. The artificial intelligence component can also employ ML and AI techniques to learn user preferences regarding preferred audio-visual parameters and settings and apply these preferences in future video conferencing sessions in which the users participate).
Regarding claim 19, Palamadai teaches the computer program product of claim 15, further comprising: selecting one or more points of focus in the main video stream; and adjusting the relative size of the point of focus (see fig. 1-5, 7, ¶ 0062, 0087, 0100-0101. The interface adaptation component can also automatically determine, control and adapt the display configuration of respective participants' graphical user interfaces (GUIs) based on user preference, meeting context and contextual events that occur over the course of a session. The GUI of respective participants in a video conferencing session can comprise separate window or thumbnail views of the other participants, including live video feeds, static image content and/or pre-recorded video content. The adaptation component can selectively choose what view to display and when based on the context of the session and user preferences, wherein the interface adaptation component can also adjust the size of the respective window views as a function of relevancy and priority throughout the duration of the session, and wherein the interface adaptation component can control increasing the size of a view of the current speaker relative to rendered views of other participants.).
Claim Rejections - 35 USC § 103
4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
5. Claim(s) 2, 9, 16 are rejected under 35 U.S.C. 103 as being unpatentable over Palamadai et al. (US 2024/0097925) in view of Sullivan et al. (US 2021/0377202).
Regarding claim 2, Palamadai teaches the method of claim 1, but does not teach wherein the one or more sets of user preferences include preferences regarding a text size.
Sullivan teaches wherein the one or more sets of user preferences include preferences regarding a text size (see ¶ 0027. The text size may be adapted for the specific intended recipient based on preferences on the user device. If the intended recipient has enlarged text setting selected on their electronic device, the scaling will be larger as to provide enhanced readability for the intended recipient.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palamadai to incorporate user preferences that include a text size for the user interface. The modification provides that the text size may be adapted for the specific intended recipient based on preferences on the user device.
Regarding claim 9, Palamadai teaches the computer system of claim 8, but does not teach wherein the one or more sets of user preferences include preferences regarding a text size.
Sullivan teaches wherein the one or more sets of user preferences include preferences regarding a text size (see ¶ 0027. The text size may be adapted for the specific intended recipient based on preferences on the user device. If the intended recipient has enlarged text setting selected on their electronic device, the scaling will be larger as to provide enhanced readability for the intended recipient.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palamadai to incorporate user preferences that include a text size for the user interface. The modification provides that the text size may be adapted for the specific intended recipient based on preferences on the user device.
Regarding claim 16, Palamadai teaches the computer program product of claim 15, but does not teach wherein the one or more sets of user preferences include preferences regarding a text size.
Sullivan teaches wherein the one or more sets of user preferences include preferences regarding a text size (see ¶ 0027. The text size may be adapted for the specific intended recipient based on preferences on the user device. If the intended recipient has enlarged text setting selected on their electronic device, the scaling will be larger as to provide enhanced readability for the intended recipient.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palamadai to incorporate user preferences that include a text size for the user interface. The modification provides that the text size may be adapted for the specific intended recipient based on preferences on the user device.
6. Claim(s) 3, 10, 17 are rejected under 35 U.S.C. 103 as being unpatentable over Palamadai et al. (US 2024/0097925) in view of Chinnapatlolla et al. (US 2017/0093931).
Regarding claim 3, Palamadai teaches the method of claim 1, but does not teach wherein the identifying is performed by a process of image recognition including text recognition.
Chinnapatlolla teaches wherein the identifying is performed by a process of image recognition including text recognition (see ¶ 0039. The conferencing setup program, which would be based on user preferences, would perform image recognition and text transcript analysis (recognition) based on those preferences.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palamadai to incorporate image recognition and text recognition (transcription) based on user preferences. The modification provides image and text analysis based on user preferences.
Regarding claim 10, Palamadai teaches the computer system of claim 8, but does not teach wherein the identifying is performed by a process of image recognition including text recognition.
Chinnapatlolla teaches wherein the identifying is performed by a process of image recognition including text recognition (see ¶ 0039. The conferencing setup program, which would be based on user preferences, would perform image recognition and text transcript analysis (recognition) based on those preferences.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palamadai to incorporate image recognition and text recognition (transcription) based on user preferences. The modification provides image and text analysis based on user preferences.
Regarding claim 17, Palamadai teaches the computer program product of claim 15, but does not teach wherein the identifying is performed by a process of image recognition including text recognition.
Chinnapatlolla teaches wherein the identifying is performed by a process of image recognition including text recognition (see ¶ 0039. The conferencing setup program, which would be based on user preferences, would perform image recognition and text transcript analysis (recognition) based on those preferences.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palamadai to incorporate image recognition and text recognition (transcription) based on user preferences. The modification provides image and text analysis based on user preferences.
7. Claim(s) 6, 13, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Palamadai et al. (US 2024/0097925) in view of Meghwani et al. (US 2016/0057387).
Regarding claim 6, Palamadai teaches the method of claim 1, but does not teach wherein the main video stream is video of a presenter's screen.
Meghwani teaches wherein the main video stream is video of a presenter's screen (see ¶ 0002. The presenter will have two video streams at his disposal. The first stream may be considered a main video stream, which would show the output of the presenter's camera, e.g., the presenter's head shot. A second stream would be used for a presentation. Examples of a presentation could include a view of the presenter's computer desktop or the output of a particular application, such as PowerPoint™ or a web browser.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palamadai to incorporate the main video stream being the presenter's stream. The modification provides for the presenter's stream being the main conferencing stream.
Regarding claim 13, Palamadai teaches the computer system of claim 8, but does not teach wherein the main video stream is video of a presenter's screen.
Meghwani teaches wherein the main video stream is video of a presenter's screen (see ¶ 0002. The presenter will have two video streams at his disposal. The first stream may be considered a main video stream, which would show the output of the presenter's camera, e.g., the presenter's head shot. A second stream would be used for a presentation. Examples of a presentation could include a view of the presenter's computer desktop or the output of a particular application, such as PowerPoint™ or a web browser.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palamadai to incorporate the main video stream being the presenter's stream. The modification provides for the presenter's stream being the main conferencing stream.
Regarding claim 20, Palamadai teaches the computer program product of claim 15, but does not teach wherein the main video stream is video of a presenter's screen.
Meghwani teaches wherein the main video stream is video of a presenter's screen (see ¶ 0002. The presenter will have two video streams at his disposal. The first stream may be considered a main video stream, which would show the output of the presenter's camera, e.g., the presenter's head shot. A second stream would be used for a presentation. Examples of a presentation could include a view of the presenter's computer desktop or the output of a particular application, such as PowerPoint™ or a web browser.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palamadai to incorporate the main video stream being the presenter's stream. The modification provides for the presenter's stream being the main conferencing stream.
8. Claim(s) 7, 14 are rejected under 35 U.S.C. 103 as being unpatentable over Palamadai et al. (US 2024/0097925) in view of Gould et al. (US 2010/0323728).
Regarding claim 7, Palamadai teaches the method of claim 1, but does not teach wherein the one or more sets of user preferences include preferences regarding text colors.
Gould teaches wherein the one or more sets of user preferences include preferences regarding text colors (see ¶ 0092. The colors or other modes of differentiation may be established according to user-defined preferences. Different colors of text may also be used for different types of messages: one or more colors for text attributable to a speaker during the conversation, other colors for greetings, and still other colors for system messages and the like.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palamadai to incorporate user-defined preferences for different text colors. The modification provides for different text colors based on user preferences.
Regarding claim 14, Palamadai teaches the computer system of claim 8, but does not teach wherein the one or more sets of user preferences include preferences regarding text colors.
Gould teaches wherein the one or more sets of user preferences include preferences regarding text colors (see ¶ 0092. The colors or other modes of differentiation may be established according to user-defined preferences. Different colors of text may also be used for different types of messages: one or more colors for text attributable to a speaker during the conversation, other colors for greetings, and still other colors for system messages and the like.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palamadai to incorporate user-defined preferences for different text colors. The modification provides for different text colors based on user preferences.
Conclusion
9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASSAD MOHAMMED whose telephone number is (571) 270-7253. The examiner can normally be reached 9:00 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Duc Nguyen can be reached at 571-272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ASSAD MOHAMMED/Examiner, Art Unit 2691
/DUC NGUYEN/Supervisory Patent Examiner, Art Unit 2691