DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-5 are pending.
Claim 1 is amended.
Claim 4 is cancelled.
Claim 5 is new.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Springer et al. (US 11265181) in view of Kim et al. (US 20220329687) and Hedge et al. (US 20100085416 A1).
Regarding claim 1:
Springer teaches:
A terminal apparatus comprising (Springer [0094] The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.):
a communication interface (Springer [0025] The first user's client device 150 and additional users' client device(s) 160 are devices with a display configured to present information to a user of the device. In some embodiments, the first user's client device 150 and additional users' client device(s) 160 present information in the form of a user interface (UI) with UI elements or components.);
a display (Springer [0025] The first user's client device 150 and additional users' client device(s) 160 are devices with a display configured to present information to a user of the device. In some embodiments, the first user's client device 150 and additional users' client device(s) 160 present information in the form of a user interface (UI) with UI elements or components.);
an input interface comprising a touch panel superimposed on the display (Springer [0048] At step 212, the system receives annotation inputs corresponding to at least one of the composite videos. In some embodiments, the annotation inputs may be provided via touch or stylus input from one or more client devices.);
an imager configured to capture images of a user (Springer [0025] In some embodiments, first user's client device 150 and/or additional users' client device(s) 160 include an embedded or connected camera which is capable of generating and transmitting video content in real time or substantially real time.);
and a controller configured to communicate using the communication interface (Springer [0091] The computer 500 may include peripherals 505. Peripherals 505 may include input peripherals such as a keyboard, mouse, trackball, video camera, microphone, and other input devices.),
wherein the controller is configured to: receive, from another terminal apparatus, information for generating a model image representing another user who uses the another terminal apparatus based on a captured image of the another user, and information on a drawn image that is drawn by the another user on a touch panel of the another terminal apparatus, and to display, on the display, an image in which the model image and the drawn image are superimposed on each other (Springer [0008] An annotation layer overlaid on top of the user's image allows for a presenter to produce live written annotations in conjunction with their own video presentations, particularly when accessing a “self-view” interface on their client device. In some use cases, this combination of visual elements can be reproduced both locally for near-end participants, as well as remotely for far-end participants, thus democratizing the instructional experience. [0009] One embodiment relates to a method for providing multi-point video presentations with live annotations within a communication platform. First, the system receives video feeds depicting imagery of a number of users, with the video feeds each having multiple video frames. The system then determines a boundary about each user in the video feeds, with the boundaries each having an interior portion and an exterior portion. The system provides a media background for the exterior portions, then generates a composite video for each of the feeds which is displayed on client devices. The composite videos depict the corresponding media background in a first layer and each user from the interior portion in a second layer overlaid on top of the first layer. The system then determines that one or more client devices have annotation permissions, and receives one or more annotation inputs corresponding to at least one of the composite videos. 
Finally, the system updates at least one of the composite videos to additionally depict the annotation inputs within a third layer. [0044] Video compositing techniques and processes may include, e.g., masking, digital image manipulation, background projection, flipping or reversing (e.g., flipping the imagery of the annotations so that the annotations are readable by viewers and do not appear reversed), deep image compositing, alpha compositing, digital matte compositing, computer vision models and algorithms, or any other suitable video compositing techniques and processes. [0053] Upon receiving the annotation inputs in step 210, the system depicts the annotations as though they are overlaid in front of the user. For example, in a live stream, the user may draw on the client device directly on top of a self-view of the user, i.e., the user is watching himself present on the client device in real time. As the user draws, the annotations appear as though they are being drawn directly in front of the user. In some embodiments, the annotations may correspond to the media background, including markings of various parts of the media background. For example, a user may circle a bullet point of interest on the screen while a media background of a presentation slide is being depicted. The system updates the composite video with such annotations in real time, so that viewers and participants see the annotations as they are being sketched out. [0048] At step 212, the system receives annotation inputs corresponding to at least one of the composite videos. In some embodiments, the annotation inputs may be provided via touch or stylus input from one or more client devices.).
Springer fails to teach:
an image for display in which features of the model image are either horizontally flipped or horizontally translated across an axis of symmetry
decrease a first display magnification of the image for display by the display when a second display magnification of an image for display on the another terminal apparatus increases, and increase the first display magnification when the second display magnification decreases.
Hedge teaches:
an image for display in which features of the model image are either horizontally flipped or horizontally translated across an axis of symmetry (Hedge [0056] One issue that sometimes occurs in the user interface 102 is how to display the video of the attendees who are on the opposite side of the table from the speaker. Typically, the cameras 120 for capturing the conference 104 are on a conference table, so no cameras 120 capture the attendees from a rear view. Nevertheless, the spatial browser 110 still shows the videos of these attendees as captured by cameras 120 on the table. Because of the 3-D arrangement 114, showing a frontal view as a substitute for a rear view causes the attendee's eye movements and body gestures to be reversed. Thus, when Alice looks at Dave, it would appear to the user as if she is incorrectly looking to the right. To correct the issue, the spatial browser 110 horizontally inverts or "flips" the videos of the attendees who are being viewed from the rear, when in the context of the 3-D arrangement 114. For example, when Eve is the speaker, the video of Alice 506 and the video of Bob 508 are horizontally inverted in the user interface 102 as shown in FIG. 5. This allows users to correctly interpret gestures and interactions between the attendees.)
Kim teaches:
decrease a first display magnification of the image for display by the display when a second display magnification of an image for display on the another terminal apparatus increases, and decreases (Kim [0012] the first display and the second display has a constant total area such that, as the size of the first display increases, the size of the second display decreases, and the controller senses occurrence of an event, and performs control such that a function corresponding to the event is executed and the size of the first display is changed simultaneously therewith.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Springer with those of Kim and Hedge. Horizontally flipping an image or video, as in Hedge, and adjusting a display's size and magnification in response to a change on another display, as in Kim, would benefit the Springer teachings by allowing images or videos to be adjusted in coordination across different screens. Additionally, this is the application of known techniques (horizontally flipping an image or video, and changing one display's size and magnification based on a change on another display) to yield predictable results.
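As an illustrative sketch only (not part of the record of either reference), the horizontal inversion Hedge describes amounts to reversing the column order of every pixel row in a frame; the list-of-rows frame representation below is an assumption chosen for simplicity.

```python
def flip_horizontal(frame):
    """Return a horizontally mirrored copy of a frame.

    `frame` is assumed to be a list of rows, each row a list of pixel
    values; mirroring reverses the column order of every row, so that
    eye movements and gestures read correctly when a front-captured
    view stands in for a rear view.
    """
    return [list(reversed(row)) for row in frame]

# A 2x3 toy "frame": flipping reverses each row's columns.
frame = [[1, 2, 3],
         [4, 5, 6]]
print(flip_horizontal(frame))  # [[3, 2, 1], [6, 5, 4]]
```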
Regarding claim 2:
Springer, Hedge, and Kim teach:
The terminal apparatus according to claim 1,
wherein the controller is configured to generate a rendered image, in which the model image that is horizontally flipped is placed in a virtual space yielded by horizontally flipping a real space in which the another user exists, and superimpose the drawn image that is horizontally flipped on the rendered image to generate the image for display (Springer [0008] Generally speaking, the approach includes generating a composite video for each of a number of video feeds associated with users within a video session. A media background is generated for each video feed and can be used to present materials (such as, e.g., presentation slides) and/or eliminate visual distractions behind participants. One or more presenters can then be seen on video during the session. An annotation layer overlaid on top of the user's image allows for a presenter to produce live written annotations in conjunction with their own video presentations, particularly when accessing a “self-view” interface on their client device. In some use cases, this combination of visual elements can be reproduced both locally for near-end participants, as well as remotely for far-end participants, thus democratizing the instructional experience.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Springer with those of Kim and Hedge. Horizontally flipping an image or video, as in Hedge, and adjusting a display's size and magnification in response to a change on another display, as in Kim, would benefit the Springer teachings by allowing images or videos to be adjusted in coordination across different screens. Additionally, this is the application of known techniques (horizontally flipping an image or video, and changing one display's size and magnification based on a change on another display) to yield predictable results.
Regarding claim 3:
Springer, Hedge, and Kim teach:
The terminal apparatus according to claim 1,
wherein the controller is configured to generate a rendered image, in which the model image is placed in a virtual space corresponding to a real space in which the another user exists, and horizontally flip and superimpose the rendered image on the drawn image that is horizontally flipped to generate the image for display (Springer [0008] Generally speaking, the approach includes generating a composite video for each of a number of video feeds associated with users within a video session. A media background is generated for each video feed and can be used to present materials (such as, e.g., presentation slides) and/or eliminate visual distractions behind participants. One or more presenters can then be seen on video during the session. An annotation layer overlaid on top of the user's image allows for a presenter to produce live written annotations in conjunction with their own video presentations, particularly when accessing a “self-view” interface on their client device. In some use cases, this combination of visual elements can be reproduced both locally for near-end participants, as well as remotely for far-end participants, thus democratizing the instructional experience.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Springer with those of Kim and Hedge. Horizontally flipping an image or video, as in Hedge, and adjusting a display's size and magnification in response to a change on another display, as in Kim, would benefit the Springer teachings by allowing images or videos to be adjusted in coordination across different screens. Additionally, this is the application of known techniques (horizontally flipping an image or video, and changing one display's size and magnification based on a change on another display) to yield predictable results.
Regarding claim 5:
Springer, Hedge, and Kim teach:
The terminal apparatus according to claim 1,
wherein the controller is configured to set the first display magnification to M times, where M is a number greater than 1, when the second display magnification is set to 1/M times (Kim [0012] the first display and the second display has a constant total area such that, as the size of the first display increases, the size of the second display decreases, and the controller senses occurrence of an event, and performs control such that a function corresponding to the event is executed and the size of the first display is changed simultaneously therewith.).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Springer with those of Kim and Hedge. Horizontally flipping an image or video, as in Hedge, and adjusting a display's size and magnification in response to a change on another display, as in Kim, would benefit the Springer teachings by allowing images or videos to be adjusted in coordination across different screens. Additionally, this is the application of known techniques (horizontally flipping an image or video, and changing one display's size and magnification based on a change on another display) to yield predictable results.
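As an illustrative sketch only (not drawn from either reference), the reciprocal relationship recited in claim 5 can be expressed as a single division: when the far-end magnification is set to 1/M, the near-end magnification is set to M. The helper name below is hypothetical.

```python
def first_magnification(second_magnification):
    """Hypothetical illustration of the claim 5 relationship:
    the first display magnification is the reciprocal of the
    second display magnification, so second = 1/M implies first = M.
    """
    return 1.0 / second_magnification

M = 2.0
# When the other terminal zooms out to 1/M, this terminal zooms in to M.
print(first_magnification(1.0 / M))  # 2.0
```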
Response to Arguments
Applicant's arguments filed 07/08/2025 with respect to claims 1-3 and 5 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant has amended claim 1 and added claim 5. References US 20220329687 and US 20100085416 have been added to teach the amended limitations.
In light of the amendments, the previous 35 U.S.C. 102(a)(1) rejection of claims 1-4 has been updated to a 35 U.S.C. 103 rejection of claims 1-3 and 5. Therefore, claims 1-3 and 5 are rejected under 35 U.S.C. 103. The rejections of all dependent claims have also been updated.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENIS VASILIY MINKO whose telephone number is (571)270-5226. The examiner can normally be reached Monday-Thursday 8:30-6:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said Broome can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DENIS VASILIY MINKO/Examiner, Art Unit 2612
/Said Broome/Supervisory Patent Examiner, Art Unit 2612