DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments filed 12/18/2025 have been considered but are moot in view of the new grounds of rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Anvaripour et al. (US 2022/0375137 A1 – hereinafter Anvaripour), Xu et al. (US 2025/0004616 A1 – hereinafter Xu), Anvaripour et al. (US 2022/0210337 A1 – hereinafter Anvaripour ‘337), and Szeto (US 2005/0204309 A1 – hereinafter Szeto).
Regarding claim 1, Anvaripour discloses a method comprising: providing a media library of previously captured media data ([0045]; [0070]; [0124] – a library of previously captured images and video clips for composing a message with augmentation modifications); displaying an augmentation UI entry point icon ([0143]; Fig. 8A – displaying AR icon 816); detecting, by one or more processors, a selection by a first user of selected media data of the previously captured media data ([0143] – detecting, by one or more processors as shown in Fig. 13 and described in at least [0175], a user pressing/tapping or pressing-and-holding screen content that is selected for display on a screen from the gallery or the library of previously captured media data); detecting, by the one or more processors, a type of a selection and a type of selected media data ([0143] – detecting a type of a selection, i.e. pressing/tapping or pressing-and-holding, and a type of media, i.e. an image or a video, to which corresponding augmentation data is applied as further described in at least [0066] and [0071]-[0074]); generating, by the one or more processors, augmented media data by applying an augmentation to the selected media data based on the type of the selection and the type of the selected media data ([0143] – generating an image in response to a press/tap selection and a video in response to a press-and-hold selection, and based on the type of the selected media data as further described in at least [0066] and [0071]-[0074]); and providing, by the one or more processors, to a second user, the augmented media data ([0143] – to send to a friend).
However, Anvaripour does not disclose providing, by the one or more processors, a media library User Interface (UI) displaying the previously captured media data of the first user; and displaying, by the one or more processors, a tooltip prompting the first user to add an augmentation when the first user encounters an augmentation UI entry point icon, the tooltip permanently dismissed in response to detecting that the first user tapped the entry point icon.
Xu discloses providing, by one or more processors, a media library User Interface (UI) displaying previously captured media data of a first user (Figs. 4a-4c; [0093]-[0096] – a user selects one of existing saved images or videos from a library UI displaying a camera roll for user selection).
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Xu into the method taught by Anvaripour to enhance the user interface of the method by providing for convenient selection of previously captured media data.
Anvaripour and Xu do not disclose displaying, by the one or more processors, a tooltip prompting the first user to add an augmentation when the first user encounters an augmentation UI entry point icon, the tooltip permanently dismissed in response to detecting that the first user tapped the entry point icon.
Anvaripour ‘337 discloses displaying, by one or more processors, a tooltip prompting a first user to perform an editing operation when the first user encounters an editing UI entry point icon, wherein the tooltip is not displayed again after being dismissed ([0131]; [0137]; Figs. 8A-8B – a tooltip prompting a user to edit a video only once, i.e. not displayed again after being dismissed, the first time the user encounters the icon).
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Anvaripour ‘337 into the UI entry point icon in the method taught by Anvaripour and Xu above to direct the user’s attention to the operation to be performed by the icon when the user encounters the icon for the first time, thus making the icon self-explanatory to the user.
However, Anvaripour, Xu, and Anvaripour ‘337 do not disclose the tooltip is dismissed in response to detecting that the first user tapped the entry point icon.
Szeto discloses a tooltip is dismissed in response to detecting that a user tapped an entry point icon ([0075]).
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of Szeto into the UI entry point icon in the method taught by Anvaripour, Xu, and Anvaripour ‘337 above to remove the tooltip from the screen after the user selects the prompted operation, since the tooltip is no longer needed, thus enhancing the user interface of the method.
Regarding claim 2, Anvaripour in view of Xu also discloses the method of claim 1, wherein the type of the selection is a tap, the type of the selected media data is image data, and wherein generating the augmented media data comprises applying the augmentation to the selected media data to generate still augmented media data ([0143] – when the media is an image and the type of the selection is a press/tap, the augmented media data is generated as a still augmented image).
Regarding claim 3, Anvaripour in view of Xu also discloses the method of claim 1, wherein the type of the selection is a tap, the type of the selected media data is video data ([0143] – when the type of the selection is a press/tap and the selected media data is video as described in at least [0135]), and wherein generating augmented media data comprises applying the augmentation to a frame of the video data to create augmented image data ([0143] – generating an image with augmentation modification).
Regarding claim 4, Anvaripour in view of Xu also discloses the method of claim 1, wherein the type of the selection is a long press, the type of the selected media data is image data, and wherein generating the augmented media data comprises applying the augmentation as an animation to the image data to generate animated image data ([0143] – when the media is an image and the type of the selection is a press-and-hold, the augmented media data is generated as a video of the screen content, thus being an animation of the image data).
Regarding claim 5, Anvaripour in view of Xu also discloses the method of claim 1, wherein the type of the selection is a long press, the type of the selected media data is video data ([0143] – when the type of the selection is a press-and-hold and the selected media data is video as described in at least [0135]), and wherein generating augmented media data comprises compositing the augmentation as an animation with the video data to generate augmented video data ([0143] – when the media is the video and the type of the selection is a press-and-hold, the augmented media data is generated as a video of the screen content, thus being an animation composited with the video data).
Regarding claim 6, Anvaripour in view of Xu also discloses the method of claim 1, further comprising: receiving, by the one or more processors, a selection of the augmentation from a carousel of available augmentations ([0143] – receiving a selection of the augmentation from carousel interface 814 of available augmentations 816).
Regarding claim 7, Anvaripour in view of Xu also discloses the method of claim 1, further comprising: recognizing, by the one or more processors, a face in the selected media data ([0046]; [0075]-[0076]); and positioning, by the one or more processors, the augmentation on the face in the selected media data (Figs. 8A-8B; [0079] – positioning the augmentation, e.g. glasses or any other augmentation modification, on the face in the selected image).
Claim 8 is rejected for the same reason as discussed in claim 1 above in view of Anvaripour also disclosing a machine, comprising: one or more processors (Fig. 13; [0175]-[0176] - processors 1304); and one or more memories storing instructions that, when executed by the one or more processors, cause the machine to perform the recited operations (Fig. 13; [0175]-[0177] – memories 1306 storing instructions 1310).
Claim 9 is rejected for the same reason as discussed in claim 2 above.
Claim 10 is rejected for the same reason as discussed in claim 3 above.
Claim 11 is rejected for the same reason as discussed in claim 4 above.
Claim 12 is rejected for the same reason as discussed in claim 5 above.
Claim 13 is rejected for the same reason as discussed in claim 6 above.
Claim 14 is rejected for the same reason as discussed in claim 7 above.
Claim 15 is rejected for the same reason as discussed in claim 1 above in view of Anvaripour also disclosing a non-transitory machine-readable storage medium storing instructions that, when executed by one or more processors of a machine, cause the machine to perform the recited operations (Fig. 13; [0175]-[0177] – memories 1306 storing instructions 1310 executed by one or more processors 1304 of a machine 1300).
Claim 16 is rejected for the same reason as discussed in claim 2 above.
Claim 17 is rejected for the same reason as discussed in claim 3 above.
Claim 18 is rejected for the same reason as discussed in claim 4 above.
Claim 19 is rejected for the same reason as discussed in claim 5 above.
Claim 20 is rejected for the same reason as discussed in claim 7 above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUNG Q DANG whose telephone number is (571)270-1116. The examiner can normally be reached IFT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thai Q Tran can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HUNG Q DANG/Primary Examiner, Art Unit 2484