DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments have been fully considered but they are not persuasive.
Anvaripour in [0061] discloses that the real-time video capture may be used with an illustrated modification to enable the user to modify video images currently being captured by sensors of a client device 102. The real-time video is displayed on the screen and not stored in memory, and a preview feature displays different augmented reality content items within different windows of a display at the same time.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 4, 6, 14, 17, 19, 21, 22 and 25 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Anvaripour et al. (US 20240265601 A1) (Anvaripour).
Regarding claim 1, Anvaripour discloses A method of camera function page switching, applied to a terminal device with a camera unit, the method comprising:
starting, based on a first video that is an existing video, a video editing page for editing the first video based on an editing instruction input by a user;
See [0122] and Fig. 6D: the carousel launch button 612 is user-selectable to launch the carousel interface 624. The carousel interface 624 allows the user to cycle through and/or select different augmented reality content items to apply to images currently being captured (“first video”) by the device camera and being displayed on the device screen. Each of the available augmented reality content items is represented by an icon which is user-selectable for switching to the respective augmented reality content item.
in response to a shooting trigger instruction for the video editing page, closing the video editing page and starting a video shooting page, wherein the video shooting page is configured to generate a new video based on real-time images collected by the camera unit;
[0116]: the capture user interface 602 of FIGS. 6A-6E includes one or more of: a camera selection button 604 (e.g., for switching between rear-facing and front-facing cameras), a preview button 618 (e.g., for previewing, editing and generating a media content item based on captured video clip(s)), and/or an undo button 622 (e.g., to delete the most recent video clip).
obtaining a preview image based on the first video;
[0125] FIG. 7 illustrates the preview user interface 702 for previewing multiple video clips for combining into a media content item, in accordance with some example embodiments. For example, FIG. 7 depicts an example scenario in which the user selects to preview the multiple video clips (e.g., 4 video clips) captured in association with FIG. 6D.
displaying the preview image in the video shooting page during a period of initializing the camera unit;
[0061] discloses that the real-time video capture may be used with an illustrated modification to show how video images currently being captured by sensors of a client device 102 would modify the captured data. Such data may simply be displayed on the screen and not stored in memory; a preview feature can show how different augmented reality content items will look within different windows in a display at the same time. This can, for example, enable multiple windows with different pseudorandom animations to be viewed on a display at the same time.
collecting a second video through the camera unit after initialization of the camera unit is completed; and
[0129] In addition, the preview user interface 702 includes an add video button 710 for adding video clips to the captured video clips (e.g., which are viewable via the video preview 708). In response to user selection of the add video button 710 (e.g., or alternatively, a predefined gesture such as a swipe down gesture within a predefined region of the preview user interface 702), the camera mode system 214 provides for switching from the preview user interface 702 back to the capture user interface 502, with all video clips and edits being preserved. A tool tip (e.g., a message indicating to “go back to camera to add more”) may direct user attention to the add video button 710. The tool tip may be displayed only once (e.g., a first time), to advise the user that selection of the add video button 710 directs to the capture user interface 502.
jumping to the video editing page and playing a synthesized video through the video editing page, wherein the synthesized video is generated after processing the first video and the second video based on a target editing operation.
[0129] In addition, the preview user interface 702 includes an add video button 710 for adding video clips to the captured video clips (e.g., which are viewable via the video preview 708). In response to user selection of the add video button 710 (e.g., or alternatively, a predefined gesture such as a swipe down gesture within a predefined region of the preview user interface 702), the camera mode system 214 provides for switching from the preview user interface 702 back to the capture user interface 502, with all video clips and edits being preserved. A tool tip (e.g., a message indicating to “go back to camera to add more”) may direct user attention to the add video button 710. The tool tip may be displayed only once (e.g., a first time), to advise the user that selection of the add video button 710 directs to the capture user interface 502.
Regarding claim 4, Anvaripour discloses The method of claim 1, wherein the preview image comprises a plurality of continuous frames of pictures; and
displaying the preview image in the video shooting page comprises:
displaying the plurality of continuous frames of pictures sequentially in the video shooting page within a predetermined duration, wherein the predetermined duration characterizes an initialization duration of the camera unit.
[0144] As noted above, the messaging client 104 provides for capturing video in a first camera mode (e.g., for single video clip capture) or a second camera mode (e.g., for multi-video clip capture). In the second camera mode, boundaries for video clips may be depicted within certain user interfaces (e.g., the segments within the timeline progress bar 514 or timeline progress bar 616). With respect to the front handle 830 and back handle 832 of the preview bar 826, the messaging client 104 may provide haptic feedback during a drag gesture to indicate transitions between video clips with respect to the second camera mode.
Regarding claim 6, Anvaripour discloses The method of claim 1, wherein the terminal device has a display interface, and displaying the preview image in the video shooting page comprises:
drawing a page sticker corresponding to the video shooting page in a first area of the display interface; and
[0113] Referring back to FIG. 5, the preview user interface 504 includes editing tools 526 for modifying/annotating (e.g., drawing on, adding text to, adding stickers to, cropping, and the like) the captured video clips.
drawing the preview image in a second area of the display interface.
[0116] the capture user interface 602 of FIGS. 6A-6E includes one or more of: a camera selection button 604 (e.g., for switching between rear-facing and front-facing cameras), and a preview button 618 (e.g., for previewing, editing and generating a media content item based on captured video clip(s)), and/or an undo button 622 (e.g., to delete the most recent video clip).
Regarding claim 14, Anvaripour discloses An electronic device, comprising:
a processor, and a memory in communication connection with the processor;
the memory storing computer execution instructions; the processor executing the computer execution instructions stored in the memory to implement acts of camera function page switching, the acts comprising:
starting, based on a first video that is an existing video, a video editing page for editing the first video based on an editing instruction input by a user;
See [0122] and Fig. 6D: the carousel launch button 612 is user-selectable to launch the carousel interface 624. The carousel interface 624 allows the user to cycle through and/or select different augmented reality content items to apply to images currently being captured (“first video”) by the device camera and being displayed on the device screen. Each of the available augmented reality content items is represented by an icon which is user-selectable for switching to the respective augmented reality content item.
in response to a shooting trigger instruction for the video editing page, closing the video editing page and starting a video shooting page, wherein the video shooting page is configured to generate a new video based on real-time images collected by a camera unit;
[0116]: the capture user interface 602 of FIGS. 6A-6E includes one or more of: a camera selection button 604 (e.g., for switching between rear-facing and front-facing cameras), a preview button 618 (e.g., for previewing, editing and generating a media content item based on captured video clip(s)), and/or an undo button 622 (e.g., to delete the most recent video clip).
obtaining a preview image based on the first video, and displaying the preview image in the video shooting page, wherein the preview image comprises at least one video frame generated based on the first video;
[0125] FIG. 7 illustrates the preview user interface 702 for previewing multiple video clips for combining into a media content item, in accordance with some example embodiments. For example, FIG. 7 depicts an example scenario in which the user selects to preview the multiple video clips (e.g., 4 video clips) captured in association with FIG. 6D.
displaying the preview image in the video shooting page during a period of initializing the camera unit;
[0061] discloses that the real-time video capture may be used with an illustrated modification to show how video images currently being captured by sensors of a client device 102 would modify the captured data. Such data may simply be displayed on the screen and not stored in memory; a preview feature can show how different augmented reality content items will look within different windows in a display at the same time. This can, for example, enable multiple windows with different pseudorandom animations to be viewed on a display at the same time.
collecting a second video through the camera unit after initialization of the camera unit is completed; and
[0129] In addition, the preview user interface 702 includes an add video button 710 for adding video clips to the captured video clips (e.g., which are viewable via the video preview 708). In response to user selection of the add video button 710 (e.g., or alternatively, a predefined gesture such as a swipe down gesture within a predefined region of the preview user interface 702), the camera mode system 214 provides for switching from the preview user interface 702 back to the capture user interface 502, with all video clips and edits being preserved. A tool tip (e.g., a message indicating to “go back to camera to add more”) may direct user attention to the add video button 710. The tool tip may be displayed only once (e.g., a first time), to advise the user that selection of the add video button 710 directs to the capture user interface 502.
jumping to the video editing page and playing a synthesized video through the video editing page, wherein the synthesized video is generated after processing the first video and the second video based on a target editing operation.
[0129] In addition, the preview user interface 702 includes an add video button 710 for adding video clips to the captured video clips (e.g., which are viewable via the video preview 708). In response to user selection of the add video button 710 (e.g., or alternatively, a predefined gesture such as a swipe down gesture within a predefined region of the preview user interface 702), the camera mode system 214 provides for switching from the preview user interface 702 back to the capture user interface 502, with all video clips and edits being preserved. A tool tip (e.g., a message indicating to “go back to camera to add more”) may direct user attention to the add video button 710. The tool tip may be displayed only once (e.g., a first time), to advise the user that selection of the add video button 710 directs to the capture user interface 502.
Regarding claim 17, Anvaripour discloses The electronic device of claim 14, wherein the preview image comprises a plurality of continuous frames of pictures; and
[0144] As noted above, the messaging client 104 provides for capturing video in a first camera mode (e.g., for single video clip capture) or a second camera mode (e.g., for multi-video clip capture). In the second camera mode, boundaries for video clips may be depicted within certain user interfaces (e.g., the segments within the timeline progress bar 514 or timeline progress bar 616). With respect to the front handle 830 and back handle 832 of the preview bar 826, the messaging client 104 may provide haptic feedback during a drag gesture to indicate transitions between video clips with respect to the second camera mode.
displaying the preview image in the video shooting page comprises:
displaying the plurality of continuous frames of pictures sequentially in the video shooting page within a predetermined duration, wherein the predetermined duration characterizes an initialization duration of the camera unit.
[0144] As noted above, the messaging client 104 provides for capturing video in a first camera mode (e.g., for single video clip capture) or a second camera mode (e.g., for multi-video clip capture). In the second camera mode, boundaries for video clips may be depicted within certain user interfaces (e.g., the segments within the timeline progress bar 514 or timeline progress bar 616). With respect to the front handle 830 and back handle 832 of the preview bar 826, the messaging client 104 may provide haptic feedback during a drag gesture to indicate transitions between video clips with respect to the second camera mode.
Regarding claim 19, Anvaripour discloses The electronic device of claim 14, wherein the terminal device has a display interface, and displaying the preview image in the video shooting page comprises:
drawing a page sticker corresponding to the video shooting page in a first area of the display interface; and drawing the preview image in a second area of the display interface.
[0144] As noted above, the messaging client 104 provides for capturing video in a first camera mode (e.g., for single video clip capture) or a second camera mode (e.g., for multi-video clip capture). In the second camera mode, boundaries for video clips may be depicted within certain user interfaces (e.g., the segments within the timeline progress bar 514 or timeline progress bar 616). With respect to the front handle 830 and back handle 832 of the preview bar 826, the messaging client 104 may provide haptic feedback during a drag gesture to indicate transitions between video clips with respect to the second camera mode.
Regarding claim 21, Anvaripour discloses The electronic device of claim 14, wherein starting the video shooting page comprises:
initializing the camera unit through a first process to start the video shooting page;
[0144] As noted above, the messaging client 104 provides for capturing video in a first camera mode (e.g., for single video clip capture) or a second camera mode (e.g., for multi-video clip capture). In the second camera mode, boundaries for video clips may be depicted within certain user interfaces (e.g., the segments within the timeline progress bar 514 or timeline progress bar 616). With respect to the front handle 830 and back handle 832 of the preview bar 826, the messaging client 104 may provide haptic feedback during a drag gesture to indicate transitions between video clips with respect to the second camera mode.
wherein displaying the preview image in the video shooting page comprises:
[0144] As noted above, the messaging client 104 provides for capturing video in a first camera mode (e.g., for single video clip capture) or a second camera mode (e.g., for multi-video clip capture). In the second camera mode, boundaries for video clips may be depicted within certain user interfaces (e.g., the segments within the timeline progress bar 514 or timeline progress bar 616). With respect to the front handle 830 and back handle 832 of the preview bar 826, the messaging client 104 may provide haptic feedback during a drag gesture to indicate transitions between video clips with respect to the second camera mode.
displaying the preview image in the video shooting page through a second process.
[0144] As noted above, the messaging client 104 provides for capturing video in a first camera mode (e.g., for single video clip capture) or a second camera mode (e.g., for multi-video clip capture). In the second camera mode, boundaries for video clips may be depicted within certain user interfaces (e.g., the segments within the timeline progress bar 514 or timeline progress bar 616). With respect to the front handle 830 and back handle 832 of the preview bar 826, the messaging client 104 may provide haptic feedback during a drag gesture to indicate transitions between video clips with respect to the second camera mode.
Regarding claim 22, Anvaripour discloses A non-transitory computer readable storage medium storing a computer execution instruction, the computer execution instruction, when a processor executes the computer execution instruction, implementing acts of camera function page switching, the acts comprising:
starting, based on a first video that is an existing video, a video editing page for editing the first video based on an editing instruction input by a user;
See [0122] and Fig. 6D: the carousel launch button 612 is user-selectable to launch the carousel interface 624. The carousel interface 624 allows the user to cycle through and/or select different augmented reality content items to apply to images currently being captured (“first video”) by the device camera and being displayed on the device screen. Each of the available augmented reality content items is represented by an icon which is user-selectable for switching to the respective augmented reality content item.
in response to a shooting trigger instruction for the video editing page, closing the video editing page and starting a video shooting page, wherein the video shooting page is configured to generate a new video based on real-time images collected by a camera unit;
[0116]: the capture user interface 602 of FIGS. 6A-6E includes one or more of: a camera selection button 604 (e.g., for switching between rear-facing and front-facing cameras), a preview button 618 (e.g., for previewing, editing and generating a media content item based on captured video clip(s)), and/or an undo button 622 (e.g., to delete the most recent video clip).
obtaining a preview image based on the first video;
[0125] FIG. 7 illustrates the preview user interface 702 for previewing multiple video clips for combining into a media content item, in accordance with some example embodiments. For example, FIG. 7 depicts an example scenario in which the user selects to preview the multiple video clips (e.g., 4 video clips) captured in association with FIG. 6D.
displaying the preview image in the video shooting page during a period of initializing the camera unit;
[0061] discloses that the real-time video capture may be used with an illustrated modification to show how video images currently being captured by sensors of a client device 102 would modify the captured data. Such data may simply be displayed on the screen and not stored in memory; a preview feature can show how different augmented reality content items will look within different windows in a display at the same time. This can, for example, enable multiple windows with different pseudorandom animations to be viewed on a display at the same time.
collecting a second video through the camera unit after initialization of the camera unit is completed; and
[0129] In addition, the preview user interface 702 includes an add video button 710 for adding video clips to the captured video clips (e.g., which are viewable via the video preview 708). In response to user selection of the add video button 710 (e.g., or alternatively, a predefined gesture such as a swipe down gesture within a predefined region of the preview user interface 702), the camera mode system 214 provides for switching from the preview user interface 702 back to the capture user interface 502, with all video clips and edits being preserved. A tool tip (e.g., a message indicating to “go back to camera to add more”) may direct user attention to the add video button 710. The tool tip may be displayed only once (e.g., a first time), to advise the user that selection of the add video button 710 directs to the capture user interface 502.
jumping to the video editing page and playing a synthesized video through the video editing page, wherein the synthesized video is generated after processing the first video and the second video based on a target editing operation.
[0129] In addition, the preview user interface 702 includes an add video button 710 for adding video clips to the captured video clips (e.g., which are viewable via the video preview 708). In response to user selection of the add video button 710 (e.g., or alternatively, a predefined gesture such as a swipe down gesture within a predefined region of the preview user interface 702), the camera mode system 214 provides for switching from the preview user interface 702 back to the capture user interface 502, with all video clips and edits being preserved. A tool tip (e.g., a message indicating to “go back to camera to add more”) may direct user attention to the add video button 710. The tool tip may be displayed only once (e.g., a first time), to advise the user that selection of the add video button 710 directs to the capture user interface 502.
Regarding claim 25, Anvaripour discloses The non-transitory computer readable storage medium of claim 22, wherein the preview image comprises a plurality of continuous frames of pictures; and displaying the preview image in the video shooting page comprises:
displaying the plurality of continuous frames of pictures sequentially in the video shooting page within a predetermined duration, wherein the predetermined duration characterizes an initialization duration of the camera unit.
[0144] As noted above, the messaging client 104 provides for capturing video in a first camera mode (e.g., for single video clip capture) or a second camera mode (e.g., for multi-video clip capture). In the second camera mode, boundaries for video clips may be depicted within certain user interfaces (e.g., the segments within the timeline progress bar 514 or timeline progress bar 616). With respect to the front handle 830 and back handle 832 of the preview bar 826, the messaging client 104 may provide haptic feedback during a drag gesture to indicate transitions between video clips with respect to the second camera mode.
Allowable Subject Matter
Claims 2, 3, 5, 7, 8, 16, 18, 20, 23 and 24 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMIRA MONSHI whose telephone number is (571)272-0995. The examiner can normally be reached 8 AM-5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John W. Miller, can be reached at 571-272-7353. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAMIRA MONSHI/Primary Examiner, Art Unit 2422