DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 1, 2026 has been entered.
Response to Arguments
Applicant’s arguments with respect to claims 1, 5-10, 14-16, and 20-29 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The examiner must, however, address arguments presented by the applicant which are still relevant to references being applied.
With regard to claim 1, Applicant submits that Smith does not relate to a live stream or live broadcast. Remarks, p. 10.
In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over a combination of Fremlin et al. (US 2014/0280530), Akumiah et al. (US 2018/0192141), Bernstein et al. (US 2016/0277802), Smith (US 2023/0027231), Chew et al. (US 2014/0019882), and Becchetti (US 11290687).
As presented in the claim rejections under 35 USC §103, Fremlin teaches wherein the live preview content is generated in accordance with an image of a live stream of the user ([0038], “In particular embodiments, an entity may be represented on a business page 50D hosted on the social-networking system. As illustrated in the example of FIG. 2D, a portion of an example business page 50D may be configured to present presence information, such as for example, a real-time video status 52 associated with the entity. As an example and not by way of limitation, real-time video status 52 may capture images corresponding to activity occurring at a geo-location associated with the entity. Furthermore, real-time video status 52 may function as a cover image of example business page 50D.” Fig. 2D).
Akumiah and Bernstein additionally provide teachings for live content (Akumiah: [0061], Bernstein: [0060]).
Smith teaches a pull-down operation, in response to the pull-down operation reaching a preset critical point when it ends, entering an interface ([0057], “For example, to reveal player controls the input may drag vertically downward 80 pixels or more (a ‘short drag’). The drag position could render real-time feedback, for example, gradually adjusting the position and opacity of the player controls with each movement. But if the input stops before crossing the 80-pixel threshold, the shift will be cancelled, and the view will revert to the previous plane positions. In the same drag gesture, if the input continues dragging downward, the input may cross a second threshold: for example, to shift the browse surface largely outside of the viewport (to ‘enter full screen’), the input may drag downward 140 pixels or more (a ‘long drag’). In this case, player controls would be shifted into view if the inputs drags 80-140 pixels, while the player controls and media player would be shifted to the primary plane if the input drags 140 pixels or more. Other information may be used in combination with the thresholds to determine the desired input.”).
Considering Smith’s teachings together with those of the combination, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination such that the operation is a pull-down operation, and in response to the pull-down operation on the background image area in which the live preview content is presented reaching a preset critical point when it ends, entering the live stream of the user. The modification would serve to facilitate user navigation to the live room of the target user. The modification would improve the overall user experience.
With regard to claim 1, Applicant submits that Bernstein and Smith at least fail to disclose entering the live stream of the user from the profile page of the user via the pull-down operation. Remarks, p. 10.
In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over a combination of Fremlin et al. (US 2014/0280530), Akumiah et al. (US 2018/0192141), Bernstein et al. (US 2016/0277802), Smith (US 2023/0027231), Chew et al. (US 2014/0019882), and Becchetti (US 11290687).
Claim 1 recites, in part, “in response to the pull-down operation on the background image area in which the live preview content is presented reaching a preset critical point when it ends, entering the live stream of the user.”
Bernstein teaches, in response to a user being on a live, guidance information is displayed in an image area, the guidance information being used for guiding to enter a live stream of the user via an operation on the image area; and in response to the operation on the image area in which the live preview content is presented, entering the live stream of the user ([0060], “FIG. 3A illustrates an example user interface 300 showing a message 305 to view a live broadcast. … The user interface 300 may also include a preview 310 of the live video stream. Selecting the view option of the message 305 may cause the device to become a viewing device and, thus, the user of the device to become a viewer.” [0061], “FIG. 3B illustrates an example user interface 301 presented to a viewer of a real-time video stream. The user interface 301 may be provided, for example, after a targeted viewer joins the real-time video stream.” [0066], “In some implementations, the system may select a few live video streams for the head of the list with a large preview of the broadcast and the remainder may have thumbnail views. In some implementations, the preview may include a few seconds of video from the live video stream, e.g., a few seconds of the video that are associated with a large quantity of signals of appreciation. The previews may be selectable, so that the user can join a live video stream by selecting the preview or thumbnail in the area 405. Once a user joins a live video stream, the user may be presented with a user interface similar to that of FIG. 3B, discussed above.” Figs. 3A, 3B, 4A). Bernstein additionally teaches wherein the live interface comprises a comment area for displaying comments on the image of the live stream of the user ([0057], Fig. 2C).
Smith teaches a pull-down operation, in response to the pull-down operation reaching a preset critical point when it ends, entering an interface ([0057], “For example, to reveal player controls the input may drag vertically downward 80 pixels or more (a ‘short drag’). The drag position could render real-time feedback, for example, gradually adjusting the position and opacity of the player controls with each movement. But if the input stops before crossing the 80-pixel threshold, the shift will be cancelled, and the view will revert to the previous plane positions. In the same drag gesture, if the input continues dragging downward, the input may cross a second threshold: for example, to shift the browse surface largely outside of the viewport (to ‘enter full screen’), the input may drag downward 140 pixels or more (a ‘long drag’). In this case, player controls would be shifted into view if the inputs drags 80-140 pixels, while the player controls and media player would be shifted to the primary plane if the input drags 140 pixels or more. Other information may be used in combination with the thresholds to determine the desired input.”).
In view of Smith’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination such that the operation is a pull-down operation, and in response to the pull-down operation on the background image area in which the live preview content is presented reaching a preset critical point when it ends, entering the live stream of the user. The modification would serve to facilitate user navigation to the live room of the target user. The modification would improve the overall user experience.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 5, 10, 14, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Fremlin et al. (US 2014/0280530), Akumiah et al. (US 2018/0192141), Bernstein et al. (US 2016/0277802), Smith (US 2023/0027231), Chew et al. (US 2014/0019882), and Becchetti (US 11290687).
Regarding claim 1, Fremlin teaches a method for presenting live content, comprising:
in response to a user being not on a live, presenting a background image area comprised in a profile page of the user ([0038], “In particular embodiments, an entity may be represented on a business page 50D hosted on the social-networking system. As illustrated in the example of FIG. 2D, a portion of an example business page 50D may be configured to present presence information, such as for example, a real-time video status 52 associated with the entity. As an example and not by way of limitation, real-time video status 52 may capture images corresponding to activity occurring at a geo-location associated with the entity. Furthermore, real-time video status 52 may function as a cover image of example business page 50D.” Fig. 2D), and
wherein the live preview content is generated in accordance with an image of a live stream of the user ([0038], “In particular embodiments, an entity may be represented on a business page 50D hosted on the social-networking system. As illustrated in the example of FIG. 2D, a portion of an example business page 50D may be configured to present presence information, such as for example, a real-time video status 52 associated with the entity. As an example and not by way of limitation, real-time video status 52 may capture images corresponding to activity occurring at a geo-location associated with the entity. Furthermore, real-time video status 52 may function as a cover image of example business page 50D.” Fig. 2D).
Fremlin does not expressly teach, in response to a user being not on a live, presenting a background image corresponding to the user in a background image area comprised in a profile page of the user. Fremlin also does not expressly teach, in response to the user being on a live, replacing the background image in the background image area of the profile page of the user with live preview content. Fremlin also does not expressly teach wherein in response to the user being on a live, guidance information is displayed in the background image area, the guidance information being used for guiding to enter the live stream of the user via a pull-down operation on the background image area; and in response to the pull-down operation on the background image area in which the live preview content is presented reaching a preset critical point when it ends, entering the live stream of the user, wherein the live preview content is in a mute state by default, and a control for controlling volume of the live preview content is displayed in the background image area, and the method further comprises: playing audio content comprised in the live preview content in response to receiving a trigger operation for the control.
Akumiah teaches:
in response to a user being not on a live, presenting a background image corresponding to the user in a background image area comprised in a profile page of the user ([0061], “FIGS. 4A-4B illustrate lobby interfaces 410 (i.e., 410A-B) that may be displayed to users 101 a pre-determined amount of time before the scheduled start time of a live video.” [0062], “In general, lobby interfaces 410 provide interfaces for viewers of a live video to congregate a certain amount of time before the scheduled start of a live video.” [0063], “Display area 415, which may be above comment area 420 or any other appropriate location in lobby interfaces 410, may include one or more of an image (e.g., profile image 312 of the broadcaster), a pre-recorded video such as a video provided by the broadcaster, an advertisement, or any other content.” Fig. 4B); and
in response to the user being on a live, replacing the background image in the background image area of the profile page of the user with live content ([0067], “FIGS. 4C-4D illustrate live video interfaces 450 (i.e., 450A-B) that may be displayed to users 101 at or after the scheduled start time of a live video. In some embodiments, live video interface 450A is displayed to the broadcaster (e.g., Taylor Swift), and live video interface 450B is displayed to viewers 101 of the live video.” Figs. 4C-4D).
In view of Akumiah’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fremlin to include, in response to a user being not on a live, presenting a background image corresponding to the user in a background image area comprised in a profile page of the user, and, in response to a user being on a live, replacing a background image in a background image area of a profile page of the user with live preview content. The modification would provide viewers with an interface to congregate, comment, and interact with other viewers before the start of live video content (Akumiah: [0062]).
The combination teaches the limitations specified above; however, the combination also does not expressly teach wherein in response to the user being on a live, guidance information is displayed in the background image area, the guidance information being used for guiding to enter the live stream of the user via a pull-down operation on the background image area; and in response to the pull-down operation on the background image area in which the live preview content is presented reaching a preset critical point when it ends, entering the live stream of the user, wherein the live preview content is in a mute state by default, and a control for controlling volume of the live preview content is displayed in the background image area, and the method further comprises: playing audio content comprised in the live preview content in response to receiving a trigger operation for the control.
Bernstein teaches, in response to a user being on a live, guidance information is displayed in an image area, the guidance information being used for guiding to enter a live stream of the user via an operation on the image area; and in response to the operation on the image area in which the live preview content is presented, entering the live stream of the user ([0060], “FIG. 3A illustrates an example user interface 300 showing a message 305 to view a live broadcast. … The user interface 300 may also include a preview 310 of the live video stream. Selecting the view option of the message 305 may cause the device to become a viewing device and, thus, the user of the device to become a viewer.” [0061], “FIG. 3B illustrates an example user interface 301 presented to a viewer of a real-time video stream. The user interface 301 may be provided, for example, after a targeted viewer joins the real-time video stream.” [0066], “In some implementations, the system may select a few live video streams for the head of the list with a large preview of the broadcast and the remainder may have thumbnail views. In some implementations, the preview may include a few seconds of video from the live video stream, e.g., a few seconds of the video that are associated with a large quantity of signals of appreciation. The previews may be selectable, so that the user can join a live video stream by selecting the preview or thumbnail in the area 405. Once a user joins a live video stream, the user may be presented with a user interface similar to that of FIG. 3B, discussed above.” Figs. 3A, 3B, 4A). Bernstein additionally teaches wherein the live interface comprises a comment area for displaying comments on the image of the live stream of the user ([0057], Fig. 2C).
In view of Bernstein’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination wherein, in response to the user being on a live, guidance information is displayed in the background image area, the guidance information being used for guiding to enter the live stream of the user via an operation on the background image area; and in response to the operation on the background image area in which the live preview content is presented, entering the live stream of the user. The modification would serve to facilitate content navigation and selection for users.
The combination teaches the limitations specified above; however, the combination also does not expressly teach that the operation is a pull-down operation, and in response to the pull-down operation on the background image area in which the live preview content is presented reaching a preset critical point when it ends, entering the live stream of the user. The combination also does not expressly teach wherein the live preview content is in a mute state by default, and a control for controlling volume of the live preview content is displayed in the background image area, and the method further comprises: playing audio content comprised in the live preview content in response to receiving a trigger operation for the control.
Smith teaches a pull-down operation, in response to the pull-down operation reaching a preset critical point when it ends, entering an interface ([0057], “For example, to reveal player controls the input may drag vertically downward 80 pixels or more (a ‘short drag’). The drag position could render real-time feedback, for example, gradually adjusting the position and opacity of the player controls with each movement. But if the input stops before crossing the 80-pixel threshold, the shift will be cancelled, and the view will revert to the previous plane positions. In the same drag gesture, if the input continues dragging downward, the input may cross a second threshold: for example, to shift the browse surface largely outside of the viewport (to ‘enter full screen’), the input may drag downward 140 pixels or more (a ‘long drag’). In this case, player controls would be shifted into view if the inputs drags 80-140 pixels, while the player controls and media player would be shifted to the primary plane if the input drags 140 pixels or more. Other information may be used in combination with the thresholds to determine the desired input.”).
In view of Smith’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination such that the operation is a pull-down operation, and in response to the pull-down operation on the background image area in which the live preview content is presented reaching a preset critical point when it ends, entering the live stream of the user. The modification would serve to facilitate user navigation to the live room of the target user. The modification would improve the overall user experience.
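For illustration only (not part of the record), the two-threshold drag behavior Smith describes at [0057] can be sketched as follows; the function name is hypothetical, and the 80- and 140-pixel values are taken from Smith's example rather than required by the claim:

```python
def classify_drag(drag_px: int,
                  controls_threshold: int = 80,
                  full_screen_threshold: int = 140) -> str:
    """Classify a downward drag per Smith's example thresholds ([0057]).

    A drag ending short of 80 px is cancelled and the view reverts; a drag
    ending at 80-140 px reveals the player controls (a "short drag"); a drag
    ending at 140 px or more shifts to the full-screen player (a "long drag"),
    analogous here to entering the live stream past the preset critical point.
    """
    if drag_px >= full_screen_threshold:
        return "enter_live_stream"   # long drag: past the preset critical point
    if drag_px >= controls_threshold:
        return "reveal_controls"     # short drag: player controls shifted into view
    return "cancel"                  # revert to the previous plane positions
```

This sketch reflects only that entry is gated on where the drag ends relative to a preset critical point, which is the aspect of Smith relied upon in the rejection.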
The combination teaches the limitations specified above; however, the combination does not expressly teach wherein the live preview content is in a mute state by default, and a control for controlling volume of the live preview content is displayed in the background image area, and the method further comprises: playing audio content comprised in the live preview content in response to receiving a trigger operation for the control.
Chew teaches:
wherein live preview content is in a mute state ([0075], “In some examples, the participant user can mute the audio stream of another participant (e.g., John) by selecting icon 520-1. FIG. 5B illustrates hangout user interface 500-B, with icon 520-1 changed, after selection of icon 520-1, to icon 520-2 to indicate that the audio stream corresponding to John is muted.” Fig. 5B), and
a control for controlling volume of the live preview content is displayed in a background image area, and playing audio content comprised in the live preview content in response to receiving a trigger operation for the control ([0076], “In some examples, an individual volume control 524 is displayed over a video stream thumbnail (e.g., thumbnail 502-2 in FIG. 5B). The volume control 524 can be, for example, a slider control. The participant user can select the volume control 524 to adjust the volume of the audio stream associated with the thumbnail 502-2. For example, the participant user can adjust the volume of Mary's audio stream using volume control 524.” Fig. 5B).
In view of Chew’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination wherein the live preview content is in a mute state, and a control for controlling volume of the live preview content is displayed in the background image area, and the method further comprises: playing audio content comprised in the live preview content in response to receiving a trigger operation for the control. The modification would serve to facilitate user management of audio from content sources.
The combination teaches the limitations specified above; however, the combination does not expressly teach that the live preview content is in a mute state by default.
Becchetti teaches live content is in a mute state by default (Col. 10, lines 36-55, “the mute control 336 for clients in the waiting room session portion 340 may default to muted.”).
In view of Becchetti’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination wherein the live preview content is in a mute state by default in order to allow the user to selectively modify the mute states of content. The modification would thereby improve the user experience.
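For illustration only (not part of the record), the combined Chew/Becchetti mapping — preview audio muted by default, unmuted by triggering a displayed control — can be sketched as follows; the class and method names are hypothetical:

```python
class LivePreview:
    """Minimal model of the combined teaching: the preview's audio stream
    defaults to muted (Becchetti, col. 10) and a displayed volume control
    toggles playback of the audio content (Chew, [0075]-[0076])."""

    def __init__(self) -> None:
        self.muted = True  # mute state by default

    def trigger_volume_control(self) -> None:
        # Triggering the control in the background image area toggles audio.
        self.muted = not self.muted

    def is_playing_audio(self) -> bool:
        return not self.muted
```

The sketch captures only the claimed default-mute-plus-control behavior attributed to the combination, not any particular reference's implementation.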
Regarding claim 10, Fremlin teaches an electronic device, comprising: a memory and a processor, wherein the memory is configured to store computer program instructions, and the processor is configured to execute the computer program instructions to implement the operations for presenting live content ([0047]-[0048], Fig. 5). The rejection of claim 1 under 35 USC § 103 is similarly applied to the remaining limitations of claim 10.
Regarding claim 16, Fremlin teaches a non-transitory computer-readable storage medium, comprising: computer program instructions, wherein the computer program instructions, when executed by at least one processor of an electronic device, cause the electronic device to implement the operations for presenting live content ([0047]-[0048], Fig. 5). The rejection of claim 1 under 35 USC § 103 is similarly applied to the remaining limitations of claim 16.
Regarding claims 5, 14, and 20, the combination further teaches prompting that the pull-down operation has reached the preset critical point (Smith: [0057], “For example, to reveal player controls the input may drag vertically downward 80 pixels or more (a ‘short drag’). The drag position could render real-time feedback, for example, gradually adjusting the position and opacity of the player controls with each movement. But if the input stops before crossing the 80-pixel threshold, the shift will be cancelled, and the view will revert to the previous plane positions. In the same drag gesture, if the input continues dragging downward, the input may cross a second threshold: for example, to shift the browse surface largely outside of the viewport (to ‘enter full screen’), the input may drag downward 140 pixels or more (a ‘long drag’). In this case, player controls would be shifted into view if the inputs drags 80-140 pixels, while the player controls and media player would be shifted to the primary plane if the input drags 140 pixels or more. Other information may be used in combination with the thresholds to determine the desired input.”).
Claims 6, 15, and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Fremlin, Akumiah, Bernstein, Smith, Chew, Becchetti, and Gupta (US 2023/0319399).
Regarding claims 6, 15, and 27, the combination teaches the limitations specified above; however, the combination does not expressly teach wherein the prompting that the pull-down operation has reached the preset critical point comprises: prompting, in the manner of at least one of a text prompt or a vibration prompt, that the pull-down operation has reached the preset critical point.
Gupta teaches prompting, in the manner of a vibration prompt, that an operation has reached the preset critical point ([0036], “In some instances, the electronic device 102 can use haptic sensor(s) to cause a vibration response to indicate completion of the drag gesture 126 or that the drag gesture 126 has met the distance threshold 138.”).
In view of Gupta’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination wherein the prompting that the pull-down operation has reached the preset critical point comprises: prompting, in the manner of a text prompt and/or a vibration prompt, that the pull-down operation has reached the preset critical point. The modification would serve to further facilitate user interaction with the user interface. The modification would further enhance the overall user experience.
Claims 7-9, 24-26, and 28-29 are rejected under 35 U.S.C. 103 as being unpatentable over Fremlin, Akumiah, Bernstein, Smith, Chew, Becchetti, and O’Leary et al. (US 2022/0247919).
Regarding claims 7, 24, and 28, the combination teaches the limitations specified above; however, the combination does not expressly teach wherein the live preview content is obtained by cropping the image of the live stream of the user in accordance with a size of the background image area.
O’Leary teaches wherein live preview content is obtained by cropping an image of a live stream of a user in accordance with a size of a background image area ([0232], “When the automatic framing mode is enabled, device 600 detects conditions of a scene that is within the field-of-view of the enabled camera (e.g., camera 602) (e.g., the presence and/or position of one or more subjects within the field-of-view of the camera), and, in real time, adjusts the field-of-view of the video output for the video conference session (as represented in the camera preview) (e.g., without moving camera 602 or device 600), based on the conditions of the scene or changes detected in the scene within the field-of-view of the camera (e.g., changes in the position and/or movement of subject(s) during the video conference session).” [0236], “Camera preview 606 includes representation 622-1 of Jane and a representation of the environment that is captured within portion 625 (e.g., the video feed field-of-view).”).
In view of O’Leary’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination wherein the live preview content is obtained by cropping the image of the live stream of the user in accordance with a size of the background image area. The modification would allow viewers to better focus on target subjects. The modification would improve the overall user experience.
Regarding claims 8, 25, and 29, the combination further teaches wherein the live preview content is obtained by identifying a visual center of gravity of the image of the live stream of the user and cropping the image of the live stream of the user in accordance with the size of the background image area and the visual center of gravity (O’Leary: [0241], “When automatic framing mode is enabled, device 600 automatically adjusts the displayed video feed field-of-view based on conditions detected within scene 615. In the embodiment depicted in FIG. 6E, device 600 adjusts the displayed video feed field-of-view to center on Jane's face. Accordingly, device 600 updates camera preview 606 to include representation 622-1 of Jane centered in the frame and, in the background, representation 621-1 of the couch upon which she is sitting. Field-of-view 620 remains fixed because the position of camera 602 remains unchanged. However, the position of Jane's face within field-of-view 620 does change. As a result, device 600 adjusts (e.g., repositions) the displayed portion of field-of-view 620 so that Jane remains positioned within camera preview 606. This is represented in FIG. 6E by the repositioning of portion 625 so that it is centered on Jane's face. In FIG. 6E, portion 627 corresponds to the prior location of portion 625 and, thus, represents the portion of field-of-view 620 that was previously displayed in camera preview 606 (before the adjustment resulting from enabling automatic framing mode).” Figs. 6E-6F).
Regarding claims 9 and 26, the combination further teaches wherein the live preview content is obtained by zooming out the image of the live stream of the user in accordance with a size of the background image area (O’Leary: [0246], “FIG. 6I is an embodiment similar to that depicted in FIG. 6H, but with camera preview 606 having a greater, zoomed out field-of-view when compared to that shown in FIG. 6H. Specifically, the embodiment depicted in FIG. 6I illustrates a jump cut transition from the camera preview in FIG. 6G to the camera preview in FIG. 6I. The jump cut transition is depicted by transitioning from camera preview 606 in FIG. 6G, to camera preview 606 in FIG. 6I, which has a larger (e.g., zoomed out) field-of-view. Accordingly, the video feed field-of-view in FIG. 6I (represented by portion 625) is a larger portion of field-of-view 620. This is illustrated by the size difference between portion 625 (corresponding to the camera preview in FIG. 6I) and portion 627 (corresponding to the camera preview in FIG. 6G).” Figs. 6G-6I).
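For illustration only (not part of the record), the cropping limitation mapped to O'Leary for claims 7-8 — a window of the background image area's size, centered on a visual center of gravity and clamped to the frame — can be sketched as follows; the function name and parameters are hypothetical:

```python
def crop_to_area(img_w: int, img_h: int,
                 area_w: int, area_h: int,
                 cx: int, cy: int) -> tuple[int, int, int, int]:
    """Return a (left, top, right, bottom) crop window of the background
    image area's size, centered on the visual center of gravity (cx, cy)
    and clamped so the window stays inside the source image."""
    left = min(max(cx - area_w // 2, 0), img_w - area_w)
    top = min(max(cy - area_h // 2, 0), img_h - area_h)
    return (left, top, left + area_w, top + area_h)
```

The clamping mirrors O'Leary's repositioning of the displayed portion of the fixed field-of-view so the subject remains framed within the preview.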
Claims 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Fremlin, Akumiah, Bernstein, Smith, Chew, Becchetti, and “How To Enable Control Centre on iPhone 7 & 7 Plus in 2021! [EASY] [Acess, Get & Open].” YouTube, uploaded by Saunderverse, Dec. 26, 2020, youtube.com/shorts/DXZuU_YrY-Q (hereinafter “Saunderverse”).
Regarding claims 21, 22, and 23, the combination teaches the limitations specified above; however, the combination does not expressly teach, in response to continuing to perform a slide-up operation in a case where the pull-down operation has been performed and a hand has not been loosed, returning to the profile page of the user without entering the live stream of the user.
Saunderverse teaches, in response to continuing to perform a slide-up operation in a case where a swipe operation has been performed and a hand has not been loosed, returning to a previous user interface state (00:15-00:16 and 00:29-00:31).
In view of Saunderverse’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination to include, in response to continuing to perform a slide-up operation in a case where the pull-down operation has been performed and a hand has not been loosed, returning to the profile page of the user without entering the live stream of the user. The modification would serve to facilitate user navigation and operation of the combined system.
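For illustration only (not part of the record), the claimed cancel behavior for claims 21-23 — sliding back up within the same continuous gesture before releasing returns to the profile page — can be sketched as follows; the function name and the 140-pixel value (carried over from Smith's example) are hypothetical:

```python
def on_release(final_pull_px: int, critical_px: int = 140) -> str:
    """Decide the destination when the finger is lifted. Because the
    decision depends only on where the continuous gesture ends, pulling
    past the critical point and then sliding back up before release
    returns to the profile page without entering the live stream."""
    if final_pull_px >= critical_px:
        return "live_stream"
    return "profile_page"  # slide-up before release cancels entry
```

This reflects the combined teaching that entry is committed only at release, not when the critical point is first crossed mid-gesture.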
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL R TELAN whose telephone number is (571) 270-5940. The examiner can normally be reached 9:30 AM to 6:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nasser Goodarzi can be reached at (571) 272-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL R TELAN/Primary Examiner, Art Unit 2426