Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This is in response to applicant’s amendment/response filed on 02/09/2026, which has been entered and made of record. Claims 1, 12, and 20 are amended. Claim 21 is added. Claims 1-21 are pending in the application.
Response to Arguments
Applicant’s arguments regarding the claim rejections under 35 U.S.C. 103 have been considered but are not persuasive.
Applicant argues that the references used do not teach the amended limitation of “in at least one instance, when the depiction of the user includes the particular gesture or the particular facial expression, replacing the received frame with a replacement frame that depicts the user without the particular gesture or the particular facial expression;”
Examiner disagrees. Moncomble teaches that, when a disruptive event occurs in an input video, the disruptive segment of the video is replaced with a replacement segment. ([0055], “According to a particular embodiment, at least one event marking the start or the end of a time range corresponding to a part of the sequence to be replaced is determined by an analysis of the image. For example, the image analysis can consist in searching for a change of shot, a change of camera or even a particular gesture or a particular face in the sequence.”)
Moncomble further teaches generating a replacement sequence, which can contain no content representative of the disruptive segment. ([0062], “According to a particular embodiment, a series of replacement sequences 106 can be generated and positioned successively at step 202 as a replacement of the sequence 105,.” [0061], “From this analysis, a replacement sequence 106 can be generated at step 202 as a replacement of the sequence 105, but without content representative of said sequence 105 if the content has not been deemed relevant. For example, the sequence 106 is hence a simple shot transition.” FIG. 1a, 1b and 1c.)
This means that, when the depiction of the user includes the particular gesture or the particular facial expression, the received frame is replaced with a replacement frame that depicts the user without the particular gesture or the particular facial expression.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 12-13, 15-17, and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Moncomble et al. (US 2016/0372154 A1) in view of Cunico et al. (US 2015/0381939 A1).
Regarding claim 1, Moncomble teaches:
A method comprising:
receiving a video signal;([0050], “FIG. 1a represents a video sequence 100 recorded during a MOOC type class run by a teacher in front of an audience of students.”)
detecting a disruption to the video signal; ([0055], “According to a particular embodiment, at least one event marking the start or the end of a time range corresponding to a part of the sequence to be replaced is determined by an analysis of the image. For example, the image analysis can consist in searching for a change of shot, a change of camera or even a particular gesture or a particular face in the sequence. To that end, the method can implement a movement characterization algorithm in order to detect a particular gesture performed, for example, by a student or a teacher participating in a MOOC type class. For example, the movement characterization algorithm can determine that a student is requesting to speak by raising his/her hand in the audience.”)
responsive to detecting the disruption to the video signal, analyzing a depiction of a user in a received frame of the video signal; ([0055], “For example, the image analysis can consist in searching for a change of shot, a change of camera or even a particular gesture or a particular face in the sequence. To that end, the method can implement a movement characterization algorithm in order to detect a particular gesture performed, for example, by a student or a teacher participating in a MOOC type class. For example, the movement characterization algorithm can determine that a student is requesting to speak by raising his/her hand in the audience.”)
… replace the received frame based at least on whether the depiction of the user includes a particular gesture or a particular facial expression; ([0055], “According to a particular embodiment, at least one event marking the start or the end of a time range corresponding to a part of the sequence to be replaced is determined by an analysis of the image. For example, the image analysis can consist in searching for a change of shot, a change of camera or even a particular gesture or a particular face in the sequence. To that end, the method can implement a movement characterization algorithm in order to detect a particular gesture performed, for example, by a student or a teacher participating in a MOOC type class. For example, the movement characterization algorithm can determine that a student is requesting to speak by raising his/her hand in the audience. To that end, a video sequence showing the audience during a class can be presented as input to the algorithm such that an image analysis is performed. At the end of this analysis, the algorithm determines various time ranges corresponding to interruptions of the lesson.” FIG. 1a, b, and c)
in at least one instance, when the depiction of the user includes the particular gesture or the particular facial expression, replacing the received frame with a replacement frame that depicts the user without the particular gesture or the particular facial expression; ([0055], “According to a particular embodiment, at least one event marking the start or the end of a time range corresponding to a part of the sequence to be replaced is determined by an analysis of the image. For example, the image analysis can consist in searching for a change of shot, a change of camera or even a particular gesture or a particular face in the sequence.” [0062], “According to a particular embodiment, a series of replacement sequences 106 can be generated and positioned successively at step 202 as a replacement of the sequence 105,.”[0061], “From this analysis, a replacement sequence 106 can be generated at step 202 as a replacement of the sequence 105, but without content representative of said sequence 105 if the content has not been deemed relevant. For example, the sequence 106 is hence a simple shot transition.” FIG. 1a, 1b and 1c.) and
outputting the replacement frame for display processing.([0065], “FIG. 1c represents the initial video sequence 100 in which the part 105 has been replaced by the generated sequence 106. This replacement is performed at step 203 of the substitution method illustrated in FIG. 2. The replacement can be performed using conventional video editing techniques. According to a particular implementation, the audiovisual content resulting from the substitution contains an index indicating the start of the replacement sequence. For example, timestamp information relating to various replaced parts in a video sequence can be listed in an index in such a way that a user can immediately have access to one of the replaced sequences. For example, when the video is viewed using a suitable multimedia player, the various entries of the index can appear in the form of visual indexes integrated in a playback progress bar.”)
However, Moncomble does not explicitly teach, but Cunico teaches:
The step of determining whether to replace the received frame based at least on the depiction of the user; (FIG. 2, step 218, [0041]-[0042], “In step 218, dynamic facial feature substitution program 150 determines if the facial features change in the real-time video feed. Dynamic facial feature substitution program 150 monitors the real-time video of the attendee and using facial recognition algorithms, determines if changes to the facial features in the real-time video feed of the attendee occur such as a change in facial expression or a change in articulations (i.e. new words or phrases). When dynamic facial feature substitution program 150 determines that there is a change in the facial features of the attendee in the real-time video (“yes” branch, decision block 218), the program returns to step 210 to determine the one or more portions of the pre-recorded video to be substituted into the avatar in the video conference for the changed facial features. If dynamic facial feature substitution program 150 determines there is no change in the facial features of the attendee (“no” branch, decision block 218), then the program, in step 220, determines if the attendee exits the program.”)
Moncomble teaches analyzing input video data and deciding to replace part of the video content. However, Moncomble does not explicitly teach a step of determining whether to replace the received frame based at least on the depiction of the user. Cunico, on the other hand, explicitly teaches a step of determining whether to replace the received frame based at least on the depiction of the user, and performing different operations based on that determination.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Moncomble with the specific teachings of Cunico to clearly define different operations based on the analysis of the input video data.
Regarding claim 2, Moncomble in view of Cunico teaches:
The method of claim 1, wherein the analyzing comprises: inputting the received frame to a gesture detection model; and receiving a detected gesture from the gesture detection model. (Moncomble [0055], “To that end, the method can implement a movement characterization algorithm in order to detect a particular gesture performed, for example, by a student or a teacher participating in a MOOC type class. For example, the movement characterization algorithm can determine that a student is requesting to speak by raising his/her hand in the audience.”)
Regarding claim 3, Moncomble in view of Cunico teaches:
The method of claim 2, wherein determining whether to replace the received frame comprises: comparing the detected gesture to one or more gestures that are designated for replacement. (Moncomble [0055], “To that end, the method can implement a movement characterization algorithm in order to detect a particular gesture performed, for example, by a student or a teacher participating in a MOOC type class. For example, the movement characterization algorithm can determine that a student is requesting to speak by raising his/her hand in the audience.”)
Regarding claim 4, Moncomble in view of Cunico teaches:
The method of claim 1, wherein the analyzing comprises: inputting the received frame to a facial expression detection model; and receiving a detected facial expression from the facial expression detection model. (Cunico [0041], “In step 218, dynamic facial feature substitution program 150 determines if the facial features change in the real-time video feed. Dynamic facial feature substitution program 150 monitors the real-time video of the attendee and using facial recognition algorithms, determines if changes to the facial features in the real-time video feed of the attendee occur such as a change in facial expression or a change in articulations (i.e. new words or phrases).” Moncomble teaches detecting gesture information to determine video replacement. Cunico further teaches using a facial expression change in the video data to determine video replacement. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Moncomble with the specific teachings of Cunico to allow a greater variety of information in the video data to be used in determining whether video replacement is necessary.)
Regarding claim 5, Moncomble in view of Cunico teaches:
The method of claim 4, wherein determining whether to replace the received frame comprises: comparing the detected facial expression to one or more facial expressions that are designated for replacement. (Moncomble [0055], “For example, the image analysis can consist in searching for a change of shot, a change of camera or even a particular gesture or a particular face in the sequence. To that end, the method can implement a movement characterization algorithm in order to detect a particular gesture performed, for example, by a student or a teacher participating in a MOOC type class. For example, the movement characterization algorithm can determine that a student is requesting to speak by raising his/her hand in the audience.” Moncomble teaches detecting a particular face in a video frame. Cunico teaches detecting a facial expression. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Moncomble with the specific teachings of Cunico to detect a particular facial expression in a video frame, giving the method more options for video frame detection.)
Regarding claim 6, Moncomble in view of Cunico teaches:
The method of claim 1, wherein determining whether to replace the received frame comprises: comparing the received frame to one or more previous frames of the video signal. (Moncomble [0012], “The range corresponding to the sequence can hence be determined by detecting a change of sound level or of a particular image in the video stream.”)
Regarding claim 12, Moncomble in view of Cunico teaches:
A system comprising: a processor; and a storage medium storing instructions which, when executed by the processor, cause the system to: (Moncomble [0067], “Upon initialization, the instructions of the computer program 302 are for example loaded into a RAM (Random Access Memory) memory before being executed by the processor of the processing unit 303. The processor of the processing unit 303 implements the steps of the substitution method according to the instructions of the computer program 302.”) The rest of claim 12 recites limitations similar to those of claim 1 and is thus rejected accordingly.
Regarding claim 13, Moncomble in view of Cunico teaches:
The system of claim 12, the replacement frame comprising a predetermined background image. (Moncomble [0061], “From this analysis, a replacement sequence 106 can be generated at step 202 as a replacement of the sequence 105, but without content representative of said sequence 105 if the content has not been deemed relevant. For example, the sequence 106 is hence a simple shot transition.” Moncomble teaches that the replacement can be just a simple shot transition without any content. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have simply used a background image, which has no meaningful content, to act as a simple shot transition. The benefit would be to use a natural and simple replacement frame.)
Regarding claim 15, Moncomble in view of Cunico teaches:
The system of claim 12, the replacement frame comprising a previous frame from the video signal. (Moncomble in view of Cunico teaches generating a replacement frame to replace undesirable frames. In particular, Moncomble teaches generating a transition frame. It would have been a design choice to continue displaying a previous frame rather than showing the undesirable frames. The benefit of combining the teachings of Moncomble in view of Cunico with this design choice is to generate a natural and simple replacement frame in the video sequence to replace the undesirable frames.)
Regarding claim 16, Moncomble in view of Cunico teaches:
The system of claim 12, the video signal being a recorded video signal. (Moncomble [0007], “MOOCs are recorded and accessible on dedicated sites and on university or school sites via a simple Internet browser. Thus, the courses can be consulted from anywhere in the world and at any time.”)
Regarding claim 17, Moncomble in view of Cunico teaches:
The system of claim 16, wherein the instructions, when executed by the processor, cause the system to: generate the replacement frame by interpolating between at least one previous frame and at least one subsequent frame of the recorded video signal. (Moncomble [0066], “determining the start and end instants of a time range by detection of a first and a second particular event in the audiovisual stream, for extracting the part of the audiovisual content contained between the start and the end of the time range, for the semantic analysis of the extracted part and for generating a substitution sequence from the result of the analysis, and for inserting the substitution sequence in place of the extracted part.” A sequence of frames is analyzed to generate the replacement frame.)
Regarding claim 21, Moncomble in view of Cunico teaches:
The method of claim 1, further comprising: in another instance when the video signal is not disrupted, allowing another received frame having another depiction of the user with the particular gesture or the particular facial expression to be displayed without replacing the particular gesture or the particular facial expression. (Moncomble teaches replacing a segment of a video sequence based on certain disruptive video content, which can be a particular audio component or particular video content. Moncomble teaches a general video segment replacement method, where video content can be kept or replaced based on certain criteria depending on whether a disruptive event has occurred. Although Moncomble does not explicitly teach the specific scenario of this claim limitation, it would have been a design choice to a person of ordinary skill in the art to apply the teachings of Moncomble to a specific video content scenario, namely “when the video signal is not disrupted, allowing another received frame having another depiction of the user with the particular gesture or the particular facial expression to be displayed without replacing the particular gesture or the particular facial expression”. The benefit of combining the teachings of Moncomble with this specific design choice is to allow users the flexibility to apply the method of Moncomble in different customized video scenarios.)
Claim 20 recites limitations similar to those of claim 12 and is thus rejected accordingly.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Moncomble in view of Cunico and further in view of Wang et al. (US 2020/0026910 A1).
Regarding claim 7, Moncomble in view of Cunico teaches:
The method of claim 6, further comprising:
However, Moncomble in view of Cunico does not, but Wang teaches:
obtaining a first embedding representing a segmentation of the user from the received frame and a second embedding representing one or more segmentations of the user from the one or more previous frames, wherein the comparing is performed using the first embedding and the second embedding. ([0087], “When determining the change information above, the current video frame can be compared with the adjacent earlier previous video frame, to obtain corresponding change information, for example, by means of comparison, the gesture category is determined to be changed from the OK hand to a five-finger splaying hand, and therefore, the smart TV can return to a homepage from a current display interface. The current video frame can be compared with a plurality of earlier and continuous video frames, and continuous change information is formed according to the change information between adjacent frames, so as to execute the corresponding control operation, for example, by comparing the current video frame with three earlier and continuous video frames, the continuous position change information of the hand region is obtained, a hand movement track is formed,”)
Moncomble in view of Cunico teaches detecting gesture change. Wang teaches a specific method of detecting gesture change.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Moncomble in view of Cunico with the specific teachings of Wang to easily detect the gesture change.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Moncomble in view of Cunico, further in view of Wang, and further in view of Radwin et al. (US 2020/0279102 A1).
Regarding claim 8, Moncomble in view of Cunico and Wang teaches:
The method of claim 7, further comprising:
However, Moncomble in view of Cunico and Wang does not, but Radwin teaches:
averaging embeddings of multiple segmentations of the user from multiple previous frames to obtain the second embedding.([0134], “To compare a current frame appearance in the region of interest 58 with a previous frame appearance within the region of interest 58 (e.g., where the previous frame appearance within the region of interest 58 may be from a frame immediate before the current frame, may be from a frame at X-number of frames before the current frame, may be an average of previous frame appearances within the region of interest 58 over X-number of frames, a rolling average of previous frame appearances within the region of interest 58 over X-number of frames, or other suitable previous frame appearance within the region of interest 58),”)
Moncomble in view of Cunico and Wang teaches a second embedding feature. Radwin teaches a specific method of generating the second embedding feature.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Moncomble in view of Cunico and Wang with the specific teachings of Radwin to generate a more robust second embedding feature.
Claims 9-11 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Moncomble in view of Cunico and further in view of Mun et al. (US 2025/0247606 A1).
Regarding claim 9, Moncomble in view of Cunico teaches:
The method of claim 1, further comprising:
However, Moncomble in view of Cunico does not, but Mun teaches:
obtaining the replacement frame from a generative image model. ([0101], “According to one or more embodiments, the generative AI model 320 may generate (for example, reproduce, paint, or add) the second image by using the first image and the feature information of the second object. Alternatively, the generative AI model 320 may generate, output, or acquire the second image by using prompt information (for example, text prompt) that commands to generate the first image and the second image. Alternatively, the second image may be generated using the first image, the feature information of the second object, and the prompt information. For example, the text prompt (or a text command) may include a command in a text form that can be recognized by the generative AI model 320. For example, the text prompt may include a command for generating the second image by the generative AI model 320. For example, the second image may be an image in which at least a portion of the first object of the first image is transformed to or replaced with at least a portion of the second object. For example, the electronic device 201 may generate the second image, based on at least two text prompts and the second object by using the generative AI model 320. For example, the generative AI model 320 may perform a painting operation for the first image while transforming or replacing at least the portion of the first object of the first image to or with at least the portion of the second object. For example, when generating the second image, the processor 220 may perform an in-painting operation for generating a first part of the first image. For example, the first part may be a part (for example, a part to be additionally filled in while a segment corresponding to the at least the portion of the first object is replaced with at least the portion of the second object) removed when at least the portion of the first object is replaced with at least the portion of the second object.”)
Moncomble in view of Cunico teaches generating a replacement frame. Mun teaches using a generative AI model to generate the replacement frame.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Moncomble in view of Cunico with the specific teachings of Mun to generate a high-quality replacement image.
Claim 18 recites limitations similar to those of claim 9 and is thus rejected accordingly.
Regarding claim 10, Moncomble in view of Cunico teaches:
The method of claim 9, further comprising: remove a detected gesture or facial expression from the received frame. (Moncomble [0055] “According to a particular embodiment, at least one event marking the start or the end of a time range corresponding to a part of the sequence to be replaced is determined by an analysis of the image. For example, the image analysis can consist in searching for a change of shot, a change of camera or even a particular gesture or a particular face in the sequence. To that end, the method can implement a movement characterization algorithm in order to detect a particular gesture performed, for example, by a student or a teacher participating in a MOOC type class. For example, the movement characterization algorithm can determine that a student is requesting to speak by raising his/her hand in the audience. To that end, a video sequence showing the audience during a class can be presented as input to the algorithm such that an image analysis is performed. At the end of this analysis, the algorithm determines various time ranges corresponding to interruptions of the lesson.” FIG. 1a, b, and c)
However, Moncomble in view of Cunico does not, but Mun teaches:
inputting a prompt instructing the generative image model to remove certain feature ([0101], “According to one or more embodiments, the generative AI model 320 may generate (for example, reproduce, paint, or add) the second image by using the first image and the feature information of the second object. Alternatively, the generative AI model 320 may generate, output, or acquire the second image by using prompt information (for example, text prompt) that commands to generate the first image and the second image. Alternatively, the second image may be generated using the first image, the feature information of the second object, and the prompt information. For example, the text prompt (or a text command) may include a command in a text form that can be recognized by the generative AI model 320. For example, the text prompt may include a command for generating the second image by the generative AI model 320. For example, the second image may be an image in which at least a portion of the first object of the first image is transformed to or replaced with at least a portion of the second object. For example, the electronic device 201 may generate the second image, based on at least two text prompts and the second object by using the generative AI model 320. For example, the generative AI model 320 may perform a painting operation for the first image while transforming or replacing at least the portion of the first object of the first image to or with at least the portion of the second object. For example, when generating the second image, the processor 220 may perform an in-painting operation for generating a first part of the first image. For example, the first part may be a part (for example, a part to be additionally filled in while a segment corresponding to the at least the portion of the first object is replaced with at least the portion of the second object) removed when at least the portion of the first object is replaced with at least the portion of the second object.”)
Moncomble in view of Cunico teaches generating a replacement frame to remove a certain gesture. Mun teaches using a generative AI model to generate the replacement frame to remove a certain feature.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Moncomble in view of Cunico with the specific teachings of Mun to generate a high-quality replacement image.
Regarding claim 11, Moncomble in view of Cunico and Mun teaches:
The method of claim 9, further comprising: inputting a prompt instructing the generative image model to depict the user with a neutral gesture, neutral pose, or neutral facial expression in the replacement frame. (Mun [0101], “According to one or more embodiments, the generative AI model 320 may generate (for example, reproduce, paint, or add) the second image by using the first image and the feature information of the second object. Alternatively, the generative AI model 320 may generate, output, or acquire the second image by using prompt information (for example, text prompt) that commands to generate the first image and the second image. Alternatively, the second image may be generated using the first image, the feature information of the second object, and the prompt information. For example, the text prompt (or a text command) may include a command in a text form that can be recognized by the generative AI model 320. For example, the text prompt may include a command for generating the second image by the generative AI model 320. For example, the second image may be an image in which at least a portion of the first object of the first image is transformed to or replaced with at least a portion of the second object. For example, the electronic device 201 may generate the second image, based on at least two text prompts and the second object by using the generative AI model 320. For example, the generative AI model 320 may perform a painting operation for the first image while transforming or replacing at least the portion of the first object of the first image to or with at least the portion of the second object. For example, when generating the second image, the processor 220 may perform an in-painting operation for generating a first part of the first image. For example, the first part may be a part (for example, a part to be additionally filled in while a segment corresponding to the at least the portion of the first object is replaced with at least the portion of the second object) removed when at least the portion of the first object is replaced with at least the portion of the second object.” The combination rationale of claim 9 is incorporated here. Furthermore, Moncomble in view of Cunico teaches generating a replacement frame to remove a certain gesture. Mun teaches using a generative AI model based on a prompt to generate the replacement frame to remove a certain feature. It would have been a design choice to choose a replacement image depicting the user with a neutral gesture, neutral pose, or neutral facial expression to replace an unwanted or undesirable gesture. The benefit would be to use a simple and straightforward replacement frame for the user expression or gesture.)
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Moncomble in view of Cunico and further in view of Lichtenberg et al. (US 10440324 B1).
Regarding claim 14, Moncomble in view of Cunico teaches:
The system of claim 12,
However, Moncomble in view of Cunico does not, but Lichtenberg teaches:
the replacement frame comprising a default image of the user. (para 54: “As an example, rather than outputting the undesirable image, the video data 134 may have a video clip inserted in such that the remote user device 116 outputs a happy face, or a picture of the local user 106.”)
Moncomble in view of Cunico teaches replacing an undesirable image. Lichtenberg teaches that one option for the replacement image is a default image of the local user.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Moncomble in view of Cunico with the specific teachings of Lichtenberg to use an image of a user to replace an undesirable image in a video with a reasonable expectation of success.
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Moncomble in view of Cunico and further in view of Bronder et al. (US 2019/0358541 A1).
Regarding claim 19, Moncomble in view of Cunico teaches:
The system of claim 12,
However, Moncomble in view of Cunico does not, but Bronder teaches:
further comprising detecting the disruption based at least on network latency or bandwidth of the video signal.([0031], “The operating system 140 may include a display controller 142 for controlling the GPU 120. For example, the display controller 142 may provide control commands to the GPU 120 to perform one or more specific graphics processing operations such as rendering source images or performing adjustments. The display controller 142 may include a streaming engine 144 for receiving and decoding a video stream 166, a latency module 146 for detecting a delay in the video stream 166, and a reprojection module 148 for generating an image adjustment to be applied to one or more images in one or more previous video frames when a current video frame is not received.”)
Moncomble in view of Cunico teaches generating a replacement frame upon detecting a certain event. Bronder teaches generating a replacement frame upon detecting a network delay.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Moncomble in view of Cunico with the specific teachings of Bronder to ensure frame display in the event of network delay.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YANNA WU whose telephone number is (571)270-0725. The examiner can normally be reached Monday-Thursday 8:00-5:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington, can be reached at 571-272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YANNA WU/Primary Examiner, Art Unit 2615