Prosecution Insights
Last updated: April 19, 2026
Application No. 18/929,694

SYSTEM AND METHOD FOR PRODUCING A VIDEO STREAM

Status: Final Rejection (§103)
Filed: Oct 29, 2024
Examiner: TRAN, LOI H
Art Unit: 2484
Tech Center: 2400 — Computer Networks
Assignee: Livearena Technologies AB
OA Round: 4 (Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 2y 10m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 64% of resolved cases (394 granted / 611 resolved; +6.5% vs TC avg)
Interview Lift: +23.6% (strong; allow rate with vs. without an interview, among resolved cases)
Avg Prosecution: 2y 10m (typical timeline)
Currently Pending: 25
Total Applications: 636 (career history, across all art units)
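
The headline figures above appear to be simple arithmetic over the card's raw counts. As a quick sanity check, here is a minimal sketch (not the analytics provider's actual formulas, and assuming the interview lift is additive in percentage points, which matches the displayed 64% and 88%):

```python
# Hypothetical reconstruction of the card's numbers from its raw counts.
granted, resolved = 394, 611              # career totals shown above
allow_rate = granted / resolved           # ~0.645 -> shown as 64%

interview_lift = 0.236                    # +23.6% lift shown above
with_interview = allow_rate + interview_lift   # ~0.88 -> shown as 88%

tc_average = allow_rate - 0.065           # "+6.5% vs TC avg" implies a ~58% TC baseline

print(f"Career allow rate:        {allow_rate:.1%}")
print(f"Grant prob. w/ interview: {with_interview:.1%}")
print(f"Implied TC average:       {tc_average:.1%}")
```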

Statute-Specific Performance

§101: 6.3% (-33.7% vs TC avg)
§103: 54.9% (+14.9% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 12.5% (-27.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 611 resolved cases.
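
One detail worth noting: if each "vs TC avg" delta is read as a simple difference in percentage points (an assumption on my part, not something the chart states), every statute row implies the same reference value of roughly 40%, suggesting the deltas in this chart are measured against a single common baseline rather than per-statute averages:

```python
# Assumed reading of the chart: delta = examiner rate - Tech Center baseline.
examiner_rate = {"§101": 6.3, "§103": 54.9, "§102": 14.8, "§112": 12.5}   # percent
delta_vs_tc   = {"§101": -33.7, "§103": 14.9, "§102": -25.2, "§112": -27.5}

for statute, rate in examiner_rate.items():
    implied_baseline = rate - delta_vs_tc[statute]
    print(f"{statute}: implied baseline ≈ {implied_baseline:.1f}%")   # each prints ≈ 40.0%
```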

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation under 35 USC § 112

2. Claims 15-16 are interpreted under 35 U.S.C. 112(f) as described in the previous non-final office action.

Response to Arguments

3. Applicant’s arguments with respect to the rejections of claims 1-16 have been fully considered but they are moot in view of the new grounds of rejection.

Response to Amendment

4. In response to the amendment, the rejections of claims 1-16 under 112(b) have been withdrawn.

Claim Rejections - 35 USC § 103

5. The text of those sections of Title 35, U.S. Code not included in this section can be found in a prior Office action.

6. Claims 1-2, 4-16 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Swierk et al. (US Patent 11,350,059) in view of Holzer et al. (US Patent 10,070,154), and further in view of Yu (US Publication 2023/0412656).

Regarding claim 1, Swierk discloses a method for providing an output digital video stream, the method comprising: continuously collecting a real-time first primary digital video stream (Swierk, fig. 8, step 812, col. 54, lines 31-35, receiving captured videoframes during the current videoconference session and transmitting the captured videoframes to the trained intelligent appearance monitoring management system neural network); performing a first digital image analysis of the first primary digital video stream so as to identify at least one first event or pattern in the first primary digital video stream, the first digital image analysis resulting in a first production control parameter being established based on the detection of the first event or pattern, the first digital image analysis taking a certain time to perform causing the first production control parameter to be established after a first time delay in relation to a time of occurrence of the first event or pattern in the first primary digital video stream (Swierk, col. 24, lines 33-50, performing image processing on video frames to detect the user’s image appearance anomaly, the detection process implicitly consumes a processing “delay” time; in response to a detected cough or sneeze event in an example embodiment, the trained intelligent appearance monitoring management system neural network may generate/output an optimized appearance filtering adjustment to alter an image of the user in videoframes, i.e., a first production control parameter); applying the first production control parameter to the real-time first primary digital video stream, the application of the first production control parameter resulting in the first primary digital video stream being modified based on the first production control parameter without being delayed by the first time delay, to produce a first produced digital video stream, wherein the first production control parameter is applied to the first primary digital video stream at a time in the first primary digital video stream which is later, by at least the first time delay, than the time of occurrence of the first event or pattern in the first primary digital video stream (Swierk, col.
3, lines 55-64, a trained intelligent appearance monitoring management system neural network may output optimized appearance filtering adjustments that include self-correction adjustments to a user's image by replacing a user's image of appearance anomaly with a stock image, such as a stock photo of the user during some or all of the videoconferencing session; as a result, an edited videoconference video is produced; col. 24, lines 33-50, the user image appearance anomaly in the videoframes may be ongoing, may last for a preset duration of time, or may last while the user appearance anomaly lasts; col. 25, line 59 to col 26, line 4, the detected cough/sneeze event is based on detected movement or facial expression indicating a cough/sneeze, and the filter is applied when the actual cough/sneeze takes place; col. 56, lines 5-15, the trained intelligent appearance monitoring management system neural network may correlate one or more types of output optimized appearance filtering adjustments for altering the user's image within a videoframe that corresponds with the detection of one or more user appearance anomalies. For example, the trained neural network may generate optimized appearance filtering adjustments by invoking or adjusting processing of the captured videoframes via one or more AV processing instruction modules to alter the user's image, replace the user's image, or correct the user's image in the videoframe, i.e., producing a produced/modified video stream; col. 6 line 49 through col. 7 line 13, the neural network trained for the transmitting information handling system in embodiments may output optimized appearance filtering adjustments as an optimized processor setting (e.g., offload instruction); such offload instructions may include an instruction to execute one or more AV processing instruction modules using a non-CPU processor (e.g., GPU, VPU, GNA). The intelligent appearance monitoring management system may transmit this instruction to the multimedia framework pipeline and infrastructure platform controlling or directing execution of such AV processing instruction modules when application of optimized appearance filtering adjustments require additional processing that may otherwise load the CPU to a point where additional errors or issues may occur. By decreasing the computational requirements of the CPU on of the captured audio or video samples upon which such AV processing instruction modules may be executed such as for applying corrective user image alterations in some embodiments, the processing power required to perform such an execution at the transmitting information handling system may also markedly decrease. 
Further, by offloading these executions to a non-CPU processor, the undesirable side effects (e.g., video lag, glitches, slowing of peripheral applications) associated with over-taxing the CPU during such executions may be avoided); based on the disclosure above, it is implicit or obvious that off-loading executions to non-CPU processors may allow the image processing on video frames to detect the user’s image appearance anomaly and the process of outputting a produced video stream to be performed in separate processes or threads; as a result, using optimized appearance filtering adjustments to modify the video stream, without being delayed by the first time delay due to the image appearance anomaly detection process, generates a produced/modified videoconference video stream, wherein the optimized appearance filtering adjustments is applied to the videoconference video at a time which is later, by at least the first time delay, than the time of image appearance anomaly detection in the videoconference video); and continuously providing the output digital video stream to at least one participating client, the output digital video stream being provided as, or being based on, the first produced digital video stream, wherein the first primary digital video stream is continuously captured by a camera arranged locally in relation to the participating client and locally in relation to a computer device performing the first digital image analysis and the application of the first production control parameter (Swierk, fig. 8, steps 806-824, col. 3, line 48 to column 4 line 19, and col. 24 lines 33-62, camera system arranged for the videoconference session continues capturing videoframes and identifies users’ images for image detection; col. 33, lines 21-25, an information handling system transmits video of one participant user while simultaneously executes code instructions for the MMCA 550 to display videos of other participants within a shared user session of a video conferencing system). Swierk does not explicitly disclose: applying the parameter applying while continuously providing an output digital video stream, wherein the output digital video stream remains uninterrupted during the establishing and applying, continuously providing the output digital video stream to at least one participating client, the output digital video stream being provided as, or being based on, the first produced digital video stream after application of the first production control parameter. Holzer discloses applying a control parameter to a received video while continuously providing an output stream wherein the output stream remains uninterrupted during execution of other processes associated with the received video (Holzer, col. 3 lines 16-47, video filtering may include multiple iterations of a technique or multiple instantiations of a mechanism. Filters modify and/or add to the visual data of a media object such as a live video stream or a multi-view interactive digital media representation. Given information about the content of the scene, scene elements can be used as reference coordinate system for filters, as masks to apply filters only to certain parts of the scene or to act as occluder for a filter, and for other such purposes. While the user keeps pointing the camera at the scene, more data can be sent which can be used by the server to improve the obtained information about the already observed scene and to also obtain information about scene parts which were not captured previously; col. 
4 lines 47-56, FIG. 1, as shown is one example of a system that can be used to perform a live video stream filtering. As depicted, a combination of client and server applications is used to implement a filtering mechanism that runs live in a capture device application, such as with a camera on a smartphone. While the camera is recording, the user points the camera at an object. The smartphone then communicates with the server, and collectively the two devices analyze the video stream to provide a filtered view of the video stream in real time). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Holzer’s features into Swierk’s invention for enhancing user’s viewing experience by using longer time delay taken by additional image detection to cover for video filtering without being delayed by the time delay. Swierk-Holzer does not explicitly disclose but Yu discloses continuously providing the output digital video stream to at least one participating client, the output digital video stream being provided as, or being based on, the first produced digital video stream after application of the first production control parameter (Yu, fig. 5 through fig. 8 , para’s 0067-0101, display video conference using a first aspect ratio; determining an event in the conference video; determining a second aspect ratio; after dynamically applying the second aspect ratio to the conference video to generate an adjusted conference video, continuously providing the adjusted conference video to at least one conference participant). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Yu’s features into Swierk-Holzer’s invention for enhancing participants’ viewing experience by dynamically adjusting conference video based on detected event of interest and continuously displaying the adjusted conference video to conference participants. Regarding claim 2, Swierk-Holzer-Yu discloses the method of claim 1, further comprising: producing the output digital video stream based on both the first primary digital video stream and the first produced digital video stream (Swierk, column 24 lines 33-64, a trained intelligent appearance monitoring management system neural network may output optimized appearance filtering adjustments that include self-correction adjustments to a user's image by image filtering or replacing a user's image of appearance anomaly with a stock image, such as a stock photo of the user during some or all of the videoconferencing session; as a result, an edited videoconference video is produced). Regarding claim 4, Swierk-Holzer-Yu discloses the method of claim 1, wherein: the first primary digital video stream is continuously captured by a camera so as to show a participating user of the participating client in the first primary digital video stream (Swierk, fig. 8, steps 806-824, col. 3, line 48 to column 4 line 19, and col. 24 lines 33-62, camera system arranged for the videoconference session continues capturing videoframes and identifies users’ images for image detection). 
Regarding claim 5, Swierk-Holzer-Yu discloses the method of claim 1, wherein: the first production control parameter comprises one or several of: a) a location of, or tracking information with respect to, a stationary or moving object or person in the first primary digital video stream, the location or tracking information being automatically detected using digital image processing (Swierk, col. 24 line 63 to col. 25 line 12, a location of an object in the first primary digital video stream, the location information being automatically detected using digital image processing); b) a discrete production command, automatically generated based on the detection of a predetermined event or pattern, and/or automatically generated based on a predetermined or variable production schedule; c) a virtual panning and/or zooming instruction; and d) camera stabilizing information, automatically generated based on a camera movement detection. Regarding claim 6, Swierk-Holzer-Yu discloses the method of claim 1, wherein: the first digital image analysis is performed by a computer device that is also arranged to provide the output digital video stream to the at least one participant client (Swierk, col. 7 line 53 to column 8 line 7; FIG. 1 illustrates an information handling system 100 similar to information handling systems according to several aspects of the present disclosure. As described herein, the intelligent appearance monitoring management system 170 in an embodiment may operate to identify user appearance anomalies and to generate optimized appearance filtering adjustments that may adjust the user's image during the capture, processing, encoding, and transmission of a media sample (e.g., including audio or video) from a transmitting information handling system to a receiving information handling system. The information handling system 100 described with reference to FIG. 1 may represent a transmitting information handling system or a receiving information handling system in various embodiments. In still other embodiments, information handling system 100 may operate as both a transmitting and a receiving information handling system, as may be the case for an information handling system transmitting video of one participant user while simultaneously executing code instructions for the multimedia multi-user collaboration application (MMCA) 150 to display videos of other participants within a shared user session). Regarding claim 7, Swierk-Holzer-Yu discloses the method of claim 6, wherein: the first digital image analysis and the providing of the output digital video stream are performed in separate processes or threads (Swierk, col. 57 line 36 to col. 58 line 27, processor offload instruction adjustments may be made to assist in providing for alterations to a user's image within videoframes as part of optimized appearance filtering adjustments to alter the user's image in a videoframe; this may also decrease latency and jitter, as well as freeing up processing resources for execution of other applications at the information handling system of interest, thus improving overall performance of that information handling system; col. 56, lines 5-15, the trained intelligent appearance monitoring management system neural network may correlate one or more types of output optimized appearance filtering adjustments for altering the user's image within a videoframe that corresponds with the detection of one or more user appearance anomalies. 
For example, the trained neural network may generate optimized appearance filtering adjustments by invoking or adjusting processing of the captured videoframes via one or more AV processing instruction modules to alter the user's image, replace the user's image, or correct the user's image in the videoframe, i.e., producing a produced/modified video stream; col. 6 line 49 through col. 7 line 13, the neural network trained for the transmitting information handling system in embodiments may output optimized appearance filtering adjustments as an optimized processor setting (e.g., offload instruction); such offload instructions may include an instruction to execute one or more AV processing instruction modules using a non-CPU processor (e.g., GPU, VPU, GNA). The intelligent appearance monitoring management system may transmit this instruction to the multimedia framework pipeline and infrastructure platform controlling or directing execution of such AV processing instruction modules when application of optimized appearance filtering adjustments require additional processing that may otherwise load the CPU to a point where additional errors or issues may occur. By decreasing the computational requirements of the CPU on of the captured audio or video samples upon which such AV processing instruction modules may be executed such as for applying corrective user image alterations in some embodiments, the processing power required to perform such an execution at the transmitting information handling system may also markedly decrease. Further, by offloading these executions to a non-CPU processor, the undesirable side effects (e.g., video lag, glitches, slowing of peripheral applications) associated with over-taxing the CPU during such executions may be avoided); based on the disclosure above, it is implicit or obvious that offloading executions to non-CPU processors may allow the image processing on video frames to detect the user’s image appearance anomaly and the process of outputting a produced video stream to be performed in separate processes or threads). Regarding claim 8, Swierk-Holzer-Yu discloses the method of claim 6, further comprising: processor-throttling the first digital image analysis as a function of current processor load of the computer device performing the first digital image analysis, so that the provision of the output digital video stream has processor priority over the first digital image analysis (Swierk, col. 57 line 36 to col. 58 line 27, processor offload instruction adjustments may be made to assist in providing for alterations to a user's image within videoframes as part of optimized appearance filtering adjustments to alter the user's image in a videoframe; this may also decrease latency and jitter, as well as freeing up processing resources for execution of other applications at the information handling system of interest, thus improving overall performance of that information handling system; col. 56, lines 5-15, the trained intelligent appearance monitoring management system neural network may correlate one or more types of output optimized appearance filtering adjustments for altering the user's image within a videoframe that corresponds with the detection of one or more user appearance anomalies. 
For example, the trained neural network may generate optimized appearance filtering adjustments by invoking or adjusting processing of the captured videoframes via one or more AV processing instruction modules to alter the user's image, replace the user's image, or correct the user's image in the videoframe, i.e., producing a produced/modified video stream; col. 6 line 49 through col. 7 line 13, the neural network trained for the transmitting information handling system in embodiments may output optimized appearance filtering adjustments as an optimized processor setting (e.g., offload instruction); such offload instructions may include an instruction to execute one or more AV processing instruction modules using a non-CPU processor (e.g., GPU, VPU, GNA). The intelligent appearance monitoring management system may transmit this instruction to the multimedia framework pipeline and infrastructure platform controlling or directing execution of such AV processing instruction modules when application of optimized appearance filtering adjustments require additional processing that may otherwise load the CPU to a point where additional errors or issues may occur. By decreasing the computational requirements of the CPU on of the captured audio or video samples upon which such AV processing instruction modules may be executed such as for applying corrective user image alterations in some embodiments, the processing power required to perform such an execution at the transmitting information handling system may also markedly decrease. Further, by offloading these executions to a non-CPU processor, the undesirable side effects (e.g., video lag, glitches, slowing of peripheral applications) associated with over-taxing the CPU during such executions may be avoided); based on the disclosure above, it is implicit or obvious that offloading executions to non-CPU processors may allow the image processing on video frames to detect the user’s image appearance anomaly and the process of outputting a produced video stream to be performed in separate processes or threads); offloading processes to non-CPU processor restricts usage of CPU processor, therefore can obviously be considered as processor-throttling). Regarding claim 9, Swierk-Holzer-Yu discloses the method of claim 8, wherein: the processor-throttling of the first digital image analysis is performed by limiting the first digital image analysis to only a subpart of all video frames of the first primary digital video stream (Swierk, col. 57 line 36 to col. 58 line 27, processor offload instruction adjustments may be made to assist in providing for alterations to a user's image within videoframes as part of optimized appearance filtering adjustments to alter the user's image in a videoframe; this may also decrease latency and jitter, as well as freeing up processing resources for execution of other applications at the information handling system of interest, thus improving overall performance of that information handling system; Swierk, col. 24, lines 33-50, the image processing may be performed on a portion of video frames to detect the user’s image appearance anomaly). Regarding claim 10, Swierk-Holzer-Yu discloses the method of any claim 1, further comprising: identify at least one second event or pattern in the first primary digital video stream and/or the digital audio stream (see Swierk, col. 54 line54 to col. 
55 line 21); performing a second digital image analysis of the first primary digital video stream, and/or a second digital audio analysis of a digital audio stream continuously captured and associated with the first primary digital video stream, so as to identify at least one second event or pattern in the first primary digital video stream and/or the digital audio stream, the second digital image or audio analysis taking a certain time to perform causing a second production control parameter to be established after a second time delay in relation to a time of occurrence of the second event or pattern in the first primary digital video stream, the second time delay being longer than the first time delay; and applying the second production control parameter to the real-time first primary digital video stream, the application of the second production control parameter resulting in the first primary digital video stream being modified based on the second production control parameter without being delayed by the second time delay, so as to produce the first produced digital video stream (Holzer, col. 3 lines 16-47, video filtering may include multiple iterations of a technique or multiple instantiations of a mechanism. Filters modify and/or add to the visual data of a media object such as a live video stream or a multi-view interactive digital media representation. Given information about the content of the scene, scene elements can be used as reference coordinate system for filters, as masks to apply filters only to certain parts of the scene or to act as occluder for a filter, and for other such purposes. While the user keeps pointing the camera at the scene, more data can be sent which can be used by the server to improve the obtained information about the already observed scene and to also obtain information about scene parts which were not captured previously; in summary, video filtering may include a second digital image analysis. The feature "the second time delay being longer than the first time delay" is only a result of the circumstances and therefore exhibits no technical effect. This feature is therefore implicitly included; the different detection/identification processes of different user appearance anomalies inevitably take different amounts of time). The motivation for combining the references would have been for enhancing user’s video editing experience by using longer time delay taken by additional image detection to cover for video filtering without being delayed by the time delay. Regarding claim 11, Swierk-Holzer-Yu discloses the method of claim 10, wherein: the second digital image analysis is performed by a computer device which is remote in relation to a computer device performing the application of the second production control parameter (Holzer, col. 3 line 41 to col. 4 line 3, and column 7 lines 3-45; according to various embodiments, video filtering is provided through a client-server communication system. During this process, a video frame is transmitted from the client device to the server. The server processes the video frame to produce filtering information and then transmits a filter processing message to the client device that indicates how to apply a filter to the video frame. The client device then applies the filtering information to create a filtered video stream. Filters modify and/or add to the visual data of a media object such as a live video stream or a multi-view interactive digital media representation. 
One example for a modification is a change to the color matrix, such as darkening the colors associated with a video stream. Examples of additions include, but are not limited to, adding 2D or 3D stickers or text that is placed relative to a reference coordinate system. For instance, a thought bubble may be placed near to a person's head and continue to stay with the person as the person moves. Given information about the content of the scene, scene elements can be used as reference coordinate system for filters, as masks to apply filters only to certain parts of the scene or to act as occluder for a filter, and for other such purposes. Although the computing capabilities of mobile devices increase over time, their computational power is still a limiting factor for advanced algorithms that allow to obtain detailed information about the content of a scene. In some implementations, the server may respond to the client device with information that the client device can use to apply a filter to a frame. Alternately, or additionally, the server may add a filter to a frame and then provide the filtered frame to the client device). The motivation to combine the references would have been for enhancing user’s non-delayed video editing experience by using remote facility to process video filtering and generating a more effective second production control parameter. Regarding claim 12, Swierk-Holzer-Yu discloses the method of claim 10, wherein: the second production control parameter constitutes an input to the first digital image analysis (Swierk, col. 24 line 63 to col. 25 line 12, the boundary detection module 382 in an embodiment may operate to identify a user's image boundary, including for identification of a user's head, face, hair, and body, as described above for several purposes. Such boundary detection may be used for overlaying a virtual background image/blur on the background portion of the videoframes around the user boundary in some embodiments. As described, the boundary detection module may also be used to determine the location of the user for application of blur or another image or color on some or all of the user's image portion of the videoframes. In yet another embodiment, the boundary detection module 382 may detect the user's face, head or body boundaries to identify the user's face, hair or the like within each captured image making up the video sample. In this way, that portion of the image may input to the trained neural network for identification of user appearance anomalies). Regarding claim 13, Swierk-Holzer-Yu discloses the method of claim 10, wherein: the second production control parameter comprises one or several of: a) a second primary video stream; and b) an instruction regarding whether or not to show, in the first produced digital video stream, a certain participating user, the participating user in question being automatically identified based on digital image processing (Swierk, fig. 8, steps 806-824, col. 33, lines 21-25, multiple video stream associated with users of the teleconference session; col. 56 lines 5-26, an instruction regarding whether or not to show, in the first produced digital video stream, a certain participating user, the participating user in question being automatically identified based on digital image processing). Claims 14-15 are rejected for the same reasons set forth in claim 1, Swierk-Holzer-Yu further discloses a system and executing instructions (see Swierk, fig. 1, col. 5, lines 1-5, a system and instructions, col. 
12, lines 41-44, computer readable medium). Regarding claim 16, Swierk-Holzer-Yu discloses the system of claim 15, wherein: the system further comprises several cameras, each camera being arranged to capture a respective non-delayed primary digital video stream; and the production function is arranged to produce the non-delayed first produced digital video stream based on each of the captured primary digital video streams (Swierk, col. 23 lines 4-22, the video processing engine 380 may operate at the direction of the AV processing instruction manager 341 to perform one or more of the algorithms associated with the plurality of modules within the video processing engine 380. Several AV processing instruction modules are contemplated for execution during operation of the MMCA including several not depicted in FIG. 3 such as an eye contact correction module to operate in an embodiment in which multiple video media samples are captured from a plurality of cameras, user framing module operating to identify a user's face and center the face within the captured videoframes, user zoom modules to select an appropriate zoom level on the user image in the videoframe based on distance from a camera, a zoom and face normalizer module that operate to crop, enlarge, or scale down various captured images constituted the captured video sample to normalize the size of the user's face across each of the captured images, shading adjustment modules, color blending modules, and others). 7. Claim 3 is rejected under AIA 35 U.S.C. 103 as being unpatentable over Swierk-Holzer-Yu, as applied to claim 2 above, in view of Thomas (US Publication 2022/0007127). Regarding claim 3, Swierk-Holzer-Yu discloses the method of claim 2. Swierk-Holzer-Yu does not explicitly disclose but Thomas discloses, wherein: the collecting further comprises continuously capturing a first digital audio stream, the first digital audio stream being associated with the first primary digital video stream, is captured, and wherein the method further comprises time-synchronizing the first digital audio stream with the first produced video stream; and providing the time-synchronized first digital audio stream to the at least one participating client together with or as a part of the output digital video stream (Thomas, para. 0054, the spatial audio sample(s) 438 output by the stereo converter circuitry 436 are transmitted to audio and video (A/V) time synchronization circuitry 440 of the input device 102. The A/V time synchronization circuitry 440 also receives the video stream signal 402 output by the video encoder 326 of FIG. 3. The A/V time synchronization circuitry 440 performs time synchronization of the spatial audio sample(s) 438 and the video stream signal 402. For example, the spatial audio sample(s) 438 and the video stream signal 402 can include time stamp data corresponding to a time at which the audio or video was captured. The A/V time synchronization circuitry 440 matches the time stamp(s) of the spatial audio sample(s) 438 to the time stamp(s) of the video stream signal 402 to create a time synchronized audio and video stream. The A/V time synchronization circuitry 440 outputs data including synchronized video data and spatial audio data to the user device 123. For example, the time synchronized spatial audio is output to the speaker(s) 308 and the time synchronized video data is output to the display screen 122). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Thomas’ features into Swierk-Holzer-Yu’s invention for enhancing user’s video editing experience by performing time synchronization of the spatial audio sample(s) 438 and the video stream signal.

Conclusion

8. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to LOI H TRAN whose telephone number is (571)270-5645. The examiner can normally be reached 8:00AM-5:00PM PST, first Friday of biweek off. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, THAI TRAN, can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LOI H TRAN/
Primary Examiner, Art Unit 2484
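
For readers skimming the rejection, the architecture at issue in claim 1 is easier to see in code than in claim language. The sketch below is purely illustrative: it is not taken from the application, Swierk, Holzer, or Yu, and every name in it (the 0.5-second analysis delay, the blur_speaker parameter, the frame format) is a made-up stand-in. It only shows the shape of the claimed arrangement as mapped above: a slow image analysis runs in its own thread, and the control parameter it eventually establishes is applied to the still-running live stream at a later point, so the output stream is never interrupted while the analysis catches up.

```python
import queue
import threading
import time

frames = queue.Queue()       # real-time first primary digital video stream (simulated)
control_params = []          # (time_established, parameter) pairs, shared with the producer

def first_digital_image_analysis(frame):
    """Stand-in for the slow first digital image analysis."""
    time.sleep(0.5)                                  # the "first time delay"
    if frame["id"] % 10 == 0:                        # pretend every 10th frame is an event
        return "blur_speaker"                        # hypothetical production control parameter
    return None

def analysis_worker():
    # Runs in its own thread so the output loop below is never blocked by it.
    while True:
        frame = frames.get()
        param = first_digital_image_analysis(frame)
        if param is not None:
            # Established only after the delay, hence applied to the stream at a
            # time later than the event by at least that delay.
            control_params.append((time.time(), param))

def produce_output_frame(frame):
    """Apply whatever parameters have been established so far, without waiting."""
    active = [p for established, p in control_params if established <= frame["t"]]
    return {"id": frame["id"], "applied": active}

threading.Thread(target=analysis_worker, daemon=True).start()

for i in range(60):                                  # simulated capture loop (~30 fps)
    frame = {"id": i, "t": time.time()}
    frames.put(frame)                                # hand the frame to the analysis thread
    output = produce_output_frame(frame)             # output continues uninterrupted
    if output["applied"]:
        print("frame", output["id"], "produced with", output["applied"])
    time.sleep(0.033)
```

Claim 7's "separate processes or threads" limitation corresponds to the daemon-thread split sketched here, and claim 8's processor-throttling simply gives the output loop priority over the analysis.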

Prosecution Timeline

Oct 29, 2024: Application Filed
Dec 14, 2024: Non-Final Rejection — §103
Apr 16, 2025: Response Filed
May 13, 2025: Final Rejection — §103
Sep 16, 2025: Request for Continued Examination
Sep 18, 2025: Response after Non-Final Action
Sep 30, 2025: Non-Final Rejection — §103
Dec 19, 2025: Interview Requested
Dec 19, 2025: Response Filed
Jan 03, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598366: CONTENT DATA PROCESSING METHOD AND CONTENT DATA PROCESSING APPARATUS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12593112: METHOD, DEVICE, AND COMPUTER PROGRAM FOR ENCAPSULATING REGION ANNOTATIONS IN MEDIA TRACKS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12592261: VIDEO EDITING METHOD AND APPARATUS, AND DEVICE AND STORAGE MEDIUM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12576798: CAMERA SYSTEM AND ASSISTANCE SYSTEM FOR A VEHICLE AND A METHOD FOR OPERATING A CAMERA SYSTEM (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579810: SYSTEM AND METHOD FOR AUTOMATIC EVENTS IDENTIFICATION ON VIDEO (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 64%
With Interview: 88% (+23.6%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 611 resolved cases by this examiner. Grant probability derived from career allow rate.
