DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: an application module, a media framework module, a video decoder module, and a first plugin module in claims 11-19; and a second plugin module in claims 15-20.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Regarding claims 1-10, the language of the claims raises questions as to whether the claims are directed merely to an abstract idea that is not tied to a technological art, environment, or machine that would result in a practical application producing a concrete, useful, and tangible result so as to form the basis of statutory subject matter under 35 U.S.C. 101. Specifically, the application module, media framework module, video decoder module, first plugin module, and second plugin module, as disclosed in claims 1-19, are directed to software programs, as described in paragraph [0016] of the specification. Under the broadest reasonable interpretation, it is therefore entirely possible for the method comprising the different “modules,” as claimed, to cover an embodiment of software alone. Accordingly, the claims are properly rejected under 35 U.S.C. 101 as failing to fall within a statutory category.
Regarding claims 11-19, the language of the claims raises questions as to whether the claims are directed merely to an abstract idea that is not tied to a technological art, environment, or machine that would result in a practical application producing a concrete, useful, and tangible result so as to form the basis of statutory subject matter under 35 U.S.C. 101. The word "system" does not inherently mean that the claims are directed to a machine. Only if at least one of the claimed elements of the system is a physical part of a device can the system as claimed constitute part of a device, or a combination of devices, so as to be a machine within the meaning of 35 U.S.C. 101. Specifically, the application module, media framework module, video decoder module, first plugin module, and second plugin module, as recited in claims 1-19, are directed to software programs, as described in paragraph [0016] of the specification. Under the broadest reasonable interpretation, it is therefore entirely possible for the system comprising the different “modules,” as claimed, to cover an embodiment of software alone. Accordingly, the claims are properly rejected under 35 U.S.C. 101 as failing to fall within a statutory category.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4 and 6-19 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (CN 112866814, hereinafter Wang) in view of Shribman et al. (US 2016/0337426, hereinafter Shribman).
Regarding claim 1, Wang teaches a video processing method (page 18 section Technical field: a method and device for audio and video processing) comprising:
an application module (stream pulling module 602, fig. 6; page 32 paragraph 6: it can also be implemented by hardware) receiving a video (video stream; page 22 paragraph 1: Step 202: Pull the audio and video stream corresponding to the identifier of the audio and video stream to be processed; page 30 paragraph 2: The stream pulling module 602 is configured to pull the audio and video stream corresponding to the identifier of the audio and video stream to be processed);
a media framework module (stream pulling module 602, fig. 6; page 32 paragraph 6: it can also be implemented by hardware) sending a command (processing plug-in identifier list comprising processing plug-in identifiers corresponds to a command; page 21 paragraph 6: Step 201: Receive a processing task, the processing task at least including: an identification of the audio and video stream to be processed and a processing plug-in identification list) and the video (video stream);
a video decoder module (decoding module, page 30 paragraph 4; page 32 paragraph 6: it can also be implemented by hardware) decoding the video (video stream) to generate a decoded video (page 22 paragraph 5: The audio and video stream is decoded into frame data, and the frame data is stored in a shared memory; page 22 paragraph 6: In this step, when the stream is pulled, Puller can pass the pulled audio and video stream to the decoder (Decorder) for decoding, and Decorator decodes the audio and video stream to obtain frame data, where, for the video, the frame data can be an image frame in rgba format; for audio, the frame data can be in an uncompressed PCM format; page 30 paragraph 4: The decoding module is configured to decode the audio and video stream into frame data after pulling the audio and video stream corresponding to the identifier of the audio and video stream to be processed); and
a first plugin module (first processing plug-in in a first container, page 24 paragraphs 5-6; content processing module, page 19 paragraph 8/content processing module 603, page 30 paragraph 3; page 32 paragraph 6: it can also be implemented by hardware) receiving the decoded video (decoded frame data representing the audio and video streams) and outputting a first plugin output (processed audio and video stream/processed frame data) according to the command (page 19 paragraph 4: The processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list is sequentially called to perform content processing on the audio and video stream; page 19 paragraph 8: The content processing module is used to sequentially call the processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list to perform content processing on the audio and video stream; page 21 paragraph 4: Figure 2 is a flowchart of an audio and video processing method embodiment provided in the first embodiment of the application. The audio and video processing method is a cloud processing method similar to a pipeline. Multiple processing plugins can be connected in series in the cloud for audio and video processing. The video stream is processed; page 24 paragraph 4: after Decorator decodes the audio and video streams, it can directly transmit the decoded frame data to the processing plug-in corresponding to the first processing plug-in identifier in the processing plug-in identifier list. (The first processing plug-in identification here means that the execution order is the first, the subsequent second, etc. are similar). 
After the first processing plug-in finishes processing the frame data, it can transmit the processed frame data to the processing plug-in corresponding to the second processing plug-in identifier in the processing plug-in identifier list, to be processed by the second processing plug-in; page 24 paragraphs 5-6: In an embodiment, when the number of processing plug-ins to be called is at least two, the at least two processing plug-ins may be deployed in one or more containers of the same physical machine … each processing plugin can have a separate container; page 23 paragraph 6 – page 24 paragraph 1 describes how the processing plug-in reads the corresponding frame data from the shared memory according to the memory offset and processes the frame data. In this step, after the called processing plug-in receives the memory offset of the frame data currently to be processed, it can determine the storage location of the frame data currently to be processed in the shared memory according to the memory offset, read the corresponding frame data from that storage location, and then use the processing logic of the current processing plug-in to perform content processing on the read frame data. Step 203-3: Store the processed frame data in the shared memory, and continue to traverse the next processing plug-in identifier, calling the corresponding processing plug-in to read the processed frame data from the shared memory for processing, and so on, until the processing plug-in identifiers in the processing plug-in identifier list are traversed. In this step, after the current processing plug-in performs content processing on the frame data, the processed frame data can be stored in the shared memory. In one implementation, the processed frame data can be stored in the original storage location, and the original frame data can be overwritten. 
At the same time, when the processed frame data is stored in the shared memory, the current processing plug-in can send a notification message to the processing process to notify the processing process that the processing of the current plug-in is completed and trigger the processing of the next plug-in. After the processing process receives the notification message, it can determine that the current processing plug-in has completed its work, continue to traverse the next processing plug-in identifier in the processing plug-in identifier list, call the corresponding processing plug-in, and send the memory offset of the frame data to the processing plug-in; the processing plug-in repeats the process of step 203-2 and step 203-3 until all processing plug-in identifiers in the processing plug-in identifier list have been traversed, and all processing plug-ins corresponding to the processing plug-in identifiers have been processed; page 30 paragraph 3: The content processing module 603 is configured to sequentially call the processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list to perform content processing on the audio and video stream; page 26 paragraph 8: In step 205, the processing plug-ins corresponding to the processing plug-in identifiers in the post-processing plug-in identifier list corresponding to the mixed flow target are sequentially invoked to perform post-processing on the merged frame data).
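For purposes of illustration only, the shared-memory pipeline Wang describes above (decoded frame data stored in shared memory; each plug-in in the identifier list called in turn to read the frame at a memory offset, perform content processing, and overwrite the original storage location) may be sketched as follows. This is a minimal sketch under the assumption of same-size, in-place frame processing; all names are hypothetical and nothing below is drawn from Wang's disclosure.

```python
from multiprocessing import shared_memory

def run_pipeline(plugin_registry, plugin_id_list, frame_bytes):
    # Decoder step: store the decoded frame data in shared memory.
    shm = shared_memory.SharedMemory(create=True, size=len(frame_bytes))
    try:
        length = len(frame_bytes)
        shm.buf[:length] = frame_bytes
        offset = 0
        # Traverse the identifier list in order, calling each plug-in.
        for plugin_id in plugin_id_list:
            plugin = plugin_registry[plugin_id]
            # The plug-in reads its input at the memory offset ...
            data = bytes(shm.buf[offset:offset + length])
            # ... performs content processing ...
            processed = plugin(data)
            # ... and overwrites the original storage location (this sketch
            # assumes the processed frame is the same size as the input).
            shm.buf[offset:offset + len(processed)] = processed
            length = len(processed)
        return bytes(shm.buf[offset:offset + length])
    finally:
        shm.close()
        shm.unlink()
```

A call such as `run_pipeline(registry, ["denoise", "watermark"], frame)` (hypothetical identifiers) would correspond to traversing a two-entry processing plug-in identifier list, with the output of the previous plug-in serving as the input of the next.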
Wang does not explicitly teach modifying the first plugin module without affecting functions of at least the application module, the media framework module, and the video decoder module.
Shribman teaches modifying the first plugin module without affecting functions of at least the application module, the media framework module, and the video decoder module (end-users can update plug-ins dynamically without needing to make changes to the host application; [0143]: Typically, the host application provides services which the plug-in can use, including a way for plug-ins to register themselves with the host application, and protocol for the exchange of data with plug-ins. Plug-ins depend on the services provided by the host application and do not usually work by themselves. Conversely, the host application operates independently of the plug-ins, making it possible for end-users to add and update plug-ins dynamically without needing to make changes to the host application). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Shribman’s teaching of updating plug-ins without making changes to the host application to modify the process of Wang, because such a system enhances the user experience by enabling end-users to make necessary changes to the plug-ins to meet their specific needs.
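For illustration only, the host/plug-in relationship Shribman describes in [0143] (the host provides a registration service and a data-exchange protocol, and operates independently of the plug-ins, so plug-ins can be added or updated dynamically without changing the host) may be sketched as follows; the class and method names are hypothetical and are not drawn from either reference.

```python
class HostApplication:
    """Sketch of a plug-in host: it provides registration and a
    data-exchange protocol, and never changes when plug-ins change."""

    def __init__(self):
        self._plugins = {}

    def register(self, name, plugin):
        # Service the host provides: plug-ins register themselves here.
        self._plugins[name] = plugin

    def process(self, name, frame):
        # Protocol for exchanging data with a plug-in; an unregistered
        # name simply passes the frame through.
        plugin = self._plugins.get(name)
        return plugin(frame) if plugin is not None else frame

host = HostApplication()
host.register("grayscale", lambda frame: [sum(px) // 3 for px in frame])
# Updating the plug-in dynamically requires no change to HostApplication:
host.register("grayscale", lambda frame: [max(px) for px in frame])
```

The second `register` call replaces the plug-in at runtime while `HostApplication` itself is untouched, mirroring the independence of host and plug-in that the rejection relies on.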
Regarding claim 11, Wang teaches a video processing system (page 18 section Technical field: a method and device for audio and video processing) comprising:
an application module (this module is interpreted under 35 U.S.C. 112(f) as a hardware device; stream pulling module 602, fig. 6; page 32 paragraph 6: it can also be implemented by hardware) configured to receive a video (video stream; page 22 paragraph 1: Step 202: Pull the audio and video stream corresponding to the identifier of the audio and video stream to be processed; page 30 paragraph 2: The stream pulling module 602 is configured to pull the audio and video stream corresponding to the identifier of the audio and video stream to be processed);
a media framework module (this module is interpreted under 35 U.S.C. 112(f) as a hardware device; stream pulling module 602, fig. 6; page 32 paragraph 6: it can also be implemented by hardware) coupled to the application (fig. 4 and fig. 6), and configured to send a command (processing plug-in identifier list comprising processing plug-in identifiers corresponds to a command; page 21 paragraph 6: Step 201: Receive a processing task, the processing task at least including: an identification of the audio and video stream to be processed and a processing plug-in identification list) and the video (video stream);
a video decoder module (this module is interpreted under 35 U.S.C. 112(f) as a hardware device; decoding module, page 30 paragraph 4; page 32 paragraph 6: it can also be implemented by hardware) coupled to the media framework (fig. 4 and fig. 6), and configured to decode the video (video stream) to generate a decoded video (page 22 paragraph 5: The audio and video stream is decoded into frame data, and the frame data is stored in a shared memory; page 22 paragraph 6: In this step, when the stream is pulled, Puller can pass the pulled audio and video stream to the decoder (Decorder) for decoding, and Decorator decodes the audio and video stream to obtain frame data, where, for the video, the frame data can be an image frame in rgba format; for audio, the frame data can be in an uncompressed PCM format; page 30 paragraph 4: The decoding module is configured to decode the audio and video stream into frame data after pulling the audio and video stream corresponding to the identifier of the audio and video stream to be processed); and
a first plugin module (this module is interpreted under 35 U.S.C. 112(f) as a hardware device; first processing plug-in in a first container, page 24 paragraphs 5-6; content processing module, page 19 paragraph 8/content processing module 603, page 30 paragraph 3; page 32 paragraph 6: it can also be implemented by hardware) coupled to the video decoder (fig. 4 and fig. 6), and configured to receive the decoded video (decoded frame data representing the audio and video streams) and output a first plugin output (processed audio and video stream/processed frame data) according to the command (page 19 paragraph 4: The processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list is sequentially called to perform content processing on the audio and video stream; page 19 paragraph 8: The content processing module is used to sequentially call the processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list to perform content processing on the audio and video stream; page 21 paragraph 4: Figure 2 is a flowchart of an audio and video processing method embodiment provided in the first embodiment of the application. The audio and video processing method is a cloud processing method similar to a pipeline. Multiple processing plugins can be connected in series in the cloud for audio and video processing. The video stream is processed; page 24 paragraph 4: after Decorator decodes the audio and video streams, it can directly transmit the decoded frame data to the processing plug-in corresponding to the first processing plug-in identifier in the processing plug-in identifier list. (The first processing plug-in identification here means that the execution order is the first, the subsequent second, etc. are similar). 
After the first processing plug-in finishes processing the frame data, it can transmit the processed frame data to the processing plug-in corresponding to the second processing plug-in identifier in the processing plug-in identifier list, to be processed by the second processing plug-in; page 24 paragraphs 5-6: In an embodiment, when the number of processing plug-ins to be called is at least two, the at least two processing plug-ins may be deployed in one or more containers of the same physical machine … each processing plugin can have a separate container; page 23 paragraph 6 – page 24 paragraph 1 describes how the processing plug-in reads the corresponding frame data from the shared memory according to the memory offset and processes the frame data. In this step, after the called processing plug-in receives the memory offset of the frame data currently to be processed, it can determine the storage location of the frame data currently to be processed in the shared memory according to the memory offset, read the corresponding frame data from that storage location, and then use the processing logic of the current processing plug-in to perform content processing on the read frame data. Step 203-3: Store the processed frame data in the shared memory, and continue to traverse the next processing plug-in identifier, calling the corresponding processing plug-in to read the processed frame data from the shared memory for processing, and so on, until the processing plug-in identifiers in the processing plug-in identifier list are traversed. In this step, after the current processing plug-in performs content processing on the frame data, the processed frame data can be stored in the shared memory. In one implementation, the processed frame data can be stored in the original storage location, and the original frame data can be overwritten. 
At the same time, when the processed frame data is stored in the shared memory, the current processing plug-in can send a notification message to the processing process to notify the processing process that the processing of the current plug-in is completed and trigger the processing of the next plug-in. After the processing process receives the notification message, it can determine that the current processing plug-in has completed its work, continue to traverse the next processing plug-in identifier in the processing plug-in identifier list, call the corresponding processing plug-in, and send the memory offset of the frame data to the processing plug-in; the processing plug-in repeats the process of step 203-2 and step 203-3 until all processing plug-in identifiers in the processing plug-in identifier list have been traversed, and all processing plug-ins corresponding to the processing plug-in identifiers have been processed; page 30 paragraph 3: The content processing module 603 is configured to sequentially call the processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list to perform content processing on the audio and video stream; page 26 paragraph 8: In step 205, the processing plug-ins corresponding to the processing plug-in identifiers in the post-processing plug-in identifier list corresponding to the mixed flow target are sequentially invoked to perform post-processing on the merged frame data).
It should be further noted that Wang teaches a stream pulling module 602 that performs functions similar to those of both an application module and a media framework module, although the application module and the media framework module as claimed are two separate units of the system. However, in Nerwin v. Erlichman, 168 USPQ 177, 179 (PTO Bd. of Int. 1969), it was noted that “the mere fact that a given structure is integral does not preclude its consisting of various elements.” Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to separate the stream pulling module 602 of Wang into two separate modules functioning as an application module and a media framework module, because using separate dedicated hardware units for specific functions results in higher efficiency by freeing up resources. See also In re Dulberg, 289 F.2d 522, 523, 129 USPQ 348, 349 (CCPA 1961).
Wang does not explicitly teach the application module and the media framework module are separate modules; the first plugin module is modified without affecting functions of at least the application module, the media framework module, and the video decoder module.
Shribman teaches the first plugin module is modified without affecting functions of at least the application module, the media framework module, and the video decoder module (end-users can update plug-ins dynamically without needing to make changes to the host application; [0143]: Typically, the host application provides services which the plug-in can use, including a way for plug-ins to register themselves with the host application, and protocol for the exchange of data with plug-ins. Plug-ins depend on the services provided by the host application and do not usually work by themselves. Conversely, the host application operates independently of the plug-ins, making it possible for end-users to add and update plug-ins dynamically without needing to make changes to the host application). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Shribman’s teaching of updating plug-ins without making changes to the host application to modify the process of Wang, because such a system enhances the user experience by enabling end-users to make necessary changes to the plug-ins to meet their specific needs.
Regarding claim 12, the combination of Wang and Shribman teaches the system of Claim 11, wherein if the command indicates that the first plugin module (first processing plug-in in a first container) is selected (when a first processing plug-in identifier related to a first task is in the identifier list, the system will select the first processing plug-in identifier; Wang - page 19 paragraph 4: The processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list is sequentially called to perform content processing on the audio and video stream; Wang - page 19 paragraph 8: The content processing module is used to sequentially call the processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list to perform content processing on the audio and video stream; Wang - page 24 paragraphs 5-6: In an embodiment, when the number of processing plug-ins to be called is at least two, the at least two processing plug-ins may be deployed in one or more containers of the same physical machine … each processing plugin can have a separate container), the first plugin module performs a first function (processing task related to the first processing plug-in identifier) on the decoded video to generate the first plugin output (processed frame data; Wang - page 21 paragraph 6: Step 201: Receive a processing task, the processing task at least including: an identification of the audio and video stream to be processed and a processing plug-in identification list; Wang - page 21 paragraph 8: When the demander detects that the user triggers a processing business need, it can create processing tasks based on the user-triggered needs; Wang - page 22 paragraphs 8-9: Step 203: Call the processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list in turn to perform content processing on the audio and video stream. 
In this step, the processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list can be called to perform content processing on the frame data. Among them, the processing plug-ins corresponding to each processing plug-in identifier in the processing plug-in identifier list can work in series, and the output of the previous processing plug-in can be used as the input of the next processing plug-in until all processing plug-ins are processed; Wang - page 23 paragraph 1: In one embodiment, the processing plug-in identifier list may include at least two processing plug-in identifiers, and the execution order of the at least two processing plug-in identifiers; Wang - page 23 paragraph 6 – page 24 paragraph 1 describes how the processing plug-in reads the corresponding frame data from the shared memory according to the memory offset and processes the frame data. In this step, after the called processing plug-in receives the memory offset of the frame data currently to be processed, it can determine the storage location of the frame data currently to be processed in the shared memory according to the memory offset, read the corresponding frame data from that storage location, and then use the processing logic of the current processing plug-in to perform content processing on the read frame data. Step 203-3: Store the processed frame data in the shared memory, and continue to traverse the next processing plug-in identifier, calling the corresponding processing plug-in to read the processed frame data from the shared memory for processing, and so on, until the processing plug-in identifiers in the processing plug-in identifier list are traversed. In this step, after the current processing plug-in performs content processing on the frame data, the processed frame data can be stored in the shared memory. 
In one implementation, the processed frame data can be stored in the original storage location, and the original frame data can be overwritten. At the same time, when the processed frame data is stored in the shared memory, the current processing plug-in can send a notification message to the processing process to notify the processing process that the processing of the current plug-in is completed and trigger the processing of the next plug-in. After the processing process receives the notification message, it can determine that the current processing plug-in has completed its work, continue to traverse the next processing plug-in identifier in the processing plug-in identifier list, call the corresponding processing plug-in, and send the memory offset of the frame data to the processing plug-in; the processing plug-in repeats the process of step 203-2 and step 203-3 until all processing plug-in identifiers in the processing plug-in identifier list have been traversed, and all processing plug-ins corresponding to the processing plug-in identifiers have been processed).
Regarding claim 13, the combination of Wang and Shribman teaches the system of Claim 11, wherein: if the command indicates that the first plugin module (first processing plug-in in a first container) is not selected (when a first processing plug-in identifier related to a first task is not in the identifier list, the system will not select the first processing plug-in identifier; Wang - page 19 paragraph 4: The processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list is sequentially called to perform content processing on the audio and video stream; Wang - page 19 paragraph 8: The content processing module is used to sequentially call the processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list to perform content processing on the audio and video stream), the first plugin module passing the decoded video to output the first plugin output (processed frame data; Wang - page 21 paragraph 6: Step 201: Receive a processing task, the processing task at least including: an identification of the audio and video stream to be processed and a processing plug-in identification list; Wang - page 21 paragraph 8: When the demander detects that the user triggers a processing business need, it can create a processing task based on the user-triggered need; Wang - page 22 paragraphs 8-9: Step 203: Call the processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list in turn to perform content processing on the audio and video stream. In this step, the processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list can be called to perform content processing on the frame data.
Among them, the processing plug-ins corresponding to each processing plug-in identifier in the processing plug-in identifier list can work in series, and the output of the previous processing plug-in can be used as the input of the next processing plug-in until all processing plug-ins are processed; Wang - page 23 paragraph 1: In one embodiment, the processing plug-in identifier list may include at least two processing plug-in identifiers, and the execution order of the at least two processing plug-in identifiers; Wang - page 23 paragraph 6 – page 24 paragraph 1 describes the processing plug-in reading the corresponding frame data from the shared memory according to the memory offset and processing the frame data. In this step, after the called processing plug-in receives the memory offset of the frame data currently to be processed, it can determine the storage location of the frame data currently to be processed in the shared memory according to the memory offset, read the corresponding frame data from that storage location, and then use the processing logic of the current processing plug-in to perform content processing on the read frame data. Step 203-3: Store the processed frame data in the shared memory, continue to traverse the next processing plug-in identifier, and call the corresponding processing plug-in to read the processed frame data from the shared memory for processing, and so on, until the processing plug-in identifiers in the processing plug-in identifier list are traversed. In this step, after the current processing plug-in performs content processing on the frame data, the processed frame data can be stored in the shared memory. In one implementation, the processed frame data can be stored in the original storage location, overwriting the original frame data.
At the same time, when the processed frame data is stored in the shared memory, the current processing plug-in can send a notification message to the processing process to notify the processing process that the processing of the current plug-in is completed and trigger the processing of the next plug-in. After the processing process receives the notification message, it can determine that the current processing plug-in has completed its work, continue to traverse the next processing plug-in identifier in the processing plug-in identifier list, call the corresponding processing plug-in, and send the memory offset of the frame data to that processing plug-in; the processing plug-in repeats the process of step 203-2 and step 203-3 until all processing plug-in identifiers in the processing plug-in identifier list have been traversed, and all processing plug-ins corresponding to the processing plug-in identifiers have been processed).
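The serial shared-memory pipeline that Wang describes (each plug-in resolves a memory offset, reads the frame data, processes it, overwrites the original storage location, and the processing process then calls the next plug-in in the identifier list) can be sketched as follows. This is an illustrative sketch only; all names are hypothetical and Wang provides no source code.

```python
# Hypothetical sketch of Wang's serial plug-in pipeline: plug-ins are
# looked up by identifier, called in list order, and each overwrites the
# frame data at the shared-memory offset it was given.

def run_pipeline(shared_memory, offset, plugin_ids, plugins):
    """Traverse the plug-in identifier list in execution order, calling
    each corresponding plug-in on the frame data at `offset`."""
    for plugin_id in plugin_ids:               # execution order from the list
        plugin = plugins[plugin_id]            # look up plug-in by identifier
        frame = shared_memory[offset]          # read frame data at the offset
        shared_memory[offset] = plugin(frame)  # store result in original location
        # storing the result back stands in for the notification message
        # that tells the processing process to trigger the next plug-in
    return shared_memory[offset]

# Two plug-ins working in series: the output of the previous plug-in is
# used as the input of the next.
memory = {0: "frame"}
result = run_pipeline(
    memory, 0,
    ["beautify", "segment"],
    {"beautify": lambda f: f + "+beautified",
     "segment": lambda f: f + "+segmented"},
)
```

Because each plug-in writes its output to the original storage location, the next plug-in in the list automatically reads the previous plug-in's output, which is the series behavior the reference describes.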
Regarding claim 14, the combination of Wang and Shribman teaches the system of Claim 11, wherein: the media framework module further generates the command (processing plug-in identifier list comprising processing plug-in identifiers) according to a user scenario (Wang - page 21 paragraph 6: Step 201: Receive a processing task, the processing task at least including: an identification of the audio and video stream to be processed and a processing plug-in identification list; Wang - page 21 paragraph 8: When the demander detects that the user triggers a processing business need, it can create a processing task based on the user-triggered need).
Regarding claim 15, the combination of Wang and Shribman teaches the system of Claim 11, further comprising: a second plugin module (this module is interpreted under 35 U.S.C. 112(f) as a hardware device; Wang - second processing plug-in in a second container, page 24 paragraphs 5-6) coupled to the first plugin module (Wang - content processing module, page 19 paragraph 8/content processing module 603, page 30 paragraph 3; Wang - page 32 paragraph 6: it can also be implemented by hardware) and configured to receive the first plugin output and output a second plugin output (processed audio and video stream/processed frame data) according to the command (Wang - page 19 paragraph 4: The processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list is sequentially called to perform content processing on the audio and video stream; Wang - page 19 paragraph 8: The content processing module is used to sequentially call the processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list to perform content processing on the audio and video stream; Wang - page 21 paragraph 4: Figure 2 is a flowchart of an audio and video processing method embodiment provided in the first embodiment of the application. The audio and video processing method is a pipeline-like cloud processing method; multiple processing plug-ins can be connected in series in the cloud to process the audio and video stream; Wang - page 24 paragraph 4: after Decorator decodes the audio and video streams, it can directly transmit the decoded frame data to the processing plug-in corresponding to the first processing plug-in identifier in the processing plug-in identifier list. (The first processing plug-in identification here means that the execution order is the first; the subsequent second, etc. are similar).
After the first processing plug-in finishes processing the frame data, it can transmit the processed frame data to the processing plug-in corresponding to the second processing plug-in identifier in the processing plug-in identifier list, to be processed by the second processing plug-in; Wang - page 23 paragraph 6 – page 24 paragraph 1 describes the processing plug-in reading the corresponding frame data from the shared memory according to the memory offset and processing the frame data. In this step, after the called processing plug-in receives the memory offset of the frame data currently to be processed, it can determine the storage location of the frame data currently to be processed in the shared memory according to the memory offset, read the corresponding frame data from that storage location, and then use the processing logic of the current processing plug-in to perform content processing on the read frame data. Step 203-3: Store the processed frame data in the shared memory, continue to traverse the next processing plug-in identifier, and call the corresponding processing plug-in to read the processed frame data from the shared memory for processing, and so on, until the processing plug-in identifiers in the processing plug-in identifier list are traversed. In this step, after the current processing plug-in performs content processing on the frame data, the processed frame data can be stored in the shared memory. In one implementation, the processed frame data can be stored in the original storage location, overwriting the original frame data. At the same time, when the processed frame data is stored in the shared memory, the current processing plug-in can send a notification message to the processing process to notify the processing process that the processing of the current plug-in is completed and trigger the processing of the next plug-in.
After the processing process receives the notification message, it can determine that the current processing plug-in has completed its work, continue to traverse the next processing plug-in identifier in the processing plug-in identifier list, call the corresponding processing plug-in, and send the memory offset of the frame data to that processing plug-in; the processing plug-in repeats the process of step 203-2 and step 203-3 until all processing plug-in identifiers in the processing plug-in identifier list have been traversed, and all processing plug-ins corresponding to the processing plug-in identifiers have been processed; Wang - page 30 paragraph 3: The content processing module 603 is configured to sequentially call the processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list to perform content processing on the audio and video stream; Wang - page 26 paragraph 8: In step 205, the processing plug-ins corresponding to the processing plug-in identifiers in the post-processing plug-in identifier list corresponding to the mixed flow target are sequentially invoked to perform post-processing on the merged frame data); and wherein the first plugin module is modified without affecting the functions of the application module, the media framework module, and the video decoder module, and a second function of the second plugin module (end-users can update plug-ins dynamically without needing to make changes to the host application; Shribman - [0143]: Typically, the host application provides services which the plug-in can use, including a way for plug-ins to register themselves with the host application, and protocol for the exchange of data with plug-ins. Plug-ins depend on the services provided by the host application and do not usually work by themselves.
Conversely, the host application operates independently of the plug-ins, making it possible for end-users to add and update plug-ins dynamically without needing to make changes to the host application).
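The host/plug-in relationship Shribman describes (the host provides a registration service and a data-exchange protocol, and plug-ins can be added or updated dynamically without modifying the host) can be illustrated with the following sketch. The class and method names are hypothetical, not taken from the reference.

```python
# Hypothetical sketch of a host application that lets plug-ins register
# themselves and exchange data through a fixed protocol, so plug-ins can
# be updated without changing the host.

class Host:
    def __init__(self):
        self._plugins = {}

    def register(self, name, plugin):
        # Plug-ins register themselves with the host; re-registering the
        # same name updates the plug-in dynamically, host code unchanged.
        self._plugins[name] = plugin

    def process(self, name, data):
        # The host exchanges data with a plug-in via a fixed call
        # convention; it does not depend on any particular plug-in.
        return self._plugins[name](data)

host = Host()
host.register("filter", lambda d: d.upper())
out1 = host.process("filter", "frame")       # original plug-in behavior
host.register("filter", lambda d: d[::-1])   # dynamic update, host untouched
out2 = host.process("filter", "frame")       # updated plug-in behavior
```

The point of the sketch is that only the entry in the plug-in table changes between the two calls; the host's own code is identical before and after the update, which is the independence the reference relies on.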
Regarding claim 16, the combination of Wang and Shribman teaches the system of Claim 15, wherein if the command indicates that the second plugin module (second processing plug-in in a first container) is selected (when a second processing plug-in identifier related to a second task is in the identifier list, the system will select the second processing plug-in identifier; Wang - page 19 paragraph 4: The processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list is sequentially called to perform content processing on the audio and video stream; Wang - page 19 paragraph 8: The content processing module is used to sequentially call the processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list to perform content processing on the audio and video stream; Wang - page 24 paragraphs 5-6: In an embodiment, when the number of processing plug-ins to be called is at least two, the at least two processing plug-ins may be deployed in one or more containers of the same physical machine … each processing plug-in can have a separate container), the second plugin module performs the second function (processing task related to the second processing plug-in identifier) on the first plugin output to generate the second plugin output (processed frame data; Wang - page 21 paragraph 6: Step 201: Receive a processing task, the processing task at least including: an identification of the audio and video stream to be processed and a processing plug-in identification list; Wang - page 21 paragraph 8: When the demander detects that the user triggers a processing business need, it can create a processing task based on the user-triggered need; Wang - page 22 paragraphs 8-9: Step 203: Call the processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list in turn to perform content processing on the audio and video stream.
In this step, the processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list can be called to perform content processing on the frame data. Among them, the processing plug-ins corresponding to each processing plug-in identifier in the processing plug-in identifier list can work in series, and the output of the previous processing plug-in can be used as the input of the next processing plug-in until all processing plug-ins are processed; Wang - page 23 paragraph 1: In one embodiment, the processing plug-in identifier list may include at least two processing plug-in identifiers, and the execution order of the at least two processing plug-in identifiers; Wang - page 23 paragraph 6 – page 24 paragraph 1 describes the processing plug-in reading the corresponding frame data from the shared memory according to the memory offset and processing the frame data. In this step, after the called processing plug-in receives the memory offset of the frame data currently to be processed, it can determine the storage location of the frame data currently to be processed in the shared memory according to the memory offset, read the corresponding frame data from that storage location, and then use the processing logic of the current processing plug-in to perform content processing on the read frame data. Step 203-3: Store the processed frame data in the shared memory, continue to traverse the next processing plug-in identifier, and call the corresponding processing plug-in to read the processed frame data from the shared memory for processing, and so on, until the processing plug-in identifiers in the processing plug-in identifier list are traversed. In this step, after the current processing plug-in performs content processing on the frame data, the processed frame data can be stored in the shared memory.
In one implementation, the processed frame data can be stored in the original storage location, overwriting the original frame data. At the same time, when the processed frame data is stored in the shared memory, the current processing plug-in can send a notification message to the processing process to notify the processing process that the processing of the current plug-in is completed and trigger the processing of the next plug-in. After the processing process receives the notification message, it can determine that the current processing plug-in has completed its work, continue to traverse the next processing plug-in identifier in the processing plug-in identifier list, call the corresponding processing plug-in, and send the memory offset of the frame data to that processing plug-in; the processing plug-in repeats the process of step 203-2 and step 203-3 until all processing plug-in identifiers in the processing plug-in identifier list have been traversed, and all processing plug-ins corresponding to the processing plug-in identifiers have been processed).
Regarding claim 17, the combination of Wang and Shribman teaches the system of Claim 11, wherein: if the command indicates that the second plugin module (second processing plug-in in a second container) is not selected (when a second processing plug-in identifier related to a second task is not in the identifier list, the system will not select the second processing plug-in identifier; Wang - page 19 paragraph 4: The processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list is sequentially called to perform content processing on the audio and video stream; Wang - page 19 paragraph 8: The content processing module is used to sequentially call the processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list to perform content processing on the audio and video stream), the second plugin module passes the first plugin output to output the second plugin output (processed frame data; Wang - page 21 paragraph 6: Step 201: Receive a processing task, the processing task at least including: an identification of the audio and video stream to be processed and a processing plug-in identification list; Wang - page 21 paragraph 8: When the demander detects that the user triggers a processing business need, it can create a processing task based on the user-triggered need; Wang - page 22 paragraphs 8-9: Step 203: Call the processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list in turn to perform content processing on the audio and video stream. In this step, the processing plug-in corresponding to each processing plug-in identifier in the processing plug-in identifier list can be called to perform content processing on the frame data.
Among them, the processing plug-ins corresponding to each processing plug-in identifier in the processing plug-in identifier list can work in series, and the output of the previous processing plug-in can be used as the input of the next processing plug-in until all processing plug-ins are processed; Wang - page 23 paragraph 1: In one embodiment, the processing plug-in identifier list may include at least two processing plug-in identifiers, and the execution order of the at least two processing plug-in identifiers; Wang - page 23 paragraph 6 – page 24 paragraph 1 describes the processing plug-in reading the corresponding frame data from the shared memory according to the memory offset and processing the frame data. In this step, after the called processing plug-in receives the memory offset of the frame data currently to be processed, it can determine the storage location of the frame data currently to be processed in the shared memory according to the memory offset, read the corresponding frame data from that storage location, and then use the processing logic of the current processing plug-in to perform content processing on the read frame data. Step 203-3: Store the processed frame data in the shared memory, continue to traverse the next processing plug-in identifier, and call the corresponding processing plug-in to read the processed frame data from the shared memory for processing, and so on, until the processing plug-in identifiers in the processing plug-in identifier list are traversed. In this step, after the current processing plug-in performs content processing on the frame data, the processed frame data can be stored in the shared memory. In one implementation, the processed frame data can be stored in the original storage location, overwriting the original frame data.
At the same time, when the processed frame data is stored in the shared memory, the current processing plug-in can send a notification message to the processing process to notify the processing process that the processing of the current plug-in is completed and trigger the processing of the next plug-in. After the processing process receives the notification message, it can determine that the current processing plug-in has completed its work, continue to traverse the next processing plug-in identifier in the processing plug-in identifier list, call the corresponding processing plug-in, and send the memory offset of the frame data to that processing plug-in; the processing plug-in repeats the process of step 203-2 and step 203-3 until all processing plug-in identifiers in the processing plug-in identifier list have been traversed, and all processing plug-ins corresponding to the processing plug-in identifiers have been processed).
Regarding claim 18, the combination of Wang and Shribman teaches the system of Claim 15, wherein: the second plugin module is modified without affecting the functions of the application module, the media framework module, and the video decoder module, and a first function of the first plugin module (end-users can update plug-ins dynamically without needing to make changes to the host application; Shribman - [0143]: Typically, the host application provides services which the plug-in can use, including a way for plug-ins to register themselves with the host application, and protocol for the exchange of data with plug-ins. Plug-ins depend on the services provided by the host application and do not usually work by themselves. Conversely, the host application operates independently of the plug-ins, making it possible for end-users to add and update plug-ins dynamically without needing to make changes to the host application).
Regarding claim 19, the combination of Wang and Shribman teaches the system of Claim 15, wherein the first plugin module and the second plugin module are configured to perform video post-processing (the beautification plug-in for beautification and the background segmentation plug-in for background segmentation correspond to plug-ins for video post-processing; Wang - page 28 paragraph 5: Then comes the MediaFlow stage; each audio and video stream has a corresponding MediaFlow, and each MediaFlow includes one or more processing plug-ins (ie plug-in 1...plug-in N in Figure 4, such as the beautification plug-in for beautification, the background segmentation plug-in for background segmentation, etc.), through plug-in 1...plug-in N to sequentially process the audio frame or video frame; multiple processing plug-ins are executed in series, the processing result of the previous processing plug-in can be used as the input of the next processing plug-in, and the output of the last processing plug-in is used as the output of the current MediaFlow).
Claims 2-4 and 6-10 are similar in scope to claims 12-14 and 15-19, and therefore the examiner provides similar rationale to reject these claims.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Wang, in view of Shribman, and further in view of Anderson et al. (US 2017/0060650, hereinafter Anderson).
Regarding claim 5, the combination of Wang and Shribman does not explicitly teach the method of Claim 1, further comprising: the first plugin module returning the first plugin output to the media framework module.
Anderson teaches the first plugin module (extension script) returning the first plugin output to the media framework module (application is functionally analogous to the media framework module; [0015]: When the extension script has been executed, the extension script provides an output, typically in the same standardized form as the data received with the extension script call, and the data is returned to the calling app or application; [0037]: The method 300 then retrieves and executes 304 the extension script in view of the argument within a script execution environment of the agent process to obtain result data. The agent process then returns 306 the result data to the application. In some embodiments, the argument is received 302 with the extension script call in a particular format and the result data is returned 306 to the application in the same particular format). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Anderson's teaching of returning the result data obtained by executing the extension script to the application, and to modify the system of Wang and Shribman accordingly, because such a system returns the result data in the same particular format that the calling application expects, for further processing within the application ([0037]).
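Anderson's extension-script call (the application passes an argument in a standardized form, the agent executes the script, and the result is returned to the caller in the same form) can be sketched as follows. The use of JSON as the standardized form, and all function names, are assumptions for illustration; the reference does not specify a format.

```python
# Hypothetical sketch of an extension-script call that returns its
# result to the calling application in the same format it received.
import json

def call_extension(script, argument_json):
    argument = json.loads(argument_json)  # argument arrives in a particular format
    result = script(argument)             # execute the extension script on it
    return json.dumps(result)             # return the result in the same format

# Example extension script: doubles every value in the argument.
double_all = lambda arg: [x * 2 for x in arg["values"]]
returned = call_extension(double_all, json.dumps({"values": [1, 2, 3]}))
```

Keeping the outbound format identical to the inbound one is what lets the calling application consume the result without any extra conversion step, which is the rationale the rejection relies on.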
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Bhatia et al. (US 2018/0349283) describes an example implementation for redirecting HTML5 video in a virtual desktop environment. While the provided implementation discusses HTML5 video, the described methods and systems can be used with other video types, as applicable. The solution injects a scripting layer into a web browser to intercept an HTML5 video inside a virtual desktop. The virtual desktop can be, for example, a VMware Horizon virtual desktop, available from VMware, Inc. This layer communicates video commands and encoded video content to a server process inside the virtual desktop. The server process, acting as a proxy between the web browser and a plugin on the client device, transfers this information to the client plugin. The client plugin opens a counterpart video renderer application that interprets the received video commands, decodes the video content, and renders the HTML5 video on the client device. Furthermore, the plugin uses a clipping region for headless video composition on the client, giving the illusion of the video playing inside the virtual desktop.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JWALANT B AMIN whose telephone number is (571) 272-2455. The examiner can normally be reached Monday-Friday 10am-6:30pm CST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said Broome can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JWALANT AMIN/Primary Examiner, Art Unit 2612