DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments filed 12/30/25 with respect to claims 17, 26, 28 and 36 have been read and considered but are moot because claims 17, 19-20, 23-24, 26-28, 30-31 and 34-36 are now rejected under 35 U.S.C. 103 as being unpatentable over Liu (US 2022/0224943) and Nakanishi (US 2019/0251418) in view of Jiang (US 2022/0217371).
Based on an updated search, a new reference, Jiang (US 2022/0217371), is cited for disclosing “…wherein the distortion comprises one or more of the following: a feature-element-wise distortion, or a cross-entropy loss”. See the rejection below for details.
Claims 18, 21, 29 and 32 are now rejected under 35 U.S.C. 103 as being unpatentable over Liu (US 2022/0224943), Nakanishi (US 2019/0251418) and Jiang (US 2022/0217371) in view of Hwang (US 2019/0075325).
Claims 22 and 33 are now rejected under 35 U.S.C. 103 as being unpatentable over Liu (US 2022/0224943), Nakanishi (US 2019/0251418) and Jiang (US 2022/0217371) in view of Ma (US 2022/0295116).
Claim 25 is now rejected under 35 U.S.C. 103 as being unpatentable over Liu (US 2022/0224943), Nakanishi (US 2019/0251418) and Jiang (US 2022/0217371) in view of Chen (US 2022/0159285).
Thus, the rejection is maintained.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 17, 19-20, 23-24, 26-28, 30-31 and 34-36 are rejected under 35 U.S.C. 103 as being unpatentable over Liu (US 2022/0224943) and Nakanishi (US 2019/0251418) in view of Jiang (US 2022/0217371).
Regarding claim 17, Liu discloses an apparatus for encoding (paragraph [92], Liu discloses encoding embodiment) comprising:
at least one processor (paragraph [183], Liu discloses processor for executing computer software programs stored in memory); and
at least one memory including computer program code (paragraph [183], Liu discloses memory storing computer software programs);
wherein the at least one memory and the computer program code (paragraph [183], Liu discloses processor for executing computer software programs stored in memory) are configured to, with the at least one processor (paragraph [183], Liu discloses processor for executing computer software programs stored in memory), cause the apparatus at least to:
receive a video sequence comprising a first frame and a second frame (paragraph [96], fig.6, Liu discloses that intra encoder 622 receives blocks of a current picture from current images in a sequence of images, and paragraph [95], Liu discloses inter encoder 630 receives blocks from current images and blocks from reference images; thus, Liu discloses receiving a video sequence of images that includes at least first and second frames);
encode the first frame into a first coded frame using a first coding method (paragraph [99], fig.6, Liu discloses entropy encoder 625 for encoding the first frame using a first coding method to generate an encoded bitstream based on outputs received from intra encoder 622, inter encoder 630, residue encoder 624 and general controller 621);
reconstruct a first decoded frame corresponding to the first coded frame (paragraph [98], fig.6, Liu discloses residue calculator 623 is configured to calculate a difference between the received block data of the current frame and the block data of the previous frame, and the result of the difference is sent to residue decoder 628 to generate decoded residue data, thus reconstructing a first decoded frame corresponding to the first coded frame);
derive one or more optimizing parameters to adjust a filter (paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6), wherein said one or more optimizing parameters reduce distortion of the first decoded frame to produce a first filtered frame (paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, wherein fig.9, “ALF” module performs adaptive loop filtering for producing a filtered frame, and that iTR+iQ is considered to be the local decoder for generating a locally decoded frame to eventually send to the ALF module to produce a decoded first filtered frame, wherein paragraph [98], Liu discloses that residue decoder 628 performs inverse transform and generates decoded residue data to generate decoded frame, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6);
filter the first decoded frame with the filter (paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, wherein fig.9, “ALF” module performs adaptive loop filtering for producing a filtered frame, and that iTR+iQ is considered to be the local decoder for generating a locally decoded frame to eventually send to the ALF module to produce a decoded first filtered frame, wherein paragraph [98], Liu discloses that residue decoder 628 performs inverse transform and generates decoded residue data to generate decoded frame);
encode the second frame into a second coded frame by using the first filtered frame directly or indirectly for prediction (paragraph [115], Liu discloses that output of residue decoder 628 is sent to “Buffer” and eventually to Inter module and ME (motion estimation) module for further processing the filtered frame as outputted from ALF module for further refinement of the image data and generate difference data and send to the “Combined Prediction” module for using the first filtered frame directly or indirectly for prediction, and the output of “Combined Prediction” can generate data for sending to the adder located before the “TR+Q” section, wherein the output of “TR+Q” section is sent to “Entropy Coding” for encoding the second frame to generate a second coded frame, and so on in a cyclical manner for continuously generating subsequent frames in a sequence of frames; paragraph [99], fig.6, Liu discloses entropy encoder 625 for encoding the second frame; thus, Liu discloses encoding a second frame to generate a second coded frame); and
signal said one or more optimizing parameters (paragraph [99], Liu discloses entropy encoder 625 for generating an encoded bitstream to include the sequence of frames along with various information such as general control data, selected prediction information, residue information and other suitable information (i.e., optimizing parameters such as filter information pertaining to the ALF module) for coding the video image data, wherein paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6).
Liu does not disclose encode the second frame into a second coded frame by a second set of algorithms of a second coding method and by using the first filtered frame directly or indirectly for prediction. However, Nakanishi teaches a second set of algorithms of a second coding method (paragraph [28], Nakanishi discloses first encoding method (lossy compression) applied with first compression apparatus 12 and second encoding method (lossless compression) applied with second compression apparatus 14). Since Liu discloses “encode the second frame into a second coded frame by using the first filtered frame directly or indirectly for prediction”, and Nakanishi discloses “…a second set of algorithms of a second coding method”, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu and Nakanishi together as a whole to arrive at the limitation “encode the second frame into a second coded frame by a second set of algorithms of a second coding method and by using the first filtered frame directly or indirectly for prediction” so as to optimize encoder efficiency by shortening the calculation time needed for compression and decompression of video image data (Nakanishi’s paragraph [25]).
Liu and Nakanishi do not disclose wherein the distortion comprises one or more of the following: a feature-element-wise distortion, or a cross-entropy loss. However, Jiang teaches wherein the distortion comprises one or more of the following: a feature-element-wise distortion (paragraph [97], Jiang discloses obtaining distortion loss, wherein distortion loss can comprise a MSE (mean square error) ascertained from the difference of feature representation computed based on x_sub_i and x_hat_sub_i, and that the feature representation can originate from a weight map to emphasize and represent the distortion of the reconstructed facial area or different parts (i.e., elements) of the facial area, thus obtaining a feature-element-wise distortion, similar to page 62, lines 4-7 of Applicant's specification, wherein Applicant discloses that feature-element-wise distortion comes from MSE (mean square error) computed on extracted feature elements), or a cross-entropy loss (paragraph [97], Jiang discloses obtaining a cross-entropy loss from classification error to obtain distortion data based on DNN (deep neural network) processing). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu, Nakanishi and Jiang together as a whole for efficiently compressing and decompressing facial features from video image data for data transmission and video conferencing applications (Jiang’s paragraph [89]).
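For illustration only, the two distortion terms discussed above can be sketched as follows. This is a hypothetical sketch, not code from Jiang or any other cited reference; all function and variable names (e.g., feature_elementwise_mse, weight_map) are the editor's assumptions.

```python
import numpy as np

def feature_elementwise_mse(feat, feat_hat, weight_map=None):
    # MSE computed element-wise on extracted feature maps; an optional
    # weight map can emphasize particular regions (cf. Jiang, paragraph [97]).
    diff = (feat - feat_hat) ** 2
    if weight_map is not None:
        diff = diff * weight_map
    return float(diff.mean())

def cross_entropy_loss(probs, label):
    # Cross-entropy of predicted class probabilities against the true label,
    # i.e., a classification error used as a distortion signal.
    return float(-np.log(probs[label] + 1e-12))

# Toy feature maps standing in for features of x_sub_i and x_hat_sub_i.
feat = np.array([[0.2, 0.4], [0.6, 0.8]])
feat_hat = np.array([[0.1, 0.4], [0.7, 0.8]])
d = feature_elementwise_mse(feat, feat_hat)
ce = cross_entropy_loss(np.array([0.7, 0.3]), 0)
```

Either term (or a weighted combination) could serve as the "distortion" minimized when deriving the optimizing parameters.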
Regarding claim 19, Liu discloses wherein to derive said one or more optimizing parameters (paragraph [71], Liu discloses that lambda value of rate-distortion optimization techniques is derived from parameters for encoding the sequence of image data; paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6), the apparatus is further caused to: derive the distortion in relation to the first frame (paragraph [71], Liu discloses that lambda value of rate-distortion optimization techniques is derived from parameters for encoding the sequence of image data; paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6).
Regarding claim 20, Liu discloses wherein the first coding method is an end-to-end learned image coding method (paragraph [111], Liu discloses a VCM (Video Coding for Machines) architecture for processing image data, wherein the architecture of fig.8 is an end-to-end learned image coding method using a neural network architecture for processing image data).
Regarding claim 23, Liu discloses wherein the apparatus is further caused to: derive said one or more optimizing parameters by a rate-distortion optimization process (paragraph [71], Liu discloses that lambda value of rate-distortion optimization techniques is derived from parameters for encoding the sequence of image data; paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6).
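For context on the rate-distortion optimization process cited above, the standard Lagrangian selection can be sketched as follows. This is an illustrative sketch of the general technique, not Liu's implementation; the candidate values and names (pick_filter_params, lmbda) are hypothetical.

```python
def rd_cost(distortion, rate, lmbda):
    # Lagrangian rate-distortion cost: J = D + lambda * R.
    return distortion + lmbda * rate

def pick_filter_params(candidates, lmbda):
    # Each candidate is (params, distortion, rate); choose the minimal-J one.
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lmbda))

candidates = [
    ("strong_filter", 2.0, 10.0),  # low distortion, costly to signal
    ("medium_filter", 3.0, 4.0),
    ("filter_off",    6.0, 0.0),   # free to signal, high distortion
]
best = pick_filter_params(candidates, lmbda=0.5)
```

The lambda value trades off how much extra bitrate the encoder will spend (e.g., on signaling ALF parameters) per unit of distortion reduction.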
Regarding claim 24, Liu discloses wherein to signal said one or more optimizing parameters (paragraph [99], Liu discloses entropy encoder 625 for generating an encoded bitstream to include the sequence of frames along with various information such as general control data, selected prediction information, residue information and other suitable information (i.e., optimizing parameters such as filter information pertaining to the ALF module) for coding the video image data, wherein paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6), the apparatus is further caused to: encode said one or more optimizing parameters (paragraph [99], Liu discloses entropy encoder 625 for generating an encoded bitstream to include the sequence of frames along with various information such as general control data, selected prediction information, residue information and other suitable information (i.e., optimizing parameters such as filter information pertaining to the ALF module) for coding the video image data, wherein paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6).
Liu does not disclose the second coding method. However, Nakanishi teaches implementing the second coding method (paragraph [28], Nakanishi discloses first encoding method (lossy compression) applied with first compression apparatus 12 and second encoding method (lossless compression) applied with second compression apparatus 14). Since Liu discloses “wherein to signal said one or more optimizing parameters, the apparatus is further caused to: encode said one or more optimizing parameters”, and Nakanishi discloses “the second coding method”, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu and Nakanishi together as a whole to arrive at the limitation “wherein to signal said one or more optimizing parameters, the apparatus is further caused to: encode said one or more optimizing parameters by the second coding method” so as to optimize encoder efficiency by shortening the calculation time needed for compression and decompression of video image data (Nakanishi’s paragraph [25]).
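To illustrate the lossy first method / lossless second method split that Nakanishi's paragraphs [28]-[29] describe, consider the following toy sketch. It is the editor's hypothetical example (the quantization step, parameter names, and use of deflate are assumptions), showing only why losslessly coding the optimizing parameters guarantees the decoder recovers them exactly.

```python
import json
import zlib

def encode_frame_lossy(samples, step=8):
    # Toy lossy "first coding method": uniform quantization (information is lost).
    return [s // step for s in samples]

def encode_params_lossless(params):
    # Toy lossless "second coding method": serialize and deflate the
    # optimizing parameters so they survive transmission exactly.
    return zlib.compress(json.dumps(params).encode("utf-8"))

def decode_params_lossless(blob):
    return json.loads(zlib.decompress(blob).decode("utf-8"))

# Hypothetical ALF-style parameters to be signaled.
alf_params = {"filter_taps": [1, -2, 5, -2, 1], "clip_idx": 3}
blob = encode_params_lossless(alf_params)
roundtrip = decode_params_lossless(blob)
```

Frames tolerate the quantization loss, while the filter parameters round-trip bit-exactly.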
Regarding claim 26, Liu discloses an apparatus for decoding (paragraph [101], Liu discloses decoding embodiment) comprising:
at least one processor (paragraph [183], Liu discloses processor for executing computer software programs stored in memory); and
at least one memory including computer program code (paragraph [183], Liu discloses memory storing computer software programs);
wherein the at least one memory and the computer program code (paragraph [183], Liu discloses processor for executing computer software programs stored in memory) are configured to, with the at least one processor (paragraph [183], Liu discloses processor for executing computer software programs stored in memory), cause the apparatus at least to:
receive a first coded frame and a second coded frame (paragraph [102], Liu discloses entropy decoder 771 receives a coded video sequence which comprises a sequence of frames that includes at least a first coded frame and a second coded frame);
receive one or more optimizing parameters (paragraph [102], Liu discloses that, at entropy decoder 771, the encoded bitstream is received and parsed to extract certain symbols that represent the syntax elements for representing the coded picture, wherein the certain symbols represent block coding mode (i.e., intra mode, inter mode, bi-predicted mode, merge submodes, etc.), prediction information (i.e., intra prediction, inter prediction), residual information, quantized transform coefficients, and the like, wherein “and the like” can represent optimizing parameters such as adaptive loop filtering information from the encoder embodiment, wherein paragraph [115], at the encoder embodiment, Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6);
decode the first coded frame into a first decoded frame using a first decoding method (paragraph [102], Liu discloses entropy decoder 771 that receives image information comprising a sequence of images including a first image, a second image, and subsequent images thereafter);
adjust a filter with the one or more optimizing parameters (paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6), where the one or more optimizing parameters reduce distortion of the first decoded frame to produce a first filtered frame (paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, wherein fig.9, “ALF” module performs adaptive loop filtering for producing a filtered frame, and that iTR+iQ is considered to be the local decoder for generating a locally decoded frame to eventually send to the ALF module to produce a decoded first filtered frame, wherein paragraph [98], Liu discloses that residue decoder 628 performs inverse transform and generates decoded residue data to generate decoded frame, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6);
filter the first decoded frame with the filter (paragraph [102], Liu discloses that, at entropy decoder 771, the encoded bitstream is received and parsed to extract certain symbols that represent the syntax elements for representing the coded picture, wherein the certain symbols represent block coding mode (i.e., intra mode, inter mode, bi-predicted mode, merge submodes, etc.), prediction information (i.e., intra prediction, inter prediction), residual information, quantized transform coefficients, and the like, wherein “and the like” can represent optimizing parameters such as adaptive loop filtering information from the encoder embodiment, wherein paragraph [115], at the encoder embodiment, Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, wherein fig.9, “ALF” module performs adaptive loop filtering for producing a filtered frame, and that iTR+iQ is considered to be the local decoder for generating a locally decoded frame to eventually send to the ALF module to produce a decoded first filtered frame, wherein paragraph [98], Liu discloses that residue decoder 628 performs inverse transform and generates decoded residue data to generate decoded frame); and
decode the second coded frame into a second decoded frame (paragraph [102], Liu discloses entropy decoder 771 that receives image information comprising a sequence of images including a first image, a second image, and subsequent images thereafter) by using the first filtered frame directly or indirectly for prediction (paragraph [102], Liu discloses that, at entropy decoder 771, the encoded bitstream is received and parsed to extract certain symbols that represent the syntax elements for representing the coded picture, wherein the certain symbols represent block coding mode (i.e., intra mode, inter mode, bi-predicted mode, merge submodes, etc.), prediction information (i.e., intra prediction, inter prediction), residual information, quantized transform coefficients, and the like, wherein “and the like” can represent optimizing parameters such as adaptive loop filtering information from the encoder embodiment, and wherein paragraph [115], Liu discloses that output of residue decoder 628 is sent to “Buffer” and eventually to Inter module and ME (motion estimation) module for further processing the filtered frame as outputted from ALF module for further refinement of the image data and generate difference data and send to the “Combined Prediction” module for using the first filtered frame directly or indirectly for prediction, and the output of “Combined Prediction” can generate data for sending to the adder located before the “TR+Q” section, wherein the output of “TR+Q” section is sent to “Entropy Coding” for encoding the second frame to generate a second coded frame, and so on in a cyclical manner for continuously generating subsequent frames in a sequence of frames; paragraph [99], fig.6, Liu discloses entropy encoder 625 for encoding the second frame; thus, Liu discloses encoding a second frame to generate a second coded frame to be sent to the decoding embodiment for decoding a second coded frame to generate a second decoded frame).
Liu does not disclose decode the second coded frame into a second decoded frame by a second set of algorithms of a second decoding method and by using the first filtered frame directly or indirectly for prediction. However, Nakanishi teaches a second set of algorithms of a second decoding method (paragraph [29], Nakanishi discloses first decoding method (lossy decompression) applied with first decompression apparatus 22 and second decoding method (lossless decompression) applied with second decompression apparatus 24). Since Liu discloses “decode the second coded frame into a second decoded frame by using the first filtered frame directly or indirectly for prediction”, and Nakanishi discloses “…a second set of algorithms of a second decoding method”, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu and Nakanishi together as a whole to arrive at the limitation “decode the second coded frame into a second decoded frame by a second set of algorithms of a second decoding method and by using the first filtered frame directly or indirectly for prediction” so as to optimize encoder efficiency by shortening the calculation time needed for compression and decompression of video image data (Nakanishi’s paragraph [25]).
Liu and Nakanishi do not disclose wherein the distortion comprises one or more of the following: a feature-element-wise distortion, or a cross-entropy loss. However, Jiang teaches wherein the distortion comprises one or more of the following: a feature-element-wise distortion (paragraph [97], Jiang discloses obtaining distortion loss, wherein distortion loss can comprise a MSE (mean square error) ascertained from the difference of feature representation computed based on x_sub_i and x_hat_sub_i, and that the feature representation can originate from a weight map to emphasize and represent the distortion of the reconstructed facial area or different parts (i.e., elements) of the facial area, thus obtaining a feature-element-wise distortion, similar to page 62, lines 4-7 of Applicant's specification, wherein Applicant discloses that feature-element-wise distortion comes from MSE (mean square error) computed on extracted feature elements), or a cross-entropy loss (paragraph [97], Jiang discloses obtaining a cross-entropy loss from classification error to obtain distortion data based on DNN (deep neural network) processing). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu, Nakanishi and Jiang together as a whole for efficiently compressing and decompressing facial features from video image data for data transmission and video conferencing applications (Jiang’s paragraph [89]).
Regarding claim 27, Liu discloses wherein the first decoding method is an end-to-end learned image decoding method (paragraph [111], Liu discloses a VCM (Video Coding for Machines) architecture for processing image data, wherein the architecture of fig.8 is an end-to-end learned image decoding method using a neural network architecture for processing image data).
Regarding claim 28, Liu discloses a method for encoding (paragraph [92], Liu discloses encoding embodiment), comprising:
receiving a video sequence comprising a first frame and a second frame (paragraph [96], fig.6, Liu discloses that intra encoder 622 receives blocks of a current picture from current images in a sequence of images, and paragraph [95], Liu discloses inter encoder 630 receives blocks from current images and blocks from reference images; thus, Liu discloses receiving a video sequence of images that includes at least first and second frames);
encoding the first frame into a first coded frame using a first coding method (paragraph [99], fig.6, Liu discloses entropy encoder 625 for encoding the first frame using a first coding method to generate an encoded bitstream based on outputs received from intra encoder 622, inter encoder 630, residue encoder 624 and general controller 621);
reconstructing a first decoded frame corresponding to the first coded frame (paragraph [98], fig.6, Liu discloses residue calculator 623 is configured to calculate a difference between the received block data of the current frame and the block data of the previous frame, and the result of the difference is sent to residue decoder 628 to generate decoded residue data, thus reconstructing a first decoded frame corresponding to the first coded frame);
deriving one or more optimizing parameters to adjust a filter (paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6), wherein said one or more optimizing parameters reduce distortion of the first decoded frame to produce a first filtered frame (paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, wherein fig.9, “ALF” module performs adaptive loop filtering for producing a filtered frame, and that iTR+iQ is considered to be the local decoder for generating a locally decoded frame to eventually send to the ALF module to produce a decoded first filtered frame, wherein paragraph [98], Liu discloses that residue decoder 628 performs inverse transform and generates decoded residue data to generate decoded frame, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6);
filtering the first decoded frame with the filter (paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, wherein fig.9, “ALF” module performs adaptive loop filtering for producing a filtered frame, and that iTR+iQ is considered to be the local decoder for generating a locally decoded frame to eventually send to the ALF module to produce a decoded first filtered frame, wherein paragraph [98], Liu discloses that residue decoder 628 performs inverse transform and generates decoded residue data to generate decoded frame);
encoding the second frame into a second coded frame by using the first filtered frame directly or indirectly for prediction (paragraph [115], Liu discloses that output of residue decoder 628 is sent to “Buffer” and eventually to Inter module and ME (motion estimation) module for further processing the filtered frame as outputted from ALF module for further refinement of the image data and generate difference data and send to the “Combined Prediction” module for using the first filtered frame directly or indirectly for prediction, and the output of “Combined Prediction” can generate data for sending to the adder located before the “TR+Q” section, wherein the output of “TR+Q” section is sent to “Entropy Coding” for encoding the second frame to generate a second coded frame, and so on in a cyclical manner for continuously generating subsequent frames in a sequence of frames; paragraph [99], fig.6, Liu discloses entropy encoder 625 for encoding the second frame; thus, Liu discloses encoding a second frame to generate a second coded frame); and
signaling said one or more optimizing parameters (paragraph [99], Liu discloses entropy encoder 625 for generating an encoded bitstream to include the sequence of frames along with various information such as general control data, selected prediction information, residue information and other suitable information (i.e., optimizing parameters such as filter information pertaining to the ALF module) for coding the video image data, wherein paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6).
Liu does not disclose encoding the second frame into a second coded frame by a second set of algorithms of a second coding method and by using the first filtered frame directly or indirectly for prediction. However, Nakanishi teaches a second set of algorithms of a second coding method (paragraph [28], Nakanishi discloses first encoding method (lossy compression) applied with first compression apparatus 12 and second encoding method (lossless compression) applied with second compression apparatus 14). Since Liu discloses “encoding the second frame into a second coded frame by using the first filtered frame directly or indirectly for prediction”, and Nakanishi discloses “…a second set of algorithms of a second coding method”, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu and Nakanishi together as a whole to arrive at the limitation “encoding the second frame into a second coded frame by a second set of algorithms of a second coding method and by using the first filtered frame directly or indirectly for prediction” so as to optimize encoder efficiency by shortening the calculation time needed for compression and decompression of video image data (Nakanishi’s paragraph [25]).
Liu and Nakanishi do not disclose wherein the distortion comprises one or more of the following: a feature-element-wise distortion, or a cross-entropy loss. However, Jiang teaches wherein the distortion comprises one or more of the following: a feature-element-wise distortion (paragraph [97], Jiang discloses obtaining distortion loss, wherein distortion loss can comprise a MSE (mean square error) ascertained from the difference of feature representation computed based on x_sub_i and x_hat_sub_i, and that the feature representation can originate from a weight map to emphasize and represent the distortion of the reconstructed facial area or different parts (i.e., elements) of the facial area, thus obtaining a feature-element-wise distortion, similar to page 62, lines 4-7 of Applicant's specification, wherein Applicant discloses that feature-element-wise distortion comes from MSE (mean square error) computed on extracted feature elements), or a cross-entropy loss (paragraph [97], Jiang discloses obtaining a cross-entropy loss from classification error to obtain distortion data based on DNN (deep neural network) processing). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu, Nakanishi and Jiang together as a whole for efficiently compressing and decompressing facial features from video image data for data transmission and video conferencing applications (Jiang’s paragraph [89]).
Regarding claim 30, Liu discloses wherein the deriving said one or more optimizing parameters (paragraph [71], Liu discloses that lambda value of rate-distortion optimization techniques is derived from parameters for encoding the sequence of image data; paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6) comprises deriving the distortion in relation to the first frame (paragraph [71], Liu discloses that lambda value of rate-distortion optimization techniques is derived from parameters for encoding the sequence of image data; paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6).
Regarding claim 31, Liu discloses wherein the first coding method is an end-to-end learned image coding method (paragraph [111], Liu discloses a VCM (Video Coding for Machines) architecture for processing image data, wherein the architecture of fig.8 is an end-to-end learned image coding method using a neural network architecture for processing image data).
Regarding claim 34, Liu discloses wherein the deriving said one or more optimizing parameters comprises: a rate-distortion optimization process (paragraph [71], Liu discloses that lambda value of rate-distortion optimization techniques is derived from parameters for encoding the sequence of image data; paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6).
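For illustration only, the rate-distortion optimization referenced above (a lambda-based Lagrangian cost, consistent with Liu's paragraphs [71] and [93]) can be sketched as follows; the mode names and numeric values are hypothetical:

```python
def rd_optimal_mode(candidates, lam):
    """Pick the coding choice minimizing the Lagrangian cost D + lambda * R,
    where D is the distortion and R is the bit cost of that choice."""
    return min(candidates, key=lambda m: m["D"] + lam * m["R"])

# Hypothetical candidate modes with their distortion (D) and rate (R).
modes = [
    {"name": "intra", "D": 10.0, "R": 100.0},  # cost 10 + 0.1*100 = 20.0
    {"name": "inter", "D": 14.0, "R": 40.0},   # cost 14 + 0.1*40  = 18.0
    {"name": "skip",  "D": 30.0, "R": 2.0},    # cost 30 + 0.1*2   = 30.2
]
best = rd_optimal_mode(modes, lam=0.1)  # "inter" has the lowest cost
```

The lambda value trades off rate against distortion: a larger lambda favors cheaper modes, a smaller lambda favors lower-distortion modes.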
Regarding claim 35, Liu discloses wherein the signaling said one or more optimizing parameters (paragraph [99], Liu discloses entropy encoder 625 for generating an encoded bitstream to include the sequence of frames along with various information such as general control data, selected prediction information, residue information and other suitable information (i.e., optimization parameters like filter information pertaining to ALF module) for coding the video image data, wherein paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6) comprises: encoding said one or more optimizing parameters (paragraph [99], Liu discloses entropy encoder 625 for generating an encoded bitstream to include the sequence of frames along with various information such as general control data, selected prediction information, residue information and other suitable information (i.e., optimization parameters like filter information pertaining to ALF module) for coding the video image data, wherein paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6).
Liu does not disclose the second coding method. However, Nakanishi teaches implementing the second coding method (paragraph [28], Nakanishi discloses first encoding method (lossy compression) applied with first compression apparatus 12 and second encoding method (lossless compression) applied with second compression apparatus 14). Since Liu discloses “wherein the signaling said one or more optimizing parameters comprises: encoding said one or more optimizing parameters”, and Nakanishi discloses “the second coding method”, therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu and Nakanishi together as a whole for ascertaining the limitation “wherein the signaling said one or more optimizing parameters comprises: encoding said one or more optimizing parameters by the second coding method” so as to optimize encoder efficiency by shortening the calculation time needed for compression and decompression of video image data (Nakanishi’s paragraph [25]).
Regarding claim 36, Liu discloses a method for decoding (paragraph [101], Liu discloses decoding embodiment), comprising:
receiving a first coded frame and a second coded frame (paragraph [102], Liu discloses entropy decoder 771 receives a coded video sequence which comprises a sequence of frames that includes at least a first coded frame and a second coded frame);
receiving one or more optimizing parameters (paragraph [102], Liu discloses that, at entropy decoder 771, the encoded bitstream is received and parsed to extract certain symbols that represent the syntax elements for representing the coded picture, wherein the certain symbols represent block coding mode (i.e., intra mode, inter mode, bi-predicted mode, merge submodes, etc.), prediction information (i.e., intra prediction, inter prediction), residual information, quantized transform coefficients, and the like, wherein “and the like” can represent optimizing parameters like adaptive loop filtering information from the encoder embodiment, wherein paragraph [115], at the encoder embodiment, Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6);
decoding the first coded frame into a first decoded frame using a first decoding method (paragraph [102], Liu discloses that entropy decoder 771 receives image information that comprises a sequence of images including a first image, a second image, and subsequent images thereafter);
adjusting a filter with the one or more optimizing parameters (paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6), wherein said one or more optimizing parameters reduce distortion of the first decoded frame to produce a first filtered frame (paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, wherein fig.9, “ALF” module performs adaptive loop filtering for producing a filtered frame, and that iTR+iQ is considered to be the local decoder for generating a locally decoded frame to eventually send to the ALF module to produce a decoded first filtered frame, wherein paragraph [98], Liu discloses that residue decoder 628 performs inverse transform and generates decoded residue data to generate decoded frame, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6);
filtering the first decoded frame with the filter (paragraph [102], Liu discloses that, at entropy decoder 771, the encoded bitstream is received and parsed to extract certain symbols that represent the syntax elements for representing the coded picture, wherein the certain symbols represent block coding mode (i.e., intra mode, inter mode, bi-predicted mode, merge submodes, etc.), prediction information (i.e., intra prediction, inter prediction), residual information, quantized transform coefficients, and the like, wherein “and the like” can represent optimizing parameters like adaptive loop filtering information from the encoder embodiment, wherein paragraph [115], at the encoder embodiment, Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, wherein fig.9, “ALF” module performs adaptive loop filtering for producing a filtered frame, and that iTR+iQ is considered to be the local decoder for generating a locally decoded frame to eventually send to the ALF module to produce a decoded first filtered frame, wherein paragraph [98], Liu discloses that residue decoder 628 performs inverse transform and generates decoded residue data to generate decoded frame); and
decoding the second coded frame into a second decoded frame (paragraph [102], Liu discloses that entropy decoder 771 receives image information that comprises a sequence of images including a first image, a second image, and subsequent images thereafter) by using the first filtered frame directly or indirectly for prediction (paragraph [102], Liu discloses that, at entropy decoder 771, the encoded bitstream is received and parsed to extract certain symbols that represent the syntax elements for representing the coded picture, wherein the certain symbols represent block coding mode (i.e., intra mode, inter mode, bi-predicted mode, merge submodes, etc.), prediction information (i.e., intra prediction, inter prediction), residual information, quantized transform coefficients, and the like, wherein “and the like” can represent optimizing parameters like adaptive loop filtering information from the encoder embodiment, and wherein paragraph [115], Liu discloses that output of residue decoder 628 is sent to “Buffer” and eventually to Inter module and ME (motion estimation) module for further processing the filtered frame as outputted from ALF module for further refinement of the image data and generate difference data and send to the “Combined Prediction” module for using the first filtered frame directly or indirectly for prediction, and the output of “Combined Prediction” can generate data for sending to the adder located before the “TR+Q” section, wherein the output of “TR+Q” section is sent to “Entropy Coding” for encoding the second frame to generate a second coded frame, and so on in a cyclical manner for continuously generating subsequent frames in a sequence of frames; paragraph [99], fig.6, Liu discloses entropy encoder 625 for encoding the second frame; thus, Liu discloses encoding a second frame to generate a second coded frame to be sent to the decoding embodiment for decoding a second coded frame to generate a second decoded frame).
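For illustration only, the loop-filtering steps recited in claim 36 (adjusting a filter with signaled optimizing parameters, then filtering the first decoded frame before it is used for prediction) can be sketched in the manner of an ALF-style weighted-sum filter; the kernel coefficients and frame samples below are hypothetical, not values from Liu:

```python
import numpy as np

def apply_loop_filter(decoded, coeffs):
    """Filter a decoded frame with signaled kernel coefficients:
    each output sample is the weighted sum of its neighborhood,
    with edge replication at the frame border."""
    k = coeffs.shape[0] // 2
    padded = np.pad(decoded, k, mode="edge")
    out = np.zeros_like(decoded, dtype=float)
    h, w = decoded.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + coeffs.shape[0], j:j + coeffs.shape[1]]
            out[i, j] = np.sum(window * coeffs)
    return out

# A decoded frame with one outlier sample, and hypothetical signaled
# optimizing parameters (here, a 3x3 averaging kernel).
decoded = np.array([[10., 10., 10.], [10., 50., 10.], [10., 10., 10.]])
coeffs = np.full((3, 3), 1.0 / 9.0)
filtered = apply_loop_filter(decoded, coeffs)  # outlier pulled toward neighbors
```

The filtered frame, rather than the raw decoded frame, would then be placed in the reference buffer for predicting the second frame.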
Liu does not disclose decoding the second coded frame into a second decoded frame by a second set of algorithms of a second decoding method and by using the first filtered frame directly or indirectly for prediction. However, Nakanishi teaches a second set of algorithms of a second decoding method (paragraph [29], Nakanishi discloses first decoding method (lossy decompression) applied with first decompression apparatus 22 and second decoding method (lossless decompression) applied with second decompression apparatus 24). Since Liu discloses “decoding the second coded frame into a second decoded frame by using the first filtered frame directly or indirectly for prediction”, and Nakanishi discloses “…a second set of algorithms of a second decoding method”, therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu and Nakanishi together as a whole for ascertaining the limitation “decoding the second coded frame into a second decoded frame by a second set of algorithms of a second decoding method and by using the first filtered frame directly or indirectly for prediction” so as to optimize encoder efficiency by shortening the calculation time needed for compression and decompression of video image data (Nakanishi’s paragraph [25]).
Liu and Nakanishi do not disclose wherein the distortion comprises one or more of the following: a feature-element-wise distortion, or a cross-entropy loss. However, Jiang teaches wherein the distortion comprises one or more of the following: a feature-element-wise distortion (paragraph [97], Jiang discloses obtaining distortion loss, wherein distortion loss can comprise an MSE (mean square error) ascertained from the difference of feature representation computed based on x_sub_i and x_hat_sub_i, and that the feature representation can originate from a weight map to emphasize and represent the distortion of the reconstructed facial area or different parts (i.e., elements) of the facial area, thus obtaining a feature-element-wise distortion, similar to page 62, lines 4-7 of Applicant's specification, wherein Applicant discloses that feature-element-wise distortion comes from MSE (mean square error) computed on extracted feature elements), or a cross-entropy loss (paragraph [97], Jiang discloses obtaining a cross-entropy loss from classification error to obtain distortion data based on DNN (deep neural network) processing). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu, Nakanishi and Jiang together as a whole for efficiently compressing and decompressing facial features from video image data for data transmission and video conferencing applications (Jiang’s paragraph [89]).
Claims 18, 21, 29 and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Liu (US 2022/0224943), Nakanishi (US 2019/0251418) and Jiang (US 2022/0217371) in view of Hwang (US 2019/0075325).
Regarding claim 18, Liu discloses wherein the apparatus is further caused to: encode the first decoded frame (paragraph [99], fig.6, Liu discloses entropy encoder 625 for encoding the first frame using a first coding method to generate an encoded bitstream to encode the locally decoded frame, wherein paragraph [98], fig.6, Liu discloses residue calculator 623 is configured to calculate a difference between the received block data of current frame versus the block data of previous frame, and the result of the difference is sent to residue decoder 628 to generate decoded residue data, thus, reconstructing a first decoded frame corresponding to first coded frame), wherein to encode the first decoded frame (paragraph [99], fig.6, Liu discloses entropy encoder 625 for encoding the first frame using a first coding method to generate an encoded bitstream to encode the locally decoded frame, wherein paragraph [98], fig.6, Liu discloses residue calculator 623 is configured to calculate a difference between the received block data of current frame versus the block data of previous frame, and the result of the difference is sent to residue decoder 628 to generate decoded residue data, thus, reconstructing a first decoded frame corresponding to first coded frame), the apparatus is further caused to: reconstruct first decoded frame (paragraph [98], fig.6, Liu discloses residue calculator 623 is configured to calculate a difference between the received block data of current frame versus the block data of previous frame, and the result of the difference is sent to residue decoder 628 to generate decoded residue data, thus, reconstructing a first decoded frame corresponding to first coded frame); and filter the first decoded frame with the filter using said one or more optimizing parameters into first decoded and filtered frame (paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, 
wherein fig.9, “ALF” module performs adaptive loop filtering for producing a filtered frame, and that iTR+iQ is considered to be the local decoder for generating a locally decoded frame to eventually send to the ALF module to produce a decoded first filtered frame, wherein paragraph [98], Liu discloses that residue decoder 628 performs inverse transform and generates decoded residue data to generate decoded frame, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6), wherein the first decoded and filtered frame is used directly for prediction of the second frame (paragraph [115], Liu discloses that output of residue decoder 628 is sent to “Buffer” and eventually to Inter module and ME (motion estimation) module for further processing the filtered frame as outputted from ALF module for further refinement of the image data and generate difference data and send to the “Combined Prediction” module for using the first filtered frame directly or indirectly for prediction, and the output of “Combined Prediction” can generate data for sending to the adder located before the “TR+Q” section, wherein the output of “TR+Q” section is sent to “Entropy Coding” for encoding the second frame to generate a second coded frame, and so on in a cyclical manner for continuously generating subsequent frames in a sequence of frames; paragraph [99], fig.6, Liu discloses entropy encoder 625 for encoding the second frame; thus, Liu discloses encoding a second frame to generate a second coded frame).
Liu does not disclose a first set of algorithms of the second coding method.
However, Nakanishi teaches a first set of algorithms of the second coding method (paragraph [28], Nakanishi discloses first encoding method (lossy compression) applied with first compression apparatus 12, and second encoding method (lossless compression) applied with second compression apparatus 14, thus, Nakanishi discloses implementing a second coding method, and paragraph [142], Nakanishi discloses the second compression apparatus has a second compression model generator to generate a first model or first set of algorithms of the second compression apparatus (second coding method), wherein the model comprises the estimated probability distribution for performing the lossless compression, and paragraph [143], Nakanishi discloses that second data compressor 143 performs lossless compression based on probability distribution as generated from the model to generate calculated probability distributions of the values of divided data, and paragraph [152], Nakanishi discloses that the values of each dimension are calculated, and paragraph [156], Nakanishi discloses that multiple subsets of data are utilized in order to enable performance of parallel computations of values for multiple operations of the second compression of image data). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu and Nakanishi together as a whole for optimizing encoder efficiency by shortening the calculation time needed for compression and decompression of video image data (Nakanishi’s paragraph [25]).
Liu, Nakanishi and Jiang do not disclose the apparatus is further caused to: reconstruct another first decoded frame; and filter the another first decoded frame with the filter using said one or more optimizing parameters into another first decoded and filtered frame, wherein the another first decoded and filtered frame is used directly for prediction of the second frame.
However, Hwang teaches generating another set of identical frames (paragraph [81], Hwang discloses the concept of duplicating identical frames; paragraph [89], Hwang discloses that the decoded images can be duplicated multiple times), and generating another first decoded frame (paragraph [81], Hwang discloses the concept of duplicating identical frames for display to be viewed at the display terminal; paragraph [89], Hwang discloses that the decoded images can be duplicated multiple times).
Since Liu discloses “wherein the apparatus is further caused to: encode the first decoded frame, wherein to encode the first decoded frame, the apparatus is further caused to: reconstruct first decoded frame; and filter the first decoded frame with the filter using said one or more optimizing parameters into first decoded and filtered frame, wherein the first decoded and filtered frame is used directly for prediction of the second frame”, Nakanishi discloses “a first set of algorithms of the second coding method”, and Hwang discloses the concept of generating another set of identical frames and generating another first decoded frame, therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu, Nakanishi, Jiang and Hwang together as a whole for ascertaining the limitation of “wherein the apparatus is further caused to: encode the first decoded frame by a first set of algorithms of the second coding method, wherein to encode the first decoded frame, the apparatus is further caused to: reconstruct another first decoded frame; and filter the another first decoded frame with the filter using said one or more optimizing parameters into another first decoded and filtered frame, wherein the another first decoded and filtered frame is used directly for prediction of the second frame” so as to improve transmission efficiency of compression of video data (Hwang’s paragraph [18]).
Regarding claim 21, Liu discloses reconstructing the first decoded frame (paragraph [98], fig.6, Liu discloses residue calculator 623 is configured to calculate a difference between the received block data of current frame versus the block data of previous frame, and the result of the difference is sent to residue decoder 628 to generate decoded residue data, thus, reconstructing a first decoded frame corresponding to first coded frame).
Liu does not disclose the first set of algorithms of the second coding method. However, Nakanishi teaches the first set of algorithms of the second coding method (paragraph [28], Nakanishi discloses first encoding method (lossy compression) applied with first compression apparatus 12, and second encoding method (lossless compression) applied with second compression apparatus 14, thus, Nakanishi discloses implementing a second coding method, and paragraph [142], Nakanishi discloses the second compression apparatus has a second compression model generator to generate a first model or first set of algorithms of the second compression apparatus (second coding method), wherein the model comprises the estimated probability distribution for performing the lossless compression, and paragraph [143], Nakanishi discloses that second data compressor 143 performs lossless compression based on probability distribution as generated from the model to generate calculated probability distributions of the values of divided data, and paragraph [152], Nakanishi discloses that the values of each dimension are calculated, and paragraph [156], Nakanishi discloses that multiple subsets of data are utilized in order to enable performance of parallel computations of values for multiple operations of the second compression of image data). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu and Nakanishi together as a whole for optimizing encoder efficiency by shortening the calculation time needed for compression and decompression of video image data (Nakanishi’s paragraph [25]).
Liu, Nakanishi and Jiang do not disclose wherein the first set of algorithms of the second coding method reconstructs the another first decoded frame to be identical or substantially identical to the first decoded frame. However, Hwang teaches generating another set of identical frames (paragraph [81], Hwang discloses the concept of duplicating identical frames; paragraph [89], Hwang discloses that the decoded images can be duplicated multiple times), and generating another first decoded frame to be identical or substantially identical to the first decoded frame (paragraph [81], Hwang discloses the concept of duplicating identical frames for display to be viewed at the display terminal; paragraph [89], Hwang discloses that the decoded images can be duplicated multiple times).
Since Liu discloses “reconstructs the first decoded frame”, Nakanishi discloses “the first set of algorithms of the second coding method”, and Hwang discloses generating another set of identical frames and generating another first decoded frame to be identical or substantially identical to the first decoded frame, therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu, Nakanishi, Jiang and Hwang together as a whole for ascertaining the limitation of “wherein the first set of algorithms of the second coding method reconstructs the another first decoded frame to be identical or substantially identical to the first decoded frame” so as to improve transmission efficiency of compression of video data (Hwang’s paragraph [18]).
Regarding claim 29, Liu discloses encoding the first decoded frame (paragraph [99], fig.6, Liu discloses entropy encoder 625 for encoding the first frame using a first coding method to generate an encoded bitstream to encode the locally decoded frame, wherein paragraph [98], fig.6, Liu discloses residue calculator 623 is configured to calculate a difference between the received block data of current frame versus the block data of previous frame, and the result of the difference is sent to residue decoder 628 to generate decoded residue data, thus, reconstructing a first decoded frame corresponding to first coded frame), wherein the encoding (paragraph [99], fig.6, Liu discloses entropy encoder 625 for encoding the first frame using a first coding method to generate an encoded bitstream to encode the locally decoded frame, wherein paragraph [98], fig.6, Liu discloses residue calculator 623 is configured to calculate a difference between the received block data of current frame versus the block data of previous frame, and the result of the difference is sent to residue decoder 628 to generate decoded residue data, thus, reconstructing a first decoded frame corresponding to first coded frame) further comprises: reconstructing first decoded frame (paragraph [98], fig.6, Liu discloses residue calculator 623 is configured to calculate a difference between the received block data of current frame versus the block data of previous frame, and the result of the difference is sent to residue decoder 628 to generate decoded residue data, thus, reconstructing a first decoded frame corresponding to first coded frame), and filtering the first decoded frame with the filter using said one or more optimizing parameters into first decoded and filtered frame (paragraph [115], Liu discloses an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter, wherein fig.9, “ALF” module performs adaptive loop filtering for producing 
a filtered frame, and that iTR+iQ is considered to be the local decoder for generating a locally decoded frame to eventually send to the ALF module to produce a decoded first filtered frame, wherein paragraph [98], Liu discloses that residue decoder 628 performs inverse transform and generates decoded residue data to generate decoded frame, and paragraph [93], Liu discloses rate-distortion optimization is performed for encoder 603 of fig.6), wherein the first decoded and filtered frame is used directly for prediction of the second frame (paragraph [115], Liu discloses that output of residue decoder 628 is sent to “Buffer” and eventually to Inter module and ME (motion estimation) module for further processing the filtered frame as outputted from ALF module for further refinement of the image data and generate difference data and send to the “Combined Prediction” module for using the first filtered frame directly or indirectly for prediction, and the output of “Combined Prediction” can generate data for sending to the adder located before the “TR+Q” section, wherein the output of “TR+Q” section is sent to “Entropy Coding” for encoding the second frame to generate a second coded frame, and so on in a cyclical manner for continuously generating subsequent frames in a sequence of frames; paragraph [99], fig.6, Liu discloses entropy encoder 625 for encoding the second frame; thus, Liu discloses encoding a second frame to generate a second coded frame).
Liu does not disclose a first set of algorithms of the second coding method. However, Nakanishi teaches a first set of algorithms of the second coding method (paragraph [28], Nakanishi discloses first encoding method (lossy compression) applied with first compression apparatus 12, and second encoding method (lossless compression) applied with second compression apparatus 14, thus, Nakanishi discloses implementing a second coding method, and paragraph [142], Nakanishi discloses the second compression apparatus has a second compression model generator to generate a first model or first set of algorithms of the second compression apparatus (second coding method), wherein the model comprises the estimated probability distribution for performing the lossless compression, and paragraph [143], Nakanishi discloses that second data compressor 143 performs lossless compression based on probability distribution as generated from the model to generate calculated probability distributions of the values of divided data, and paragraph [152], Nakanishi discloses that the values of each dimension are calculated, and paragraph [156], Nakanishi discloses that multiple subsets of data are utilized in order to enable performance of parallel computations of values for multiple operations of the second compression of image data). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu and Nakanishi together as a whole for optimizing encoder efficiency by shortening the calculation time needed for compression and decompression of video image data (Nakanishi’s paragraph [25]).
Liu, Nakanishi and Jiang do not disclose wherein the encoding further comprises: reconstructing another first decoded frame, and filtering the another first decoded frame with the filter using said one or more optimizing parameters into another first decoded and filtered frame, wherein the another first decoded and filtered frame is used directly for prediction of the second frame. However, Hwang teaches generating another set of identical frames (paragraph [81], Hwang discloses the concept of duplicating identical frames; paragraph [89], Hwang discloses that the decoded images can be duplicated multiple times), and generating another first decoded frame (paragraph [81], Hwang discloses the concept of duplicating identical frames for display to be viewed at the display terminal; paragraph [89], Hwang discloses that the decoded images can be duplicated multiple times).
Since Liu discloses “encoding the first decoded frame, wherein the encoding further comprises: reconstructing first decoded frame, and filtering the first decoded frame with the filter using said one or more optimizing parameters into first decoded and filtered frame, wherein the first decoded and filtered frame is used directly for prediction of the second frame”, Nakanishi discloses “a first set of algorithms of the second coding method”, and Hwang discloses the concept of generating another set of identical frames and generating another first decoded frame, therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu, Nakanishi, Jiang and Hwang together as a whole for ascertaining the limitation of “encoding the first decoded frame by a first set of algorithms of the second coding method, wherein the encoding further comprises: reconstructing another first decoded frame, and filtering the another first decoded frame with the filter using said one or more optimizing parameters into another first decoded and filtered frame, wherein the another first decoded and filtered frame is used directly for prediction of the second frame” so as to improve transmission efficiency of compression of video data (Hwang’s paragraph [18]).
Regarding claim 32, Liu discloses reconstructs the first decoded frame (paragraph [98], Fig. 6, Liu discloses that residue calculator 623 is configured to calculate a difference between the received block data of the current frame and the block data of the previous frame, and the result of the difference is sent to residue decoder 628 to generate decoded residue data, thus reconstructing a first decoded frame corresponding to the first coded frame).
Liu does not disclose the first set of algorithms of the second coding method. However, Nakanishi teaches the first set of algorithms of the second coding method (paragraph [28], Nakanishi discloses a first encoding method (lossy compression) applied with first compression apparatus 12 and a second encoding method (lossless compression) applied with second compression apparatus 14, thus disclosing implementation of a second coding method; paragraph [142], Nakanishi discloses that the second compression apparatus has a second compression model generator to generate a first model, or first set of algorithms, of the second compression apparatus (second coding method), wherein the model comprises the estimated probability distribution for performing the lossless compression; paragraph [143], Nakanishi discloses that second data compressor 143 performs lossless compression based on the probability distribution generated from the model to produce calculated probability distributions of the values of divided data; paragraph [152], Nakanishi discloses that values of each dimension are calculated; and paragraph [156], Nakanishi discloses that multiple subsets of data are utilized in order to enable performance of parallel computations of values for multiple operations of the second compression of image data). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu and Nakanishi together as a whole for optimizing encoder efficiency by shortening the calculation time needed for compression and decompression of video image data (Nakanishi’s paragraph [25]).
Liu, Nakanishi and Jiang do not disclose wherein the first set of algorithms of the second coding method reconstructs the another first decoded frame to be identical or substantially identical to the first decoded frame. However, Hwang teaches generating another set of identical frames (paragraph [81], Hwang discloses the concept of duplicating identical frames; paragraph [89], Hwang discloses that the decoded images can be duplicated multiple times), and generating another first decoded frame to be identical or substantially identical to the first decoded frame (paragraph [81], Hwang discloses the concept of duplicating identical frames for display to be viewed at the display terminal; paragraph [89], Hwang discloses that the decoded images can be duplicated multiple times).
Since Liu discloses “reconstructs the first decoded frame”, Nakanishi discloses “the first set of algorithms of the second coding method”, and Hwang discloses generating another set of identical frames and generating another first decoded frame to be identical or substantially identical to the first decoded frame, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu, Nakanishi, Jiang and Hwang together as a whole for ascertaining the limitation of “wherein the first set of algorithms of the second coding method reconstructs the another first decoded frame to be identical or substantially identical to the first decoded frame” so as to improve transmission efficiency of compression of video data (Hwang’s paragraph [18]).
Claims 22 and 33 are rejected under 35 U.S.C. 103 as being unpatentable over Liu (US 2022/0224943), Nakanishi (US 2019/0251418) and Jiang (US 2022/0217371) in view of Ma (US 2022/0295116).
Regarding claim 22, Liu, Nakanishi and Jiang do not disclose wherein the distortion comprises a pixel-wise distortion. However, Ma teaches wherein the distortion comprises a pixel-wise distortion (paragraph [106], Ma discloses utilizing a pixel-wise distortion measure). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu, Nakanishi, Jiang and Ma together as a whole for improving visual quality of video image data during compression for neural networks (Ma’s paragraph [30]).
Regarding claim 33, Liu, Nakanishi and Jiang do not disclose wherein the distortion comprises pixel-wise distortion. However, Ma teaches wherein the distortion comprises pixel-wise distortion (paragraph [106], Ma discloses utilizing a pixel-wise distortion measure). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu, Nakanishi, Jiang and Ma together as a whole for improving visual quality of video image data during compression for neural networks (Ma’s paragraph [30]).
Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Liu (US 2022/0224943), Nakanishi (US 2019/0251418) and Jiang (US 2022/0217371) in view of Chen (US 2022/0159285).
Regarding claim 25, Liu discloses wherein the filter is an adaptive loop filter and to signal said one or more optimizing parameters (paragraph [99], Liu discloses entropy encoder 625 for generating an encoded bitstream to include the sequence of frames along with various information such as general control data, selected prediction information, residue information and other suitable information (i.e., optimizing parameters such as filter information pertaining to the ALF module) for coding the video image data; paragraph [115], Liu discloses that an adaptive loop filter module can be implemented for generating one or more optimizing parameters for adjusting a filter; and paragraph [93], Liu discloses that rate-distortion optimization is performed for encoder 603 of Fig. 6).
Liu does not disclose the apparatus is further caused to: include said one or more optimizing parameters into an adaptation parameter set defined by the second coding method. However, Nakanishi teaches implementing a second coding method (paragraph [28], Nakanishi discloses first encoding method (lossy compression) applied with first compression apparatus 12 and second encoding method (lossless compression) applied with second compression apparatus 14). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu and Nakanishi together as a whole for optimizing encoder efficiency by shortening the calculation time needed for compression and decompression of video image data (Nakanishi’s paragraph [25]).
Liu, Nakanishi and Jiang do not disclose include said one or more optimizing parameters into an adaptation parameter set defined by the second coding method. However, Chen teaches include the one or more optimizing parameters into an adaptation parameter set (paragraph [40], Chen discloses that an adaptation parameter set (APS) is a syntax structure that comprises syntax elements for representing optimization parameters from ALF (adaptive loop filter) processing). Since Nakanishi discloses “a second coding method” and Chen discloses “include the one or more optimizing parameters into an adaptation parameter set”, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Liu, Nakanishi, Jiang and Chen together as a whole for ascertaining the limitation of “the apparatus is further caused to: include said one or more optimizing parameters into an adaptation parameter set defined by the second coding method” in order to increase video coding system functionality while reducing usage of network, memory and processing resources at the encoder and decoder when transmitting video data (Chen’s last sentence of paragraph [5]).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALLEN C WONG whose telephone number is (571)272-7341. The examiner can normally be reached on Flex Monday-Thursday 9:30am-7:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sath V Perungavoor can be reached on 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALLEN C WONG/Primary Examiner, Art Unit 2488