DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This communication is responsive to the correspondence filed on 3/6/25.
Claims 16-35 are presented for examination.
IDS Considerations
The information disclosure statement (IDS) submitted on 3/6/25 has been considered by the examiner, as the submission is in compliance with the provisions of 37 CFR 1.97.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO internet Web site contains terminal disclaimer forms which may be used. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
Claims 16, 26 and 34 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claim 1 of US Pat 12284344 B2.
Although the instant application does not claim “a vector of split possibilities based on a block of image data and a plurality of pixels adjacent to the block, wherein the causal pixel is a pixel of a causal border of the block of image data, wherein the causal border comprises a row of pixels on top of the block of image data and a column of pixels to the left of the block of image data, wherein the convolutional neural network comprises a convolutional layer and a fully connected layer, wherein an output of the convolutional layer comprises a vector associated with a level of splitting, wherein the output of the convolutional layer is concatenated with quantization information associated with the block of image data, wherein the output of the convolution layer”, the absence of these limitations does not patentably distinguish the instant claims: omitting limitations merely broadens the claim, making it an obvious variation of claim 1 of US Pat 12284344 B2.
US Pat 12284344 B2 does not claim “concatenate the output of the first layer with compression information associated with the block; input the concatenated output to a second layer associated with the DL algorithm.” However, this feature is well known in the art, as exemplified by JP2017-155903 (U.S. Pub. No. 20210150767A1, which claims priority from JP2017-155903 filed on 8/10/2017; a Google translation of JP2017-155903 is used for this rejection). JP2017-155903 teaches concatenating the output of the first layer (JP2017-155903 FIG. 20, conv2 layer) with compression information (JP2017-155903 page 25 para 8 [0163] The conv4 of the second CNN filter 107b2 receives N2*Hl*Wl data as input and outputs Nconv4*Hl*Wl data. The Concatenate layer of the second CNN filter 107b2 receives the image processing results [compression information] and the coding parameters processed by conv4, concatenates them into (Nconv2+Nconv4)*Hl*Wl data, and outputs Nconv3*Hl*Wl data. The conv3 of the second CNN filter 107b2 receives Nconv4*Hl*Wl data as input.) associated with the block (JP2017-155903 [0161] FIG. 20 is a schematic diagram showing the configuration of a CNN filter 107b according to this embodiment. As shown in FIG. 20, the first CNN filter 107b1 includes two convX layers (conv1, conv2). The second CNN filter 107b2 includes two convX layers (conv3, conv4) and a Concatenate layer.), and inputting the concatenated output to a second layer associated with the DL algorithm (JP2017-155903 page 25 para 8 [0163] The conv3 [second layer] of the second CNN filter 107b2 receives [input the concatenated output] Nconv4*Hl*Wl data as input.)
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the claims of US Pat 12284344 B2 with the teachings of JP2017-155903, with predictable results, to concatenate the output of the first layer with compression information associated with the block and input the concatenated output to a second layer associated with the DL algorithm. The combined teaching improves coding efficiency.
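The architecture pattern at issue can be summarized as follows: a convolutional stage processes the block, its output is concatenated with quantization information, and a further stage maps the result to a vector of split possibilities. The following is a minimal PyTorch-style sketch of that pattern; all names and layer sizes are hypothetical illustrations of the general technique, not the claimed invention or either reference's actual implementation.

```python
import torch
import torch.nn as nn

class SplitPredictor(nn.Module):
    """Hypothetical sketch: conv features + QP info -> split possibilities."""
    def __init__(self, num_splits=6):
        super().__init__()
        # First layer(s): convolution over the block (per the claim chart
        # below, the claimed input would also include the causal border).
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        # Second stage: fully connected layers reduce the concatenated
        # features to a vector of split possibilities.
        self.fc = nn.Sequential(nn.Linear(32 * 4 * 4 + 1, 64), nn.ReLU(),
                                nn.Linear(64, num_splits))

    def forward(self, block, qp):
        feat = self.conv(block)                       # output of the first layer
        x = torch.cat([feat, qp.view(-1, 1)], dim=1)  # concatenate with QP info
        return torch.softmax(self.fc(x), dim=1)       # vector of split possibilities

# Example: a 65x65 input (block plus causal border) with QP 32.
probs = SplitPredictor()(torch.randn(1, 1, 65, 65), torch.tensor([32.0]))
```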
Instant Application 19/023,060, claim 16:

16. (New) A device for video processing, comprising: a memory, and a processor, configured to:
obtain an output of a first layer associated with a deep learning (DL) algorithm based on a block of image data and a causal pixel of the block;
concatenate the output of the first layer with compression information associated with the block;
input the concatenated output to a second layer associated with the DL algorithm;
partition the block into a plurality of smaller blocks based on an output of the second layer;
and predict the plurality of smaller blocks.

US Pat 12284344 B2, claim 1:

1. A method for video processing, comprising:
determining, using a convolutional neural network, a vector of split possibilities based on a block of image data and a plurality of pixels adjacent to the block, wherein the plurality of pixels comprises a causal pixel, wherein the causal pixel is adjacent to the block of image data, wherein the causal pixel is a pixel of a causal border of the block of image data, wherein the causal border comprises a row of pixels on top of the block of image data and a column of pixels to the left of the block of image data, wherein the convolutional neural network comprises a convolutional layer and a fully connected layer, wherein an output of the convolutional layer comprises a vector associated with a level of splitting, wherein the output of the convolutional layer is concatenated with quantization information associated with the block of image data, wherein the output of the convolution layer, concatenated with the quantization information, is an input of the fully connected layer, and wherein an output of the fully connected layer, based on a dimension reduction, comprises the vector of split possibilities;
partitioning the block of image data into a plurality of smaller blocks based on the vector of split possibilities;
and predicting the plurality of smaller blocks.
Limitations of the remaining claims of the instant application are obvious over US Pat 12284344 B2 in view of the prior art discussed under Claim Rejections – 35 USC § 103 of this office action. The same motivation described under Claim Rejections – 35 USC § 103 of this office action applies to the combination of US Pat 12284344 B2 and the cited prior art. Note that 35 U.S.C. 101 allows only one patent per patent application or invention; in that respect, all dependent claims of the instant application are obvious variations of claim 1 of US Pat 12284344 B2.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
A review of the specification reveals no corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph limitation.
Claims 19 and 29 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.
The claim scope “wherein the block has a size of 64x64 pixels and an input of the first layer associated with the DL algorithm has a size of 65x65 pixels” is unclear because it does not limit the claim to a particular structure, the specification does not provide a standard for ascertaining the requisite relationship between the block and the input, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
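For reference, the dimensional relationship recited in the claim can be illustrated as follows: a 64x64 block extended by a causal row above and a causal column to the left, including the shared corner pixel, yields a (64+1) x (64+1) = 65x65 input. A minimal sketch under that reading (hypothetical names, illustration only):

```python
import numpy as np

def causal_input(frame: np.ndarray, y: int, x: int, size: int = 64) -> np.ndarray:
    # frame: 2-D array of samples; (y, x): top-left corner of the block.
    # Requires y >= 1 and x >= 1 so the causal row and column exist.
    return frame[y - 1 : y + size, x - 1 : x + size]  # shape (size+1, size+1)

patch = causal_input(np.zeros((128, 128)), y=64, x=64)
assert patch.shape == (65, 65)  # 64x64 block plus its causal border
```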
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 16-18, 22, 24-28 and 32-35 are rejected under 35 U.S.C. 103 as being unpatentable over Jeon (WO2019009449A1) in view of JP2017-155903 (U.S. Pub. No. 20210150767A1, which claims priority from JP2017-155903 filed on 8/10/2017; a Google translation of JP2017-155903 is used for this rejection).
Regarding claims 16, 26 and 34:
16. Jeon teaches a device for video processing, comprising: a memory, and a processor, configured to: (Jeon page 15 para 2-3 [124] According to one embodiment, the decoder 160 may execute program commands stored in a memory and/or a storage device. The decoding unit 160 may include at least one processor including a central processing unit (CPU), a graphics processing unit (GPU), and the like.)
obtain an output of a first layer associated with a deep learning (DL) algorithm based on a block of image data (Jeon Fig. 4B shows layers of deep learning (DL) related to image coding, Fig. 5, Fig. 9, Fig. 10 and page 10 para 5 [89] the encoder 110 uses a deep neural network [convolutional layer] to determine the subjective image quality of each of the frames included in an original image; the compressive strength at which the subjective image quality changes may be determined, and a quantization parameter and a ratio trajectory for the determined compressive strength may be determined)
Jeon Fig. 7A teaches deep learning based image encoding and concatenation of compression information [prediction parameter]. Jeon does not explicitly teach: and a causal pixel of the block; concatenate the output of the first layer with compression information associated with the block; input the concatenated output to a second layer associated with the DL algorithm; partition the block into a plurality of smaller blocks based on an output of the second layer; and predict the plurality of smaller blocks.
However, JP2017-155903 teaches and a causal pixel of the block; (JP2017-155903 Figs. 11-12 show spatially neighboring blocks [the causal pixels of the block are used for spatial prediction], because JP2017-155903 page 21 para 2 [0124] The prediction parameters shown in (a) of FIG. 12 are arranged in units of coding units (prediction units). The case where the prediction parameters shown in (a) of FIG. 12 are input in units of pixels, etc., is shown in (b) of FIG. 12. When input in units of pixels, as in the example shown in (b) of FIG. 11, the prediction parameters that directly correspond spatially to each pixel can be used for processing, and processing according to each pixel can be performed)
concatenate the output of the first layer (JP2017-155903 FIG. 20, conv2 layer) with compression information (JP2017-155903 page 25 para 8 [0163] The conv4 of the second CNN filter 107b2 receives N2*Hl*Wl data as input and outputs Nconv4*Hl*Wl data. The Concatenate layer of the second CNN filter 107b2 receives the image processing results [compression information] and the coding parameters processed by conv4, concatenates them into (Nconv2+Nconv4)*Hl*Wl data, and outputs Nconv3*Hl*Wl data. The conv3 of the second CNN filter 107b2 receives Nconv4*Hl*Wl data as input.) associated with the block; (JP2017-155903 [0161] FIG. 20 is a schematic diagram showing the configuration of a CNN filter 107b according to this embodiment. As shown in FIG. 20, the first CNN filter 107b1 includes two convX layers (conv1, conv2). The second CNN filter 107b2 includes two convX layers (conv3, conv4) and a Concatenate layer.)
input the concatenated output to a second layer associated with the DL algorithm; (JP2017-155903 page 25 para 8 [0163] The conv3 [second layer] of the second CNN filter 107b2 receives [input the concatenated output] Nconv4*Hl*Wl data as input.)
partition the block into a plurality of smaller blocks based on an output of the second layer; and predict the plurality of smaller blocks. (JP2017-155903 [0112] The image filter device uses LSTM (Long Short Term Memory), which further utilizes a subnetwork of the neural network to control the update and propagation of re-input information (internal state), and GRUs (Gated Recurrent Units) can be combined as components. [0113] As the coding parameter channels of the unfiltered image, in addition to the channel of the quantization parameter (Qp), a channel of the partition information (PartDepth) and a channel of the prediction mode information (PredMode) can be added. [0130] Also, instead of depth information, size information indicating the horizontal and vertical size of the partition may be used. The prediction parameters shown in FIG. 17(a) are arranged in transform block units. [0131] Figure 16 shows an example where the reference parameters are size information including the horizontal and vertical sizes of the partition. In the example shown in Figure 16, two pieces of information, namely the horizontal size and vertical size, are input for each unit area. In the example shown in Figure 16, log2(W)-2 and log2(H)-2, the logarithmic values of the horizontal size W (width) and vertical size H (height) of the partition minus a specified offset (2), are used as the reference parameters. For example, reference parameters of (1, 1), (2, 1), (0, 0), (2, 0), (1, 2), and (0, 1) correspond to partition sizes (W, H) of (8, 8), (16, 8), (4, 4), (16, 4), (8, 16), and (4, 8), respectively)
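The size-to-parameter mapping quoted above can be verified directly: each reference parameter is log2 of the partition dimension minus an offset of 2, which reproduces the example values in the excerpt. A brief sketch (hypothetical function name, illustration only):

```python
import math

def size_params(w: int, h: int, offset: int = 2) -> tuple:
    # Reference parameters per the excerpt: log2(size) minus an offset of 2.
    return (int(math.log2(w)) - offset, int(math.log2(h)) - offset)

# Reproduces the quoted mapping: (8,8)->(1,1), (16,8)->(2,1), (4,4)->(0,0),
# (16,4)->(2,0), (8,16)->(1,2), (4,8)->(0,1).
for w, h in [(8, 8), (16, 8), (4, 4), (16, 4), (8, 16), (4, 8)]:
    print((w, h), "->", size_params(w, h))
```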
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Jeon by incorporating the teachings of JP2017-155903 in video/camera technology. One would be motivated to do so in order to concatenate the output of the first layer with compression information associated with the block and input the concatenated output to a second layer associated with the DL algorithm. This functionality improves efficiency with predictable results.
Regarding claims 17, 27 and 35:
17. Jeon teaches the device of claim 16, wherein the DL algorithm comprises a convolutional neural network (CNN), and the compression information associated with the block comprises quantization information associated with the block. (Jeon Fig. 4B shows layers of deep learning (DL) related to image coding, Fig. 5, Fig. 9, Fig. 10 and page 10 para 5 [89] the encoder 110 uses a deep neural network [convolutional layer] to determine the subjective image quality of each of the frames included in an original image; the compressive strength at which the subjective image quality changes may be determined, and a quantization parameter and a ratio trajectory for the determined compressive strength may be determined)
Regarding claims 18 and 28:
18. Jeon teaches the device of claim 17, wherein the quantization information is associated with a quantization parameter of the block. (Jeon Fig. 4B shows layers of deep learning (DL) related to image coding, Fig. 5, Fig. 9, Fig. 10 and page 10 para 5 [89] the encoder 110 uses a deep neural network [convolutional layer] to determine the subjective image quality of each of the frames included in an original image; the compressive strength at which the subjective image quality changes may be determined, and a quantization parameter and a ratio trajectory for the determined compressive strength may be determined)
Regarding claims 22 and 32:
22. Jeon teaches the device of claim 16, wherein the output of the second layer comprises a vector of split possibilities. (Jeon Fig. 14, page 26 para 3 [197] According to an embodiment, the image decoding apparatus 150 may recursively divide coding units. Referring to FIG. 14, the image decoding apparatus 150 may divide a first coding unit 1400 to determine a plurality of coding units 1410a, 1410b, 1430a, 1430b, 1450a, 1450b, 1450c, and 1450d, and each of the determined coding units 1410a, 1410b, 1430a, 1430b, 1450a, 1450b, 1450c, and 1450d may be recursively partitioned [vector of split possibilities]. A method of dividing the plurality of coding units 1410a, 1410b, 1430a, 1430b, 1450a, 1450b, 1450c, and 1450d may correspond to a method of dividing the first coding unit 1400. Accordingly, each of the plurality of coding units 1410a, 1410b, 1430a, 1430b, 1450a, 1450b, 1450c, and 1450d may be independently divided into a plurality of coding units. The image decoding apparatus 150 may determine the second coding units 1410a and 1410b by dividing the first coding unit 1400 in the vertical direction, and may further decide independently whether to split each of the second coding units 1410a and 1410b. See Fig. 18 and page 28 para 2-5 [211-213])
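The recursive splitting described in the Jeon excerpt follows the familiar pattern in which a coding unit is either kept whole or divided, and each resulting unit is then considered independently. A minimal sketch of that pattern (hypothetical quadtree-style decision function, illustration only; Jeon also describes vertical and other split directions):

```python
def partition(x, y, size, should_split, min_size=4):
    """Return the leaf blocks (x, y, size) of a recursive quadtree split."""
    if size <= min_size or not should_split(x, y, size):
        return [(x, y, size)]
    half = size // 2
    blocks = []
    for dy in (0, half):          # each sub-unit is split independently,
        for dx in (0, half):      # mirroring the recursion in the excerpt
            blocks += partition(x + dx, y + dy, half, should_split, min_size)
    return blocks

# Example: split a 64x64 unit once, then split only its top-left quadrant.
leaves = partition(0, 0, 64,
                   lambda x, y, s: s == 64 or (x == 0 and y == 0 and s == 32))
assert len(leaves) == 7  # four 16x16 blocks plus three 32x32 blocks
```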
Regarding claims 24 and 33:
24. Jeon teaches the device of claim 16, wherein the processor is further configured to: (Jeon page 7 para 2 [63] According to an embodiment, the encoder 110 may execute a program command stored in a memory and/or a storage device. The encoding unit 110 may include at least one processor including a central processing unit (CPU), a graphics processing unit (GPU), and the like) generate a residual based on the predicted plurality of smaller blocks; (Jeon Fig. 5, Fig. 10-19) and include the residual in a bitstream. (Jeon page 16 para 2 [134] Fig. 5 is a block diagram illustrating an operation of an autoencoder 504 included in the encoder 500 according to an exemplary embodiment. The autoencoder 504 of FIG. 5 has a network structure including a plurality of layers for compressing and reducing the residual signal. The decoder 160 should perform an operation to reconstruct the compressed residual signal obtained from the bitstream, and according to an embodiment, the autoencoder included in the decoder 160 may have the same or similar network structure as the autoencoder 504 of the encoder 500. According to an embodiment, the autoencoder of the decoder 160 reconstructs the residual signal using the layers of the autoencoder 504 of the encoder 500. A description of the operation of the autoencoder of the decoding unit 160 has been given above with respect to the operation of the autoencoder 504 of the encoding unit 500 in FIG. 5, and thus a detailed description thereof is omitted. Jeon page 5 para 9-10 [47] According to an embodiment, the step of decoding an image of the image decoding method comprises: obtaining a residual signal of an image using compression information including quantization parameters of an image compressed with at least one compression strength; and generating a bitstream including the compressed residual signal.)
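For illustration only (hypothetical arrays, not Jeon's implementation): the residual recited in the claim is, in conventional video coding, the difference between the original block and its prediction, and it is this difference signal that the encoder compresses into the bitstream.

```python
import numpy as np

rng = np.random.default_rng(0)
original = rng.integers(0, 256, (64, 64)).astype(np.int16)   # source block
predicted = np.clip(original + rng.integers(-3, 4, (64, 64)),
                    0, 255).astype(np.int16)                  # prediction
residual = original - predicted  # compressed (e.g., by an autoencoder as in
                                 # Jeon) and included in the bitstream
```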
Regarding claim 25:
25. Jeon teaches the device of claim 16. Jeon does not explicitly teach wherein the processor is further configured to obtain an output of the second layer associated with the DL algorithm based on the concatenated output of the first layer associated with the DL algorithm.
However, JP2017-155903 teaches wherein the processor is further configured to obtain an output of the second layer associated with the DL algorithm based on the concatenated output of the first layer associated with the DL algorithm. (This claim is rejected for the same reasons as claim 16. JP2017-155903 Fig. 20, page 25 para 8 [0163] The conv4 of the second CNN filter 107b2 receives N2*Hl*Wl data as input and outputs Nconv4*Hl*Wl data. The Concatenate layer of the second CNN filter 107b2 receives the image processing results [compression information] and the coding parameters processed by conv4, concatenates them into (Nconv2+Nconv4)*Hl*Wl data, and outputs Nconv3*Hl*Wl data. The conv3 [second layer] of the second CNN filter 107b2 receives [input the concatenated output] Nconv4*Hl*Wl data as input.)
Allowable Subject Matter
Regarding claims 19-21, 23 and 29-31:
Claims 19-21, 23 and 29-31 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, because the limitations of these dependent claims are not obvious from the prior art search when all the limitations of the independent and intervening claims are taken into account.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NASIM N NIRJHAR whose telephone number is (571) 272-3792. The examiner can normally be reached on Monday - Friday, 8 am to 5 pm ET.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William F Kraig can be reached on (571) 272-8660. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NASIM N NIRJHAR/Primary Examiner, Art Unit 2896