Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to the application filed 01/08/2025, in which claims 1-8 are pending.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 01/08/2025 and 02/18/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4 and 7-8 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Koo et al. (US 2018/0041768 A1).
Regarding claim 1, Koo discloses a mesh decoding device (Abstract & Fig. 2 teach a method and a device for decoding an image on the basis of a polygon unit), comprising: a circuit, wherein the circuit: acquires a decoded motion vector of a vertex connected to a vertex to be decoded from a motion vector buffer (Para[0543], [0554] & Fig. 33 teach that, for the motion vector prediction of a polygon vertex, if a motion vector predictor candidate list for a corresponding polygon vertex includes a plurality of candidates, a motion vector predictor for the corresponding polygon vertex may be derived based on the configured motion vector predictor candidate list. In this case, if any one of candidate values belonging to the motion vector predictor candidate list is selected, the motion vector predictor derivation unit 3301 belonging to the decoder may receive index information indicative of the selected candidate value from the encoder; Fig. 2, a decoded picture buffer (DPB) 250); and outputs a motion vector of the vertex to be decoded using all or a part of the acquired decoded motion vector (para[0555]-[0561] & Fig. 34 teach that the encoder/decoder derives a motion vector for the corresponding polygon vertex based on a motion vector difference for the polygon vertex and the motion vector predictor derived at step S3401 (S3402). For example, the encoder/decoder may derive a motion vector for a corresponding polygon vertex by adding the motion vector difference for the polygon vertex and the motion vector predictor for the corresponding polygon vertex derived at step S3401. The encoder/decoder may specify a corresponding partition unit using a motion vector for each polygon vertex that forms a corresponding polygon unit using the method described in A).
That is, the pixel value or interpolated value of a partition unit (i.e., reference region) specified by the motion vector within the reference picture may be used as the prediction sample of the polygon unit (or pixel value). In this case, as in the description of FIG. 26, if the motion vector of each of the three vertexes of one polygon unit is determined, a prediction sample for samples belonging to the corresponding polygon unit may be derived from the sample value of the corresponding partition unit for each pixel (or sample) of the polygon unit using affine transform. That is, the pixel value of a reference region derived through affine transform may be used as the prediction sample of the polygon unit (or pixel value)).
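For illustration only, the motion vector derivation quoted above (predictor plus signalled difference, one motion vector per polygon vertex) can be sketched as follows; the function and variable names are hypothetical and not taken from Koo:

```python
# Illustrative sketch of the derivation in Koo's paras [0555]-[0561]: the
# decoder recovers each vertex motion vector by adding the signalled motion
# vector difference (MVD) to the derived predictor (MVP), per component.

def derive_vertex_mv(mvp, mvd):
    """Per-component addition of the predictor and the signalled difference."""
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

# One motion vector per polygon vertex; the three vertex MVs of a polygon
# unit then define the affine transform used to fetch the prediction samples.
predictors = [(4, -2), (0, 1), (3, 3)]
differences = [(1, 1), (-2, 0), (0, -1)]
vertex_mvs = [derive_vertex_mv(p, d) for p, d in zip(predictors, differences)]
```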
Regarding claim 2, Koo discloses the mesh decoding device according to claim 1, wherein the circuit selects a motion vector of a vertex connected to the vertex to be decoded from among decoded motion vectors accumulated in the motion vector buffer based on an index decoded from a bit stream of a frame to be decoded (Para[0326]-[0327] teach that the position information of the polygon unit that is selected from the vertexes of a plurality of polygon units located in an adjacent side of an adjacent processing block may be used for a predictor. Furthermore, the selected polygon unit vertex may be indicated by an indicator. That is, the encoder may transmit the index information of the selected polygon unit vertex to the decoder. The encoder transmits the difference Δ between the position predictor of the vertex of the determined polygon unit and the vertex 1701 of the polygon unit located in the left side of the current prediction block, and the index information of the selected vertex (e.g., 1711) of the polygon unit to the decoder. Para[0329] teaches that the decoder receives the difference Δ between the predictor and the vertex 1701 of the polygon unit located in the left side of the current prediction block, and the index information of the selected polygon unit (e.g., 1711) from the encoder. Furthermore, the decoder determines the position information of the vertex (e.g., 1711) of the selected polygon unit in the processing block adjacent to the left side using the received index information as a predictor; Fig. 2, a decoded picture buffer (DPB) 250).
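For illustration only, the index-signalled selection quoted above (paras [0326]-[0329]) can be sketched as follows; the names and values are hypothetical:

```python
# Illustrative sketch: the bit stream carries an index into the candidates
# accumulated in the buffer plus a difference delta; the decoder selects the
# indexed candidate as the predictor and adds the delta to it.

def select_and_decode(candidates, index, delta):
    predictor = candidates[index]
    return (predictor[0] + delta[0], predictor[1] + delta[1])

candidates = [(10, 4), (12, 6), (8, 2)]  # decoded MVs of connected vertices
decoded = select_and_decode(candidates, index=1, delta=(-1, 2))
```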
Regarding claim 3, Koo discloses the mesh decoding device according to claim 1, wherein the circuit selects a motion vector of a nearest vertex among decoded motion vectors accumulated in the motion vector buffer as a motion vector of a vertex connected to the vertex to be decoded (Fig. 16 & Para[0308] teach that the decoder receives the difference Δ between the position predictor of the vertex of the polygon unit of the current processing block and the vertex 1601 position of the polygon unit located in the left side of the current processing block. Furthermore, the decoder determines the position information of the vertex 1611 of the polygon unit located in the right side of the processing block adjacent to the left side that is decoded previously as a predictor. Furthermore, the decoder derives the position of the vertex 1601 of the polygon unit of the current processing block based on the received difference Δ and the position predictor of the vertex of the polygon unit which is determined. Para[0326]-[0329] & Fig. 17, Para[0337]-[0338], and para[0364] teach that the decoder determines the position information of the vertex (e.g., 1811) of the selected polygon unit in the processing block adjacent to the left side. Para[0447] teaches that when there is a plurality of prediction candidates that are available since there is a plurality of polygon units adjacent to the polygon unit, the motion vector for the polygon unit adjacent to the longest side of the current polygon unit may be selected as the predictor. That is, in the above example, the polygon unit 2302 may use both of the motion vector of the polygon unit 2301 within the adjacent processing block and the motion vector of the polygon unit 2304 within the current processing block, but may select the motion vector of the polygon unit 2304 adjacent to the longest side of the current polygon unit 2302 as the predictor).
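For illustration only, the nearest-vertex selection recited in claim 3 can be sketched as follows; the buffer layout (position/motion-vector pairs) and names are assumptions, not taken from the claim or the reference:

```python
# Illustrative sketch: among the decoded vertex motion vectors held in the
# buffer, select the one whose vertex lies closest to the vertex being
# decoded, and use it as the motion vector of the connected vertex.
import math

def nearest_vertex_mv(buffer, target):
    """buffer: list of (vertex_position, motion_vector) pairs."""
    return min(buffer, key=lambda entry: math.dist(entry[0], target))[1]

buffer = [((0, 0), (1, 1)), ((8, 2), (3, 0)), ((2, 1), (-1, 2))]
mv = nearest_vertex_mv(buffer, target=(3, 1))
```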
Regarding claim 4, Koo discloses the mesh decoding device according to claim 1, wherein the circuit: generates a motion vector residual from a bit stream of a frame to be decoded (Para[0543], [0554] & Fig. 33 teach that, for the motion vector prediction of a polygon vertex, if a motion vector predictor candidate list for a corresponding polygon vertex includes a plurality of candidates, a motion vector predictor for the corresponding polygon vertex may be derived based on the configured motion vector predictor candidate list. In this case, if any one of candidate values belonging to the motion vector predictor candidate list is selected, the motion vector predictor derivation unit 3301 belonging to the decoder may receive index information indicative of the selected candidate value from the encoder; Fig. 2, a decoded picture buffer (DPB) 250); outputs the output motion vector as a predicted value of the motion vector (Para[0412] teaches that the decoder receives the difference Δ between the motion vector of the polygon unit within the current processing block 2002 and the predictor for it, the picture index that includes the prediction block 2001 and the prediction direction information. Furthermore, the decoder determines the motion vector for the polygon unit co-located within the prediction block 2001 as the predictor using the picture index and the prediction direction information. Furthermore, the decoder derives the motion vector of the polygon unit of the current processing block 2002 based on the received MV difference Δ and the determined MV predictor); and outputs a motion vector of the vertex to be decoded by adding the generated motion vector residual and the predicted value of the output motion vector (Para[0427] teaches that the decoder receives the difference Δ between the motion vector predictor and the motion vector value for each polygon unit in the current processing block, the picture index including the prediction block 2201, and the prediction direction. Para[0461] & Fig. 24 teach that the derivation unit 2403 may derive the motion vector of the polygon unit of the processing block being now decoded based on the motion vector difference and the motion vector predictor of the polygon unit of the processing block being now decoded. Fig. 33 & Para[0546] teach that, for example, the motion vector difference derivation unit 3302 may derive a motion vector for a corresponding polygon vertex by adding a motion vector difference for a polygon vertex and a motion vector predictor for the corresponding polygon vertex).
Regarding claim 7, Koo discloses a mesh decoding method, comprising: acquiring a decoded motion vector of a vertex connected to a vertex to be decoded from a motion vector buffer (Para[0543] & Fig. 33 teach that, for the motion vector prediction of a polygon vertex, if a motion vector predictor candidate list for a corresponding polygon vertex includes a plurality of candidates, a motion vector predictor for the corresponding polygon vertex may be derived based on the configured motion vector predictor candidate list. In this case, if any one of candidate values belonging to the motion vector predictor candidate list is selected, the motion vector predictor derivation unit 3301 belonging to the decoder may receive index information indicative of the selected candidate value from the encoder. Para[0554] likewise teaches that, if any one of candidate values belonging to the motion vector predictor candidate list is selected, the decoder may receive index information indicative of the selected candidate value from the encoder); and outputting a motion vector of the vertex to be decoded using all or a part of the acquired decoded motion vector (para[0555]-[0561] & Fig. 34 teach that the encoder/decoder derives a motion vector for the corresponding polygon vertex based on a motion vector difference for the polygon vertex and the motion vector predictor derived at step S3401 (S3402). For example, the encoder/decoder may derive a motion vector for a corresponding polygon vertex by adding the motion vector difference for the polygon vertex and the motion vector predictor for the corresponding polygon vertex derived at step S3401. The encoder/decoder may specify a corresponding partition unit using a motion vector for each polygon vertex that forms a corresponding polygon unit using the method described in A). That is, the pixel value or interpolated value of a partition unit (i.e., reference region) specified by the motion vector within the reference picture may be used as the prediction sample of the polygon unit (or pixel value). In this case, as in the description of FIG. 26, if the motion vector of each of the three vertexes of one polygon unit is determined, a prediction sample for samples belonging to the corresponding polygon unit may be derived from the sample value of the corresponding partition unit for each pixel (or sample) of the polygon unit using affine transform. That is, the pixel value of a reference region derived through affine transform may be used as the prediction sample of the polygon unit (or pixel value)).
Regarding claim 8, Koo discloses a program stored on a non-transitory computer-readable medium for causing a computer to function as a mesh decoding device (Para[0563]-[0564] teach that software codes may be stored in the memory and driven by the processor. The memory may be located interior or exterior to the processor, and may exchange data with the processor with various known means), wherein the mesh decoding device includes a circuit, the circuit: acquires a decoded motion vector of a vertex connected to a vertex to be decoded (Para[0543] & Fig. 33 teach that, for the motion vector prediction of a polygon vertex, if a motion vector predictor candidate list for a corresponding polygon vertex includes a plurality of candidates, a motion vector predictor for the corresponding polygon vertex may be derived based on the configured motion vector predictor candidate list. In this case, if any one of candidate values belonging to the motion vector predictor candidate list is selected, the motion vector predictor derivation unit 3301 belonging to the decoder may receive index information indicative of the selected candidate value from the encoder. Para[0554] likewise teaches that, if any one of candidate values belonging to the motion vector predictor candidate list is selected, the decoder may receive index information indicative of the selected candidate value from the encoder) from a motion vector buffer unit (Fig. 2 teaches a decoded picture buffer (DPB) 250); and outputs a motion vector of the vertex to be decoded using all or a part of the acquired decoded motion vector (para[0555]-[0561] & Fig. 34 teach that the encoder/decoder derives a motion vector for the corresponding polygon vertex based on a motion vector difference for the polygon vertex and the motion vector predictor derived at step S3401 (S3402). For example, the encoder/decoder may derive a motion vector for a corresponding polygon vertex by adding the motion vector difference for the polygon vertex and the motion vector predictor for the corresponding polygon vertex derived at step S3401. The encoder/decoder may specify a corresponding partition unit using a motion vector for each polygon vertex that forms a corresponding polygon unit using the method described in A). That is, the pixel value or interpolated value of a partition unit (i.e., reference region) specified by the motion vector within the reference picture may be used as the prediction sample of the polygon unit (or pixel value). In this case, as in the description of FIG. 26, if the motion vector of each of the three vertexes of one polygon unit is determined, a prediction sample for samples belonging to the corresponding polygon unit may be derived from the sample value of the corresponding partition unit for each pixel (or sample) of the polygon unit using affine transform. That is, the pixel value of a reference region derived through affine transform may be used as the prediction sample of the polygon unit (or pixel value)).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Koo et al. (US 2018/0041768 A1) in view of Mammou et al. (US 2023/0290063 A1).
Regarding claim 5, Koo discloses the mesh decoding device according to claim 2. Koo does not explicitly disclose wherein the circuit decodes adaptive_mesh_flag corresponding to the entire base mesh and adaptive_bit_flag corresponding to each of the meshpatches, and the adaptive_mesh_flag and the adaptive_bit_flag are flags indicating whether to adjust accuracy of the motion vector residual, and take either a value of 0 or 1. However, Mammou discloses wherein the circuit decodes adaptive_mesh_flag corresponding to the entire base mesh (Para[0314]-[0317] teach that vps_ext_mesh_data_substream_present_flag indicates the presence of a Mesh Data substream in the bitstream. If the flag is false, the base mesh substreams should not be present in the bitstream. If such bitstreams are present, such bitstreams should be ignored.
vps_ext_mesh_data_facegroup_id_attribute_present_flag equals 1 indicates that one of the attribute types present in the base mesh data stream is the facegroup Id.
vps_ext_mesh_data_submesh_id_attribute_present_flag equals 1 indicates that one of the attribute types for the base mesh data stream is the submesh Id.
vps_ext_mesh_data_attribute_count indicates the number of total attributes in the base mesh including the attributes signalled through the base mesh data substream and the attributes signalled in the video sub streams)
and adaptive_bit_flag corresponding to each of the meshpatches, and the adaptive_mesh_flag and the adaptive_bit_flag are flags indicating whether to adjust accuracy of the motion vector residual, and take either a value of 0 or 1 (para[0345] teaches that, to signal the attribute indices for submesh id and facegroup_id, the indices of the elements can be explicitly signalled after the present flags, as illustrated below.
Para[0364]-[0365] teach that, when asps_vmc_ext_segment_mapping_method is set to 1, this may indicate that the submesh Id is derived by the patch information in a tile. Each tile in the atlas data substream corresponds to one submesh. Otherwise, the base mesh is segmented by a method as defined by the syntax element asps_vmc_ext_segment_mapping_method. asps_vmc_ext_patch_mapping_method indicates how to map a subpart of a base mesh or a submesh to a patch. When asps_vmc_ext_patch_mapping_method is equal to 0, all the triangles in the segment indicated by mdu_segment_id are associated with the current patch. In this case, there is only one patch associated with the segment. asps_vmc_ext_patch_mapping_method cannot be 0 when asps_vmc_ext_segment_mapping_method is equal to 1). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Koo, which enables predicting the motion vector for each vertex of the polygon unit and predicting the sampled data of the polygon unit, with the method of Mammou, which takes advantage of the subdivision structure to locally adjust the mesh resolution in certain areas based on various criteria, in order to provide a system that adapts the resolution of a dynamic mesh (e.g., number of vertices/faces, resolution of the attribute maps, etc.) to network conditions and/or the capabilities and constraints of a consuming device/platform.
Regarding claim 6, Koo discloses the mesh decoding device according to claim 2. Koo does not explicitly disclose wherein the circuit returns accuracy of the motion vector residual to original accuracy based on an accuracy control parameter in a case where adaptive_mesh_flag corresponding to the entire base mesh is 1 and adaptive_bit_flag corresponding to a certain meshpatch is 1. However, Mammou discloses wherein the circuit returns accuracy of the motion vector residual to original accuracy based on an accuracy control parameter in a case where adaptive_mesh_flag corresponding to the entire base mesh is 1 and adaptive_bit_flag corresponding to a certain meshpatch is 1 (Para[0066] and [0596] teach that the decoder 502 and the post processor 503 may adaptively adjust the resolution/accuracy of the produced mesh M″(i) and/or its associated attribute maps A″(i). Para[0175] teaches that substantially preserving the shape of the original mesh can include preserving the shape of the input mesh sufficiently to achieve a desired encoder and/or decoder performance while simultaneously achieving a desired level of accuracy or fidelity in the resulting mesh representation.
Para[0293]-[0294] teach that vuh_mesh_data_sample_stream_flag indicates that the mesh subbitstream has a format of sample stream as defined herein. When the flag is 0, the mesh subbitstream is fully decoded with external methods. vuh_mesh_data_motion_field_present_flag indicates the mesh subbitstream contains data which can be used for the inter-prediction between mesh data in the mesh subbitstream). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Koo, which enables predicting the motion vector for each vertex of the polygon unit and predicting the sampled data of the polygon unit, with the method of Mammou, which takes advantage of the subdivision structure to locally adjust the mesh resolution in certain areas based on various criteria, in order to provide a system that adapts the resolution of a dynamic mesh (e.g., number of vertices/faces, resolution of the attribute maps, etc.) to network conditions and/or the capabilities and constraints of a consuming device/platform.
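For illustration only, the flag-gated accuracy restoration recited in claims 5-6 can be sketched as follows; the shift-based restoration and all names are assumptions for illustration, not taken from the claims or Mammou:

```python
# Illustrative sketch: when adaptive_mesh_flag (signalled for the entire base
# mesh) and adaptive_bit_flag (signalled per mesh patch) are both 1, the
# decoded motion vector residual is returned to its original accuracy using
# an accuracy control parameter (modeled here as a left bit shift).

def restore_residual(residual, adaptive_mesh_flag, adaptive_bit_flag, shift):
    if adaptive_mesh_flag == 1 and adaptive_bit_flag == 1:
        return residual << shift  # undo the signalled precision reduction
    return residual               # residual is already at original accuracy

r1 = restore_residual(3, adaptive_mesh_flag=1, adaptive_bit_flag=1, shift=2)
r2 = restore_residual(3, adaptive_mesh_flag=1, adaptive_bit_flag=0, shift=2)
```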
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROWINA J CATTUNGAL whose telephone number is (571)270-5922. The examiner can normally be reached Monday-Thursday 7:30am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton, can be reached at (571) 272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROWINA J CATTUNGAL/Primary Examiner, Art Unit 2425