Prosecution Insights
Last updated: April 19, 2026
Application No. 17/128,652

THREE-DIMENSIONAL DATA ENCODING METHOD, THREE-DIMENSIONAL DATA DECODING METHOD, THREE-DIMENSIONAL DATA ENCODING DEVICE, AND THREE-DIMENSIONAL DATA DECODING DEVICE

Status: Final Rejection (§103)
Filed: Dec 21, 2020
Examiner: DHOOGE, DEVIN J
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Panasonic Intellectual Property Corporation of America
OA Round: 6 (Final)
Grant Probability: 70% (Favorable)
Projected OA Rounds: 7-8
Projected Time to Grant: 3y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 70%, above average (50 granted / 71 resolved; +8.4% vs TC avg)
Interview Lift: +42.9% allowance lift in resolved cases with an interview
Avg Prosecution: 3y 5m
Currently Pending: 48
Total Applications: 119 (across all art units)

Statute-Specific Performance

§101: 8.2% (-31.8% vs TC avg)
§103: 49.4% (+9.4% vs TC avg)
§102: 35.8% (-4.2% vs TC avg)
§112: 5.7% (-34.3% vs TC avg)
Comparison baseline is the Tech Center average estimate • Based on career data from 71 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Amendment

This communication is in response to the amendment filed on 12/23/2025. Claims 1, 5-8, 12-16, and 21-22 have been amended. Claims 3 and 10 are canceled. Claims 1, 4-8, and 11-22 are currently pending.

Response to Arguments

Applicant's arguments filed on 12/23/2025 on pages 9-12, under REMARKS, with respect to 35 U.S.C. 103 have been fully considered, but they are not persuasive. Regarding claim 1, applicant on page 10 states that:

[Applicant's quoted argument was reproduced as an image (media_image1.png) in the original document.]

The examiner respectfully disagrees. The examiner would first point to YANO, paragraphs [0143-0153] and [0184-0192], which describe the encoding and decoding process of YANO and clearly include a first and a second encoding scheme, each used on a different image data type. Looking at the encoding process in paragraphs [0147-0149], the first encoding scheme is used at step S105: by a two-dimensional-image encoding method, the video encoding section 115 encodes a geometry video frame, which is the video frame of the generated positional information. Then at step S106 the second encoding scheme is used, this time targeted at encoding image features/attributes such as color, stated as: "a two-dimensional-image encoding method, the video encoding section 116 encodes a color video frame which is the video frame of the attribute information generated in Step S103". These steps clearly show two encoding schemes, the first directed toward geometric data and the second directed toward attribute data such as color information, which is encoded into a bitstream and then decoded by the decoding steps to turn the bitstream back into the input image, assisting data transfer because the bitstream is easier to transmit than full-size images.
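The two-pass flow the examiner attributes to YANO (step S105 encoding the geometry video frame, step S106 encoding the color video frame, both combined into one bitstream) can be sketched as follows. This is an illustrative stand-in only: the function name and the encoder callables are hypothetical, not code from either reference.

```python
# Sketch of the two-scheme flow described for YANO (S105/S106).
# The "encoders" are stand-in callables, not real video codecs.

def encode_point_cloud(positional_frame, color_frame,
                       geometry_encoder, attribute_encoder):
    """Encode geometry and attributes with two (possibly different)
    two-dimensional-image encoding schemes, then combine the results
    into a single stream-like structure."""
    geom_bits = geometry_encoder(positional_frame)   # step S105
    attr_bits = attribute_encoder(color_frame)       # step S106
    # A real system would interleave and signal these sub-streams;
    # here we simply return both parts together.
    return {"geometry": geom_bits, "attributes": attr_bits}
```

For example, passing two toy "encoders" (`lambda f: "g:" + f` and `lambda f: "a:" + f`) shows that each component is processed by its own scheme before being combined.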
Please see the full rejection of the claims below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 1, 4, 8, 11, and 15-22 are rejected under 35 U.S.C. § 103 as being obvious over US 2021/0027505 A1 to YANO et al. (hereinafter "YANO") in view of US 2017/0347122 A1 to CHOU et al. (hereinafter "CHOU").
As per claim 1, YANO discloses a three-dimensional data encoding method (a system and method adapted to perform video-based encoding and decoding on three-dimensional data including geometric information and attribute information of the input video; abstract; figs. 2, 14-17; paragraphs [0062-0063]), comprising: encoding three-dimensional data to generate an encoded stream (the system is adapted to encode three-dimensional video data into a coded bitstream; paragraphs [0075-0077]); storing first information in the encoded stream (a bitstream generating section generates a bitstream including encoded data of the video frame image and encoded data of a plurality of the feature maps generated by the feature map generating section, wherein feature maps represent information on specific features being observed; paragraph [0076]), the first information indicating a first encoding scheme used in the encoding out of a plurality of encoding schemes (each patch of a frame of the input video for the video-based encoding approach is encoded using a specific encoding scheme, wherein each patch may use a different scheme as desired, the schemes including AVC "Advanced Video Coding" and HEVC "High Efficiency Video Coding", among others; paragraphs [0062-0064], [0067], [0242]); and storing second information in the encoded stream (at step S105, by a two-dimensional-image encoding method, the video encoding section 115 encodes a geometry video frame which is the video frame of the positional information generated in Step S103; at Step S106, by a two-dimensional-image encoding method, the video encoding section 116 encodes a color video frame which is the video frame of the attribute information generated in Step S103; paragraphs [0062-0064], [0147-0149], [0242]), the second information indicating a second encoding scheme used in the encoding out of the plurality of encoding schemes (at step S106 an encoding scheme adapted to encode attribute/color information is used to encode the image into the bitstream, and the color metadata and the geometry metadata are combined in the encoded bitstream of the image; paragraphs [0062-0064], [0147-0149], [0242]).

YANO fails to disclose wherein the three-dimensional data includes geometry information and attribute information, the encoding includes: encoding the geometry information using the first encoding scheme indicated by the first information, the geometry information corresponding to a three-dimensional space; and encoding the attribute information using the second encoding scheme indicated by the second information, and the encoded stream includes first metadata for the geometry information and second metadata for the attribute information.

CHOU discloses wherein the three-dimensional data includes geometry information and attribute information, the encoding includes (the three-dimensional data which is encoded and decoded by the computing systems includes geometry information 312 and attribute data 314 of occupied points of the input video; paragraphs [0061-0062]): encoding the geometry information using the first encoding scheme indicated by the first information (the encoding process includes the use of an encoding tool adapted to have one or more schemes/methods of encoding the video frames input to the computing system; paragraphs [0043], [0053-0054], [0057], [0062]), the geometry information corresponding to a three-dimensional space (the computing system comprises encoders 301 (fig. 3a) and 302 (fig. 3b) for encoding point cloud three-dimensional data captured via a source, the source being a depth camera or other digital video source of a video of a three-dimensional area/space; figs. 3a-3b; paragraphs [0060-0062], [0066-0068], [0120], [0322]); and encoding the attribute information using the second encoding scheme indicated by the second information (the encoders 301 and 302 encode various attribute information including attributes 314, relating to attribute(s) for an occupied point, and CHOU lists 6 attribute types described further below; figs. 3a-3b; paragraphs [0060-0062]), and the encoded stream includes first metadata for the geometry information (the encoders 301 and 302 each include a node 312 for geometry to test for indicators of occupied points in the point cloud of the 3D space, wherein at node 312 the system is adapted, per paragraph [0064], to use a volumetric element "voxel" as a set of one or more collected attributes for a location in 3D space; attributes are grouped on a voxel-by-voxel basis; usually the geometry data (312) is the same for all attributes of a point cloud frame, and each occupied point has values for the same set of attributes/voxels; paragraphs [0060-0064]) and second metadata for the attribute information (the encoders 301 and 302 encode various attribute information including attributes 314; attribute(s) for an occupied point can include: (1) one or more sample values each defining, at least in part, a color associated with the occupied point (e.g., YUV sample values, RGB sample values, or sample values in some other color space); (2) an opacity value defining, at least in part, an opacity associated with the occupied point; (3) a specularity value defining, at least in part, a specularity coefficient associated with the occupied point; (4) one or more surface normal values defining, at least in part, the direction of a flat surface associated with the occupied point; (5) a light field defining, at least in part, a set of light rays passing through or reflected from the occupied point; and/or (6) a motion vector defining, at least in part, motion associated with the occupied point, as stated in paragraph [0062]; all 6 listed features comprise attribute data, and since metadata is defined as "a set of data that describes and gives information about other data", the listed attributes 314 describe the point cloud data and therefore qualify as metadata; figs. 3a-3b; paragraphs [0060-0067], [0078], [0082-0087]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify YANO to have the encoded stream include first metadata for the geometry information and second metadata for the attribute information of the CHOU reference. The suggestion/motivation for doing so would have been to provide a point cloud frame that can depict an entire model of objects in a 3D space at a given instance of time, or a point cloud frame that can depict a single object or region of interest in the 3D space at a given instance of time, as suggested by CHOU at paragraph [0061]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine CHOU with YANO to obtain the invention as specified in claim 1.

As per claim 4, YANO in view of CHOU discloses the three-dimensional data encoding method according to claim 1. Modified YANO further discloses further comprising: storing the encoded stream into one or more units (the computing system includes storage components (units) such as storage section 913, including a hard disk, a RAM disk, and a non-volatile memory, all adapted to store information/data including an encoded bitstream file; paragraph [0237]).
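The claim 1 limitations at issue (first/second information identifying the schemes, plus separate geometry and attribute metadata carried in the encoded stream) could be laid out in a stream header along the following lines. The field layout, scheme identifiers, and function names are illustrative assumptions only, not the claimed or prior-art format.

```python
import struct

# Hypothetical codec identifiers (AVC/HEVC are named in the rejection,
# but these numeric IDs are invented for illustration).
SCHEMES = {"AVC": 1, "HEVC": 2}

def pack_stream(geom_scheme, attr_scheme, geom_payload, attr_payload):
    """Pack first/second information (scheme IDs) and per-component
    metadata (here, just the payload lengths) ahead of both payloads."""
    header = struct.pack(
        "<BBII",
        SCHEMES[geom_scheme],   # first information: geometry scheme
        SCHEMES[attr_scheme],   # second information: attribute scheme
        len(geom_payload),      # first metadata (geometry)
        len(attr_payload),      # second metadata (attribute)
    )
    return header + geom_payload + attr_payload

def unpack_stream(stream):
    """Recover both scheme names and both payloads from the stream."""
    g_id, a_id, g_len, a_len = struct.unpack_from("<BBII", stream)
    off = struct.calcsize("<BBII")
    names = {v: k for k, v in SCHEMES.items()}
    return (names[g_id], names[a_id],
            stream[off:off + g_len],
            stream[off + g_len:off + g_len + a_len])
```

A round trip such as `unpack_stream(pack_stream("HEVC", "AVC", b"geo", b"rgb"))` recovers both scheme names and both payloads, which is the signaling behavior the claim language describes.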
As per claim 8, YANO discloses a three-dimensional data decoding method (a system and method adapted to perform video-based encoding and decoding on three-dimensional data including geometric information and attribute information of the input video; abstract; figs. 2, 14-17; paragraphs [0062-0063]), comprising: determining a first encoding scheme used in encoding three-dimensional data to generate an encoded stream (the system is adapted to encode three-dimensional video data into a coded bitstream; a bitstream generating section generates a bitstream including encoded data of the video frame image and encoded data of a plurality of the feature maps generated by the feature map generating section, wherein feature maps represent information on specific features being observed; paragraphs [0075-0077]), the first encoding scheme being determined based on first information included in the encoded stream (each patch of a frame of the input video for the video-based encoding approach is encoded using a specific encoding scheme, wherein each patch may use a different scheme as desired, the schemes including AVC "Advanced Video Coding" and HEVC "High Efficiency Video Coding", among others; paragraphs [0062-0064], [0067], [0242]), the first information indicating the first encoding scheme out of a plurality of encoding schemes (a decoding process using decoding apparatus 200 of the computing system decodes the encoded bitstream, decoding each patch with its respective encoding scheme applied; paragraphs [0071-0073], [0130-0133]); determining a second encoding scheme used in the encoding, the second encoding scheme being determined based on second information included in the encoded stream, the second information indicating the second encoding scheme out of the plurality of encoding schemes (at step S105, by a two-dimensional-image encoding method, the video encoding section 115 encodes a geometry video frame which is the video frame of the positional information generated in Step S103; at Step S106, by a two-dimensional-image encoding method, the video encoding section 116 encodes a color video frame which is the video frame of the attribute information generated in Step S103; paragraphs [0062-0064], [0147-0149], [0242]); and decoding the encoded stream based on the first encoding scheme and the second encoding scheme (at step S106 an encoding scheme adapted to encode attribute/color information is used to encode the image into the bitstream, and the color metadata and the geometry metadata are combined in the encoded bitstream of the image, which is then decoded at a decoder step to restore the image from its bitstream form; paragraphs [0062-0064], [0147-0149], [0242]).

YANO fails to disclose wherein the three-dimensional data includes geometry information and attribute information, the geometry information corresponding to a three-dimensional space, the encoded stream includes encoded data of the geometry information, encoded data of the attribute information, first metadata for the geometry information, and second metadata for the attribute information, and the decoding includes: decoding the encoded data of the geometry information using the first encoding scheme indicated by the first information; and decoding the encoded data of the attribute information using the second encoding scheme indicated by the second information.

CHOU discloses wherein the three-dimensional data includes geometry information and attribute information, the geometry information corresponding to a three-dimensional space (the three-dimensional data which is encoded and decoded by the computing systems includes geometry information 312 and attribute data 314 of occupied points of the input video; paragraphs [0061-0062]), the encoded stream includes encoded data of the geometry information (the encoders 301 and 302 each include a node 312 for geometry to test for indicators of occupied points in the point cloud of the 3D space, wherein at node 312 the system is adapted, per paragraph [0064], to use a volumetric element "voxel" as a set of one or more collected attributes for a location in 3D space; attributes are grouped on a voxel-by-voxel basis; usually the geometry data (312) is the same for all attributes of a point cloud frame, and each occupied point has values for the same set of attributes/voxels; paragraphs [0060-0064]), encoded data of the attribute information (the encoders 301 and 302 encode various attribute information including attributes 314, relating to attribute(s) for an occupied point, and CHOU lists 6 attribute types described further below; figs. 3a-3b; paragraphs [0060-0062]), first metadata for the geometry information, and second metadata for the attribute information (the computing system comprises encoders 301 (fig. 3a) and 302 (fig. 3b) for encoding point cloud three-dimensional data captured via a source, the source being a depth camera or other digital video source of a video of a three-dimensional area/space; figs. 3a-3b; paragraphs [0060-0062], [0066-0068], [0120], [0322]), and the decoding includes: decoding the encoded data of the geometry information using the first encoding scheme indicated by the first information (the encoders 301 and 302 include corresponding decoders, each including a node 312 for geometry to test for indicators of occupied points in the point cloud of the 3D space, wherein at node 312 the system is adapted, per paragraph [0064], to use a volumetric element "voxel" as a set of one or more collected attributes for a location in 3D space; attributes are grouped on a voxel-by-voxel basis; usually the geometry data (312) is the same for all attributes of a point cloud frame, and each occupied point has values for the same set of attributes/voxels; paragraphs [0060-0064]); and decoding the encoded data of the attribute information using the second encoding scheme indicated by the second information (the encoders 301 and 302 encode various attribute information including attributes 314; attribute(s) for an occupied point can include: (1) one or more sample values each defining, at least in part, a color associated with the occupied point (e.g., YUV sample values, RGB sample values, or sample values in some other color space); (2) an opacity value defining, at least in part, an opacity associated with the occupied point; (3) a specularity value defining, at least in part, a specularity coefficient associated with the occupied point; (4) one or more surface normal values defining, at least in part, the direction of a flat surface associated with the occupied point; (5) a light field defining, at least in part, a set of light rays passing through or reflected from the occupied point; and/or (6) a motion vector defining, at least in part, motion associated with the occupied point, as stated in paragraph [0062]; all 6 listed features comprise attribute data, and since metadata is defined as "a set of data that describes and gives information about other data", the listed attributes 314 describe the point cloud data and therefore qualify as metadata; figs. 3a-3b; paragraphs [0060-0067], [0078], [0082-0087]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify YANO to have the three-dimensional data include geometry information and attribute information, the geometry information corresponding to a three-dimensional space, of the CHOU reference. The suggestion/motivation for doing so would have been to provide a point cloud frame that can depict an entire model of objects in a 3D space at a given instance of time, or a point cloud frame that can depict a single object or region of interest in the 3D space at a given instance of time, as suggested by CHOU at paragraph [0061]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine CHOU with YANO to obtain the invention as specified in claim 8.

As per claim 11, YANO in view of CHOU discloses the three-dimensional data decoding method according to claim 8. Modified YANO further discloses wherein the encoded stream is stored into one or more units, and the three-dimensional data decoding method further comprises: obtaining the encoded stream from the one or more units (the computing system includes storage components (units) such as storage section 913, including a hard disk, a RAM disk, and a non-volatile memory, all adapted to store information/data including an encoded bitstream file, and in turn the computing system is adapted to retrieve/obtain the stored data; paragraph [0237]).
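The decode side characterized for claim 8 amounts to reading the signaled scheme for each component and dispatching the matching decoder. The sketch below uses a hypothetical decoder registry and stream layout; none of these names come from YANO or CHOU.

```python
# Sketch of claim 8's decode-side logic: determine each encoding scheme
# from information carried in the stream, then dispatch the matching
# decoder. Registry and stream layout are invented for illustration.

DECODERS = {
    "AVC":  lambda data: ("decoded-with-AVC", data),
    "HEVC": lambda data: ("decoded-with-HEVC", data),
}

def decode_stream(stream):
    """stream: dict carrying the signaled schemes plus both payloads."""
    first = stream["first_information"]    # indicates geometry scheme
    second = stream["second_information"]  # indicates attribute scheme
    geometry = DECODERS[first](stream["geometry_data"])
    attributes = DECODERS[second](stream["attribute_data"])
    return geometry, attributes
```

The point of the dispatch table is that geometry and attribute data need not share a codec: each component is routed to whichever decoder its own signaled information names.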
As per claim 15, YANO discloses a three-dimensional data encoding device, comprising: a processor (a computing system and method for performing video-based encoding and decoding, the system comprising computing components such as a processor; fig. 27; paragraphs [0223-0235]); and memory, wherein, using the memory (the computing system further comprises a memory to store instructions related to the video-based encoding and decoding methods; fig. 27; paragraphs [0062], [0130], [0223-0235]), the processor: encodes three-dimensional data to generate an encoded stream (the system is adapted to encode three-dimensional video data into a coded bitstream; paragraphs [0075-0077]); stores first information in the encoded stream, the first information indicating a first encoding scheme used in the encoding out of a plurality of video-based encoding schemes (the system is adapted to encode three-dimensional video data into a coded bitstream in order to store the encoded data in the bitstream; paragraphs [0075-0077]); and stores second information in the encoded stream (a bitstream generating section generates a bitstream including encoded data of the video frame image and encoded data of a plurality of the feature maps generated by the feature map generating section, wherein feature maps represent information on specific features being observed; paragraph [0076]), the second information indicating a second encoding scheme used in the encoding out of the plurality of encoding schemes (each patch of a frame of the input video for the video-based encoding approach is encoded using a specific encoding scheme, wherein each patch may use a different scheme as desired, the schemes including AVC "Advanced Video Coding" and HEVC "High Efficiency Video Coding", among others; paragraphs [0062-0064], [0067], [0242]).

YANO fails to disclose the three-dimensional data includes geometry information and attribute information, the processor encodes the geometry information using the first encoding scheme indicated by the first information, the geometry information corresponding to a three-dimensional space, the processor encodes the attribute information using the second encoding scheme indicated by the second information, and the encoded stream includes first metadata for the geometry information and second metadata for the attribute information.

CHOU discloses the three-dimensional data includes geometry information and attribute information (the three-dimensional data which is encoded and decoded by the computing systems includes geometry information 312 and attribute data 314 of occupied points of the input video; paragraphs [0061-0062]), the processor encodes the geometry information using the first encoding scheme indicated by the first information (the encoding process includes the use of an encoding tool adapted to have one or more schemes/methods of encoding the video frames input to the computing system; paragraphs [0043], [0053-0054], [0057], [0062]), the geometry information corresponding to a three-dimensional space (the computing system comprises encoders 301 (fig. 3a) and 302 (fig. 3b) for encoding point cloud three-dimensional data captured via a source, the source being a depth camera or other digital video source of a video of a three-dimensional area/space; figs. 3a-3b; paragraphs [0060-0062], [0066-0068], [0120], [0322]), the processor encodes the attribute information (the encoders 301 and 302 encode various attribute information including attributes 314, relating to attribute(s) for an occupied point, and CHOU lists 6 attribute types described further below; figs. 3a-3b; paragraphs [0060-0062]) using the second encoding scheme indicated by the second information (the encoders 301 and 302 each include a node 312 for geometry to test for indicators of occupied points in the point cloud of the 3D space, wherein at node 312 the system is adapted, per paragraph [0064], to use a volumetric element "voxel" as a set of one or more collected attributes for a location in 3D space; attributes are grouped on a voxel-by-voxel basis; usually the geometry data (312) is the same for all attributes of a point cloud frame, and each occupied point has values for the same set of attributes/voxels; paragraphs [0060-0064]), and the encoded stream includes first metadata for the geometry information and second metadata for the attribute information (the encoders 301 and 302 encode various attribute information including attributes 314; attribute(s) for an occupied point can include: (1) one or more sample values each defining, at least in part, a color associated with the occupied point (e.g., YUV sample values, RGB sample values, or sample values in some other color space); (2) an opacity value defining, at least in part, an opacity associated with the occupied point; (3) a specularity value defining, at least in part, a specularity coefficient associated with the occupied point; (4) one or more surface normal values defining, at least in part, the direction of a flat surface associated with the occupied point; (5) a light field defining, at least in part, a set of light rays passing through or reflected from the occupied point; and/or (6) a motion vector defining, at least in part, motion associated with the occupied point, as stated in paragraph [0062]; all 6 listed features comprise attribute data, and since metadata is defined as "a set of data that describes and gives information about other data", the listed attributes 314 describe the point cloud data and therefore qualify as metadata; figs. 3a-3b; paragraphs [0060-0067], [0078], [0082-0087]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify YANO to have the three-dimensional data include geometry information and attribute information of the CHOU reference. The suggestion/motivation for doing so would have been to provide a point cloud frame that can depict an entire model of objects in a 3D space at a given instance of time, or a point cloud frame that can depict a single object or region of interest in the 3D space at a given instance of time, as suggested by CHOU at paragraph [0061]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine CHOU with YANO to obtain the invention as specified in claim 15.

As per claim 16, YANO discloses a three-dimensional data decoding device, comprising: a processor (a computing system and method for performing video-based encoding and decoding, the system comprising computing components such as a processor; fig. 27; paragraphs [0223-0235]); and memory, wherein, using the memory (the computing system further comprises a memory to store instructions related to the video-based encoding and decoding methods; fig. 27; paragraphs [0062], [0130], [0223-0235]), the processor: determines a first encoding scheme used in encoding three-dimensional data to generate an encoded stream (the system is adapted to encode three-dimensional video data into a coded bitstream; paragraphs [0075-0077]), the first encoding scheme being determined based on first information included in the encoded stream (a bitstream generating section that generates a bitstream including encoded data of the video frame image and encoded data of a plurality of the feature maps generated by the feature map generating section, wherein feature maps
represent information on specific features being observed; paragraph [0076]), the first information indicating the first encoding scheme out of a plurality of encoding schemes each patch of a frame of the input video for the video based encoding approach is encoded using a specific encoding scheme wherein each patch may use a different scheme as desired, the schemes including AVC “Advanced Video Coding”, and HEVC “High Efficiency Video Coding” , among others; paragraphs [0062-0064], [0067], [0242]); determines a second encoding scheme used in the encoding, the second encoding scheme being determined based on second information included in the encoded stream (at steps S105, by a two-dimensional-image encoding method, the video encoding section 115 encodes a geometry video frame which is the video frame of the positional information generated in Step S103, at Step S106, by a two-dimensional-image encoding method, the video encoding section 116 encodes a color video frame which is the video frame of the attribute information generated in Step S103; paragraphs [0062-0064], [0147-0149], [0242]), the second information indicating the second encoding scheme out of the plurality of encoding schemes (at step S106 an encoding scheme adapted to encode attribute/color information is used to encode the image into the bit stream and the color meta data and the geometry meta data are combined in the encoded bitstream of the image; paragraphs [0062-0064], [0147-0149], [0242]); and decodes the encoded stream based on the first encoding scheme and the second encoding scheme (and performing a decoding process using decoding apparatus 200 of the computing system to decode the encoded bitstream and decodes each patch having its respective encoding scheme applied; paragraphs [0071-0073], [0130-0133]), the processor decodes the encoded data of the geometry information using the first encoding scheme indicated by the first information (the encoders 301 and 302 include corresponding 
decoders each include a node 312 for geometry to test for indicators of occupied points in the point cloud of the 3d space wherein at node 312 the system is adapted to as per paragraph [0064] use a volumetric element “voxel” as a set of one or more collected attributes for a location in 3D space, attributes are grouped on a voxel by voxel basis, usually the geometry data (312) is the same for all attributes of a point cloud frame each occupied point has values for the same set of attributes/voxels; paragraphs [0060-0064], [0078], [0082-0084], [0147-0149], [0242]), and the processor decodes the encoded data of the attribute information using the second encoding scheme indicated by the second information (at step S106 an encoding scheme adapted to encode attribute/color information is used to encode the image into the bit stream and the color meta data and the geometry meta data are combined in the encoded bitstream of the image which is then to be decoded at a decoder step to restore the image from the bit stream form; paragraphs [0062-0064], [0147-0149], [0242]). YANO fails to disclose the three-dimensional data includes geometry information and attribute information, the geometry information corresponding to a three-dimensional space, the encoded stream includes encoded data of the geometry information, encoded data of the attribute information, first metadata for the geometry information, and second metadata for the attribute information. 
CHOU discloses that the three-dimensional data includes geometry information and attribute information (the three-dimensional data which is encoded and decoded by the computing systems includes geometry information 312 and attribute data 314 of occupied points of the input video; paragraphs [0061-0062]), the geometry information corresponding to a three-dimensional space (the three-dimensional data which is encoded and decoded by the computing systems includes geometry information 312 and attribute data 314 of occupied points of the input video; paragraphs [0061-0062]), the encoded stream includes encoded data of the geometry information (the encoding process includes the usage of an encoding tool which is adapted to have one or more schemes/methods of encoding the video frames input to the computing system; paragraphs [0043], [0053-0054], [0057], [0062]), encoded data of the attribute information (the encoders 301 and 302 encode various attribute information including attributes 314, relating to attribute(s) for an occupied point, and CHOU lists 6 attribute types described further below; figs 3a-3b; paragraphs [0060-0062]), first metadata for the geometry information (the encoders 301 and 302 each include a node 312 for geometry to test for indicators of occupied points in the point cloud of the 3D space, wherein at node 312 the system is adapted, as per paragraph [0064], to use a volumetric element (“voxel”) as a set of one or more collected attributes for a location in 3D space; attributes are grouped on a voxel-by-voxel basis; usually the geometry data (312) is the same for all attributes of a point cloud frame, and each occupied point has values for the same set of attributes/voxels; paragraphs [0060-0064]), and second metadata for the attribute information (the encoders 301 and 302 encode various attribute information including attributes 314, relating to attribute(s) for an occupied point, which can include: (1) one or more sample values each defining, at least in part, a color 
associated with the occupied point (such as sample values, RGB sample values, or sample values in some other color space); (2) an opacity value defining, at least in part, an opacity associated with the occupied point; (3) a specularity value defining, at least in part, a specularity coefficient associated with the occupied point; (4) one or more surface normal values defining, at least in part, direction of a flat surface associated with the occupied point; (5) a light field defining, at least in part, a set of light rays passing through or reflected from the occupied point; and/or (6) a motion vector defining, at least in part, motion associated with the occupied point, as stated in paragraph [0062], wherein all of the 6 listed features would comprise attribute data, and since metadata is defined as “a set of data that describes and gives information about other data,” the listed attributes 314 would describe the point cloud data and therefore be defined as metadata; figs 3a-3b; paragraphs [0060-0067], [0078], [0082-0087]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify YANO to have the three-dimensional data include geometry information and attribute information, the geometry information corresponding to a three-dimensional space, and the encoded stream include encoded data of the geometry information, of CHOU reference. The Suggestion/motivation for doing so would have been to provide a point cloud frame that can depict an entire model of objects in a 3D space at a given instance of time, or a point cloud frame that can depict a single object or region of interest in the 3D space at a given instance of time, as suggested by CHOU at paragraph [0061]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. 
Therefore, it would have been obvious to combine CHOU with YANO to obtain the invention as specified in claim 16. As per claim 17, YANO in view of CHOU discloses the three-dimensional data encoding method according to claim 1. YANO fails to disclose wherein the first metadata for the geometry information is a parameter set of the geometry information, and the second metadata for the attribute information is a parameter set of the attribute information. CHOU discloses wherein the first metadata for the geometry information is a parameter set of the geometry information (the bit rate R.sub.g is used for the number of bits to encode the geometry data to indicate occupied points, where the bit rate is measured in bits per occupied voxel; paragraphs [0127], [0235]), and the second metadata for the attribute information is a parameter set of the attribute information (the encoder codes the transform coefficients f.sub.atti(m,n) for the attributes selected by user selection mode capabilities to determine occupied points of point cloud image data, producing coded parameter sets for distributions related to the coded transform coefficients; paragraphs [0127], [0235]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify YANO to have wherein the first metadata for the geometry information is a parameter set of the geometry information, and the second metadata for the attribute information is a parameter set of the attribute information of CHOU reference. The Suggestion/motivation for doing so would have been to provide the ability to determine that a point in the point cloud is associated with a position in 3D space (typically, a position having x, y, and z coordinates) that is occupied, wherein this technology could be applied to a computer vision system for controlling the motion of robots or objects, as suggested and supported by CHOU at paragraph [0102]. 
Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine CHOU with YANO to obtain the invention as specified in claim 17. As per claim 18, YANO in view of CHOU discloses the three-dimensional data decoding method according to claim 8. YANO fails to disclose wherein the first metadata for the geometry information is a parameter set of the geometry information, and the second metadata for the attribute information is a parameter set of the attribute information. CHOU discloses wherein the first metadata for the geometry information is a parameter set of the geometry information (the bit rate R.sub.g is used for the number of bits to encode the geometry data to indicate occupied points, where the bit rate is measured in bits per occupied voxel; paragraphs [0127], [0235]), and the second metadata for the attribute information is a parameter set of the attribute information (the encoder codes the transform coefficients f.sub.atti(m,n) for the attributes selected by user selection mode capabilities to determine occupied points of point cloud image data, producing coded parameter sets for distributions related to the coded transform coefficients; paragraphs [0127], [0235]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify YANO to have wherein the first metadata for the geometry information is a parameter set of the geometry information, and the second metadata for the attribute information is a parameter set of the attribute information of CHOU reference. 
The Suggestion/motivation for doing so would have been to provide the ability to determine that a point in the point cloud is associated with a position in 3D space (typically, a position having x, y, and z coordinates) that is occupied, wherein this technology could be applied to a computer vision system for controlling the motion of robots or objects, as suggested and supported by CHOU at paragraph [0102]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine CHOU with YANO to obtain the invention as specified in claim 18. As per claim 19, YANO in view of CHOU discloses the three-dimensional data encoding method according to claim 1. Modified YANO further discloses wherein the attribute information is a color or a reflectance of each point (the computing system adapted to perform video-based encoding and decoding includes encoding of attribute information relating to color values and color features of the video frame; figs 2, 17, and 22; paragraphs [0061-0062], [0149]). As per claim 20, YANO in view of CHOU discloses the three-dimensional data decoding method according to claim 8. Modified YANO further discloses wherein the attribute information is a color or a reflectance of each point (the computing system adapted to perform video-based encoding and decoding includes encoding of attribute information relating to color values and color features of the video frame; figs 2, 17, and 22; paragraphs [0061-0062], [0149]). As per claim 21, YANO in view of CHOU discloses the three-dimensional data encoding method according to claim 1. 
Modified YANO fails to disclose wherein the first information includes a first value indicating the first encoding scheme used in the encoding, and the second information includes a second value indicating the second encoding scheme used in the encoding. CHOU discloses wherein the first information includes a first value indicating the first encoding scheme used in the encoding (the attribute encoding step 314 includes multiple modes or types of attribute data that may be encoded and that act as encoding schemes when selected individually; see paragraph [0062], where it states that the attribute(s) for an occupied point include attribute types 1-6, and the encoder is adapted to encode data relating to the attribute to the corresponding occupied point found using the geometry node 312 and would assign values 1-6 based on the type 1-6 of encoding scheme utilized; paragraphs [0060-0062]; figs 3a-3b), and the second information includes a second value indicating the second encoding scheme used in the encoding (when the selected attribute type 1-6 of the attribute node 314 is selected, the encoder 301/302 encodes the selected attribute type to the occupied point of the point cloud data; paragraphs [0060-0062]; figs 3a-3b). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify YANO to have the information include a value indicating the video-based encoding scheme used in the encoding of CHOU reference. The Suggestion/motivation for doing so would have been to provide a plurality of attribute data types to provide significantly more data about each identified occupied point of the point cloud data, as suggested by paragraph [0062] of CHOU. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. 
Therefore, it would have been obvious to combine CHOU with YANO to obtain the invention as specified in claim 21. As per claim 22, YANO in view of CHOU discloses the three-dimensional data decoding method according to claim 8. Modified YANO fails to disclose wherein the first information includes a first value indicating the first encoding scheme used in the encoding, and the second information includes a second value indicating the second encoding scheme used in the encoding. CHOU discloses wherein the first information includes a first value indicating the first encoding scheme used in the encoding (the attribute encoding step 314 includes multiple modes or types of attribute data that may be encoded and that act as encoding schemes when selected individually; see paragraph [0062], where it states that the attribute(s) for an occupied point include attribute types 1-6, and the encoder is adapted to encode data relating to the attribute to the corresponding occupied point found using the geometry node 312 and would assign values 1-6 based on the type 1-6 of encoding scheme utilized; paragraphs [0060-0062]; figs 3a-3b), and the second information includes a second value indicating the second encoding scheme used in the encoding (when the selected attribute type 1-6 of the attribute node 314 is selected, the encoder 301/302 encodes the selected attribute type to the occupied point of the point cloud data; paragraphs [0060-0062]; figs 3a-3b). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify YANO to have the information include a value indicating the encoding scheme used in the encoding of CHOU reference. The Suggestion/motivation for doing so would have been to provide a plurality of attribute data types to provide significantly more data about each identified occupied point of the point cloud data, as suggested by paragraph [0062] of CHOU. 
Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine CHOU with YANO to obtain the invention as specified in claim 22. Claims 5, 7, 12, and 14 are rejected under 35 U.S.C. § 103 as being obvious over US 2021/0027505 A1 to YANO et al. (hereinafter “YANO”) in view of US 2017/0347122 A1 to CHOU et al. (hereinafter “CHOU”) in further view of US 2021/0297681 A1 to CLUCAS et al. (hereinafter “CLUCAS”). As per claim 5, YANO in view of CHOU discloses the three-dimensional data encoding method according to claim 4. Modified YANO fails to disclose wherein the one or more units have a same format commonly applied to the plurality of encoding schemes, and the one or more units each include information indicating a type of data included in the unit, the information having different definitions independently applied to the plurality of encoding schemes. 
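The unit structure recited in claim 5 can be pictured with a short hypothetical sketch: every unit shares one header layout (the same format across schemes), while the numeric type field is interpreted under a per-scheme table (different definitions per scheme). All names below are illustrative assumptions, not drawn from any cited reference:

```python
# Hypothetical illustration of the claimed unit structure: every unit shares
# one header layout (same format across schemes), while the meaning of the
# numeric "type" field is looked up in a per-scheme table (different
# definitions independently applied to each encoding scheme).

TYPE_TABLES = {
    "scheme_A": {0: "geometry", 1: "attribute"},
    "scheme_B": {0: "attribute", 1: "geometry", 2: "metadata"},
}


def make_unit(data_type: int, payload: bytes) -> bytes:
    """Common format: a 1-byte type code followed by the payload."""
    return bytes([data_type]) + payload


def parse_unit(unit: bytes, scheme: str) -> tuple:
    """Identical parsing for every scheme; only the type's meaning differs."""
    type_code, payload = unit[0], unit[1:]
    return TYPE_TABLES[scheme][type_code], payload
```

Under this sketch the same byte sequence parses identically everywhere, but type code 0 means "geometry" under scheme_A and "attribute" under scheme_B, which is the distinction the claim language draws.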
CLUCAS discloses wherein the one or more units have a same format commonly applied to the plurality of encoding schemes (the system performs a method to receive a first network abstraction layer “NAL” unit, parse the first NAL unit to obtain an encoded bitstream containing encoded information associated with an original signal, and decode said encoded bitstream to obtain decoded information to reconstruct the signal; after the signal is reconstructed, receiving a second NAL unit and parsing the second NAL unit according to a base coding standard for video to obtain a second encoded bitstream associated with the base encoded information and using the base coding standard; figs 1 and 7; paragraphs [0012-0014], [0018]), and the one or more units each include information indicating a type of data included in the unit (the first NAL unit comprises supplemental enhancement information “SEI” as a payload/output; ideally the output/payload is a user data unregistered type of the base coding standard; figs 1 and 7; paragraphs [0012-0014], [0018]), the information having different definitions independently applied to the plurality of encoding schemes (there may be defined a NAL unit for aspects of the invention described herein which is specifically defined and configured to comprise enhancement information, and each frame of the video may be defined by a combination of NAL units (LCEVC+base) in an independent Access Unit; the method may generate a base encoded stream, a first level encoded stream, and a second level encoded stream according to the above defined example methods, and each of the first level encoded stream and the second level encoded stream may contain enhancement data used by a decoder to enhance the encoded base stream; figs 1 and 7; paragraphs [0132], [0253]). 
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to further modify YANO to have a same format commonly applied to the plurality of encoding schemes of CLUCAS reference. The Suggestion/motivation for doing so would have been to provide the ability for each of the first level encoded stream and the second level encoded stream to contain enhancement data used by a decoder to enhance the encoded base stream as suggested by CLUCAS at paragraph [0253]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine CLUCAS with YANO to obtain the invention as specified in claim 5. As per claim 7, YANO in view of CHOU discloses the three-dimensional data encoding method according to claim 4. Modified YANO fails to disclose wherein the one or more units have a same format commonly applied to the plurality of encoding schemes, and the one or more units each include information indicating a type of data included in the unit, the information having a same definition commonly applied to the plurality of encoding schemes. 
CLUCAS discloses wherein the one or more units have a same format commonly applied to the plurality of encoding schemes (the system performs a method to receive a first network abstraction layer “NAL” unit, parse the first NAL unit to obtain an encoded bitstream containing encoded information associated with an original signal, and decode said encoded bitstream to obtain decoded information to reconstruct the signal; after the signal is reconstructed, receiving a second NAL unit and parsing the second NAL unit according to a base coding standard for video to obtain a second encoded bitstream associated with the base encoded information and using the base coding standard; figs 1 and 7; paragraphs [0012-0014], [0018]), and the one or more units each include information indicating a type of data included in the unit (the first NAL unit comprises supplemental enhancement information “SEI” as a payload/output; ideally the output/payload is a user data unregistered type of the base coding standard; figs 1 and 7; paragraphs [0012-0014], [0018]), the information having a same definition commonly applied to the plurality of encoding schemes (there may be defined a NAL unit for aspects of the invention described herein which is specifically defined and configured to comprise enhancement information, and each frame of the video may be defined by a combination of NAL units (LCEVC+base) in an independent Access Unit; the method may generate a base encoded stream, a first level encoded stream, and a second level encoded stream according to the above defined example methods, and each of the first level encoded stream and the second level encoded stream may contain enhancement data used by a decoder to enhance the encoded base stream; figs 1 and 7; paragraphs [0132], [0253]). 
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to further modify YANO to have a same format commonly applied to the plurality of encoding schemes of CLUCAS reference. The Suggestion/motivation for doing so would have been to provide the ability for each of the first level encoded stream and the second level encoded stream to contain enhancement data used by a decoder to enhance the encoded base stream as suggested by CLUCAS at paragraph [0253]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine CLUCAS with YANO to obtain the invention as specified in claim 7. As per claim 12, YANO in view of CHOU discloses the three-dimensional data decoding method according to claim 11. Modified YANO fails to disclose wherein the one or more units have a same format commonly applied to the plurality of encoding schemes, and the one or more units each include information indicating a type of data included in the unit, the information having different definitions independently applied to the plurality of encoding schemes. 
CLUCAS discloses wherein the one or more units have a same format commonly applied to the plurality of encoding schemes (the system performs a method to receive a first network abstraction layer “NAL” unit, parse the first NAL unit to obtain an encoded bitstream containing encoded information associated with an original signal, and decode said encoded bitstream to obtain decoded information to reconstruct the signal; after the signal is reconstructed, receiving a second NAL unit and parsing the second NAL unit according to a base coding standard for video to obtain a second encoded bitstream associated with the base encoded information and using the base coding standard; figs 1 and 7; paragraphs [0012-0014], [0018]), and the one or more units each include information indicating a type of data included in the unit (the first NAL unit comprises supplemental enhancement information “SEI” as a payload/output; ideally the output/payload is a user data unregistered type of the base coding standard; figs 1 and 7; paragraphs [0012-0014], [0018]), the information having different definitions independently applied to the plurality of encoding schemes (there may be defined a NAL unit for aspects of the invention described herein which is specifically defined and configured to comprise enhancement information, and each frame of the video may be defined by a combination of NAL units (LCEVC+base) in an independent Access Unit; the method may generate a base encoded stream, a first level encoded stream, and a second level encoded stream according to the above defined example methods, and each of the first level encoded stream and the second level encoded stream may contain enhancement data used by a decoder to enhance the encoded base stream; figs 1 and 7; paragraphs [0132], [0253]). 
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to further modify YANO to have a same format commonly applied to the plurality of encoding schemes of CLUCAS reference. The Suggestion/motivation for doing so would have been to provide the ability for each of the first level encoded stream and the second level encoded stream to contain enhancement data used by a decoder to enhance the encoded base stream as suggested by CLUCAS at paragraph [0253]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine CLUCAS with YANO to obtain the invention as specified in claim 12. As per claim 14, YANO in view of CHOU discloses the three-dimensional data decoding method according to claim 11. Modified YANO fails to disclose wherein the one or more units have a same format commonly applied to the plurality of encoding schemes, and the one or more units each include information indicating a type of data included in the unit, the information having a same definition commonly applied to the plurality of encoding schemes. 
CLUCAS discloses wherein the one or more units have a same format commonly applied to the plurality of encoding schemes (the system performs a method to receive a first network abstraction layer “NAL” unit, parse the first NAL unit to obtain an encoded bitstream containing encoded information associated with an original signal, and decode said encoded bitstream to obtain decoded information to reconstruct the signal; after the signal is reconstructed, receiving a second NAL unit and parsing the second NAL unit according to a base coding standard for video to obtain a second encoded bitstream associated with the base encoded information and using the base coding standard; figs 1 and 7; paragraphs [0012-0014], [0018]), and the one or more units each include information indicating a type of data included in the unit (the first NAL unit comprises supplemental enhancement information “SEI” as a payload/output; ideally the output/payload is a user data unregistered type of the base coding standard; figs 1 and 7; paragraphs [0012-0014], [0018]), the information having a same definition commonly applied to the plurality of encoding schemes (there may be defined a NAL unit for aspects of the invention described herein which is specifically defined and configured to comprise enhancement information, and each frame of the video may be defined by a combination of NAL units (LCEVC+base) in an independent Access Unit; the method may generate a base encoded stream, a first level encoded stream, and a second level encoded stream according to the above defined example methods, and each of the first level encoded stream and the second level encoded stream may contain enhancement data used by a decoder to enhance the encoded base stream; figs 1 and 7; paragraphs [0132], [0253]). 
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify YANO to have a same format commonly applied to the plurality of encoding schemes of CLUCAS reference. The Suggestion/motivation for doing so would have been to provide the ability for each of the first level encoded stream and the second level encoded stream to contain enhancement data used by a decoder to enhance the encoded base stream, as suggested by CLUCAS at paragraph [0253]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine CLUCAS with YANO to obtain the invention as specified in claim 14. Claims 6 and 13 are rejected under 35 U.S.C. § 103 as being obvious over US 2021/0027505 A1 to YANO et al. (hereinafter “YANO”) in view of US 2017/0347122 A1 to CHOU et al. (hereinafter “CHOU”) in further view of US 2013/0297574 A1 to Thiyanaratnam (hereinafter “Thiyanaratnam”). As per claim 6, YANO in view of CHOU discloses the three-dimensional data encoding method according to claim 4. Modified YANO fails to disclose wherein the one or more units have different formats independently applied to the plurality of encoding schemes, and the one or more units each include information indicating a type of data included in the unit, the information having different definitions independently applied to the plurality of encoding schemes. 
Thiyanaratnam discloses wherein the one or more units have different formats independently applied to the plurality of encoding schemes (data compression device 560 is included in the encoding/decoding system and includes a data managing unit 561 to transform point cloud data into different formats that are easier to store and transmit, and a grid generation unit 562 configured to generate a three-dimensional grid pattern and a corresponding two-dimensional grid pattern to store the data (which has different formats, 2D vs. 3D) transformed from the point cloud image data; paragraphs [0058-0059]), and the one or more units each include information indicating a type of data included in the unit (data managing unit 561 includes computing device 5611 to assign binary digits to the gridded voxels, where a “1” is assigned to gridded voxels containing data points and a “0” is assigned to those gridded voxels not containing data points, generating a plurality of binary strings shown in fig 3b, and computing device 5611 is adapted to convert regular binary strings into modified binary strings representing the repeating times of each binary digit; fig 3b; paragraphs [0058-0059]), the information having different definitions independently applied to the plurality of encoding schemes (the bitstream starts with the header buffer (A3DMC_stream_header), which contains all the necessary information for decoding the compressed stream: whether there is any repetitive structure in the original model; the 3D model compression method used for compressing patterns and other parts; and, if necessary, whether the “grouped instance transformation mode” (option A) or the “separate instance transformation mode” (option B) is used in this bitstream; the bitstream definition includes both of the above two options A and B, and the user, or an automatic control, can choose the one which fits their one or more applications better; paragraphs [0032-0034]). 
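The occupancy-string conversion cited from Thiyanaratnam (a binary digit per gridded voxel, then a modified string recording the repeating times of each digit) amounts to run-length encoding. A minimal hypothetical sketch, with all function names illustrative rather than taken from the reference:

```python
# Minimal run-length sketch of the cited occupancy-string conversion:
# voxels map to "1" (contains data points) / "0" (empty), and the regular
# binary string is converted into (digit, repeat-count) runs. Names are
# illustrative only, not drawn from Thiyanaratnam.
from itertools import groupby


def occupancy_string(voxels) -> str:
    """'1' for a gridded voxel containing data points, '0' otherwise."""
    return "".join("1" if occupied else "0" for occupied in voxels)


def to_runs(bits: str):
    """Modified form: each run of a repeated digit becomes (digit, length)."""
    return [(digit, len(list(group))) for digit, group in groupby(bits)]
```

For example, an occupancy string "110001" would reduce to three runs, which is the sense in which the modified strings "represent the repeating times of each binary digit."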
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to modify YANO to have the one or more units have different formats independently applied to the plurality of encoding schemes, and the one or more units each include information indicating a type of data included in the unit of Thiyanaratnam reference. The Suggestion/motivation for doing so would have been to provide the ability to monitor the content of the binary string before counting the repeating times of the string as suggested by Thiyanaratnam at paragraph [0058]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Thiyanaratnam with YANO to obtain the invention as specified in claim 6. As per claim 13, YANO in view of CHOU discloses the three-dimensional data decoding method according to claim 11. Modified YANO fails to disclose wherein the one or more units have different formats independently applied to the plurality of encoding schemes, and the one or more units each include information indicating a type of data included in the unit, the information having different definitions independently applied to the plurality of encoding schemes. 
Thiyanaratnam discloses wherein the one or more units have different formats independently applied to the plurality of encoding schemes (data compression device 560 is included in the encoding/decoding system and includes a data managing unit 561 to transform point cloud data into different formats that are easier to store and transmit, and a grid generation unit 562 configured to generate a three-dimensional grid pattern and a corresponding two-dimensional grid pattern to store the data (which has different formats, 2D vs. 3D) transformed from the point cloud image data; paragraphs [0058-0059]), and the one or more units each include information indicating a type of data included in the unit (data managing unit 561 includes computing device 5611 to assign binary digits to the gridded voxels, where a “1” is assigned to gridded voxels containing data points and a “0” is assigned to those gridded voxels not containing data points, generating a plurality of binary strings shown in fig 3b, and computing device 5611 is adapted to convert regular binary strings into modified binary strings representing the repeating times of each binary digit; fig 3b; paragraphs [0058-0059]), the information having different definitions independently applied to the plurality of encoding schemes (the bitstream starts with the header buffer (A3DMC_stream_header), which contains all the necessary information for decoding the compressed stream: whether there is any repetitive structure in the original model; the 3D model compression method used for compressing patterns and other parts; and, if necessary, whether the “grouped instance transformation mode” (option A) or the “separate instance transformation mode” (option B) is used in this bitstream; the bitstream definition includes both of the above two options A and B, and the user, or an automatic control, can choose the one which fits their one or more applications better; paragraphs [0032-0034]). 
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify YANO so that the one or more units have different formats independently applied to the plurality of encoding schemes, and the one or more units each include information indicating a type of data included in the unit, as taught by the Thiyanaratnam reference. The suggestion/motivation for doing so would have been to provide the ability to monitor the content of the binary string before counting the repeating times of the string, as suggested by Thiyanaratnam at paragraph [0058]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Thiyanaratnam with YANO to obtain the invention as specified in claim 13.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVIN JACOB DHOOGE, whose telephone number is (571) 270-0999. The examiner can normally be reached 7:30-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Devin Dhooge/
USPTO Patent Examiner
Art Unit 2677

/ANDREW W BEE/
Supervisory Patent Examiner, Art Unit 2677

Prosecution Timeline

Dec 21, 2020: Application Filed
Sep 29, 2023: Non-Final Rejection (§103)
Jan 04, 2024: Response Filed
Apr 05, 2024: Final Rejection (§103)
Aug 12, 2024: Request for Continued Examination
Aug 15, 2024: Response after Non-Final Action
Nov 01, 2024: Non-Final Rejection (§103)
Feb 06, 2025: Response Filed
May 15, 2025: Final Rejection (§103)
Aug 18, 2025: Request for Continued Examination
Aug 27, 2025: Response after Non-Final Action
Sep 24, 2025: Non-Final Rejection (§103)
Dec 23, 2025: Response Filed
Feb 26, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602773: Deep-Learning-based T1-Enhanced Selection of Linear Coefficients (DL-TESLA) for PET/MR Attenuation Correction (granted Apr 14, 2026; 2y 5m to grant)
Patent 12579780: HYPERSPECTRAL TARGET DETECTION METHOD OF BINARY-CLASSIFICATION ENCODER NETWORK BASED ON MOMENTUM UPDATE (granted Mar 17, 2026; 2y 5m to grant)
Patent 12524982: NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM, VISUALIZATION METHOD AND INFORMATION PROCESSING APPARATUS (granted Jan 13, 2026; 2y 5m to grant)
Patent 12517146: IMAGE-BASED DECK VERIFICATION (granted Jan 06, 2026; 2y 5m to grant)
Patent 12505673: MULTIMODAL GAME VIDEO SUMMARIZATION WITH METADATA (granted Dec 23, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 70% (99% with interview, +42.9% lift)
Median Time to Grant: 3y 5m
PTA Risk: High

Based on 71 resolved cases by this examiner. Grant probability derived from career allow rate.
