Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The amendments to the claims, filed on 10/23/2025, have been entered and made of record.
Claims 1-30 are cancelled.
Claims 31-50 are pending, with claims 31, 42, 49, and 50 being amended.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/23/2025 has been entered.
Examiner’s Note
The instant application has a lengthy prosecution history and the examiner encourages the applicant to have an interview (telephonic or personal) with the examiner prior to filing a response to the instant office action. Also, prior to the interview the examiner encourages the applicant to present multiple possible claim amendments, so as to enable the examiner to identify claim amendments that will advance prosecution in a meaningful manner.
Response to Arguments
Arguments presented in the Remarks (“Remarks”) filed on 10/23/2025 have been fully considered, but are rendered moot in view of the new ground(s) of rejection necessitated by amendment(s) initiated by the applicant(s).
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
Claims 31, 42, 49 and 50 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1, 3 and 4 of U.S. Patent No. 11,290,745 (U.S. Patent Application No. 15/996,710). Although the conflicting claims are not identical, they are not patentably distinct from each other because the claims of the U.S. Patent recite limitations similar to those claimed in the instant application. Table 1 shows a comparison between the instant claims and the U.S. Patent claims.
This is a non-provisional obviousness-type double patenting rejection because the conflicting claims have in fact been patented.
Table 1: Comparison of claims in instant Application No. 17/674,380 vs. U.S. Patent No. 11,290,745 (Application No. 15/996,710)
Claims: Application 17674380
Claims: Application 15996710 (US Pat. 11,290,745)
31. A three-dimensional data encoding method for encoding three-dimensional data, the three-dimensional data encoding method comprising:
dividing the three-dimensional data into first processing units, each of the first processing units being associated with three-dimensional coordinates;
dividing each of the first processing units into second processing units, each of the second processing units indicating a three-dimensional position; and
encoding each of the second processing units generated by dividing each of the first processing units, to generate encoded data,
wherein in the encoding of a current second processing unit among the second processing units included in a current first processing unit among the first processing units, predictive three-dimensional data analogous to the current second processing unit is generated by referring to another of the second processing units included in the current first processing unit, and the current second processing unit is encoded based on a differential between the predictive three-dimensional data and the current second processing unit, [Note: the limitation “the current second processing unit is generated by referring to another of the second processing units …” is rendered obvious in view of the US 103 rejection below]
in the dividing of the three-dimensional data, a three-dimensional space corresponding to the three-dimensional data is divided into first three-dimensional spaces, each of the first three-dimensional spaces corresponding to one of the first processing units, and
in the dividing of each of the first processing units, each of the first processing units is divided into second three-dimensional spaces, each of the second three-dimensional spaces
corresponding to one of the second processing units.
1. A point cloud encoding method for encoding a point cloud to generate an encoded stream, the point cloud encoding method comprising:
dividing the point cloud into first processing units, each of the first processing units being a random access unit and being associated with three-dimensional coordinates;
encoding the first processing units to generate encoded data items, each of the encoded data items corresponding to a respective one of the first processing units; and
generating first information indicating (i) the first processing units, (ii) the three-dimensional coordinates associated with each of the first processing units, and (iii) data storage locations, each of the data storage locations being associated with a respective one of the first processing units, and the data storage locations each being a location where an encoded data item corresponding to a first processing unit associated with the data storage location among the first processing units is to be stored,
wherein the encoded stream includes the first information and the encoded data items.
3. The point cloud encoding method according to claim 1, wherein in the dividing, each of the first processing units is further divided into second processing units, and in the encoding, each of the second processing units is encoded.
4. The point cloud encoding method according to claim 3, wherein in the encoding, a current second processing unit among the second processing units included in a current first processing unit among the first processing units is encoded by referring to another of the second processing units included in the current first processing unit.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 31-33, 35, 37-40, 42, 44-45, 47 and 49-50 are rejected under 35 U.S.C. 103 as being unpatentable over Huffman (“Huffman”) [US 2011/0142321 A1] in view of Lee (“Lee”) [US 2005/0180340 A1] and further in view of Lees et al. (“Lees”) [US 2002/0172401 A1]
Regarding claim 31, Huffman meets the claim limitations as follows:
A three-dimensional data encoding method for encoding (i.e. ‘compression’) three-dimensional data (e.g. ‘10’ or ‘each voxel’) [Fig. 5; para. 0010, 0036: ‘each coefficient voxel corresponds to a defined three-dimensional spatial region of the volumetric image’; ‘the source volumetric data set 10’], the three-dimensional data encoding method comprising:
dividing (i.e. ‘XY Transform’) the three-dimensional data (i.e. ‘the source volumetric data set 10’) into first processing units [Fig. 5, 6: ‘510’: LL, LH, HL, HH, …; para. 0036, 0044-0046: ‘an in-plane wavelet transform 500’ into ‘510’], each of the first processing units being associated with three-dimensional coordinates (i.e., x, y, z) [Figs. 2-8, 9-16; para. 0028-0031, 0063: disclosing ‘x in a horizontal direction, y in a vertical direction, and z in an axial or depth direction’ as a voxel; ‘the spatial coordinates <x>, <y>, <z> of the received voxels’];
dividing each of the first processing units (i.e. each of ‘510’) into second processing units (i.e. octants ‘30’, e.g. LLL, LHL, HLL, HHL, ..HHH) [Fig. 5, 6, 7; para. 0046-0056, 0063], each of the second processing units (i.e. voxels) [para. 0063: ‘the spatial coordinates <x>, <y>, <z> of the received voxels’] indicating a three-dimensional position; and
encoding each of the second processing units (i.e. each of ‘30’) generated by dividing each of the first processing units [Fig. 5, 6; para. 0046: ‘the same or different transforms 500, 520 are applied’; ‘This recursion continues until a level … reaches a predetermined size’], to generate encoded data [Fig. 8; para. 0057-0058: ‘performs compression on the voxels 710’],
wherein in the encoding of a current second processing unit (i.e. each of ‘30’) among the second processing units [Fig. 5, 6] included in a current first processing unit among the first processing units, predictive three-dimensional data analogous to the current second processing unit is generated [para. 0057: ‘optionally performs compression on the voxels 710 to facilitate efficient transfer of image data’] by referring to another of the second processing units included in the current first processing unit, and the current second processing unit is encoded based on a differential (i.e. residual) between the predictive three-dimensional data (i.e. ‘image reconstructed from the compressed coefficients’) and the current second processing unit (i.e. ‘the actual image’) [para. 0058: ‘the “residual” information’; ‘the residual information can be computed, for example by subtracting the actual image from the … image reconstructed from the compressed coefficients’],
in the dividing of the three-dimensional data, a three-dimensional space (i.e. x, y, z space) [Figs. 2, 5; para. 0027 disclose ‘the source volumetric data set 10 as a stack of image slices 220’] corresponding to the three-dimensional data (i.e. ‘the source volumetric data set 10’) [Figs. 2, 5; para. 0027, 0029 disclose ‘the source volumetric data set 10 as a stack of image slices 220’] is divided into first three-dimensional spaces, each of the first three-dimensional spaces corresponding to one of the first processing units [Fig. 5, 6: ‘510’: LL, LH, HL, HH, …; para. 0036, 0044-0046: ‘an in-plane wavelet transform 500’ into ‘510’], and
in the dividing of each of the first processing units [Fig. 5, 6: ‘510’: LL, LH, HL, HH, …; para. 0036, 0044-0046: ‘an in-plane wavelet transform 500’ into ‘510’], each of the first processing units is divided into second three-dimensional spaces (i.e. x, y, z space) [Fig. 5, 6, 7], each of the second three-dimensional spaces corresponding to one of the second processing units (i.e. octants ‘30’, e.g. LLL, LHL, HLL, HHL, ..HHH) [Fig. 5, 6, 7].
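For illustration only (not part of the record, and using hypothetical names), the two-level division mapped above — a three-dimensional space divided into first processing units, each of which is in turn divided into second processing units (octants) indicating three-dimensional positions — can be sketched as:

```python
# Illustrative sketch: two-level division of a cubic 3D volume into
# first processing units (blocks keyed by 3D coordinates) and second
# processing units (the eight octants of each block).

def divide_volume(dim, block):
    """Divide a cubic volume of side `dim` into first processing units
    (cubes of side `block`), each associated with 3D coordinates."""
    first_units = []
    for x in range(0, dim, block):
        for y in range(0, dim, block):
            for z in range(0, dim, block):
                first_units.append((x, y, z))
    return first_units

def divide_into_octants(origin, block):
    """Divide one first processing unit into eight second processing
    units (octants), each indicating a three-dimensional position."""
    half = block // 2
    x, y, z = origin
    return [(x + dx, y + dy, z + dz)
            for dx in (0, half)
            for dy in (0, half)
            for dz in (0, half)]

first = divide_volume(16, 8)                 # 2*2*2 = 8 first units
octants = divide_into_octants(first[0], 8)
print(len(first), len(octants))              # 8 8
```

In Huffman's scheme the subdivision recurses until a level reaches a predetermined size; this sketch shows only a single level of recursion.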
Huffman does not disclose explicitly the following claim limitations (emphasis added):
dividing each of the first processing units into second processing units, each of the second processing units (i.e. voxels) indicating a three-dimensional position;
wherein in the encoding of a current second processing unit among the second processing units included in a current first processing unit among the first processing units, predictive three-dimensional data analogous to the current second processing unit is generated by referring to another of the second processing units included in the current first processing unit, and the current second processing unit is encoded based on a differential between the predictive three-dimensional data and the current second processing unit.
However, in the same field of endeavor, Lee discloses the deficient claim limitations as follows:
wherein in the encoding (e.g. ‘DPCM-encoded’) of a current second processing unit (e.g. ‘P’, ‘B’, ‘W’ or ‘E’ nodes) among the second processing units included in a current first processing unit among the first processing units (e.g. ‘S’ node) [Fig. 7, 8, 9 disclose an adaptive octree as ‘a structure in which a root node has eight child nodes, where each of the child nodes may have eight child nodes or leaf nodes’; para. 0060-0062], predictive three-dimensional data [Fig. 7, 8, 9, 14; para. 0061-0062, 0072-0077: ‘the color information of each voxel location is DPCM … encoded and MC-encoded’] analogous to the current second processing unit is generated by referring to another of the second processing units (i.e. ‘a ‘B’ voxel to be encoded are predicted from those of the previous ‘B’ voxel’) [Fig. 14; para. 0072-0077: ‘Next, nodes are sequentially encoded one by one starting from a root node to generate bitstreams’. ‘The bitstream of each of the nodes is composed of SOP (‘S’ or ‘P’)’] included in the current first processing unit, and the current second processing unit is encoded based on a differential between the predictive three-dimensional data and the current second processing unit.
Huffman [Fig. 2, 3, 4] and Lee [Fig. 7-9] are combinable because they are from the same field of coding three-dimensional (3D) video.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Huffman and Lee, with the motivation to include DPCM encoding/decoding for efficiently encoding/decoding 3D data [Lee: para. 0014].
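For illustration only (hypothetical values; not part of the record), the DPCM-style coding mapped to Lee above — predicting a current unit from a previously coded unit and encoding only the differential — can be sketched as:

```python
# Illustrative sketch: DPCM encoding/decoding, in which each unit is
# coded as the differential (residual) from the previous unit, which
# serves as the predictive data.

def dpcm_encode(units):
    """Encode each unit as the differential from the prediction
    (the previous unit); the first unit has no reference."""
    residuals = []
    prediction = 0
    for value in units:
        residuals.append(value - prediction)  # differential to encode
        prediction = value                    # reference for next unit
    return residuals

def dpcm_decode(residuals):
    """Invert the process: add each residual to the running prediction."""
    units, prediction = [], 0
    for r in residuals:
        prediction += r
        units.append(prediction)
    return units

values = [10, 12, 11, 15]
res = dpcm_encode(values)
print(res)                    # [10, 2, -1, 4]
print(dpcm_decode(res))       # [10, 12, 11, 15]
```

The residuals are typically smaller in magnitude than the raw values, which is the efficiency motivation cited from Lee [para. 0014].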
Huffman does not disclose explicitly the following claim limitations (emphasis added):
dividing each of the first processing units into second processing units, each of the second processing units (i.e. voxels) indicating a three-dimensional position;
However, in the same field of endeavor, Lees discloses the deficient claim limitations as follows:
dividing each of the first processing units into second processing units, each of the second processing units (i.e. voxels) indicating a three-dimensional position [para. 0034: ‘a voxel comprises a 3D coordinate location and a data value’];
Huffman and Lees are combinable because they are from the same field of three-dimensional volume data set.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Huffman and Lees, with the motivation to include voxels comprising a 3D coordinate location and a data value, as known by those of skill in the art [Lees: para. 0034].
Regarding claim 32, Huffman meets the claim limitations as follows:
The three-dimensional data encoding method according to claim 31, wherein the encoded data includes first information (i.e. ‘A complete labeling or identification of a single voxel’) [para. 0056, 0073: <x>, <y>, <z>; Lees [para. 0034] also discloses ‘a voxel comprises a 3D coordinate location’] indicating the three-dimensional coordinates associated with each of the first processing units.
Regarding claim 33, Huffman meets the claim limitations as follows:
The three-dimensional data encoding method according to claim 31, wherein the encoded data includes first information indicating at least one of an object (i.e. ‘orthant’) [para. 0073], a time [Fig. 5-7; para. 0073 disclose information of a 4-dimensional voxel: (object: <level>, <orthant>; coordinates: <x>, <y>, <z>; and time: <t>)] and a data storage associated with each of the first processing units.
Regarding claim 35, Huffman meets the claim limitations as follows:
The three-dimensional data encoding method according to claim 31, wherein a size of the first processing units is determined in accordance with the number [para. 0046, 0048, 0055: ‘a level of the three dimensional pyramidal data structure 30 reaches a predetermined size’; ‘A termination condition such as a minimum x, y, z dimension can be used to determine …’; ‘n=m=p=4’], or sparseness and denseness of objects or dynamic objects included in the three-dimensional data.
Regarding claim 37, Huffman meets the claim limitations as follows:
The three-dimensional data encoding method according to claim 31, wherein in the dividing of each of the first processing units, each of the second processing units is further divided into third processing units [Fig. 6, 7: ‘710’], and in the encoding, each of the third processing units is encoded [Fig. 7 shows each of second processing unit 810 divided into third processing units 710; each processing unit 710 is p slices of mxn pixels].
Regarding claim 38, Huffman meets the claim limitations as follows:
The three-dimensional data encoding method according to claim 37, wherein each of the third processing units is a minimum unit [Fig. 6, 7: ‘710’] in which position information is associated [Fig. 7 shows each of second processing unit 810 divided into third processing units 710; each processing unit 710 is p slices of mxn pixels].
Regarding claim 39, Huffman meets the claim limitations as follows:
The three-dimensional data encoding method according to claim 31, wherein the encoded data includes information indicating an encoding order of the first processing units [para. 0056: <level>, ‘where <level> identifies the hierarchical level of the voxel’].
Regarding claim 40, Huffman meets the claim limitations set forth in claim 31.
Huffman does not disclose explicitly the following claim limitations (emphasis added):
The three-dimensional data encoding method according to claim 31, wherein the encoded data includes information indicating a size of the first processing units.
However, in the same field of endeavor, Lee discloses the deficient claim limitations as follows:
wherein the encoded data includes information indicating a size of the first processing units [Fig. 14; para. 0074: ‘the header information is encoded into width, height, and depth’].
Huffman [Fig. 2, 3, 4] and Lee [Fig. 7-9] are combinable because they are from the same field of coding three-dimensional (3D) video.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Huffman and Lee, with the motivation to include DPCM encoding/decoding for efficiently encoding/decoding 3D data [Lee: para. 0014].
Regarding claim 42, all claim limitations are set forth as claim 31 in the form of “A three-dimensional data decoding method” and rejected as per discussion for claim 31.
Regarding claim 44, all claim limitations are set forth as claim 37 in the form of “A three-dimensional data decoding method” and rejected as per discussion for claim 37.
Regarding claim 45, all claim limitations are set forth as claim 38 in the form of “A three-dimensional data decoding method” and rejected as per discussion for claim 38.
Regarding claim 47, all claim limitations are set forth as claim 40 in the form of “A three-dimensional data decoding method” and rejected as per discussion for claim 40.
Regarding claim 49, all claim limitations are set forth as claim 31 in the form of “A three-dimensional data encoding device” and rejected as per discussion for claim 31.
Regarding claim 50, all claim limitations are set forth as claim 42 in the form of “A three-dimensional data decoding device” and rejected as per discussion for claim 42.
Claims 34, 36, 43, and 46 are rejected under 35 U.S.C. 103 as being unpatentable over Huffman in view of Lee, further in view of Lees, and further in view of Chen et al. (“Chen_686”) [US 2013/0038686 A1]
Regarding claim 34, Huffman meets the claim limitations set forth in claim 31.
Huffman does not disclose explicitly the following claim limitations (emphasis added):
The three-dimensional data encoding method according to claim 31, wherein in the encoding, one of three types is selected as a type of the current second processing unit, and the current second processing unit is encoded in accordance with the type that has been selected, the three types being a first type in which another of the second processing units is not referred to, a second type in which another of the second processing units is referred to, and a third type in which other two of the second processing units are referred to.
However, in the same field of endeavor, Chen_686 discloses the deficient claim limitations as follows:
wherein in the encoding, one of three types (i.e. three types of encoding methods: “I”, “P” or “B”) [Fig. 4; para. 0005, 0049, 0056, 0071, 0105: intra-prediction, inter-prediction] is selected as a type of the current second processing unit, and the current second processing unit is encoded in accordance with the type that has been selected, the three types being a first type (i.e. intra-coding “I”) [Fig. 4; para. 0005, 0049, 0056, 0071, 0105: intra-prediction, inter-prediction] in which another of the second processing units is not referred to, a second type (i.e. single-reference inter-coding “P”) [Fig. 4; para. 0005, 0049, 0056, 0071, 0105: intra-prediction, inter-prediction] in which another of the second processing units is referred to, and a third type (i.e. bi-predicted inter-coding “B”) [Fig. 4; para. 0005, 0049, 0056, 0071, 0105: intra-prediction, inter-prediction] in which other two of the second processing units are referred to.
Huffman [Fig. 2, 3, 4] and Chen_686 [Fig. 4, 5A, 7A, 7B] are combinable because they are from the same field of coding three-dimensional video in multiple views.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Huffman and Chen_686, with the motivation to include multi-view coding of video three-dimensional units (i.e. voxels, or access units) for reduction or removal of redundancy inherent in video sequences in accordance with the multi-view coding (MVC) extension to the H.264/AVC standard [Chen_686: para. 0004-0006].
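For illustration only (hypothetical names; not part of the record), the three coding types mapped to Chen_686 above — “I” with no reference unit, “P” with one, and “B” with two — can be sketched as a simple type selection:

```python
# Illustrative sketch: selecting one of three coding types for a
# current second processing unit based on how many other units it
# refers to, as in I/P/B prediction.

def select_type(num_references):
    """Map the number of referenced units to a coding type."""
    if num_references == 0:
        return "I"   # first type: no other unit referred to
    if num_references == 1:
        return "P"   # second type: one other unit referred to
    if num_references == 2:
        return "B"   # third type: two other units referred to
    raise ValueError("at most two reference units are supported")

print([select_type(n) for n in (0, 1, 2)])   # ['I', 'P', 'B']
```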
Regarding claim 36, Huffman meets the claim limitations set forth in claim 31.
Huffman does not disclose explicitly the following claim limitations (emphasis added):
The three-dimensional data encoding method according to claim 31, further comprising: determining whether the another of the second processing units is referred to.
However, in the same field of endeavor, Chen_686 discloses the deficient claim limitations as follows:
further comprising: determining whether the another of the second processing units is referred to [para. 0037: ‘further indicate whether a view is predicted relative to another view of the same resolution’].
Huffman [Fig. 2, 3, 4] and Chen_686 [Fig. 4, 5A, 7A, 7B] are combinable because they are from the same field of coding three-dimensional video in multiple views.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Huffman and Chen_686, with the motivation to include multi-view coding of video three-dimensional units (i.e. voxels, or access units) for reduction or removal of redundancy inherent in video sequences in accordance with the multi-view coding (MVC) extension to the H.264/AVC standard [Chen_686: para. 0004-0006].
Regarding claim 43, all claim limitations are set forth as claim 36 in the form of “A three-dimensional data decoding method” and rejected as per discussion for claim 36.
Regarding claim 46, Huffman meets the claim limitations set forth in claim 42.
Huffman does not disclose explicitly the following claim limitations (emphasis added):
The three-dimensional data decoding method according to claim 42, wherein the encoded data includes information indicating a decoding order of the first processing units.
However, in the same field of endeavor, Chen_686 discloses the deficient claim limitations as follows:
wherein the encoded data includes information indicating a decoding order of the first processing units [para. 0109: ‘an index that indicates the decoding order of view component in an access unit’].
Huffman [Fig. 2, 3, 4] and Chen_686 [Fig. 4, 5A, 7A, 7B] are combinable because they are from the same field of coding three-dimensional video in multiple views.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Huffman and Chen_686, with the motivation to include multi-view coding of video three-dimensional units (i.e. voxels, or access units) for reduction or removal of redundancy inherent in video sequences in accordance with the multi-view coding (MVC) extension to the H.264/AVC standard [Chen_686: para. 0004-0006].
Claims 41 and 48 are rejected under 35 U.S.C. 103 as being unpatentable over Huffman in view of Lee, further in view of Lees, and further in view of Watanabe et al. (“Watanabe”) [US 2004/0109679 A1]
Regarding claim 41, Huffman meets the claim limitations set forth in claim 31.
Huffman does not disclose explicitly the following claim limitations (emphasis added):
The three-dimensional data encoding method according to claim 31, wherein in the encoding, the first processing units are encoded in parallel.
However, in the same field of endeavor, Watanabe discloses the deficient claim limitations as follows:
wherein in the encoding, the first processing units are encoded in parallel [Fig. 1: first data block 22 into first encoder 44; para. 0025: ‘The first encoder 44, second encoder 46 and third encoder 48 encode, in parallel, the image data stored in the first data block 22, second data block 24 and third data block, respectively’].
Huffman, Lee, Lees and Watanabe are combinable because they are from the same field of video coding.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Huffman, Lee, Lees and Watanabe, with the motivation to include parallel encoding for reduction of the processing time [Watanabe: para. 0004].
Regarding claim 48, all claim limitations are set forth as claim 41 in the form of “A three-dimensional data decoding method” and rejected as per discussion for claim 41.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See form 892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER D LE whose telephone number is (571)270-5382. The examiner can normally be reached on Monday - Alternate Friday: 10AM-6:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SATH PERUNGAVOOR can be reached on 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PETER D LE/
Primary Examiner, Art Unit 2488