Prosecution Insights
Last updated: April 19, 2026
Application No. 17/154,589

THREE-DIMENSIONAL DATA ENCODING METHOD, THREE-DIMENSIONAL DATA DECODING METHOD, THREE-DIMENSIONAL DATA ENCODING DEVICE, AND THREE-DIMENSIONAL DATA DECODING DEVICE

Final Rejection §103
Filed: Jan 21, 2021
Examiner: HESS, MICHAEL J
Art Unit: 2481
Tech Center: 2400 (Computer Networks)
Assignee: Panasonic Intellectual Property Corporation of America
OA Round: 10 (Final)
Grant Probability: 44% (Moderate)
Expected OA Rounds: 11-12
Time to Grant: 3y 1m
With Interview: 52%

Examiner Intelligence

Career Allow Rate: 44% (grants 183 of 418 resolved cases; -14.2% vs TC avg)
Interview Lift: +7.7% (52% with interview vs 44% without; moderate)
Typical Timeline: 3y 1m avg prosecution
Career History: 484 total applications across all art units; 66 currently pending
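The headline figures above can be reproduced with simple arithmetic. A quick sketch of the check (the rounding conventions are assumptions inferred from the displayed numbers; the page appears to round 43.8% up to 44% and 51.5% up to 52%):

```python
granted, resolved = 183, 418

career_allow = granted / resolved               # 0.4378... -> displayed as 44%
interview_lift = 0.077                          # displayed as "+7.7% Interview Lift"
with_interview = career_allow + interview_lift  # 0.5148 -> displayed as 52%

print(f"career allow rate: {career_allow:.1%}")   # 43.8%
print(f"with interview:    {with_interview:.1%}")  # 51.5%
```

The -14.2% "vs TC avg" figure then implies a Tech Center average around 58%, though that base rate is not shown on the page.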

Statute-Specific Performance

§101: 4.6% (-35.4% vs TC avg)
§103: 56.8% (+16.8% vs TC avg)
§102: 10.3% (-29.7% vs TC avg)
§112: 20.8% (-19.2% vs TC avg)
Based on career data from 418 resolved cases; "vs TC avg" compares against the Tech Center average estimate.

Office Action

§103
DETAILED ACTION

This action is responsive to the Amendments and Remarks received 08/13/2025, in which claims 2–7, 11–16, and 21–26 are cancelled; claims 1, 10, 19, and 20 are amended; and no new claims are added.

Response to Arguments

Examiner incorporates herein the previous Responses to Arguments. On page 8 of the Remarks, Applicant contends the prior art is deficient for failing to teach or suggest slice and tile identifiers. Examiner disagrees. Examiner finds the argument that slice and tile identifiers are novel belied by the evidence of record. First, it was well known in the art to utilize state-of-the-art video compression standards to code point cloud or 3D data. See the evidence under the Conclusion section of this Office Action establishing that, for example, it was well known to use H.264 to compress point cloud data. The skilled artisan knows that such video compression standards utilize slice and tile partitioning to segment image/video data into manageable pieces. Second, the rejection relies on Hattori to teach or suggest identifying slices and tiles by their identifiers. Specifically, Hattori teaches utilizing slice and tile index boxes, or otherwise using slice and tile indexes or header information, to identify slices and tiles. See rejection, infra. Third, additional prior art was easily found that evidences that slice and tile identifiers are a generic feature of coding slices and tiles in conventional image/video coding schemes. For example, Maycotte (US 2018/0034824 A1) teaches “tile ID” and “slice ID” in paragraph [0096]. Wozniak (US 2016/0353128 A1) teaches, in paragraph [0043], that region identifiers can be accomplished using “a slice or tile identifier.” Yamamoto (US 2016/0286235 A1) teaches tile and slice identifiers in paragraphs [0251] and [0259], respectively. Lee (US 2015/0350645 A1) teaches slice identifiers and tile identifiers in paragraph [0118] and Claim 1, respectively.
Finally, Lim (US 2019/0215516 A1) teaches, in paragraph [0119], that slice identification information and tile identification information are common coding parameters. The foregoing list of additional references establishes overwhelming evidence that slice and tile identifiers, used to identify and position slices and tiles for 3D data, were in the possession of the skilled artisan before Applicant’s effective filing date. Because Applicant’s claims have been demonstrated by a preponderance of the evidence to lack any novel or nonobvious feature, Applicant’s argument is unpersuasive of patentability. Accordingly, rejection of Applicant’s claims under 35 U.S.C. 103 is proper. Other claims are not argued separately. Remarks, 8–9.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office Action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 8–10, 17–20, 27, 28, 30, 31, 33, and 34 are rejected under 35 U.S.C. 103 as being unpatentable over Hannuksela (US 2020/0288171 A1), Agarwal (US 2019/0179022 A1), Dore (US 2019/0371051), Aflaki Beni (US 2021/0192796 A1), and Hattori (US 2015/0201202 A1). In view of Applicant’s Figs. 
54 and 55, Examiner interprets the invention as drawn to a vehicle having a LIDAR device installed thereon which acquires point cloud data and, when there is an overpass, recognizes that the point cloud data has structure both above and below the overpass, such that the overlap from a top-down view can be machine-recognized as an overpass.

Regarding claim 1, the combination of Hannuksela, Agarwal, Dore, Aflaki Beni, and Hattori teaches or suggests a three-dimensional data encoding method, comprising: generating slices each including (i) encoded geometry data generated by encoding a portion of geometry data included in three-dimensional data and (ii) encoded attribute data generated by encoding a portion of attribute data included in the three-dimensional data (Aflaki Beni, ¶ 0004: teaches volumetric data can represent geometry data and attribute data; Aflaki Beni, ¶ 0140: teaches auxiliary data can include coded slices; Hannuksela, ¶¶ 0107–0108: teaches video data partitioned into slices as independently decodable pieces often regarded as elementary units for transmission); generating tiles each including information which defines a boundary box corresponding to one of spaces divided from a current space corresponding to the three-dimensional data (Hannuksela, ¶¶ 0005, 0112–0117, and 0233: teach that encoding 3D video content can be accomplished using state-of-the-art video compression standards wherein slices and tiles, being a way to partition 2D images, are also a way of partitioning 3D space); and generating a bitstream which includes the slices and tiles (Hannuksela, ¶ 0112: teaches tiles and slices are included in the bitstream), wherein the tiles are allowed to overlap each other (Hannuksela, ¶ 0233: teaches tiles for 3D video are allowed to be partly overlapping; Dore, ¶ 0098: teaches partitioning three-dimensional point cloud data into overlapping parts or spaces), the bitstream includes first information indicating shape information including 
at least one of a height, a width, a depth, or a radius of each of the tiles (Agarwal, ¶ 0011: teaches the volumes may have a rectangular shape, suggesting other shapes are within the level of skill in the art; Examiner notes height, width, depth, and radius parameters are size parameters; Agarwal, ¶¶ 0043–0045: teach volumes and subvolumes defined by height, width, and depth; Agarwal, ¶ 0052: teaches circular subvolumes defined by diameter), the bitstream includes second information indicating a position of each of the tiles (Aflaki Beni, ¶ 0004: teaches partitioned volumetric video data includes three-dimensional position information; Aflaki Beni, ¶ 0093: explains position information for a source volume of three-dimensional data is provided in the bitstream; Examiner notes partitioned video or image content, indicated by coordinate information, is well-established in the art; Agarwal, ¶ 0044: teaches volumes can be identified by location information), the bitstream includes (i) first identifiers each identifying one of the slices and (ii) second identifiers each identifying one of the tiles (Aflaki Beni, ¶¶ 0178–0179: teaches ROI regions which are coded with less compression; Examiner notes that 2D and 3D video regions are the result of partitioning and grouping portions of the video together for common processing; Examiner finds such regions having common coding characteristics would belong to partitioned slices and/or tiles which would need to be identified; In order to expedite prosecution, Examiner cites to Hattori, ¶¶ 0227, 0243, and 0246, which teaches identifying a region of interest (ROI) by using slice and tile identifiers (slice and tile indexes) or otherwise using slice and tile header information to identify the slices and/or tiles; Agarwal, ¶ 0044: teaches volumes can be identified by location information), and each of the encoded geometry data includes (i) one of the first identifiers identifying the one of the slices corresponding to the encoded 
geometry data, and (ii) one of the second identifiers identifying the one of the tiles corresponding to the encoded geometry data (Aflaki Beni, ¶ 0004: teaches volumetric data can represent geometry data and attribute data, wherein the geometry data includes position information in 3D space; Agarwal, ¶ 0044: teaches volumes can be identified by location information; These teachings, combined with Hannuksela’s and Hattori’s teachings that slice and tile data for 3D coding would obviously include slice and tile identifier information, teach or suggest Applicant’s claimed feature that slice and tile identifiers would be part of a coded representation of 3D video/image data).

One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine the elements taught by Hannuksela with those of Agarwal, because both references are drawn to the same field of endeavor, such that one wishing to practice three-dimensional video coding would be led to their relevant teachings, and because combining Agarwal’s 3D point cloud data compression techniques with Hannuksela’s 3D encoding techniques for the purpose of generating a bitstream represents a mere combination of prior art elements, according to known methods, to yield a predictable result. This rationale applies to all combinations of Hannuksela and Agarwal used in this Office Action unless otherwise noted.

One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine the elements taught by Hannuksela and Agarwal with those of Dore, because all three references are drawn to the same field of endeavor, and because combining Agarwal’s 3D point cloud data compression techniques with Hannuksela’s and Dore’s partitioning scheme that yields overlapping tile regions represents a mere combination of prior art elements, according to known methods, to yield a predictable result. 
This rationale applies to all combinations of Hannuksela, Agarwal, and Dore used in this Office Action unless otherwise noted.

One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine the elements taught by Hannuksela, Agarwal, and Dore with those of Aflaki Beni, because all four references are drawn to the same field of endeavor, such that one wishing to practice octree partitioning (e.g. Aflaki Beni, ¶ 0048) of point cloud data would be led to their relevant teachings, and because combining Agarwal’s 3D point cloud data compression techniques with Aflaki Beni’s voxel encoding techniques for the purpose of generating a bitstream represents a mere combination of prior art elements, according to known methods, to yield a predictable result. This rationale applies to all combinations of Hannuksela, Agarwal, Dore, and Aflaki Beni used in this Office Action unless otherwise noted.

One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine the elements taught by Hannuksela, Agarwal, Dore, and Aflaki Beni with those of Hattori, because all five references are drawn to the same field of endeavor, and because combining Aflaki Beni’s ROI coding using slices and tiles (e.g. ¶ 0179) with Hattori’s teaching that ROI partial playback benefits from being able to quickly identify and decode only particular ROI tiles (¶ 0239) represents a mere combination of prior art elements, according to known methods, to yield a predictable result. This rationale applies to all combinations of Hannuksela, Agarwal, Dore, Aflaki Beni, and Hattori used in this Office Action unless otherwise noted. 
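Read literally, the claim-1 limitations quoted above amount to a container layout: tiles carry a bounding box (shape and position information), slices carry encoded geometry and attribute payloads, and both carry identifiers that the bitstream ties together. A toy sketch of that layout (all names and types here are illustrative, drawn neither from the claims' language nor from any codec standard):

```python
from dataclasses import dataclass

@dataclass
class Tile:
    tile_id: int     # "second identifier" identifying this tile
    origin: tuple    # "second information": position of the bounding box (x, y, z)
    size: tuple      # "first information": e.g. width, height, depth (or a radius)

@dataclass
class Slice:
    slice_id: int    # "first identifier" identifying this slice
    tile_id: int     # ties the encoded data back to its tile
    geometry: bytes  # encoded geometry data for a portion of the point cloud
    attributes: bytes  # encoded attribute data (e.g. color, reflectance)

@dataclass
class Bitstream:
    tiles: list      # tiles are allowed to overlap each other
    slices: list
```

The dispute in this round turns on whether carrying `slice_id`/`tile_id` style identifiers in such a structure was an obvious, conventional feature, which is what the Maycotte, Wozniak, Yamamoto, Lee, and Lim citations are marshaled to show.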
Regarding claim 8, the combination of Hannuksela, Agarwal, Dore, Aflaki Beni, and Hattori teaches or suggests the three-dimensional data encoding method according to claim 1, wherein the bitstream includes third information indicating a total number of the tiles (Agarwal, ¶ 0044: teaches defining a certain number of sub-volumes (7) for a given volume; Hattori, ¶¶ 0081–0082: teaches information regarding the number of tile columns and rows; see also Hattori, ¶ 0090: teaching the number of tiles in a picture inserted into the bitstream).

Regarding claim 9, the combination of Hannuksela, Agarwal, Dore, Aflaki Beni, and Hattori teaches or suggests the three-dimensional data encoding method according to claim 1, wherein the bitstream includes third information indicating an interval between the tiles (Examiner notes an interval is just another way to define a size parameter, like the height, width, depth, and radius parameters; Agarwal, ¶¶ 0043–0045: teach volumes and subvolumes defined by height, width, and depth; Agarwal, ¶ 0052: teaches circular subvolumes defined by diameter).

Claim 10 lists essentially the same elements as claim 1, but is drawn to the corresponding decoding method. Therefore, the rationale for the rejection of claim 1 applies to the instant claim. Examiner notes this claim is drawn to the actions of the decoder, which conventionally has as its purpose restoring a current space. Agarwal, ¶ 0054 teaches image reconstruction as a purpose for 3D data compression.

Claim 17 lists essentially the same elements as claim 8, but is drawn to the corresponding decoding method. Therefore, the rationale for the rejection of claim 8 applies to the instant claim. Examiner notes this claim is drawn to the actions of the decoder, which conventionally has as its purpose restoring a current space. Agarwal, ¶ 0054 teaches image reconstruction as a purpose for 3D data compression. 
Claim 18 lists essentially the same elements as claim 9, but is drawn to the corresponding decoding method. Therefore, the rationale for the rejection of claim 9 applies to the instant claim. Examiner notes this claim is drawn to the actions of the decoder, which conventionally has as its purpose restoring a current space. Agarwal, ¶ 0054 teaches image reconstruction as a purpose for 3D data compression.

Claim 19 lists essentially the same elements as claim 1, but is drawn to the corresponding encoding device. Therefore, the rationale for the rejection of claim 1 applies to the instant claim.

Claim 20 lists essentially the same elements as claim 1, but is drawn to the corresponding decoding device. Therefore, the rationale for the rejection of claim 1 applies to the instant claim. Examiner notes this claim is drawn to the actions of the decoder, which conventionally has as its purpose restoring a current space. Agarwal, ¶ 0054 teaches image reconstruction as a purpose for 3D data compression.

Regarding claim 27, the combination of Hannuksela, Agarwal, Dore, Aflaki Beni, and Hattori teaches or suggests the three-dimensional data encoding method according to claim 1, wherein a shape of each of the tiles is a two-dimensional shape or a three-dimensional shape (Agarwal, ¶ 0011: teaches the volumes may have a rectangular shape, suggesting other shapes are within the level of skill in the art; Agarwal, ¶ 0052: teaches circular sub-volumes defined by diameter).

Regarding claim 28, the combination of Hannuksela, Agarwal, Dore, Aflaki Beni, and Hattori teaches or suggests the three-dimensional data encoding method according to claim 1, wherein a shape of each of the tiles is rectangular or circular (Agarwal, ¶ 0011: teaches the volumes may have a rectangular shape, suggesting other shapes are within the level of skill in the art; Agarwal, ¶ 0052: teaches circular sub-volumes defined by diameter). 
Claim 30 lists essentially the same elements as claim 27, but is drawn to the corresponding decoding method. Therefore, the rationale for the rejection of claim 27 applies to the instant claim. Examiner notes this claim is drawn to the actions of the decoder, which conventionally has as its purpose restoring a current space. Agarwal, ¶ 0054 teaches image reconstruction as a purpose for 3D data compression.

Claim 31 lists essentially the same elements as claim 28, but is drawn to the corresponding decoding method. Therefore, the rationale for the rejection of claim 28 applies to the instant claim. Examiner notes this claim is drawn to the actions of the decoder, which conventionally has as its purpose restoring a current space. Agarwal, ¶ 0054 teaches image reconstruction as a purpose for 3D data compression.

Regarding claim 33, the combination of Hannuksela, Agarwal, Dore, Aflaki Beni, and Hattori teaches or suggests the three-dimensional data encoding method according to claim 1, wherein the bitstream includes third information indicating a shape of each of the tiles (Aflaki Beni, ¶ 0004: teaches volumetric data can represent geometry data and attribute data, wherein the geometry data includes the shape of the partitioned 3D space), and the bitstream includes fourth information indicating whether the tiles overlap (Hannuksela, ¶ 0233: teaches overlapping tiles are contemplated as a way to partition 3D data; Dore, ¶ 0098: teaches partitioning three-dimensional point cloud data into overlapping parts or spaces, such that it would be obvious to the skilled artisan to include information regarding the type of partitioning and whether segments overlap in the bitstream).

Claim 34 lists essentially the same elements as claim 33, but is drawn to the corresponding decoding method. Therefore, the rationale for the rejection of claim 33 applies to the instant claim. 
Examiner notes this claim is drawn to the actions of the decoder, which conventionally has as its purpose restoring a current space. Agarwal, ¶ 0054 teaches image reconstruction as a purpose for 3D data compression.

Claims 29 and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Hannuksela, Agarwal, Dore, Aflaki Beni, Hattori, and Shimomura (US 2013/0169794 A1).

Regarding claim 29, the combination of Hannuksela, Agarwal, Dore, Aflaki Beni, Hattori, and Shimomura teaches or suggests the three-dimensional data encoding method according to claim 1, wherein the bitstream includes third information indicating whether a division method used to obtain the tiles is a division method using a top view (Examiner finds the “view” of the data is arbitrary and has no technological significance because it merely represents the projection plane of the three-dimensional data; Examiner finds volumes built in this art use the “depth” dimension as the height when viewed from a top view, which Examiner finds is common in the art; see the Conclusion section of this Action for additional art evidencing this fact; Examiner finds a GPS map in a car is often a top view of the road and is two-dimensional because it is viewed as a projection onto a two-dimensional surface, e.g. the display surface; Agarwal, Fig. 2 and ¶ 0053: teaches the adjacent sub-volumes are built from the bottom up; A top view of the volumes would be obvious since this data is used for vehicle navigation and navigation maps are often top views; As evidence of this fact, Shimomura, e.g. Fig. 10: illustrates a top-down view (i.e. two-dimensional projection plane) of obtained three-dimensional data presented on a vehicle navigation screen; Examiner notes that although not relied upon for this rejection, He, Fig. 2, cited under the Conclusion section of this Office Action, also illustrates a top-down view (projection plane) of a 3D space/volume). 
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine the elements taught by Hannuksela, Agarwal, Dore, Aflaki Beni, and Hattori with those of Shimomura, because all six references are drawn to the same field of endeavor, such that one wishing to practice mapping of a 3D environment would be led to their relevant teachings, and because combining Agarwal’s 3D point cloud data compression techniques with Aflaki Beni’s voxel encoding techniques for the purpose of generating a bitstream, and further using the data to yield Shimomura’s overlapping top-view regions presented to a vehicle and its user, represents a mere combination of prior art elements, according to known methods, to yield a predictable result. This rationale applies to all combinations of Hannuksela, Agarwal, Dore, Aflaki Beni, Hattori, and Shimomura used in this Office Action unless otherwise noted.

Claim 32 lists essentially the same elements as claim 29, but is drawn to the corresponding decoding method. Therefore, the rationale for the rejection of claim 29 applies to the instant claim. Examiner notes this claim is drawn to the actions of the decoder, which conventionally has as its purpose restoring a current space. Agarwal, ¶ 0054 teaches image reconstruction as a purpose for 3D data compression.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Qian et al., “Earth Documentation: Overpass Detection Using Mobile LIDAR,” Proceedings of the 2010 IEEE 17th International Conference on Image Processing, Hong Kong, Sept. 2010. Choudhry (US 2019/0000588 A1) teaches overlapping adjacent subvolumes (¶ 0060). Toma (US 2018/0278956 A1) teaches point cloud compression (¶ 0005) and point groups having shape (¶ 0058). Maurer (US 2016/0127746 A1) teaches encoding point cloud data from a top view (e.g. Fig. 
2) and using a top-down partitioning scheme wherein each layer is a slice (compare Maurer’s Fig. 14 to Applicant’s Fig. 47). Dorn (US 2008/0015784 A1) teaches top-down stratal slices of a point cloud (e.g. ¶¶ 0026, 0044, and 0152). Dore (see above) teaches partitioning a point cloud according to MPEG-DASH and ISOBMFF (see corresponding subject matter in e.g. Applicant’s Figs. 14 and 20). Golla et al., “Real-time Point Cloud Compression,” 2015 IEEE International Conference on Intelligent Robots and Systems, Hamburg, Germany, Oct. 2015. Wu et al., “Voxel-based Marked Neighborhood Searching Method for Identifying Street Trees Using Vehicle-borne Laser Scanning Data,” 2012 Second International Workshop on Earth Observation and Remote Sensing Applications. Haala et al., “Mobile Lidar Mapping for 3D Point Cloud Collection in Urban Areas – A Performance Test” (2008). F. Nenci, L. Spinello, and C. Stachniss, “Effective compression of range data streams for remote robot operations using H.264,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Chicago, USA, 2014 (uses H.264 for point cloud compression). H. Houshiar and A. Nüchter, “3D point cloud compression using conventional image compression for efficient data transmission,” 2015 XXV International Conference on Information, Communication and Automation Technologies (ICAT), Oct. 2015, pp. 1–8. Doemling (US 2021/0248752 A1) teaches segmenting point cloud data (e.g. ¶ 0030) and explains that planar or cubic primitives “may be used to generate a quadtree or octree partition of the geometric primitive.” (¶ 0044). Lasserre (US 2020/0334866 A1) explains, “This figure represents an example of a quadtree-based structure that splits a square, but the reader will easily extend it to the 3D case by replacing the square by a cube….” (¶ 0123). 
Vrcelj (US 2016/0196659 A1) teaches, in an environment-mapping implementation, determining the size of an overlap of regions using intersection over union (IoU), wherein the intersection part of the ratio calculates the size of the overlapping region (e.g. ¶ 0073). Park (US 2016/0171753 A1) teaches determining the size of overlapping regions in a 3D scene mapping scenario (e.g. ¶ 0117). He et al., “Best-effort projection based attribute compression for 3D point cloud,” 2017 23rd Asia-Pacific Conference on Communications (APCC), Dec. 2017, pp. 1–6. He, Abstract: teaches 3D point cloud data is projected onto a 2D plane to take advantage of existing 2D compression schemes, like conventional video coding; Examiner notes inter-view coding in the video compression art is well established for multi-view coding because different views of a scene offer spatial redundancies similar to the temporal redundancies handled by conventional video coding technologies; He, Section III(B): teaches inter-view prediction and cites endnote 14, which is a citation to a video encoding publication. Maycotte (US 2018/0034824 A1) teaches “tile ID” and “slice ID” in paragraph [0096]. Wozniak (US 2016/0353128 A1) teaches, in paragraph [0043], that region identifiers can be accomplished using “a slice or tile identifier.” Yamamoto (US 2016/0286235 A1) teaches tile and slice identifiers in paragraphs [0251] and [0259], respectively. Lee (US 2015/0350645 A1) teaches slice identifiers and tile identifiers in paragraph [0118] and Claim 1, respectively. Lim (US 2019/0215516 A1) teaches, in paragraph [0119], that slice identification information and tile identification information are common coding parameters.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael J Hess whose telephone number is (571)270-7933. The examiner can normally be reached Mon - Fri 9:00am-5:30pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn can be reached on (571)272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8933. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MICHAEL J HESS/Examiner, Art Unit 2481
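The Conclusion's citation to Vrcelj describes sizing region overlap with intersection over union (IoU). For axis-aligned 3D bounding boxes of the kind the claimed tiles define, IoU is only a few lines; this is a generic sketch of the standard computation, not code taken from any cited reference:

```python
def iou_3d(a, b):
    """IoU of two axis-aligned boxes given as (xmin, ymin, zmin, xmax, ymax, zmax)."""
    ix = max(0.0, min(a[3], b[3]) - max(a[0], b[0]))  # overlap extent along x
    iy = max(0.0, min(a[4], b[4]) - max(a[1], b[1]))  # along y
    iz = max(0.0, min(a[5], b[5]) - max(a[2], b[2]))  # along z
    inter = ix * iy * iz                              # intersection volume
    vol = lambda c: (c[3] - c[0]) * (c[4] - c[1]) * (c[5] - c[2])
    union = vol(a) + vol(b) - inter
    return inter / union if union > 0 else 0.0

# Two unit cubes sharing half their volume: intersection 0.5, union 1.5, IoU 1/3.
print(iou_3d((0, 0, 0, 1, 1, 1), (0.5, 0, 0, 1.5, 1, 1)))
```

The intersection term alone is what Vrcelj's "size of the overlapping region" corresponds to; dividing by the union normalizes it to [0, 1].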

Prosecution Timeline

Jan 21, 2021
Application Filed
Sep 22, 2021
Non-Final Rejection — §103
Dec 27, 2021
Response Filed
Jan 13, 2022
Final Rejection — §103
Apr 19, 2022
Response after Non-Final Action
May 19, 2022
Request for Continued Examination
May 22, 2022
Response after Non-Final Action
Jun 17, 2022
Non-Final Rejection — §103
Sep 09, 2022
Interview Requested
Sep 15, 2022
Examiner Interview Summary
Sep 15, 2022
Applicant Interview (Telephonic)
Sep 23, 2022
Response Filed
Dec 01, 2022
Final Rejection — §103
Mar 03, 2023
Request for Continued Examination
Mar 11, 2023
Response after Non-Final Action
May 06, 2023
Non-Final Rejection — §103
Jul 12, 2023
Interview Requested
Aug 24, 2023
Applicant Interview (Telephonic)
Aug 25, 2023
Examiner Interview Summary
Sep 11, 2023
Response Filed
Nov 03, 2023
Final Rejection — §103
Feb 08, 2024
Request for Continued Examination
Feb 14, 2024
Response after Non-Final Action
Mar 25, 2024
Examiner Interview (Telephonic)
May 31, 2024
Response Filed
Jun 14, 2024
Non-Final Rejection — §103
Oct 17, 2024
Response Filed
Dec 10, 2024
Final Rejection — §103
Dec 10, 2024
Examiner Interview (Telephonic)
Mar 12, 2025
Request for Continued Examination
Mar 19, 2025
Response after Non-Final Action
May 17, 2025
Non-Final Rejection — §103
Aug 13, 2025
Response Filed
Sep 27, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12563195
Method And An Apparatus for Encoding and Decoding of Digital Image/Video Material
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12563208
PICTURE CODING METHOD, PICTURE CODING APPARATUS, PICTURE DECODING METHOD, AND PICTURE DECODING APPARATUS
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12556737
MOTION COMPENSATION FOR VIDEO ENCODING AND DECODING
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12556747
ARRAY BASED RESIDUAL CODING ON NON-DYADIC BLOCKS
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12549728
METHOD AND APPARATUS FOR CODING VIDEO DATA IN TRANSFORM-SKIP MODE
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 11-12
Grant Probability: 44%
With Interview: 52% (+7.7%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 418 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month