Prosecution Insights
Last updated: April 19, 2026
Application No. 18/526,280

ENTROPY ENCODING METHOD AND APPARATUS, AND ENTROPY DECODING METHOD AND APPARATUS

Final Rejection — §102, §103
Filed
Dec 01, 2023
Examiner
KRAYNAK, JACK PETER
Art Unit
2668
Tech Center
2600 — Communications
Assignee
Vivo Mobile Communication Co., Ltd.
OA Round
2 (Final)
Grant Probability: 78% (Favorable)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 78% (75 granted / 96 resolved; +16.1% vs TC avg); above average
Interview Lift: +18.8% for resolved cases with interview (a strong lift)
Typical Timeline: 3y 1m avg prosecution; 18 applications currently pending
Career History: 114 total applications across all art units

Statute-Specific Performance

§101: 8.1% (-31.9% vs TC avg)
§103: 54.4% (+14.4% vs TC avg)
§102: 27.3% (-12.7% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)
TC averages are estimates. Based on career data from 96 resolved cases.

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 3/2/2026 have been fully considered but they are not persuasive. The applicant argues that Han et al (US 20210144379 A1) does not disclose "a type of occupancy code context model," but instead determines whether to perform initialization based on the point cloud density. The applicant argues that "whether initialization is performed does not constitute different context model types," and "claim 1 determines the use of different model types based on sparsity/density information."

The examiner finds these arguments to be not persuasive. As taught by Han et al., if the "variation of the data density does not satisfy the predetermined condition, the three-dimensional data encoding device may determine that the coding efficiency is better when CABAC is initialized" (Para 557-558). In other words, a type of occupancy code context model is determined (CABAC is initialized) based on the sparsity/density information (if variation of the data density does not satisfy the predetermined condition).

The examiner would like to emphasize that under the broadest reasonable interpretation of the claim (MPEP 2111: "because applicant has the opportunity to amend the claims during prosecution, giving a claim its broadest reasonable interpretation will reduce the possibility that the claim, once issued, will be interpreted more broadly than is justified." In re Yamamoto, 740 F.2d 1569, 1571 (Fed. Cir. 1984); In re Zletz, 893 F.2d 319, 321, 13 USPQ2d 1320, 1322 (Fed. Cir. 1989) ("During patent examination the pending claims must be interpreted as broadly as their terms reasonably allow."); In re Prater, 415 F.2d 1393, 1404-05, 162 USPQ 541, 550-51 (CCPA 1969)), the CABAC is a type of occupancy code context model, and it is determined based on the sparsity/density information.
The claim language uses the broad term 'determined,' not more specific terms such as 'select' or 'switch.' Claim 1 does not recite "the use of different model types" or "switching of the occupancy code context model," as argued by the applicant. Claim 1 actually states "determining […] a type of occupancy code context model"; in other words, a type of occupancy code context model is determined based on the sparsity/density information. Claim 1 is furthermore silent regarding "determining whether to perform initialization" or "the use of different model types."

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-2, 7-13, and 18-20 is/are rejected under 35 U.S.C. 102(a)(1) as being clearly anticipated by Han et al (US 20210144379 A1).

Regarding claim 1, Han et al teaches an entropy encoding method, comprising (Para 576, The three-dimensional data encoding device then determines, for each slice, whether to initialize CABAC.
ALSO see Para 462, CABAC is an abbreviation of context-based adaptive binary arithmetic coding, which is an encoding method that realizes an arithmetic encoding (entropy encoding) with high compression ratio by increasing the probability precision by successively updating a context (a model for estimating the probability of occurrence of an input binary symbol) based on the encoded information. i.e. CABAC is an entropy encoding method that is a type of an occupancy code context model): obtaining, by an entropy encoding apparatus (Para 128, a three-dimensional data encoding device according to an aspect of the present disclosure includes: a processor; and memory), sparsity/density information of a to-be-encoded target point cloud (Para 574-588, The three-dimensional data encoding device determines an initialization with high coding efficiency based on the point cloud data density, for example. i.e. sparsity/density information of a 3D point cloud slice is determined); determining, based on the sparsity/density information, a type of an occupancy code context model used to perform entropy encoding on the target point cloud; and performing entropy encoding on the target point cloud based on the type of the occupancy code context model (Para 557-560 if the variation of the data density does not satisfy the predetermined condition, the three-dimensional data encoding device may determine that the coding efficiency is better when CABAC is initialized, and determine to initialize CABAC, and furthermore Para 574-588, the three-dimensional data encoding device then determines, for each slice, whether to initialize CABAC for the encoding of geometry information and the encoding of attribute information based on the data density of the object of the slice (S5272). 
In other words, the three-dimensional data encoding device determines CABAC initialization information (CABAC initialization flag) for the encoding of geometry information and the encoding of attribute information based on the geometry information. The three-dimensional data encoding device determines an initialization with high coding efficiency based on the point cloud data density, for example. The CABAC initialization information may be indicated by cabac_init_flag that is common to the geometry information and the attribute information. i.e. the encoding device determines, based on the sparsity/density information (density of the point cloud), whether to initialize CABAC (a type of entropy encoding context model) for the target point cloud (a 3D point cloud slice, as stated in Para 575)).

Regarding claim 2, Han et al teaches the method according to claim 1, wherein the obtaining sparsity/density information of a to-be-encoded target point cloud comprises: obtaining size information of a bounding box corresponding to the target point cloud and information on the number of points comprised in the target point cloud; and determining the sparsity/density information of the target point cloud based on the size information and the information on the number of points (Para 557-560, the three-dimensional data encoding device may determine the density of point cloud data for each slice, that is, the number of points per unit area belonging to each slice, compare the data density of the slice with the data density of another slice, and determine that the coding efficiency is better when CABAC is not initialized and determine not to initialize CABAC if the variation of the data density satisfies a predetermined condition. i.e.
when determining the type of occupancy code context model to use for the encoding (whether or not to use CABAC), the area of the bounding box (size of the bounding box) and the number of points are used to determine the 'sparsity/density information' of the target point cloud).

Regarding claim 7, Han et al teaches the method according to claim 1, wherein after the determining, based on the sparsity/density information, a type of an occupancy code context model used to perform entropy encoding on the target point cloud, the method further comprises: encoding second information into geometric slice header information of the target point cloud, wherein the second information comprises the sparsity/density information of the target point cloud or the type of the occupancy code context model used to perform entropy encoding on the target point cloud (Para 575-580, the CABAC initialization information may be indicated by cabac_init_flag that is common to the geometry information and the attribute information. i.e. the type of occupancy code context (CABAC) is encoded into the header information cabac_init_flag. See also Para 170 and 323-324 regarding encoding headers).

Regarding claim 8, Han et al teaches the method according to claim 1, wherein the target point cloud is a point cloud sequence or a point cloud slice in a point cloud sequence (Para 574-576, is a flowchart illustrating an example of the method of determining whether to initialize CABAC and determining a context initial value. First, the three-dimensional data encoding device divides point cloud data into slices based on an object determined from geometry information (S5271). i.e. target point cloud is a point cloud slice in a point cloud sequence).
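
The "first volume" test recited in claims 2-4 (the average volume occupied by each point in the bounding box, compared against a preset threshold) can be sketched as follows. This is an illustrative sketch only: the function and model names are hypothetical and appear in neither the application nor the cited references.

```python
# Hypothetical sketch of the density test recited in claims 2-4.
# The "first volume" is the average volume per point inside the
# bounding box; comparing it against a preset threshold classifies
# the point cloud as sparse or dense.

def classify_point_cloud(bbox_dims, num_points, preset_threshold):
    """Return 'sparse' or 'dense' per the claimed first-volume test."""
    if num_points == 0:
        raise ValueError("point cloud must contain at least one point")
    w, h, d = bbox_dims
    first_volume = (w * h * d) / num_points  # average volume per point
    # Claim 4: first volume above the threshold -> sparse; otherwise dense.
    return "sparse" if first_volume > preset_threshold else "dense"

def select_context_model(density):
    """Claim 6 style mapping: one occupancy-code context model per class."""
    return {"sparse": "occupancy_context_model_1",
            "dense": "occupancy_context_model_2"}[density]

print(classify_point_cloud((10, 10, 10), 100, 5.0))  # 1000/100 = 10 > 5 -> sparse
print(select_context_model("dense"))                 # occupancy_context_model_2
```

Note how this differs from what the examiner maps onto Han: here two distinct model types are selected, whereas Han's cited paragraphs decide whether to initialize a single CABAC model.
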
Regarding claim 9, Han et al teaches an entropy decoding method, comprising (Para 548 and Fig 64, a flowchart illustrating an example of a process of initializing the CABAC decoder in the combining (S5254) of information divided into slices and the combining (S5256) of information divided into tiles), obtaining, by an entropy decoding apparatus (Para 128, a three-dimensional data encoding device according to an aspect of the present disclosure includes: a processor; and memory), a type of an occupancy code context model used to perform entropy decoding on a to-be-decoded target point cloud, wherein the type of the occupancy code context model used to perform entropy decoding on the target point cloud is determined by sparsity/density information of the target point cloud, and performing entropy decoding on the target point cloud based on the type of the occupancy code context model (Para 548-556, when the CABAC initialization flag is 1 (if Yes in S5262), the three-dimensional data decoding device reinitializes the CABAC decoder to the default state (S5263). […] The three-dimensional data decoding device then continues the decoding process until a condition for stopping the decoding process is satisfied, such as until there is no data to be decoded (S5264). i.e. the CABAC initialization flag is the type of occupancy code context model that was used to perform encoding of the target point cloud, and entropy decoding is performed based on this type of occupancy code context model, which was determined by sparsity/density information of the target point cloud (see Para 557-560 and Para 574-588)).

Regarding claim 10, claim 10 is rejected for the same reasons as claim 9 and claim 7, above.
Regarding claim 11, Han et al teaches the method according to claim 10, wherein the second information comprises the type of the occupancy code context model used to perform entropy encoding on the target point cloud, and the determining the type of the occupancy code context model used to perform entropy decoding on the target point cloud comprises: determining that the type of the occupancy code context model used to perform entropy decoding on the target point cloud is the same as the type of the occupancy code context model used to perform entropy encoding on the target point cloud; or, wherein the second information comprises the sparsity/density information of the target point cloud, and the determining the type of the occupancy code context model used to perform entropy decoding on the target point cloud comprises: determining, based on the sparsity/density information of the target point cloud, the type of the occupancy code context model used to perform entropy decoding on the target point cloud (Para 548-556, when the CABAC initialization flag is 1 (if Yes in S5262), the three-dimensional data decoding device reinitializes the CABAC decoder to the default state (S5263). […] The three-dimensional data decoding device then continues the decoding process until a condition for stopping the decoding process is satisfied, such as until there is no data to be decoded (S5264). Furthermore, see, on the other hand, when the CABAC initialization flag is not 1 (if No in S5262), the three-dimensional data decoding device does not re-initialize the CABAC decoder and proceeds to step S5264. […] The three-dimensional data decoding device then continues the decoding process until a condition for stopping the decoding process is satisfied, such as until there is no data to be decoded (S5264). i.e. 
the CABAC initialization flag is the type of occupancy code context model that was used to perform encoding of the target point cloud, and entropy decoding is performed based on this type of occupancy code context model; therefore the entropy decoding model is confirmed to be the CABAC model, or the same as the encoding model).

Regarding claim 12, claim 12 is rejected for the same reasons as claims 7, 9, and 11, above. Regarding claim 13, claim 13 is rejected for the same reasons as claim 2. Regarding claim 18, claim 18 is rejected for the same reasons as claim 8. Regarding claim 19, claim 19 is rejected for the same reasons as claim 1. Regarding claim 20, claim 20 is rejected for the same reasons as claim 9.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2.
Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 3-4 and 14-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Han et al (US 20210144379 A1) in view of Smirnov (US 20200342250 A1). Regarding claim 3, Han et al does not teach, the method according to claim 2, wherein the determining the sparsity/density information of the target point cloud based on the size information and the information on the number of points comprises: determining a first volume based on the size information and the information on the number of points, wherein the first volume is an average volume occupied by each point in the target point cloud, in the bounding box; and determining the sparsity/density information of the target point cloud based on a relationship between the first volume and a preset threshold. In a similar field of endeavor, Smirnov teaches, the method according to claim 2, wherein the determining the sparsity/density information of the target point cloud based on the size information and the information on the number of points comprises: determining a first volume based on the size information and the information on the number of points (Para 50-54, in such an example, a volume of the point cloud is determined based on the represented shape. Moreover, the location of each of the plurality of the points in the point cloud enables the data processing module to determine the centroid of the point cloud. Furthermore, the location of each point of the plurality of points enables the data processing module to determine a number of points per unit area which is the density of the point cloud. i.e. 
determining a first volume based on the size information and number of points), wherein the first volume is an average volume occupied by each point in the target point cloud, in the bounding box (Para 50-54, in an example, a point cloud having one point per square metre is characterised as “sparse point cloud”, a point cloud having number of points in the range of one to two points per square metre is characterised as “low density point cloud”, a point cloud having number of points in the range of two to five points per square metre is characterised as “medium density point cloud”, a point cloud having number of points in the range of five to ten points per square metre is characterised as “dense point cloud”, and a point cloud having number of points more than ten points per square metre is characterised as “high density point cloud”. i.e. the area can be considered a bounding box, see also Fig 4 402 and 404, and 'average volume occupied by each point in the area (considered the bounding box or cube)' is 'five to ten points per square meter', see also Para 50: a size of the point cloud is determined based on the represented shape. 
Furthermore, the location and local density of the point having three-dimensional coordinates in the point cloud enables the data processing module to determine the volume of the point cloud); and determining the sparsity/density information of the target point cloud based on a relationship between the first volume and a preset threshold (Para 50-54, in an example, a point cloud having one point per square metre is characterised as “sparse point cloud”, a point cloud having number of points in the range of one to two points per square metre is characterised as “low density point cloud”, a point cloud having number of points in the range of two to five points per square metre is characterised as “medium density point cloud”, a point cloud having number of points in the range of five to ten points per square metre is characterised as “dense point cloud”, and a point cloud having number of points more than ten points per square metre is characterised as “high density point cloud”. i.e. a low number of points taking up a certain amount of volume in the area (1 to 2 points per square meter) is considered sparse, and more points per area is considered dense). Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date to incorporate the teachings of Han et al (US 20210144379 A1) in view of Smirnov (US 20200342250 A1) so that the method includes determining a first volume based on the size information and the information on the number of points, wherein the first volume is an average volume occupied by each point in the target point cloud, in the bounding box; and determining the sparsity/density information of the target point cloud based on a relationship between the first volume and a preset threshold. Doing so would allow for normalisation of the point cloud that is done in order to regularise the number of plurality of points acquired. 
Such normalisation accelerates a processing speed of the data processing module as a smaller number of plurality of points are processed at a given instant of time (Smirnov, Para 51).

Regarding claim 4, Han et al teaches the method according to claim 3, wherein the determining the sparsity/density information of the target point cloud based on a relationship between the first volume and a preset threshold comprises at least one of the following: if the first volume is greater than the preset threshold, determining that the sparsity/density information of the target point cloud is a sparse point cloud; or if the first volume is less than or equal to the preset threshold, determining that the sparsity/density information of the target point cloud is a dense point cloud; or, wherein the preset threshold is determined by the entropy encoding apparatus or prescribed by a protocol (Para 557-560, here, "another slice" may be the preceding slice in the decoding order or a spatially neighboring slice, for example. The three-dimensional data encoding device may not perform the comparison of the data density with that of another slice and may determine whether to initialize CABAC based on whether the data density of the slice is a predetermined data density or not. i.e. if the first volume of point cloud data is greater than the threshold, the point cloud is determined to be dense and CABAC is used, and the opposite for if the volume of point cloud data is less than the threshold (considered sparse)).

Regarding claim 14, claim 14 is rejected for the same reasons as claims 3 and 4. Regarding claim 15, claim 15 is rejected for the same reasons as claim 4.

Claim(s) 5 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Han et al (US 20210144379 A1) in view of Smirnov (US 20200342250 A1) and Park et al (US 20220383553 A1).
Regarding claim 5, Han et al teaches encoding a type of encoder used (when CABAC Is utilized for encoding) into the header of the encoding information (see Para 575-580: the CABAC initialization information may be indicated by cabac_init_flag that is common to the geometry information and the attribute information), but does not teach after the determining the sparsity/density information of the target point cloud based on a relationship between the first volume and a preset threshold, the method further comprises: encoding first information into geometric slice header information of the target point cloud, wherein the first information is the preset threshold or identification information corresponding to the preset threshold. Smirnov does not teach the method according to claim 3, wherein in a case that the preset threshold is determined by the entropy encoding apparatus, after the determining the sparsity/density information of the target point cloud based on a relationship between the first volume and a preset threshold, the method further comprises: encoding first information into geometric slice header information of the target point cloud, wherein the first information is the preset threshold or identification information corresponding to the preset threshold. 
In a similar field of endeavor, Park et al teaches the method according to claim 3, wherein in a case that the preset threshold is determined by the entropy encoding apparatus, after the determining the sparsity/density information of the target point cloud based on a relationship between the first volume and a preset threshold, the method further comprises: encoding first information into geometric slice header information of the target point cloud, wherein the first information is the preset threshold or identification information corresponding to the preset threshold (Para 351 - 359, TPS is tile parameter set in header, (see Para 349: when there is configuration information in the TPS, the reception method/device according to the embodiments may use the information of the TPS. In the attribute slice header, each tile may be divided into slices. That is, configuration information may be configured for each slice). This includes the threshold information for a Morton code generation order that is transmitted in the bitstream, in order for decoding corresponding to the encoding method to be performed).

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date to incorporate the teachings of Han et al (US 20210144379 A1) in view of Smirnov (US 20200342250 A1) and Park et al (US 20220383553 A1) so that the method includes encoding first information into geometric slice header information of the target point cloud, wherein the first information is the preset threshold or identification information corresponding to the preset threshold. Doing so would allow for the system to perform operations related to the condition and threshold for a Morton code generation order, as parameter information related to the condition for generation and the threshold may be signaled (Park et al., Para 351).

Regarding claim 16, claim 16 is rejected for the same reasons as claim 5 above.
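
The "first information" (the preset threshold) and "second information" (the density class or context-model type) that claims 5 and 7 place in the geometric slice header could be serialized as in the sketch below. The field layout, widths, and names are entirely hypothetical; none of the cited references specify this encoding.

```python
# Illustrative sketch of signaling the claimed "first information"
# (preset threshold) and "second information" (density flag and
# context-model id) in a geometric slice header. The layout is
# hypothetical: a little-endian float32 threshold followed by a
# 1-byte density flag and a 1-byte model id.
import struct

def pack_slice_header(preset_threshold, is_dense, context_model_id):
    """Serialize the hypothetical header fields into 6 bytes."""
    return struct.pack("<fBB", preset_threshold, int(is_dense), context_model_id)

def unpack_slice_header(buf):
    """Recover (threshold, is_dense, model_id) from the 6-byte header."""
    threshold, dense_flag, model_id = struct.unpack("<fBB", buf)
    return threshold, bool(dense_flag), model_id

hdr = pack_slice_header(5.0, True, 2)
print(unpack_slice_header(hdr))  # (5.0, True, 2)
```

Signaling the threshold itself (rather than only the resulting model type) lets a decoder reproduce the encoder's sparse/dense decision from the same density computation, which is the round-trip property the claims rely on.
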
Claim(s) 6 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Han et al (US 20210144379 A1) in view of Yano et al (US 20220353492 A1). Regarding claim 6, Han et al teaches a selection of an occupancy code context model based on a sparsity/density determination in the target point cloud, but does not specifically teach an alternative 'model 2' as stated in the limitation: or in a case that the sparsity/density information of the target point cloud is a dense point cloud, determining that the type of the occupancy code context model used to perform entropy encoding on the target point cloud is occupancy code context model 2. In a similar field of endeavor, Yano et al teaches the method according to claim 1, wherein the determining, based on the sparsity/density information, a type of an occupancy code context model used to perform entropy encoding on the target point cloud comprises at least one of the following: in a case that the sparsity/density information of the target point cloud is a sparse point cloud, determining that the type of the occupancy code context model used to perform entropy encoding on the target point cloud is occupancy code context model 1; or in a case that the sparsity/density information of the target point cloud is a dense point cloud, determining that the type of the occupancy code context model used to perform entropy encoding on the target point cloud is occupancy code context model 2 (Fig 9 and Para 149-152, mode determination can be made based on either 2-2-2 or 2-2-3 mode determination. In other words, the mode determination "is possible to perform simple density determination by presence or absence of actual points." Para 149-150 states: method 2-2-3 in a seventh stage from the top of the table in FIG. 9, it is possible to confirm presence or absence of actual points (point distribution status) around the node to be processed, and select the mode on the basis of a confirmation result. 
That is, it is possible to determine an actual density state, and select a more appropriate mode (in which a greater effect may be obtained) on the basis of a determination result. […] for example, it is possible to determine presence or absence of the point in a region within a predetermined distance from the node to be processed, and apply the prediction mode when the point is present and apply the DCM when the point is not present. i.e. selecting either mode 1 or mode 2 as can be seen in Figure 9 is based on a density/sparsity of points within the target point cloud. Furthermore, this density/sparsity calculation is used to determine what 'mode' or context to use for encoding, see Para 171, the mode selection unit 321 performs a process regarding selection of the encoding method (mode). For example, the mode selection unit 321 obtains the voxel data supplied from the voxel setting unit 312. Furthermore, the mode selection unit 321 selects the encoding method (mode) for each voxel (node in the octree). For example, the mode selection unit 321 selects whether to apply a method using the prediction of the position information of the point to be processed or to apply the CM as the encoding method of the point to be processed). Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date to incorporate the teachings of Han et al (US 20210144379 A1) in view of Yano et al (US 20220353492 A1) so that the method includes: in a case that the sparsity/density information of the target point cloud is a sparse point cloud, determining that the type of the occupancy code context model used to perform entropy encoding on the target point cloud is occupancy code context model 1; or in a case that the sparsity/density information of the target point cloud is a dense point cloud, determining that the type of the occupancy code context model used to perform entropy encoding on the target point cloud is occupancy code context model 2. 
Doing so would allow for mode selection on the basis of the density determination in this manner; a more appropriate mode may be selected more accurately than in a case of the method 2-2-2 described above. Therefore, the reduction in encoding efficiency may be further suppressed (Yano et al., Para 151).

Regarding claim 17, claim 17 is rejected for the same reasons as claim 6 above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Lasserre et al., (US 20210218969 A1), Para 135-150, Referring now to FIG. 15, another example of a context-based entropy coder 400 is shown. In this example, the entropy coder 400 uses the predicted occupancy pattern bP to select between subsets of contexts. If the predicted occupancy pattern is empty, the predictive context selection feature is disabled for coding the occupancy pattern and the coder 400 selects contexts using other criteria. The contexts may, in some embodiments, be selected from a first subset of contexts. If the predicted occupancy pattern is not empty, then the entropy coder 400 determines whether, for each bit bi to be coded, the corresponding predicted occupancy pattern bit bPi is non-zero. If it is zero, then the corresponding sub-volume is predicted to be empty and a second subset of contexts may be used for coding the bit. If the predicted occupancy pattern bit bPi is non-zero, then the sub-volume is predicted to contain at least one point. In this example, the entropy coder 400 then assesses how many predicted points are found with the corresponding sub-volume. If the number of predicted points in the sub-volume does not exceed a preset threshold value, then the sub-volume is predicted to be occupied but sparsely populated and a third subset of contexts is used for coding.
If the number of predicted points in the sub-volume exceeds the preset threshold value, then the sub-volume is predicted to be densely populated with points and a fourth subset of contexts is then used for selecting a context for coding bi.

Yano et al (US 20220277517 A1), Para 77-79, For example, as shown in FIG. 3, voxels 52 adjacent to each of the top, bottom, left, right, front, and rear faces of a voxel 51 to be processed are set as neighboring voxels. Then, contexts are switched according to presence or absence of points of each voxel 52. That is, a context is prepared in advance for each pattern (pattern with or without points of each voxel 52) that becomes a model of a distribution of neighboring points, it is determined whether the pattern of a distribution of points around the voxel 51 to be processed corresponds to any of the model patterns, and a context associated with that pattern is applied to arithmetic coding of the voxel 51 to be processed. Therefore, the same context is set for voxels having the same distribution pattern.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACK PETER KRAYNAK, whose telephone number is (703) 756-1713. The examiner can normally be reached Monday through Friday, 7:30 AM - 5 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vu Le, can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JACK PETER KRAYNAK/
Examiner, Art Unit 2668

/UTPAL D SHAH/
Primary Examiner, Art Unit 2668

Prosecution Timeline

Dec 01, 2023
Application Filed
Dec 01, 2025
Non-Final Rejection — §102, §103
Mar 02, 2026
Response Filed
Mar 18, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602819
IMAGE PROCESSING APPARATUS, FEATURE MAP GENERATING APPARATUS, LEARNING MODEL GENERATION APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12592065
SYSTEMS AND METHODS FOR OBJECT DETECTION IN EXTREME LOW-LIGHT CONDITIONS
2y 5m to grant Granted Mar 31, 2026
Patent 12586210
BIDIRECTIONAL OPTICAL FLOW ESTIMATION METHOD AND APPARATUS
2y 5m to grant Granted Mar 24, 2026
Patent 12579720
METHOD OF GENERATING TRAINED MODEL, MACHINE LEARNING SYSTEM, PROGRAM, AND MEDICAL IMAGE PROCESSING APPARATUS
2y 5m to grant Granted Mar 17, 2026
Patent 12568314
IMAGE SIGNAL PROCESSOR, METHOD OF OPERATING THE IMAGE SIGNAL PROCESSOR, AND APPLICATION PROCESSOR INCLUDING THE IMAGE SIGNAL PROCESSOR
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
78%
Grant Probability
97%
With Interview (+18.8%)
3y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 96 resolved cases by this examiner. Grant probability derived from career allow rate.
