Prosecution Insights
Last updated: April 19, 2026
Application No. 18/624,683

ATTRIBUTE CODING FOR POINT CLOUD COMPRESSION

Non-Final OA: §101, §102, §103

Filed: Apr 02, 2024
Examiner: PARK, EDWARD
Art Unit: 2675
Tech Center: 2600 — Communications
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82%, above average (576 granted / 704 resolved; +19.8% vs TC avg)
Interview Lift: +18.4% (strong), based on resolved cases with an interview
Typical Timeline: 2y 9m average prosecution; 27 applications currently pending
Career History: 731 total applications across all art units

Statute-Specific Performance

§101: 16.9% (-23.1% vs TC avg)
§102: 21.3% (-18.7% vs TC avg)
§103: 47.3% (+7.3% vs TC avg)
§112: 6.3% (-33.7% vs TC avg)
Deltas are relative to the estimated Tech Center average • Based on career data from 704 resolved cases

Office Action

Rejections: §101, §102, §103
DETAILED ACTION

Contents: Notice of Pre-AIA or AIA Status; Claim Rejections - 35 USC § 101; Claim Rejections - 35 USC § 102; Claim Rejections - 35 USC § 103; Allowable Subject Matter; Conclusion

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to applicant's claim set received on 4/2/24. Claims 1-20 are currently pending.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter, as follows. Claims 1 and 12 are directed to an abstract idea, since both collect, analyze, and correlate image data and text data to assign labels based on similarity. The claims do not provide an inventive concept or significantly more, since the claimed elements are functional and recited at a high level of generality. Dependent claims 2-11 and 13-20 are likewise directed to the abstract idea and do not provide any elements that are significantly more. Thus, all claims are considered non-statutory subject matter.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 and 12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Mammou et al. (US 2019/0087979 A1).

Regarding claim 1, Mammou discloses a device for coding point cloud data, the device comprising:

a memory configured to store point cloud data (see 0604, 0004; system memory... In some embodiments, a system includes one or more sensors configured to capture points that collectively make up a point cloud, wherein each of the points comprises spatial information identifying a spatial location of the respective point and attribute information defining one or more attributes associated with the respective point); and

one or more processors implemented in circuitry and configured to: decode encoded point cloud geometry data for a point cloud to reconstruct point cloud geometry data for the point cloud (see 0010; The decoder is further configured to determine, for each patch, spatial information for the set of points of the patch based, at least in part, on the patch image comprising the set of points of the patch projected onto the patch plane and the patch image comprising the depth information for the set of points of the patch, and generate a decompressed version of the compressed point cloud based, at least in part, on the determined spatial information for the plurality of patches and the attribute information included in the patches);

downscale the point cloud geometry data to form downscaled point cloud geometry data (see 0029, 0331; For example, spatial down-scaler 502 may downscale the geometry image frame 526 and the texture down-scaler 504 may independently downscale the texture image frame 528. In some embodiments, attribute down-scaler 506 may downscale an attribute independently of spatial down-scaler 502 and texture down-scaler 504. Because different down-scalers are used to downscale different types of image frames (e.g. spatial information, texture, other attributes, etc.), different downscaling parameters may be applied to the different types of image frames to downscale geometry different than texture or attributes); and

code attribute data for the point cloud using the downscaled point cloud geometry data (see 0326-0327; FIG. 5A illustrates components of an encoder that includes geometry, texture, and/or attribute downscaling, according to some embodiments. Any of the encoders described herein may further include a spatial down-scaler component 502, a texture down-scaler component 504, and/or an attribute down-scaler component 506 as shown for encoder 500 in FIG. 5A. For example, encoder 200 illustrated in FIG. 2A may further include downscaling components as described in FIG. 5A. In some embodiments, encoder 250 may further include downscaling components as described in FIG. 5A. [0327] In some embodiments, an encoder that includes downscaling components, such as geometry down-scaler 502, texture down-scaler 504, and/or attribute down-scaler 506, may further include a geometry up-scaler, such as spatial up-scaler 508, and a smoothing filter, such as smoothing filter 510. In some embodiments, a reconstructed geometry image is generated from compressed patch images, compressed by video compression module 218. In some embodiments an encoder may further include a geometry reconstruction module (not shown) to generate the reconstructed geometry image. The reconstructed geometry image may be used by the occupancy map to encode and/or improve encoding of an occupancy map that indicates patch locations for patches included in one or more frame images. Additionally, the reconstructed geometry image may be provided to a geometry up-scaler, such as geometry up-scaler 508. A geometry up-scaler may scale the reconstructed geometry image up to an original resolution or a higher resolution approximating the original resolution of the geometry image, wherein the original resolution is a resolution prior to downscaling being performed at geometry down-scaler 502. In some embodiments, the upscaled reconstructed geometry image may be provided to a smoothing filter that generates a smoothed image of the reconstructed and upscaled geometry image. This information may then be provided to the spatial image generation module 210, texture image generation module 212, and/or the attribute image generation module 214. These modules may adjust generation of spatial images, texture images, and/or other attribute images based on the reconstructed geometry images. For example, if a patch shape (e.g. geometry) is slightly distorted during the downscaling, encoding, decoding, and upscaling process, these changes may be taken into account when generating spatial images, texture images, and/or other attribute images to correct for the changes in patch shape (e.g. distortion)).

Regarding claim 12, the claim is analyzed as a method claim that implements the limitations of claim 1 (see rejection of claim 1).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Mammou et al. (US 2019/0087979 A1) in view of Pang et al. (WO 2022/271602 A1).

Regarding claim 2, Mammou teaches all elements as mentioned above in claim 1. Mammou does not expressly teach: encode the attribute data for the point cloud, and wherein the one or more processors are further configured to encode the point cloud geometry data using a deep learning-based geometry encoder to form the encoded point cloud geometry data prior to decoding the encoded point cloud geometry data.

Pang, in the same field of endeavor, teaches these limitations (see 59, 82-83, 3, 49-50, 82-84; [82] In one embodiment, the compression of the XYZ image can be based on state-of-the-art image/video compression methods, e.g., JPEG, MPEG AVC/HEVC/VVC. As described before, the information indicative of point positions ({X, Y, Z} or {AX, AY, AZ}) may be arranged into a 3-channel image with each position-indicative parameter carried on one of the three channels. Quantization (1110) is performed before compression, for example, in order to convert the floating numbers in the XYZ image to a data format used by the 2D video encoder. Also, without any adjustments, both the XYZ image and the AXYZ image may have negative values; we can normalize them and make their dynamic range fall into a pre-defined interval before sending them to the codec. In one embodiment, we first compute the minimum and the maximum values of each channel of the XYZ (or AXYZ) image, denoted by mink and maxk where k ranges from 1 to K. Then we normalize each channel of the XYZ image to the range, for example, [0, 255], before feeding it to the codec. In this case, the minimum and the maximum values mink and maxk of each channel need to be sent as metadata to facilitate the decoding. Note that the minimum and the maximum values may be floating point numbers and can take negative values. [83] In another embodiment, the compression can be neural network-based methods, such as a variational autoencoder based on a factorized prior model or a scale hyperprior model, which approximates the quantization operation by adding uniform noise. It generates a differentiable bitrate Rxyz for end-to-end training. Since the neighboring samples in the XYZ or AXYZ images usually represent neighboring points in the original point cloud, usually there is strong correlation between neighboring samples. Thus, we anticipate that the XYZ and the AXYZ images would be coded efficiently with (unmodified) standard image and video codecs. [84] On the decoder side, provided a bitstream of the XYZ image as input, the XYZ image is decoded (1120). The reconstruction PC1 is simply the 3D points on the XYZ image. In another embodiment where metadata is also received, the reconstruction also relies on the received metadata. For example, when the minimum and the maximum values mink and maxk of each channel are received on the decoder side, they are used to have each channel of the reconstruction PC1 scaled back to its original range of values.).

It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify Mammou to utilize the cited limitations as suggested by Pang. The suggestion/motivation for doing so would have been to code efficiently (see 0083). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a "fundamental" operating principle of Mammou, while the teaching of Pang continues to perform the same function as originally taught prior to being combined, in order to produce a repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Mammou et al. (US 2019/0087979 A1) in view of Pang et al. (WO 2022/271602 A1), and further in view of Budagavi et al. (US 2018/0268570 A1).

Regarding claim 3, Mammou with Pang teaches all elements as mentioned above in claim 1. Mammou with Pang does not expressly teach: encode a value representing an amount of downscaling to be applied to the point cloud geometry data, wherein to downscale the point cloud geometry data, the one or more processors are configured to downscale the point cloud geometry data according to the value representing the amount of downscaling.

Budagavi, in the same field of endeavor, teaches these limitations (see 0086). It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify Mammou with Pang to utilize the cited limitations as suggested by Budagavi. The suggestion/motivation for doing so would have been to reduce storage and bandwidth requirements (see 0074). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a "fundamental" operating principle of Mammou with Pang, while the teaching of Budagavi continues to perform the same function as originally taught prior to being combined, in order to produce a repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Regarding claim 13, the claim is analyzed as a method claim that implements the limitations of claim 3 (see rejection of claim 3).

Claims 4, 14, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Mammou et al. (US 2019/0087979 A1) in view of Fang et al. (CV: "3DAC: Learning Attribute Compression for Point Clouds").

Regarding claim 4, Mammou teaches all elements as mentioned above in claim 1.
Mammou does not expressly teach: decode the attribute data for the point cloud to form downscaled point cloud attribute data, and wherein the one or more processors are further configured to upscale the downscaled point cloud attribute data.

Fang, in the same field of endeavor, teaches these limitations (see 3, 3.2.1, 3.4, fig. 3). It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify Mammou to utilize the cited limitations as suggested by Fang. The suggestion/motivation for doing so would have been to increase compression performance and reduce bitrate (see section 5). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a "fundamental" operating principle of Mammou, while the teaching of Fang continues to perform the same function as originally taught prior to being combined, in order to produce a repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Regarding claim 14, Mammou teaches all elements as mentioned above in claim 12. Mammou does not expressly teach: decoding the attribute data for the point cloud to form downscaled point cloud attribute data, the method further comprising: upscaling the downscaled point cloud attribute data, wherein upscaling the downscaled point cloud attribute data comprises upscaling the downscaled point cloud attribute data using a deep learning-based attribute upsampler.

Fang, in the same field of endeavor, teaches these limitations (see 3.2.1, 3.3, 3.4). It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify Mammou to utilize the cited limitations as suggested by Fang. The suggestion/motivation for doing so would have been to increase compression performance and reduce bitrate (see section 5). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a "fundamental" operating principle of Mammou, while the teaching of Fang continues to perform the same function as originally taught prior to being combined, in order to produce a repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Regarding claim 16, Mammou teaches reconstructing upscaled point cloud attribute data for the point cloud, the method further comprising applying the upscaled point cloud attribute data to the point cloud geometry data to reconstruct the point cloud (see 0010, 0011, 0030, 0038-0040).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Mammou et al. (US 2019/0087979 A1) in view of Fang et al. (CV: "3DAC: Learning Attribute Compression for Point Clouds"), and further in view of Wang et al. (IVP: "Sparse Tensor-based Point Cloud Attribute Compression").

Regarding claim 5, Mammou with Fang teaches all elements as mentioned above in claim 4. Mammou with Fang does not expressly teach: upscale the downscaled point cloud attribute data using a deep learning-based attribute upsampler.

Wang, in the same field of endeavor, teaches this limitation (see 3.1, abstract). It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify Mammou with Fang to utilize the cited limitations as suggested by Wang. The suggestion/motivation for doing so would have been to outperform existing methods (see abstract). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a "fundamental" operating principle of Mammou with Fang, while the teaching of Wang continues to perform the same function as originally taught prior to being combined, in order to produce a repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.
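The per-channel min/max normalization Pang describes in paragraph [82] of the claim 2 rejection above, with each channel's (min, max) pair carried as metadata so the decoder can restore the original range, can be sketched as follows. This is an illustrative sketch only; the function names and the guard for constant channels are my own, not taken from the reference:

```python
import numpy as np

def normalize_channel(channel, lo=0.0, hi=255.0):
    """Rescale one channel of an XYZ image into [lo, hi].

    Returns the rescaled channel plus the (min, max) pair that would be
    sent as metadata so the decoder can undo the mapping.
    """
    c_min, c_max = float(channel.min()), float(channel.max())
    span = (c_max - c_min) or 1.0  # guard against a constant channel
    scaled = (channel - c_min) / span * (hi - lo) + lo
    return scaled, (c_min, c_max)

def denormalize_channel(scaled, meta, lo=0.0, hi=255.0):
    """Decoder side: restore the original value range from metadata."""
    c_min, c_max = meta
    return (scaled - lo) / (hi - lo) * (c_max - c_min) + c_min

# Channels may hold negative floats before normalization, as Pang notes.
xyz = np.array([-1.5, 0.0, 2.5])
scaled, meta = normalize_channel(xyz)
restored = denormalize_channel(scaled, meta)
```

The round trip is lossless up to floating-point precision; in the reference the loss comes from the subsequent quantization and codec, not from this rescaling.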
Claims 7-8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Mammou et al. (US 2019/0087979 A1) in view of Fang et al. (CV: "3DAC: Learning Attribute Compression for Point Clouds"), and further in view of Budagavi et al. (US 2018/0268570 A1).

Regarding claims 7-8, Mammou with Fang teaches all elements as mentioned above in claim 4. Mammou with Fang does not expressly teach: reconstruct upscaled point cloud attribute data for the point cloud, and wherein the one or more processors are further configured to apply the upscaled point cloud attribute data to the point cloud geometry data to reconstruct the point cloud; decode a value representing an amount of downscaling to be applied to the point cloud geometry data, wherein to downscale the point cloud geometry data, the one or more processors are configured to downscale the point cloud geometry data according to the value representing the amount of downscaling.

Budagavi, in the same field of endeavor, teaches these limitations (see 0069-0081, 0083). It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify Mammou with Fang to utilize the cited limitations as suggested by Budagavi. The suggestion/motivation for doing so would have been to reduce storage and bandwidth requirements (see 0074). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a "fundamental" operating principle of Mammou with Fang, while the teaching of Budagavi continues to perform the same function as originally taught prior to being combined, in order to produce a repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Regarding claim 17, the claim is analyzed as a method claim that implements the limitations of claim 8 (see rejection of claim 8).

Claims 11 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Mammou et al. (US 2019/0087979 A1) in view of Wilhelms et al. (ACM: "Octrees for Faster Isosurface Generation").

Regarding claim 11, Mammou teaches all elements as mentioned above in claim 1. Mammou does not expressly teach: for each node of an octree that includes eight leaf sub-nodes where at least one of the eight leaf sub-nodes is occupied by a point, redefine the node as an occupied leaf node in a downscaled octree; and for each node of the octree that includes eight leaf sub-nodes where none of the eight leaf sub-nodes is occupied by a point, redefine the node as an unoccupied leaf node in the downscaled octree.

Wilhelms, in the same field of endeavor, teaches these limitations (see 1.1, octree basics). It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify Mammou to utilize the cited limitations as suggested by Wilhelms. The suggestion/motivation for doing so would have been to yield substantial improvements in performance (see section 7). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a "fundamental" operating principle of Mammou, while the teaching of Wilhelms continues to perform the same function as originally taught prior to being combined, in order to produce a repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Regarding claim 20, the claim is analyzed as a method claim that implements the limitations of claim 11 (see rejection of claim 11).

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Mammou et al. (US 2019/0087979 A1) in view of Fang et al. (CV: "3DAC: Learning Attribute Compression for Point Clouds"), and further in view of Ma et al. (US 2023/0075442 A1).

Regarding claim 15, Mammou with Fang teaches all elements as mentioned above in claim 14. Mammou with Fang does not expressly teach: decode the attribute data for the point cloud to form downscaled point cloud attribute data, and wherein the one or more processors are further configured to upscale the downscaled point cloud attribute data. Ma, in the same field of endeavor, teaches applying at least one of a convolutional target coordinates 5x5x5 layer, a transposed convolutional layer, a deconvolution layer, or an unpooling layer to the downscaled point cloud attribute data (see 0069).

It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify Mammou with Fang to utilize the cited limitations as suggested by Ma. The suggestion/motivation for doing so would have been to improve operation speed and achieve high coding performance (see 0160). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a "fundamental" operating principle of Mammou with Fang, while the teaching of Ma continues to perform the same function as originally taught prior to being combined, in order to produce a repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Allowable Subject Matter

Claims 6, 9-10, and 18-19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Regarding claim 6, none of the references of record, alone or in combination, suggests or fairly teaches wherein to upscale the downscaled point cloud attribute data, the one or more processors are configured to apply a convolutional target coordinates 5x5x5 layer to the downscaled point cloud attribute data.

Regarding claims 9 and 18, none of the references of record, alone or in combination, suggests or fairly teaches wherein to upscale the downscaled point cloud attribute data, the one or more processors are configured to upscale the downscaled point cloud attribute data according to the value representing the amount of downscaling to be applied to the point cloud geometry data.
Regarding claims 10 and 19, none of the references of record, alone or in combination, suggests or fairly teaches wherein the value comprises a first value, and wherein the one or more processors are further configured to decode a second value representing an amount of upscaling to be applied to the downscaled point cloud attribute data, wherein to upscale the downscaled point cloud attribute data, the one or more processors are configured to upscale the downscaled point cloud attribute data according to the second value representing the amount of upscaling to be applied to the point cloud attribute data.

Conclusion

Claims 1-5, 7-8, 11-17, and 20 are rejected. Claims 6, 9-10, and 18-19 are objected to as being dependent upon a rejected base claim.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD PARK. The examiner's contact information is as follows: Telephone: (571) 270-1576 | Fax: (571) 270-2576 | Edward.Park@uspto.gov. For email communications, please note MPEP 502.03, which outlines procedures pertaining to communications via the internet and authorization; a sample authorization form is cited within MPEP 502.03, section II. The examiner can normally be reached on M-F 9-6 CST.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Moyer, can be reached at (571) 272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDWARD PARK/
Primary Examiner, Art Unit 2666
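As context for the claim 11 and 20 limitations discussed above: collapsing each group of eight leaf sub-nodes into a single leaf that is occupied when at least one child is occupied is equivalent to integer-halving the occupied voxel coordinates and deduplicating. A minimal sketch (my own illustration, not code from the application or any cited reference):

```python
def downscale_octree_level(occupied_voxels):
    """One level of octree downscaling: a parent node is occupied iff
    at least one of its eight child voxels is occupied.

    occupied_voxels: set of (x, y, z) integer coordinates at depth d.
    Returns the occupied set at depth d-1 (coordinates halved).
    Unoccupied parents simply never appear in the set.
    """
    return {(x // 2, y // 2, z // 2) for (x, y, z) in occupied_voxels}

# Two occupied voxels sharing a parent collapse to one occupied leaf.
level_d = {(0, 0, 0), (1, 1, 1), (4, 2, 6)}
level_d_minus_1 = downscale_octree_level(level_d)
# → {(0, 0, 0), (2, 1, 3)}
```

Representing occupancy as a coordinate set makes the "any child occupied" rule a pure set comprehension; repeated application walks up the octree one level at a time.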

Prosecution Timeline

Apr 02, 2024
Application Filed
Jan 15, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602911 — SYSTEMS AND METHODS FOR HANDWRITING RECOGNITION USING OPTICAL CHARACTER RECOGNITION (2y 5m to grant; granted Apr 14, 2026)
Patent 12602815 — WEAKLY PAIRED IMAGE STYLE TRANSFER METHOD BASED ON POSE SELF-SUPERVISED GENERATIVE ADVERSARIAL NETWORK (2y 5m to grant; granted Apr 14, 2026)
Patent 12597173 — AUTOMATIC GENERATION OF AN IMAGE HAVING AN ATTRIBUTE FROM A SUBJECT IMAGE (2y 5m to grant; granted Apr 07, 2026)
Patent 12594023 — METHOD AND DEVICE FOR PROVIDING ALOPECIA INFORMATION (2y 5m to grant; granted Apr 07, 2026)
Patent 12592000 — SYSTEMS AND METHODS FOR PROCESSING DIGITAL IMAGES TO ADAPT TO COLOR VISION DEFICIENCY (2y 5m to grant; granted Mar 31, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 99% (+18.4%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 704 resolved cases by this examiner. Grant probability derived from career allow rate.
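The headline figures reduce to simple arithmetic on the examiner's career record. A quick sanity check (illustrative only; the +19.8% delta is read here as percentage points, and the interview-adjusted 99% comes from the tool's own model, which is not reproduced):

```python
granted, resolved = 576, 704

allow_rate = granted / resolved                       # 0.8181... -> shown as 82%
tc_delta = 19.8                                       # points above TC average
tc_average = round(allow_rate * 100 - tc_delta, 1)    # implied TC average

print(f"allow rate: {allow_rate:.1%}")        # allow rate: 81.8%
print(f"implied TC average: {tc_average}%")   # implied TC average: 62.0%
```

So the displayed 82% is the rounded career allow rate, and the Tech Center baseline it is compared against sits near 62% under this reading.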
