Prosecution Insights
Last updated: April 18, 2026
Application No. 18/123,554

IMAGE PROCESSING METHOD AND APPARATUS, COMPUTER DEVICE, AND MEDIUM

Final Rejection — §102, §112

Filed: Mar 20, 2023
Examiner: ALFONSO, DENISE G
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: Tencent Cloud Computing (Beijing) Co. Ltd.
OA Round: 2 (Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 3y 1m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 74% (above average; 76 granted / 103 resolved; +11.8% vs TC avg)
Interview Lift: +19.8% (strong), comparing resolved cases with an interview vs. without
Typical Timeline: 3y 1m average prosecution; 31 applications currently pending
Career History: 134 total applications across all art units
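
The headline figures follow from simple ratios over this examiner's resolved cases, with the interview-adjusted number being the base rate plus the published lift. Below is a minimal sketch of that arithmetic, assuming those formulas hold; the dashboard's actual methodology is not disclosed, so treat the variable names and formulas as illustrative.

```python
# Assumed arithmetic behind the dashboard stats; formulas are illustrative.
granted, resolved = 76, 103

career_allow_rate = granted / resolved                # 0.738 -> shown as 74%
interview_lift = 0.198                                # the published +19.8% lift
with_interview = career_allow_rate + interview_lift   # 0.936 -> shown as 94%

print(f"career allow rate: {career_allow_rate:.1%}")  # 73.8%
print(f"with interview:    {with_interview:.1%}")     # 93.6%
```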

Statute-Specific Performance

§101: 8.3% (-31.7% vs TC avg)
§103: 59.8% (+19.8% vs TC avg)
§102: 19.4% (-20.6% vs TC avg)
§112: 8.1% (-31.9% vs TC avg)
Tech Center average values are estimates. Based on career data from 103 resolved cases.

Office Action

§102, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim that this application is a continuation of International Application No. PCT/CN2021/108929, filed on July 8, 2021, and of the benefit of foreign priority from Chinese Patent Application No. CN202110302731.1, filed on March 22, 2021.

Information Disclosure Statement

The information disclosure statements (“IDS”) filed on 05/21/2024 and 06/25/2025 were reviewed and the listed references were noted.

Drawings

The 15-page drawings have been considered and placed on record in the file.

Status of Claims

Claims 1, 4-15, 17-20, and new claims 21-23 are pending. Claims 2-3 and 16 are canceled.

Response to Amendment

The amendment filed 12/18/2025 has been entered. Claims 1, 4-15, 17-20, and new claims 21-23 are pending. Claims 2-3 and 16 are canceled.

Response to Arguments

Applicant's arguments filed 12/18/2025 have been fully considered but they are not persuasive. On page 13 of the Remarks, Applicants contend that Lin does not teach or suggest “determining a first predicted value of a characteristic of the target object based on a first feature extraction result of the first feature extraction performed on the image”, “determining a second predicted value of the characteristic of the target object based on a second feature extraction result of the second feature extraction performed on the mask image”, and “determining, by processing circuitry, a target predicted value of the characteristic of the target object according to the first predicted value and the second predicted value”, as required by amended independent claim 1, and argue that the amendment patentably defines over Lin.

The Examiner respectfully disagrees with this characterization of Lin and submits that the reference does disclose the limitations in question. The definition of “characteristic” is a feature or quality serving to identify the object, and there is no support for, or explicit definition of, the word “characteristic” in Applicant's disclosure. Figure 3 of Lin clearly shows that the features extracted from both the source input and the target input are used to determine a first and a second predicted value, and, as shown in Fig. 3, the gradient-reversed output of the discriminator is used as an input to the regressor to determine a final predicted value. Therefore, Lin discloses the limitations “determining a first predicted value of a characteristic of the target object based on a first feature extraction result of the first feature extraction performed on the image”, “determining a second predicted value of the characteristic of the target object based on a second feature extraction result of the second feature extraction performed on the mask image”, and “determining, by processing circuitry, a target predicted value of the characteristic of the target object according to the first predicted value and the second predicted value”.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1 and 14-15 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112 the inventor(s), at the time the application was filed, had possession of the claimed invention. The limitations “determining a first predicted value of a characteristic of the target object based on a first feature extraction result of the first feature extraction performed on the image”, “determining a second predicted value of the characteristic of the target object based on a second feature extraction result of the second feature extraction performed on the mask image”, and “determining, by processing circuitry, a target predicted value of the characteristic of the target object according to the first predicted value and the second predicted value” lack written description support in Applicant's Specification. The Specification consistently describes the predicted values being determined as “associated with” the target object rather than as a “characteristic of” the target object, and there is no clear definition in the Specification differentiating these two terms; therefore, claims 1 and 14-15 are rejected for failing to comply with the written description requirement. Claims 4-13, 17-20, and 21-23 are rejected by virtue of their dependency on independent claims 1 and 14-15.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 4-5, 11, 14-15, and 17-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lin et al., "Seg4Reg Networks for Automated Spinal Curvature Estimation" (published 02/01/2020), hereinafter referred to as Lin. The applied reference has a common joint inventor (Yi Lin) with the instant application. Based upon its earlier effective filing date, the reference constitutes prior art under 35 U.S.C. 102(a)(1) because its publication date is outside the grace period.

Claim 1

Lin discloses an image processing method (Lin, Section 2), comprising:

obtaining an image (Lin, Fig. 1, input image) including a target object (Lin, Abstract, “accurate spinal curvature estimation”; the spine is the target object);

performing image segmentation on the image (Lin, Fig. 1, segmentation, “We first process the X-ray using a segmentation network”);

determining a mask image of the target object (Lin, Fig. 1, “predicted mask”) based on the image segmentation performed on the image (Lin, Fig. 1, the segmentation process outputs a predicted segmentation mask);

performing a first feature extraction on the image (Lin, Section 2.2, “For backbone architecture, we simply took ResNet-50 and ResNet-101 as the basic feature extractor (Fig. 3)”; Fig. 3, source input, feature extractor);

determining a first predicted value (Fig. 3, the output of the first branch with the source input is the first predicted value) of the characteristic of the target object based on a first feature extraction result of the first feature extraction performed on the image (Lin, Fig. 3, source input, feature extractor, feature output; the definition of “characteristic” is a feature or quality serving to identify the object, and there is no support for, or explicit definition of, the word “characteristic” in Applicant's disclosure; Figure 3 of Lin clearly shows that the features extracted from both the source input and the target input are used to determine a first and a second predicted value, and the gradient-reversed output of the discriminator is used as an input to the regressor to determine a final predicted value);

performing a second feature extraction on the mask image (Lin, Section 2.2, “For backbone architecture, we simply took ResNet-50 and ResNet-101 as the basic feature extractor (Fig. 3)”; Fig. 3, target input, feature extractor);

determining a second predicted value (Fig. 3, the output of the second branch with the target input is the second predicted value) of the characteristic of the target object based on a second feature extraction result of the second feature extraction performed on the mask image (Lin, Fig. 3, target input, feature extractor, feature output; the definition of “characteristic” is a feature or quality serving to identify the object, and there is no support for, or explicit definition of, the word “characteristic” in Applicant's disclosure; Figure 3 of Lin clearly shows that the features extracted from both the source input and the target input are used to determine a first and a second predicted value, and the gradient-reversed output of the discriminator is used as an input to the regressor to determine a final predicted value); and

determining, by processing circuitry (Lin, Fig. 1, running the network requires a computer, which has processing circuitry; Section 2.3, “NVIDIA P40 GPU”), a target predicted value (Lin, Fig. 1, the final target predicted value is the Cobb angles output of the pipeline in Fig. 1 (angle 1, angle 2, and angle 3)) associated with the target object according to the first predicted value and the second predicted value (Lin, Fig. 1, the regression network in Fig. 3 is the same as the regression network shown in Fig. 1, which outputs both the first and second predicted values).

Claim 4

Lin discloses the method according to claim 1 (Lin, Section 2), wherein the image segmentation (Lin, Fig. 1, segmentation model) and the determination of the mask image (Lin, Fig. 1, the mask image is shown as an output of the segmentation model) are performed by a target segmentation network (Lin, Fig. 1, segmentation model) in a target image processing model (Lin, Fig. 1); the target image processing model (Lin, Fig. 1) includes a target regression network (Lin, Fig. 1, regression network; Fig. 3 shows the detailed part of the regression network), and the target regression network includes a first branch network and a second branch network (Lin, Fig. 3, the first branch is for the source input and the second branch is for the target input); the first feature extraction is performed by the first branch network (Lin, Section 2.2, “For backbone architecture, we simply took ResNet-50 and ResNet-101 as the basic feature extractor (Fig. 3)”; Fig. 3, source input, feature extractor), and the second feature extraction is performed by the second branch network (Lin, Section 2.2, “For backbone architecture, we simply took ResNet-50 and ResNet-101 as the basic feature extractor (Fig. 3)”; Fig. 3, target input, feature extractor).

Claim 5

Lin discloses the method according to claim 1 (Lin, Section 2), wherein the image segmentation (Lin, Fig. 1, segmentation model) and the determination of the mask image (Lin, Fig. 1, the mask image is shown as an output of the segmentation model) are performed by a target segmentation network (Lin, Fig. 1, segmentation model) in a target image processing model (Lin, Fig. 1); the target image processing model further includes a target regression network (Lin, Fig. 1, regression network; Fig. 3 shows the detailed part of the regression network), and the target regression network is obtained based on a regression network that includes a first branch network and a second branch network (Lin, Fig. 3, the first branch is for the source input and the second branch is for the target input); and the method (Lin, Section 2) further comprises:

obtaining a first sample image including a sample target object (Lin, Section 3.1, Fig. 2(a), training set), and obtaining a target label of the first sample image (Lin, Fig. 2(a), training set; Section 3.1, “ground truth labels”; the training set includes ground truth labels), wherein the target label indicates a target mark value associated with the sample target object (Lin, Section 3.1, “ground truth labels”);

performing image segmentation on the first sample image (Lin, Section 2.3, Network Training; training includes using the training set as input to the networks in Fig. 1 and Fig. 3) through a segmentation network (Lin, Fig. 1, segmentation network), and determining a first sample mask image of the sample target object based on the image segmentation performed on the first sample image (Lin, Fig. 1, the “predicted mask” is the first sample mask image if the input is an image from the training set);
performing a first feature extraction on the first sample image through the first branch network in the regression network (Lin, Section 2.2, “For backbone architecture, we simply took ResNet-50 and ResNet-101 as the basic feature extractor (Fig. 3)”; Fig. 3, source input, feature extractor), and determining a first sample predicted value associated with the target object based on a first feature extraction result of the first feature extraction performed on the first sample image (Fig. 3, the output of the first branch with the source input is the first sample predicted value if the input image is part of the training set);

performing a second feature extraction on the first sample mask image through the second branch network in the regression network (Lin, Section 2.2, “For backbone architecture, we simply took ResNet-50 and ResNet-101 as the basic feature extractor (Fig. 3)”; Fig. 3, target input, feature extractor), and determining a second sample predicted value associated with the target object based on a second feature extraction result of the second feature extraction performed on the sample mask image (Fig. 3, the output of the second branch with the target input is the second sample predicted value if the input image is part of the training set);

determining a target sample predicted value (Lin, Fig. 1, the final target predicted value is the Cobb angles output of the pipeline in Fig. 1 (angle 1, angle 2, and angle 3); it is the sample predicted value if the input is part of the training set) associated with the sample target object based on the first sample predicted value and the second sample predicted value (Lin, Fig. 1, the regression network in Fig. 3 is the same as the regression network shown in Fig. 1, which outputs both the first and second sample predicted values); and

updating a network parameter of the regression network according to the target sample predicted value and the target mark value, and iteratively training the regression network according to the updated network parameter of the regression network, to obtain the target regression network (Lin, Section 2.3, “We used Adam as the default optimizer for both networks where the initial learning rate is 3e−3. β1 and β2 are set to 0.9 and 0.999, respectively. We also used weight decay which is 1e−5 and cosine annealing strategy. For the segmentation model, we ran each network for 50 epochs while 90 epochs seem to be a better choice for the regression model.”; the Adam optimizer is used for network parameter tuning).

Claim 11

Lin discloses the method according to claim 5 (Lin, Section 2), wherein the updating of the network parameter of the regression network (Lin, Section 2.3, “We used Adam as the default optimizer for both networks where the initial learning rate is 3e−3. β1 and β2 are set to 0.9 and 0.999, respectively. We also used weight decay which is 1e−5 and cosine annealing strategy. For the segmentation model, we ran each network for 50 epochs while 90 epochs seem to be a better choice for the regression model.”; the Adam optimizer is used for network parameter tuning) comprises:

obtaining a regression network loss function (Lin, Section 2.2, “idea is pretty simple which adds a discriminator branch and reverses its gradients during the back propagation so the final loss function can be formalized as: Loss = Ly + λLd”);

substituting the target sample predicted value and the target mark value into the regression network loss function, to obtain a loss value (Lin, Section 2.2, “idea is pretty simple which adds a discriminator branch and reverses its gradients during the back propagation so the final loss function can be formalized as: Loss = Ly + λLd”); and

updating the network parameter of the regression network to reduce the loss value (Lin, Section 2.3, “We used Adam as the default optimizer for both networks where the initial learning rate is 3e−3. β1 and β2 are set to 0.9 and 0.999, respectively. We also used weight decay which is 1e−5 and cosine annealing strategy. For the segmentation model, we ran each network for 50 epochs while 90 epochs seem to be a better choice for the regression model.”; the Adam optimizer is used for network parameter tuning).

Claim 14

Lin discloses an image processing method (Lin, Section 2), comprising:

obtaining an image processing model including a segmentation network and a regression network (Lin, Fig. 1, segmentation network; Fig. 3 shows the detailed part of the regression network), the regression network including a first branch network and a second branch network (Lin, Fig. 3, the first branch is for the source input and the second branch is for the target input);

obtaining a first sample image including a target object (Lin, Section 3.1, Fig. 2(a), training set) and a target label of the first sample image (Lin, Fig. 2(a), training set; Section 3.1, “ground truth labels”; the training set includes ground truth labels), the target label indicating a target mark value associated with the target object (Lin, Section 3.1, “ground truth labels”);

performing image segmentation on the first sample image (Lin, Section 2.3, Network Training; training includes using the training set as input to the networks in Fig. 1 and Fig. 3) through a segmentation network (Lin, Fig. 1, segmentation network), and determining a first sample mask image of the target object based on the image segmentation performed on the first sample image (Lin, Fig. 1, the “predicted mask” is the first sample mask image if the input is an image from the training set);

updating a network parameter of the segmentation network based on the first sample mask image, and iteratively training the segmentation network according to the updated network parameter of the segmentation network, to obtain a target segmentation network (Lin, Section 2.3, “We used Adam as the default optimizer for both networks where the initial learning rate is 3e−3. β1 and β2 are set to 0.9 and 0.999, respectively. We also used weight decay which is 1e−5 and cosine annealing strategy. For the segmentation model, we ran each network for 50 epochs while 90 epochs seem to be a better choice for the regression model.”; the Adam optimizer is used for network parameter tuning);

performing a first feature extraction on the first sample image (Lin, Section 2.2, “For backbone architecture, we simply took ResNet-50 and ResNet-101 as the basic feature extractor (Fig. 3)”; Fig. 3, source input, feature extractor);
determining, via the first branch network (Lin, Fig. 3, source input branch), a first sample predicted value of the characteristic of the sample target object based on a first feature extraction result of the first feature extraction performed on the first sample image (Fig. 3, the output of the first branch with the source input is the first sample predicted value if the input image is part of the training set);

performing a second feature extraction on the first sample mask image (Lin, Section 2.2, “For backbone architecture, we simply took ResNet-50 and ResNet-101 as the basic feature extractor (Fig. 3)”; Fig. 3, target input, feature extractor);

determining, via the second branch network (Lin, Fig. 3, target input branch), a second sample predicted value of the characteristic of the sample target object based on a second feature extraction result of the second feature extraction performed on the first sample mask image (Fig. 3, the output of the second branch with the target input is the second sample predicted value if the input image is part of the training set);

determining a target sample predicted value (Lin, Fig. 1, the final target predicted value is the Cobb angles output of the pipeline in Fig. 1 (angle 1, angle 2, and angle 3); it is the sample predicted value if the input is part of the training set) associated with the target object based on the first sample predicted value and the second sample predicted value (Lin, Fig. 1, the regression network in Fig. 3 is the same as the regression network shown in Fig. 1, which outputs both the first and second sample predicted values);

updating one or more network parameters of the regression network according to the target sample predicted value and the target mark value, and iteratively training the regression network according to the updated network parameters of the regression network, to obtain a target regression network (Lin, Section 2.3, “We used Adam as the default optimizer for both networks where the initial learning rate is 3e−3. β1 and β2 are set to 0.9 and 0.999, respectively. We also used weight decay which is 1e−5 and cosine annealing strategy. For the segmentation model, we ran each network for 50 epochs while 90 epochs seem to be a better choice for the regression model.”; the Adam optimizer is used for network parameter tuning); and

obtaining, by processing circuitry (Lin, Fig. 1, running the network requires a computer, which has processing circuitry; Section 2.3, “NVIDIA P40 GPU”), a target image processing model (Lin, Fig. 1) through the target segmentation network (Lin, Fig. 1, segmentation network) and the target regression network (Lin, Fig. 1, regression network), the target image processing model being configured to perform data analysis on an image including the sample target object, to obtain a target predicted value of the characteristic of the sample target object (Lin, Fig. 1, the output is the Cobb angles of the vertebrae).

Claims 15 and 17-18 are rejected for similar reasons as those described for claims 1 and 4, respectively. The additional element in claims 15 and 17-18 that Lin discloses is an image processing apparatus (Lin, Fig. 1, running the network requires a computer, which has processing circuitry; Section 2.3, “NVIDIA P40 GPU”).

Claim 19 is rejected for similar reasons as those described for claim 1. The additional elements in claim 19 that Lin discloses are a non-transitory computer-readable storage medium (Lin, Fig. 1, running the network requires a computer; Section 2.3, “NVIDIA P40 GPU”; a computer inherently has a memory) storing instructions which, when executed by a processor (Lin, Fig. 1, running the network requires a computer, which has processing circuitry; Section 2.3, “NVIDIA P40 GPU”), cause the processor to perform the method according to claim 1 (Lin, Fig. 1).

Claim 20 is rejected for similar reasons as those described for claim 14. The additional elements in claim 20 that Lin discloses are a non-transitory computer-readable storage medium (Lin, Fig. 1, running the network requires a computer; Section 2.3, “NVIDIA P40 GPU”; a computer inherently has a memory) storing instructions which, when executed by a processor (Lin, Fig. 1, running the network requires a computer, which has processing circuitry; Section 2.3, “NVIDIA P40 GPU”), cause the processor to perform the method according to claim 14 (Lin, Fig. 1).

Allowable Subject Matter

Claims 6-10 and 12-13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if amended to overcome the 112(a) rejection. The following is a statement of reasons for the indication of allowable subject matter: the claimed features of “performing classification activation mapping on the first feature extraction result of the first sample image, to obtain a first classification activation mapping graph, wherein the first classification activation mapping graph highlights an image area associated with the sample target object; and determining the first sample predicted value associated with the sample target object based on the first classification activation mapping graph”, as recited in dependent claim 6, in combination with the remainder of the limitations of the claims, are neither anticipated nor rendered obvious in view of the prior art of record. In the closest prior art found, Qin, “Residual Block-based Multi-Label Classification and Localization Network with Integral Regression for Vertebrae Labeling”, teaches highlighting the areas associated with the target object, as shown in Fig. 2. However, Qin fails to teach performing a classification activation mapping on the extracted features to obtain the highlighted areas or the classification activation mapping graph. Therefore, claim 6 would be allowable for claiming the limitation recited above. Claims 7-10 and 12-13 would be allowable by virtue of their dependency on claim 6.

Claims 21-23 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if amended to overcome the 112(a) rejection.
The following is a statement of reasons for the indication of allowable subject matter: the claimed features of “obtaining a first classification activation mapping graph based on a first classification activation mapping of the first feature extraction result; and determining the first predicted value based on the first classification activation mapping graph, and the determining the second predicted value includes: obtaining a second classification activation mapping graph based on a second classification activation mapping of the second feature extraction result; and determining the second predicted value based on the second classification activation mapping graph”, as recited in dependent claims 21-23, in combination with the remainder of the limitations of the claims, are neither anticipated nor rendered obvious in view of the prior art of record. In the closest prior art found, Qin, “Residual Block-based Multi-Label Classification and Localization Network with Integral Regression for Vertebrae Labeling”, teaches highlighting the areas associated with the target object, as shown in Fig. 2. However, Qin fails to teach obtaining an activation mapping graph based on a classification activation mapping of both the first and second feature extraction results from both the source image and the target image. Therefore, claims 21-23 would be allowable for claiming the limitations recited above.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENISE G ALFONSO, whose telephone number is (571) 272-1360. The examiner can normally be reached Monday - Friday, 7:30 - 5:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amandeep Saini, can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DENISE G ALFONSO/
Examiner, Art Unit 2662

/AMANDEEP SAINI/
Supervisory Patent Examiner, Art Unit 2662
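
For technical orientation when drafting a response, the Lin pipeline that the §102 mapping relies on can be sketched roughly as below: a shared CNN feature extractor feeds a regressor for the Cobb angles and, through a gradient-reversal layer, a domain discriminator, trained with the combined loss Loss = Ly + λLd and the Adam settings quoted in Section 2.3. This is a reading of the office action's citations, not Lin's published code; the class and variable names are hypothetical.

```python
# Rough PyTorch sketch of the two-branch regressor with gradient reversal
# per the office action's citations to Lin (Fig. 3, Sections 2.2-2.3).
# Names such as Seg4RegRegressor are hypothetical, not taken from Lin.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates and scales gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lambd * grad_out, None

class Seg4RegRegressor(nn.Module):
    def __init__(self, num_angles=3, lambd=1.0):
        super().__init__()
        backbone = resnet50(weights=None)   # "basic feature extractor" per Section 2.2
        backbone.fc = nn.Identity()         # expose the 2048-d pooled features
        self.extractor = backbone
        self.regressor = nn.Linear(2048, num_angles)  # Cobb angles (angles 1-3, Fig. 1)
        self.discriminator = nn.Linear(2048, 2)       # source vs. target domain
        self.lambd = lambd

    def forward(self, x):
        feat = self.extractor(x)
        angles = self.regressor(feat)                                     # Ly branch
        domain = self.discriminator(GradReverse.apply(feat, self.lambd))  # Ld branch
        return angles, domain

model = Seg4RegRegressor()
# Optimizer settings as quoted from Lin Section 2.3.
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3,
                             betas=(0.9, 0.999), weight_decay=1e-5)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=90)

# Combined loss "Loss = Ly + lambda * Ld" from Section 2.2, e.g.:
mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()
# loss = mse(angles, gt_angles) + model.lambd * ce(domain, domain_labels)
```

The same module is applied to both inputs (the X-ray as the source branch and the predicted mask as the target branch), which is how the office action reads the "first" and "second" predicted values onto Lin's Fig. 3.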

Prosecution Timeline

Mar 20, 2023
Application Filed
Sep 13, 2025
Non-Final Rejection — §102, §112
Oct 21, 2025
Applicant Interview (Telephonic)
Oct 21, 2025
Examiner Interview Summary
Dec 18, 2025
Response Filed
Apr 03, 2026
Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586352
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579693
ELECTRONIC SHELF LABEL MANAGING SERVER, DISPLAY DEVICE AND CONTROLLING METHOD THEREOF
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12555371
VISION TRANSFORMER FOR MOBILENET SIZE AND SPEED
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12541980
METHOD FOR DETERMINING OBJECT INFORMATION RELATING TO AN OBJECT IN A VEHICLE ENVIRONMENT, CONTROL UNIT AND VEHICLE
Granted Feb 03, 2026 (2y 5m to grant)
Patent 12541941
A Method for Testing an Embedded System of a Device, a Method for Identifying a State of the Device and a System for These Methods
Granted Feb 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 94% (+19.8%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 103 resolved cases by this examiner. Grant probability derived from career allow rate.
