Prosecution Insights
Last updated: April 19, 2026
Application No. 18/588,194

INFORMATION PROCESSING APPARATUS, METHOD AND NON-TRANSITORY COMPUTER READABLE MEDIUM

Status: Non-Final OA (§102)
Filed: Feb 27, 2024
Examiner: LIU, XIAO
Art Unit: 2664
Tech Center: 2600 (Communications)
Assignee: Kabushiki Kaisha Toshiba
OA Round: 1 (Non-Final)

Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 89%, above average (+26.6% vs TC avg); 257 granted of 290 resolved (see the arithmetic note below)
Interview Lift: +11.5% (moderate), measured on resolved cases with an interview
Avg Prosecution: 2y 9m typical timeline; 44 applications currently pending
Career History: 334 total applications across all art units
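As a quick check on the headline figure (simple division of the counts shown above, nothing more): 257 granted / 290 resolved ≈ 88.6%, which the dashboard rounds to 89%.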

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 50.9% (+10.9% vs TC avg)
§102: 17.0% (-23.0% vs TC avg)
§112: 17.4% (-22.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 290 resolved cases.

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 02/27/2024 has been considered by the examiner.

Specification

Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as "The disclosure concerns," "The disclosure defined by this invention," "The disclosure describes," etc. In addition, the form and legal phraseology often used in patent claims, such as "means" and "said," should be avoided.

The abstract of the disclosure is objected to because it contains the phrase "According to one embodiment". A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-9 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Sultana et al. (US 20240203098 A1), hereinafter Sultana.

Regarding claim 1, Sultana discloses an information processing apparatus comprising a processor configured to (Abstract; FIGS. 1-19; [0119]): acquire training data that is used for training of a first feature extractor and a second feature extractor (FIGS. 4-5A; input image 402, 502; [0061]; [0067], "soft supervision … training"); determine a model size of the second feature extractor (FIGS. 4-5A; the second feature extractor is the set of subblocks in trained model 410 or 510 remaining after the subblocks f1 … fi (i.e., the first feature extractor) are selected by selector 520; once the selected subblocks are decided, the size of the second feature extractor is determined); extract a first feature by inputting the training data to the first feature extractor (FIG. 5A, output of fi); extract a second feature by inputting the first feature to the second feature extractor (FIG. 5A, output of block fn; the output of fi is the input of the second feature extractor); and train the first feature extractor in such a manner as to make the first feature closer to the second feature (FIG. 5A; [0072]; equations (2)-(3)).

Regarding claim 2, Sultana discloses the apparatus of claim 1. Sultana further discloses wherein the first feature extractor and the second feature extractor have an equal number of dimensions of features that are extracted (FIG. 4; FIG. 5A, tokens 506, 508; [0064], "A ViT is arranged as … any transformer block produces equi-dimensional features … each token has d dimensions").

Regarding claim 3, Sultana discloses the apparatus of claim 1. Sultana further discloses wherein pre-trained parameters in the first feature extractor are set as an initial value ([0055], "the present self-distillation approach seamlessly modularizes the architecture of ViTs and avoids introducing any new parameters"; [0062]; [0072]; FIGS. 4-5A; [0075]; note: blocks 410 or 510 are pretrained).

Regarding claim 4, Sultana discloses the apparatus of claim 1. Sultana further discloses comprising a storage configured to store a plurality of the second feature extractors, wherein the processor is configured to select the second feature extractor that is used for the training, from among the plurality of the second feature extractors (FIG. 5A; [0066], "extract a lot of knowledge from the training data is to learn many different models in parallel. The models should be as different as possible to minimize the correlations between their errors. The models can be made different by using different initializations or different architectures or different subsets of the training data"; [0075], "In one embodiment, the DeiT backbone is arranged as the intermediate blocks 510. In one embodiment, the CvT backbone is arranged as the intermediate blocks 510. In one embodiment, the T2T-ViT backbone is arranged as the intermediate blocks 510").

Regarding claim 5, Sultana discloses the apparatus of claim 1. Sultana further discloses wherein the processor is configured to determine the model size of the second feature extractor, based on at least one of a memory size, a calculation cost, and an inference accuracy ([0013], "capacity is smaller"; [0018], "optimized"; [0068], "a self-distilled vision transformer"; [0072], "minimizing the overall loss"; equations (2)-(3); FIGS. 5A-6D).

Regarding claim 6, Sultana discloses the apparatus of claim 1. Sultana further discloses wherein each of the first feature extractor and the second feature extractor is a model using a Transformer configuration, or a model using an MLP-Mixer (FIGS. 4-5A).

Regarding claim 7, Sultana discloses the apparatus of claim 1. Sultana further discloses extracting a feature of an intermediate layer of the first feature extractor as the first feature, and extracting a feature of an intermediate layer of the second feature extractor as the second feature (FIG. 5A).

Regarding claim 8, Sultana discloses an information processing method comprising (Abstract; FIGS. 1-19; [0119]): acquire training data that is used for training of a first feature extractor and a second feature extractor (FIGS. 4-5A; input image 402, 502; [0061]; [0067], "soft supervision … training"); determine a model size of the second feature extractor (FIGS. 4-5A; the second feature extractor is the set of subblocks in trained model 410 or 510 remaining after the subblocks f1 … fi (i.e., the first feature extractor) are selected by selector 520; once the selected subblocks are decided, the size of the second feature extractor is determined); extract a first feature by inputting the training data to the first feature extractor (FIG. 5A, output of fi); extract a second feature by inputting the first feature to the second feature extractor (FIG. 5A, output of block fn; the output of fi is the input of the second feature extractor); and train the first feature extractor in such a manner as to make the first feature closer to the second feature (FIG. 5A; [0072]; equations (2)-(3)).

Regarding claim 9, Sultana discloses a non-transitory computer readable medium including computer executable instructions, wherein the instructions, when executed by a processor (FIG. 19; [0113]), cause the processor to perform a method comprising (Abstract; FIGS. 1-19; [0119]): acquire training data that is used for training of a first feature extractor and a second feature extractor (FIGS. 4-5A; input image 402, 502; [0061]; [0067], "soft supervision … training"); determine a model size of the second feature extractor (FIGS. 4-5A; the second feature extractor is the set of subblocks in trained model 410 or 510 remaining after the subblocks f1 … fi (i.e., the first feature extractor) are selected by selector 520; once the selected subblocks are decided, the size of the second feature extractor is determined); extract a first feature by inputting the training data to the first feature extractor (FIG. 5A, output of fi); extract a second feature by inputting the first feature to the second feature extractor (FIG. 5A, output of block fn; the output of fi is the input of the second feature extractor); and train the first feature extractor in such a manner as to make the first feature closer to the second feature (FIG. 5A; [0072]; equations (2)-(3)).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Singh et al., "Robust Representation Learning with Self-Distillation for Domain Generalization," arXiv:2302.06874v1, 14 Feb 2023, hereinafter Singh, teaches a method for representation learning with self-distillation to improve the generalization of vision transformers. Zhang et al., "Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation," IEEE/CVF Int'l Conf. on Computer Vision (ICCV), 2019, hereinafter Zhang, teaches a training method with self-distillation that enhances the accuracy of convolutional neural networks by shrinking the size of the network.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAO LIU, whose telephone number is (571) 272-4539. The examiner can normally be reached Monday-Thursday and alternate Fridays, 8:30-4:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/XIAO LIU/
Primary Examiner, Art Unit 2664
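For readers who want to see the claimed training flow concretely, below is a minimal PyTorch-style sketch of the steps the rejection maps for claim 1: acquire training data, fix the second extractor's size, extract a first feature, feed it to the second extractor, and train the first extractor to pull the first feature toward the second. Every concrete choice in it (the Transformer modules, the 256-dimensional features, the MSE objective, freezing the second extractor) is an illustrative assumption and is not taken from the application's specification or from Sultana.

    # Illustrative sketch only: module choices, dimensions, and the MSE loss below
    # are assumptions, not the application's or Sultana's actual implementation.
    import torch
    import torch.nn as nn

    dim = 256  # assumed feature dimension; claim 2 recites equal dimensions for both extractors
    layer = lambda: nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
    first_extractor = nn.TransformerEncoder(layer(), num_layers=2)   # "first feature extractor" (claim 6: Transformer)
    second_extractor = nn.TransformerEncoder(layer(), num_layers=4)  # depth plays the role of the chosen "model size"

    # Per claim 1, only the first feature extractor is trained; the second serves as the target.
    for p in second_extractor.parameters():
        p.requires_grad_(False)
    optimizer = torch.optim.Adam(first_extractor.parameters(), lr=1e-4)

    training_data = torch.randn(8, 16, dim)  # placeholder batch of token sequences ("acquire training data")

    first_feature = first_extractor(training_data)    # extract a first feature from the training data
    second_feature = second_extractor(first_feature)  # extract a second feature from the first feature
    loss = nn.functional.mse_loss(first_feature, second_feature.detach())  # make the first feature closer to the second
    loss.backward()
    optimizer.step()

Updating only the first extractor, with the second feature detached, mirrors the claim's "train the first feature extractor ... to make the first feature closer to the second feature" language, with the second extractor acting as a fixed target at each step.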

Prosecution Timeline

Feb 27, 2024
Application Filed
Jan 08, 2026
Non-Final Rejection — §102
Mar 05, 2026
Examiner Interview Summary
Mar 05, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications involving similar technology granted by this same examiner

Patent 12603972
WIRELESS TRANSMITTER IDENTIFICATION IN VISUAL SCENES
2y 5m to grant; granted Apr 14, 2026
Patent 12592069
OBJECT RECOGNITION METHOD AND APPARATUS, AND DEVICE AND MEDIUM
2y 5m to grant; granted Mar 31, 2026
Patent 12579834
Information Extraction Method and Apparatus for Text With Layout
2y 5m to grant; granted Mar 17, 2026
Patent 12576873
SYSTEM AND METHOD OF CAPTIONS FOR TRIGGERS
2y 5m to grant; granted Mar 17, 2026
Patent 12573175
TARGET TRACKING METHOD, TARGET TRACKING SYSTEM AND ELECTRONIC DEVICE
2y 5m to grant; granted Mar 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview: 99% (+11.5% lift)
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 290 resolved cases by this examiner. Grant probability derived from career allow rate.
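How the interview figure follows from the other two numbers is not stated; a multiplicative reading of the lift reproduces it (an assumption about the tool's arithmetic, not something the page confirms): 89% × (1 + 0.115) ≈ 99.2%, which rounds to the 99% shown.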
