Prosecution Insights
Last updated: April 19, 2026
Application No. 17/791,369

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM

Non-Final OA: §102, §103, §112
Filed
Jul 07, 2022
Examiner
TRAN, TAN H
Art Unit
2141
Tech Center
2100 — Computer Architecture & Software
Assignee
NEC Corporation
OA Round
1 (Non-Final)
60%
Grant Probability
Moderate
1-2
OA Rounds
3y 6m
To Grant
92%
With Interview

Examiner Intelligence

Grants 60% of resolved cases
60%
Career Allow Rate
184 granted / 307 resolved
+4.9% vs TC avg
Strong +32% interview lift
+31.8%
Interview Lift
resolved cases with interview
Typical timeline
3y 6m
Avg Prosecution
60 currently pending
Career history
367
Total Applications
across all art units

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 55.3% (+15.3% vs TC avg)
§102: 19.2% (-20.8% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 307 resolved cases

Office Action

§102 §103 §112
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

2. This action is in response to the original filing on 07/07/2022. Claims 1-10 are pending and have been considered below.

Information Disclosure Statement

3. The information disclosure statements (IDSs) submitted on 07/07/2022, 12/18/2023, and 05/13/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 U.S.C. § 112

4. The following is a quotation of 35 U.S.C. § 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claim 7 is rejected under 35 U.S.C. § 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. Claim 7 recites “the processor performs a parallel process of a SIDM method.” The term “SIDM” is unclear and indefinite, as it is not defined in the claims and is not a recognized term of art. For the purpose of prior art analysis, the Examiner assumes that “SIDM” is a typographical error and that Applicant intended to recite “SIMD” (Single Instruction Multiple Data).

Claim Rejections - 35 USC § 102

5. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

6. Claims 1-4 and 9-10 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Jiang et al. (U.S. Patent Application Pub. No. US 20210150372 A1).

Claim 1: Jiang teaches an information processing apparatus using a decision tree including condition determination nodes and leaf nodes (i.e. each decision tree includes two tree nodes: an internal tree node and a leaf node. The internal tree node provides a splitting rule, to assign data to a left child node or a right child node of the internal tree node; para. [0036]), the information processing apparatus comprising: a memory storing instructions (i.e. the memory, as a non-transitory computer-readable storage medium, may be configured to store a non-transitory software program, a non-transitory computer-executable program and a module; para. [0204]); and one or more processors configured to execute the instructions to (i.e. the processor may be a general purpose processor, for example, a central processing unit (CPU); para. [0203]): acquire an input data matrix that includes a plurality of data rows each having a plurality of feature amounts (i.e. in a gradient histogram, a horizontal axis is the feature value of the sample, and a vertical axis is a sum of gradients of the samples. A feature value corresponding to a feature j of an ith sample is stored in an ith row and a jth column in a feature matrix; para. [0046]), that is, data rows each containing columns of feature values; generate grouping information by dividing at least a portion of row numbers of the input data matrix in association with a child node selected based on a condition determination at the condition determination node (i.e.
the splitting rule of the root node includes the feature j and the feature value sj,k, the feature column in the feature matrix is obtained, and samples whose feature values are less than sj,k in the feature column Tj are assigned to the left child node, and samples whose feature values are not less than sj,k in the feature column are assigned to the right child node, to obtain the splitting result of the root node; para. [0143]), and pass the grouping information to the child node (i.e. to obtain a splitting result of the root node, and transmits the splitting result to each processing subnode; para. [0142]); that is, these paragraphs divide samples (rows) into left/right child nodes based on the splitting rule and transmit the splitting result onward; perform a determination decision process with respect to a plurality of rows indicated in the grouping information received at the condition determination node (i.e. the processing subnode determines, according to the splitting result of the root node, samples assigned to the left child node and the right child node of the root node; para. [0144, 0145]); and output respective predicted values for the plurality of data rows indicated in the grouping information received at the leaf node (i.e. the internal tree node provides a splitting rule, to assign data to a left child node or a right child node of the internal tree node. The splitting rule may be a range of continuous features or a category feature. Through the layer-by-layer processing of the internal tree node, a predicted value of a decision tree on data may be obtained until the data is assigned to the leaf node ... performs batch processing on the samples on the leaf nodes; para. [0036, 0042, 0044]); that is, predicted values at leaf nodes and batch processing of samples on leaf nodes.

Claim 2: Jiang teaches the information processing apparatus according to claim 1.
Jiang further teaches wherein the processor generates divisional data matrixes acquired by dividing the input data matrix as the grouping information (i.e. the main processing node may alternatively control any processing subnode to divide the feature matrix into N feature subsets, and transmit the feature subsets to the corresponding processing subnodes … a set of data of the specified quantity of rows corresponding to the processing subnode that is obtained by the processing subnode from the feature matrix may be referred to as a sample subset. The sample subset is also stored according to the format of the data matrix; para. [0064, 0068, 0075]).

Claim 3: Jiang teaches the information processing apparatus according to claim 2. Jiang further teaches wherein the processor generates row number groups by dividing only the portion of row numbers of the input data matrix as the grouping information (i.e. if the quantity of rows of the feature matrix is 98, and the quantity of processing subnodes is 10, it may be specified that the specified quantity of rows of the processing subnode 201-1 is 1-10, the specified quantity of rows of the processing subnode 201-2 is 11-20, . . . , the specified quantity of rows of the processing subnode 201-9 is 81-90, and the specified quantity of rows of the processing subnode 201-10 is 91-98; para. [0070-0072]).

Claim 4: Jiang teaches the information processing apparatus according to claim 3. Jiang further teaches wherein the processor is further configured to store the input data matrix in the memory (i.e. the data storage server 203 may store sample data of any training task in a data matrix manner. The sample data of each training task includes two parts. One part is a feature matrix, used for storing the feature values of all the features of all the samples. The feature values corresponding to the feature j of the ith sample are stored in the ith row and the jth column in the feature matrix; para.
[0054]), wherein the processor performs a condition determination process by referring to the input data matrix stored (i.e. the splitting rule of the root node includes the feature j and the feature value sj,k, the feature column in the feature matrix is obtained, and samples whose feature values are less than sj,k in the feature column Tj are assigned to the left child node, and samples whose feature values are not less than sj,k in the feature column are assigned to the right child node, to obtain the splitting result of the root node; para. [0143]) in the memory based on row numbers included in each row number group (i.e. if the total quantity of rows of the feature matrix is 98, and the quantity of processing subnodes is N=10, the processing subnode 201-1 obtains data of the quantity of rows with the remainder 0 in the feature matrix, that is, data of rows 10, 20, 30, . . . and 90, to obtain the sample subset of the processing subnode 201-1; the processing subnode 201-2 obtains data of the quantity of rows with the remainder 1 in the feature matrix, that is, data of rows 1, 11, 21, 31, . . . and 91, to obtain the sample subset of the processing subnode 201-2; para. [0071, 0072]).

Claims 9-10 are similar in scope to Claim 1 and are rejected under a similar rationale.

Claim Rejections – 35 USC § 103

7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

8.
Claims 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Jiang in view of Sperber et al. (U.S. Patent Application Pub. No. US 20130326160 A1).

Claim 5: Jiang teaches the information processing apparatus according to claim 1. Jiang further teaches wherein the processor rearranges and outputs the predicted values in the same order as an order (i.e. original F(t) sequentially processes the samples, and calculates a predicted value of each sample, and the rewritten {tilde over (F)}(t) sequentially processes the leaf nodes; para. [0042]) of the row numbers in the input data matrix (i.e. the feature values corresponding to the feature j of the ith sample are stored in the ith row and the jth column in the feature matrix. That is, feature values of all features of one sample are stored in each row of the feature matrix, and feature values of the same feature of all samples are stored in each column; para. [0054]). Jiang does not explicitly teach rearranges and outputs the values in the same order as an order. However, Sperber teaches rearranges and outputs the values in the same order as an order (i.e. in the case of a gather operation data merge logic, operatively coupled with the memory access unit and with a SIMD vector register, writes corresponding data elements at in-register positions according to a respective position in their corresponding indices; para. [0037, claim 26]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Jiang to include the feature of Sperber. One would have been motivated to make this modification because it provides an efficient technique to restore the original index order.

Claim 6: Jiang and Sperber teach the information processing apparatus according to claim 5.
Jiang further teaches wherein the processor performs a process for the predicted values in the same order as the order of the row numbers in the input data matrix (i.e. the feature values corresponding to the feature j of the ith sample are stored in the ith row and the jth column in the feature matrix. That is, feature values of all features of one sample are stored in each row of the feature matrix, and feature values of the same feature of all samples are stored in each column; para. [0054]), by a parallel process. Jiang does not explicitly teach wherein the processor performs a rearrangement process for rearranging the values in the same order as the order, by a parallel process. However, Sperber further teaches wherein the processor performs a rearrangement process for rearranging the values in the same order as the order in the input data, by a parallel process (i.e. in the case of a gather operation data merge logic, operatively coupled with the memory access unit and with a SIMD vector register, writes corresponding data elements at in-register positions according to a respective position in their corresponding indices; para. [0037, claim 26]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Jiang to include the feature of Sperber. One would have been motivated to make this modification because it provides an efficient technique to restore the original index order.

Claim 7: Jiang teaches the information processing apparatus according to claim 1. Jiang further teaches wherein the processor performs a parallel process of a method (i.e. parallel processing is performed by using N processing subnodes; para. [0050]). Jiang does not explicitly teach a SIMD method. However, Sperber further teaches a parallel process of a SIMD method (i.e.
in the case of a gather operation data merge logic, operatively coupled with the memory access unit and with a SIMD vector register, writes corresponding data elements at in-register positions according to a respective position in their corresponding indices; para. [0037, claim 26]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Jiang to include the feature of Sperber. One would have been motivated to make this modification because it improves throughput/latency by using SIMD vector operations when processing many rows/predictions.

9. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Jiang in view of Oberg et al. (U.S. Patent Application Pub. No. US 20130179377 A1).

Claim 8: Jiang teaches the information processing apparatus according to claim 1. Jiang further teaches wherein the condition determination node selects one child node from among a plurality of child nodes (i.e. each decision tree includes two tree nodes: an internal tree node and a leaf node. The internal tree node provides a splitting rule, to assign data to a left child node or a right child node of the internal tree node; para. [0036, 0142]) based on a result of the condition determination by a comparison and a computation of a predetermined instruction with respect to a value of a predetermined feature amount included in the input data matrix and a predetermined threshold value (i.e. the splitting rule of the root node includes the feature j and the feature value sj,k, the feature column in the feature matrix is obtained, and samples whose feature values are less than sj,k in the feature column Tj are assigned to the left child node, and samples whose feature values are not less than sj,k in the feature column are assigned to the right child node, to obtain the splitting result of the root node; para.
[0137, 0143]); and the leaf node outputs a predicted value corresponding to the leaf node (i.e. the internal tree node provides a splitting rule, to assign data to a left child node or a right child node of the internal tree node. The splitting rule may be a range of continuous features or a category feature. Through the layer-by-layer processing of the internal tree node, a predicted value of a decision tree on data may be obtained until the data is assigned to the leaf node ... performs batch processing on the samples on the leaf nodes; para. [0036, 0042, 0044]); that is, predicted values at leaf nodes and batch processing of samples on leaf nodes. Jiang does not explicitly teach that the leaf node does not have a child node. However, Oberg teaches the leaf node does not have a child node (i.e. terminating nodes without child nodes are referred to as leaf nodes; para. [0001]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Jiang to include the feature of Oberg. One would have been motivated to make this modification because it clarifies that no further branching occurs at the leaf and enables output of the leaf’s predicted value.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Zhang et al. (Pub. No. US 20200193332 A1) teaches that the steps may be performed in parallel for some or all of the decision nodes of the decision trees of the random forest. It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir.
1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAN TRAN, whose telephone number is (303) 297-4266. The examiner can normally be reached Monday - Thursday, 8:00 am - 5:00 pm MT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matt Ell, can be reached at 571-270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TAN H TRAN/
Primary Examiner, Art Unit 2141
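As an illustrative aside (not part of the office action record): the grouping-information technique the examiner maps onto Jiang in the claim 1 analysis, where a condition determination node splits the row numbers of the input matrix by a feature/threshold comparison, passes each index group to a child node, and a leaf node emits predicted values for the rows it receives, can be sketched roughly as follows. The tree layout, feature indices, and threshold values here are hypothetical and chosen only for illustration.

```python
# Hypothetical sketch of the claimed row-grouping flow: condition nodes
# split row numbers by a feature/threshold test and pass each index
# group (the "grouping information") to a child; leaves emit predicted
# values for the rows they receive. Values below are illustrative only.

def predict_rows(node, matrix, row_numbers, out):
    """Route row numbers through the tree; write one predicted value
    per row at the leaf each row reaches."""
    if "value" in node:                   # leaf node: no children
        for r in row_numbers:
            out[r] = node["value"]
        return
    j, thr = node["feature"], node["threshold"]
    left = [r for r in row_numbers if matrix[r][j] < thr]    # grouping
    right = [r for r in row_numbers if matrix[r][j] >= thr]  # information
    predict_rows(node["left"], matrix, left, out)
    predict_rows(node["right"], matrix, right, out)

tree = {"feature": 0, "threshold": 0.5,
        "left": {"value": -1.0},
        "right": {"feature": 1, "threshold": 2.0,
                  "left": {"value": 0.0}, "right": {"value": 1.0}}}
X = [[0.1, 9.0], [0.9, 1.5], [0.7, 3.0]]
preds = [None] * len(X)
predict_rows(tree, X, list(range(len(X))), preds)
# preds holds one predicted value per input row, in row order:
# preds == [-1.0, 0.0, 1.0]
```

Only row-number lists move between nodes here; each node reads the shared input matrix by index, which mirrors the claim 3/4 mapping (row number groups plus a matrix stored once in memory).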
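A second illustrative aside (again hypothetical, with made-up values): the reordering at issue in claims 5-6, where predictions produced leaf by leaf come out in a different order than the input rows and must be restored to input-row order, amounts to a scatter-by-index, analogous to the index-driven in-register placement the examiner cites from Sperber.

```python
# Hypothetical sketch of the claim 5-6 rearrangement: predictions are
# emitted leaf-by-leaf (out of row order) alongside their row numbers,
# then scattered back into the original input-row order by index.
row_numbers = [2, 0, 3, 1]           # order in which leaves emitted rows
leaf_preds = [1.0, -1.0, 1.0, 0.0]   # predicted value per emitted row

restored = [None] * len(leaf_preds)
for pos, r in enumerate(row_numbers):
    restored[r] = leaf_preds[pos]    # scatter back to input-row order
# restored == [-1.0, 0.0, 1.0, 1.0]
```

Each iteration is independent of the others, which is why this loop is amenable to the parallel/SIMD treatment discussed for claim 7.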

Prosecution Timeline

Jul 07, 2022
Application Filed
Jan 22, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594668
BRAIN-LIKE DECISION-MAKING AND MOTION CONTROL SYSTEM
2y 5m to grant Granted Apr 07, 2026
Patent 12579420
Analog Hardware Realization of Trained Neural Networks
2y 5m to grant Granted Mar 17, 2026
Patent 12579421
Analog Hardware Realization of Trained Neural Networks
2y 5m to grant Granted Mar 17, 2026
Patent 12572850
METHOD FOR IMPLEMENTING MODEL UPDATE AND DEVICE THEREOF
2y 5m to grant Granted Mar 10, 2026
Patent 12572326
DIGITAL ASSISTANT FOR MOVING AND COPYING GRAPHICAL ELEMENTS
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
60%
Grant Probability
92%
With Interview (+31.8%)
3y 6m
Median Time to Grant
Low
PTA Risk
Based on 307 resolved cases by this examiner. Grant probability derived from career allow rate.
