Prosecution Insights
Last updated: April 19, 2026
Application No. 18/429,504

LEARNING DEVICE, LEARNING METHOD, AND STORAGE MEDIUM

Non-Final OA — §101, §103
Filed
Feb 01, 2024
Examiner
AZIMA, SHAGHAYEGH
Art Unit
2671
Tech Center
2600 — Communications
Assignee
Honda Motor Co., Ltd.
OA Round
1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 82% — above average (286 granted / 350 resolved; +19.7% vs TC avg)
Interview Lift: +11.4% (moderate) — allow rate on resolved cases with vs. without an interview
Typical Timeline: 2y 7m average prosecution
Career History: 386 total applications across all art units (36 currently pending)

Statute-Specific Performance

§101: 15.8% (−24.2% vs TC avg)
§103: 42.5% (+2.5% vs TC avg)
§102: 13.9% (−26.1% vs TC avg)
§112: 14.5% (−25.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 350 resolved cases

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is in response to the applicant's communication filed on 02/01/2024. By virtue of this communication, claims 1-9 filed on 02/01/2024 are currently pending in the instant application.

Information Disclosure Statement

The Information Disclosure Statement (IDS) form PTO-1449, filed on 02/01/2024, is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosed therein was considered by the examiner.

Drawings

The drawings received on 02/01/2024 have been reviewed by the Examiner and are acceptable.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent Claims 1, 8, and 9 recite “receiving an input of an image including a plurality of pixels and outputs a degree of accuracy with which each pixel corresponds to a class indicating a type of an object,” “adjusting an output value of the degree of accuracy using a predetermined parameter with a tendency to decrease the output value of the degree of accuracy corresponding to a correct-answer class and to increase the output value of the degree of accuracy corresponding to a class other than the correct-answer class,” and “training the machine learning model on the basis of the adjusted output value”.

Step 1: The instant claims are directed to a device, a method, and a non-transitory computer-readable storage medium, all among the statutory categories of invention.

Step 2A — Prong 1: In method Claim 8, for example, the limitations “receiving an input of an image...,” “adjusting an output value of the degree of accuracy...,” and “training the machine learning model on the basis of the adjusted output value,” as recited, describe a method that, under its broadest reasonable interpretation, covers performance of the limitations as a mathematical relationship: adjusting an output value by increasing or decreasing accuracy values and training a machine learning model based on the adjusted output value. That is, other than reciting “by a computer,” nothing in the claim steps precludes the limitations from practically being performed as a mathematical calculation and relation. The recited computer is simply a generic device.
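To make the adjustment the Office Action characterizes as a mathematical relationship concrete, here is a minimal NumPy sketch — an editorial illustration, not part of the Office Action or the application; the function name and `eps` are assumptions standing in for the claimed “predetermined parameter”:

```python
import numpy as np

def adjust_accuracy_scores(scores: np.ndarray, correct: np.ndarray,
                           eps: float = 0.1) -> np.ndarray:
    """Decrease the correct-answer class score by eps and increase every
    other class score by eps, for each pixel (the claimed adjustment)."""
    # scores:  (H, W, C) per-pixel degree-of-accuracy values
    # correct: (H, W) correct-answer class index per pixel
    adjusted = scores + eps                                # raise all classes...
    h, w = np.indices(correct.shape)
    adjusted[h, w, correct] = scores[h, w, correct] - eps  # ...lower the correct one
    return adjusted
```

Training would then proceed on `adjusted` rather than `scores`, mirroring the third recited limitation.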
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as a mathematical calculation but for the recitation of generic components, then it falls within the "Mathematical concepts" grouping of abstract ideas, which includes concepts performed as a mathematical relationship, such as adjusting values, increasing or decreasing values, and adjusting degree-of-accuracy values. Accordingly, the claim recites an abstract idea. In addition, the additional components recited in independent Claims 1 and 9, i.e., a processor and a non-transitory computer-readable medium, are simply generic computing components; accordingly, these independent claims include the above-described abstract idea.

Step 2A — Prong 2: The 2019 PEG defines the phrase “integration into a practical application” to require an additional element, or a combination of additional elements, in the claim to apply, rely on, or use the judicial exception. In the instant case, the additional elements in the claims do not apply, rely on, or use the judicial exception. This judicial exception is not integrated into a practical application because the claims only recite additional elements, such as a computer, a processor, or a non-transitory computer-readable medium, used to perform the recited elements/functions/steps. These computing components are all recited at a high level of generality, and there are no other recited additional limitations in the claims. Accordingly, these additional steps/elements do not integrate the abstract idea into a practical application because they are field-of-use limitations that do not impose any meaningful limits on practicing the abstract idea. Therefore, independent Claims 1, 8, and 9 recite an abstract idea.

Step 2B: Because the claims fail under Step 2A, the claims are further evaluated under Step 2B. The claims herein do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a computer, a processor, or a non-transitory computer-readable medium to execute programming instructions to perform the steps amounts to no more than mere instructions to apply the exception using a generic apparatus component. Mere instructions to apply an exception using a generic apparatus component cannot provide an inventive concept. The claims are not patent eligible.

Further, with regard to dependent Claims 2-7 viewed individually, their additional elements, under their broadest reasonable interpretation, cover performance of the limitations as mathematical concepts and do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. Accordingly, Claims 1-9 are rejected under 35 U.S.C. 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 8, and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Sfar et al. (US 2019/0243928) in view of Ioffe et al. (US 2017/0132512).

As per claim 1, Sfar discloses a learning device comprising “a storage medium configured to store computer-readable instructions; and a processor connected to the storage medium” (Sfar, Figure 2, ¶¶[0045]-[0047]), “wherein the processor trains a machine learning model that receives an input of an image including a plurality of pixels and outputs a degree of accuracy with which each pixel corresponds to a class indicating a type of an object by executing the computer-readable instructions” (Sfar, ¶[0018] discloses providing a dataset comprising 2D floor plans, each associated with a respective semantic segmentation; the method also comprises learning the function based on the dataset. ¶[0019] discloses that the set of classes comprises at least two classes among a wall class, a door class, and a window class. ¶[0020] discloses that the neural network may comprise weights, and the learning may comprise, with an optimization algorithm, updating the weights according to the dataset and to a loss function. ¶[0021] discloses that the pixel-wise classifier may output, for each input 2D floor plan, respective data for inference of a semantic segmentation mask of the input 2D floor plan; the semantic segmentation mask is a pixel-wise classification of the 2D floor plan with respect to the set of classes. ¶[0023] discloses that the respective data outputted by the pixel-wise classifier may comprise a distribution of probabilities over the set of classes. Further see ¶[0029], which discloses y_pred^i, the probability outputted by the pixel-wise classifier for class i.)

However, Sfar does not explicitly disclose the following, which would have been obvious in view of Ioffe from a similar field of endeavor: “wherein the processor adjusts an output value of the degree of accuracy using a predetermined parameter with a tendency to decrease the output value of the degree of accuracy corresponding to a correct-answer class and to increase the output value of the degree of accuracy corresponding to a class other than the correct-answer class” (Ioffe, ¶[0048] discloses reducing the highest score in a training data label distribution by a predetermined amount.
Similarly, changing the distribution of scores in a training data item's training label distribution may include, for example, increasing one or more of the lowest scores in the training label distribution by a predetermined amount. ¶¶[0051]-[0052] disclose that, for a training example with correct label y, the neural network training system 210 can replace the label distribution q(k|x) = δ_{k,y}, where δ_{k,y} is the Dirac delta, which equals 1 for k = y and 0 otherwise, with q′(k|x) = (1 − ε)δ_{k,y} + εu(k).), “and wherein the processor trains the machine learning model on the basis of the adjusted output value” (Ioffe, ¶[0058] discloses modifying the initial target label distribution; the resulting training data set may be referred to as a regularizing training data set. ¶[0059] discloses that the system trains a neural network using the regularizing training data set.)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Ioffe's technique of regularizing machine learning models with Sfar's technique, to provide the known and expected uses and benefits of Ioffe's technique over Sfar's pixel-wise neural-network classifiers. The proposed combination would have constituted a mere arrangement of old elements, with each performing its known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Ioffe into Sfar in order to improve the performance of a trained neural network. (Refer to Ioffe, paragraph [0005].)

Claims 8 and 9 have been analyzed and are rejected for the reasons indicated for claim 1 above.

As per claim 2, the learning device according to claim 1, “wherein the processor adjusts the output value by subtracting the predetermined parameter which is a positive constant from the output value of the degree of accuracy corresponding to the correct-answer class and adding the predetermined parameter to the output value of the degree of accuracy corresponding to a class other than the correct-answer class.” (Ioffe, ¶[0048] discloses that changing the distribution of scores in a training data item's training label distribution may include, for example, reducing the highest score in a training label distribution by a predetermined amount and increasing one or more of the lowest scores in the training label distribution by a predetermined amount. Further see ¶[0051].)
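The Ioffe passage cited above reduces to one line of arithmetic. Here is a hedged sketch of q′(k|x) = (1 − ε)δ_{k,y} + εu(k) with a uniform u(k), assuming NumPy; the function name is an illustrative assumption, not from either reference:

```python
import numpy as np

def smooth_label_distribution(y: int, num_classes: int, eps: float = 0.1) -> np.ndarray:
    """Ioffe-style label smoothing: q'(k|x) = (1 - eps) * delta_{k,y} + eps * u(k)."""
    delta = np.zeros(num_classes)
    delta[y] = 1.0                                     # Dirac delta: 1 at correct label y
    uniform = np.full(num_classes, 1.0 / num_classes)  # u(k) = 1 / K
    return (1.0 - eps) * delta + eps * uniform

# smooth_label_distribution(2, 4) -> [0.025, 0.025, 0.925, 0.025]:
# the correct-label score drops below 1 and every other score rises above 0,
# which is the decrease/increase tendency the claims recite.
```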
As per claim 3, the learning device according to claim 1, “wherein the processor sets a positive constant which differs depending on a type of the correct-answer class as the predetermined parameter when each of a plurality of classes is the correct-answer class” (Ioffe, ¶[0017] discloses that the smoothing label distribution may include a respective smoothing score for each label in the predetermined set of labels, and the smoothing scores may be non-uniform. ¶[0062] discloses that the smoothing label distribution may be a non-uniform distribution that includes one or more smoothing scores that are capable of being different from one or more other smoothing scores in the same smoothing label distribution.), “wherein the processor adjusts the output value by subtracting the predetermined parameter from the output value of the degree of accuracy corresponding to the correct-answer class and adding the predetermined parameter to the output value of the degree of accuracy corresponding to a class other than the correct-answer class.” (Ioffe, ¶[0048] discloses that changing the distribution of scores in a training data item's training label distribution may include, for example, reducing the highest score in a training label distribution by a predetermined amount and increasing one or more of the lowest scores by a predetermined amount. Further, ¶[0045] and ¶[0051] disclose the correct label having the highest score of 1 while the other labels have the lower score of 0.)

As per claim 4, the learning device according to claim 1, “wherein the processor sets a positive constant which differs depending on a type of the correct-answer class and a class other than the correct-answer class as the predetermined parameter when each of a plurality of classes is the correct-answer class” (Ioffe, ¶[0017] discloses that the smoothing label distribution may include a respective smoothing score for each label in the predetermined set of labels, and the smoothing scores may be non-uniform. ¶[0045] and ¶[0051] disclose the correct label having the highest score of 1 while the other labels have the lower score of 0. ¶[0062] discloses that the smoothing label distribution may be a non-uniform distribution that includes one or more smoothing scores that are capable of being different from one or more other smoothing scores in the same smoothing label distribution.), “and wherein the processor adjusts the output value by subtracting the predetermined parameter from the output value of the degree of accuracy corresponding to the correct-answer class and adding the predetermined parameter to the output value of the degree of accuracy corresponding to a class other than the correct-answer class.” (Ioffe, ¶[0048] discloses reducing the highest score in a training label distribution by a predetermined amount and increasing one or more of the lowest scores by a predetermined amount. Further see ¶[0051].)

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Sfar et al. (US 2019/0243928), in view of Ioffe et al. (US 2017/0132512), and further in view of Saha et al., "Similarity Based Label Smoothing for Dialogue Generation," Proceedings of the 19th International Conference on Natural Language Processing (ICON), December 2022.

As per claim 5, the learning device according to claim 1, “wherein the processor sets the same predetermined parameter for two or more classes between which a semantic similarity is same.” (Ioffe, ¶[0016] discloses that the smoothing label distribution may include a respective smoothing score for each label in the predetermined set of labels, and each smoothing score may be the same predetermined value. Further, ¶[0068] discloses that the smoothing distribution may be a uniform distribution u that assigns the same score to each label in the set of labels associated with the smoothing distribution.)

However, Sfar as modified by Ioffe is silent on the following, which would have been obvious in view of Saha from a similar field of endeavor: “wherein the processor sets the same predetermined parameter for two or more classes between which a semantic similarity is determined to be equal to or greater than a threshold value out of a plurality of classes.” (Saha, page 256, Table 1 and related paragraphs disclose s, the amount of smoothing (setting the same parameter) for two or more classes; cosine similarity (semantic similarity); and t, the cosine-similarity threshold value for different classes. See also page 259, Table 3.)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Saha's technique of label smoothing with Sfar as modified by Ioffe, to provide the known and expected uses and benefits of Saha's technique over the pixel-wise neural-network classifiers of Sfar as modified by Ioffe. The proposed combination would have constituted a mere arrangement of old elements, with each performing its known function, the combination yielding no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Saha into Sfar as modified by Ioffe in order to perform regularization techniques that enhance the generalization capability of deep neural networks. (Refer to Saha, page 253, Col. 2, Introduction.)
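For claim 5's threshold limitation, the Saha citation pairs a smoothing amount s with a cosine-similarity threshold t. A minimal sketch of that pairing follows — a loose reading under stated assumptions (per-class similarity scores to the correct class; function and parameter names are illustrative, not the paper's exact formulation):

```python
import numpy as np

def smoothing_per_class(sims_to_correct: np.ndarray, s: float, t: float) -> np.ndarray:
    """Assign the same smoothing parameter s to every class whose cosine
    similarity to the correct-answer class is >= threshold t, else zero."""
    return np.where(sims_to_correct >= t, s, 0.0)

# Classes semantically close to the correct class (similarity >= 0.7)
# all receive the same parameter 0.1:
# smoothing_per_class(np.array([1.0, 0.82, 0.31]), s=0.1, t=0.7) -> [0.1, 0.1, 0.0]
```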
Allowable Subject Matter

Claims 6-7 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, subject to resolution of the rejections and objections set forth in this action. The following is a statement of reasons for the indication of allowable subject matter: the prior art of record, alone or in combination, fails to teach or suggest the limitations set forth by each of claims 6-7.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAGHAYEGH AZIMA, whose telephone number is (571) 272-1459. The examiner can normally be reached Monday-Friday, 9:30-6:30.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vincent Rudolph, can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAGHAYEGH AZIMA/
Examiner, Art Unit 2671

Prosecution Timeline

Feb 01, 2024
Application Filed
Jan 09, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this examiner involving similar technology

Patent 12586350
DETERMINING AUDIO AND VIDEO REPRESENTATIONS USING SELF-SUPERVISED LEARNING
2y 5m to grant • Granted Mar 24, 2026
Patent 12573209
ROBUST INTERSECTION RIGHT-OF-WAY DETECTION USING ADDITIONAL FRAMES OF REFERENCE
2y 5m to grant • Granted Mar 10, 2026
Patent 12561989
VEHICLE LOCALIZATION BASED ON LANE TEMPLATES
2y 5m to grant • Granted Feb 24, 2026
Patent 12530867
Action Recognition System
2y 5m to grant • Granted Jan 20, 2026
Patent 12525049
PERSON RE-IDENTIFICATION METHOD, COMPUTER-READABLE STORAGE MEDIUM, AND TERMINAL DEVICE
2y 5m to grant • Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 93% (+11.4%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 350 resolved cases by this examiner. Grant probability derived from career allow rate.
