Prosecution Insights
Last updated: April 19, 2026
Application No. 18/551,192

MACHINE LEARNING DEVICE, ACCELERATION AND DECELERATION ADJUSTMENT DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM

Status: Non-Final OA (§101, §102)
Filed: Sep 19, 2023
Examiner: TRAN, VINCENT HUY
Art Unit: 2115
Tech Center: 2100 — Computer Architecture & Software
Assignee: Fanuc Corporation
OA Round: 1 (Non-Final)

Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 87% (above average; 938 granted / 1083 resolved; +31.6% vs TC avg)
Interview Lift: +9.3% (moderate; based on resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Currently Pending: 39
Total Applications: 1122 (career history, across all art units)
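The headline rate above is plain arithmetic on the quoted counts; a minimal check in Python, using only the numbers stated in this report:

```python
# Reproduce the career allow rate from the counts quoted above:
# 938 applications granted out of 1083 resolved.
granted = 938
resolved = 1083

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # prints "Career allow rate: 86.6%"
```

86.6% rounds to the 87% shown on the card. Note that the +9.3% interview lift is a separate comparison (resolved cases with vs. without an interview), so it cannot be derived from these two counts alone.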

Statute-Specific Performance

§101: 8.0% (-32.0% vs TC avg)
§103: 42.5% (+2.5% vs TC avg)
§102: 25.6% (-14.4% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)

The Tech Center average is an estimate; figures based on career data from 1083 resolved cases.
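Each per-statute delta above references the same Tech Center baseline; reversing the arithmetic (rate minus delta) recovers it. A quick sketch in Python, using only the figures quoted:

```python
# Each statute row above gives a rate and a delta "vs TC avg".
# Subtracting the delta from the rate recovers the implied baseline,
# which turns out to be the same ~40% Tech Center average estimate
# for every statute.
rows = {
    "§101": (8.0, -32.0),
    "§103": (42.5, +2.5),
    "§102": (25.6, -14.4),
    "§112": (12.7, -27.3),
}

for statute, (rate, delta) in rows.items():
    tc_avg = rate - delta
    print(f"{statute}: {rate}% ({delta:+}% vs implied TC avg {tc_avg:.1f}%)")
```

All four rows imply the same 40.0% baseline, consistent with a single Tech Center average estimate line on the original chart.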

Office Action

Rejections: §101, §102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-9 are pending in the application. Claims 4-7 and 9 are Non-Elected.

Examiner’s Note: The examiner has cited particular passages, including column and line numbers, paragraphs as designated numerically and/or figures as designated numerically, in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages, paragraphs and figures of any and all cited prior art references may apply as well. It is respectfully requested that the applicant, in preparing an eventual response, fully consider the context of the passages, paragraphs and figures as taught by the prior art and/or cited by the examiner, and consider the cited prior art references in their entirety as potentially teaching all or part of the claimed invention. MPEP 2141.02 VI: “PRIOR ART MUST BE CONSIDERED IN ITS ENTIRETY, INCLUDING DISCLOSURES THAT TEACH AWAY FROM THE CLAIMS.”

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 09/19/2023 was filed before the mailing date of the first office action. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Election/Restrictions

Applicant’s election without traverse of Group I, claims 1-3 and 8, in the reply filed on 12/24/2025 is acknowledged.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination.
– An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a state observer”, “a determination condition acquirer”, “a reward calculation unit”, “a value function update unit”, and “a decision maker” in claims 1 and 8.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 8 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim(s) does/do not fall within at least one of the four categories of patent eligible subject matter. The claim recites, inter alia, “A computer-readable storage medium storing a program to operate…” After close inspection, the Examiner respectfully notes that the disclosure, as a whole, does not specifically identify what may be included as a computer readable storage medium and what is not to be included as a computer readable storage medium. The Examiner is obliged to give claims their broadest reasonable interpretation consistent with the specification during examination. The broadest reasonable interpretation of a claim drawn to a computer readable medium (also called machine readable medium and other such variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. § 101 as covering non-statutory subject matter. Therefore, given the silence of the disclosure and the broadest reasonable interpretation, the computer readable storage medium of the claim may include transitory propagating signals. As a result, the claim pertains to non-statutory subject matter.
However, the Examiner respectfully submits that a claim drawn to such a computer readable medium, covering both transitory and non-transitory embodiments, may be amended to narrow the claim to cover only statutory embodiments, and thereby avoid a rejection under 35 U.S.C. § 101, by adding the limitation “non-transitory…when executed, causing a computer to” to the claim. Such an amendment would typically not raise the issue of new matter, even when the specification is silent, because the broadest reasonable interpretation relies on the ordinary and customary meaning that includes signals per se. For additional information, please see the Patents’ Official Gazette notice published February 23, 2010 (1351 OG 212).

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-3 and 8 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Aizawa et al., US Pub. No. 2018/0307211 (“Aizawa”).

Regarding claim 1, Aizawa discloses a machine learning device [Machine learning Apparatus 20, see fig.
5] for estimating parameters related to control of an amount of movement for each control cycle, including an N-order time differential element (N being a natural number) of each shaft included in a machine tool for performing machining of a workpiece, the machine learning device comprising:

[0006] An acceleration and deceleration controller in one embodiment of the present invention is an acceleration and deceleration controller for controlling a machine tool configured to machine a workpiece. The acceleration and deceleration controller includes a machine learning apparatus configured to learn an Nth-order time-derivative component (N is a natural number) of a speed of each axis of the machine tool.

[0030] In the present invention, by performing machine learning concerning the determination of travel distances for the sake of adjustment of the acceleration and deceleration of each axis of a machine tool in the machining of a workpiece based on a machining program, the speed distribution of each axis of the machine tool in the machining of the workpiece is optimally determined. Here, the speed distribution of each axis means an Nth-order time-derivative component of the speed (N is any natural number), for example, the acceleration or jerk of each axis. The speed distribution (Nth-order time-derivative component of the speed) of each axis is determined so that faster tool travel, improved machining accuracy, and improved machined-surface quality may be achieved. Thus, a workpiece can be machined in a shorter period of time without sacrificing machining accuracy and machined-surface quality.

a state observer [State observation section 22, fig. 5] configured to observe information related to at least one of machining accuracy or machined surface quality in the machining and a machining time consumed for the machining, as data indicating an operating state of the machine tool;

[0033] As represented by functional blocks in FIG.
1, the machine learning apparatus 20 of the acceleration and deceleration controller 10 includes a state observation section 22 for observing a state variable S representing a speed distribution (Nth-order time-derivative component of the speed) of each axis, a determination data acquisition section 24 for acquiring determination data D for a given state variable S, the determination data D including determination data D1 representing the surface quality of a machined workpiece and determination data D2 representing machining time, and a learning section 26 for learning an optimal speed distribution (Nth-order time-derivative component of the speed) of each axis using the state variable S and the determination data D.

[0041] In one modified example of the machine learning apparatus 20 of the acceleration and deceleration controller 10, the state observation section 22 may further observe a machining type S2 representing the shape of a machining path and the like as the state variable S. The machining type S2 may include, for example, the shape (identification data for identifying a straight line portion, a corner portion, a portion machined into a rounded shape, a portion machined into a concentric shape, or the like) of the machining path. The machining type S2 may further include data (such as angle and radius) representing the size of a corner or the like, except a straight line portion. With respect to angle and radius, a plurality of grades may be defined in advance, and the machining type S2 may include identification data indicating which grade is assigned to each of angles and radii included in the machining path. Places where the speed distribution (Nth-order time-derivative component of the speed) of each axis needs to be changed are generally corner portions and the like, except straight line portions. The optimal speed distribution (Nth-order time-derivative component of the speed) of each axis may vary in accordance with the shape of a corner portion or the like. If the machining type S2 is observed, the learning section 26 can learn the surface quality and the machining time of a machined workpiece in relation to both the speed distribution (Nth-order time-derivative component of the speed) S1 of each axis and the machining type S2. Specifically, a model representing the correlation between a combination of surface quality and machining time and the speed distribution (Nth-order time-derivative component of the speed) of each axis can be constructed independently for each machining type S2. Accordingly, the optimal speed distribution (Nth-order time-derivative component of the speed) of each axis in accordance with the shape of a corner portion or the like can be learned.

a determination condition acquirer [determination data acquisition section 24, fig. 5] configured to acquire a target value related to data observed by the state observer as determination data;

[0033] As represented by functional blocks in FIG. 1, the machine learning apparatus 20 of the acceleration and deceleration controller 10 includes a state observation section 22 for observing a state variable S representing a speed distribution (Nth-order time-derivative component of the speed) of each axis, a determination data acquisition section 24 for acquiring determination data D for a given state variable S, the determination data D including determination data D1 representing the surface quality of a machined workpiece and determination data D2 representing machining time, and a learning section 26 for learning an optimal speed distribution (Nth-order time-derivative component of the speed) of each axis using the state variable S and the determination data D.

[0035] The determination data acquisition section 24 can be configured as, for example, one function of a CPU of a computer.
Alternatively, the determination data acquisition section 24 can be configured as, for example, software that causes a CPU of a computer to work. The determination data D1 acquired by the determination data acquisition section 24 are numerical data representing results of inspection of a machined surface, such as data obtained from an inspection apparatus (not shown) or a sensor installed in an inspection apparatus, or data obtained by using or converting that data. Examples of such an inspection apparatus include a machined-surface analysis apparatus (typically, a laser microscope), a machined-surface image capture apparatus, a light reflectance measurement apparatus, and the like. Examples of data that represent surface quality capable of being measured by an inspection apparatus include surface roughness Sa, surface maximum height Sv, surface texture aspect ratio Str, kurtosis Sku, skewness Ssk, developed interfacial area ratio Sdr, the light reflectance of a machined workpiece, a feature of an image of a machined surface, and the like. Alternatively, the determination data D1 may be data obtained by inputting a file that contains results of evaluation of surface quality by a skilled worker or directly inputting results of evaluation of surface quality through an interface such as a keyboard, or data obtained by using or converting that data. Examples of the determination data D2 acquired by the determination data acquisition section 24 include data on machining time actually measured by the acceleration and deceleration controller 10 and data obtained by using or converting that data.

a reward calculation unit [Reward calculation section 28, fig. 2] configured to calculate a reward for machining based on the parameters based on data observed by the state observer and the determination data acquired by the determination condition acquirer;

[0044] In the machine learning apparatus 20 of the acceleration and deceleration controller 10 shown in FIG. 2, the learning section 26 includes a reward calculation section 28 for finding a reward R relating to a result (determination data D representing the surface quality and the machining time of a machined workpiece) of machining performed based on a certain state variable S and a value function update section 30 for updating a function Q representing the value of a speed distribution (Nth-order time-derivative component of the speed) of each axis using the reward R. The learning section 26 learns such speed distribution (Nth-order time-derivative component of the speed) of each axis that improves the surface quality of a machined workpiece and that shortens the machining time, by the value function update section 30 repeating the update of the function Q. [READ further paragraphs 0047-0053]

a value function update unit [Value Function update section 30, fig. 2] configured to update a value function for calculating a value of a machining state based on the parameters based on the reward; and

[0044] In the machine learning apparatus 20 of the acceleration and deceleration controller 10 shown in FIG. 2, the learning section 26 includes a reward calculation section 28 for finding a reward R relating to a result (determination data D representing the surface quality and the machining time of a machined workpiece) of machining performed based on a certain state variable S and a value function update section 30 for updating a function Q representing the value of a speed distribution (Nth-order time-derivative component of the speed) of each axis using the reward R. The learning section 26 learns such speed distribution (Nth-order time-derivative component of the speed) of each axis that improves the surface quality of a machined workpiece and that shortens the machining time, by the value function update section 30 repeating the update of the function Q. [READ further paragraphs 0045-0046, 0050-0053]

a decision maker [Decision making section 52, fig.
5] configured to estimate a combination of set values of the parameters more suitable for the machining based on the updated value function, and output the estimated combination of the set values of the parameters.

[0038] By repeating the above-described learning cycle, the learning section 26 can automatically recognize features implying the correlation between a speed distribution (Nth-order time-derivative component of the speed) of each axis and a combination of the surface quality and the machining time of a machined workpiece. The correlation between a speed distribution (Nth-order time-derivative component of the speed) of each axis and a combination of the surface quality and the machining time of a machined workpiece is substantially unknown. The learning section 26 gradually recognizes features and interprets the correlation as learning progresses. When the correlation between a speed distribution (Nth-order time-derivative component of the speed) of each axis and a combination of the surface quality and the machining time of a machined workpiece is interpreted to some reliable level, learning results repeatedly outputted by the learning section 26 can be used for making a selection of an action (that is, decision-making) as to what surface quality of a machined workpiece and what machining time should be derived for the current state (that is, the speed distribution (Nth-order time-derivative component of the speed) of each axis). Specifically, as the learning algorithm progresses, the learning section 26 can make the correlation between the speed distribution (Nth-order time-derivative component of the speed) of each axis and an action derived from the state which includes the surface quality and the machining time of a machined workpiece gradually closer to the optimal solution.

[0065] A decision-making section 52 can be configured as, for example, one function of a CPU of a computer. Alternatively, the decision-making section 52 can be configured as, for example, software that causes a CPU of a computer to work. The decision-making section 52 generates a command value C to a machine tool that performs machining based on the speed distribution (Nth-order time-derivative component of the speed) of each axis learned by the learning section 26, and outputs the generated command value C. In the case where the command value C based on the speed distribution (Nth-order time-derivative component of the speed) of each axis learned by the decision-making section 52 is outputted to the machine tool, the state (speed distribution (Nth-order time-derivative component of the speed) S1 of each axis) of the environment changes in response to the outputted command value C. [READ further paragraphs 0066-0067]

Regarding claim 2, an evaluation program capable of evaluating at least one of the machining accuracy and the machined surface quality is allowed to be registered, and a reward related to at least one of the machining accuracy and the machined surface quality is calculated using the evaluation program [SEE par. 0032, 0050-0054, 0079].

Regarding claim 3, the decision maker estimates a combination of set values of the parameters more shortening a machining time in the machining and more suitable for the machining based on the updated value function, and outputs the estimated combination of the set values of the parameters [SEE par. 0044].

Regarding claim 8, it is directed to a computer-readable storage medium storing a program to implement the system as set forth in claim 1. Therefore, it is rejected on the same basis as set forth hereinabove.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US Pub. No.
2019/0129386 to Sato teaches a machine learning device of a machining condition adjustment device that includes: a state observation unit that observes each of machining condition data indicative of a machining condition of each used tool and cycle time data indicative of a cycle time of one machining, as a state variable; a determination data acquisition unit that acquires determination data indicative of a result of an appropriateness determination of one machining in the case where an adjustment of the machining condition is performed; and a learning unit that performs learning by associating the machining condition data and the cycle time data with the adjustment of the machining condition using the state variable and the determination data, so as to enable effective use of the allowance of a cycle time.

US Pub. No. 2018/0210406 to Shimizu et al. teaches a numerical controller having a machine learning device that performs machine learning of the adjustment of a setting value used in override control. The machine learning device acquires state data showing states of the numerical controller and a machine, sets reward conditions, calculates a reward based on the state data and the reward conditions, performs the machine learning of the adjustment of the setting value used in override control, and determines the adjustment of the setting value used in override control, based on a machine learning result and the state data.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VINCENT HUY TRAN, whose telephone number is (571) 272-7210. The examiner can normally be reached M-F 7:00-4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamini S Shah, can be reached at 571-272-2279. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VINCENT H TRAN/
Primary Examiner, Art Unit 2115
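For orientation, the claim elements the examiner maps onto Aizawa (state observer, determination condition acquirer, reward calculation unit, value function update unit, decision maker) together describe a standard Q-learning loop. The following is a minimal illustrative sketch of that loop, not the claimed device itself: every name, state encoding, and constant here is hypothetical, and the real device learns per-axis speed-distribution parameters against registered accuracy, surface-quality, and machining-time targets.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

# Candidate actions: hypothetical combinations of parameter set values
# (e.g., discrete acceleration and jerk levels).
ACTIONS = [(accel, jerk) for accel in (1, 2, 3) for jerk in (1, 2, 3)]

Q = defaultdict(float)  # value function: (state, action) -> estimated value


def observe_state(result):
    """State observer: summarize machined-surface quality and machining
    time into a discrete operating-state descriptor."""
    return (round(result["quality"], 1), round(result["time"], 1))


def calculate_reward(result, targets):
    """Reward calculation unit: score the machining result against the
    target values acquired as determination data."""
    reward = 1.0 if result["quality"] >= targets["quality"] else -1.0
    reward += 1.0 if result["time"] <= targets["time"] else -1.0
    return reward


def update_value_function(state, action, reward, next_state):
    """Value function update unit: one-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])


def decide(state):
    """Decision maker: output the parameter combination currently estimated
    as most suitable (epsilon-greedy while learning is in progress)."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])
```

This is the structural shape both the claims and the Aizawa reference share, which is what makes the §102 mapping straightforward; distinguishing the application will likely turn on the specific parameters learned and data observed rather than on this generic loop.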

Prosecution Timeline

Sep 19, 2023
Application Filed
Mar 05, 2026
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602304: SELF-LEARNING GREEN APPLICATION WORKLOADS (2y 5m to grant; granted Apr 14, 2026)
Patent 12596349: COMPUTER-AUTOMATED SCRIPTED ELECTRONIC ACTOR CONTROL (2y 5m to grant; granted Apr 07, 2026)
Patent 12596387: FLUID CONTROL DEVICE, FLUID CONTROL METHOD, AND FLUID CONTROL PROGRAM (2y 5m to grant; granted Apr 07, 2026)
Patent 12589279: SYSTEMS AND METHODS OF USING ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR GENERATING AN ALIGNMENT PLAN CAPABLE OF ENABLING THE ALIGNING OF A USER'S BODY DURING A TREATMENT SESSION (2y 5m to grant; granted Mar 31, 2026)
Patent 12585257: AUTOMATED DATA TRANSFER BETWEEN AUTOMATION SYSTEMS AND THE CLOUD (2y 5m to grant; granted Mar 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 96% (+9.3%)
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 1083 resolved cases by this examiner. Grant probability derived from career allow rate.
