Prosecution Insights
Last updated: April 19, 2026
Application No. 18/246,094

TRAINED AUTOENCODER, TRAINED AUTOENCODER GENERATION METHOD, NON-STATIONARY VIBRATION DETECTION METHOD, NON-STATIONARY VIBRATION DETECTION DEVICE, AND COMPUTER PROGRAM

Final Rejection — §101, §102, §103
Filed: Mar 21, 2023
Examiner: BARBEE, MANUEL L
Art Unit: 2857
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Si Synergy Technology Co. Ltd.
OA Round: 2 (Final)
Grant Probability: 82% (Favorable)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 82%, above average (747 granted / 913 resolved; +13.8% vs TC avg)
Interview Lift: +14.5% (moderate) among resolved cases with interview
Typical Timeline: 3y 1m average prosecution; 30 currently pending
Career History: 943 total applications across all art units

Statute-Specific Performance

§101: 25.5% (-14.5% vs TC avg)
§103: 36.4% (-3.6% vs TC avg)
§102: 22.8% (-17.2% vs TC avg)
§112: 12.0% (-28.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 913 resolved cases.
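The per-statute deltas above are internally consistent: each examiner rate minus its "vs TC avg" delta implies the same Tech Center baseline. A quick check (rates and deltas taken from the card above; the inferred ~40% baseline is a derivation, not a figure stated on the page):

```python
# Examiner's per-statute rates and their deltas vs the Tech Center average,
# as shown on the statute-specific performance card.
stats = {
    "101": (25.5, -14.5),
    "103": (36.4, -3.6),
    "102": (22.8, -17.2),
    "112": (12.0, -28.0),
}

# Implied TC average = examiner rate - delta; every statute should agree.
implied = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied)
```

All four statutes imply the same 40.0% Tech Center baseline, which suggests the deltas were computed against a single estimated average rather than per-statute averages.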

Office Action

Grounds: §101, §102, §103
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Claim Interpretation The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: a first recording unit, a receiving unit, a measured vibration feature data generation unit, a first arithmetic unit and a second arithmetic unit in claim 7. The limitations are disclosed as being implemented by a programmed computer (Figure 6, pars. 52-68). Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Claim Rejections - 35 USC § 101 35 U.S.C. 
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 13 and 18-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim(s) does/do not fall within at least one of the four categories of patent eligible subject matter because claim 13 (and dependent claims 18-20) is directed to one or more computer-readable media, which includes transitory embodiments such as signals, which are non-statutory (MPEP 2106.03, Subsection I). Amending claim 13 to be directed to a non-transitory computer-readable medium would overcome this rejection. Claims 7 and 9-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Per step 1 of the Subject Matter Eligibility Test (See MPEP 2106), claim 7 is directed to a device, which is a product and falls within a statutory category (See MPEP 2106.03). 
Per step 2A, prong 1, claim 7 recites a first recording unit having a trained autoencoder recorded therein, wherein the trained autoencoder is obtained by performing pre-training of an autoencoder that encodes input data being predetermined data and then decodes the encoded predetermined data to obtain data having the same dimensions as dimensions of the input data, wherein the input data is stationary vibration feature data generated from stationary vibration data that is data having a specific duration about stationary vibration including vibration generated in a stationary state from an object for which detection of non-stationarity is performed based on vibration, the stationary vibration feature data being data about a feature of stationary vibration identified by the stationary vibration data, and output data is estimated stationary vibration feature data, and wherein the pre-training is performed by inputting a plurality of pieces of the stationary vibration feature data so that a difference between the stationary vibration feature data being the input data and the estimated stationary vibration feature data being the output data with respect to the input data is minimized. The claim limitation for how the trained autoencoder is obtained describes encoding and decoding data, which requires processing data through an autoencoder which includes weights at various stages of the model, which falls into the mathematical processes grouping. The description of the input data as stationary data simply describes the type of data that is being processed by the autoencoder. The pre-training further describes a mathematical process because it recites inputting data to the autoencoder until a difference is minimized (MPEP 2106.04(a)(2), subsection I). 
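The pre-training the examiner characterizes as a mathematical process — encode, decode back to the input's dimensions, and minimize the input/output difference — can be sketched as a toy linear autoencoder. This is illustrative only; the data, dimensions, and training details below are invented, not taken from the application:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for stationary vibration feature data:
# 200 feature vectors of dimension 16 (e.g. flattened spectrogram frames).
X = rng.normal(size=(200, 16))

d, k = 16, 4                                  # input dim, bottleneck dim
W_enc = rng.normal(scale=0.1, size=(d, k))    # encoder weights
W_dec = rng.normal(scale=0.1, size=(k, d))    # decoder weights

def reconstruction_mse(X, W_enc, W_dec):
    X_hat = X @ W_enc @ W_dec                 # decode back to input dimensions
    return float(np.mean((X_hat - X) ** 2))

loss_before = reconstruction_mse(X, W_enc, W_dec)

lr = 0.01
for _ in range(500):
    Z = X @ W_enc                             # encode
    err = Z @ W_dec - X                       # output minus input
    # gradient steps that shrink the mean squared input/output difference
    g_dec = (Z.T @ err) / len(X)
    g_enc = (X.T @ (err @ W_dec.T)) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

loss_after = reconstruction_mse(X, W_enc, W_dec)
print(loss_before, loss_after)
```

The loop does exactly what the claim language recites: input a plurality of feature vectors and adjust the autoencoder so the difference between input and reconstructed output shrinks.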
Claim 7 further recites configured to receive measured vibration data that is data having a specific duration about measured vibration including vibration generated from the object for which detection of non-stationarity based on vibration is performed; … configured to generate, from the measured vibration data received by the receiving unit, measured vibration feature data that is data about a feature of measured vibration identified by the measured vibration data, by the same method as a method of generating the stationary vibration feature data from the stationary vibration data in pre-training; … configured to read the trained autoencoder recorded in the first recording unit and to input the measured vibration feature data generated by the measured vibration feature data generation unit to the trained autoencoder to obtain estimated measured vibration feature data that is an output from the trained autoencoder in response to the input measured vibration feature data; and … configured to obtain a difference between the measured vibration feature data generated by the measured vibration feature data generation unit and the estimated measured vibration feature data generated from the measured vibration feature data by the first arithmetic unit and to determine, when the difference is larger than a predetermined range, that measured vibration identified by measured vibration data from which the measured vibration feature data is derived is non-stationary vibration and generate result data indicating occurrence of non-stationary vibration. Receiving the measured vibration data and generating, from the measured vibration data, measured vibration feature data is disclosed as a spectrogram (par. 56). Generating a spectrogram is a calculation that falls into the mathematical concepts grouping. 
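The feature generation discussed here (disclosed as a spectrogram at par. 56) amounts to windowed FFTs over successive frames of the vibration signal. A minimal sketch, with an invented test signal and invented frame parameters:

```python
import numpy as np

def magnitude_spectrogram(signal, n_fft=64, hop=32):
    """One row per time frame, one column per frequency bin."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1))

# Hypothetical measured vibration: a 50 Hz tone sampled at 1 kHz for 1 s.
fs = 1000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 50 * t)

S = magnitude_spectrogram(sig)
peak_hz = S.mean(axis=0).argmax() * fs / 64   # bin index -> frequency
print(S.shape, peak_hz)
```

Applying the same `magnitude_spectrogram` to both the stationary training data and the later measured data is the "same method" condition the claim recites.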
Processing the feature data through the encoder and obtaining a difference between the measured vibration feature and the feature data generated using the autoencoder requires calculations as described above with regard to training the autoencoder. Therefore, claim 7 also recites a mathematical concept. The additional elements in claim 7 are a first recording unit, a receiving unit, a measured vibration feature data generation unit, a first arithmetic unit and a second arithmetic unit. Per step 2A, prong 2, as discussed above, each of these units are disclosed as a computer programmed to perform the recited functions. The recitation of the units amounts to instructions to implement the abstract idea on a generic computer (See MPEP 2106.05(f)). Therefore, the recited additional elements do not integrate the abstract idea into a practical application. Per step 2B, the additional elements recited in claim 7 do not amount to significantly more than the abstract idea for the same reason. Claims 9-11 depend from claim 7. Claims 9-11 recite further details of the abstract idea. Claims 9-11 do not recite any additional elements, and are rejected for the same reason. Claims 12 and 13 recite a similar abstract idea to the abstract idea recited in claim 7 and does not recite any additional elements. Therefore, claims 12 and 13 are rejected for the same reason. Claims 15-17 depend from claim 12 and claims 18-20 depend from claim 13. Claims 15-17 and 18-20 recite further details of the abstract idea. Claims 15-17 and 18-20 do not recite any further additional elements, and are rejected for the same reason. Claim 14 recites a similar abstract idea to that recited in claim 7 with similar additional elements. 
Claim 14 further recites a step A of inputting, to the trained autoencoder, the stationary vibration feature data generated from the stationary vibration data not used for training of the trained autoencoder, the stationary vibration feature data being data about a feature of stationary vibration identified by the stationary vibration data, to obtain the estimated stationary vibration feature data as an output of the trained autoencoder; a step B of generating a loss function for a difference between the stationary vibration feature data input to the trained autoencoder in the step A and the estimated stationary vibration feature data generated from the stationary vibration feature data by the trained autoencoder; and a step C of determining the threshold value based on a mean and a variance of an amplitude related to an error of the loss function obtained in the step B. Similar to the previously discussed limitations, inputting data to an encoder requires mathematical calculations. The step of generating a loss function and determining a threshold value based on a mean and a variance are also mathematical functions, which fall into the mathematical concepts grouping. The only additional elements are the units, which are disclosed to be a programmed computer. As discussed above, the recitation of the units amounts to instructions to implement the abstract idea on a generic computer. Therefore, the recited additional elements do not integrate the abstract idea into a practical application, and further do not amount to significantly more than the abstract idea. Claim Rejections - 35 USC § 102 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claim(s) 7, 9, 10, 12, 13, 15, 16, 18 and 19 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US Patent Application Publication 2021/0327456 to Yamaguchi et al. (Yamaguchi). Claims 7, 12, and 13 With regard to a first recording unit having a trained autoencoder recorded therein, Yamaguchi teaches a recording unit used to store the program to carry out the functions (Fig. 24, recording unit 2020; pars. 273, 274). With regard to wherein the trained autoencoder is obtained by performing pre-training of an autoencoder that encodes input data being predetermined data and then decodes the encoded predetermined data to obtain data having the same dimensions as dimensions of the input data, Yamaguchi teaches an autoencoder learning apparatus that includes an input generating unit and a loss function calculating unit (Fig. 9, autoencoder learning apparatus 400, input generating unit 110, normal sound recording unit 910, pars. 163, 164). 
With regard to wherein the input data is stationary vibration feature data generated from stationary vibration data that is data having a specific duration about stationary vibration including vibration generated in a stationary state from an object for which detection of non-stationarity is performed based on vibration, Yamaguchi teaches an input generating unit generating input data from the normal sound (pars. 165, 166; Fig. 9, input generating unit 110). With regard to wherein the stationary vibration feature data being data about a feature of stationary vibration identified by the stationary vibration data; Yamaguchi teaches a normal sound recording unit (par. 165). With regard to wherein output data is estimated stationary vibration feature data; Yamaguchi teaches that input data is generated from the normal sound data (par. 166). With regard to wherein the pre-training is performed by inputting a plurality of pieces of the stationary vibration feature data so that a difference between the stationary vibration feature data being the input data and the estimated stationary vibration feature data being the output data with respect to the input data is minimized; Yamaguchi teaches that the restored data is output from the autoencoder and used in a loss function calculating unit and updating a parameter of the autoencoder to minimize the value of the loss function (Fig. 9, blocks 420, 430, 440, 450; pars. 166-175). With regard to a receiving unit configured to receive measured vibration data that is data having a specific duration about measured vibration including vibration generated from the object for which detection of non-stationarity based on vibration is performed; Yamaguchi teaches that an anomaly detection target sound is input to the input data generating unit (Fig. 13, input data generating unit 110; par. 184, 185). 
With regard to a measured vibration feature data generation unit configured to generate, from the measured vibration data received by the receiving unit, measured vibration feature data that is data about a feature of measured vibration identified by the measured vibration data, by the same method as a method of generating the stationary vibration feature data from the stationary vibration data in pre-training; Yamaguchi teaches that the input data generating unit generates input data from the anomaly detection target sound (Fig. 13, input data generating unit 110, par. 185). With regard to a first arithmetic unit configured to read the trained autoencoder recorded in the first recording unit and to input the measured vibration feature data generated by the measured vibration feature data generation unit to the trained autoencoder to obtain estimated measured vibration feature data that is an output from the trained autoencoder in response to the input measured vibration feature data; Yamaguchi teaches a restored input calculating unit (Fig. 5, restored input calculating unit 621; Fig. 13, blocks 620, 63; pars. 156-158). With regard to a second arithmetic unit configured to obtain a difference between the measured vibration feature data generated by the measured vibration feature data generation unit and the estimated measured vibration feature data generated from the measured vibration feature data by the first arithmetic unit and to determine, when the difference is larger than a predetermined range, that measured vibration identified by measured vibration data from which the measured vibration feature data is derived is non-stationary vibration and generate result data indicating occurrence of non-stationary vibration, Yamaguchi teaches determining whether equipment is abnormal based on a calculation using the restored input data (pars. 187-189). 
With regard to wherein, in order to obtain the difference between the measured vibration feature data generated by the measured vibration feature data generation unit and the estimated measured vibration feature data generated from the measured vibration feature data by the first arithmetic unit, the second arithmetic unit is configured to generate a loss function for the measured vibration feature data and the estimated measured vibration feature data, and determine that measured vibration identified by measured vibration data from which the measured vibration feature data is derived is non-stationary vibration when the number of values of the loss function which exceed a predetermined threshold value is a predetermined number or more; Yamaguchi teaches determining an anomaly degree and comparing the anomaly degree to a threshold to determine if the sound is abnormal (par. 63). The threshold corresponds to the predetermined threshold. In this case, only one instance of exceeding the threshold is sufficient and corresponds to a predetermined number. Claims 9, 15 and 18 Yamaguchi teaches that the measured vibration feature data is a frequency spectrogram generated from the measured vibration data (par. 15, logarithmic amplitude spectrum). Claim 10, 16 and 19 Yamaguchi teaches that the measured vibration is a sound generated during a period in which detection of non-stationarity based on vibration is performed (Fig. 4, normal sound recording unit 910). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 11, 17 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yamaguchi in view of admitted prior art. Claims 11, 17 and 20 Yamaguchi teaches all the limitations of claim 10 upon which claim 11 depends, claim 16 upon which claim 17 depends and claim 19 upon which claim 20 depends. Yamaguchi does not teach that the stationary vibration feature data is a mel-frequency spectrogram generated from the stationary vibration data. Paragraph 42 of the specification states that the mel-frequency spectrogram is well known as a data format for representing a feature of a sound. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the anomaly detection, as taught by Yamaguchi, to include using a mel-frequency spectrogram to determine the sound features, because a well-known data format with well-known methods of determination would then have been available to determine the sound features.

Response to Arguments

Applicant's arguments filed 31 October 2025 have been fully considered but they are not persuasive.

35 U.S.C. § 101

Applicant states that the claim was amended to recite one or more computer-readable media. However, computer-readable media include transitory embodiments, which are non-statutory (MPEP 2106.03, Subsection I). 
Applicant states that the amended claims are not directed to an abstract idea at least because of the features associated with measurement of vibration of an object (e.g., machine or bridge, as described at paragraphs [0025-0026] of the present published application), the determination of non-stationary vibration (e.g., indicative of malfunction), and the generation of results indicating occurrence of non-stationary vibration, which features are not in the form of the mathematical processes discussed in the Office Action. However, the only non-abstract elements (additional elements) are the various recited units, which are disclosed as a computer programmed to perform the recited functions. Applicant refers to the 2024 Guidance, at Section III(A)(1)(A), which states "A claim does not recite a mathematical concept (i.e., the claim limitations do not fall within the mathematical concept grouping) if it is only based on or involves a mathematical concept." However, the claim limitations do not merely involve a mathematical concept. The claim limitations require calculations. An autoencoder necessarily requires mathematical calculations to process the inputs. Encoding the input data as a spectrogram requires mathematical calculations (Specification par. 56). And obtaining a difference is a mathematical calculation. Applicant states that under Step 2A, Prong Two, the claims recite additional elements that integrate the exceptions into a practical application by "implementing a judicial exception with, or using a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim, as discussed in MPEP § 2106.05(b)" (MPEP §2106.04(d)), such that the relevant elements are a particular machine or manufacture that is integral to the claim. 
Applicant further states that, in the present application, the additional elements recited in claims 7 and 9-15, as amended, reflect an improvement to a computer or other technology described in the specification and cover a particular solution to a problem or a particular way to achieve a desired outcome. However, the only additional elements recited are the various units, which are disclosed as a programmed computer (Fig. 6, pars. 52-68). The programmed computer is recited at a high level of generality and is not a particular machine. With regard to Step 2B, Applicant states that the processing is well beyond what is well-understood, routine, conventional activity in the field, and therefore represents an inventive concept for which claims 7 and 9-15 are patent eligible. However, the claimed processing steps are directed to an abstract idea. The “inventive concept” is furnished by an element or combination of elements that is recited in the claim in addition to the judicial exception (MPEP 2106.05, subsection I). The only additional elements are the units, which are disclosed as a programmed computer. As discussed above, the recitation of a programmed computer amounts to mere instructions to implement the abstract idea, which is not significantly more (MPEP 2106.05(f)).

35 U.S.C. §§ 102 and 103

Applicant states that Yamaguchi fails to teach or suggest at least the aspects of counting a number of values of a loss function exceeding a threshold value to determine whether or not a vibration is non-stationary vibration, as recited in amended independent claims 7, 12, and 13. However, the claim requires determining non-stationary vibration when the number of values of the loss function which exceed a predetermined threshold is a predetermined number or more. In Yamaguchi that number may be 1. It is noted that claim 14 recites additional limitations that are not found in claims 7, 12 and 13. 
These limitations include steps A, B and C of a method for determining a threshold.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MANUEL L BARBEE whose telephone number is (571)272-2212. The examiner can normally be reached M-F: 9-5:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shelby A Turner, can be reached at 571-272-6334. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MANUEL L BARBEE/Primary Examiner, Art Unit 2857

Prosecution Timeline

Mar 21, 2023: Application Filed
Jun 27, 2025: Non-Final Rejection — §101, §102, §103
Oct 31, 2025: Response Filed
Mar 04, 2026: Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596364: DATA ANALYTICS FOR PREDICTIVE MAINTENANCE
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12596351: SYSTEMS, APPARATUS, AND METHODS FOR ADJUSTING PARAMETERS IN RESPONSE TO ANTICIPATED COMPONENT STATE
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12591019: METHOD FOR GENERATING ELECTROCHEMICAL IMPEDANCE SPECTROSCOPY OF BATTERY, MEDIUM, AND COMPUTER DEVICE
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12584821: SENSOR NETWORK-BASED ANALYSIS AND/OR PREDICTION METHOD, AND REMOTE MONITORING SENSOR DEVICE
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12569200: ELECTRONIC DEVICE FOR PROVIDING BIOMETRIC INFORMATION AND OPERATING METHOD THEREOF
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
82%
Grant Probability
96%
With Interview (+14.5%)
3y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 913 resolved cases by this examiner. Grant probability derived from career allow rate.
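The headline projections are simple arithmetic on the examiner's career record: allow rate = granted / resolved, and the with-interview figure adds the observed interview lift. A check that the displayed numbers are consistent (inputs taken from the Examiner Intelligence section above):

```python
granted, resolved = 747, 913       # career record shown above
interview_lift = 14.5              # percentage-point lift with interview

allow_rate = 100 * granted / resolved
print(round(allow_rate))                    # matches the 82% Grant Probability
print(round(allow_rate + interview_lift))   # matches the 96% With Interview figure
```

Note this treats the interview lift as additive in percentage points, which appears to be how the dashboard derives its 96% figure.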
