Prosecution Insights
Last updated: April 19, 2026
Application No. 18/546,847

DATA GENERATION DEVICE, DATA GENERATION METHOD, AND PROGRAM

Non-Final OA: §102, §103, §112
Filed: Aug 17, 2023
Examiner: VAUGHN, RYAN C
Art Unit: 2125
Tech Center: 2100 — Computer Architecture & Software
Assignee: Omron Corporation
OA Round: 1 (Non-Final)
Grant Probability: 62% (Moderate)
OA Rounds: 1-2
To Grant: 3y 9m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 62% of resolved cases (145 granted / 235 resolved; +6.7% vs TC avg)
Interview Lift: strong, +19.4% (allowance with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 3y 9m avg prosecution; 45 currently pending
Career History: 280 total applications across all art units

Statute-Specific Performance

§101: 23.9% (-16.1% vs TC avg)
§103: 40.1% (+0.1% vs TC avg)
§102: 7.6% (-32.4% vs TC avg)
§112: 21.9% (-18.1% vs TC avg)
Black line = Tech Center average estimate. Based on career data from 235 resolved cases.
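The headline figures in this panel follow directly from the raw counts (145 granted of 235 resolved). A minimal Python sketch of that arithmetic; the function names are illustrative only, and the implied Tech Center average is derived from the +6.7% delta shown above rather than reported directly:

```python
# Sketch: reproduce the headline examiner statistics in this report from the
# underlying counts. Function names are illustrative, not any real analytics API.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def delta_vs_tc(rate: float, tc_avg: float) -> float:
    """Signed difference between an examiner's rate and the Tech Center average."""
    return rate - tc_avg

granted, resolved = 145, 235          # counts shown in the Examiner Intelligence panel
rate = allow_rate(granted, resolved)  # 61.7, displayed rounded to 62%
tc_avg = rate - 6.7                   # implied TC average, since the panel shows +6.7% vs TC avg

print(f"Career allow rate: {rate:.1f}%")                      # Career allow rate: 61.7%
print(f"Delta vs TC avg: {delta_vs_tc(rate, tc_avg):+.1f}%")  # Delta vs TC avg: +6.7%
```

The same two helpers cover the statute-specific rows as well (rate plus signed delta against the TC estimate).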

Office Action

§102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-12 are presented for examination.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement filed December 23, 2025 fails to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document (in this case, WO 2012073140); each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. It has been placed in the application file, but the information referred to therein has not been considered. The information disclosure statements (IDS) submitted on August 31, 2023 and February 18, 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Drawings

The drawings are objected to because some text in Figs. 1, 5, 8, 12, and 14-15 is too small to be read without significant zooming; see 37 CFR 1.84(p)(3). Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures.
Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification. The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Objections

Claims 7-9 are objected to because of the following informalities: “data is” should be “data are”. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. 
The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “original data displaying unit”, “parameter receiver”, “generated data displaying unit”, and “adoptability receiver” in claims 1 and 12; “generated data preserver” in claims 2 and 10; “similarity calculator” in claim 5; “generated data displaying unit” in claims 5-6 and 10; “use purpose receiver” in claim 6; and “data set designation receiver” in claim 10. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-10 and 12 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C.
112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The claim limitations enumerated above in the section entitled “Claim interpretation” invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed functions and to clearly link the structure, material, or acts to the functions. Therefore, it is unclear whether Applicant had possession of the claimed invention as of the effective filing date of the claimed invention. See rejection under 35 USC § 112(b) infra for further analysis.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-10 and 12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. The claim limitations enumerated above in the section entitled “Claim interpretation” invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed functions and to clearly link the structure, material, or acts to the functions.
Regarding the “original data displaying unit”, at most paragraph 60 of the specification as published recites the claimed function of the unit, but does not indicate how or by what means the data are displayed.

Regarding the “parameter receiver”, paragraph 63 merely repeats the claimed functions without providing an algorithm for how the claimed functions are carried out.

Regarding the “generated data displaying unit”, paragraphs 61, 63, 75, 79, and 86 merely repeat the claimed functions without indicating how the data augmentation is carried out.

Regarding the “adoptability receiver”, insofar as the specification mentions this element at all, it does nothing more than repeat the claim language (e.g., paragraph 24) with no further elucidation whatsoever.

Regarding the “generated data preserver”, insofar as the specification mentions this element at all, it does nothing more than repeat the claim language (e.g., paragraph 21) with no further elucidation whatsoever.

Regarding the “similarity calculator”, the specification mentions this element in paragraphs 72-74 and 78, but again does little more than repeat the claimed functions, without providing an algorithm as to how they are carried out.

Regarding the “use purpose receiver”, the specification mentions this element in paragraphs 82, 84-85, and 89, but again does little more than repeat the claimed functions, without providing an algorithm as to how they are carried out.

Regarding the “data set designation receiver”, insofar as the specification mentions this element at all, it does nothing more than repeat the claim language (e.g., paragraph 21) with no further elucidation whatsoever.

Therefore, the claims are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. For purposes of examination, any computer software that performs the claimed functions will be deemed to read on the claims.
Applicant may: (a) Amend the claims so that the claim limitations will no longer be interpreted as limitations under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; (b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed functions, without introducing any new matter (35 U.S.C. 132(a)); or (c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the functions recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the functions so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed functions, applicant should clarify the record by either: (a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed functions and clearly links or associates the structure, material, or acts to the claimed functions, without introducing any new matter (35 U.S.C. 132(a)); or (b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed functions. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

All claims dependent on a claim rejected hereunder are also rejected for being dependent on a rejected base claim.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4 and 6-12 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Song et al. (WO 2021178909) (“Song”).

Regarding claim 1, Song discloses “[a] data generation device configured to generate data in machine learning for making a determination on an object, the data generation device comprising: an original data displaying unit configured to display, on a displaying unit, first original data on which data augmentation is to be performed, the first original data including the object (to provide for interaction with a user, embodiments may be implemented on a computer having a display device, e.g., a CRT or LCD monitor – Song, p. 39, ll.
6-18; candidate training data [original data including objects] used specifically in training candidate machine learning models during policy search steps are obtained, and proxy machine learning models are trained on the training data augmented with each of a plurality of point cloud augmentation policies – id. at p. 25, l. 9-p. 26, l. 7 ) ; a parameter receiver configured to receive an input of a parameter related to the data augmentation ( system receives data defining [receiving an input of] a plurality of data augmentation policy parameters such as point cloud augmentation policy parameters – Song, p. 28, ll. 11-17 ) ; a generated data displaying unit configured to display, on the displaying unit, generated data generated by the data augmentation for something other than the object in the first original data on the basis of the parameter ( to provide for interaction with a user, embodiments may be implemented on a computer having a display device, e.g., a CRT or LCD monitor – Song, p. 39, ll. 6-18; for each of the cloud augmentation policies, a quality measure of the current cloud augmentation policy is generated [data generated by the data augmentation = data generated by the particular augmentation policy plus the policy itself ; note that these data are generated using the entire training dataset, including data points other than the object] – id. at Fig. 6 and p. 26, l. 30-p. 27, l. 3; note also that Fig. 
3 shows that each data augmentation policy is determined by parameters); and an adoptability receiver configured to receive whether or not to adopt the data augmentation based on the parameter (system can repeatedly determine a current point cloud augmentation policy, train a proxy ML model using the current policy, and determine a quality measure of the current policy until a search termination criterion is met, and after determining that the criterion is satisfied [i.e., after receiving a signal to adopt the augmentation], a final point cloud augmentation policy is generated [i.e., a data augmentation is adopted] – Song, p. 27, ll. 10-17; note also that Fig. 3 shows that each data augmentation policy is determined by parameters).”

Claim 11 is a method claim corresponding to device claim 1 and is rejected for the same reasons as given in the rejection of that claim. Similarly, claim 12 is a non-transitory computer-readable medium claim corresponding to device claim 1 and is rejected for the same reasons as given in the rejection of that claim.

Regarding claim 2, Song discloses “a generated data preserver configured to preserve the generated data in a case where the data augmentation based on the parameter is adopted (central processing unit receives instructions and data from a read-only memory or a random-access memory [generated data preserver] or both – Song, p. 38, ll. 18-30; trained machine learning model is generated using the training data and a final [adopted] point cloud augmentation policy [note that the use of the final policy to train a model suggests its being stored/preserved somewhere] – id. at p. 10, ll. 18-22).
” Regarding claim 3 , Song discloses that “ the parameter is information regarding the something other than the object and/or a changing method of a data acquisition condition ( point cloud augmentation policy is composed of “sub-policies” each of which is composed of transformation operations, data point processing operations, intensity perturbing operations, jittering operations, or dropout operations; each transformation operation has an associated magnitude and probability [i.e., information regarding how training data objects, including datapoints other than the object, are to be perturbed] – Song, p. 22, ll. 16-25 ) . ” Regarding claim 4 , Song discloses that “ the parameter is information regarding a degree of the changing method ( point cloud augmentation policy is composed of “sub-policies” each of which is composed of transformation operations, data point processing operations, intensity perturbing operations, jittering operations, or dropout operations [changing methods of data acquisition]; each transformation operation has an associated magnitude [degree] and probability – Song, p. 22, ll. 16-25 ) . ” Regarding claim 6 , Song discloses “ a use purpose receiver configured to receive an input of a use purpose of the generated data generated by the data augmentation ( Song Fig. 
3 shows that the purpose of the data generated by the data augmentation at candidate machine learning models A-N is to generate a trained machine learning model 202 [note that the transmission of this use purpose is implicit in the transmission of the augmented data to the model to train] ) ; and a parameter storage configured to store, in association with the use purpose, the information of the parameter used when the data augmentation is adopted ( training system is configured to generate a trained machine learning model by training a machine learning model using [i.e., with the use purpose of training] the training data and a “final” [adopted] point cloud augmentation policy 208 [including information regarding its parameters] – Song, p. 10, ll. 18-22; central processing unit receives instructions and data from a read-only memory or a random-access memory [parameter storage] or both – id. at p. 38, ll. 18-30 ) , wherein the generated data displaying unit is configured to perform the data augmentation on the basis of the parameter associated with a use purpose matching the use purpose input, and display the generated data on the displaying unit ( to provide for interaction with a user, embodiments may be implemented on a computer having a display device, e.g., a CRT or LCD monitor – Song, p. 39, ll. 6-18; system generates a final trained machine learning model by training a final machine learning model on the training data and using the final point cloud augmentation policy; the system generates an augmented set of training data [performs the data augmentation] by applying the final policy [including its parameters] to training data; the system then trains an instance of the machine learning model on the augmented training data [i.e., the parameters are associated with the use purpose of training the model] – id. at p. 27, ll. 18-28 ) . 
” Regarding claim 7 , Song discloses that “ the first original data [are] image data ( inputs can include text data, image data, or video data – Song, p. 22, ll. 7-15 ) , and the parameter is at least one of a change in a photographing distance, a change in a photographing angle, a change in a photographing time, a change in a background image, and a change in a weather condition during photographing ( if the inputs include image data, the data transformation operations may be any image processing operations, such as color inversion operations [which have the effect of changing the background of the image] – Song, p. 22, ll. 7-15 ) . ” Regarding claim 8 , Song discloses that “ the first original data [are] voice data or waveform data ( inputs can include text data, image data, or video data [note that video data generally include sound or encoded waveforms] – Song, p. 22, ll. 7-15 ) , and the parameter is a changing method of at least one of environmental sound imparting and noise imparting ( magnitude of a transformation operation is an ordered collection of one or more numerical values that specifies how the transformation operation should be applied to a training input; the magnitude of an intensity perturbation operation may specify the absolute value of random noise to be added to respective coordinates of data points in a point cloud – Song, p. 22, ll. 26-31 ) . ” Regarding claim 9 , Song discloses that “ the first original data [are] text data ( inputs can include text data, image data, or video data – Song, p. 22, ll. 7-15 ) , and the parameter is a changing method of at least one of substitution, word order exchange, and exclamation word imparting ( if the inputs include text data, the data transformation operations may include, e.g., word or punctuation removal operations [note that text removal substitutes the text with a null string] – Song, p. 22, ll. 7-15 ) . 
” Regarding claim 10 , Song discloses “ an original data storage configured to store one or more data sets including a plurality of pieces of first original data ( Song Fig. 3 depicts a storage [original data storage] for training data 206 [plurality of pieces of original data] ) ; and a data set designation receiver configured to receive designation of a data set on which the data augmentation is to be performed ( to generate a trained machine learning model and to determine a final data augmentation policy, the system maintains a population repository storing a plurality of candidate machine learning models; population repository stores, for each candidate machine learning model, a set of maintained values including data augmentation policy parameters that define a sequence of transformation operations used in training the candidate machine learning model [i.e., the system receives a designation of the training data 206 and the data augmentation policy parameters for the particular model to be trained] – Song, p. 17, l. 21-p. 18, l. 9 ) , wherein the generated data displaying unit is configured to display a result of the data augmentation performed on one of the plurality of pieces of first original data included in the data set designated ( to provide for interaction with a user, embodiments may be implemented on a computer having a display device, e.g., a CRT or LCD monitor – Song, p. 39, ll. 6-18; training system may combine a predetermined number (e.g., 5) of point cloud augmentation policies generated by the training system with the highest quality measures to generate the final point cloud augmentation policy; once trained, the training system can provide data specifying the trained model to an on-board system of a vehicle for use in classifying objects within point cloud data [classification = result of data augmentation, as the model that generates the classification is the result of data augmentation] – id. at p. 16, ll. 
3-23); and the generated data preserver is configured to perform, in a case where the data augmentation is adopted, the adopted data augmentation on all pieces of the first original data included in the data set and preserve all pieces of the generated data (central processing unit receives instructions and data from a read-only memory or a random-access memory [generated data preserver for preserving the generated data] or both – Song, p. 38, ll. 18-30; system generates a final trained machine learning model by training a final machine learning model on the training data and using the final point cloud augmentation policy; the system generates an augmented set of training data [performs the adopted data augmentation] by applying the final [adopted] policy to training data – id. at p. 27, ll. 18-28).”

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Song in view of Urmanov et al. (US 20180322363) (“Urmanov”).

Regarding claim 5, Song further discloses “a parameter storage configured to store the first original data used when the data augmentation is performed as second original data, and store, in association with the second original data, information of the parameter used when the data augmentation is adopted (Song Fig. 3 depicts a storage for training data 206 [first original data] that are processed through N candidate machine learning models that augment the data using data augmentation policy parameters [thereby creating second original data, which are stored along with the parameters] until a final augmentation policy is adopted; central processing unit receives instructions and data from a read-only memory or a random-access memory [parameter storage] or both – id. at p. 38, ll. 18-30); … wherein the generated data displaying unit is configured to perform the data augmentation on the basis of the parameter associated with the second original data …, and display the generated data on the displaying unit (to provide for interaction with a user, embodiments may be implemented on a computer having a display device, e.g., a CRT or LCD monitor – Song, p. 39, ll. 6-18; Fig.
3 depicts a storage for training data 206 that are processed through N candidate machine learning models that augment the data using data augmentation policy parameters [thereby creating second original data] until a final augmentation policy is adopted [i.e., the augmented data are the ones generated by the final policy]).”

Song appears not to disclose explicitly the further limitations of the claim. However, Urmanov discloses “a similarity calculator configured to calculate similarity between the second original data … and the first original data (multi-distance clustering logic selects a pair of similar data points to create an initial cluster; the pair of data points [first and second original data] having the highest positive similarity can be selected as the initial pair – Urmanov, paragraph 61) …, wherein … the second original data hav[e a] highest similarity with the first original data (multi-distance clustering logic selects a pair of similar data points to create an initial cluster; the pair of data points [first and second original data] having the highest positive similarity can be selected as the initial pair – Urmanov, paragraph 61) ….”

Urmanov and the instant application both relate to similarity calculations in machine learning and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Song to calculate similarities between data points and select the points with the highest similarity to each other, as disclosed by Urmanov, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow similar data to be treated similarly, thereby easing the computational burden associated with treating all data differently. See Urmanov, paragraph 61.
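The Urmanov mapping quoted above turns on selecting the pair of data points with the highest positive similarity as the initial pair (para. 61). A minimal Python sketch of that selection step, assuming cosine similarity as the metric (Urmanov's "multi-distance" logic is not reproduced here, and all function names are hypothetical):

```python
# Illustrative sketch of highest-similarity pair selection in the spirit
# of Urmanov para. 61. Cosine similarity is an assumed metric; the names
# are hypothetical and do not come from Urmanov or the application.
from itertools import combinations
from typing import Sequence, Tuple


def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
    """Cosine similarity of two vectors; 0.0 if either has zero norm."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0


def highest_similarity_pair(points: Sequence[Tuple[float, ...]]):
    """Return the pair of data points having the highest similarity,
    i.e., the pair a clustering step would select as its initial pair."""
    return max(combinations(points, 2),
               key=lambda pair: cosine_similarity(*pair))


pts = [(1.0, 0.0), (0.0, 1.0), (0.9, 0.1)]
print(highest_similarity_pair(pts))  # prints ((1.0, 0.0), (0.9, 0.1))
```

The exhaustive pairwise `max` here is the simplest correct reading of "the pair … having the highest positive similarity"; at scale an implementation would likely prune comparisons, but that detail is outside what the cited paragraph discloses.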
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN C VAUGHN whose telephone number is (571) 272-4849. The examiner can normally be reached M-R 7:00a-5:00p ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar, can be reached at 571-272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RYAN C VAUGHN/
Primary Examiner, Art Unit 2125

Prosecution Timeline

Aug 17, 2023
Application Filed
Mar 03, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602448
PROGRESSIVE NEURAL ORDINARY DIFFERENTIAL EQUATIONS
2y 5m to grant Granted Apr 14, 2026
Patent 12602610
CLASSIFICATION BASED ON IMBALANCED DATASET
2y 5m to grant Granted Apr 14, 2026
Patent 12561583
Systems and Methods for Machine Learning in Hyperbolic Space
2y 5m to grant Granted Feb 24, 2026
Patent 12541703
MULTITASKING SCHEME FOR QUANTUM COMPUTERS
2y 5m to grant Granted Feb 03, 2026
Patent 12511526
METHOD FOR PREDICTING A MOLECULAR STRUCTURE
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
62%
Grant Probability
81%
With Interview (+19.4%)
3y 9m
Median Time to Grant
Low
PTA Risk
Based on 235 resolved cases by this examiner. Grant probability derived from career allow rate.
