Prosecution Insights
Last updated: April 19, 2026
Application No. 18/610,415

RECORDING MEDIUM, DATA GATHERING APPARATUS, AND METHOD FOR GATHERING DATA

Non-Final OA: §101, §103
Filed: Mar 20, 2024
Examiner: ASPINWALL, EVAN S
Art Unit: 2152
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Fujitsu Limited
OA Round: 3 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Estimated Time to Grant: 2y 10m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 83%, above average (554 granted / 669 resolved; +27.8% vs TC avg)
Interview Lift: +16.8% for resolved cases with interview
Avg Prosecution: 2y 10m
Currently Pending: 19
Total Applications: 688 (across all art units)

Statute-Specific Performance

§101: 29.1% (-10.9% vs TC avg)
§103: 41.3% (+1.3% vs TC avg)
§102: 6.9% (-33.1% vs TC avg)
§112: 12.2% (-27.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 669 resolved cases.
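The derived figures above follow from simple arithmetic on the career data. A minimal cross-check sketch (the helper names are illustrative, not from any real analytics API):

```python
# Cross-check of the dashboard's derived examiner statistics.
# All inputs are the figures shown above; the helper names are
# illustrative, not from any real analytics API.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage."""
    return 100.0 * granted / resolved

def implied_tc_average(examiner_rate: float, delta_vs_tc: float) -> float:
    """Recover the Tech Center average implied by a 'vs TC avg' delta."""
    return examiner_rate - delta_vs_tc

career = allow_rate(554, 669)             # ~82.8%, shown rounded to 83%
tc_101 = implied_tc_average(29.1, -10.9)  # §101: implied TC avg ~40.0%
tc_103 = implied_tc_average(41.3, +1.3)   # §103: implied TC avg ~40.0%

print(round(career, 1), round(tc_101, 1), round(tc_103, 1))
```

Note that the §101 and §103 deltas imply nearly identical Tech Center averages, which is a useful sanity check on the chart data.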

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/8/2025 has been entered.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on application PCT/JP2021/035083, filed on 09/24/2021. It is noted, however, that applicant has not filed a certified copy of the PCT/JP2021/035083 application as required by 37 CFR 1.55.

Arguments and amendments filed on 12/8/2025 have been examined. Claims 1, 5 and 9 are amended. Claims 2, 6 and 10 are cancelled. Thus, claims 1, 3-5, 7-9 and 11-12 are currently pending.

Response to Arguments

Applicant's arguments with respect to the claims and the previous prior art rejection have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

As to the argument (concerning abstract idea rejections under 35 USC 101):

"1. The amended claims recite technical solutions to technical problems rather than an abstract mental process

The Examiner characterizes the claims as reciting the abstract idea of classifying or organizing information. However, the pending claims do not merely label or organize data.
The amended claims introduce, among other technical features, the following essential components:

- performing, by a trained feature extraction model, data augmentation on unlabeled data; and
- controlling, based on a control parameter predicted by a machine learning model, a position and a posture of a data obtainer.

These features are neither conventional nor capable of being performed mentally. A trained feature extraction model applies learned feature representations to generate augmented data in a manner that preserves essential characteristics of the input data, improving generalization of downstream machine learning models. Likewise, the control of the physical data obtainer (e.g., a camera or a touch sensor) based on a machine-learned control parameter directly affects real-world data acquisition, improving the quality and consistency of the labeled training dataset. These steps necessarily require a computer system and cannot be performed as a mental act. Thus, the pending claims are not directed to an abstract idea, but rather to specific improvements in computer-implemented data processing and data acquisition systems."

The Examiner respectfully disagrees. Applicant asserts: "A trained feature extraction model applies learned feature representations to generate augmented data in a manner that preserves essential characteristics of the input data, improving generalization of downstream machine learning models. Likewise, the control of the physical data obtainer (e.g., a camera or a touch sensor) based on a machine-learned control parameter directly affects real-world data acquisition, improving the quality and consistency of the labeled training dataset."

Applicant's assertion above is plainly contrary to recent decisions from the U.S. Court of Appeals for the Federal Circuit. See generally Recentive Analytics, Inc. v. Fox Corp., Decision No.
2023-2437, holding that "We have consistently held, in the context of computer-assisted methods, that such claims are not made patent eligible under § 101 simply because they speed up human activity. See, e.g., Content Extraction, 776 F.3d at 1347; DealerTrack, 674 F.3d at 1333. Whether the issue is raised at step one or step two, the increased speed and efficiency resulting from use of computers (with no improved computer techniques) do not themselves create eligibility. See, e.g., Trinity Info Media, LLC v. Covalent, Inc., 72 F.4th 1355, 1363 (Fed. Cir. 2023) (rejecting argument that 'humans could not mentally engage in the "same claimed process" because they could not perform "nanosecond comparisons" and aggregate "result values with huge numbers of polls and members"') (internal citation omitted); Customedia Techs., LLC v. Dish Network Corp., 951 F.3d 1359, 1365 (Fed. Cir. 2020) (holding claims abstract where '[t]he only improvements identified in the specification are generic speed and efficiency improvements inherent in applying the use of a computer to any task'); compare McRO, 837 F.3d at 1314-16 (finding eligibility of claims to use specific computer techniques different from those humans use on their own to produce natural-seeming lip motion for speech)" (emphasis added); i.e., the Federal Circuit in Recentive Analytics does not consider the application of generic "machine learning" to render an invention eligible, e.g., as stated by the Court: "Whether the issue is raised at step one or step two, the increased speed and efficiency resulting from use of computers (with no improved computer techniques) do not themselves create eligibility"; additionally, the Recentive Analytics decision also states: "We see no merit to Recentive's argument that its patents are eligible because they apply machine learning to this new field of use.
We have long recognized that "[a]n abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment." Intell. Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1366 (Fed. Cir. 2015); see also Alice, 573 U.S. at 222; Parker v. Flook, 437 U.S. 584, 593 (1978); Stanford, 989 F.3d at 1373 (rejecting argument that a claim was not abstract where patentee contended "the specific application of the steps [was] novel and enable[d] scientists to ascertain more haplotype information than was previously possible"). We have also held the application of existing technology to a novel database does not create patent eligibility. See, e.g., SAP Am., Inc. v. InvestPic, LLC, 898 F.3d 1161, 1168 (Fed. Cir. 2018); Elec. Power, 830 F.3d at 1353 ("[W]e have treated collecting information, including when limited to particular content (which does not change its character as information), as within the realm of abstract ideas." (citing Internet Pats. Corp. v. Active Network, Inc., 790 F.3d 1343, 1349 (Fed. Cir. 2015); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015); Content Extraction, 776 F.3d at 1347; Digitech Image Techs., LLC v. Elecs. for Imaging, Inc., 758 F.3d 1344, 1351 (Fed. Cir. 2014); CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1370 (Fed. Cir. 2011))). Stated differently, patents may be directed to abstract ideas where they disclose the use of an "already available [technology], with [its] already available basic functions, to use as [a] tool[] in executing the claimed process." SAP Am., 898 F.3d at 1169-70. We think those cases are equally applicable in the machine learning context. Recentive's argument that its patents are eligible simply because they introduce machine learning techniques to the fields of event planning and creating network maps directly conflicts with our § 101 jurisprudence." (Recentive Analytics pp. 14-15).
Thus, as analyzed under the decisions above, the Examiner finds the above arguments unconvincing and the rejection under 35 USC 101 remains.

As to the argument:

"2. The claimed improvements constitute a practical application under Step 2A, Prong 2 of the 2019 PEG

Under Step 2A, Prong 2 of the 2019 Revised Patent Subject Matter Eligibility Guidance, a claim is patent-eligible when it integrates a judicial exception into a practical application. The present claims satisfy this standard because they improve the functioning of a computer-implemented data gathering system by:

- generating higher-quality augmented training data using a trained feature extraction model,
- enabling efficient and consistent propagation of labels across augmented data, and
- optimizing the position and posture of the data obtaining hardware based on a control parameter predicted by a machine learning model.

These improvements enhance both the performance and the accuracy of machine learning model training pipelines, and represent a technological solution to a technological problem, not a mathematical concept or mental process."

The Examiner respectfully disagrees. Applicant asserts above that the "improvements enhance both the performance and the accuracy of machine learning model training pipelines, and represent a technological solution to a technological problem"; however, the test is not whether the claim is confined to a particular field of use or technological environment, see Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1366 (Fed. Cir. 2015) ("[a]n abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment"). The relevant question, even at the first step of the Mayo/Alice analysis, is "whether the claims are directed to an improvement in computer functionality versus being directed to an abstract idea." Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335 (Fed. Cir. 2016).
Here, the invention uses computer technology, but the Specification describes the claimed solution as a scheme for collecting, storing and managing electronic records over time (see, for example, specification para. [0071]: "The label detecting unit 114 performs a labeling process on unlabeled data Un. The label detecting unit 114 stores the success or failure in labeling."). And collecting, storing, and organizing information describes the abstract idea to which Applicant's claims are directed, not an improvement in computer technology. Erie Indemnity Co., 850 F.3d at 1328 ("the heart of the claimed invention lies in creating and using an index to search for and retrieve data ... an abstract concept"). Thus, as the test for patent eligibility is not whether the claim is confined to a particular field of use or technological environment (see, again, Intellectual Ventures I LLC v. Capital One Bank (USA)), the Examiner is unconvinced the claims are directed to eligible subject matter and thus this argument is moot.

As to the argument:

"3. The claims recite significantly more than a judicial exception under Step 2B

Even assuming, arguendo, that any aspect of the claims were to be viewed as involving an abstract idea, the claims satisfy Step 2B because they recite significantly more than the alleged abstract idea. The combined use of (i) a trained feature extraction model for data augmentation and (ii) a machine-learned prediction model to control the posture and position of the data obtainer is not routine, well-understood, or conventional. The Examiner has not identified, and cannot identify, any prior art showing these combined features as conventional. Therefore, the present claims recite an "inventive concept" that transforms any alleged abstract idea into a patent-eligible application.

4.
Conclusion

For the reasons above, the pending claims, when properly considered as a whole:

- are not directed to an abstract idea, but instead to concrete technological improvements; and
- recite significantly more than any alleged abstract idea, including nonconventional machine learning-based data augmentation and physical device control based on predicted control parameters.

Accordingly, Applicant respectfully submits that the rejection under 35 U.S.C. §101 is not applicable and requests its withdrawal."

The Examiner respectfully disagrees. Applicant asserts above that the claims "recite significantly more than any alleged abstract idea, including nonconventional machine learning-based data augmentation and physical device control based on predicted control parameters"; however, the test is not whether the claim is confined to a particular field of use or technological environment, see Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1366 (Fed. Cir. 2015) ("[a]n abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment"). The relevant question, even at the first step of the Mayo/Alice analysis, is "whether the claims are directed to an improvement in computer functionality versus being directed to an abstract idea." Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335 (Fed. Cir. 2016). Here, the invention uses computer technology, but the Specification describes the claimed solution as a scheme for collecting, storing and managing electronic records over time (see, for example, specification para. [0071]: "The label detecting unit 114 performs a labeling process on unlabeled data Un. The label detecting unit 114 stores the success or failure in labeling."). And collecting, storing, and organizing information describes the abstract idea to which Applicant's claims are directed, not an improvement in computer technology.
Erie Indemnity Co., 850 F.3d at 1328 ("the heart of the claimed invention lies in creating and using an index to search for and retrieve data ... an abstract concept"). Thus, as the test for patent eligibility is not whether the claim is confined to a particular field of use or technological environment (see, again, Intellectual Ventures I LLC v. Capital One Bank (USA)), the Examiner is unconvinced the claims are directed to eligible subject matter and thus this argument is moot.

Additionally, Applicant asserts above that the claims "recite significantly more than any alleged abstract idea, including nonconventional machine learning-based data augmentation and physical device control based on predicted control parameters"; however, nowhere does Applicant explain specifically how the invention is directed towards "nonconventional machine learning-based data augmentation"; nor does Applicant explain specifically how the claims provide for "physical device control based on predicted control parameters" beyond the use of generic "control parameters". Thus the Examiner is unconvinced, this argument is moot, and the subject matter eligibility rejection under 35 USC 101 remains.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-5, 7-9 and 11-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1 recites:

(Step 2A, Prong One) performing data augmentation on unlabeled data. The limitation of performing data augmentation on unlabeled data, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting generic computers/medium, nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the computers/medium language, "performing" in the context of this claim encompasses the user manually determining generic "data augmentation" using generic "unlabeled data" steps.

Similarly, the limitations of providing, providing, and controlling, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components. For example, but for the computers/medium language, the providing steps in the context of this claim encompass the user manually receiving generic "augmented data pieces" and performing a generic providing of a generic "specification label" using a generic "controlling" step.

If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion).
Further, these concepts also recite "Certain Methods of Organizing Human Activity" (such as commercial or legal interactions, including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations), where performing generic steps of augmenting data and providing generic specification labels is a method of human activity in commercial or legal interactions. Accordingly, the claim recites an abstract idea.

(Step 2A, Prong Two) This judicial exception is not integrated into a practical application. In particular, the claim only recites one additional element: using computers with a medium/"storing device" to perform the providing, providing, controlling, and performing steps. The computers with a medium/storing device are recited at a high level of generality (i.e., as a generic computer processor performing a generic computer function of "performing" data augmentation) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

(Step 2B) The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of computers with a medium/storing device to perform the providing, providing, controlling, and performing steps amounts to no more than mere instructions to apply the exception using a generic computer component.
Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Referring to claim 3: (Step 2A, Prong One) this merely adds the further abstract limitation "wherein the data obtainer is a camera". (Step 2A, Prong Two) This judicial exception is not integrated into a practical application. In particular, the claim only recites the additional element "wherein the data obtainer is a camera" in performing the aforementioned providing, providing, controlling, and performing steps. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. (Step 2B) The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element "wherein the data obtainer is a camera" amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Referring to claim 4: (Step 2A, Prong One) this merely adds the further abstract limitation "wherein the data obtainer is a touch sensor". (Step 2A, Prong Two) This judicial exception is not integrated into a practical application. In particular, the claim only recites the additional element "wherein the data obtainer is a touch sensor" in performing the aforementioned providing, providing, controlling, and performing steps.
Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. (Step 2B) The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element "wherein the data obtainer is a touch sensor" amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claim 5 recites:

(Step 2A, Prong One) performing data augmentation on unlabeled data. The limitation of performing data augmentation on unlabeled data, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting a generic processor/memory, nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the processor/memory language, "performing" in the context of this claim encompasses the user manually determining generic "data augmentation" using generic "unlabeled data" steps. Similarly, the limitations of providing, providing, and controlling, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components.
For example, but for the processor/memory language, the providing steps in the context of this claim encompass the user manually receiving generic "augmented data pieces" and performing a generic providing of a generic "specification label" with a generic "controlling" operation. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion).

Further, these concepts also recite "Certain Methods of Organizing Human Activity" (such as commercial or legal interactions, including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations), where performing generic steps of augmenting data and providing generic specification labels is a method of human activity in commercial or legal interactions. Accordingly, the claim recites an abstract idea.

(Step 2A, Prong Two) This judicial exception is not integrated into a practical application. In particular, the claim only recites one additional element: using a processor with a memory/storing device to perform the providing, providing, controlling, and performing steps. The processor with a memory is recited at a high level of generality (i.e., as a generic computer processor performing a generic computer function of "performing" data augmentation) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
(Step 2B) The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of a processor with a memory/storing device to perform the providing, providing, controlling, and performing steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Referring to claim 7: (Step 2A, Prong One) this merely adds the further abstract limitation "wherein the data obtainer is a camera". (Step 2A, Prong Two) This judicial exception is not integrated into a practical application. In particular, the claim only recites the additional element "wherein the data obtainer is a camera" in performing the aforementioned providing, providing, controlling, and performing steps. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. (Step 2B) The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element "wherein the data obtainer is a camera" amounts to no more than mere instructions to apply the exception using a generic computer component.
Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Referring to claim 8: (Step 2A, Prong One) this merely adds the further abstract limitation "wherein the data obtainer is a touch sensor". (Step 2A, Prong Two) This judicial exception is not integrated into a practical application. In particular, the claim only recites the additional element "wherein the data obtainer is a touch sensor" in performing the aforementioned providing, providing, controlling, and performing steps. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. (Step 2B) The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element "wherein the data obtainer is a touch sensor" amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claim 9 recites:

(Step 2A, Prong One) performing data augmentation on unlabeled data. The limitation of performing data augmentation on unlabeled data, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting a generic computer-implemented method, nothing in the claim element precludes the step from practically being performed in the mind.
For example, but for the computer-implemented method language, "performing" in the context of this claim encompasses the user manually determining generic "data augmentation" using generic "unlabeled data" steps. Similarly, the limitations of providing, providing, and controlling, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components. For example, but for the computer-implemented method language, the providing steps in the context of this claim encompass the user manually receiving generic "augmented data pieces" and performing a generic providing of a generic "specification label" and generic controlling steps.

If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion). Further, these concepts also recite "Certain Methods of Organizing Human Activity" (such as commercial or legal interactions, including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations), where performing generic steps of augmenting data and providing generic specification labels is a method of human activity in commercial or legal interactions. Accordingly, the claim recites an abstract idea.

(Step 2A, Prong Two) This judicial exception is not integrated into a practical application. In particular, the claim only recites one additional element: using a computer-implemented method with a generic storing device to perform the providing, providing, controlling, and performing steps.
The computer-implemented method is recited at a high level of generality (i.e., as a generic computer processor performing a generic computer function of "performing" data augmentation) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

(Step 2B) The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of a computer-implemented method with a storing device to perform the providing, providing, controlling, and performing steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Referring to claim 11: (Step 2A, Prong One) this merely adds the further abstract limitation "wherein the data obtainer is a camera". (Step 2A, Prong Two) This judicial exception is not integrated into a practical application. In particular, the claim only recites the additional element "wherein the data obtainer is a camera" in performing the aforementioned providing, providing, controlling, and performing steps. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
(Step 2b) The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “wherein the data obtainer is a camera” to perform the aforementioned providing, providing, controlling, and performing steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Referring to claim 12, (Step 2a, Prong One) this claim further merely recites an additional abstract mental step of “wherein the data obtainer is a touch sensor.” (Step 2a, Prong Two) This judicial exception is not integrated into a practical application. In particular, the claim only recites the additional element of “wherein the data obtainer is a touch sensor” to perform the aforementioned providing, providing, controlling, and performing steps. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.

(Step 2b) The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “wherein the data obtainer is a touch sensor” to perform the aforementioned providing, providing, controlling, and performing steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 5, 7, 9 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over McKay et al., US Pub. No. 2021/0192394 A1, in view of Tsukatani, US Pub. No. 2022/0335085 A1, further in view of Jang et al., US Pub. No. 2020/0338722 A1. As to claim 1 (and substantially similar claims 5 and 9), McKay discloses a non-transitory computer-readable recording medium storing a gathering program that causes one or more computers to execute a process (McKay abstract, [0014, 0052, 0232-0235]), the process comprising: performing, by a trained feature extraction model (McKay teaches trained ML labelers, i.e., “a trained feature extraction model”; [0047]: As discussed above, the labeling platform may provide a set of use case templates, where each use case template corresponds to a labeling problem to be solved (e.g., "image classification," "video frame classification," etc.)
and includes an ML labeler configuration. The end user of a labeling platform may select a labeling problem (e.g., select a use case template), provide a minimum amount of training configuration and provide data to be labeled according to the use case; see also [0175]: Thus, an ML labeler created based on the configuration of FIG. 13A and FIG. 13B would represent a model trained using the TensorFlow framework) data augmentation on unlabeled data; (McKay teaches generating an augmented result for each of the labeling requests for unlabeled records, i.e., “data augmentation on unlabeled data”; see Fig. 10, items “Image” -> “Labelling request” -> “Unlabeled Input records” -> “Active Learning Record Selector” -> “Labeled image result”; see also [0005]: Based on the generated inference results, at least a portion of the labeling requests is selected. The generated inference results for the selected labeling requests are corrected using a directed graph of labelers having one or more labelers, where the directed graph generates an augmented result for each of the labeling requests in the selected portion based on associated quality and cost metrics.
The augmented result includes a label corresponding to the data item, where the label meets a target confidence threshold.) McKay does not disclose: providing a specification label to a group of augmented data pieces generated by the data augmentation, the specification label indicating that labels of the augmented data pieces all match; and providing, when a label for one data piece of the augmented data pieces is determined, the label to one or more data pieces each provided with a specification label that is same as a specification label of the one data piece.

However, Tsukatani discloses: providing a specification label to a group of augmented data pieces generated by the data augmentation, the specification label indicating that labels of the augmented data pieces all match (Tsukatani teaches providing identification information via a pseudo-label generation process for clusters of labels, i.e., a “specification label indicating that labels of the augmented data pieces all match”; see [0033]: the feature extractor generation unit 11 executes a pseudo label generation process with the data set A of the labeled data pieces, the data set B of the unlabeled data pieces, and the number S of samples to be additionally labeled as input. In the pseudo label generation process, the feature extractor generation unit 11 provides pseudo labels to each data piece a∈A and each data piece b∈B, and outputs a data set A in which each data piece a is provided with a pseudo label and a data set B in which each data piece b is provided with a pseudo label as pseudo datasets.
On this occasion, the feature extractor generation unit 11 performs k-means clustering on the data set A and the data set B, based on respective intermediate features of each data piece a and each data piece b when each data piece is inputted into a Convolutional Neural Network (CNN), which is the source of the feature extractor, to thereby provide identification information corresponding to a cluster, to which each data piece belongs, to the data piece belonging to the cluster as a pseudo label.) providing, when a label for one data piece of the augmented data pieces is determined, the label to one or more data pieces each provided with a specification label that is same as a specification label of the one data piece (Tsukatani teaches providing identification information corresponding to a cluster, to which each data piece belongs, to the data piece belonging to the cluster as a pseudo label, i.e., providing the label to one or more data pieces each provided with a specification label; see [0033]: On this occasion, the feature extractor generation unit 11 performs k-means clustering on the data set A and the data set B, based on respective intermediate features of each data piece a and each data piece b when each data piece is inputted into the Convolutional Neural Network (CNN), which is the source of the feature extractor, to thereby provide identification information corresponding to a cluster, to which each data piece belongs, to the data piece belonging to the cluster as a pseudo label.)

It would have been obvious to one having ordinary skill in the art at the time of the effective filing date to apply the pseudo-label generation process that provides identification information for clusters of labels, as taught by Tsukatani, to the system of McKay, since it was known in the art that, in data labelling systems, the feature information of each data piece is obtained (extracted) by use of the feature extractor that has provided the pseudo label, which can be automatically
generated only from the data set A of the labeled data pieces and the data set B of the unlabeled data pieces, to each data piece and has learned; therefore, the sampling based on the feature space effective for the target task is performed, and as a result, it is possible to select the data piece, that is to be labeled and is effective for the target task, from among data sets of unlabeled data pieces. (Tsukatani [0047]).

McKay/Tsukatani do not disclose: controlling, based on a control parameter predicted by a machine learning model, a position and a posture of a data obtainer that obtains training data such that a possibility that the label or the first label is successfully provided becomes highest. However, Jang discloses: controlling, based on a control parameter predicted by a machine learning model (Jang teaches using a semantic grasping model for generating control commands, i.e., “controlling based on a control parameter predicted by a machine learning model”; see [0058]: For example, an optimization technique can be utilized to sample a plurality of candidate end effector motion vectors in a given iteration, each of those end effector motion vectors processed (along with the same image(s) 161C) using the semantic grasping model 124B, and one of the sampled candidate end effector motion vectors selected for use in generating control commands in the given iteration; see also [0045]: As illustrated in FIG. 1A, the pose of the vision components 184A relative to the robot 180A is different than the pose of the vision components 184B relative to the robot 180B.
As described herein, in some implementations this may be beneficial to enable generation of varied training examples that can be utilized to train various neural networks to produce corresponding output that is robust to and/or independent of camera calibration; see also [0009]: Following training, (1) a candidate motion vector that defines a candidate motion (if any) of a grasping end effector of a robot (i.e., motion from one pose to another, additional pose), and (2) at least one image that captures at least a portion of the work space of the robot, can be applied as input to the trained joint network. Further, a joint output can be generated using the trained joint network based on the applied inputs.) a position and a posture of a data obtainer that obtains training data such that a possibility that the label or the first label is successfully provided becomes highest (Jang teaches generating motion commands that control the pose of a robot end effector to generate semantic grasp training examples, where the system generates the end effector command to fully or substantially conform to the candidate end effector motion vector with a “highest” value based on its grasp success measure and its probability for a semantic feature that matches the desired object semantic feature, i.e., “a position and a posture of a data obtainer that obtains training data such that a possibility that the label or the first label is successfully provided becomes highest”; see [0152]: For example, the system may generate the end effector command to fully or substantially conform to the candidate end effector motion vector with: the grasp success measure that is most indicative of a successful grasp; and corresponding semantic feature(s) that match the desired object semantic feature(s).
Also, for example, the system may generate the end effector command to fully or substantially conform to the candidate end effector motion vector with a "highest" value based on applying its grasp success measure and its probability for a semantic feature that matches the desired object semantic feature to a function. A control system of a robot of the system may generate motion command(s) to actuate one or more actuators of the robot to move the end effector based on the end effector motion vector. See also [0153-0154]: [0153] Also, for instance, if the result is between the first and second thresholds, a motion command may be generated that substantially or fully conforms to the candidate end effector motion vector with the grasp success measure determined at block 960 that is most indicative of successful grasp and that also includes corresponding semantic feature(s) that correspond to the desired object semantic feature(s). The end effector command generated by the system may be a single group of one or more commands, or a sequence of groups of one or more commands. [0154] The grasp success measure, if no candidate end effector motion vector is utilized to generate new motion commands, may be based on the measure for the candidate end effector motion vector utilized in a previous iteration of the method 900 and/or based on applying a "null" motion vector and the image (and optionally the additional image) at an additional iteration of blocks of method 900. See also [0072]: At block 356, the system determines and implements an end effector movement. For example, the system may generate one or more motion commands to cause one or more of the actuators that control the pose of the end effector to actuate, thereby changing the pose of the end effector.
See also [0043-0049]: [0043] As described in more detail herein, the grasp training examples of grasp training examples database 117 can include semantic grasp training examples that include training example output that identifies semantic feature(s) (e.g., a class) of an object grasped in a corresponding grasp attempt of the training example. [0049] Robots 180A, 180B, and/or other robots may be utilized to perform a large quantity of grasp attempts, and data associated with the grasp attempts may be utilized by the training example generation system 110 to generate grasp training examples of grasp training examples database 117.) It would have been obvious to one having ordinary skill in the art at the time of the effective filing date to apply generating motion commands that control the pose of a robot end effector to generate semantic grasp training examples, as taught by Jang, to the system of McKay/Tsukatani, since it was known in the art that data labelling systems provide the pose of the vision components relative to the robot, where this may be beneficial to enable generation of varied training examples that can be utilized to train various neural networks to produce corresponding output that is robust to and/or independent of camera calibration (see generally Jang [0045]).

As to claim 3, Jang as modified discloses: the non-transitory computer-readable recording medium according to claim 1, wherein the data obtainer is a camera (Jang teaches a robot's camera, i.e., “wherein the data obtainer is a camera”; see [0036]: In some implementations, a semantic grasping model described herein takes, as input, images I_t, I_0, which correspond to the current image seen by a robot's camera (I_t), and an initial image seen by the robot's camera during a grasping episode (I_0).) Referring to claim 7, this dependent claim recites similar limitations as claim 3; therefore, the arguments above regarding claim 3 are also applicable to claim 7.
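Read literally, the claim-1 limitations the rejection walks through describe a simple grouping-and-propagation flow: augment each unlabeled piece, tag the resulting group with one shared "specification label," and, once any one piece in a group is labeled, provide that label to every piece carrying the same tag. The sketch below is an illustrative reading of the claim language only; it is not the applicant's implementation or any cited reference's code, and all names (`augment`, `build_groups`, `propagate`) are hypothetical stand-ins.

```python
# Illustrative sketch of the labeling flow recited in claim 1
# (hypothetical names throughout; not the applicant's actual code).

def augment(piece):
    # Stand-in for the trained feature extraction model's data augmentation
    # of one unlabeled piece; real augmentations might be flips, crops, noise.
    return [f"{piece}#flip", f"{piece}#crop", f"{piece}#noise"]

def build_groups(unlabeled):
    """Provide one specification label per group of augmented data pieces,
    indicating that the labels of the pieces in the group must all match."""
    spec_of = {}  # augmented piece -> specification label
    for spec_id, piece in enumerate(unlabeled):
        for aug in augment(piece):
            spec_of[aug] = spec_id
    return spec_of

def propagate(spec_of, labeled_piece, label):
    """When a label is determined for one data piece, provide that label to
    every piece carrying the same specification label."""
    spec = spec_of[labeled_piece]
    return {p: label for p, s in spec_of.items() if s == spec}

spec_of = build_groups(["img_a", "img_b"])
print(propagate(spec_of, "img_a#crop", "cat"))
# -> {'img_a#flip': 'cat', 'img_a#crop': 'cat', 'img_a#noise': 'cat'}
```

On this reading, the "specification label" is just the group identifier, which is why the rejection maps it onto Tsukatani's cluster IDs: both mechanisms let one determined label flow to every member of a group.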
Referring to claim 11, this dependent claim recites similar limitations as claim 3; therefore, the arguments above regarding claim 3 are also applicable to claim 11.

Claims 4, 8 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over McKay et al., US Pub. No. 2021/0192394 A1, in view of Tsukatani, US Pub. No. 2022/0335085 A1, in view of Jang et al., US Pub. No. 2020/0338722 A1, further in view of Diankov et al., US Patent No. 10,456,915 B1. As to claim 4, while Jang as modified discloses pressure/proximity sensors (see Jang [0170]), McKay/Tsukatani/Jang do not explicitly disclose: wherein the data obtainer is a touch sensor. However, Diankov discloses: the non-transitory computer-readable recording medium according to claim 1, wherein the data obtainer is a touch sensor (Diankov teaches using contact sensors for label scanning in a poseable sensor system, i.e., “wherein the data obtainer is a tactile/touch sensor”; col. 8, ln. 18-60: In some embodiments, the task can include scanning the target object 112, such as for logging the item for shipping/receiving. To accomplish the scanning portion of the task, the imaging devices 222 can include one or more scanners (e.g., barcode scanners and/or QR code scanners) configured to scan the identification information during transfer (e.g., between the start location 114 and the task location 116). Accordingly, the robotic system 100 can calculate a motion plan for presenting one or more portions of the target object 112 to one or more of the scanners. In some embodiments, for example, the sensors 216 can include position sensors 224 (e.g., position encoders, potentiometers, etc.) configured to detect positions of structural members (e.g., the robotic arms and/or the end-effectors) and/or corresponding joints of the robotic system 100. The robotic system 100 can use the position sensors 224 to track locations and/or orientations of the structural members and/or the joints during execution of the task.
In some embodiments, for example, the sensors 216 can include contact sensors 226 (e.g., pressure sensors, force sensors, strain gauges, piezoresistive/piezoelectric sensors, capacitive sensors, elastoresistive sensors, and/or other tactile sensors) configured to measure a characteristic associated with a direct contact between multiple physical structures or surfaces. The contact sensors 226 can measure the characteristic that corresponds to a grip of the end-effector (e.g., the gripper) on the target object 112. Accordingly, the contact sensors 226 can output a contact measure that represents a quantified measure (e.g., a measured force, torque, position, etc.) corresponding to a degree of contact or attachment between the gripper and the target object 112. For example, the contact measure can include one or more force or torque readings associated with forces applied to the target object 112 by the end-effector. Details regarding the contact measure are described below. Under the heading "Initial Pose and Uncertainty Determinations," FIG. 3A, FIG. 3B, and FIG. 3C are illustrations of an object 302 in various poses (e.g., a first pose 312, a second pose 314, and/or a third pose 316). A pose can represent a position and/or an orientation of the object 302. In other words, the pose can include a translational component and/or a rotational component according to a grid system utilized by the robotic system 100).
It would have been obvious to one having ordinary skill in the art at the time of the effective filing date to apply tactile/touch sensors as taught by Diankov to the system of McKay/Tsukatani/Jang, since it was known in the art that data labelling systems provide sensors which can include contact sensors (e.g., pressure sensors, force sensors, strain gauges, piezoresistive/piezoelectric sensors, capacitive sensors, elastoresistive sensors, and/or other tactile sensors) configured to measure a characteristic associated with a direct contact between multiple physical structures or surfaces, where the contact sensors can measure the characteristic that corresponds to a grip of the end-effector (e.g., the gripper) on the target object, and accordingly, the contact sensors can output a contact measure that represents a quantified measure (e.g., a measured force, torque, position, etc.) corresponding to a degree of contact or attachment between the gripper and the target object, where the robotic system can use the position sensors to track locations and/or orientations of the structural members and/or the joints during execution of the task (Diankov col. 8, ln. 33-51).

Referring to claim 8, this dependent claim recites similar limitations as claim 4; therefore, the arguments above regarding claim 4 are also applicable to claim 8. Referring to claim 12, this dependent claim recites similar limitations as claim 4; therefore, the arguments above regarding claim 4 are also applicable to claim 12.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Okuda et al., JP2009211294 A, teaches improvements to provide a neural network device capable of saving labor for complicated work and capable of constructing an optimum model in a shorter time than before, a robot camera control device using the neural network device, and a neural network program.
A robot camera control device (1) includes a sensor camera (10) for photographing a subject, a subject detection device (20) for detecting the position of the subject, a robot camera (30) having a camera for photographing the subject, a robot camera operating device (40) for operating the robot camera (30), and a learning control device (50) for controlling the learning of the neural network device (100) and the photographing operation of the robot camera (30). The learning control device (50) includes input layer optimization means (120) for setting a step interval value and the number of steps when inputting data before the current time.

CONTACT INFORMATION

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EVAN S ASPINWALL, whose telephone number is (571) 270-7723. The examiner can normally be reached Monday-Friday, 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Neveen Abel-Jalil, can be reached at 571-270-0474. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Evan Aspinwall/Primary Examiner, Art Unit 2152

Prosecution Timeline

Mar 20, 2024
Application Filed
Mar 05, 2025
Non-Final Rejection — §101, §103
Jun 11, 2025
Response Filed
Sep 04, 2025
Final Rejection — §101, §103
Dec 08, 2025
Request for Continued Examination
Dec 18, 2025
Response after Non-Final Action
Jan 07, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602420
SYSTEMS AND METHODS FOR GENERATING CODES AND CODE BOOKS BASED USING COSINE PROXIMITY
2y 5m to grant Granted Apr 14, 2026
Patent 12602379
OPTIMIZING DATABASE CURSOR OPERATIONS IN KEY-VALUE STORES
2y 5m to grant Granted Apr 14, 2026
Patent 12596690
SYSTEM FOR INTELLIGENT DATABASE MODELLING
2y 5m to grant Granted Apr 07, 2026
Patent 12572971
ARTIFICIAL CROWD INTELLIGENCE VIA NETWORKING RECOMMENDATION ENGINES
2y 5m to grant Granted Mar 10, 2026
Patent 12572518
DYNAMIC OIL AND GAS DATA QUALITY VISUALIZATION SUGGESTION
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
83%
Grant Probability
99%
With Interview (+16.8%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 669 resolved cases by this examiner. Grant probability derived from career allow rate.
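The with-interview figure above is consistent with simple additive arithmetic: the 83% career allow rate plus the 16.8-point interview lift, truncated for display. A hedged sketch of that presumed calculation follows; the dashboard's actual model is not disclosed, and the function name is invented.

```python
def with_interview_pct(allow_rate_pct, interview_lift_pct):
    # Presumed additive model: base career allow rate plus the observed
    # interview lift, capped at 100%.
    return min(allow_rate_pct + interview_lift_pct, 100.0)

# 83% career allow rate + 16.8-point interview lift -> 99.8, shown as 99%
print(int(with_interview_pct(83.0, 16.8)))
```

If the page instead rounded to the nearest integer it would display 100%, so the 99% shown suggests truncation (or a cap below 100%) rather than rounding.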
