Prosecution Insights
Last updated: April 19, 2026
Application No. 17/800,305

TRAINING DATA GENERATION DEVICE, TRAINING DATA GENERATION METHOD, AND PROGRAM RECORDING MEDIUM

Final Rejection (§101, §103)
Filed: Aug 17, 2022
Examiner: BEJCEK II, ROBERT H
Art Unit: 2148
Tech Center: 2100 (Computer Architecture & Software)
Assignee: NEC Corporation
OA Round: 2 (Final)
Grant Probability: 64% (Moderate)
OA Rounds: 3-4
To Grant: 3y 8m
With Interview: 87%

Examiner Intelligence

Grants 64% of resolved cases.

Career Allow Rate: 64% (162 granted / 251 resolved; +9.5% vs TC avg)
Interview Lift: +22.4% (strong), resolved cases with interview vs. without
Typical Timeline: 3y 8m avg prosecution; 24 currently pending
Career History: 275 total applications across all art units

Statute-Specific Performance

§101: 22.6% (-17.4% vs TC avg)
§103: 40.1% (+0.1% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 21.4% (-18.6% vs TC avg)
Tech Center averages are estimates shown for comparison. Based on career data from 251 resolved cases.
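As a sanity check, the headline figures above follow directly from the raw counts. A minimal sketch of the arithmetic (the ~55% Tech Center baseline is an assumption inferred from the +9.5% delta, not a figure from the underlying dataset):

```python
# Hypothetical sketch of how dashboard figures like these are derived
# from raw counts. The 55.0% TC-average baseline is an assumption
# back-computed from the displayed +9.5% delta.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def delta_vs_avg(rate: float, tc_avg: float) -> float:
    """Signed difference between an examiner's rate and the TC average."""
    return rate - tc_avg

rate = allow_rate(162, 251)                 # counts shown in the panel above
print(round(rate, 1))                       # 64.5, displayed rounded as 64%
print(round(delta_vs_avg(rate, 55.0), 1))   # 9.5, i.e. +9.5% vs TC avg
print(round(87.0 - rate, 1))                # ~22.5, in line with the +22.4% interview lift
```

The same subtraction explains the interview figures: the 87% with-interview rate minus the 64% career rate yields roughly the displayed +22.4% lift (rounding accounts for the small difference).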

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Title

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. Examiner believes that the title of the invention is imprecise. A descriptive title indicative of the invention will help in proper indexing, classifying, searching, etc. See MPEP 606.01. However, the title of the invention should be limited to 500 characters. Examiner suggests including the aspect(s) of the claims which Applicant believes to be novel or nonobvious over the prior art.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-11 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claim 1 is a device claim. Claim 9 is a method claim. Claim 11 is a CRM claim. Therefore, claims 1, 9, and 11 are directed to either a process, machine, manufacture, or composition of matter.
With respect to Claim 1:

Step 2A Prong 1: The following limitations are mental processes:

generate information regarding the smell data by performing voice recognition of the audio information (a user can manually generate information regarding the smell data by performing voice recognition of the audio information);

generate a plurality of label candidates based on the information regarding the smell data (a user can manually generate a plurality of label candidates based on the information regarding the smell data);

generate training data based on the selected label and the smell data (a user can manually generate training data based on the selected label and the smell data).

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements are:

at least one memory storing instructions (mere instructions to apply the exception using a generic computer component);

at least one processor configured to access the at least one memory and execute the instructions (mere instructions to apply the exception using a generic computer component);

acquire smell data based on a time-series data of detected values output from a membrane-type surface stress sensor (MSS), in which a detected value thereof changes according to attachment and detachment of a molecule contained in a target gas containing a plurality of molecules of different types, wherein the MSS outputs the detected values based on changes in physical quantities of a member of the sensor that occur in response to the attachment and detachment of the molecule with respect to a receptor (adding insignificant extra-solution activity to the judicial exception; see MPEP 2106.05(g));

obtain audio information from a microphone corresponding to the acquired smell data (adding insignificant extra-solution activity to the judicial exception; see MPEP 2106.05(g));

control a display device to display a selectable graphical user interface (GUI) comprising the generated plurality of label candidates (mere instructions to apply the exception using a generic computer component, and adding insignificant extra-solution activity to the judicial exception; see MPEP 2106.05(g));

receive selection of a label from the plurality of label candidates displayed on the GUI (adding insignificant extra-solution activity to the judicial exception; see MPEP 2106.05(g)).

Step 2B: The claim does not include additional elements, considered individually and in combination, that are sufficient to amount to significantly more than the judicial exception. The memory and processor elements are mere instructions to apply the exception using a generic computer component. As to the remaining additional elements (acquiring smell data from the MSS; obtaining audio information from a microphone; controlling a display device to display the GUI comprising the generated plurality of label candidates; receiving selection of a label from the GUI), MPEP 2106.05(d)(II) indicates that merely "storing and retrieving information in memory" or "receiving or transmitting data over a network" is a well-understood, routine, conventional function when it is claimed in a merely generic manner, as it is in the present claim. A conclusion that these claimed steps are well-understood, routine, conventional activity is thereby supported under Berkheimer.

Conclusion: The claim is not patent eligible. Claims 9 and 11 are rejected on the same grounds as claim 1. Additionally, for claim 11: claim 11 has the additional element of a CRM.
This element is mere instructions to apply the exception using a generic computer component under Step 2A Prong 2 and Step 2B.

Regarding Claims 2-5 and 10: Each limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, other than the additional elements, nothing in the claim limitation precludes the step from practically being performed in the mind.

For claim 2: the limitation encompasses the user manually generating the plurality of label candidates based on the speech, wherein the audio information is a speech regarding the smell data. The limitation includes the additional element of "the at least one processor is further configured to execute the instructions." These judicial exceptions are not integrated into a practical application. This additional element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, it does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element amounts to no more than mere instructions to apply the exception using a generic computer component or operation, which cannot provide an inventive concept. Accordingly, the claim is not patent eligible.
For claim 3: the limitation encompasses the user manually generating the plurality of label candidates further based on the text. The limitation includes the additional elements of "the at least one processor is further configured to execute the instructions" and "receive text data regarding the smell data."

These judicial exceptions are not integrated into a practical application. The processor element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component. The element of receiving text data regarding the smell data is mere extra-solution activity. Accordingly, these elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the processor element amounts to no more than mere instructions to apply the exception using a generic computer component or operation, which cannot provide an inventive concept. The element of receiving text data regarding the smell data recites merely "storing and retrieving information in memory" or "receiving or transmitting data over a network," a well-understood, routine, conventional function when claimed in a merely generic manner (MPEP 2106.05(d)(II)); a conclusion that this step is well-understood, routine, conventional activity is thereby supported under Berkheimer.
Accordingly, the claim is not patent eligible.

For claim 4: the limitation encompasses the user manually using an image, including a measurement target of the smell data, as the information regarding the smell data. The limitation includes the additional elements of "the at least one processor is further configured to execute the instructions" and "output the generated label candidates based on the image."

These judicial exceptions are not integrated into a practical application. The processor element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component. The element of outputting the generated label candidates based on the image adds insignificant extra-solution activity to the judicial exception (see MPEP 2106.05(g)). Accordingly, these elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the processor element amounts to no more than mere instructions to apply the exception using a generic computer component or operation, which cannot provide an inventive concept. The element of outputting the generated label candidates based on the image recites merely "storing and retrieving information in memory" or "receiving or transmitting data over a network," a well-understood, routine, conventional function when claimed in a merely generic manner (MPEP 2106.05(d)(II)); a conclusion that this step is well-understood, routine, conventional activity is thereby supported under Berkheimer. Accordingly, the claim is not patent eligible.

For claim 5: the limitation encompasses the user manually generating the label candidates based on the acquired smell data, wherein the information regarding the smell data is a trained model trained using a relationship between the smell data and the label. The limitation includes the additional elements of "the at least one processor is further configured to execute the instructions" and the trained model.

These judicial exceptions are not integrated into a practical application. The processor element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component. The trained-model element merely adds the words "apply it" (or an equivalent) to the judicial exception, amounts to mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). Accordingly, these elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the processor element amounts to no more than mere instructions to apply the exception using a generic computer component or operation, which cannot provide an inventive concept, and the trained-model element merely adds the words "apply it" (or an equivalent) to the judicial exception (see MPEP 2106.05(f)). Accordingly, the claim is not patent eligible.

For claim 10: the limitation includes the additional element of generating a learning model based on the generated training data. This judicial exception is not integrated into a practical application. The element of generating a learning model based on the generated training data merely adds the words "apply it" (or an equivalent) to the judicial exception, amounts to mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). Accordingly, it does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the element of generating a learning model based on the generated training data merely adds the words "apply it" (or an equivalent) to the judicial exception (see MPEP 2106.05(f)). Accordingly, the claim is not patent eligible.

Regarding Claims 6-8: These limitations, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind. That is, nothing in the claim limitations precludes the steps from practically being performed in the mind.

For claim 6: the limitation encompasses the user manually using a trained model trained using a relationship between the smell data and a sensory evaluation result for a smell.

For claim 7: the limitation encompasses the user manually using a trained model trained using a relationship between the smell data and data indicating a chemical property of a measurement target of the smell data.

For claim 8: the limitation encompasses the user manually using a trained model trained using a relationship between the smell data and data indicating a biological reaction when sniffing the smell.

These judicial exceptions are not integrated into a practical application; in particular, the claims do not recite any additional elements, and therefore do not impose any meaningful limits on practicing the abstract idea. The claims likewise do not include additional elements sufficient to amount to significantly more than the judicial exception, as no additional elements are recited. Accordingly, the claims are not patent eligible.
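For orientation, the claim-1 limitations analyzed above describe a labeling pipeline: acquire an MSS time series, transcribe operator speech, derive label candidates, let the operator select one, and pair the selection with the smell data. A minimal sketch of that flow, in which every function name, data shape, and transcript is a hypothetical stand-in rather than the applicant's disclosed implementation:

```python
# Hypothetical sketch of the claim-1 pipeline discussed above.
# All names, shapes, and the sample transcript are illustrative
# assumptions; this is not the applicant's disclosed implementation.

from dataclasses import dataclass

@dataclass
class TrainingExample:
    smell_data: list[float]   # time series of MSS detected values
    label: str                # label selected by the operator

def acquire_smell_data(sensor_readings: list[float]) -> list[float]:
    """Acquire smell data from MSS time-series output (stubbed)."""
    return sensor_readings

def recognize_speech(audio: bytes) -> str:
    """Stand-in for a voice-recognition step producing text."""
    return "smoky, like roasted coffee"   # placeholder transcript

def label_candidates(transcript: str) -> list[str]:
    """Generate label candidates by splitting the recognized text."""
    return [w.strip() for w in transcript.replace("like", ",").split(",") if w.strip()]

def generate_training_data(smell: list[float], selected: str) -> TrainingExample:
    """Pair the operator-selected label with the smell data."""
    return TrainingExample(smell_data=smell, label=selected)

smell = acquire_smell_data([0.1, 0.4, 0.9, 0.7])       # MSS detected values
candidates = label_candidates(recognize_speech(b""))   # candidates shown on a GUI
example = generate_training_data(smell, candidates[0]) # operator selects one
print(example.label)
```

Under the Office Action's framing, the middle steps (candidate generation, selection) are the asserted mental processes, while the sensor, microphone, and GUI calls are the "additional elements" treated as extra-solution activity.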
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 2, 5, and 9-11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shinzaki et al. (hereinafter Shinzaki), U.S. Patent Application Publication 2021/0158101, in view of Kalashnikov et al. (hereinafter Kalashnikov), "A Semantics-Based Approach for Speech Annotation of Images," further in view of Imamura et al. (hereinafter Imamura), "Smell identification of spices using nanomechanical membrane-type surface stress sensors."
Regarding Claim 1, Shinzaki discloses a training data generation device comprising: at least one memory storing instructions ["implemented by a computer having a known hardware configuration, and includes…memory" ¶68]; and at least one processor configured to access the at least one memory ["implemented by a computer having a known hardware configuration, and includes: a processor" ¶68] and execute the instructions to: control a display device to display a selectable graphical user interface (GUI) comprising the generated plurality of label candidates ["the recognition result screen in which a plurality of candidate objects (objects a, c, g) with respective scores equal to or higher than a threshold value are arranged together" ¶85; Fig. 9]; receive selection of a label from the plurality of label candidates displayed on the GUI ["When a user determines that one of the objects a, c, and g corresponds to the recognition target, the user selects (clicks) an image of that object on the recognition result screen 75 as an approval operation" ¶85; Fig. 9]; and generate training data based on the selected label and the smell data ["in generation of training data" ¶97].

However, Shinzaki fails to explicitly disclose: obtain audio information from a microphone corresponding to the acquired smell data; generate information regarding the smell data by performing voice recognition of the audio information; and generate a plurality of label candidates based on the information regarding the smell data.

Kalashnikov discloses: obtain audio information from a microphone corresponding to the acquired smell data ["the user would take a picture and speak the desired tags into the device's microphone" §1 ¶4]; generate information regarding the smell data by performing voice recognition of the audio information ["A speech recognizer would transcribe the audio signal into text" §1 ¶4]; and generate a plurality of label candidates based on the information regarding the smell data ["This text can be used in assigning tags to the image" §1 ¶4].

It would have been obvious to one having ordinary skill in the art, having the teachings of Shinzaki and Kalashnikov before him, before the effective filing date of the claimed invention, to modify the device of Shinzaki to incorporate the speech labeling of Kalashnikov. Given the advantage of having all the benefits of human tagging without the cumbersomeness and impracticality, one having ordinary skill in the art would have been motivated to make this obvious modification.

However, Shinzaki fails to explicitly disclose: acquire smell data based on a time-series data of detected values output from a membrane-type surface stress sensor (MSS), in which a detected value thereof changes according to attachment and detachment of a molecule contained in a target gas containing a plurality of molecules of different types, wherein the MSS outputs the detected values based on changes in physical quantities of a member of the sensor that occur in response to the attachment and detachment of the molecule with respect to a receptor.

Imamura discloses this limitation ["identification of spices by smell using nanomechanical membrane-type surface stress sensors (MSS). Features were extracted from the sensing signals obtained from four MSS coated with different types of polymers, focusing on the chemical interactions between polymers and odor molecules." Abstract].

It would have been obvious to one having ordinary skill in the art, having the teachings of Shinzaki, Kalashnikov, and Imamura before him, before the effective filing date of the claimed invention, to modify the combination to substitute smell data for image data and to incorporate acquiring the smell data by use of an MSS as in Imamura. Given the advantage of utilizing smell information for labeling data and using an accurate smell sensor to obtain the smell data, one having ordinary skill in the art would have been motivated to make this obvious modification.
Regarding Claim 2, Shinzaki, Kalashnikov, and Imamura disclose the training data generation device according to claim 1. Shinzaki further discloses generate the plurality of label candidates ["a plurality of candidate objects (objects a, c, g) with respective scores equal to or higher than a threshold value" ¶85]. However, Shinzaki fails to explicitly disclose wherein the information is a speech regarding the smell data, and the at least one processor is further configured to execute the instructions to: generate the plurality of label candidates based on the speech.

Kalashnikov discloses this limitation ["the user would take a picture and speak the desired tags into the device's microphone. A speech recognizer would transcribe the audio signal into text. The speech to text transcription could either happen on the device itself or be done on a remote machine. This text can be used in assigning tags to the image." §1 ¶4]. It would have been obvious to one having ordinary skill in the art, having the teachings of Shinzaki and Kalashnikov before him, before the effective filing date of the claimed invention, to modify the device of Shinzaki to incorporate the speech labeling of Kalashnikov. Given the advantage of having all the benefits of human tagging without the cumbersomeness and impracticality, one having ordinary skill in the art would have been motivated to make this obvious modification.

However, Shinzaki fails to explicitly disclose the smell data. Imamura discloses the smell data ["identification of spices by smell using nanomechanical membrane-type surface stress sensors (MSS). Features were extracted from the sensing signals obtained from four MSS coated with different types of polymers, focusing on the chemical interactions between polymers and odor molecules." Abstract]. It would have been obvious to one having ordinary skill in the art, having the teachings of Shinzaki, Kalashnikov, and Imamura before him, before the effective filing date of the claimed invention, to modify the combination to substitute smell data for image data and to incorporate acquiring the smell data by use of an MSS as in Imamura. Given the advantage of utilizing smell information for labeling data and using an accurate smell sensor to obtain the smell data, one having ordinary skill in the art would have been motivated to make this obvious modification.

Regarding Claim 5, Shinzaki, Kalashnikov, and Imamura disclose the training data generation device according to claim 1. Shinzaki further discloses wherein the information regarding the smell data is a trained model trained using a relationship between the smell data and the label ["an identification model" ¶25; "a plurality of candidate objects (objects a, c, g) with respective scores equal to or higher than a threshold value" ¶85], and the at least one processor is further configured to execute the instructions to: generate the label candidates based on the acquired smell data and the trained model ["An object ID information acquirer 12 is configured to acquire one or more candidate objects which the server device 4 recognizes from captured images and respective scores (recognition confidence scores based on a prescribed identification model) therefor." ¶56]. However, Shinzaki fails to explicitly disclose smell data. Imamura discloses smell data ["Features were extracted from the sensing signals obtained from four MSS coated with different types of polymers" Abstract].

It would have been obvious to one having ordinary skill in the art, having the teachings of Shinzaki, Kalashnikov, and Imamura before him, before the effective filing date of the claimed invention, to modify the combination to incorporate the smell analysis of Imamura. Given the advantage of generating training data using various data types to be able to train models, one having ordinary skill in the art would have been motivated to make this obvious modification.

Claim 9 is rejected on the same grounds as claim 1.

Regarding Claim 10, Shinzaki, Kalashnikov, and Imamura disclose the training data generation method according to claim 9. Shinzaki further discloses generating a learning model based on the generated training data ["a learner is trained with the collected training data to create a trained model" ¶2].

Claim 11 is rejected on the same grounds as claim 1.

Claim(s) 3-4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shinzaki, Kalashnikov, and Imamura, in view of Todoran, "Smelling Objects for Multimedia Database Applications."

Regarding Claim 3, Shinzaki, Kalashnikov, and Imamura disclose the training data generation device according to claim 1. Shinzaki further discloses generate the plurality of label candidates ["a plurality of candidate objects (objects a, c, g) with respective scores equal to or higher than a threshold value" ¶85]. However, Shinzaki fails to explicitly disclose wherein the at least one processor is further configured to execute the instructions to: receive text data regarding the smell data, and generate the plurality of label candidates further based on the text.
Todoran discloses wherein the at least one processor is further configured to execute the instructions to: received text data regarding the smell data, and generate the plurality of label candidates further based on the text [“integrating digital smell along with audio-visual information” §1 ¶1; “four composite-media object classes integrating olfactory information along with the typical multimedia data (text, image, audio, and video)” §3 ¶2]. It would have been obvious to one having ordinary skill in the art, having the teachings of Shinzaki, Kalashnikov, Imamura, and Todoran before him before the effective filing date of the claimed invention, to modify the combination to incorporate the inclusion of text information along with smell information of Todoran. Given the advantage of increase classification accuracy by including additional variables of different types, one having ordinary skill in the art would have been motivated to make this obvious modification. Regarding Claim 4, Shinzaki, Kalashnikov, and Imamura disclose the training data generation device according to claim 1. Shinzaki further discloses including a measurement target of the smell data [“an image acquirer 11 is configured to sequentially acquires captured images generated by a corresponding image capturing device 2. An object ID information acquirer 12 is configured to acquire one or more candidate objects which the server device 4 recognizes from captured images and respective scores (recognition confidence scores based on a prescribed identification model) therefor.” ¶56], and output the generated label candidates [“recognition result screen in which a plurality of candidate objects (objects a, c, g) with respective scores equal to or higher than a threshold value” ¶85]. 
However, Shinzaki fails to explicitly disclose wherein the information regarding the smell data is an image including a measurement target of the smell data, and the at least one processor is further configured to execute the instructions to: output the generated candidate labels based on the image. Todoran discloses wherein the information regarding the smell data is an image including a measurement target of the smell data, and the at least one processor is further configured to execute the instructions to: output the generated candidate labels based on the image [“integrating digital smell along with audio-visual information” §1 ¶1; “four composite-media object classes integrating olfactory information along with the typical multimedia data (text, image, audio, and video)” §3 ¶2]. It would have been obvious to one having ordinary skill in the art, having the teachings of Shinzaki, Kalashnikov, Imamura, and Todoran before him before the effective filing date of the claimed invention, to modify the combination to incorporate the inclusion of image information along with smell information of Todoran. Given the advantage of increased classification accuracy from including additional variables of different types, one having ordinary skill in the art would have been motivated to make this obvious modification.

Claims 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Shinzaki, Kalashnikov, and Imamura in view of Oshita et al. (hereinafter Oshita), U.S. Patent Application Publication 2020/0240669.

Regarding Claim 6, Shinzaki, Kalashnikov, and Imamura disclose the training data generation device according to claim 5. Shinzaki further discloses wherein the trained model is trained using a relationship between [“a trained model trained with training data” ¶59]. However, Shinzaki fails to explicitly disclose the smell data and a sensory evaluation result for a smell.
Oshita discloses the smell data and a sensory evaluation result for a smell [“database 13 is for accumulating evaluation data obtained by associating the output of the detecting unit 11 and the output of the input unit” ¶38; “a gas sensor (first sensor) that detects the smell in the room” ¶26]. It would have been obvious to one having ordinary skill in the art, having the teachings of Shinzaki, Kalashnikov, Imamura, and Oshita before him before the effective filing date of the claimed invention, to modify the combination to incorporate the smell database of Oshita. Given the advantage of extending data evaluation to integrate smells, one having ordinary skill in the art would have been motivated to make this obvious modification.

Regarding Claim 7, Shinzaki, Kalashnikov, and Imamura disclose the training data generation device according to claim 5. Shinzaki further discloses wherein the trained model is trained using a relationship between [“a trained model trained with training data” ¶59]. However, Shinzaki fails to explicitly disclose the smell data and data indicating a chemical property of a measurement target of the smell data. Oshita discloses the smell data and data indicating a chemical property of a measurement target of the smell data [“database 13 is for accumulating evaluation data obtained by associating the output of the detecting unit 11 and the output of the input unit” ¶38; “QCM sensor includes an oscillator and a gas absorbing film” ¶29]. It would have been obvious to one having ordinary skill in the art, having the teachings of Shinzaki, Kalashnikov, Imamura, and Oshita before him before the effective filing date of the claimed invention, to modify the combination to incorporate the smell database of Oshita. Given the advantage of extending data evaluation to integrate smells, one having ordinary skill in the art would have been motivated to make this obvious modification.
Regarding Claim 8, Shinzaki, Kalashnikov, and Imamura disclose the training data generation device according to claim 5. Shinzaki further discloses wherein the trained model is trained using a relationship between [“a trained model trained with training data” ¶59]. However, Shinzaki fails to explicitly disclose the smell data and data indicating a biological reaction when sniffing the smell. Oshita discloses the smell data and data indicating a biological reaction when sniffing the smell [“database 13 is for accumulating evaluation data obtained by associating the output of the detecting unit 11 and the output of the input unit” ¶38; “a gas sensor (first sensor) that detects the smell in the room” ¶26]. It would have been obvious to one having ordinary skill in the art, having the teachings of Shinzaki, Kalashnikov, Imamura, and Oshita before him before the effective filing date of the claimed invention, to modify the combination to incorporate the smell database of Oshita. Given the advantage of extending data evaluation to integrate smells, one having ordinary skill in the art would have been motivated to make this obvious modification.

Examiner’s Note

The Examiner respectfully requests that the Applicant, in preparing responses, fully consider the entirety of the reference(s) as potentially teaching all or part of the claimed invention. It is noted, REFERENCES ARE RELEVANT AS PRIOR ART FOR ALL THEY CONTAIN. “The use of patents as references is not limited to what the patentees describe as their own inventions or to the problems with which they are concerned. They are part of the literature of the art, relevant for all they contain.” In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)). A reference may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art, including non-preferred embodiments (see MPEP 2123).
The Examiner has cited particular locations in the reference(s) as applied to the claim(s) above for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to the specific limitations within the individual claim(s), typically other passages and figures will apply as well. Additionally, any claim amendments for any reason should include remarks indicating clear support in the originally filed specification.

Response to Arguments

Regarding the Title, the addition of the phrase FOR OBTAINING DESIRED LABEL does not make the title a descriptive title indicative of the invention that will help in proper indexing, classifying, or searching. The title remains generic. Specific elements should be included which differentiate the invention based on how the invention functions. For example, information such as using smell data and speech data to generate a plurality of candidate labels and providing a GUI for a user to select the correct label should be included.

Regarding the §101 rejections, Applicant's arguments have been fully considered but have been found unpersuasive. Applicant argues that 1) claim 1 does not recite a judicial exception, 2) the claim integrates the judicial exception into a practical application, and 3) the claim recites significantly more than the judicial exception. Examiner disagrees for at least the following reasons. First, the claim does recite a judicial exception in the limitations that can be performed mentally by a person. For example, generating information regarding the smell data by performing voice recognition of the audio information can be done manually by a person listening to the audio and then generating information regarding the smell. The other two generating steps can likewise be done manually by a person. Second, the additional elements do not integrate the judicial exception into a practical application.
The memory, processor, and display are merely generic computer components. The acquiring smell data, obtaining audio information, controlling a display, and receiving a selection are all merely extra-solution activity involving the transmission of data. Additionally, any alleged improvement is to the abstract idea. None of these additional elements integrates the judicial exception into a practical application. Third, the additional elements do not provide an inventive concept which is significantly more than the judicial exception. The memory, processor, and display are merely generic computer components. The acquiring smell data, obtaining audio information, controlling a display, and receiving a selection are well-understood, routine, and conventional as explained in MPEP 2106.05(d)(II). Additionally, the argument that the cited prior art does not disclose the features is irrelevant. Examiner refers Applicant to MPEP 2106, which states, "The question of whether a particular claimed invention is novel or obvious is 'fully apart' from the question of whether it is eligible. Diamond v. Diehr, 450 U.S. 175, 190, 209 USPQ 1, 9 (1981)." This is further supported by SAP America v. InvestPic, a precedential case of the Federal Circuit, in which the court states:

We may assume that the techniques claimed are “[g]roundbreaking, innovative, or even brilliant,” but that is not enough for eligibility. Ass’n for Molecular Pathology v. Myriad Genetics, Inc., 569 U.S. 576, 591 (2013); accord buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1352 (Fed. Cir. 2014). Nor is it enough for subject-matter eligibility that claimed techniques be novel and nonobvious in light of prior art, passing muster under 35 U.S.C. §§ 102 and 103. See Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 89-90 (2012); Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1151 (Fed. Cir. 2016) (“[A] claim for a new abstract idea is still an abstract idea.
The search for a § 101 inventive concept is thus distinct from demonstrating § 102 novelty.”); Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1315 (Fed. Cir. 2016) (same for obviousness) (Symantec). The claims here are ineligible because their innovation is an innovation in ineligible subject matter. Accordingly, the §101 rejections are maintained.

Regarding the prior art rejections, Applicant's arguments with respect to the claims have been considered but are moot because the arguments do not apply to the references as used in the current rejection of the limitations.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT H BEJCEK II, whose telephone number is (571) 270-3610. The examiner can normally be reached Monday - Friday, 9:00am - 5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle T. Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/R.B./
Examiner, Art Unit 2148

/MICHELLE T BECHTOLD/
Supervisory Patent Examiner, Art Unit 2148

Prosecution Timeline

Aug 17, 2022 — Application Filed
Jun 10, 2025 — Non-Final Rejection (§101, §103)
Aug 21, 2025 — Interview Requested
Sep 03, 2025 — Applicant Interview (Telephonic)
Sep 03, 2025 — Examiner Interview Summary
Sep 12, 2025 — Response Filed
Sep 30, 2025 — Final Rejection (§101, §103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12554961 — BLOCK TRANSFER OF NEURON OUTPUT VALUES THROUGH DATA MEMORY FOR NEUROSYNAPTIC PROCESSORS — Granted Feb 17, 2026 (2y 5m to grant)
Patent 12530563 — PROVIDING ARTIFICIAL INTELLIGENCE BASED MODEL TO NODE BASED ON REPRESENTATION OF TASK PERFORMED BY ARTIFICIAL INTELLIGENCE BASED MODEL — Granted Jan 20, 2026 (2y 5m to grant)
Patent 12400109 — FUNCTIONAL SYNTHESIS OF NETWORKS OF NEUROSYNAPTIC CORES ON NEUROMORPHIC SUBSTRATES — Granted Aug 26, 2025 (2y 5m to grant)
Patent 12393853 — PROJECTING DATA TRENDS USING CUSTOMIZED MODELING — Granted Aug 19, 2025 (2y 5m to grant)
Patent 12361314 — Creation, Use And Training Of Computer-Based Discovery Avatars — Granted Jul 15, 2025 (2y 5m to grant)
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 64%
With Interview: 87% (+22.4%)
Median Time to Grant: 3y 8m
PTA Risk: Moderate

Based on 251 resolved cases by this examiner. Grant probability derived from career allow rate.
