Prosecution Insights
Last updated: April 19, 2026
Application No. 18/036,707

MACHINE-LEARNED MODELS FOR SENSORY PROPERTY PREDICTION

Non-Final OA: §103, §112

Filed: May 12, 2023
Examiner: BRAHMACHARI, MANDRITA
Art Unit: 2144
Tech Center: 2100 — Computer Architecture & Software
Assignee: Osmo Labs PBC
OA Round: 1 (Non-Final)

Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (311 granted / 407 resolved), +21.4% vs Tech Center average — above average
Interview Lift: strong, +29.8% across resolved cases with an interview
Avg Prosecution: 3y 0m (27 applications currently pending)
Total Applications: 434, across all art units

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 54.5% (+14.5% vs TC avg)
§102: 7.8% (-32.2% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 407 resolved cases.

Office Action (§103, §112)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is in response to the claims dated 12/8/2023. Claims pending in the case: 1-18. Claims withdrawn: 19-20.

Election/Restrictions

The application has been restricted: Claims 1-18 (Group I) pertain to training a model, and Claims 19-20 (Group II) pertain to designing a molecular structure; these two inventions are distinct. The inventions have acquired a separate status in the art due to their recognized divergent subject matter. Applicant's attorney representative Michael Schmitt confirmed election of Claims 1-18 (Group I) without traverse during a telephone conference on 2/2/2026. Please refer to the attached interview record. Applicant is requested to cancel the withdrawn claims.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 14 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

Claim 14 in the relevant part reads: "identifying, by the one or more computing devices, other molecules that have sensory properties that are similar to the predicted sensory properties of the selected molecule …". The limitation uses the relative term "similar," which is not defined in the specification.
Based on the claim language, it is unclear what criteria may be used to determine that the properties are similar. As such, a person of ordinary skill in the art would not be apprised of the metes and bounds of the invention. For the purpose of examination, the limitation is interpreted as identifying sensory properties of molecules.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Lengeling (Machine Learning for Scent: Learning Generalizable Perceptual Representations of Small Molecules) in view of Pan (A Survey on Transfer Learning).
Regarding Claim 1, Lengeling teaches: A computer-implemented method for training a sensory prediction model for predicting sensory properties for a prediction task having limited available training data for a second sensory prediction task, the computer-implemented method comprising:

obtaining, by a computing system comprising one or more computing devices, a first sensory prediction task training dataset comprising first training data associated with a first sensory prediction task, the first training data comprising molecular structure data labeled with first sensory properties associated with the first sensory prediction task (Lengeling: Pg. 2, Section 1 [3]: obtain dataset to train a model to predict odor based on a molecule's graph structure with labeled odor descriptors);

training, by the computing system, a machine-learned sensory prediction model based at least in part on the first sensory prediction task training dataset to predict the first sensory properties associated with the first sensory prediction task (Lengeling: Pg. 2, Section 1 [3]; Pg. 3, Section 3.1: train model to predict specific odors such as floral (task));

obtaining, by the computing system, a second sensory prediction task training dataset comprising second training data associated with a second sensory prediction task, the second training data comprising molecular structure data labeled with second sensory properties associated with the second sensory prediction task, wherein a number of data items of the first sensory prediction task training dataset is greater than a number of data items of the second sensory prediction task training dataset (Lengeling: Pg. 8, Section 6.3: perform transfer learning using limited data on a second new prediction task); and

training, by the computing system, the machine-learned sensory prediction model based at least in part on the second sensory prediction task training dataset to predict the second sensory properties associated with the second sensory prediction task (Lengeling: Pg. 8, Section 6.3: transfer learning using limited data on a second new prediction task).

Although not explicitly recited, it is obvious that the transfer learning process may involve training using the second set of limited data. Nonetheless, Pan teaches training, by the computing system, the machine-learned model based at least in part on the second prediction task training dataset (Pan: Pg. 3, Table 1; Pg. 4, Table 2; Pg. 11 [2]: to adapt the learned model, the model is trained using fewer training data).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lengeling and Pan because the combination would enable using transfer learning to train a model for a new task. One of ordinary skill in the art would have been motivated to combine the teachings because in the case where we "have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts" (see Pan, Abstract).

The Examiner further notes that the difference in the size of the datasets is not functionally involved in the steps as claimed. Thus, this distinction in dataset size does not distinguish the claimed invention from retraining a model with a different data set.
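The transfer-learning pattern at the center of this rejection (pretrain on a large first-task dataset, then fine-tune the same weights on a small second-task dataset) can be illustrated with a minimal, hypothetical sketch. All names, shapes, and data below are illustrative assumptions, not taken from the application or the cited references:

```python
import numpy as np

rng = np.random.default_rng(0)

# Large "first task" dataset: 500 molecules x 16 structural features.
X1 = rng.normal(size=(500, 16))
w_true = rng.normal(size=16)
y1 = (X1 @ w_true > 0).astype(float)          # first-task sensory labels

# Small "second task" dataset: only 20 labeled molecules (limited data).
X2 = rng.normal(size=(20, 16))
y2 = (X2 @ (w_true + 0.1 * rng.normal(size=16)) > 0).astype(float)

def train(X, y, w, lr=0.1, steps=200):
    """Logistic-regression gradient descent, starting from weights w."""
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Step 1: train on the large first-task dataset from scratch.
w_pretrained = train(X1, y1, np.zeros(16))

# Step 2: transfer -- fine-tune the *same* weights on the small
# second-task dataset, rather than training from a fresh initialization.
w_finetuned = train(X2, y2, w_pretrained, steps=50)
```

The point of contention in the rejection maps onto step 2: the second training pass reuses the weights learned on the larger dataset, so the limited second-task data only has to adjust an already-informative model.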
Regarding claim 2, Lengeling and Pan teach the invention as claimed in claim 1 above, and further: wherein the machine-learned sensory prediction model comprises a sensory embedding model, wherein training the machine-learned sensory prediction model based at least in part on the first sensory prediction task training dataset comprises training the sensory embedding model with a first prediction task model based at least in part on the first sensory prediction task training dataset, and wherein training the machine-learned sensory prediction model based at least in part on the second sensory prediction task training dataset comprises training the sensory embedding model with a second prediction task model based at least in part on the second sensory prediction task training dataset (Lengeling: Pg. 2, Section 1 [3]: obtain dataset to train a model to predict odor based on a molecule's graph structure with labeled odor descriptors; Pg. 8, Section 6.3: perform transfer learning using limited data on a second new prediction task; Pg. 7, Section 6.2: embeddings-based model).

Regarding claim 3, Lengeling and Pan teach the invention as claimed in claim 2 above, and further: wherein the sensory embedding model is configured to produce a sensory embedding and wherein the first sensory prediction task model and the second sensory prediction task model are configured to receive the sensory embedding as input (Lengeling: Pg. 8, Section 6.3: perform transfer learning using limited data on a second new prediction task; Pg. 7-8, Sections 6.2-6.3: embeddings-based learning).

Regarding claim 4, Lengeling and Pan teach the invention as claimed in claim 1 above, and further: wherein at least one of the first training data or the second training data comprises a plurality of example chemical structures, each example chemical structure labeled with one or more sensory property labels that describe sensory properties of the example chemical structure (Lengeling: Pg. 2, Section 1 [3]: obtain dataset with labeled odor descriptors).

Regarding claim 5, Lengeling and Pan teach the invention as claimed in claim 1 above, and further: wherein the first prediction task is associated with a first species and wherein the second prediction task is associated with a second species, the second species being different from the first species (Lengeling: Pg. 8, Section 6.3; Pg. 9, Fig. 8: perform transfer learning, which may be to a different species). It is noted here that limiting the task to a specific species such as a human or an animal is an obvious variation of the model usage. This distinction bears only on the data types and is not functionally involved in the limitations as claimed, and therefore does not distinguish the limitation from the teachings in the prior art.

Regarding claim 6, Lengeling and Pan teach the invention as claimed in claim 1 above, and further: wherein the first sensory prediction task training dataset comprises human perception data and the second sensory prediction task training dataset comprises nonhuman perception data (Lengeling: Pg. 8, Section 6.3: perform transfer learning where the first task is odor prediction and the second task may differ) (Pan: Pg. 4 [1]: the target task may be different from the source task; human and non-human perception are examples of two different tasks).

Regarding claim 17, Lengeling and Pan teach the invention as claimed in claim 1 above, and further: wherein the first prediction task is associated with a first species and wherein the second prediction task is associated with a second species, the second species being different from the first species (Lengeling: Pg. 8, Section 6.3; Pg. 9, Fig. 8: perform transfer learning, which may be to a different species). It is noted here that limiting the task to a specific species such as a human or an animal is an obvious variation of the model usage.
This distinction bears only on the data types and is not functionally involved in the limitations as claimed, and therefore does not distinguish the limitation from the teachings in the prior art.

Regarding Claim 18, Lengeling teaches: One or more non-transitory computer-readable media comprising a sensory embedding, the sensory embedding generated as output from a machine-learned embedding model, wherein the machine-learned embedding model has been trained using a first sensory prediction task training dataset for a first sensory prediction task and a second sensory prediction task training dataset for a second sensory prediction task (Lengeling: Pg. 2, Section 1 [3]: train model to predict specific odors such as floral (task); Pg. 8, Section 6.3: perform transfer learning using limited data on a second new prediction task; Pg. 7, Section 6.2: embeddings-based model); wherein a number of data items of the first sensory prediction task training dataset is greater than a number of data items of the second sensory prediction task training dataset (Lengeling: Pg. 8, Section 6.3: transfer learning using limited data on a second new prediction task).

Although it is not explicitly recited that the model is trained for two tasks, it is obvious that the transfer learning process may involve training using the second set of limited data. Nonetheless, Pan teaches training for the second prediction task (Pan: Pg. 3, Table 1; Pg. 4, Table 2; Pg. 11 [2]: to adapt the learned model, the model is trained using fewer training data). The same motivation to combine stated above applies.

Claims 7 and 10-14 are rejected under 35 U.S.C. 103 as being unpatentable over Lengeling (Machine Learning for Scent: Learning Generalizable Perceptual Representations of Small Molecules).
Regarding Claim 7, Lengeling teaches: A computer-implemented method for predicting sensory properties for a prediction task having limited available training data, the computer-implemented method comprising:

obtaining, by one or more computing devices, a machine-learned sensory prediction model trained to predict sensory properties of molecules based at least in part on chemical structure data associated with the molecules, wherein the machine-learned sensory prediction model has been trained using a first sensory prediction task training dataset for a first sensory prediction task (Lengeling: Pg. 2, Section 1 [3]: obtain dataset to train a model to predict odor based on a molecule's graph structure with labeled odor descriptors);

obtaining, by the one or more computing devices, input data that describes a chemical structure of a selected molecule; providing, by the one or more computing devices, the input data that describes the chemical structure of the selected molecule as input to the machine-learned sensory prediction model (Lengeling: Pg. 2, Section 1 [3]; Pg. 3, Section 3.1; Pg. 9, Section 7: model to predict odor based on the molecule's structure);

receiving, by the one or more computing devices, prediction data descriptive of one or more second sensory properties of the selected molecule associated with a second sensory prediction task as an output of the machine-learned sensory prediction model (Lengeling: Pg. 8, Section 6.3: perform transfer learning using limited data on a second new prediction task); and

providing, by the one or more computing devices, the prediction data descriptive of the one or more second sensory properties of the selected molecule as an output (Lengeling: Pg. 3, Section 3.1; Pg. 8, Section 6.3: perform transfer learning to obtain a model for a second new prediction task).

Although Lengeling does not explicitly recite receiving input and providing prediction output data, Lengeling teaches generating a model that is capable of performing the claimed function, and thus it is obvious that the models in Lengeling receive input and provide prediction output.

Regarding claim 10, Lengeling teaches the invention as claimed in claim 7 above, and further: wherein the sensory prediction model comprises one or more graph neural networks, and wherein the input data comprises a graph that graphically describes a chemical structure of a selected molecule (Lengeling: Abstract; Pg. 9, Section 7: graph neural network; Pg. 2, Section 1 [3]; Pg. 3, Section 2.1.1: molecule descriptors, graph topology).

Regarding claim 11, Lengeling teaches the invention as claimed in claim 10 above, and further: wherein the graph that graphically describes the chemical structure of the selected molecule comprises a two-dimensional graph structure indicative of a two-dimensional representation of the chemical structure of the selected molecule (Lengeling: Abstract; Pg. 9, Section 7: graph neural network; Pg. 2, Section 1 [3]; Pg. 3, Section 2.1.1: molecule descriptors, graph topology).

Regarding claim 12, Lengeling teaches the invention as claimed in claim 10 above, and further: wherein the graph that graphically describes the chemical structure of the selected molecule comprises a three-dimensional graph structure indicative of a three-dimensional representation of the chemical structure of the selected molecule (Lengeling: Fig. 1: molecular structures are three-dimensional), and wherein the method further comprises performing, by the one or more computing devices, one or more quantum chemical calculations to identify the three-dimensional representation of the chemical structure of the selected molecule (Lengeling: Pg. 4, Fig. 2 description: derive features from the molecule's structure).
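The kind of graph input recited in claims 10-12 (a graph describing a chemical structure, fed to a graph neural network) can be sketched minimally. The molecule, the one-hot atom features, and the single mean-aggregation message-passing step below are illustrative assumptions, not details taken from Lengeling:

```python
import numpy as np

# Ethanol (CH3-CH2-OH) with hydrogens omitted: nodes C, C, O.
# Node features: one-hot atom type over [C, O].
atom_feats = np.array([
    [1.0, 0.0],   # C
    [1.0, 0.0],   # C
    [0.0, 1.0],   # O
])
# Adjacency matrix: C-C and C-O bonds.
adj = np.array([
    [0, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
], dtype=float)

def message_pass(h, a):
    """One mean-aggregation message-passing step (self-loops included)."""
    a_hat = a + np.eye(len(a))                 # add self-loops
    deg = a_hat.sum(axis=1, keepdims=True)     # per-node neighbor counts
    return a_hat @ h / deg                     # average neighbor features

h1 = message_pass(atom_feats, adj)             # updated node states
graph_embedding = h1.mean(axis=0)              # simple readout: mean pool
```

A real GNN would interleave learned weight matrices and nonlinearities between such aggregation steps; the sketch only shows how a 2-D molecular graph (adjacency plus atom features) becomes a fixed-size vector a prediction head can consume.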
Regarding Claim 13, Lengeling teaches the invention as claimed in claim 7 above, and further: performing, by the one or more computing devices, an iterative search process to identify an additional molecule that exhibits one or more desired sensory properties associated with the second prediction task (Lengeling: Pg. 2, Section 1 [3]: model to predict (iterative process) a specific odor of an input (additional molecule)), wherein the iterative search process comprises, for each of a plurality of iterations:

generating, by the one or more computing devices, a candidate molecule graph that graphically describes a candidate chemical structure of a candidate molecule (Lengeling: Abstract; Pg. 9, Section 7; Pg. 2, Section 1 [3]; Pg. 3, Section 2.1.1: generate model input of molecule descriptors);

providing, by the one or more computing devices, the candidate molecule graph that graphically describes the candidate chemical structure of the candidate molecule as input to the machine-learned graph neural network (Lengeling: Abstract; Pg. 9, Section 7; Pg. 2, Section 1 [3]; Pg. 3, Section 2.1.1: model input of molecule descriptors);

receiving, by the one or more computing devices, prediction data descriptive of one or more predicted sensory properties of the candidate molecule as an output of the machine-learned graph neural network (Lengeling: Abstract; Pg. 3, Section 3.1: model prediction output); and

comparing, by the one or more computing devices, the one or more predicted sensory properties of the candidate molecule to the one or more desired sensory properties (Lengeling: Pg. 6, Fig. 4: compare model output to properties).
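The iterative search mapped in claim 13 (generate a candidate, predict its properties, compare against the desired profile, repeat) follows a common generate-and-score pattern. The sketch below uses a hypothetical stand-in scoring function and random candidate vectors purely for illustration; nothing here is drawn from the cited art:

```python
import numpy as np

rng = np.random.default_rng(1)

def predict_properties(molecule_feats):
    """Stand-in for a trained sensory prediction model (hypothetical)."""
    w = np.linspace(0.1, 1.0, molecule_feats.size)
    return np.array([np.tanh(molecule_feats @ w)])

desired = np.array([0.9])                      # target sensory profile
best, best_dist = None, np.inf

# Iterative search: generate a candidate, predict, compare, keep the best.
for _ in range(100):
    candidate = rng.normal(size=8)             # stand-in for a molecule graph
    predicted = predict_properties(candidate)
    dist = np.linalg.norm(predicted - desired) # compare to desired properties
    if dist < best_dist:
        best, best_dist = candidate, dist
```

In practice the candidate generator would propose chemically valid structures (e.g., by mutating known molecules) rather than random vectors, but the loop structure of the claim is the same.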
Regarding claim 14, Lengeling teaches the invention as claimed in claim 7 above, and further: wherein the prediction data indicative of the one or more predicted sensory properties of the selected molecule comprises a numerical embedding; and the method further comprises identifying, by the one or more computing devices, other molecules that have sensory properties that are similar to the predicted sensory properties of the selected molecule by comparing the numerical embedding with other numerical embeddings output for the other molecules by the machine-learned graph neural network (Lengeling: Pg. 2-3: numerical embedding to vectors; Pg. 1-2, Section 1: dataset with other similar molecules).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Lengeling (Machine Learning for Scent: Learning Generalizable Perceptual Representations of Small Molecules) in view of Pan (A Survey on Transfer Learning).

Regarding claim 8, Lengeling teaches the invention as claimed in claim 7 above, and further: wherein the sensory prediction model is further trained using a second sensory prediction task training dataset for the second sensory prediction task, wherein a number of data items of the first sensory prediction task training dataset is greater than a number of data items of the second sensory prediction task training dataset (Lengeling: Pg. 8, Section 6.3: perform transfer learning using limited data on a second new prediction task). Although not explicitly recited, it is obvious that the transfer learning process may involve training using the second set of limited data. Nonetheless, Pan teaches training, by the computing system, the machine-learned model based at least in part on the second prediction task training dataset (Pan: Pg. 3, Table 1; Pg. 4, Table 2; Pg. 11 [2]: to adapt the learned model, the model is trained using fewer training data).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lengeling and Pan because the combination would enable using transfer learning to train a model for a new task. One of ordinary skill in the art would have been motivated to combine the teachings because in the case where we "have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts" (see Pan, Abstract).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Lengeling (Machine Learning for Scent: Learning Generalizable Perceptual Representations of Small Molecules) in view of Pichara (US 20200365053).

Regarding claim 9, Lengeling teaches the invention as claimed in claim 7 above, and further: wherein the one or more second sensory properties associated with the second sensory prediction task comprise one or more of: optical properties of the selected molecule; gustatory properties of the selected molecule; biodegradability of the selected molecule; stability of the selected molecule; or toxicity of the selected molecule (Lengeling: Pg. 9, Section 7: property may be a sensory property such as smell; Pg. 17, Fig. S3: taste properties). The examiner finds that it would have been obvious to use the teachings in Lengeling to predict other sensory properties known in the art. Nonetheless, Pichara teaches a model to predict taste (Pichara: [15, 20]: model using molecular descriptors; [25]: "Training of the prediction models may not use sensorial descriptors (e.g., flavor, color, texture or taste) as the data features for matching to the target food item"). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lengeling and Pichara because the combination would enable using the model to predict other sensory properties such as taste. The combination extends the "use of machine learning to mimic target food items" (see Pichara [1]).

Claims 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Lengeling (Machine Learning for Scent: Learning Generalizable Perceptual Representations of Small Molecules) in view of Nozaki (Predictive modeling for odor character of a chemical using machine learning combined with natural language processing).

Regarding claim 15, Lengeling teaches the invention as claimed in claim 7 above, and Nozaki teaches: further comprising: generating, by the one or more computing devices, visualization data descriptive of a relative importance of one or more structural units of the chemical structure of the selected molecule to the predicted sensory properties associated with the selected molecule and the second prediction task; and providing, by the one or more computing devices, the visualization data in association with the prediction data indicative of the one or more olfactory properties (Nozaki: Pg. 4 [3] – Pg. 5: visualization of descriptors to the properties). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lengeling and Nozaki because the arts pertain to the same field of predicting sensory outputs and the combination would enable visualization of the data for better understanding and large-scale sensory evaluation (see Nozaki, Abstract).

Regarding claim 16, Lengeling teaches the invention as claimed in claim 7 above, and Nozaki teaches: further comprising: generating, by the one or more computing devices, data indicative of how a structural change to the chemical structure of the selected molecule affects the predicted sensory properties associated with the selected molecule (Nozaki: Pg. 4 [3] – Pg. 5: visualization of descriptors to the properties, showing the spread of predictions with structure). The same motivation to combine stated above applies.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure; see the attached PTO-892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MANDRITA BRAHMACHARI, whose telephone number is (571) 272-9735. The examiner can normally be reached Monday to Friday, 11 am to 8 pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tamara Kyle, can be reached at (571) 272-4241. The fax number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/Mandrita Brahmachari/
Primary Examiner, Art Unit 2144

Prosecution Timeline

May 12, 2023: Application Filed
Feb 02, 2026: Examiner Interview (Telephonic)
Feb 13, 2026: Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596746: AUDIO PREVIEWING METHOD, APPARATUS AND STORAGE MEDIUM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596469: COMBINED DATA DISPLAY WITH HISTORIC DATA ANALYSIS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591358: DAMAGE DETECTION PORTAL (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585979: MANAGING DATA DRIFT AND OUTLIERS FOR MACHINE LEARNING MODELS TRAINED FOR IMAGE CLASSIFICATION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585992: MACHINE LEARNING WITH ATTRIBUTE FEEDBACK BASED ON EXPRESS INDICATORS (granted Mar 24, 2026; 2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 99% (+29.8%)
Median Time to Grant: 3y 0m
PTA Risk: Low

Based on 407 resolved cases by this examiner. Grant probability is derived from the career allow rate.
