Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
The following NON-FINAL Office Action is in response to application 18/561,610 filed on 11/16/2023. This communication is the first action on the merits.
Status of Claims
Claims 1 and 10-41 are currently pending and have been rejected as follows.
Drawings
The drawings filed on 11/16/2023 are accepted.
Domestic Benefit/National Stage
The claim to domestic benefit/national stage has been acknowledged and the corresponding documents have been received.
IDS
The information disclosure statement has been received and considered.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 and 10-41 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more. A subject matter eligibility analysis is set forth below. See MPEP 2106.
Claim 1 recites:
A computing system comprising:
a one or more sensors configured to generate electrical signals indicative of presence of one or more chemical compounds in an environment;
a machine-learned model trained to receive and process the electrical signals to generate an embedding in an embedding space;
wherein the machine-learned model has been trained using a training dataset comprising a plurality of training examples, each training example comprising a ground truth property label applied to a set of electrical signals generated by one or more test sensors when exposed to one or more training chemical compounds, each ground truth property label descriptive of a property of the one or more training chemical compounds;
one or more processors;
and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations, the operations comprising:
generating, by the one or more sensors, sensor data indicative of presence of a specific chemical compound in the environment;
and processing, by the one or more processors, the sensor data with the machine-learned model to generate an embedding output in the embedding space.
The bolded language in the claim limitations indicates abstract ideas, and the remaining limitations are considered to be additional elements.
Under Step 1 of the analysis, claim 1 belongs to a statutory category: it is a machine claim. Claim 19 is a process claim. Claim 20, directed to one or more non-transitory computer-readable media, is a manufacture claim.
Under Step 2A, Prong One: This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04, subsection II, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim.
Under Step 2A, Prong One, the limitations recited in claim 1, given their broadest reasonable interpretation consistent with the specification, recite at least one judicial exception, namely a mathematical concept (mathematical relationships/calculations).
According to the specification, the processing of electrical signals to generate an embedding in an embedding space, as well as the processing of the sensor data with the machine-learned model to generate an embedding output in the embedding space, is based on graph representations of chemical compounds. Moreover, in some implementations, the embedding space can be populated with embedding labels descriptive of chemical mixture names or properties, which may be generated based on human input or automatic prediction. In some implementations, the embeddings can be vectors of numbers. In some cases, the embeddings may represent graphs or molecular property descriptions. ([0029], [0050]; Figure 2).
The claim limitation involves a mathematical concept in that processing electrical signals to generate embeddings fundamentally relies on vectors in R^n, matrix multiplications, activation functions, projections, and optimization via loss minimization. The claim limitation therefore essentially amounts to generating numerical representations of data using mathematical operations, processing sensor data with a trained mathematical model, and mapping input signals to a vector space using learned parameters.
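For illustration only, the mathematical character of embedding generation can be sketched as follows. All dimensions, weights, and signal values below are hypothetical and are not drawn from the application or the cited art; the sketch shows only that mapping sensor signals to an embedding reduces to matrix multiplication and an activation function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned parameters: weights and bias of a single layer
# mapping 8 raw sensor readings to a 3-dimensional embedding space.
W = rng.standard_normal((3, 8))
b = rng.standard_normal(3)

def embed(signals: np.ndarray) -> np.ndarray:
    """Map raw electrical signals to an embedding via W @ x + b and tanh."""
    return np.tanh(W @ signals + b)

signals = rng.standard_normal(8)   # stand-in for sensor electrical signals
embedding = embed(signals)
print(embedding.shape)             # (3,)
```

The tanh activation bounds each embedding coordinate to [-1, 1]; a trained model would differ only in how W and b were obtained (by loss minimization), not in the mathematical nature of the mapping.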
Claim 19 recites similar abstract ideas as claim 1.
Claim 20 recites similar abstract ideas as claim 1, as well as the additional abstract idea of determining one or more sensory properties based on the embedding output in the embedding space and the plurality of stored sensory property data sets, which includes determining sensory properties such as smell, taste, and color, e.g., 'cinnamon', 'cucumber', 'apple', and 'feces' ([0102], [0036]). Given the level of generality of the claim limitation under the broadest reasonable interpretation, the abstract ideas in this limitation fall under the category of mental processes (observation, evaluation, judgment, or opinion), because one of ordinary skill in the art could perform the determining step, given an embedding output and a plurality of stored sensory property data sets, using their senses together with comparison, pattern recognition, and categorization.
Step 2A, Prong Two of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception(s) into a practical application of the exception. This evaluation is performed by (a) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (b) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. 2019 PEG Section III(A)(2), 84 Fed. Reg. at 54-55.
The additional elements in the preambles of all independent claims are recited at a high level of generality and represent insignificant extra-solution activity (field-of-use limitations) that is not meaningful enough to indicate a practical application.
Claim 1 recites additional elements:
one or more sensors configured to generate electrical signals indicative of presence of one or more chemical compounds in an environment…
a machine-learned model trained to receive…
wherein the machine-learned model has been trained using a training dataset comprising a plurality of training examples, each training example comprising a ground truth property label applied to a set of electrical signals generated by one or more test sensors when exposed to one or more training chemical compounds, each ground truth property label descriptive of a property of the one or more training chemical compounds;
one or more processors;
and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations, the operations comprising:
generating, by the one or more sensors, sensor data indicative of presence of a specific chemical compound in the environment.
Claim 19 recites similar additional elements as claim 1 as well as providing, by the computer system, the one or more labels for display.
Claim 20 also recites the additional elements of obtaining a plurality of stored sensory property data sets, wherein the plurality of stored sensory property data sets comprises stored embeddings in the embedding space paired with a respective sensory property data set associated with the respective stored embedding;
and providing the one or more sensory properties for display.
With regard to all independent claims, the generic computer components, such as “one or more processors”, “one or more non-transitory computer-readable media”, “the computer system”, and “stored data/embeddings”, are recited at a high level of generality, are not meaningful, and therefore do not qualify as particular machines indicating a practical application.
The limitations that generically recite collecting/outputting, by sensors/devices/displays, measurement data (all independent claims) represent insignificant extra-solution activity of mere data gathering and outputting of results, as well as mere use of the trained machine-learning model. According to the October 2019 Update to the 2019 Subject Matter Eligibility Guidance, such steps are “performed in order to gather data for the mental analysis step, and is a necessary precursor for all uses of the recited exception. It is thus extra-solution activity, and does not integrate the judicial exception into a practical application”.
Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. When re-evaluated under Step 2B, the claim limitations are found to be well-understood, routine, and conventional as explained by MPEP 2106.05(d)(II) (describing conventional activities that include transmitting and receiving data over a communication network) as referenced by Khomami and Lee.
Similarly, the combination and arrangement of the above-identified additional elements, when analyzed under Step 2B, also fails to support a conclusion that claim 1, as well as claims 19 and 20, amounts to significantly more than the abstract idea.
With regard to dependent claims 10-18 and 21-41, these claims provide additional features/steps that are part of an expanded abstract idea of the independent claims (i.e., they comprise additional abstract-idea steps). They are therefore ineligible, lacking meaningful additional elements that reflect a practical application and/or that qualify as significantly more, for substantially similar reasons as discussed with regard to claim 1.
Regarding claim 35, which recites wherein the operations further comprise triggering, by the one or more processors, an automated alert or automated action in response to the prediction of the property: this limitation is recited at a high level of generality and merely amounts to insignificant extra-solution activity.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 10-13, 15-16, 18-26, 28-30, 32-34, 37-39 and 41 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Khomami (US 20200272900 A1).
Regarding claim 1, Khomami teaches a computing system comprising [Abstract] a one or more sensors configured to generate electrical signals indicative of presence of one or more chemical compounds in an environment; (FIG. 1A and FIG. 1B depict views of an exemplary chemical sensing unit 100. Chemical sensing unit 100 includes a sensor array 120 [0083] where chemical sensors are physical devices that exhibit a measurable response to the presence of one or more chemical species. [0073]).
a machine-learned model trained to receive and process the electrical signals to generate an embedding in an embedding space; wherein the machine-learned model has been trained using a training dataset comprising a plurality of training examples, (Figures 3-10 and their corresponding descriptions; [0063-0070]).
each training example comprising a ground truth property label applied to a set of electrical signals generated by one or more test sensors when exposed to one or more training chemical compounds, each ground truth property label descriptive of a property of the one or more training chemical compounds; (The data may include values representing signals output by the chemical sensing units (e.g., the “sensing modules” shown in operations 510 and 520). The data additionally or alternatively may include metadata representing information about the corresponding analytes, their concentrations, the environment, and/or the configuration…the metadata may be associated with some or all of the signals output by the chemical sensing units. The metadata may be used to determine labels corresponding to the values representing signals output by the chemical sensing units. A set of the labels and the values to which they correspond may comprise a training dataset, which may be used to train at least one model as described herein. [0148]).
one or more processors;
and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations, the operations comprising [Abstract]
generating, by the one or more sensors, sensor data indicative of presence of a specific chemical compound in the environment; (FIG. 1A and FIG. 1B depict views of an exemplary chemical sensing unit 100. Chemical sensing unit 100 includes a sensor array 120 [0083] which can generate a response pattern specific to a particular chemical combination [0003] where chemical sensors are physical devices that exhibit a measurable response to the presence of one or more chemical species. [0073]).
and processing, by the one or more processors, the sensor data with the machine-learned model to generate an embedding output in the embedding space. (Fig. 2A, step 230; …the feature vectors generated in operation 220 can be applied to a first sub-model that relates the feature representation Z to a latent representation Φ (e.g., from feature representation 221 to latent representation 231, as depicted in FIG. 2B) to generate latent value vectors corresponding to the feature vectors [0114]).
Regarding claim 10, Khomami teaches wherein the machine-learned model [[is]] has been trained jointly with a graph neural network (…training one or more models (e.g. statistical models, machine learning models, neural network models, etc.) to map signals output by a plurality of chemical sensing units of a chemical sensing system to a common representation space that at least partially accounts for differences in signals output by the different chemical sensing units. [0077]) where the descriptors (e.g., feature vectors) generated in a Z space can be applied to a second model or sub-model that relates the feature representation Z to a latent representation Φ to generate latent vectors corresponding to the feature vectors. A latent representation Φ can be an inner product space. The latent representation Φ can be of the same dimension as the manifold Z′. As a non-limiting example, when Z′, which represents the manifolds within Z, is a plane embedded in Z, the latent representation Φ may be a two-dimensional space (e.g., a latent representation, as depicted in FIG. 4). [0141]) to generate a single, combined output within the embedding space. ([Abstract; FIGS. 11A-C depict graphic representations of exemplary latent and mutual latent spaces]).
Regarding claim 11, Khomami discloses the claim limitations as discussed in claim 10.
Regarding claim 12, Khomami teaches obtaining a chemical compound training example comprising electrical signal training data and a respective training label, wherein the electrical signal training data and the respective training label are descriptive of a specific training chemical compound; processing the electrical signal training data with the machine-learned model to generate a chemical compound embedding output; (as discussed in claim 1)
processing the chemical compound embedding output with a classification model to determine a chemical compound label; (…subsequent to the learning of the mutual representation, an appropriate inference model (e.g., corresponding to the inference block described elsewhere herein) may be chosen for the purpose of making inferences (e.g. discriminating between) analytes. Such models may include, but are not limited to, additional layers trained to classify samples, additional neural networks, or classical machine learning methods such as support vector machines. Such models may relate values in the mutual latent representation with a unique semantics that may correspond to a unique chemical composition (pure or complex chemical compositions [0201]).
evaluating a loss function that evaluates a difference between the chemical compound label and the respective training label; and adjusting one or more parameters of the machine-learned model based at least in part on the loss function (…a partially labeled dataset may provide weak supervision to map individual latent spaces Φ.sup.i to the final mutual latent space Φ.sup.S, where the following objectives direct the construction of a loss function used for the learning process: [0166] The manifolds should be correspondent and hence the neighborhood characteristics of the Φ.sup.i will be equivalent to the neighborhood characteristics of Φ.sup.S. For example, the mapped representation of close/far samples in Φ.sup.i to the Φ.sup.S will stay close/far. [0167] The semantics of the latent representations Φ.sup.i should match the semantics in Φ.sup.S so that latent variables in Φ.sup.i spaces with same or very similar semantics will match latent variables in Φ.sup.S that are close to each other (respecting the similarity of their semantics) and latent variables in Φ.sup.i spaces with different or very distinguishable semantics will match latent variables in Φ.sup.S that are far each other (respecting the dissimilarity of their semantics) [0165-0167]).
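For illustration only, the training mechanics recited in claim 12 (evaluating a loss function on the difference between a predicted label and a training label, then adjusting model parameters based on that loss) can be sketched with a hypothetical one-parameter model and squared-error loss. The values and the model are illustrative assumptions, not drawn from Khomami or the application.

```python
# Hypothetical one-parameter model: predicted label = w * signal.
w = 0.0
signal, true_label = 2.0, 1.0   # toy training example (assumed values)
lr = 0.1                        # learning rate

for _ in range(50):
    pred = w * signal
    loss = (pred - true_label) ** 2          # evaluate the loss function
    grad = 2 * (pred - true_label) * signal  # d(loss)/dw
    w -= lr * grad                           # adjust the parameter

print(round(w, 3))  # converges toward 0.5, since 0.5 * 2.0 == 1.0
```

The gradient step is the "adjusting one or more parameters ... based at least in part on the loss function" operation; a neural network differs only in the number of parameters and the form of the model.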
Regarding claim 13, Khomami discloses the claim limitations as discussed in claim 12.
Regarding claim 15, Khomami teaches wherein the machine-learned model comprises a transformer model (The exemplary process of FIG. 6 involves learning a map that transforms the output signals of the new chemical sensing unit to a mutual latent representation with limited training data. [0158; Figure 6]) and (At operation 930, the feature vector may be provided as input to a function φ.sup.m (e.g., via a model of the set of models) that transforms the values in Z.sup.m into values in the latent representation Φ.sup.m. [0176]) and (At operation 1030, the feature vector may be provided as input to a model of the set of models that transforms the values in the feature vector into values in the latent representation. [0182]).
Regarding claim 16, Khomami teaches wherein the operations further comprise [[ing:]] storing, by the one or more processors, the embedding output (…the results of learning system 570, which may include trained models as described herein, may be stored in a model repository 580 [0156]).
Regarding claim 18, Khomami teaches wherein processing, by the one or more processors, the sensor data with the machine-learned model to generate the embedding output in the embedding space comprises [[:]] (as discussed in claim 1)
compressing the sensor data to a fixed length vector representation (…the association of one-dimensional manifolds in Ω with analytes and the generation of output representation V can be combined into a single act. In this single act, each element of V may be described by a unique directional vector ŝ.sub.k in Ψ. [Figure 2B; 0121])
Regarding claim 19, Khomami discloses the claim limitations as discussed in claim 1 as well as providing, by the computing system, the one or more labels for display (Computing device 1200 may also include a network input/output (I/O) interface 1240 via which the computing device may communicate with other computing devices (e.g., over a network), and may also include one or more user I/O interfaces 1250, via which the computing device may provide output to and receive input from a user. The user I/O interfaces may include devices such as a keyboard, a mouse, a microphone, a display device (e.g., a monitor or touch screen), speakers, a camera, and/or various other types of I/O devices. [0203] and [Figures 11A-C]).
Regarding claim 20, Khomami teaches one or more non-transitory computer readable media that collectively store instructions that, when executed by one or more processors, cause a computing system to perform operations, the operations comprising: obtaining sensor data with one or more sensors, wherein the sensor data is descriptive of electrical signals generated due to the presence of one or more chemical compounds in an environment; processing the sensor data with a machine-learned model to generate an embedding output in an embedding space, wherein the machine-learned model is trained to receive and process data descriptive of electrical signals to generate an embedding in the embedding space; (as discussed in claim 1)
obtaining a plurality of stored property data sets, wherein the plurality of stored property data sets comprises stored embeddings in the embedding space paired with a respective property data set associated with the respective stored embedding; (…the results of learning system 570, which may include trained models as described herein, may be stored in a model repository 580 [0156] and …the signals may be stored in the at least one non-transitory computer-readable storage medium of the system [0048] and …controller 130 includes at least one storage device (e.g., a memory) configured to store parameters that define one or more of the mapping operations, described in more detail below [0085] and …for a given chemical sensing unit 100, the shared information may be stored, for example, on a storage device associated with the controller 130 of the chemical sensing unit or on external storage medium 190 [0089] and …the one or more models may include parameters and hyper-parameters stored in a memory component of the chemical sensing system [0096] and Optionally the processed data (e.g., a representation of the maps ζ.sup.1, ζ.sup.2, . . . , ζ.sup.n) may be stored in processed data repository 550. Processed data repository 550 may be a shared data repository with raw data repository 520 [0150])
determining one or more properties based on the embedding output in the embedding space and the plurality of stored property data sets; and providing the one or more properties for display (Figs. 2A, 2B, 5 and 11A-C).
Regarding claim 21, Khomami teaches assessing whether a new sensing element improves the ability to cover the space of odor recognized by a human, improves the ability to recognize a specific odor label, or a combination thereof based on the one or more labels (A chemical sensing unit designed in accordance with some embodiments may operate in a manner akin to olfaction in biological systems. To detect smells of interest and the smell strength in an environment, each individual chemical sensor of the chemical sensing unit may be designed to respond to one or more functional chemistry groups of interest, corresponding to the smells of interest. The smells of interest may be selected, for example, as part of a particular application for which only certain smells may be of relevance (e.g., detecting smoke, rotting food, or other phenomena associated with certain odors). In general, the environment of a chemical sensing system may be a volume, referred to herein as a headspace, in which the chemical sensing system resides [0080]).
Regarding claim 22, Khomami teaches wherein the operations further comprise: generating, by the one or more sensors, data indicative of a plurality of chemical compounds; processing, by the one or more processors, the data indicative of the plurality of chemical compounds with the machine-learned model to generate a plurality of embedding outputs; (as discussed in claim 1)
and applying, by the one or more processors, a clustering operation on the plurality of embedding outputs to identify similar chemicals in the embedding space (Each Z space may be a metric space and distance in the Z space represents the similarity of the semantics (e.g., chemical compositions) within the application. As shown in FIG. 4, due to the characteristics of the metric space, clusters form. Each cluster may represent a manifold, a dense data structure where information about common semantics may be concentrated. When the semantics are similar across sensing units, the formation of clusters in the feature spaces Z may also be similar. [0138]; see [0142, 196-197] as well).
(The above-described embodiments can be implemented in any of numerous ways. For example, the embodiments may be implemented using hardware, software or a combination thereof. When implemented in software, the software code can be executed on any suitable processor (e.g., a microprocessor) or collection of processors…[0204]).
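For illustration only, the clustering operation mapped above for claim 22 (grouping embedding outputs so that similar chemicals land in the same cluster) can be sketched with a minimal two-centroid k-means loop over toy 2-D embeddings. All values are illustrative assumptions, not taken from Khomami or the application.

```python
import numpy as np

# Toy embedding outputs: two tight groups in a 2-D embedding space,
# standing in for embeddings of two distinct chemical compositions.
embeddings = np.array([[0.1, 0.0], [0.2, 0.1], [0.0, 0.2],
                       [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]])

centroids = embeddings[[0, 3]].copy()  # crude initialization

for _ in range(10):  # a few Lloyd iterations
    dists = np.linalg.norm(embeddings[:, None] - centroids[None], axis=2)
    labels = dists.argmin(axis=1)      # assign each embedding to nearest centroid
    for k in range(2):
        centroids[k] = embeddings[labels == k].mean(axis=0)

print(labels)  # similar chemicals receive the same cluster label
```

Because the embedding space is a metric space (as the cited [0138] notes), proximity of embeddings encodes similarity of chemical semantics, and the cluster assignment follows directly from distance.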
Regarding claim 23, Khomami teaches wherein the operations further comprise generating, by the one or more processors, a prediction of a property with a classification model based on the embedding output (…subsequent to the learning of the mutual representation, an appropriate inference model (e.g., corresponding to the inference block described elsewhere herein) may be chosen for the purpose of making inferences (e.g. discriminating between) analytes. Such models may include, but are not limited to, additional layers trained to classify samples, additional neural networks, or classical machine learning methods such as support vector machines. Such models may relate values in the mutual latent representation with a unique semantics that may correspond to a unique chemical composition (pure or complex chemical compositions [0201]…The above-described embodiments can be implemented in any of numerous ways. For example, the embodiments may be implemented using hardware, software or a combination thereof. When implemented in software, the software code can be executed on any suitable processor (e.g., a microprocessor) or collection of processors…[0204]).
Regarding claim 24, Khomami teaches wherein the property is one or more of a sensory property and an olfactory property (A chemical sensing unit designed in accordance with some embodiments may operate in a manner akin to olfaction in biological systems. To detect smells of interest and the smell strength in an environment, each individual chemical sensor of the chemical sensing unit may be designed to respond to one or more functional chemistry groups of interest, corresponding to the smells of interest. The smells of interest may be selected, for example, as part of a particular application for which only certain smells may be of relevance (e.g., detecting smoke, rotting food, or other phenomena associated with certain odors). In general, the environment of a chemical sensing system may be a volume, referred to herein as a headspace, in which the chemical sensing system resides [0080]).
Regarding claim 25, Khomami teaches wherein the prediction is a property prediction for the environment (A chemical sensing unit designed in accordance with some embodiments may operate in a manner akin to olfaction in biological systems. To detect smells of interest and the smell strength in an environment, each individual chemical sensor of the chemical sensing unit may be designed to respond to one or more functional chemistry groups of interest, corresponding to the smells of interest. The smells of interest may be selected, for example, as part of a particular application for which only certain smells may be of relevance (e.g., detecting smoke, rotting food, or other phenomena associated with certain odors). In general, the environment of a chemical sensing system may be a volume, referred to herein as a headspace, in which the chemical sensing system resides [0080]).
Regarding claim 26, Khomami teaches wherein the property is presence or absence of one or more flammable chemicals, one or more poisonous chemicals, one or more unstable chemicals, one or more dangerous chemicals, or a combination thereof in the environment (…detecting smoke…[0080]).
Regarding claim 28, Khomami teaches wherein the operations further comprise identifying, by the one or more processors, a chemical mixture with a classification model based on the embedding output (…subsequent to the learning of the mutual representation, an appropriate inference model (e.g., corresponding to the inference block described elsewhere herein) may be chosen for the purpose of making inferences (e.g. discriminating between) analytes. Such models may include, but are not limited to, additional layers trained to classify samples, additional neural networks, or classical machine learning methods such as support vector machines. Such models may relate values in the mutual latent representation with a unique semantics that may correspond to a unique chemical composition (pure or complex chemical compositions [0201]…The above-described embodiments can be implemented in any of numerous ways. For example, the embodiments may be implemented using hardware, software or a combination thereof. When implemented in software, the software code can be executed on any suitable processor (e.g., a microprocessor) or collection of processors…[0204]).
Regarding claim 29, Khomami teaches wherein the operations further comprise determining, by the one or more processors, a chemical compound in the environment and concentration of the chemical compound with a classification model based on the embedding output (At step 1050, the inference block generates an inference based on the values in the mutual latent representation. In the example of FIG. 10, the inference is a concentration of at least one analyte in the sample to which the sensors of the chemical sensing unit were exposed [0184]).
Regarding claim 30, Khomami teaches wherein the ground truth property label is one or more of: a label for presence or absence of humans, animals, or plants in a diseased state; a food label; a flavor label; a ripeness label of an agricultural crop; a label for presence or absence of food spoilage; and a human label of a property (A chemical sensing unit designed in accordance with some embodiments may operate in a manner akin to olfaction in biological systems. To detect smells of interest and the smell strength in an environment, each individual chemical sensor of the chemical sensing unit may be designed to respond to one or more functional chemistry groups of interest, corresponding to the smells of interest. The smells of interest may be selected, for example, as part of a particular application for which only certain smells may be of relevance (e.g., detecting smoke, rotting food, or other phenomena associated with certain odors) [0080]).
Regarding claim 32, Khomami teaches wherein the operations further comprise generating, by the one or more processors, labeled embeddings in the embedding space, wherein generating the labeled embeddings comprises pairing the embedding output and the prediction of the property (the descriptors (e.g., feature vectors) generated in a Z space can be applied to a second model or sub-model that relates the feature representation Z to a latent representation Φ to generate latent vectors corresponding to the feature vectors. A latent representation Φ can be an inner product space. The latent representation Φ can be of the same dimension as the manifold Z′. As a non-limiting example, when Z′, which represents the manifolds within Z, is a plane embedded in Z, the latent representation Φ may be a two-dimensional space (e.g., a latent representation, as depicted in FIG. 4) [0141] and (FIGS. 11A-C depict graphic representations of exemplary latent and mutual latent spaces [0071]).
Regarding claim 33, Khomami teaches wherein the sensor data are generated at intervals (…samples that span some time interval [t.sub.i, t.sub.T.sub.i]…[0105]).
Regarding claim 34, Khomami teaches wherein the sensor data are generated continually (A chemical sensing unit designed in accordance with some embodiments may produce values from measured signals in a continuous range, resulting in the formation of continuous data distributions and manifolds [0132]).
Regarding claim 37, Khomami teaches wherein the classification model comprises one or more classification heads (Figure 10).
Regarding claim 38, Khomami teaches wherein the operations comprise determining, by the one or more processors, by the classification model, an associated label for the embedding output based on a threshold similarity determined at least in part on values of the embedding output or location of the embedding output in the embedding space (the directionality criterion may depend on a cosine value of the angle between the vector z_1→z_2 and the vector. When the cosine value is less than a threshold, the directionality criterion may not be satisfied. For example, a first point z_3 in the sequence may be in a neighborhood of z_2 and may satisfy the directionality criterion when a cosine value of the angle between z_1→z_2 and z_1→z_3 is greater than a threshold [0110]) and (…a partially labeled dataset may provide weak supervision to map individual latent spaces Φ^i to the final mutual latent space Φ^S, where the following objectives direct the construction of a loss function used for the learning process: [0166] The manifolds should be correspondent and hence the neighborhood characteristics of the Φ^i will be equivalent to the neighborhood characteristics of Φ^S. For example, the mapped representation of close/far samples in Φ^i to the Φ^S will stay close/far. [0167] The semantics of the latent representations Φ^i should match the semantics in Φ^S so that latent variables in Φ^i spaces with same or very similar semantics will match latent variables in Φ^S that are close to each other (respecting the similarity of their semantics) and latent variables in Φ^i spaces with different or very distinguishable semantics will match latent variables in Φ^S that are far from each other (respecting the dissimilarity of their semantics) [0165-0167]).
Regarding claim 39, Khomami teaches a graph neural network trained to receive and process graph representation data to generate an embedding in an embedding space, wherein the operations additionally comprise: obtaining, by the one or more processors, graph representation data of the one or more chemical compounds; and processing, by the one or more processors, the graph representation data with the graph neural network model (…training one or more models (e.g. statistical models, machine learning models, neural network models, etc.) to map signals output by a plurality of chemical sensing units of a chemical sensing system to a common representation space that at least partially accounts for differences in signals output by the different chemical sensing units [0077]) and (the descriptors (e.g., feature vectors) generated in a Z space can be applied to a second model or sub-model that relates the feature representation Z to a latent representation Φ to generate latent vectors corresponding to the feature vectors. A latent representation Φ can be an inner product space. The latent representation Φ can be of the same dimension as the manifold Z′. As a non-limiting example, when Z′, which represents the manifolds within Z, is a plane embedded in Z, the latent representation Φ may be a two-dimensional space (e.g., a latent representation, as depicted in FIG. 4) [0141]) and (FIGS. 11A-C depict graphic representations of exemplary latent and mutual latent spaces [0071]).
Regarding claim 41, Khomami teaches wherein the operations, when executed by the one or more processors, cause the graph neural network model to process the graph representation data before being processed by the machine-learned model (Figure 10).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 14, 17, 31, and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Khomami.
Regarding claim 14, Khomami teaches the computing system of claim 1, but does not explicitly teach wherein the sensor data is descriptive of voltage or current, or voltage and current.
Before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to modify Khomami to acquire voltage or current measurements from the sensors when calibrating electronic chemical sensors, in order to improve the characterization of the sensor response, establish baseline behavior, correct for drift and noise, and normalize outputs before generating reliable embeddings. Sensors in this context inherently operate by transducing chemical interactions into electrical signals, and measuring voltage or current is a fundamental way of characterizing a sensor response.
Regarding claim 17, the claim limitations parallel those of claim 14 and are rejected under the same rationale set forth in the rejection of claim 14.
Regarding claim 31, Khomami teaches the computing system of claim 30 and labeled odor data (Figures 11A-C), but Khomami does not teach wherein the human label of a property comprises human labeled odor data.
Before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to modify Khomami to include labeled odor data that has been labeled by humans given that a convenient and effective method of labeling would be for humans to identify the smells.
Regarding claim 40, Khomami teaches the computing system of claim 39 and the graph neural network model generating an embedding output that is processed by a classification model but does not explicitly teach …without first being processed by an embedding model.
Before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to modify Khomami to cause the graph neural network model to generate an embedding output that is processed by a classification model without first being processed by an embedding model to reduce information loss, improve computational efficiency and allow more accurate chemical property labeling by processing the embedding directly with the classification model.
Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over Khomami in view of Wilson, Alphus Dan. “Electronic-Nose Devices - Potential for Noninvasive Early Disease-Detection Applications.” Annals of Clinical Case Reports, vol. 2, 2017, pp. 1–3, research.fs.usda.gov/treesearch/54652. Accessed 5 Mar. 2026.
Regarding claim 27, Khomami acknowledges that chemicals in the environment may include…early indications of processes such as disease. Khomami teaches the classification model based on the embedding output. However, Khomami fails to explicitly teach detecting a disease state.
Wilson teaches detecting a disease state (E-nose devices are particularly useful for noninvasive early detection of diseases by sensing specific mixtures of volatile organic compounds (VOCs) that serve as effective chemical biomarkers of disease…[p.1]).
Before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to modify Khomami with the teachings of Wilson to include disease detection in the operation using a classification model based on an embedding output, in order to achieve earlier disease-detection screenings and treatments [Wilson, p.1] for humans, crops, animals, and the like. Using learned feature representations to improve signal discrimination and diagnostic accuracy is a well-established, routine extension of standard sensor calibration and pattern-recognition techniques, as referenced by Wilson (…artificial gas-sensing systems, usually containing chemical cross-reactive multi-sensor arrays capable of characterizing the aroma patterns of volatile organic compounds (VOCs), which utilize pattern-recognition algorithms for aroma classification [p.1]).
Claims 35 and 36 are rejected under 35 U.S.C. 103 as being unpatentable over Khomami in view of Hsiung (US 20120041574 A1).
Regarding claim 35, Khomami teaches one or more processors and the prediction of the property, but does not explicitly teach wherein the operations further comprise triggering, by the one or more processors, an automated alert or automated action in response to the prediction of the property.
Hsiung teaches triggering…an automated alert or automated action in response to the prediction of the property (…where application of a first model produces a first predicted descriptor in agreement with a second predicted descriptor, the process state assessment is confirmed and the output may reflect a degree of certainty as to the state of the process. This reflection may be in the form of the content of the output (i.e. a process fault is definitely indicated) and/or in the form of the output (i.e. a pager is activated to immediately alert the human user to a high priority issue) [0075]).
Before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to modify Khomami with the teachings of Hsiung to provide an automated alert in response to the prediction of the property to enable timely intervention for the detection of rotting food or diseases, for example.
Regarding claim 36, Khomami teaches the computing system of claim 35, but does not teach wherein the automated alert or automated action comprises control of one or more machines to stop, control of one or more machines to contain a process, or both.
Hsiung teaches wherein the automated alert or automated action comprises control of one or more machines to stop, control of one or more machines to contain a process, or both (…where application of a first model produces a first predicted descriptor in agreement with a second predicted descriptor, the process state assessment is confirmed and the output may reflect a degree of certainty as to the state of the process. This reflection may be in the form of the content of the output (i.e. a process fault is definitely indicated) and/or in the form of the output (i.e. a pager is activated to immediately alert the human user to a high priority issue) [0075]) and (Pareto Chart is a vertical bar graph showing problems in a prioritized order, so it can be determined which problems should be tackled first [0464, Table 5]).
Before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to modify Khomami with the teachings of Hsiung to provide an automated alert in response to the prediction of the property and to have that alert contain a process that prioritizes issues to enable timely, high-priority intervention for the detection of rotting food or diseases, for example.
Conclusion
An inquiry concerning this communication or earlier communications from the examiner should be directed to LOGAN D COONS, whose telephone number is (571) 272-2698. The examiner's email address is logan.coons@uspto.gov; note, however, that without a written authorization by applicant in place, the USPTO will not respond via internet e-mail to internet correspondence (MPEP 502.02(II)). The examiner can normally be reached on M-F 9:30am – 6pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SPE Shelby Turner, can be reached at (571) 272-6334. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LOGAN D COONS/Examiner, Art Unit 2857
/ALEXANDER SATANOVSKY/Primary Examiner, Art Unit 2857