DETAILED ACTION
This action is in response to the filing on 11/21/2025. Claims 1-20 are pending and have been considered below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Independent Claims 1, 19, and 20
Step 1:
Claims 1, 19, and 20 recite a method, manufacture, and system; therefore, they are directed to one of the four categories of statutory subject matter (process/method, machine/product/apparatus, manufacture, or composition of matter).
Step 2A Prong 1:
Claims 1, 19, and 20 recite a method, manufacture, and system comprising:
generating second sensor data of the second modality corresponding to the object using the trained neural network, wherein the generated second sensor data is of the second modality that is different than the first modality — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, i.e., a concept that can be performed in the human mind or with the use of a physical aid (e.g., pen and paper), including an observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)). Alternatively, the limitation encompasses a mathematical concept achievable through mathematical computation (see MPEP § 2106.04(a)(2)(I)), specifically organizing and manipulating information through mathematical correlations.
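For illustration only, the conversion recited in this limitation can be expressed as a generic mathematical mapping; the notation below is hypothetical and is not drawn from the claims or the specification:

```latex
% x_A        : first sensor data of the first modality (hypothetical symbol)
% f_theta    : trained neural network with learned parameters theta
% \hat{x}_B  : generated second sensor data of the second modality
\[
  \hat{x}_B = f_{\theta}(x_A), \qquad A \neq B
\]
```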
Step 2A Prong 2:
This judicial exception is not integrated into a practical application.
Claim 1 recites the additional elements of:
obtaining, from a first sensor, first sensor data corresponding to an object, wherein the first sensor data is of a first modality — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
providing the first sensor data to a neural network trained to convert, using the neural network, from the first modality to a second modality — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
generating second sensor data of the second modality corresponding to the object using the trained neural network, wherein the generated second sensor data is of the second modality that is different than the first modality — This element amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, mere instructions to implement the abstract idea on a computer, or merely using a computer as a tool to perform the abstract idea (see MPEP § 2106.05(f)).
Claim 19 recites the additional elements of:
a non-transitory, computer-readable medium storing one or more instructions executable by a computer system to perform operations comprising — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to a generic computer component.
obtaining, from a first sensor, first sensor data corresponding to an object, wherein the first sensor data is of a first modality — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
providing the first sensor data to a neural network trained to convert from the first modality to a second modality — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
generating second sensor data of the second modality corresponding to the object using the trained neural network, wherein the generated second sensor data is of the second modality that is different than the first modality — This element amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, mere instructions to implement the abstract idea on a computer, or merely using a computer as a tool to perform the abstract idea (see MPEP § 2106.05(f)).
Claim 20 recites the additional elements of:
one or more processors — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to a generic computer component.
machine-readable media interoperably coupled with the one or more processors and storing one or more instructions that, when executed by the one or more processors, perform operations comprising — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to a generic computer component.
obtaining, from a first sensor, first sensor data corresponding to an object, wherein the first sensor data is of a first modality — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
providing the first sensor data to a neural network trained to convert from the first modality to a second modality — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
generating second sensor data of the second modality corresponding to the object using the trained neural network, wherein the generated second sensor data is of the second modality that is different than the first modality — This element amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, mere instructions to implement the abstract idea on a computer, or merely using a computer as a tool to perform the abstract idea (see MPEP § 2106.05(f)).
Step 2B:
The claims do not contain significantly more than the judicial exception.
Claim 1 recites the additional elements of:
obtaining, from a first sensor, first sensor data corresponding to an object, wherein the first sensor data is of a first modality — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
providing the first sensor data to a neural network trained to convert from the first modality to a second modality — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
generating second sensor data of the second modality corresponding to the object using the trained neural network, wherein the generated second sensor data is of the second modality that is different than the first modality — This element amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, mere instructions to implement the abstract idea on a computer, or merely using a computer as a tool to perform the abstract idea (see MPEP § 2106.05(f)).
Claim 19 recites the additional elements of:
a non-transitory, computer-readable medium storing one or more instructions executable by a computer system to perform operations comprising — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to a generic computer component.
obtaining, from a first sensor, first sensor data corresponding to an object, wherein the first sensor data is of a first modality — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
providing the first sensor data to a neural network trained to convert from the first modality to a second modality — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
generating second sensor data of the second modality corresponding to the object using the trained neural network, wherein the generated second sensor data is of the second modality that is different than the first modality — This element amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, mere instructions to implement the abstract idea on a computer, or merely using a computer as a tool to perform the abstract idea (see MPEP § 2106.05(f)).
Claim 20 recites the additional elements of:
one or more processors — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to a generic computer component.
machine-readable media interoperably coupled with the one or more processors and storing one or more instructions that, when executed by the one or more processors, perform operations comprising — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to a generic computer component.
obtaining, from a first sensor, first sensor data corresponding to an object, wherein the first sensor data is of a first modality — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
providing the first sensor data to a neural network trained to convert from the first modality to a second modality — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
generating second sensor data of the second modality corresponding to the object using the trained neural network, wherein the generated second sensor data is of the second modality that is different than the first modality — This element amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, mere instructions to implement the abstract idea on a computer, or merely using a computer as a tool to perform the abstract idea (see MPEP § 2106.05(f)).
As such, claims 1, 19, and 20 are not patent eligible.
Dependent Claims 2-18
Step 1:
Claims 2-18 recite a method; therefore, they are directed to one of the four categories of statutory subject matter (process/method, machine/product/apparatus, manufacture, or composition of matter).
Step 2A Prong 1:
Claims 2-18 merely narrow the previously cited abstract-idea limitations. For the reasons described above with respect to independent claim 1, this judicial exception is not meaningfully integrated into a practical application, nor do the additional limitations amount to significantly more than the abstract idea. The claims recite limitations similar to those described for the independent claim above and do not provide anything more than the abstract idea.
Claim 2 recites a method comprising:
generating third sensor data of the second modality using the neural network based on the training data of the first modality — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, i.e., a concept that can be performed in the human mind or with the use of a physical aid (e.g., pen and paper), including an observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)). Alternatively, the limitation encompasses a mathematical concept achievable through mathematical computation (see MPEP § 2106.04(a)(2)(I)), specifically organizing and manipulating information through mathematical correlations.
generating data of the first modality using the neural network based on the generated third sensor data of the second modality — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, i.e., a concept that can be performed in the human mind or with the use of a physical aid (e.g., pen and paper), including an observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)). Alternatively, the limitation encompasses a mathematical concept achievable through mathematical computation (see MPEP § 2106.04(a)(2)(I)), specifically organizing and manipulating information through mathematical correlations.
adjusting one or more weights of the neural network based on a difference between the training data of the first modality and the generated data of the first modality — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, i.e., a concept that can be performed in the human mind or with the use of a physical aid (e.g., pen and paper), including an observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)). Alternatively, the limitation encompasses a mathematical concept achievable through mathematical computation (see MPEP § 2106.04(a)(2)(I)), specifically organizing and manipulating information through mathematical correlations.
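For illustration only, the training steps recited in claim 2 can be summarized under the assumption of a simple reconstruction-style objective; the notation below is hypothetical and is not drawn from the claims or the specification:

```latex
% x_1    : training data of the first modality (hypothetical symbol)
% g_phi  : network mapping the first modality to the second modality
% h_psi  : network mapping the second modality back to the first modality
\[
  \tilde{x}_2 = g_{\phi}(x_1), \qquad
  \tilde{x}_1 = h_{\psi}(\tilde{x}_2), \qquad
  \mathcal{L} = \lVert x_1 - \tilde{x}_1 \rVert^2, \qquad
  (\phi,\psi) \leftarrow (\phi,\psi) - \eta\,\nabla_{\phi,\psi}\mathcal{L}
\]
```

Under this hypothetical formulation, the weight adjustment is driven by the difference between the training data of the first modality and the regenerated data of the first modality.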
Claim 3 recites a method comprising:
detecting, using one or more other sensors, the object based on the generated second sensor data of the second modality — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, i.e., a concept that can be performed in the human mind or with the use of a physical aid (e.g., pen and paper), including an observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)). Alternatively, the limitation encompasses a mathematical concept achievable through mathematical computation (see MPEP § 2106.04(a)(2)(I)), specifically organizing and manipulating information through mathematical correlations.
Claim 6 recites a method comprising:
generating the second sensor data of the second modality based on the output of the trained neural network — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, i.e., a concept that can be performed in the human mind or with the use of a physical aid (e.g., pen and paper), including an observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)). Alternatively, the limitation encompasses a mathematical concept achievable through mathematical computation (see MPEP § 2106.04(a)(2)(I)), specifically organizing and manipulating information through mathematical correlations.
Claim 7 recites a method comprising:
determining that the generated second sensor data of the second modality satisfies a threshold value — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, i.e., a concept that can be performed in the human mind or with the use of a physical aid (e.g., pen and paper), including an observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)). Alternatively, the limitation encompasses a mathematical concept achievable through mathematical computation (see MPEP § 2106.04(a)(2)(I)), specifically organizing and manipulating information through mathematical correlations.
Claim 8 recites a method comprising:
wherein a distance between a location of the first sensor and a location of the second sensor satisfies a distance threshold — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mathematical concept achievable through mathematical computation (see MPEP § 2106.04(a)(2)(I)), specifically organizing and manipulating information through mathematical correlations.
Claim 10 recites a method comprising:
wherein generating the second sensor data of the second modality comprises — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, i.e., a concept that can be performed in the human mind or with the use of a physical aid (e.g., pen and paper), including an observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)). Alternatively, the limitation encompasses a mathematical concept achievable through mathematical computation (see MPEP § 2106.04(a)(2)(I)), specifically organizing and manipulating information through mathematical correlations.
generating, based on the first sensor data, additional data representing the object, wherein the additional data is of the second modality — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, i.e., a concept that can be performed in the human mind or with the use of a physical aid (e.g., pen and paper), including an observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)). Alternatively, the limitation encompasses a mathematical concept achievable through mathematical computation (see MPEP § 2106.04(a)(2)(I)), specifically organizing and manipulating information through mathematical correlations.
Claim 11 recites a method comprising:
identifying the object based on the additional data of the second modality — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, i.e., a concept that can be performed in the human mind or with the use of a physical aid (e.g., pen and paper), including an observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)). Alternatively, the limitation encompasses a mathematical concept achievable through mathematical computation (see MPEP § 2106.04(a)(2)(I)), specifically organizing and manipulating information through mathematical correlations.
Claim 16 recites a method comprising:
generating a plurality of cost metrics for two or more groupings of a plurality of sensors used to obtain data, wherein a grouping of the two or more groupings includes at least one sensor of the plurality of sensors, and wherein the plurality of sensors includes the first sensor that obtains data of the first modality a second sensor that obtains data of the second modality — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, i.e., a concept that can be performed in the human mind or with the use of a physical aid (e.g., pen and paper), including an observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)). Alternatively, the limitation encompasses a mathematical concept achievable through mathematical computation (see MPEP § 2106.04(a)(2)(I)), specifically organizing and manipulating information through mathematical correlations.
selecting a first grouping of the two or more groupings based on the plurality of cost metrics to be included in a sensor stack, wherein the first grouping includes the first sensor and not the second sensor — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, i.e., a concept that can be performed in the human mind or with the use of a physical aid (e.g., pen and paper), including an observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)). Alternatively, the limitation encompasses a mathematical concept achievable through mathematical computation (see MPEP § 2106.04(a)(2)(I)), specifically organizing and manipulating information through mathematical correlations.
Claim 17 recites a method comprising:
wherein the plurality of cost metrics are calculated, for each of the two or more groupings, as a function of one or more of a size of each sensor in the grouping, a weight of each sensor in the grouping, power requirements of each sensor in the grouping, or a cost of each sensor in the grouping — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, i.e., a concept that can be performed in the human mind or with the use of a physical aid (e.g., pen and paper), including an observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)). Alternatively, the limitation encompasses a mathematical concept achievable through mathematical computation (see MPEP § 2106.04(a)(2)(I)), specifically organizing and manipulating information through mathematical correlations.
Claim 18 recites a method comprising:
generating a data signal configured to turn off one or more sensors in one or more sensor stacks, wherein the one or more sensors are not included in the selected first grouping — Under its broadest reasonable interpretation, this limitation encompasses the abstract idea of a mental process, i.e., a concept that can be performed in the human mind or with the use of a physical aid (e.g., pen and paper), including an observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)(III)). Alternatively, the limitation encompasses a mathematical concept achievable through mathematical computation (see MPEP § 2106.04(a)(2)(I)), specifically organizing and manipulating information through mathematical correlations.
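For illustration of the cost-metric and grouping-selection limitations of claims 16-18, the following minimal Python sketch shows one way such a computation could be carried out; all sensor names, attribute values, and the simple additive cost function are hypothetical and are not taken from the claims or the specification:

```python
from itertools import combinations

# Hypothetical sensor attributes (size, weight, power, cost are illustrative values only).
sensors = {
    "camera": {"size": 2.0, "weight": 0.3, "power": 5.0, "cost": 150.0},
    "lidar": {"size": 8.0, "weight": 1.2, "power": 25.0, "cost": 4000.0},
    "radar": {"size": 4.0, "weight": 0.8, "power": 10.0, "cost": 600.0},
}

def grouping_cost(grouping):
    """Cost metric: a simple sum of size, weight, power, and cost for each sensor in the grouping."""
    return sum(sum(sensors[name].values()) for name in grouping)

# Enumerate groupings of one or more sensors and compute a cost metric for each grouping.
groupings = [g for r in range(1, len(sensors) + 1) for g in combinations(sensors, r)]
costs = {g: grouping_cost(g) for g in groupings}

# Select the lowest-cost grouping to include in the sensor stack.
selected = min(costs, key=costs.get)

# Generate a signal naming the sensors to turn off (those not included in the selected grouping).
turn_off_signal = {"disable": sorted(set(sensors) - set(selected))}
print(selected, turn_off_signal)
```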
Step 2A Prong 2:
This judicial exception is not integrated into a practical application.
Claim 2 recites the additional elements of:
further comprising training the neural network, wherein training the neural network comprises — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to training a neural network.
obtaining training data of the first modality — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
Claim 4 recites the additional element of:
wherein the object is a human or a vehicle — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to an environment with a particular object.
Claim 5 recites the additional element of:
wherein the neural network is trained using a plurality of data modalities to learn one or more latent spaces between at least two data modalities of the plurality of data modalities, wherein the at least two data modalities include the first modality and the second modality — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to latent spaces between data modalities.
Claim 6 recites the additional element of:
obtaining, from a third sensor, third sensor data corresponding to the object wherein the third sensor data is of a third modality — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
providing the third sensor data and the first sensor data to the trained neural network to generate the output of the trained neural network — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
Claim 7 recites the additional element of:
in response to determining that the generated second sensor data satisfies the threshold value, providing output to a user device indicating a capability of the neural network to replace data of the second modality obtained by a second sensor with the generated second sensor data — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
Claim 9 recites the additional element of:
wherein the first sensor and the second sensor are included within a sensor stack — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to sensors arranged within a sensor stack.
Claim 12 recites the additional element of:
obtaining, from a second sensor, third sensor data of the second modality, wherein the third sensor data represents a portion of the object — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
Claim 13 recites the additional element of:
wherein the third sensor data of the second modality is a result of data degradation and represents a portion of the object that is less than the whole object due to the data degradation — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to sensor data with data degradation.
Claim 14 recites the additional element of:
wherein the data degradation is due to environmental conditions — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to sensor data with environmental data degradation.
Claim 15 recites the additional element of:
providing the third sensor data of the second modality to the neural network trained to convert from the first modality to the second modality, wherein the output of the trained neural network is generated by processing the first sensor data and the third sensor data — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
Claim 18 recites the additional element of:
sending the data signal to the one or more sensor stacks — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
Step 2B:
The claims do not contain significantly more than the judicial exception.
Claim 2 recites the additional elements of:
further comprising training the neural network, wherein training the neural network comprises — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to training a neural network.
obtaining training data of the first modality — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
Claim 4 recites the additional element of:
wherein the object is a human or a vehicle — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to an environment with a particular object.
Claim 5 recites the additional element of:
wherein the neural network is trained using a plurality of data modalities to learn one or more latent spaces between at least two data modalities of the plurality of data modalities, wherein the at least two data modalities include the first modality and the second modality — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to latent spaces between data modalities.
Claim 6 recites the additional element of:
obtaining, from a third sensor, third sensor data corresponding to the object wherein the third sensor data is of a third modality — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
providing the third sensor data and the first sensor data to the trained neural network to generate the output of the trained neural network — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
Claim 7 recites the additional element of:
in response to determining that the generated second sensor data satisfies the threshold value, providing output to a user device indicating a capability of the neural network to replace data of the second modality obtained by a second sensor with the generated second sensor data — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
Claim 9 recites the additional element of:
wherein the first sensor and the second sensor are included within a sensor stack — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to sensors arranged within a sensor stack.
Claim 12 recites the additional element of:
obtaining, from a second sensor, third sensor data of the second modality, wherein the third sensor data represents a portion of the object — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
Claim 13 recites the additional element of:
wherein the third sensor data of the second modality is a result of data degradation and represents a portion of the object that is less than the whole object due to the data degradation — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to sensor data with data degradation.
Claim 14 recites the additional element of:
wherein the data degradation is due to environmental conditions — This element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). This element merely limits the use of the abstract idea to sensor data with environmental data degradation.
Claim 15 recites the additional element of:
providing the third sensor data of the second modality to the neural network trained to convert from the first modality to the second modality, wherein the output of the trained neural network is generated by processing the first sensor data and the third sensor data — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
Claim 18 recites the additional element of:
sending the data signal to the one or more sensor stacks — This element amounts to no more than insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)(II), receiving or transmitting data over a network).
As such, claims 2-18 are not patent eligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 6, 10-11, and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by NITSCH et al. (US 2021/0174133 A1) hereinafter Nitsch.
Regarding claim 1, Nitsch teaches a method comprising (A method for classifying objects [Abstract]):
obtaining, from a first sensor, first sensor data corresponding to an object, wherein the first sensor data is of a first modality (modality-independent features can be detected in the measuring data of at least two sensor modalities of the same object. [para. 8]; There is preferably at least a first and a second sensor modality, wherein the method is configured to extract modality-independent features from measuring data from a sensor of the first sensor modality [para. 10]);
providing the first sensor data to a neural network trained to convert, using the neural network, from the first modality to a second modality (Sensor data is provided to a neural network of an encoder-decoder architecture [see para. 11, and para. 22]. The first sensor data is provided to the neural network to convert from the first modality to a second modality [see para. 10], and the neural network is trained for the conversion [see para. 46]);
generating second sensor data of the second modality corresponding to the object, using the trained neural network, wherein the generated second sensor data is of the second modality that is different than the first modality (Sensor data corresponds to an object [see para. 8], and is provided to a neural network of an encoder-decoder architecture [see para. 11, and para. 22]. The first sensor data is provided to the neural network to convert from the first modality to a second modality [see para. 10], wherein the first modality and second modality are different [see para. 18], and the neural network is trained for the conversion [see para. 46]).
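For illustration only, the encoder-decoder style conversion discussed above can be sketched as follows; the module structure, dimensions, and stand-in data are hypothetical and are not taken from Nitsch or from the claims:

```python
import torch
from torch import nn

# Hypothetical dimensions: "first modality" data is a 64-value vector (e.g., a radar sweep),
# and "second modality" data is a 128-value vector (e.g., a flattened image patch).
FIRST_DIM, LATENT_DIM, SECOND_DIM = 64, 16, 128

encoder = nn.Sequential(nn.Linear(FIRST_DIM, LATENT_DIM), nn.ReLU())  # first modality -> latent code
decoder = nn.Sequential(nn.Linear(LATENT_DIM, SECOND_DIM))            # latent code -> second modality

first_sensor_data = torch.randn(1, FIRST_DIM)  # stand-in for data obtained from the first sensor
with torch.no_grad():
    generated_second = decoder(encoder(first_sensor_data))  # generated data of the second modality

print(generated_second.shape)  # torch.Size([1, 128])
```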
Regarding claim 2, Nitsch as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
further comprising training the neural network, wherein training the neural network comprises (The feature transformation unit and/or the respective feature extractors are trained in particular by means of unmonitored learning. The respective neural network, which is trained, thereby comprises weights, which are specified by means of the learning. [para. 45]);
obtaining training data of the first modality (The encoder and decoder of each sensor modality is thereby learned separately from the other sensor modalities, so that they can be learned on different data sets. [para. 46]);
generating third sensor data of the second modality using the neural network based on the training data of the first modality (The feature extraction unit (13) further comprises a feature transformation unit (17). The feature transformation unit (17) comprises a neural network (17a) for the measuring data of the first sensor modality, and a neural network (17b) for the measuring data of the second sensor modality. As input, they receive the respective code of the feature extractors. The feature transformation unit (17) is configured to detect modality-independent features (24). They live in a common feature space (26). The feature transformation unit (17) can further issue modality-dependent features (25), which live in their own feature spaces, namely in a feature space (27) for modality-dependent features of the first sensor modality, and a feature space (28) for modality-dependent features of the second sensor modality. [para. 112—para. 113]);
generating data of the first modality using the neural network (The feature retransformation unit (29) is configured to generate code from the input again, namely an image code (32) and a point cloud code (33). The respective decoders can generate modality-dependent data from the corresponding codes again. The decoder (31) for the second sensor modality generates an output (31a), which corresponds to regenerated image data. The decoder (30) for the first sensor modality generates an output (30a), which corresponds to a regenerated lidar point cloud. [para. 116]) based on the generated third sensor data of the second modality (The feature retransformation unit (29) comprises a neural network (29a) for the first sensor modality, and a neural network (29b) for the second sensor modality. As input, they receive the modality-independent features (24), and optionally the modality-dependent features (25) of the feature transformation unit (17). [para. 115]);
adjusting one or more weights of the neural network based on a difference between the training data of the first modality and the generated data of the first modality (Nitsch discloses the conditions of the loss function used in adjusting the weights of the neural network, which is based on a difference between the training data and the generated data [see para. 60—para. 65], as part of the training algorithm [see para. 114]).
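For illustration only, training in which weights are adjusted based on a difference between training data of the first modality and regenerated data of the first modality can be sketched as a simple reconstruction-style loop; the networks, optimizer, learning rate, and stand-in data below are hypothetical and are not taken from Nitsch or from the claims:

```python
import torch
from torch import nn

# Hypothetical cycle-style training sketch: first modality -> second modality -> first modality.
FIRST_DIM, SECOND_DIM = 64, 128
to_second = nn.Linear(FIRST_DIM, SECOND_DIM)  # generates data of the second modality
to_first = nn.Linear(SECOND_DIM, FIRST_DIM)   # regenerates data of the first modality

optimizer = torch.optim.SGD(list(to_second.parameters()) + list(to_first.parameters()), lr=1e-3)
training_data_first = torch.randn(32, FIRST_DIM)  # stand-in training data of the first modality

for _ in range(10):
    generated_second = to_second(training_data_first)   # second-modality data from first-modality input
    regenerated_first = to_first(generated_second)      # first-modality data regenerated from it
    loss = nn.functional.mse_loss(regenerated_first, training_data_first)  # difference drives the update
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # adjusts the network weights
```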
Regarding claim 3, Nitsch as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
detecting, using one or more other sensors, the object based on the generated second sensor data of the second modality (The method in particular uses a single classification unit for classifying all features of the sensors of all sensor modalities, from which measuring data is generated and provided. [para. 73]).
Regarding claim 6, Nitsch as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
obtaining, from a third sensor, third sensor data corresponding to the object wherein the third sensor data is of a third modality (modality-independent features can be detected in the measuring data of at least two sensor modalities of the same object. [para. 8]; The method can further comprise the generation of measuring data from a sensor of a third sensor modality, which is likewise provided for the feature extraction unit. [para. 10]);
providing the third sensor data and the first sensor data to the trained neural network to generate the output of the trained neural network (First sensor data and third sensor data are provided [see para. 20] to the encoder-decoder neural network [see para. 11, and para. 22]. The encoder-decoder is trained to generate an output [see para. 46]);
generating the second sensor data of the second modality based on the output of the trained neural network (The second sensor data is generated by the output of the decoder for the second sensor modality [see para. 11], and the encoder-decoder neural network is trained [see para. 46]).
Regarding claim 10, Nitsch as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
wherein generating the second sensor data of the second modality comprises (There is preferably at least a first and a second sensor modality, wherein the method is configured to extract modality-independent features from measuring data from a sensor of the first sensor modality in such a way that measuring data from a sensor of the second measuring modality can be reconstructed. [para. 10]);
generating, based on the first sensor data, additional data representing the object, wherein the additional data is of the second modality (There is preferably at least a first and a second sensor modality, wherein the method is configured to extract modality-independent features from measuring data from a sensor of the first sensor modality in such a way that measuring data from a sensor of the second measuring modality can be reconstructed. [para. 10]; The feature transformation unit could further additionally also issue modality-dependent features. In other words, the feature transformation unit looks for features, which can be detected in all measuring data of the different sensor modalities, i.e. which have all sensor modalities in common. These modality-independent features are issued. However, the features, which only appear in one sensor modality, i.e. the modality-dependent features, could additionally also be issued. [para. 35]).
Regarding claim 11, Nitsch as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
identifying the object based on the additional data of the second modality (The method comprises especially the transfer of at least one feature vector from the feature extraction unit to the classification unit. This feature vector can include only the modality-independent features or additionally also modality-dependent features. The classification comprises the comparison of the received feature vector with a respective previously specified average feature vector for each class, wherein a corresponding class label is issued when falling below a previously specified deviation limit. [para. 76]).
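For illustration only, a comparison of a feature vector against per-class average feature vectors with a deviation limit, of the kind described in the cited passage, can be sketched as follows; the classes, vectors, and limit are hypothetical and are not taken from Nitsch:

```python
import numpy as np

# Hypothetical per-class average feature vectors and deviation limit.
class_means = {"vehicle": np.array([1.0, 0.2, 0.1]), "pedestrian": np.array([0.1, 0.9, 0.4])}
DEVIATION_LIMIT = 0.5

def classify(feature_vector):
    """Return the class label whose average feature vector is closest, if the deviation is below the limit."""
    label, mean = min(class_means.items(), key=lambda item: np.linalg.norm(feature_vector - item[1]))
    return label if np.linalg.norm(feature_vector - mean) < DEVIATION_LIMIT else None

print(classify(np.array([0.9, 0.25, 0.15])))  # -> "vehicle"
```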
Regarding claim 19, claim 19 contains substantially similar limitations to those found in claim 1. Therefore it is rejected for the same reason as claim 1 above. Additionally, Nitsch further teaches:
A non-transitory, computer-readable medium storing one or more instructions executable by a computer system to perform operations comprising (The solution moreover relates to a computer-readable storage medium, on which a program is stored, which, after it was loaded into the memory of the computer, makes it possible for a computer to carry out an above-described method for classifying objects and/or for the distance measurement, optionally together with an above-described device. [para. 95]).
Regarding claim 20, claim 20 contains substantially similar limitations to those found in claim 1. Therefore it is rejected for the same reason as claim 1 above. Additionally, Nitsch further teaches:
a system, comprising: one or more processors; and machine-readable media interoperably coupled with the one or more processors and storing one or more instructions that, when executed by the one or more processors, perform operations comprising (The solution moreover relates to a computer-readable storage medium, on which a program is stored, which, after it was loaded into the memory of the computer, makes it possible for a computer to carry out an above-described method for classifying objects and/or for the distance measurement, optionally together with an above-described device. [para. 95]).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over NITSCH et al. (US 2021/0174133 A1) hereinafter Nitsch, as applied in claim 3 above, in view of SIVAN et al. (US 2021/0056365 A1) hereinafter Sivan.
Regarding claim 4, Nitsch as applied in claim 3 above teaches all the limitations of claim 3.
However, Nitsch fails to teach wherein the object is a human or a vehicle.
In the same field of endeavor, Sivan teaches:
wherein the object is a human or a vehicle (identifying and/or estimating physical attributes and movement patterns of the detected objects, for example, a person, an animal, a tree, a car, a structure, a traffic infrastructure object (e.g. traffic light, sign, sign pole, etc.) and/or the like. [para. 46]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate wherein the object is a human or a vehicle as suggested in Sivan into Nitsch because both methods use sensor data for object detection (see Nitsch, Abstract; see Sivan, Abstract). Incorporating the teaching of Sivan into Nitsch would increase the reliability of object detection (see para. 3).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over NITSCH et al. (US 2021/0174133 A1) hereinafter Nitsch, as applied in claim 1 above, in view of Junginger (US 12/092,741 B2) hereinafter Junginger.
Regarding claim 5, Nitsch as applied in claim 1 above teaches all the limitations of claim 1.
However, Nitsch fails to teach wherein the neural network is trained using a plurality of data modalities to learn one or more latent spaces between at least two data modalities of the plurality of data modalities, wherein the at least two data modalities include the first modality and the second modality.
In the same field of endeavor, Junginger teaches:
wherein the neural network is trained using a plurality of data modalities to learn one or more latent spaces between at least two data modalities of the plurality of data modalities, wherein the at least two data modalities include the first modality and the second modality (Junginger discloses that the encoder-decoder neural network is trained for the latent spaces between a plurality of source and target modalities [see Col. 4, lines 35-51]. Thus, the first modality and second modality would be encompassed within the plurality of source and target modalities. The first modality can be any source modality, and the second modality can be any target modality).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate wherein the neural network is trained using a plurality of data modalities to learn one or more latent spaces between at least two data modalities of the plurality of data modalities, wherein the at least two data modalities include the first modality and the second modality, as suggested in Junginger into Nitsch because both methods convert measured data from one modality to a second modality (see Nitsch, Abstract; see Junginger, Abstract). Incorporating the teaching of Junginger into Nitsch would allow the decoder to process measured data of various source measurement modalities (see Col. 4, lines 35-43); when targeting new modalities, the encoder may remain unchanged and merely one new decoder specific for the new sensor is to be trained (see Col. 4, lines 44-51).
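For illustration only, an arrangement in which a shared encoder and latent space remain unchanged and only a new decoder is trained for a new target modality can be sketched as follows; the modules, modality names, and dimensions are hypothetical and are not taken from Junginger:

```python
import torch
from torch import nn

LATENT_DIM = 16
encoder = nn.Linear(64, LATENT_DIM)  # maps source-modality data into a shared latent space

# One decoder per target modality; supporting a new modality only requires adding a new decoder.
decoders = nn.ModuleDict({
    "camera": nn.Linear(LATENT_DIM, 128),
    "lidar": nn.Linear(LATENT_DIM, 256),
})

def add_target_modality(name, out_dim):
    """Register a decoder for a new target modality; the shared encoder is left unchanged."""
    decoders[name] = nn.Linear(LATENT_DIM, out_dim)

source_data = torch.randn(1, 64)
latent = encoder(source_data)                 # shared latent representation
camera_like = decoders["camera"](latent)      # reconstruction in one existing target modality
add_target_modality("radar", 32)              # new target modality: only a new decoder is needed
radar_like = decoders["radar"](latent)
```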
Claims 7 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over NITSCH et al. (US 2021/0174133 A1) hereinafter Nitsch, as applied in claim 1 above, in view of FIETZEK RAFAEL et al. (WO 2021/028322 A1) hereinafter Fietzek.
Regarding claim 7, Nitsch as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
the generated second sensor data of the second modality (There is preferably at least a first and a second sensor modality, wherein the method is configured to extract modality-independent features from measuring data from a sensor of the first sensor modality in such a way that measuring data from a sensor of the second measuring modality can be reconstructed. [para. 10]);
the generated second sensor data (There is preferably at least a first and a second sensor modality, wherein the method is configured to extract modality-independent features from measuring data from a sensor of the first sensor modality in such a way that measuring data from a sensor of the second measuring modality can be reconstructed. [para. 10]);
data of the second modality obtained by a second sensor (The method in particular comprises the generation of measuring data from a sensor of a second sensor modality [para. 18]); generated second sensor data (There is preferably at least a first and a second sensor modality, wherein the method is configured to extract modality-independent features from measuring data from a sensor of the first sensor modality in such a way that measuring data from a sensor of the second measuring modality can be reconstructed. [para. 10]).
However, Nitsch fails to teach determining that the generated sensor data satisfies a threshold value; and in response to determining that the generated sensor data satisfies the threshold value, providing output to a user device indicating a capability of the neural network to replace data obtained by a sensor with the generated sensor data.
In the same field of endeavor, Fietzek teaches:
determining that the generated sensor data satisfies a threshold value (At the end of the training, one would access the accuracy of the generated model. If it is not good enough, one might decide, afterwards, that the sensor is actually not replaceable (in any case, the ultimate decision of the replaceability of a sensor comes after the model is trained). If the model is good enough, then the sensor can be replaced [para. 57]);
in response to determining that the generated sensor data satisfies the threshold value, providing output to a user device indicating a capability of the neural network to replace data obtained by a sensor with the generated sensor data (The above object is achieved by a method for determining a sensor configuration in a vehicle which includes a plurality of sensors, comprising the steps of: establishing a preliminary sensor configuration for the vehicle, which sensor configuration includes a first number of real sensors, each of which outputting a real sensor signal; determining whether at least one of the real sensors can be replaced by a virtual sensor; and changing the preliminary sensor configuration into a final sensor configuration which includes a second number of real sensors and at least one virtual sensor, wherein the second number is smaller than the first number. [para. 5]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate determining that the generated sensor data satisfies a threshold value; and in response to determining that the generated sensor data satisfies the threshold value, providing output to a user device indicating a capability of the neural network to replace data obtained by a sensor with the generated sensor data, as suggested in Fietzek into Nitsch because both systems generate virtual sensor data (see Nitsch, Abstract; see Fietzek, Abstract). Incorporating the teaching of Fietzek into Nitsch would improve redundancy and possibly safety of the sensor configuration (see para. 16).
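For illustration only, a threshold-based determination of whether generated sensor data can replace a real sensor's data can be sketched as follows; the accuracy value, threshold, and reporting mechanism are hypothetical and are not taken from Fietzek:

```python
ACCURACY_THRESHOLD = 0.95  # hypothetical threshold value for the generated sensor data

def report_replaceability(model_accuracy, sensor_name):
    """If the generated data satisfies the threshold, report that the real sensor can be replaced."""
    if model_accuracy >= ACCURACY_THRESHOLD:
        # Stand-in for providing output to a user device.
        print(f"Virtual sensor can replace '{sensor_name}' (accuracy {model_accuracy:.2f}).")
        return True
    print(f"'{sensor_name}' is not replaceable (accuracy {model_accuracy:.2f}).")
    return False

report_replaceability(0.97, "second_sensor")
```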
Regarding claim 9, the combination of Nitsch and Fietzek as applied in claim 7 above teaches all the limitations of claim 7 and further teaches:
wherein the first sensor and the second sensor are included within a sensor stack (The device in particular comprises a sensor of a first sensor modality, preferably a sensor of a second sensor modality, and/or a sensor of a third sensor modality, and/or a sensor of a fourth sensor modality. [see Nitsch, para. 90]).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over NITSCH et al. (US 2021/0174133 A1) hereinafter Nitsch, in view of FIETZEK RAFAEL et al. (WO 2021/028322 A1) hereinafter Fietzek, as applied in claim 7 above, and further in view of WANG (US 2021/0110218 A1) hereinafter Wang.
Regarding claim 8, the combination of Nitsch and Fietzek as applied in claim 7 above teaches all the limitations of claim 7.
However, the combination of Nitsch and Fietzek fails to teach wherein a distance between a location of the first sensor and a location of the second sensor satisfies a distance threshold.
In the same field of endeavor, Wang teaches:
wherein a distance between a location of the first sensor and a location of the second sensor satisfies a distance threshold (In some embodiments, the sound sensor can be arranged at the first position and the vision sensor can be arranged at the second position. The distance between the first position and the second position can be greater than or equal to 0 and less than a distance threshold. [para. 88]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate wherein a distance between a location of the first sensor and a location of the second sensor satisfies a distance threshold, as suggested in Wang into the combination of Nitsch and Fietzek because both methods use sensors to gather data (see Nitsch, Abstract; see Wang, Abstract). Incorporating the teaching of Wang into the combination of Nitsch and Fietzek would provide a more accurate environment recognition result, thereby improving the robustness of vehicle control (see para. 92).
Claims 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over NITSCH et al. (US 2021/0174133 A1) hereinafter Nitsch, as applied in claim 1 above, in view of Shah (US 2022/0185266 A1) hereinafter Shah.
Regarding claim 12, Nitsch as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
a second sensor (generation of measuring data from a sensor of a second sensor modality as well as of measuring data from a sensor of a first sensor modality [para. 18]); third sensor data of the second modality (generation of measuring data from a sensor of a second sensor modality as well as of measuring data from a sensor of a first sensor modality [para. 18]); third sensor data (generation of measuring data from a sensor of a second sensor modality as well as of measuring data from a sensor of a first sensor modality [para. 18]).
However, Nitsch fails to teach obtaining, from a sensor, sensor data, wherein the sensor data represents a portion of the object.
In the same field of endeavor, Shah teaches:
obtaining, from a sensor, sensor data, wherein the sensor data represents a portion of the object (identification of an object 106 that is occluded or partially occluded from view of the vehicle 102. In some example, the vehicle computing system may be configured to detect objects utilizing sensor data captured by one or more sensors. [para. 37]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate obtaining, from a sensor, sensor data, wherein the sensor data represents a portion of the object, as suggested in Shah, into Nitsch, because both methods can be used for object detection based on sensor data (see Nitsch, para. 3; see Shah, para. 11). Incorporating the teaching of Shah into Nitsch would improve collision prediction and avoidance between a vehicle and objects in an environment (see Shah, Abstract).
Regarding claim 13, the combination of Nitsch and Shah as applied in claim 12 above teaches all the limitations of claim 12 and further teaches:
wherein the third sensor data of the second modality (generation of measuring data from a sensor of a second sensor modality as well as of measuring data from a sensor of a first sensor modality [see Nitsch, para. 18]) is a result of data degradation and represents a portion of the object that is less than the whole object due to the data degradation (identification of an object 106 that is occluded or partially occluded from view of the vehicle 102. [see Shah, para. 37]).
Regarding claim 14, the combination of Nitsch and Shah as applied in claim 13 above teaches all the limitations of claim 13 and further teaches:
wherein the data degradation is due to environmental conditions (identification of an object 106 that is occluded or partially occluded from view of the vehicle 102. [see Shah, para. 37]).
Regarding claim 15, the combination of Nitsch and Shah as applied in claim 12 above teaches all the limitations of claim 12 and further teaches:
providing the third sensor data of the second modality to the neural network trained to convert from the first modality to the second modality, wherein the output of the trained neural network is generated by processing the first sensor data and the third sensor data (First sensor data and third sensor data are provided [see Nitsch, para. 20] to the encoder-decoder neural network [see Nitsch, para. 11 and para. 22]. The encoder-decoder is trained to generate an output [see Nitsch, para. 46]).
Claims 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over NITSCH et al. (US 2021/0174133 A1) hereinafter Nitsch, as applied to claim 1 above, in view of de CESARE et al. (US 2018/0348843 A1) hereinafter Cesare.
Regarding claim 16, Nitsch as applied in claim 1 above teaches all the limitations of claim 1 and further teaches:
wherein the plurality of sensors includes the first sensor that obtains data of the first modality and a second sensor that obtains data of the second modality (The method in particular comprises the generation of measuring data from a sensor of a second sensor modality, wherein the method comprises the provision of the measuring data for the feature extraction unit. The second sensor modality and the first sensor modality preferably differ. [para. 18]).
However, Nitsch fails to teach generating a plurality of cost metrics for two or more groupings of a plurality of sensors used to obtain data, wherein a grouping of the two or more groupings includes at least one sensor of the plurality of sensors; and selecting a first grouping of the two or more groupings based on the plurality of cost metrics to be included in a sensor stack, wherein the first grouping includes the first sensor and not the second sensor.
In the same field of endeavor, Cesare teaches:
generating a plurality of cost metrics for two or more groupings of a plurality of sensors used to obtain data, wherein a grouping of the two or more groupings includes at least one sensor of the plurality of sensors (According to some embodiments, the sensors coupled to the SOC can be logically separated into different sensor groups to perform proximity detection procedures in a manner that promotes even greater power efficiency for the computing device. For example, the described embodiments can group a first subset of sensors that consume power at a lower rate relative to a second subset of sensors. [para. 24]);
selecting a first grouping of the two or more groupings based on the plurality of cost metrics to be included in a sensor stack, wherein the first grouping includes the first sensor and not the second sensor (As will be depicted in greater detail in FIG. 2D, the sensor processing module 122 can selectively activate the light sensor 126-2 which is also associated with the sensor group 202, to gather more data to make a more informed decision as to whether the user 210 is approaching the computing device 102. Additionally, by specifically activating the light sensor 126-1, the sensor processing module 122 also minimizes the number of sensors/devices that are used for user proximity determinations by not activating additional sensors/devices included in other sensor groups, such as sensor groups 204 and 206, to gather data at this point, thereby promoting efficient power usage. [para. 59]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate generating a plurality of cost metrics for two or more groupings of a plurality of sensors used to obtain data, wherein a grouping of the two or more groupings includes at least one sensor of the plurality of sensors; and selecting a first grouping of the two or more groupings based on the plurality of cost metrics to be included in a sensor stack, wherein the first grouping includes the first sensor and not the second sensor, as suggested in Cesare, into Nitsch, because both methods collect sensor data (see Nitsch, Abstract; see Cesare, Abstract). Incorporating the teaching of Cesare into Nitsch would achieve greater power savings (see Cesare, para. 47).
Regarding claim 17, the combination of Nitsch and Cesare as applied in claim 16 above teaches all the limitations of claim 16 and further teaches:
wherein the plurality of cost metrics are calculated, for each of the two or more groupings, as a function of one or more of a size of each sensor in the grouping, a weight of each sensor in the grouping, power requirements of each sensor in the grouping, or a cost of each sensor in the grouping (According to some embodiments, the sensors coupled to the SOC can be logically separated into different sensor groups to perform proximity detection procedures in a manner that promotes even greater power efficiency for the computing device. For example, the described embodiments can group a first subset of sensors that consume power at a lower rate relative to a second subset of sensors. [see Cesare, para. 24]).
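For illustration only, and not as part of the claims, the cited references, or the record, the cost-metric limitation addressed above can be expressed as a simple computation. The following Python sketch (with hypothetical sensors, attribute values, and weighting) scores each candidate grouping as a function of per-sensor size, weight, power requirements, and cost, and selects the lowest-cost grouping.

```python
# Illustrative sketch only; all sensors, attribute values, and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    size_cm3: float
    weight_g: float
    power_w: float
    cost_usd: float

def grouping_cost(sensors: list[Sensor]) -> float:
    # Cost metric as a weighted function of size, weight, power, and monetary cost.
    return sum(0.1 * s.size_cm3 + 0.05 * s.weight_g + 2.0 * s.power_w + 0.01 * s.cost_usd
               for s in sensors)

camera = Sensor("camera", size_cm3=20, weight_g=50, power_w=2.5, cost_usd=80)
radar = Sensor("radar", size_cm3=150, weight_g=300, power_w=4.0, cost_usd=250)
lidar = Sensor("lidar", size_cm3=400, weight_g=900, power_w=12.0, cost_usd=4000)

groupings = {"camera_only": [camera],
             "camera_radar": [camera, radar],
             "full_stack": [camera, radar, lidar]}

# Select the grouping with the lowest cost metric for inclusion in the sensor stack.
best = min(groupings, key=lambda name: grouping_cost(groupings[name]))
print(best, {name: round(grouping_cost(g), 2) for name, g in groupings.items()})
```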
Regarding claim 18, the combination of Nitsch and Cesare as applied in claim 16 above teaches all the limitations of claim 16 and further teaches:
generating a data signal configured to turn off one or more sensors in one or more sensor stacks, wherein the one or more sensors are not included in the selected first grouping (Based on recognition of the pattern, the sensor processing module 122 can conserve power resources by deactivating/not activating the sensors/devices included in the sensor groups 204 and 206 to prevent them from performing any procedures until further instructions are provided by the sensor processing module 122. [see Cesare, para. 63]);
sending the data signal to the one or more sensor stacks (Based on recognition of the pattern, the sensor processing module 122 can conserve power resources by deactivating/not activating the sensors/devices included in the sensor groups 204 and 206 to prevent them from performing any procedures until further instructions are provided by the sensor processing module 122. [see Cesare, para. 63]).
Response to Amendment
Applicant’s amendment to the title is accepted and the objection to the specification is respectfully withdrawn.
Response to Arguments
Applicant’s arguments, filed 11/21/2025, traversing the rejections of claims 1-20 under 35 U.S.C. 101 have been fully considered and are not persuasive. Applicant argues that claim 1 cannot be practically performed in the human mind because it involves use of a neural network, akin to Example 39 of the PEG; further argues that claim 1 does not recite a judicial exception; and lastly argues that, even if claim 1 recited a judicial exception, the claim as a whole integrates the exception into a practical application of converting a first modality to a second modality. Examiner respectfully disagrees.
With respect to Example 39, while Example 39 was found not to recite a judicial exception, the mere involvement of a neural network does not preclude the instant application’s claims from reciting a judicial exception. As identified by the applicant, Example 47 also involved a neural network yet was found to recite a judicial exception. Thus, the instant application’s claim 1 is similar to Example 47, as it involves a neural network and recites a judicial exception.
With respect to the claim limitation not being performable in the human mind, although the generation step is performed by a neural network, the neural network is merely being used as a tool to perform the generating step, which was identified above as a judicial exception in the form of a mental process or a mathematical concept. Specifically, generating sensor data of a different modality than the first modality could reasonably be performed in the human mind, or is a mathematical concept achievable through mathematical computation to generate sensor data different than another sensor data. For example, where the first sensor is a camera and the second sensor is a motion sensor, a human could reasonably look at the image data taken by the camera, either as a photo or as a frame of a video, and generate data for the motion sensor as to whether or not motion is detected (i.e., motion is either detected or not detected). Thus, the claim recites a judicial exception that is performable in the human mind.
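For illustration only, and not as part of the claims, the cited references, or the record, the camera-to-motion example above can be expressed as a trivial computation. The following Python sketch (with hypothetical names and threshold) reduces data of a first modality (two camera frames) to data of a second modality (a binary motion indication) by simple arithmetic.

```python
# Illustrative sketch only; names, threshold, and frames are hypothetical.
import numpy as np

def motion_from_frames(prev_frame: np.ndarray, curr_frame: np.ndarray,
                       threshold: float = 10.0) -> bool:
    """Return True (motion detected) if the mean absolute pixel difference exceeds a threshold."""
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    return bool(diff.mean() > threshold)

# Example usage with synthetic 8-bit grayscale frames.
prev = np.zeros((4, 4), dtype=np.uint8)
curr = np.full((4, 4), 50, dtype=np.uint8)
print(motion_from_frames(prev, curr))  # True: "motion detected"
```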
With respect to claim 1 being integrated into a practical application, while claim 1 recites a neural network trained to convert from a first modality to a second modality, claim 1 does not recite converting from a first modality to a second modality and instead recites generating data of the second modality, wherein the second modality is different than the first modality. Further, Applicant points to para. 30-31, 44, 115, and 119-120 as evidence of the claims reflecting an improvement that practically integrates the judicial exception. However, para. 30-31 disclose a three-sensor stack used to track a missing person, fugitive, vehicle, or other object, while claim 1 does not reflect tracking or having three sensors. Similarly, para. 44 discloses re-identifying a car based on sensor data, while claim 1 does not reflect any identifying or re-identifying steps. Similarly, para. 119 discloses a cost metric, which is not reflected in claim 1, and para. 120 further discloses groups of sensors related to the cost metrics. While para. 115 discloses an improvement that a model can augment or replace any damaged, defective, or non-existent sensor, it discloses that the synthetic sensor data is based on first sensor data and previous second sensor data, which is not reflected in the claim. However, it is important to note that the judicial exception alone cannot provide the improvement; the improvement must be provided by one or more additional elements (see MPEP 2106.05(a)). Thus, while the specification may support generating second sensor data to improve the functioning of a machine with a damaged, defective, or non-existent sensor as disclosed in para. 115, that improvement would be provided by the abstract idea of generating second sensor data as previously identified above, which cannot practically integrate the judicial exception.
For at least the aforementioned reasons, claim 1 remains rejected under 35 U.S.C. 101, claims 19-20 remain rejected for similar reasons as they recite similar subject matter, and claims 2-18 remain rejected as the dependent claims fail to integrate the judicial exception of claim 1 into a practical application.
Applicant’s arguments, filed 11/21/2025, traversing the rejection of claims 1-3, 6, 10-11, and 19-20 under 35 U.S.C. 102(a)(1) have been fully considered and are not persuasive. Applicant argues that Nitsch cannot disclose the features of amended claim 1 because it is directed to a method for classifying objects by extracting modality-independent features to enable a single classification unit to handle data from various sensor modalities by relying upon different neural networks. Examiner respectfully disagrees.
With respect to Nitsch being directed to a method for classifying objects, while Nitsch is directed to classifying objects, the methodology Nitsch uses to arrive at that classification discloses the steps and limitations recited in the instant application’s claimed invention; thus, Nitsch is applicable prior art.
With respect to applicant’s argument that Nitsch relies upon different neural networks, it is important to note that Nitsch discloses using an encoder-decoder architecture (see Nitsch, para. 46 and FIG. 4-5), and it is known within the art that encoder-decoder architectures are neural networks. Kyunghyun Cho (“Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation”, arXiv:1406.1078v3, Sep 3, 2014) is non-patent literature frequently cited for encoder-decoder neural networks and explains the general details of the architecture, where the encoder and the decoder each comprise a neural network and are paired together to form an encoder-decoder neural network. The encoder-decoder architecture detailed in Cho is reflected in the disclosure of Nitsch, where each of the encoders and decoders is a neural network and the encoders and decoders are paired based on sensor modality, arriving at an encoder-decoder neural network with encoders and decoders for each sensor modality. Thus, while Nitsch relies upon different neural networks, those networks are encompassed under the architecture of the single encoder-decoder neural network. The claimed invention, which recites converting a first modality to a second modality using a trained neural network, can therefore be found in the disclosure of Nitsch, which uses an encoder-decoder neural network to convert from a first sensor modality to a second sensor modality.
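For illustration only, and not as code from Nitsch, Cho, or the instant application, the following Python sketch (assuming PyTorch, with hypothetical dimensions and names) shows the pairing discussed above: one encoder per input modality maps data into a shared feature space, and one decoder per output modality maps that representation back out, so pairing the first-modality encoder with the second-modality decoder converts between modalities within a single encoder-decoder neural network.

```python
# Illustrative sketch only; dimensions, names, and structure are hypothetical.
import torch
import torch.nn as nn

class ModalityEncoderDecoder(nn.Module):
    def __init__(self, dim_modality_a: int, dim_modality_b: int, latent_dim: int = 16):
        super().__init__()
        # One encoder and one decoder per sensor modality, joined through a shared latent space.
        self.encoders = nn.ModuleDict({
            "a": nn.Sequential(nn.Linear(dim_modality_a, latent_dim), nn.ReLU()),
            "b": nn.Sequential(nn.Linear(dim_modality_b, latent_dim), nn.ReLU()),
        })
        self.decoders = nn.ModuleDict({
            "a": nn.Linear(latent_dim, dim_modality_a),
            "b": nn.Linear(latent_dim, dim_modality_b),
        })

    def convert(self, x: torch.Tensor, src: str, dst: str) -> torch.Tensor:
        # Encode with the source-modality encoder, decode with the target-modality decoder.
        return self.decoders[dst](self.encoders[src](x))

# Example usage: convert a batch of first-modality data into second-modality data.
model = ModalityEncoderDecoder(dim_modality_a=8, dim_modality_b=4)
first_modality_data = torch.randn(2, 8)
second_modality_data = model.convert(first_modality_data, src="a", dst="b")
print(second_modality_data.shape)  # torch.Size([2, 4])
```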
For at least the aforementioned reasons, claim 1 remains rejected under 35 U.S.C. 102(a)(1); claims 19-20 recite similar subject matter as claim 1 and remain rejected under 35 U.S.C. 102(a)(1) for similar reasons; and claims 2-3, 6, and 10-11 remain rejected under 35 U.S.C. 102(a)(1) over Nitsch by virtue of their dependency from claim 1.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Souly et al. (US 2022/0261658 A1) teaches translating sensor data between first sensor operating characteristics, for example LiDAR, and second sensor operating characteristics, for example a camera, using an encoder-decoder neural network.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAKE BREEN whose telephone number is (571)272-0456. The examiner can normally be reached Monday - Friday, 7:00 AM - 3:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Welch can be reached at (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.T.B./Examiner, Art Unit 2143
/JENNIFER N WELCH/Supervisory Patent Examiner, Art Unit 2143