DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because they are directed to an abstract idea without significantly more.
Regarding claim 1:
Step 1: Is the claim directed to one of the four statutory categories?
Yes, the claim is directed to a method.
Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?
Yes. The limitation: “and performing a task based on the inference information,” is directed to a mental process of judgment under MPEP 2106.04(a)(2)(III).
Step 2A, prong 2: Do the additional elements integrate into a practical application?
No. The limitations: “receiving encoded data at a first device from a second device separate from the first device, wherein the encoded data is generated using an artificial intelligence (AI) encoder model included in the second device based on sensor data collected by at least one sensor included in the second device; providing the encoded data to an AI inference model to obtain inference information;” are directed to mere data gathering under MPEP 2106.05(g).
Further, the limitation: “wherein the AI encoder model and the AI inference model are jointly trained based on an output of an AI teacher model” is directed to field of use under MPEP 2106.05(h).
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No. The limitations: “receiving encoded data at a first device from a second device separate from the first device, wherein the encoded data is generated using an artificial intelligence (AI) encoder model included in the second device based on sensor data collected by at least one sensor included in the second device; providing the encoded data to an AI inference model to obtain inference information;” are directed to the well-understood, routine, and conventional activity of “Receiving or transmitting data over a network” under MPEP 2106.05(d).
Further, the limitation: “wherein the AI encoder model and the AI inference model are jointly trained based on an output of an AI teacher model” is directed to field of use under MPEP 2106.05(h).
Regarding claim 2:
Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?
Yes, the claim is dependent on claim 1.
Step 2A, prong 2: Do the additional elements integrate into a practical application?
No. The limitation: “wherein a size of the AI inference model is smaller than a size of the AI teacher model” is directed to field of use under MPEP 2106.05(h).
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No. The limitation: “wherein a size of the AI inference model is smaller than a size of the AI teacher model” is directed to field of use under MPEP 2106.05(h).
Regarding claim 3:
Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?
Yes, the claim is dependent on claim 1.
Step 2A, prong 2: Do the additional elements integrate into a practical application?
No. The limitation: “wherein a size of the encoded data is smaller than a size of the sensor data” is directed to field of use under MPEP 2106.05(h).
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No. The limitation: “wherein a size of the encoded data is smaller than a size of the sensor data” is directed to field of use under MPEP 2106.05(h).
Regarding claim 4:
Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?
Yes, the claim is dependent on claim 1.
Step 2A, prong 2: Do the additional elements integrate into a practical application?
No. The limitations: “obtaining a plurality of pieces of encoded data at the first device from a plurality of second devices which are separate from the first device, wherein the plurality of pieces of encoded data are generated using a plurality of AI encoder models included in the plurality of second devices; and combining the plurality of pieces of encoded data with the encoded data to generate aggregated data, wherein the inference information is generated by the AI inference model based on the aggregated data, and wherein the plurality of AI encoder models are jointly trained with the AI encoder model and the AI inference model based on the output of the AI teacher model” are directed to mere data gathering under MPEP 2106.05(g).
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No. The limitations: “obtaining a plurality of pieces of encoded data at the first device from a plurality of second devices which are separate from the first device, wherein the plurality of pieces of encoded data are generated using a plurality of AI encoder models included in the plurality of second devices; and combining the plurality of pieces of encoded data with the encoded data to generate aggregated data, wherein the inference information is generated by the AI inference model based on the aggregated data, and wherein the plurality of AI encoder models are jointly trained with the AI encoder model and the AI inference model based on the output of the AI teacher model” are directed to the well-understood, routine, and conventional activity of “Receiving or transmitting data over a network” under MPEP 2106.05(d).
Regarding claim 5:
Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?
Yes, the limitation: “wherein the encoded data is quantized by the AI encoder model before being transmitted to the first device” is directed to a mathematical concept under MPEP 2106.04(a)(2)(I).
Regarding claim 6:
Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?
Yes, the claim is dependent on claim 1.
Step 2A, prong 2: Do the additional elements integrate into a practical application?
No. The limitation: “wherein the second device comprises a surveillance camera as the at least one sensor, and wherein the task comprises detecting at least one of an object and an event observed by the surveillance camera” is directed to field of use under MPEP 2106.05(h).
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No. The limitation: “wherein the second device comprises a surveillance camera as the at least one sensor, and wherein the task comprises detecting at least one of an object and an event observed by the surveillance camera” is directed to field of use under MPEP 2106.05(h).
Regarding claim 7:
Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?
Yes, the claim is dependent on claim 1.
Step 2A, prong 2: Do the additional elements integrate into a practical application?
No. The limitation: “wherein the second device comprises a wearable device, and wherein the task comprises detecting a health event associated with a user wearing the wearable device” is directed to field of use under MPEP 2106.05(h).
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No. The limitation: “wherein the second device comprises a wearable device, and wherein the task comprises detecting a health event associated with a user wearing the wearable device” is directed to field of use under MPEP 2106.05(h).
Regarding claim 8:
Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?
Yes, the claim is dependent on claim 1.
Step 2A, prong 2: Do the additional elements integrate into a practical application?
No. The limitation: “wherein the second device comprises an internet of things (IoT) device, and wherein the encoded data is received using massive machine-type communications (mMTC)” is directed to field of use under MPEP 2106.05(h).
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No. The limitation: “wherein the second device comprises an internet of things (IoT) device, and wherein the encoded data is received using massive machine-type communications (mMTC)” is directed to field of use under MPEP 2106.05(h).
Regarding claim 9:
Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?
Yes, the claim is dependent on claim 1.
Step 2A, prong 2: Do the additional elements integrate into a practical application?
No. The limitation: “wherein the AI inference model comprises a first neural network model, and wherein the AI teacher model comprises at least one from among a second neural network model, a support vector machine (SVM) model, and an ensemble model” is directed to field of use under MPEP 2106.05(h).
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No. The limitation: “wherein the AI inference model comprises a first neural network model, and wherein the AI teacher model comprises at least one from among a second neural network model, a support vector machine (SVM) model, and an ensemble model” is directed to field of use under MPEP 2106.05(h).
Regarding claim 10:
Step 1: Is the claim directed to one of the four statutory categories?
Yes, the claim is directed to a machine.
Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?
Yes. The limitation: “and perform a task based on the inference information,” is directed to a mental process of judgment under MPEP 2106.04(a)(2)(III).
Step 2A, prong 2: Do the additional elements integrate into a practical application?
No. The limitation: “receive encoded data at a first device from a second device separate from the first device, wherein the encoded data is generated using an artificial intelligence (AI) encoder model included in the second device based on sensor data collected by at least one sensor included in the second device, provide the encoded data to an AI inference model to obtain inference information,” is directed to mere data gathering under MPEP 2106.05(g).
Further, the limitation: “wherein the AI encoder model and the AI inference model are jointly trained based on an output of an AI teacher model” is directed to field of use under MPEP 2106.05(h).
Further, the limitation: “at least one memory storing computer-readable instructions; and at least one processor configured to execute the computer-readable instructions to” is directed to insignificant extra-solution activity under MPEP 2106.05(g).
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No. The limitation: “receive encoded data at a first device from a second device separate from the first device, wherein the encoded data is generated using an artificial intelligence (AI) encoder model included in the second device based on sensor data collected by at least one sensor included in the second device, provide the encoded data to an AI inference model to obtain inference information,” is directed to the well-understood, routine, and conventional activity of “Receiving or transmitting data over a network” under MPEP 2106.05(d).
Further, the limitation: “wherein the AI encoder model and the AI inference model are jointly trained based on an output of an AI teacher model” is directed to field of use under MPEP 2106.05(h).
Further, the limitation: “at least one memory storing computer-readable instructions; and at least one processor configured to execute the computer-readable instructions to” is directed to generic computing components under MPEP 2106.05(d).
Claim 11 is rejected for the same reasons as claim 2.
Claim 12 is rejected for the same reasons as claim 3.
Claim 13 is rejected for the same reasons as claim 4.
Claim 14 is rejected for the same reasons as claim 5.
Claim 15 is rejected for the same reasons as claim 6.
Claim 16 is rejected for the same reasons as claim 7.
Claim 17 is rejected for the same reasons as claim 8.
Claim 18 is rejected for the same reasons as claim 9.
Claim 19 is rejected for the same reasons as claim 1.
Regarding claim 20:
Step 2A, prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?
Yes, the claim is dependent on claim 19.
Step 2A, prong 2: Do the additional elements integrate into a practical application?
No. The limitation: “wherein a size of the AI inference model is smaller than a size of the AI teacher model, and wherein a size of the encoded data is smaller than a size of the sensor data” is directed to field of use under MPEP 2106.05(h).
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No. The limitation: “wherein a size of the AI inference model is smaller than a size of the AI teacher model, and wherein a size of the encoded data is smaller than a size of the sensor data” is directed to field of use under MPEP 2106.05(h).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-6, 8-15, and 17-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US Pre-Grant Publication 2021/0241108 (Chai et al.; Chai).
Regarding claim 1 and analogous claim 19:
1. A method of managing sensor data, the method comprising: receiving encoded data at a first device from a second device separate from the first device,
(Chai, ¶0066)
“FIG. 17 illustrates an exemplary hierarchy of computing nodes in accordance with the disclosed embodiments. This hierarchy includes a number of basic runtime engines (REs) 1701-1708, which can be located in edge devices, such as motion sensors, cameras or microphones [i.e. A method of managing sensor data, the method comprising:]. These basic REs 1701-1708 assume the existence of an associated intermediate or high-end device capable of delivering DNN models to basic REs 1701-1708 and collecting log information from basic REs 1701-1708 [i.e. receiving encoded data at a first device from a second device separate from the first device,].”
2. wherein the encoded data is generated using an artificial intelligence (AI) encoder model included in the second device based on sensor data collected by at least one sensor included in the second device;
(Chai, ¶0096)
“If an erroneous inference is detected (e.g. via user input or other DNN inferences), then the erroneous pathway indicates the visual features that produces the erroneous inference results. A comparison of the erroneous pathway against the activation heat map can show locations where the erroneous pathway differs from the statistical distribution of pathways in the activation heat map. To improve DNN accuracy, we can generate additional training data specifically to correct the area where there is a difference in the pathways (e.g. against the heat map) [i.e. included in the second device based on sensor data collected by at least one sensor included in the second device;]. The additional training data can be synthesized using a generative adversarial network (GAN) training methodology [i.e. wherein the encoded data is generated using an artificial intelligence (AI) encoder model].”
3. providing the encoded data to an AI inference model to obtain inference information;
(Chai, ¶0097)
“Hence, the above-described profiling process and the generation of the activation heat map essentially produces an explanation of how the DNN produces an inference result. The process in comparing the erroneous pathways essentially produces an explanation of how the DNN is not robust to that input data set [i.e. providing the encoded data to an AI inference model to obtain inference information;].”
4. and performing a task based on the inference information, wherein the AI encoder model and the AI inference model are jointly trained based on an output of an AI teacher model.
(Chai, ¶0137)
“FIG. 19 illustrates an example of dynamic runtime execution of the DNN system…”
(Chai, ¶0139)
“The context-specific models 1906 can be generated using a knowledge distillation process in a distill workflow [i.e. and performing a task based on the inference information,]. In the distill workflow, the original model, which was developed for the cloud, serves as a teacher model, while the context-specific models 1908 are student models that learn from the teacher model. By using a distillation-loss parameter within a training loss function, the training process for a student model can be guided to learn similar representations to those in the teacher model [i.e. wherein the AI encoder model and the AI inference model are jointly trained based on an output of an AI teacher model].”
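For illustration only, the distillation-loss training that Chai describes in the quoted passage can be sketched as follows. This is a generic sketch of knowledge distillation, not Chai's or applicant's implementation; the function names, temperature, and logit values are hypothetical.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Softened softmax: a higher temperature spreads probability mass,
    # exposing more of the teacher's learned structure to the student.
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between the teacher's and student's softened output
    # distributions -- the "distillation-loss parameter" that guides the
    # student to learn representations similar to the teacher's.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return float(np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student))))

teacher = np.array([4.0, 1.0, 0.5])  # hypothetical teacher (cloud model) logits
student = np.array([3.5, 1.2, 0.4])  # hypothetical student (edge model) logits
loss = distillation_loss(student, teacher)
```

Minimizing this term during student training, typically alongside an ordinary task loss, is what drives the smaller model toward the teacher's outputs.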
Regarding claim 10:
1. A device for managing sensor data, the device comprising: at least one memory storing computer-readable instructions;
(Chai, ¶0045)
“The computer-readable storage medium includes, but is not limited to, volatile memory, non-volatile memory, magnetic and optical storage devices such as disk drives, magnetic tape, CDs (compact discs), DVDs (digital versatile discs or digital video discs), or other media capable of storing computer-readable media now known or later developed [i.e. at least one memory storing computer-readable instructions;].”
2. and at least one processor configured to execute the computer-readable instructions to:
(Chai, ¶0046)
“When a computer system reads and executes the code and/or data stored on the computer-readable storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the computer-readable storage medium [i.e. and at least one processor configured to execute the computer-readable instructions to:].”
3. receive encoded data at a first device from a second device separate from the first device,
(Chai, ¶0066)
“FIG. 17 illustrates an exemplary hierarchy of computing nodes in accordance with the disclosed embodiments. This hierarchy includes a number of basic runtime engines (REs) 1701-1708, which can be located in edge devices, such as motion sensors, cameras or microphones. These basic REs 1701-1708 assume the existence of an associated intermediate or high-end device capable of delivering DNN models to basic REs 1701-1708 and collecting log information from basic REs 1701-1708 [i.e. receive encoded data at a first device from a second device separate from the first device,].”
4. wherein the encoded data is generated using an artificial intelligence (AI) encoder model included in the second device based on sensor data collected by at least one sensor included in the second device,
(Chai, ¶0096)
“If an erroneous inference is detected (e.g. via user input or other DNN inferences), then the erroneous pathway indicates the visual features that produces the erroneous inference results. A comparison of the erroneous pathway against the activation heat map can show locations where the erroneous pathway differs from the statistical distribution of pathways in the activation heat map. To improve DNN accuracy, we can generate additional training data specifically to correct the area where there is a difference in the pathways (e.g. against the heat map) [i.e. included in the second device based on sensor data collected by at least one sensor included in the second device;]. The additional training data can be synthesized using a generative adversarial network (GAN) training methodology [i.e. wherein the encoded data is generated using an artificial intelligence (AI) encoder model].”
5. provide the encoded data to an AI inference model to obtain inference information,
(Chai, ¶0097)
“Hence, the above-described profiling process and the generation of the activation heat map essentially produces an explanation of how the DNN produces an inference result. The process in comparing the erroneous pathways essentially produces an explanation of how the DNN is not robust to that input data set [i.e. providing the encoded data to an AI inference model to obtain inference information;].”
6. and perform a task based on the inference information, wherein the AI encoder model and the AI inference model are jointly trained based on an output of an AI teacher model.
(Chai, ¶0139)
“The context-specific models 1906 can be generated using a knowledge distillation process in a distill workflow [i.e. and performing a task based on the inference information,]. In the distill workflow, the original model, which was developed for the cloud, serves as a teacher model, while the context-specific models 1908 are student models that learn from the teacher model. By using a distillation-loss parameter within a training loss function, the training process for a student model can be guided to learn similar representations to those in the teacher model [i.e. wherein the AI encoder model and the AI inference model are jointly trained based on an output of an AI teacher model].”
Regarding claim 2 and analogous claim 11:
Chai teaches:
1. wherein a size of the AI inference model is smaller than a size of the AI teacher model.
(Chai, ¶0139)
“In the distill workflow, the original model, which was developed for the cloud, serves as a teacher model, while the context-specific models 1908 are student models that learn from the teacher model.”
(Chai, ¶0139)
“Note that a context-specific model 1908 can be configured to have fewer parameters (e.g. less width or depth of layers) than the original model 1902, so that the context-specific model 1908 can run within constraints of the target runtime parameters [i.e. wherein a size of the AI inference model is smaller than a size of the AI teacher model].”
Regarding claim 3 and analogous claim 12:
Chai teaches:
1. wherein a size of the encoded data is smaller than a size of the sensor data.
(Chai, ¶0138)
“In order to run on the smartphone, the model can go through a build process 1906 and a run process 1910 to condition it for dynamic runtime execution. The build process 1906 includes workflows for distill, compress, and compile operations, to optimize the DNN model [i.e. wherein a size of the encoded data is smaller than a size of the sensor data].”
Regarding claim 4 and analogous claim 13:
Chai teaches:
1. obtaining a plurality of pieces of encoded data at the first device from a plurality of second devices which are separate from the first device, wherein the plurality of pieces of encoded data are generated using a plurality of AI encoder models included in the plurality of second devices;
(Chai, ¶0097)
“Hence, the above-described profiling process and the generation of the activation heat map essentially produces an explanation of how the DNN produces an inference result. The process in comparing the erroneous pathways essentially produces an explanation of how the DNN is not robust to that input data set. The process in producing additional data, through data collection or synthesis using GAN [i.e. obtaining a plurality of pieces of encoded data at the first device from a plurality of second devices which are separate from the first device], is essentially an adversarial training approach to make the DNN more robust based on profiling process [i.e. wherein the plurality of pieces of encoded data are generated using a plurality of AI encoder models included in the plurality of second devices;].”
Examiner notes that a GAN involves two neural networks, a generator and a discriminator. See attached NPL: Generative adversarial network, Wikipedia.
2. and combining the plurality of pieces of encoded data with the encoded data to generate aggregated data, wherein the inference information is generated by the AI inference model based on the aggregated data,
(Chai, ¶0097)
“The process in producing additional data, through data collection or synthesis using GAN, is essentially an adversarial training approach to make the DNN more robust based on profiling process [i.e. and combining the plurality of pieces of encoded data with the encoded data to generate aggregated data, wherein the inference information is generated by the AI inference model based on the aggregated data].”
3. and wherein the plurality of AI encoder models are jointly trained with the AI encoder model and the AI inference model based on the output of the AI teacher model.
(Chai, ¶0097)
“The process in producing additional data, through data collection or synthesis using GAN, is essentially an adversarial training approach to make the DNN more robust based on profiling process [i.e. and wherein the plurality of AI encoder models are jointly trained with the AI encoder model and the AI inference model based on the output of the AI teacher model].”
Regarding claim 5 and analogous claim 14:
Chai teaches:
1. wherein the encoded data is quantized by the AI encoder model before being transmitted to the first device.
(Chai, ¶0110)
“Quantization module 112 quantizes the values for DNN parameters to reduce the memory footprint.”
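For illustration only, the kind of parameter quantization described in the quoted passage can be sketched as follows. This is a generic affine (uint8) quantization sketch with hypothetical weight values, not the specific mechanism of Chai's quantization module 112 or of the claims.

```python
import numpy as np

def quantize_uint8(params):
    # Affine quantization: map float32 values onto 256 integer levels,
    # storing only uint8 codes plus a scale and an offset. This reduces
    # the memory footprint to roughly one quarter of float32 storage.
    lo, hi = float(params.min()), float(params.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    q = np.round((params - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize_uint8(q, scale, lo):
    # Approximate reconstruction of the original float parameters.
    return q.astype(np.float32) * scale + lo

# Hypothetical DNN parameter values.
weights = np.linspace(-1.0, 1.0, 1000).astype(np.float32)
q, scale, lo = quantize_uint8(weights)
restored = dequantize_uint8(q, scale, lo)
```

The uint8 buffer occupies one byte per parameter versus four for float32, and the round-trip error is bounded by the quantization step size.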
Regarding claim 6 and analogous claim 15:
Chai teaches:
1. wherein the second device comprises a surveillance camera as the at least one sensor, and wherein the task comprises detecting at least one of an object and an event observed by the surveillance camera.
(Chai, ¶0150)
“FIG. 20 illustrates another example of dynamic runtime execution of the DNN model. Similar to the example in FIG. 19, in order to run on the edge devices, the model can go through a build process 2006 and a run process 2010 to condition the model for dynamic runtime execution... The context-specific model 2008 for person detection can run on a video doorbell edge device [i.e. wherein the second device comprises a surveillance camera as the at least one sensor], while the context-specific model 2008 for face recognition can run on the network edge (e.g., a content-delivery-network or CDN) [i.e. wherein the task comprises detecting at least one of an object and an event observed by the surveillance camera].”
Examiner interprets the object as a person and the event as their presence in the vicinity of the video doorbell.
Regarding claim 8 and analogous claim 17:
Chai teaches:
1. wherein the second device comprises an internet of things (IoT) device, and wherein the encoded data is received using massive machine-type communications (mMTC).
(Chai, ¶0155)
“Having models move to different locations in the hierarchy of computing nodes can help track objects in motion. For example, if a tracking application has detected a blue sedan in the proximity of IOT sensors in the hierarchy of computing nodes [i.e. and wherein the encoded data is received using massive machine-type communications (mMTC).], then a specific model for blue sedans, generated in the build process 2006 from an original model 2002 [i.e. wherein the second device comprises an internet of things (IoT) device,], can be deployed in the run process 2010, as described previously.”
Regarding claim 9 and analogous claim 18:
Chai teaches:
1. wherein the AI inference model comprises a first neural network model, and wherein the AI teacher model comprises at least one from among a second neural network model, a support vector machine (SVM) model, and an ensemble model.
(Chai, ¶0097)
“The process in producing additional data, through data collection or synthesis using GAN, is essentially an adversarial training approach to make the DNN more robust based on profiling process [i.e. wherein the AI inference model comprises a first neural network model, and wherein the AI teacher model comprises at least one from among a second neural network model, a support vector machine (SVM) model, and an ensemble model].”
Examiner notes that a GAN teaches the use of two neural network models, the generator and the discriminator. See attached NPL: Generative adversarial network, Wikipedia.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over US Pre-Grant Publication 2021/0241108 (Chai et al.; Chai) in view of US Pre-Grant Publication 2020/0357513 (Katra et al.; Katra).
Regarding claim 7 and analogous claim 16:
Chai does not explicitly teach:
1. wherein the second device comprises a wearable device, and wherein the task comprises detecting a health event associated with a user wearing the wearable device.
Katra teaches:
1. wherein the second device comprises a wearable device, and wherein the task comprises detecting a health event associated with a user wearing the wearable device.
(Katra, ¶0066)
“Computing device(s) 2 may interface with and/or monitor medical device(s) 6, for example, by imaging the implantation site of the medical device(s) 6, in accordance with one or more techniques of this disclosure. In addition, computing device(s) 2 may interrogate medical device(s) 6 to obtain data from medical device(s) 6, such as performance data, historical data stored to memory, battery strength of the medical device(s) 6, impedance, pulse width, pacing percentage, pulse amplitude, pacing mode, internal device temperature, etc. In some examples, computing device(s) 2 may perform an interrogation subsession with medical device(s) 6 by establishing a wireless communication with one or more of the medical device(s) 6 [i.e. and wherein the task comprises detecting a health event associated with a user wearing the wearable device.]. In some instances, medical device(s) 6 may or may not include an IMD. In an example, computing device(s) 2 interrogate the memory of a wearable medical device 6 in order to determine device operating parameters as the interrogation data [i.e. wherein the second device comprises a wearable device,].”
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to modify Chai with Katra. The motivation is to apply the well-known techniques of Chai to the wearable medical devices and associated UIs of Katra, as it would have been obvious to try applying a generative adversarial network to a wearable device for a patient: “Implantation infections are estimated to occur in about 0.5% of IMD implants and about 2% of IMD replacements. Early diagnosis of IMD infections can help drive effective antibiotic therapy or device removals to treat the infection” (Katra, ¶0044). Moreover, “This non-trivial development has resulted in the UIs described herein, which are likely to provide significant cognitive and ergonomic efficiencies and advantages over previous systems going forward. The interactive and dynamic UIs include improved human-computer interactions that may provide, for a user, reduced mental workloads/burdens, improved decision-making, reduced work stress, etc.” (Katra, ¶0063).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL JUSTIN BREENE whose telephone number is (571) 272-6320. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J Huntley, can be reached at 303-297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/P.J.B./ Examiner, Art Unit 2129
/MICHAEL J HUNTLEY/Supervisory Patent Examiner, Art Unit 2129