Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1-3 and 5-15 are currently pending.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on October 21, 2025 has been entered.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:
“A receiver,” recited in Claims 1-3 and 5-11;
“A processing unit,” recited in Claims 1-3 and 5-11;
“A natural language processing unit,” recited in Claims 1-3 and 5-15;
“A transmitter,” recited in Claims 1-3 and 5-11;
“An interface,” recited in Claim 11;
“A convolutional neuronal network processing unit,” recited in Claims 1-3 and 5-15;
“A message-to-audio converter,” recited in Claim 6.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed functions so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3 and 5-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1
Claims 1-3 and 5-15 are within the four statutory categories. Claims 1-3 and 5-11 are drawn to systems for patient health messaging, which are within the four statutory categories (i.e. machine). Claims 12-15 are drawn to methods for patient health messaging, which are within the four statutory categories (i.e. process).
Prong 1 of Step 2A
Claim 1, which is representative of the inventive concept, recites: A dialogue-based medical decision system comprising:
a receiver for receiving audio dialogue, video data, and data associated with a health state of the patient sent from a device associated with the patient;
a processing unit comprising a natural language processing unit and a convolutional neuronal network processing unit, the processing unit adapted to:
recognize, via the natural language processing unit, whether the patient suffers of pain based on wording of a spoken sequence of the audio dialogue; and
determine, via the convolutional neuronal network processing unit, that an implant of the patient is malfunctioning based on a motion sequence of the patient; and
select a message associated with the health state of the patient based on the received data and based on stored patient data; and
a transmitter for transmitting the message to the device and/or to another device associated with the patient.
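For illustration of the recited functional flow only, the steps of Claim 1 may be sketched in Python as follows; all identifiers, keyword lists, and message texts are hypothetical and do not correspond to any disclosure of record:

```python
# Hypothetical sketch of the recited processing flow of Claim 1.
# All names and values are illustrative only.

def recognize_pain(spoken_sequence: str) -> bool:
    """NLP step: flag pain based on the wording of the spoken sequence."""
    pain_terms = {"pain", "hurts", "ache", "sore"}
    return any(term in spoken_sequence.lower() for term in pain_terms)

def implant_malfunctioning(motion_sequence: list) -> bool:
    """CNN step (stubbed): classify a motion sequence as abnormal.

    A real system would run a trained convolutional network here;
    this stub simply returns a fixed result.
    """
    return False

def select_message(pain: bool, malfunction: bool, stored_patient_data: dict) -> str:
    """Select a message associated with the patient's health state."""
    if malfunction:
        return "Please contact your physician: possible implant issue."
    if pain:
        return "Pain reported; a follow-up dialogue has been scheduled."
    return "No action required."
```

In this sketch, the receiver and transmitter of the claim correspond to whatever I/O delivers the inputs to, and the selected message from, these functions.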
The underlined limitations as shown above, given the broadest reasonable interpretation, cover the abstract idea of a certain method of organizing human activity because they recite managing personal behavior or relationships or interactions between people (i.e. social activities, teaching, and following rules or instructions – in this case, the steps of receiving audio dialogue, video data, and health data for a patient, recognizing whether the patient is suffering pain from the audio dialogue, determining that a patient implant is malfunctioning based on a motion of the patient, selecting a message based on the received patient health data, and transmitting the selected message to the patient are reasonably interpreted as following rules or instructions for handling patient health data and for notifying patients of conditions), e.g. see MPEP 2106.04(a)(2). Any limitations not identified above as part of the abstract idea are deemed “additional elements,” and will be discussed in further detail below.
Furthermore, the abstract idea for Claims 11, 12, and 14 is identical to the abstract idea for Claim 1, because the only difference between Claims 1, 11, 12, and 14 is that Claim 1 recites executing the abstract idea on a dialogue-based medical decision system, whereas Claim 11 recites executing the abstract idea on a patient device in communication with a dialogue-based medical decision system, Claim 12 recites a method that mirrors the functions of Claim 1, and Claim 14 recites a method that mirrors the functions of Claim 11.
Dependent Claims 2-3, 5-10, 13, and 15 include other limitations. For example, Claim 2 recites the contents of the transmitted message; Claim 3 recites the types of data that make up the data associated with the health state of the patient; Claim 5 recites selecting the message from a set of stored dialogues; Claim 6 recites transmitting the message to an audio converter prior to the transmission of the message; Claim 7 recites the transmission of the message as an audio and/or video output; Claim 8 recites a low-latency communication; Claim 9 recites activating the system based on a request and/or a predetermined schedule; Claim 10 recites activating the system based on sensor data and/or periodically; Claim 13 recites performing the steps of the method at least twice; and Claim 15 recites performing the functions of the invention as a computer program on a computer. However, these limitations either only serve to further narrow the abstract idea (and a claim may not preempt abstract ideas, even if the judicial exception is narrow, e.g. see MPEP 2106.04), or do not further narrow the abstract idea and instead only recite additional elements, which will be further addressed below. Hence dependent Claims 2-3, 5-10, 13, and 15 are nonetheless directed towards fundamentally the same abstract idea as independent Claims 1, 11-12, and 14.
Prong 2 of Step 2A
The abstract idea recited in Claims 1, 11, 12, and 14 is not integrated into a practical application because the additional elements (i.e. the non-underlined limitations above – in this case, the receiver, the processing unit, and the devices associated with the patient) amount to no more than limitations which:
amount to mere instructions to apply an exception – for example, the recitation of a computer, which amounts to merely invoking a computer as a tool to perform the abstract idea, e.g. see line 29, pg. 2, through line 7, pg. 3, and line 25, pg. 27, through line 13, pg. 28 of the as-filed Specification, see MPEP 2106.05(f);
generally link the abstract idea to a particular technological environment or field of use – for example, the claim language of the device being associated with the patient and the claim language indicating that the data and the message are health data, which amounts to limiting the abstract idea to the field of healthcare, and the language of the “convolutional neuronal network” processing unit, which amounts to limiting the abstract idea to the field of machine learning, see MPEP 2106.05(h); and/or
add insignificant extra-solution activity to the abstract idea – for example, the recitation of receiving the data from a patient device, which amounts to selecting a particular data source or type of data to be manipulated, see MPEP 2106.05(g).
Additionally, dependent Claims 2-3, 5-10, 13, and 15 include other limitations, but these limitations also amount to no more than mere instructions to apply an exception (e.g. the non-transitory computer-readable medium recited in dependent Claim 15), generally linking the abstract idea to a particular technological environment or field of use (e.g. the types of data recited in dependent Claims 2-3 and 7), and/or adding insignificant extra-solution activity to the abstract idea (e.g. the output of the message recited in dependent Claim 7), and/or do not include any additional elements beyond those already recited in independent Claims 1, 11, 12, and 14, and hence also do not integrate the aforementioned abstract idea into a practical application.
Hence Claims 1-3 and 5-15 do not include additional elements that integrate the judicial exception into a practical application.
Step 2B
Claims 1, 11, 12, and 14 do not include additional elements that are sufficient to amount to “significantly more” than the judicial exception because the additional elements (i.e. the non-underlined limitations above – in this case, the receiver, the processing unit, and the devices associated with the patient), as stated above, are directed towards no more than limitations that amount to mere instructions to apply the exception, generally link the abstract idea to a particular technological environment or field of use, and/or add insignificant extra-solution activity to the abstract idea, wherein the additional elements comprise limitations which:
amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields, as demonstrated by:
The present Specification expressly disclosing that the structural additional elements are well-understood, routine, and conventional in nature:
Line 29, pg. 2, through line 7, pg. 3, and line 25, pg. 27, through line 13, pg. 28 of the as-filed Specification disclose that the additional elements (i.e. the patient device, the receiver, the processing unit, and the transmitter) comprise a plurality of different types of generic computing systems;
Relevant court decisions: The functional limitations interpreted as additional elements are analogized to the following examples of court decisions demonstrating well-understood, routine, and conventional activities, e.g. see MPEP 2106.05(d)(II):
Receiving or transmitting data over a network, e.g. see Intellectual Ventures v. Symantec – similarly, the current invention receives health data of a patient over a network, for example the Internet, e.g. see lines 18-30, pg. 29 of the as-filed Specification;
Electronic recordkeeping, e.g. see Alice Corp v. CLS Bank – similarly, the current invention merely recites the storing of patient data and corresponding messages on a database and/or electronic memory;
Storing and retrieving information in memory, e.g. see Versata Dev. Group, Inc. v. SAP Am., Inc. – similarly, the current invention recites storing patient data in a database and/or electronic memory, and retrieving message data corresponding to the patient data from storage in order to transmit the message data to a patient device;
Dependent Claims 2-3, 5-10, 13, and 15 include other limitations, but none of these limitations are deemed significantly more than the abstract idea because the additional elements recited in the aforementioned dependent claims similarly amount to mere instructions to apply the exception (e.g. the non-transitory computer-readable medium recited in dependent Claim 15), generally link the abstract idea to a particular technological environment or field of use (e.g. the types of data recited in dependent Claims 2-3 and 7), amount to receiving or transmitting data over a network (e.g. the transmitting and output of the message recited in dependent Claim 7), or amount to storing and retrieving information in memory (e.g. the selection of the message from a stored set of dialogues recited in dependent Claim 5), and/or the dependent claims do not recite any additional elements not already recited in independent Claims 1, 11, 12, and 14; hence they also do not amount to “significantly more” than the abstract idea.
Hence, Claims 1-3 and 5-15 do not include any additional elements that amount to “significantly more” than the judicial exception.
Thus, taken alone, the additional elements do not amount to significantly more than the abstract idea identified above. Furthermore, looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually, and there is no indication that the combination of elements improves the functioning of a computer or improves any other technology, and their collective functions merely provide conventional computer implementation.
Therefore, whether taken individually or as an ordered combination, Claims 1-3 and 5-15 are nonetheless rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3 and 6-15 are rejected under 35 U.S.C. 103 as being unpatentable over Kupershmidt (US 2020/0194121) in view of Odessky (US 2014/0236627), further in view of Katra (US 2020/0357513) and Netscher (US 2019/0287376).
Regarding Claim 1, Kupershmidt teaches the following: A dialogue-based medical decision system (The system includes a digital health system, e.g. see Kupershmidt Fig. 1.) comprising:
a receiver for receiving audio dialogue and data associated with a health state of the patient sent from a device associated with the patient (The digital health system includes an interface engine that receives current and historical health data, user audio data recorded from a microphone, and/or event data for a patient, e.g. see Kupershmidt [0021]-[0022] and [0049], wherein the user may interact with the digital health system via a client device, e.g. see Kupershmidt [0021], Fig. 1.);
a processing unit adapted to select a message associated with the health state of the patient based on the received data and based on stored patient data (The interface engine provides personalized questions to the user regarding the user’s health condition, wherein the provided questions are customized based on previous questions answered by the user, and wherein the user information is stored in a user data store, e.g. see Kupershmidt [0022]-[0023]. Additionally, the interface engine may also receive physiological event data from a user, and in response perform an analysis on the event data, and provide the user and/or a healthcare provider of the user with the results of the analysis, for example data describing the physiological condition of the user in a report, e.g. see Kupershmidt [0049].); and
a transmitter for transmitting the message to the device and/or to another device associated with the patient (The questions and/or report may be provided (i.e. transmitted) to the user and/or healthcare provider of the user (i.e. another device associated with the patient), e.g. see Kupershmidt [0014] and [0049], Figs. 6A-6D.).
But Kupershmidt does not teach and Odessky teaches the following:
wherein the processing unit comprises a natural language processing unit, and is further adapted to recognize, via the natural language processing unit, whether the patient suffers of pain based on the wording of a spoken sequence of the audio dialogue (The system includes an interactive voice response system (IVR) that includes a natural language engine including a voice-to-text functionality that enables a user to verbally communicate with the system regarding the user’s condition, for example whether the user is feeling pain, e.g. see Odessky [0027].).
Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify Kupershmidt to incorporate the natural language engine to recognize patient speech as taught by Odessky in order to enable the system to quickly and conveniently determine a medical condition of a patient and arrange for communication with a suitable medical provider, e.g. see Odessky [0002].
But the combination of Kupershmidt and Odessky does not teach and Katra teaches the following:
wherein the receiver also receives video data (The system includes edge devices that receive video data for a patient from computing devices, for example showing gait data, e.g. see Katra [0107], [0203], and [0258].);
the system further comprising a neuronal network processing unit (The system includes an AI engine that utilizes artificial neural networks to analyze patient data, e.g. see Katra [0103] and [0107].); and
the neuronal network processing unit adapted to determine, via the neuronal network processing unit, that an implant of the patient is malfunctioning based on a motion sequence of the patient (The AI engine is trained to analyze the gait of a patient (i.e. a motion sequence of the patient) to determine the presence of an abnormality (i.e. a malfunction) with the patient and/or a patient medical device, for example an implanted medical device (IMD), e.g. see Katra [0107] and [0203].).
Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify the combination of Kupershmidt and Odessky to incorporate the video analysis to detect a problem with the IMD as taught by Katra in order to accurately determine whether any complications have arisen with the patient or the device, e.g. see Katra [0008] and [0207].
But the combination of Kupershmidt, Odessky, and Katra does not teach and Netscher teaches the following:
wherein the neuronal network processing unit is a convolutional neuronal network processing unit (The system obtains video data, and transmits the video data through a convolutional neural network in order to detect events, e.g. see Netscher [0012].).
Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify the combination of Kupershmidt, Odessky, and Katra to incorporate the convolutional neural network as taught by Netscher in order to provide a practical way to analyze high-dimensional data such as image data from cameras to search for events, e.g. see Netscher [0008]-[0011].
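For context on the “convolutional” operation referenced in the Netscher combination above, a convolutional network applies learned kernels in a sliding window across an input such as a video frame. A minimal single-channel example (illustrative only, using numpy; the function name and the cross-correlation-style implementation, as is conventional in CNN libraries, are the Examiner's own and are not drawn from any reference of record) is:

```python
import numpy as np

def conv2d(frame: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2-D sliding-window convolution of one frame with one kernel.

    Implemented as cross-correlation (no kernel flip), as is
    conventional in CNN libraries.
    """
    kh, kw = kernel.shape
    h, w = frame.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output element is the sum of an elementwise product
            # between the kernel and the window it currently covers.
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    return out
```

A trained network stacks many such kernels with nonlinearities to map frame sequences to event labels (e.g. an abnormal gait).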
Regarding Claim 2, the combination of Kupershmidt, Odessky, Katra, and Netscher teaches the limitations of Claim 1, and Kupershmidt further teaches the following:
The dialogue-based medical decision system according to claim 1, wherein the message transmitted to the device and/or to the another device includes the associated health state of the patient (The system provides additional questions to a user, wherein the user provides answers to the questions, e.g. see Kupershmidt [0022]-[0023]. Additionally, the system may generate a plurality of event reports, wherein the reports shift over time according to the patient’s changing condition (i.e. the health state of the patient), e.g. see Kupershmidt [0050].).
Regarding Claim 3, the combination of Kupershmidt, Odessky, Katra, and Netscher teaches the limitations of Claim 1, and Kupershmidt further teaches the following:
The dialogue-based medical decision system according to claim 1, wherein the data associated with the health state of the patient includes audio and/or visual data and the system is further adapted to transmit the message comprising audio and/or visual data (The data provided by the user may be verbal (i.e. audio data) and/or text (i.e. visual data), e.g. see Kupershmidt [0021]-[0022], and the data in the report (i.e. the message) may be in the form of a summary report describing the user’s health trajectories (i.e. visual data), e.g. see Kupershmidt [0049].).
Regarding Claim 6, the combination of Kupershmidt, Odessky, Katra, and Netscher teaches the limitations of Claim 1, and Kupershmidt further teaches the following:
The dialogue-based medical decision system according to claim 1, wherein the system is configured to transmit the message to a message-to-audio-converter prior to transmitting the message to the device and/or the another device (The interface engine of the digital health system may communicate with a user in audio form using a text-to-speech algorithm, e.g. see Kupershmidt [0021].).
Regarding Claim 7, the combination of Kupershmidt, Odessky, Katra, and Netscher teaches the limitations of Claim 1, and Kupershmidt further teaches the following:
The dialogue-based medical decision system according to claim 1, wherein the system is adapted to further transmit information to enable an output of the message by the device and/or by the another device as an audio output and/or as a visual output (The interface engine of the digital health system may communicate with a user in audio form using a text-to-speech algorithm, e.g. see Kupershmidt [0021].).
Regarding Claim 8, the combination of Kupershmidt, Odessky, Katra, and Netscher teaches the limitations of Claim 1, and Kupershmidt further teaches the following:
The dialogue-based medical decision system according to claim 1, wherein the receiver and/or transmitter are adapted to communicate based on a low-latency communication system (Given the broadest reasonable interpretation, a “low-latency” communication system may be interpreted as a WiFi, LTE, 5G, NFC, Bluetooth, ethernet, USB, serial, FireWire, and/or HDMI communication system, and/or any communication system “for which an operator does not perceive any waiting time in between the transmission of a message and the reception of a respective response, similarly as in a phone call,” e.g. see line 28, pg. 6, through line 4, pg. 7, and line 25, pg. 22 through line 1, pg. 23 of the as-filed Specification. The client devices may communicate with the digital health system utilizing technologies such as Bluetooth or WiFi, e.g. see Kupershmidt [0018].).
Regarding Claim 9, the combination of Kupershmidt, Odessky, Katra, and Netscher teaches the limitations of Claim 1, and Kupershmidt further teaches the following:
The dialogue-based medical decision system according to claim 1, wherein the system is adapted to be activated based at least in part on a request from the device and/or the another device and/or based on a predetermined schedule (The interface engine of the digital health system receives user audio data and/or event data from the client device, and in response to the received data provides the user with the questions and/or report regarding the patient health, e.g. see Kupershmidt [0021]-[0023] and [0049] – that is, the user device submitting the audio and/or event data is interpreted as the request triggering the functions of the digital health system.).
Regarding Claim 10, the combination of Kupershmidt, Odessky, Katra, and Netscher teaches the limitations of Claim 1, and Kupershmidt further teaches the following:
The dialogue-based medical decision system according to claim 1, wherein the system is automatically activated periodically and/or based on sensor data of one or more sensors associated with the patient (The interface engine of the digital health system receives user data from sensors in communication with the client device, and in response to the received data provides the user with the questions and/or report regarding the patient health, e.g. see Kupershmidt [0015], [0021]-[0023], and [0049] – that is, the user device submitting the data from the sensors is interpreted as the trigger activating the functions of the digital health system.).
Regarding Claim 11, Kupershmidt teaches the following: A device associated with a patient (The system includes a client device, e.g. see Kupershmidt Fig. 1.), comprising:
a transmitter configured to transmit audio dialogue and data associated with a health state of the patient to a dialogue-based medical decision system (The client device transmits sensor data, user audio data recorded from a microphone, and/or event data for a patient to a digital health system (i.e. a dialogue-based medical decision system), e.g. see Kupershmidt [0015], [0021]-[0022], and [0049], Fig. 1.);
a receiver configured to receive, from the dialogue-based medical decision system, a message associated with the health state of the patient based on the transmitted data and based on patient data stored by the dialogue-based medical decision system (The digital health system includes an interface engine that provides personalized questions to the user regarding the user’s health condition, wherein the provided questions are customized based on previous questions answered by the user, and wherein the user information is stored in a user data store, e.g. see Kupershmidt [0022]-[0023]. Additionally, the interface engine may also receive physiological event data from a user, and in response perform an analysis on the event data, and provide the user and/or a healthcare provider of the user with the results of the analysis, for example data describing the physiological condition of the user in a report, e.g. see Kupershmidt [0049].); and
an interface adapted to indicate the received message to the patient (The questions and/or report may be provided (i.e. indicated) to the user and/or healthcare provider of the user (i.e. another device associated with the patient), e.g. see Kupershmidt [0014] and [0049], Figs. 6A-6D.).
But Kupershmidt does not teach and Odessky teaches the following:
a processing unit comprising a natural language processing unit, the processing unit adapted to recognize, via a natural language processing unit, whether the patient suffers of pain based on wording of a spoken sequence of the audio dialogue (The system includes an interactive voice response system (IVR) that includes a natural language engine including a voice-to-text functionality that enables a user to verbally communicate with the system regarding the user’s condition, for example whether the user is feeling pain, e.g. see Odessky [0027].).
Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify Kupershmidt to incorporate the natural language engine to recognize patient speech as taught by Odessky in order to enable the system to quickly and conveniently determine a medical condition of a patient and arrange for communication with a suitable medical provider, e.g. see Odessky [0002].
But the combination of Kupershmidt and Odessky does not teach and Katra teaches the following:
wherein the transmitter also transmits video data (The system includes edge devices that receive video data for a patient from computing devices, for example showing gait data, e.g. see Katra [0107], [0203], and [0258].);
the processing unit further comprising a neuronal network processing unit (The system includes an AI engine that utilizes artificial neural networks to analyze patient data, e.g. see Katra [0103] and [0107].); and
the neuronal network processing unit adapted to determine, via the neuronal network processing unit, that an implant of the patient is malfunctioning based on a motion sequence of the patient (The AI engine is trained to analyze the gait of a patient (i.e. a motion sequence of the patient) to determine the presence of an abnormality (i.e. a malfunction) with the patient and/or a patient medical device, for example an implanted medical device (IMD), e.g. see Katra [0107] and [0203].).
Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify the combination of Kupershmidt and Odessky to incorporate the video analysis to detect a problem with the IMD as taught by Katra in order to accurately determine whether any complications have arisen with the patient or the device, e.g. see Katra [0008] and [0207].
But the combination of Kupershmidt, Odessky, and Katra does not teach and Netscher teaches the following:
wherein the neuronal network processing unit is a convolutional neuronal network processing unit (The system obtains video data, and transmits the video data through a convolutional neural network in order to detect events, e.g. see Netscher [0012].).
Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify the combination of Kupershmidt, Odessky, and Katra to incorporate the convolutional neural network as taught by Netscher in order to provide a practical way to analyze high-dimensional data such as image data from cameras to search for events, e.g. see Netscher [0008]-[0011].
Regarding Claims 12 and 14, the limitations of Claims 12 and 14 are substantially similar to those claimed in Claims 1 and 11, respectively, the sole difference being that Claim 1 recites a system and Claim 11 recites a patient device, whereas Claims 12 and 14 recite methods that perform the same functions as the system and patient device. Specifically pertaining to Claims 12 and 14, Examiner notes that Kupershmidt teaches both structures (i.e. a system and device) and methods, e.g. see Kupershmidt [0012], and hence the grounds of rejection provided above for Claims 1 and 11 are similarly applied to Claims 12 and 14.
Regarding Claim 13, the combination of Kupershmidt, Odessky, Katra, and Netscher teaches the limitations of Claim 12, and Kupershmidt further teaches the following:
The method according to claim 12, wherein the method steps are adapted to be executed at least two times to converse with the patient and/or a relative of the patient and/or a doctor of the patient (The interface engine of the digital health system provides the user with multiple questions and/or reports regarding the patient health, e.g. see Kupershmidt [0021]-[0023], and [0049].).
Regarding Claim 15, the combination of Kupershmidt, Odessky, Katra, and Netscher teaches the limitations of Claim 12, and Kupershmidt further teaches the following:
A non-transitory computer-readable medium comprising a computer program, comprising instructions which, when executed, cause a computer to perform the steps of the method according to claim 12 (The functions of the system may be embodied as computer program code stored on a computer-readable non-transitory medium, wherein the program code is executed by a computer processor, e.g. see Kupershmidt [0058].).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Kupershmidt, Odessky, Katra, and Netscher in view of Shriberg (US 2019/0385711).
Regarding Claim 5, the combination of Kupershmidt, Odessky, Katra, and Netscher does not teach but Shriberg teaches the following:
The dialogue-based medical decision system according to claim 1, wherein the processing unit is adapted to select the message from a set of stored dialogues (The system stores all dialogue actions that may be taken by the system, including all questions that may be asked of a patient, e.g. see Shriberg [0188] and [0206].).
Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify the combination of Kupershmidt, Odessky, Katra, and Netscher to incorporate storing questions that may be asked of a patient as taught by Shriberg in order to enable the use of the questions as training data for a machine learning algorithm, e.g. see Shriberg [0188].
Response to Arguments
Applicant’s arguments, see Remarks, filed October 21, 2025, with respect to the rejection of Claim 2 under 35 U.S.C. 112(b) have been fully considered and, in combination with the claim amendments, are persuasive. The previous grounds of rejection of Claim 2 under 35 U.S.C. 112(b) have been withdrawn.
Applicant’s arguments, see Remarks, filed October 21, 2025, with respect to the rejections of Claims 1-3 and 5-15 under 35 U.S.C. 101 have been fully considered but are not persuasive.
Applicant first alleges that the claimed invention is patent eligible because it is not directed to an abstract idea, specifically because it performs functions “beyond merely receiving health data, selecting a message, and transmitting the message” in that it incorporates a natural language processing unit that determines whether a patient is suffering pain from audio dialogue, e.g. see pg. 8 of Remarks. Examiner disagrees.
As an initial matter, Examiner notes that, as shown above, the natural language processing unit itself is not interpreted as part of the abstract idea, but is instead considered an additional element and analyzed as such. However, the function of recognizing whether a patient suffers from pain based on the wording of a spoken sequence of audio is properly interpreted as part of the abstract idea of certain methods of organizing human activity, because it is reasonably interpreted as following rules or instructions (e.g. basic syntax analysis) to process the patient health data.
Additionally, regarding the now-amended CNN limitation, Examiner notes that the claim language does not actually claim a CNN, but instead claims a CNN processing unit. Hence, as presently claimed, the “CNN” is merely a descriptor of the “processing unit,” wherein the function of the CNN processing unit is to determine (via a non-explicit algorithm) whether an implant is malfunctioning based on a motion sequence of the patient. The aforementioned language does not require, for example, inputting the motion sequence into a CNN, and obtaining a judgment as to the status of the implant.
However, even assuming, arguendo, that the claim language recites a CNN itself and not just a “CNN processing unit,” as shown above, the CNN processing unit is interpreted as an additional element and not part of the abstract idea. Furthermore, the CNN processing unit is claimed such that it amounts to generally linking the abstract idea to the field of machine learning, as there are no details recited as to how the CNN functions and/or how the CNN differs from any known, off-the-shelf CNN.
Applicant further alleges that the claimed invention is patent eligible because it integrates any abstract idea into a practical application, specifically because the functions of the natural language processing unit and the CNN processing unit, i.e. recognizing that a patient is suffering from pain and detecting a malfunction of an implant, constitute a non-conventional, novel approach to performing a diagnosis or assessing a treatment, e.g. see pgs. 8-9 of Remarks. Examiner disagrees.
The publication of the present Application (PG Pub. US 2024/0185968) (“the ‘968 publication”) discloses that the system “may continuously and iteratively be improved by (further) training the underlying artificial intelligence,” wherein the continuous training may comprise the functions of the natural language processing unit, e.g. see [0049] of the ‘968 publication. Additionally, [0049] of the ‘968 publication discloses a plurality of types of artificial intelligence algorithms. However, [0049] of the ‘968 publication does not disclose any specific unconventional training steps and/or artificial intelligence algorithms. That is, the present invention is not directed to an unconventional, unique method of training, for example, a neural network, and no claim language recites such a feature. Similarly, the claimed limitations do not recite any unconventional and/or unique artificial intelligence algorithms; instead, [0049] of the ‘968 publication merely discloses the high-level use of known artificial intelligence algorithms to process a particular input (i.e. audio dialogue and data associated with a health state of the patient) for a particular purpose (i.e. selecting a message to be transmitted).
Additionally, [0051] of the ‘968 publication recites the use of a convolutional neuronal network with three dimensions, but does not recite any steps or functions of the convolutional neuronal network that represent unconventional and/or novel steps/functions. That is, [0051] of the ‘968 publication merely discloses the use of a CNN in its typical, conventional usage (i.e. to recognize patterns in data).
Furthermore, the “novelty of any element or steps in a process, or even of the process itself, is of no relevance in determining whether the subject matter of a claim falls within the 101 categories of possibly patentable subject matter,” and specifically, lack of novelty under 35 U.S.C. 102 or obviousness under 35 U.S.C. 103 of a claimed invention does not necessarily indicate that additional elements are well-understood, routine, conventional elements. Because they are separate and distinct requirements from eligibility, patentability of the claimed invention under 35 U.S.C. 102 and 103 with respect to the prior art is neither required for, nor a guarantee of, patent eligibility under 35 U.S.C. 101, e.g. see MPEP 2106.05(I).
Additionally, regarding the conventionality of the natural language processing, the step of recognizing that the user is suffering from pain and the step of determining whether an implant of the patient is malfunctioning are claimed at such a high level of generality that there is no indication that the natural language processing and/or the CNN is executed in any unconventional manner. Furthermore, as stated above, the examples of the artificial intelligence algorithms utilized by the claimed invention (which are not presently claimed, but are recited in [0049] and [0051] of the ‘968 publication) are existing, conventional artificial intelligence algorithms. Hence, the natural language processing and CNN operations are properly interpreted as well-understood, routine, and/or conventional.
For the aforementioned reasons, Claims 1-3 and 5-15 are rejected under 35 U.S.C. 101.
Applicant’s arguments, see Remarks, filed October 21, 2025, regarding the rejections of Claims 1-3 and 5-15 under 35 U.S.C. 103 have been considered but are moot because the arguments do not apply to any of the references being used in the current rejection. As stated above, the newly amended claim limitations of Claims 1, 11-12, and 14 have necessitated the new grounds of rejection, and Katra and Netscher are now cited to address the newly amended claim limitations.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN P GO whose telephone number is (703)756-1965. The examiner can normally be reached Monday-Friday 9am-6pm PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PETER H CHOI can be reached at (469)295-9171. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOHN P GO/Examiner, Art Unit 3681