Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 1, 8, and 14 are objected to because of the following informalities:
Claims 1 and 8 recite “the a dynamic number” in line 7 and line 11 respectively, which contains a typographical error; Examiner believes the limitation was intended to recite “a dynamic number”.
Claim 14 recites “The server of claim 7,” where claim 7 is directed to a method, and also recites limitations identical to those in claim 7. Claim 14 appears to have been intended to depend from claim 8 rather than claim 7. Examiner requests that Applicant amend the claim to reflect the intended dependency.
Claims 16-20 each recite the limitation "the non-transitory computer readable medium of claim 13," where claim 13 is directed to a server. Claims 16-20 appear to have been intended to depend from claim 15 rather than claim 13. Examiner requests that Applicant amend the claims to reflect the intended dependency.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1-7 are drawn to a method, claims 8-14 are drawn to a server, and claims 15-20 are drawn to a non-transitory computer readable medium, each of which is within the four statutory categories.
Step 2A(1)
Independent claim 1 recites, in part, performing the steps of:
receiving a selection of a focal patient;
vectorizing at least one healthcare record item associated with the selected focal patient and other patients, wherein the selected focal patient and the other patients are without slot labels, and wherein a dynamic number of slot labels are used to label one or more patients, with different patients having different slot labels;
determining at least one selected from the group including key healthcare record items and marker values from the healthcare record for each patient;
forming vectors from at least one selected from the group including key healthcare record items and marker values;
concatenating the separate vectors together to form final vectors for the patients;
performing a similarity search using the final vectors to determine a group of similar patients from the vectorized patients;
transmitting the similar patients;
receiving a selection of the patients that are within a neighboring dimensional space as the focal patient;
labelling the selected patients in batch, wherein the selection of the patients and the focal patient within the same slots have similar slot labels;
determining whether a sufficient number of patients required to train a machine learning model has been labelled, wherein the sufficient number of patients is at least partially based on at least one selected from the group including
a number of the patient record items,
a measure of patient health heterogeneity, and
a number of patients;
receiving selections of patients that are within the same dimensional space as a patient to be labelled; and
labelling a next batch of patients based on first labels of a first batch of patients and existing marker values for the patients.
These elements amount to a form of managing personal behavior and therefore fall within the scope of an abstract idea in the form of a certain method of organizing human activity. Fundamentally, the process is that of identifying a set of patients which are similar to an initial patient based on healthcare records and marker values, labeling that set of patients and continuing until a sufficient number have been labeled, and again identifying patients similar to a patient to be labelled and labeling a next batch of patients based on labels of a first batch. These steps could be performed by an individual as part of preparing a corpus of patient data by applying labels to groups of similar patients. While certain of the above elements include mathematical calculations, such elements also fall within the abstract idea recited by the claim.
Examiner notes that, while one of the above elements includes a reference to a machine learning model, it does not recite the machine learning model as performing any function. Rather, the term “machine learning model” as included above only serves as a description of the “sufficient number” of patients.
Independent claims 8 and 15 recite similar limitations and also recite an abstract idea under the same analysis.
Step 2A(2)
This judicial exception is not integrated into a practical application because the additional elements within the claims only amount to:
A. Instructions to Implement the Judicial Exception. MPEP 2106.05(f)
Claims 1, 8, and 15 further recite additional elements of a) a server recited as performing data processing tasks including receiving selections of patients, forming vectors and performing the similarity search, and labelling patients, b) an electronic system stored in at least one storage device communicatively coupled to the server, used to provide the patients, c) a machine learning model recited as being trained based on the labelled patients in accordance with a dynamic determination that the sufficient number of patients has been labelled and as labeling the next batch of patients, d) a communication network used to transmit the similar patients, and e) a computer used to display the similar patients.
Claim 8 additionally recites a memory and a processor coupled to the memory, recited as configured with executable instructions to perform the subsequent processing steps.
Claim 15 additionally recites a non-transitory computer readable medium storing executable instructions configured to cause a server to perform the subsequent data processing steps.
Paragraph 53 states that “server 600 may include a processor 601 coupled to volatile memory 602 and a large capacity nonvolatile memory, such as a disk drive 603” and that “processor 601 is configured with processor-executable instructions that are stored on a non-transitory processor-readable medium, such as a peripheral memory access device such as a floppy disc drive, compact disc (CD) or digital video disc (DVD) drive 606 coupled to the processor 601.” Paragraphs 24 and 54 further provide that “system 100 may include a server 102 that may communicate with a number of patient record databases 104, 106 via a network (e.g., the Internet, a hospital network, a local area wired and wireless network, etc.), and one or more computers 110 having a display” and that “server 600 may also include network access ports 604 (or interfaces) coupled to the processor 601 for establishing data connections with a network, such as the Internet and/or a local area network coupled to patient databases and servers, as well as computers with displays.” No further details are provided with respect to the server, processor, memory/non-transitory computer readable medium, electronic system, communication network, or computer. Each of these elements is therefore construed as encompassing respective generic forms of computing devices.
Paragraphs 14, 19, 27, and 38 are among paragraphs describing a machine learning model in the context of being trained and retrained using labeled patient data. However, no further information is provided regarding either the type of machine learning model or any particular way in which the machine learning model is trained. The machine learning model and its training are construed as encompassing generic forms thereof, i.e., computerized machine learning algorithms.
Each of the above elements amounts to mere instructions to implement the abstract idea using computing elements as tools. For example, elements such as the server, processor, memory, and network are each only recited at a high level of generality as used to perform the various functions and are likewise only disclosed at a high level of generality. The machine learning model is also only recited at a high level of generality and only broadly disclosed within the specification.
These elements are therefore not sufficient to integrate the recited abstract idea into a practical application. The above claims, as a whole, are therefore directed to an abstract idea.
Step 2B
The present claims do not include additional elements that are sufficient to amount to more than the abstract idea because the additional elements or combination of elements amount to no more than a recitation of:
A. Instructions to Implement the Judicial Exception. MPEP 2106.05(f)
As explained above, claims 1, 8, and 15 only recite the server, electronic system stored in at least one storage device, machine learning model, communication network, computer, memory and processor, and non-transitory computer readable medium as tools for performing the steps of the abstract idea, and mere instructions to perform the abstract idea using a computer are not sufficient to amount to significantly more than the abstract idea. MPEP 2106.05(f).
Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually.
Depending Claims
Claims 2, 9, and 16 recite wherein receiving a selection of a focal patient comprises receiving a randomly selected patient. These limitations fall within the scope of the abstract idea as set out above.
Claims 2, 9, and 16 recite the additional elements of a) a server which receives the randomly selected patient, and b) an electronic system which provides the randomly selected patient.
Paragraph 53 states that “server 600 may include a processor 601 coupled to volatile memory 602 and a large capacity nonvolatile memory, such as a disk drive 603,” while paragraph 24 further provides that “system 100 may include a server 102 that may communicate with a number of patient record databases 104, 106 via a network (e.g., the Internet, a hospital network, a local area wired and wireless network, etc.), and one or more computers 110 having a display.” No further details are provided with respect to the server or electronic system. Each of these elements is therefore construed as encompassing respective generic forms of computing devices.
The recited server and electronic system amount to mere instructions to implement the abstract idea using computing elements as tools. Specifically, the server and electronic system are each only recited at a high level of generality as used to perform their respective functions and are likewise only disclosed at a high level of generality.
These elements are therefore not sufficient to integrate the recited abstract idea into a practical application or to amount to significantly more than the abstract idea.
Claims 3, 10, and 17 recite wherein vectorizing at least one healthcare record item associated with the selected focal patient and other patients comprises vectorizing the at least one of the healthcare record items associated with the selected focal patient and other patients using all the statistical characteristics taken together (minimum, maximum, variance, skewness, range, mean, mode, median) of the sequence of a patient's data elements to extract new insights that are not apparent from the individual longitudinal raw data points. These limitations fall within the scope of the abstract idea as set out above.
Claims 3, 10, and 17 further recite the additional element of the system, where the other patients are recited as “in” the system.
Paragraph 24 states that “system 100 may include a server 102 that may communicate with a number of patient record databases 104, 106 via a network (e.g., the Internet, a hospital network, a local area wired and wireless network, etc.), and one or more computers 110 having a display.” No further details are provided with respect to the system. The system is therefore construed as encompassing a generic form of computing device.
The recited system amounts to mere instructions to implement the abstract idea using a computing element as a tool. Specifically, the system is only recited at a high level of generality insofar as the patients are recited as “in” the system, and likewise is only disclosed at a high level of generality.
This element is therefore not sufficient to integrate the recited abstract idea into a practical application or to amount to significantly more than the abstract idea.
Claims 4, 11, and 18 recite wherein vectorizing at least one healthcare record item associated with the selected focal patient and other patients comprises vectorizing columns of the at least one of the healthcare record items associated with the selected focal patient and other patients by at least one selected from a group including patient type, patient description, patient diagnosis, treatment type, treatment length, treatment description, and treatment category name list; and concatenating the vectorized columns together to form final vectors of the focal patient and other patients. These limitations fall within the scope of the abstract idea as set out above.
Claims 4, 11, and 18 further recite the additional element of the system, where the other patients are recited as “in” the system.
Paragraph 24 states that “system 100 may include a server 102 that may communicate with a number of patient record databases 104, 106 via a network (e.g., the Internet, a hospital network, a local area wired and wireless network, etc.), and one or more computers 110 having a display.” No further details are provided with respect to the system. The system is therefore construed as encompassing a generic form of computing device.
The recited system amounts to mere instructions to implement the abstract idea using a computing element as a tool. Specifically, the system is only recited at a high level of generality insofar as the patients are recited as “in” the system, and likewise is only disclosed at a high level of generality.
This element is therefore not sufficient to integrate the recited abstract idea into a practical application or to amount to significantly more than the abstract idea.
Claims 5, 12, and 19 recite assigning a unique field index to a group of the labeled patients. These limitations fall within the scope of the abstract idea as set out above.
Claims 6, 13, and 20 recite assigning one or more patients to an existing field. These limitations fall within the scope of the abstract idea as set out above.
Claims 7 and 14 recite receiving individual record data from a non-medical record source; processing the individual record data to recognize and extract information that could be relevant to patient categorizing and treatment planning; reformatting extracted personal data into format consistent with patient databases or compatible with the machine learning system; and forming vectors from the extracted personal data. These limitations fall within the scope of the abstract idea as set out above.
Examiner notes that the patient databases and machine learning system included above are not recited as performing any function, and are only recited as part of the description of the data format.
Claims 7 and 14 recite the additional element of a trained/fine-tuned large language model used to recognize and extract the information.
Paragraphs 16, 18, and 49 are among the paragraphs describing the large language model. However, no description of the large language model is provided other than stating that “a large language model or neural network may be trained and/or fine-tuned to automate the receipt and processing of information about patients from a range of sources…”. No further information is provided on the specific models or training. The trained/fine-tuned large language model is therefore construed as encompassing a generic large language model.
The recited trained/fine-tuned large language model only amounts to instructions to implement the abstract idea using computing elements as tools. The large language model is only recited at a high level of generality as used to recognize and extract the information, and likewise is only disclosed at a high level of generality. This element is therefore not sufficient to integrate the recited abstract idea into a practical application or to amount to significantly more than the abstract idea.
Claims 1-20 are therefore rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 112(a)
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.
In order to satisfy the written description requirement, the specification must describe the claimed invention in sufficient detail that one skilled in the art can reasonably conclude that the inventor had possession of the claimed invention. See MPEP 2161.01(I). However, generic claim language in the original disclosure does not satisfy the written description requirement if it fails to support the scope of the genus claimed, and even original claims may fail to satisfy the written description requirement when the invention is claimed and described in functional language but the specification does not sufficiently identify how the invention achieves the claimed function. See MPEP 2161.01(I) citing in part Ariad, 598 F.3d at 1349 ("[A]n adequate written description of a claimed genus requires more than a generic statement of an invention's boundaries."). “In Ariad, the court recognized the problem of using functional claim language without providing in the specification examples of species that achieve the claimed function:
The problem is especially acute with genus claims that use functional language to define the boundaries of a claimed genus. In such a case, the functional claim may simply claim a desired result, and may do so without describing species that achieve that result. But the specification must demonstrate that the applicant has made a generic invention that achieves the claimed result and do so by showing that the applicant has invented species sufficient to support a claim to the functionally-defined genus.” – MPEP 2161.01
Specifically with regard to computer-implemented functional claims, the specification must provide a disclosure of the computer and the algorithm in sufficient detail to demonstrate to one of ordinary skill in the art that the inventor possessed the invention, including how to program the disclosed computer to perform the claimed function. MPEP 2161.01(I).
With regard to claims 1, 8, and 15, the disclosure does not provide sufficient written description of the claimed subject matter to show that applicant had possession of a method or system performing the function of dynamically training the machine learning model based on the labelled patients by iteratively transmitting predictions by the machine learning model for marker values of a patient representation to be labelled.
The only description of this function in the disclosure is in paragraph 38 of the specification as originally filed. However, paragraph 38 only mirrors the exact language of the claims, and does not provide any further description of how the machine learning model is actually trained based on the labelled patients specifically by iteratively transmitting predictions by the machine learning model for marker values of a patient representation to be labelled. Examiner notes that training “by iteratively transmitting predictions by the machine learning model for marker values of a patient representation to be labelled” is not itself sufficient to actually result in a trained machine learning model.
By not providing any example of an algorithm or steps used to achieve the above function, Applicant has failed to show the actual subject matter in their possession at the time of the invention in a way sufficient to reasonably convey to one skilled in the relevant art that Applicant had possession of the claimed invention at the time the application was filed.
With further regard to claims 1, 8, and 15, the disclosure does not provide sufficient written description of the claimed subject matter to show that applicant had possession of a method or system performing the function of determining whether a sufficient number of patients required to train the machine learning model has been labelled at least partially based on a measure of patient health heterogeneity. The only description of this function in the disclosure is in paragraph 37 of the specification as originally filed. However, paragraph 37 only mirrors the exact language of the claims, and does not provide any further description of what constitutes “a measure of patient health heterogeneity” or how such a measure is used to determine whether a sufficient number of patients required to train the machine learning model has been labelled.
Claims 2-7, 9-14, and 16-20 inherit the deficiencies of claims 1, 8, and 15 through dependency and are likewise rejected.
With regard to claims 3, 10, and 17, the specification does not provide sufficient written description of the claimed subject matter to show that applicant had possession of a method or system performing the function of vectorizing the at least one of the healthcare record items associated with the selected focal patient and other patients in the system using all the statistical characteristics taken together (minimum, maximum, variance, skewness, range, mean, mode, median) of the sequence of a patient's data elements to extract new insights that are not apparent from the individual longitudinal raw data points.
The only description of this function in the disclosure is in paragraph 28 of the specification as originally filed. However, paragraph 28 only mirrors the above language as recited in the claims, and does not provide any further description of how the vectorization is performed in the manner recited using all the statistical characteristics taken together (minimum, maximum, variance, skewness, range, mean, mode, median) of the sequence of a patient's data elements.
By not providing any example of an algorithm or steps used to achieve the above function, Applicant has failed to show the actual subject matter in their possession at the time of the invention in a way sufficient to reasonably convey to one skilled in the relevant art that Applicant had possession of the claimed invention at the time the application was filed.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1, 8, and 15 are indefinite because Examiner is unable to determine the metes and bounds of each claim based on the recitation of "the selection of the patients and the focal patient within the same slots" in line 26, line 27, and line 27 respectively. Initially, there is insufficient antecedent basis for this limitation in the claims for the limitation as a whole because there is no prior recitation in either claim of any selection of “the patients and the focal patient within the same slots” and it is not clear how this relates to the previously recited “selection of the patients that are within a neighboring dimensional space as the focal patient.” Additionally, there is no antecedent basis for the portion reciting “the same slots” given that there is no prior recitation of any “slots” and it is not clear what “the same slots” is referencing in the context of the claims.
Furthermore, it is not clear from the disclosure or the context of the claims what scope and meaning the term “slots” is intended to have. While the disclosure uses the terms “slots” and “slot labels” in the same language and context as the claims, it does not provide any additional information or detail about what these terms are intended to encompass. Examiner requests that Applicant clarify the above language.
Claims 2-7, 9-14, and 16-20 inherit the deficiencies of claims 1, 8, and 15 and are likewise rejected.
Claims 3, 10, and 17 are indefinite because Examiner is unable to determine the metes and bounds of each claim based on the recitation of “all the statistical characteristics taken together (minimum, maximum, variance, skewness, range, mean, mode, median) of the sequence of a patient's data elements…”.
Initially, there is insufficient antecedent basis for “the statistical characteristics” in the respective claims because there is no prior recitation of any such statistical characteristics. Likewise, there is insufficient antecedent basis for “the sequence of a patient’s data elements” because there is no prior recitation of any such sequence of a patient’s data elements.
Furthermore, a broad range or limitation together with a narrow range or limitation that falls within the broad range or limitation (in the same claim) may be considered indefinite if the resulting claim does not clearly set forth the metes and bounds of the patent protection desired. See MPEP § 2173.05(c). In the present instance, the recitation of the broad limitation “all the statistical characteristics” is followed by a narrower recitation of “(minimum, maximum, variance, skewness, range, mean, mode, median)”. The claim(s) are considered indefinite because there is a question or doubt as to whether the feature introduced by such narrower language is (a) merely exemplary of the remainder of the claim, and therefore not required, or (b) a required feature of the claims.
Lastly, the scope and meaning of “taken together” is unclear because neither the disclosure nor the claims provide any information on what functionality or analysis would actually be encompassed by statistical characteristics “taken together.” Examiner notes that the use of “together” implies that all of the statistical characteristics are being used in the same analysis.
Claims 7 and 14 are indefinite because Examiner is unable to determine the metes and bounds of each claim based on the recitation of “information that could be relevant to patient categorizing and treatment planning.” Specifically, neither the claim nor the disclosure provides sufficient description to determine the scope of “could be relevant” and what information would fall within the scope of the limitation as a whole.
Claims 7 and 14 are further indefinite because Examiner is unable to determine the metes and bounds of each claim based on the recitation of “trained/fine-tuned.” Each of these terms has a different scope, and it is not clear whether the claim is reciting them in the alternative or requires both functions.
Claims Not Rejected under 35 U.S.C. 102/103
Claims 1-20 are not presently rejected under 35 U.S.C. 102/103 in light of the closest prior art of record.
Agrawal et al. (US Patent Application Publication 2019/0079938) discloses the following limitations of claim 1:
receiving, at a server, a selection of a focal patient from an electronic system stored in at least one storage device communicatively coupled to the server (see e.g. Figure 1, [7], [9], [23]-[25], [27], and [55] describe receiving a query patient from a patient database containing electronic medical records);
vectorizing at least one healthcare record item associated with the selected focal patient and other patients in the system ([25], [28], [29], and [37] describe a series of feature selection steps and vectorization of the selected features), wherein the selected focal patient and the other patients in the system are without slot labels, and wherein the a dynamic number of slot labels are used to label one or more patients, with different patients having different slot labels, and wherein a machine learning model at the server is untrained in labeling the one or more patients with the slot labels (Figure 1, Figure 2 elements 52-62, [24], [25], and [35] describe an initial stage where initial feature selection and clustering is performed, and multiple clusters are subsequently generated, i.e., the patients are not assigned to clusters prior to the initial step and patients may be assigned to different clusters, and the clustering algorithm is initially untrained);
determining, at the server, at least one selected from the group including key healthcare record items and marker values from the healthcare record for each patient of the system ([20] and [28] describe performing feature selection on the patient medical record data);
forming, at the server, vectors from at least one selected from the group including key healthcare record items and marker values ([25], [28], [29], and [37] describe vectorizing the selected features for each patient. Examiner notes that the feature value for each feature, e.g. vρ1f1, may constitute a 1x1 vector);
concatenating, at the server, the separate vectors together to form final vectors for the patients ([29] and [37] show the feature vectors concatenated into final vectors);
performing, at the server, a similarity search using the final vectors to determine a group of similar patients from the vectorized patients of the system ([25], [29], and [37] describe using the feature vectors to determine similar patients based on a calculated distance metric);
transmitting, via a communication network coupled to the server, the similar patients to a computer for display (Figure 1, [25], [27], and [31] describe transmitting the similar patients to a clinician device for review);
receiving, at the server, a selection of the patients that are within a neighboring dimensional space as the focal patient (Figures 1-3, [24], [31], and [50]-[52] describe the clinician selecting similar patients based on the dimensional feature values, i.e. patients within a neighboring dimensional space);
labelling, at the server, the selected patients in batch, wherein the selection of the patients and the focal patient within the same slots have similar slot labels (Figures 1-3, [24], [31]-[33], and [50]-[52] describe subsequently determining which new clusters the patients should belong to);
dynamically training, at the server, the machine learning model based on the labelled patients (Figure 1 element 42, Figure 2 element 56, [25], [31]-[33], [35], and [37] describe training the similarity model using the feedback and clustered new patients);
receiving selections of patients that are within the same dimensional space as a patient to be labelled (Figures 1-3, [24], [31], and [50]-[52] describe the clinician selecting patients in a cluster based on the dimensional feature values); and
dynamically retraining, at the server, a classifier of the machine learning model that labels a next batch of patients based on first labels of a first batch of patients and existing marker values for the patients of the electronic system (Figure 1, Figure 2 elements 60 and 62, [9], [25], [33], [35], and [37] describe the training process as a loop in which the model uses each previous iteration to produce new clusters of patients).
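For orientation only, the vectorize-concatenate-similarity-search sequence recited in the limitations above can be sketched as follows. All names, feature choices, and patient values are illustrative placeholders, not drawn from the application or the cited reference; the claims do not specify a distance metric, so Euclidean distance is assumed here purely for the sketch.

```python
import math

def final_vector(record, features):
    """Form one sub-vector per selected feature (each value treated as a
    1x1 vector) and concatenate them into one final vector per patient."""
    return [float(record[f]) for f in features]

def similar_patients(focal, others, features, k=2):
    """Similarity search: rank the other vectorized patients by a distance
    metric (Euclidean, assumed) to the focal patient's final vector."""
    fv = final_vector(focal, features)
    dists = sorted(
        (math.dist(final_vector(rec, features), fv), pid)
        for pid, rec in others.items()
    )
    return [pid for _, pid in dists[:k]]

# Illustrative records keyed by hypothetical marker values.
features = ["age", "a1c"]
focal = {"age": 60, "a1c": 7.1}
others = {
    "p1": {"age": 61, "a1c": 7.0},
    "p2": {"age": 30, "a1c": 5.2},
    "p3": {"age": 59, "a1c": 7.3},
}
print(similar_patients(focal, others, features))  # ['p1', 'p3']
```

In the claimed arrangement, the resulting group of similar patients would then be transmitted for display, with the clinician's selections within a neighboring dimensional space labelled in batch and fed back to train the model.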
The closest prior art of record does not appear to disclose, in combination with the above limitations and when construed as a whole, the limitations reciting:
determining, at the server, whether a sufficient number of patients required to train the machine learning model has been labelled, wherein the sufficient number of patients is at least partially based on at least one selected from the group including a number of the patient record items, a measure of patient health heterogeneity, and a number of patients;
in accordance with a dynamic determination that the sufficient number of patients has been labelled, training the machine learning model by iteratively transmitting predictions by the machine learning model for marker values of a patient representation to be labelled.
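Again purely for orientation, the sufficiency determination and iterative prediction-and-retraining loop recited in these two limitations might be sketched as below. The threshold formula is an invented placeholder (the claims recite only that the sufficient number is at least partially based on the number of patient record items, a measure of patient health heterogeneity, and a number of patients), and the model interface is a hypothetical stand-in.

```python
def sufficient_labels(num_labeled, num_record_items, heterogeneity, num_patients):
    """Hypothetical sufficiency check: more record items, greater health
    heterogeneity, or more patients each raise the labels required."""
    required = max(10, int(num_record_items * heterogeneity * num_patients ** 0.5))
    return num_labeled >= required

def training_loop(model, labeled, unlabeled, clinician_label):
    """Iterative training: the model transmits a prediction for each
    patient representation to be labelled, the reviewer confirms or
    corrects it, and the classifier is retrained on the grown batch."""
    while unlabeled:
        patient = unlabeled.pop()
        prediction = model.predict(patient)          # transmitted for review
        label = clinician_label(patient, prediction)  # confirmation/correction
        labeled.append((patient, label))
        model.fit(labeled)                            # dynamic retraining
```

The sketch is only meant to show the shape of the loop: training is gated on the sufficiency determination, and each batch of labels feeds the retraining of the classifier for the next batch.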
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Gokalp et al. (US 11,120,364);
Blue (US Patent Application Publication 2016/0063213);
Eun et al. (US Patent Application Publication 2021/0027896);
Friedlander et al. (US Patent Application Publication 2008/0082356);
Ramarajan et al. (US Patent Application Publication 2020/0286599);
Li et al., "A Joint Model of Clinical Domain Classification and Slot Filling Based on RCNN and BiGRU-CRF."
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM G LULTSCHIK whose telephone number is (571)272-3780. The examiner can normally be reached 9am - 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fonya Long, can be reached at (571) 270-5096. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Gregory Lultschik/Examiner, Art Unit 3682