DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Claim 4 has been canceled and new claim 21 is introduced.
Applicant’s arguments, see remarks, filed 10/16/2025, with respect to the claim objections and the claim rejection under 35 U.S.C. 112(b) have been fully considered and are persuasive. The rejection of claim 9 under 35 U.S.C. 112(b) and the claim objections of claims 17-19 have been withdrawn.
Applicant's arguments filed 10/16/2025, regarding the rejection of the claims under 35 U.S.C. 101, have been fully considered but they are not persuasive. Applicant argues, on page 9 of the remarks: [quoted passage reproduced as an embedded image (media_image1.png); not rendered here].
Examiner disagrees. The claim fails to specify anything particular to artificial intelligence beyond reciting a “machine learning model”, a “training dataset”, and re-training the ML model using the training dataset. These are generic, black-box computer elements with no specificity regarding the type of ML model or the type of training. The claim simply recites a method of evaluating numerous images from a scientific instrument and deciding whether the images are to be included as training data for some ML model; these are steps a trained physician can conduct using a microscope and their own human vision when substituted for the generic computer model (i.e., the doctor is the model choosing the items to include in a training dataset, and training is akin to human memory). Applicant’s argument hinges on AI details that the claim does not recite.
Applicant further argues, on page 9 of the remarks, that Applicant's claims are analogous to Example 39 given by the USPTO: [quoted passage reproduced as embedded images (media_image2.png, media_image3.png); not rendered here].
Examiner disagrees. Although both sets of claims involve creating image training datasets for a machine learning/neural network model, Example 39 recites far greater technical aspects that Applicant’s claim does not, such as the step of applying one or more transformations to each digital facial image, including mirroring, rotating, smoothing, or contrast reduction, to create a modified set of digital facial images. That step cannot practically be performed by a human with their own vision and pen and paper. Applicant’s present claim lacks comparable technical aspects that could not practically be performed by a human doctor. Further, Example 39 recites a multi-step training process, while Applicant’s present claim simply states to re-train the model based on simple filtering of the images by observed features.
Applicant further argues, on page 10 of the remarks: [quoted passage reproduced as an embedded image (media_image4.png); not rendered here].
Examiner disagrees. The only portion of the claim Applicant can point to as a purported practical application or improvement is that the set of images is acquired via a scientific instrument; the remainder of the independent claim simply recites generic steps of analyzing image features against a threshold and selecting the images that satisfy the threshold for training a generic model. No specific scientific instrument is recited in the claim, and no improvement to machine learning/AI is recited; analyzing features and thresholding to select images are generic image-analysis processes. The independent claims are conventional and routine, merely collecting and comparing data without specifying the actual field of technology beyond “scientific instrument support”.
Therefore, the rejection of the claims under 35 U.S.C. 101 is maintained.
Applicant's arguments filed 10/16/2025, regarding the rejection of the claims under 35 U.S.C. 103, have been fully considered but they are not persuasive. Applicant argues, on page 14 of the remarks: [quoted passage reproduced as an embedded image (media_image5.png); not rendered here].
Examiner disagrees. Moore teaches using machine learning to analyze features of image data taken from a scientific instrument (a microscope), and Machek teaches analyzing image-set features to determine whether each respective image should be included in a training dataset for training a machine learning model. When the references are combined, the output of the ML model of Moore (image features) is analyzed via the image-feature threshold filtering of Machek to determine whether the images are to be included in the training dataset so that the ML model of Moore can be re-trained. Applicant asserts that the combined references would not teach or suggest using the output that a trained model generates for an inputted image to select whether that image should be used for re-training the trained model; however, Applicant provides no evidence of why this would be true. Machek determines which images belong in the training dataset for an artificial neural network based upon features of the images, so that concept is readily applied to the features output from the machine learning model of Moore, which takes microscope images as input. It is irrelevant that Machek performs this thresholding of image features to select images for a training dataset on simulated images with simulated features, because when the references are combined, the process of Machek is applied to the real image features output from the already-trained ML model of Moore.
Therefore, the rejection of the claims under 35 U.S.C. 103 is maintained.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f), because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “feature identification logic” in claim 1; “image selection logic” in claims 1-4, 6, 8, and 10-13; and “training logic” in claims 1, 2, and 14.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitation(s) to avoid them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f).
Para. [0042] of Applicant’s present specification recites that “For example, as illustrated in FIG. 1A, the CPM support module 1000 may include data triage logic 1002 and, optionally, model promotion logic 1004. As used herein, the term "logic" may include an apparatus that is to perform a set of operations associated with the logic. For example, any of the logic elements included in the support module 1000 may be implemented by one or more computing devices programmed with instructions to cause one or more processing devices of the computing devices to perform the associated set of operations. In a particular embodiment, a logic element may include one or more non-transitory computer-readable media having instructions thereon that, when executed by one or more processing devices of one or more computing devices, cause the one or more computing devices to perform the associated set of operations. As used herein, the term "module" may refer to a collection of one or more logic elements that, together, perform a function associated with the module. Different ones of the logic elements in a module may take the same form or may take different forms. For example, some logic in a module may be implemented by a programmed general-purpose processing device, while other logic in a module may be implemented by an application-specific integrated circuit (ASIC). In another example, different ones of the logic elements in a module may be associated with different sets of instructions executed by one or more processing devices.
A module may not include all of the logic elements depicted in the associated drawing; for example, a module may include a subset of the logic elements depicted in the associated drawing when that module is to perform a subset of the operations discussed herein with reference to that module.”; therefore, the interpretation of the claim terms “feature identification logic” in claim 1, “image selection logic” in claims 1-4, 6, 8, and 10-13, and “training logic” in claims 1, 2, and 14 under 35 U.S.C. 112(f) is correct and does not raise issues of indefiniteness under 35 U.S.C. 112(b).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 16, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without integration into a practical application or recitation of significantly more.
In the analysis below, the method of independent claim 16 is considered representative of independent claims 1 and 20 since all of the independent claims recite identical steps despite being directed to different statutory matter. Furthermore, independent claims 1, 16, and 20 are directed to one of the four statutory categories of eligible subject matter (an apparatus for independent claim 1, a process for independent claim 16, and a non-transitory computer readable medium having instructions thereon for independent claim 20); thus, the claims pass Step 1 of the Subject Matter Eligibility Test (See flowchart in MPEP 2106).
Step 2A, prong 1 analysis:
The independent claims are directed to receiving one or more selection criteria; receiving one or more identified features in a set of images, the one or more identified features generated; determining whether the set of images satisfies the one or more selection criteria; including the set of images, including the one or more identified features, in a training dataset in response to a determination that the set of images satisfies the one or more selection criteria; determining whether the set of images satisfies the one or more selection criteria by generating a metric for the one or more identified features and determining that the set of images satisfies the one or more selection criteria in response to the metric satisfying a predetermined threshold; and retraining using the training dataset.
Each of the above steps can be performed mentally. In particular, a scientist, doctor, or trained medical professional observes medical images taken of a patient, such as a patient with cancer having tumors/lesions. The selection criteria (metric) are whether the observed lesions are benign or malignant, based on a threshold of how the lesions appear to the medical professional’s educated vision. The medical professional observes the images with their own human vision and selects/identifies the relevant malignant tumors/lesions in the images indicating cancer; a tumor has to look malignant enough to qualify. The doctor then sorts the patient’s image set by whether a lesion is present or not and saves those images for further evaluation and as a reference when checking additional patients in the future (re-training using the training dataset); the doctor learns from each lesion identified in the images. Therefore, this process can all be done mentally.
As such, the description in independent claims 1, 16, and 20 is an abstract idea – namely, a mental process. Accordingly, the analysis under prong one of step 2A of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (See flowchart in MPEP 2106).
Additional elements:
The additional elements recited in independent claims 1, 16, and 20 are a computing device for providing scientific instrument support, a scientific instrument, and a machine-learning model.
Step 2A, prong 2 analysis:
The above-identified additional elements do not integrate the judicial exception into a practical application. Taking images using a scientific instrument is not specific enough to integrate the exception into a practical application without knowing the type of scientific instrument, and a machine-learning model without any additional details as to its functionality or specific type amounts to nothing more than a generic computing device.
Each of the additional elements (a computing device for providing scientific instrument support, a scientific instrument, and a machine-learning model) amounts to merely using different devices as tools to perform the claimed mental process. Implementing an abstract idea on a computer or using known generic devices does not integrate a judicial exception into a practical application (See MPEP 2106.05(f)).
Moreover, the additional elements of the claims do not recite an improvement in the functioning of a computer or other technology or technical field, the claimed steps are not performed using a particular machine, the claimed steps do not effect a transformation, and the claims do not apply the judicial exception in any meaningful way beyond generically linking the use of the judicial exception to a particular technological environment (See MPEP 2106.04(d)). Therefore, the analysis under prong two of step 2A of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (See flowchart in MPEP 2106).
Step 2B:
Finally, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Each of the additional elements (a computing device for providing scientific instrument support, a scientific instrument, and a machine-learning model) is a generic computer feature performing generic computer functions that are well-understood, routine, and conventional, and does not amount to more than implementing the abstract idea with a computerized system. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea).
Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation, and mere implementation on a generic computer does not add significantly more to the claims. Accordingly, the analysis under step 2B of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (See flowchart in MPEP 2106).
For all of the foregoing reasons, independent claims 1, 16, and 20 do not recite eligible subject matter under 35 USC 101.
Claim 2 recites wherein at least one of the image selection logic and the training logic is implemented by a computing device remote from the scientific instrument. A doctor takes images of a patient at one location and sends the image data to another medical professional to analyze the images at another location; therefore, this process can all be done mentally.
Claim 3 recites wherein the one or more identified features include line indicated termination features. A doctor recognizes, using their own human vision, line indicated termination features, such as lines and stripes in chest radiography; therefore, this process can all be done mentally.
Claim 5 recites wherein the metric is based on a slope of at least one selected from a group consisting of a plot representing a number of features identified in each image in the set of images, a plot representing a feature area identified in each image in the set of images, and a plot representing feature distances for each image in the set of images. A doctor records the necessary data about the identified lesions in the patient’s medical images and creates a plot of the number of features or of the feature distances; computing a slope from plotted data is a simple mathematical process that can be done by hand; therefore, this process can all be done mentally with the aid of pen and paper.
Claim 6 recites wherein the one or more selection criteria includes a predetermined reference for a characteristic of the one or more identified features and wherein the image selection logic determines whether the set of images satisfies the one or more selection criteria by identifying an anomaly of the one or more identified features as compared to the predetermined reference. The metric is whether a lesion looks malignant or benign, which is decided by a doctor using their own human vision; the doctor can also compare the current images to previously taken images of malignant tumors of other patients to confirm an observation of cancer; a tumor has to look malignant enough to qualify; therefore, this process can all be done mentally.
Claim 7 recites wherein the predetermined reference for the characteristic of the one or more identified features includes at least one selected from a group consisting of a predetermined reference size of the one or more identified features, a predetermined reference number of the one or more identified features, a predetermined reference position of the one or more identified features, a predetermined reference shape of the one or more identified features, and a predetermined reference distance between two of the one or more identified features. A doctor observes the medical images to identify lesions/tumors indicating cancer that can be differentiated from benign lesions; determining size, distance between different lesions, position, and shape are all characteristics the doctor identifies with their own human vision as compared to previous medical image sets of other patients to verify the analysis; therefore, this process can all be done mentally.
Claim 8 recites wherein the one or more selection criteria includes a characteristic of the one or more identified features and wherein the image selection logic determines whether the set of images satisfies the one or more selection criteria by identifying a pattern of the characteristic over multiple sets of images. A doctor observes the medical images to identify lesions/tumors indicating cancer that can be differentiated from benign lesions; the doctor takes multiple image sets of a patient over time to examine, for example, how the lesions have grown; therefore, this process can all be done mentally.
Claim 9 recites wherein the characteristic of the one or more identified features includes at least one selected from a group consisting of a size of the one or more identified features, a number of the one or more identified features, a position of the one or more identified features, and a shape of the one or more identified features. A doctor observes the medical images to identify lesions/tumors indicating cancer that can be differentiated from benign lesions; determining size, distance between different lesions, position, and shape are all characteristics the doctor identifies with their own human vision; therefore, this process can all be done mentally.
Claim 10 recites wherein the one or more identified features include one or more first identified features of a first set of images and wherein the image selection logic excludes a second set of images, including one or more second identified features of the second set of images, from the training dataset. A doctor observes the medical images to identify lesions/tumors indicating cancer that can be differentiated from benign lesions; if the images only have benign lesions and no malignant ones then those images are excluded from further analysis by the doctor; therefore, this process can all be done mentally.
Claim 11 recites wherein the training dataset includes an annotation dataset and wherein the image selection logic provides a user interface and, in response to receiving an indication through the user interface, assign the set of images to at least one selected from a group consisting of a retraining dataset, a testing dataset, and a validation dataset. A doctor annotates medical images to indicate if a lesion/tumor is benign or malignant using a generic computer interface such as a touch screen and the images are sorted according to if they potentially indicate cancer or not; therefore, this process can all be done mentally.
Claim 12 recites wherein the training dataset includes an annotation dataset and wherein the image selection logic provides a user interface and, in response to receiving an indication through the user interface, exclude the set of images from at least one selected from a group consisting of a retraining dataset, a testing dataset, and a validation dataset. A doctor annotates medical images to indicate if a lesion/tumor is benign or malignant using a generic computer interface such as a touch screen and the images are sorted/selected/included/excluded according to if they potentially indicate cancer or not; therefore, this process can all be done mentally.
Claim 13 recites wherein the training dataset includes an annotation dataset and wherein image selection logic, in response to assigning the set of images to the annotation dataset, generates and transmits a link selectable by a user to access the set of images assigned to the annotation dataset within a user interface. A doctor annotates medical images to indicate if a lesion/tumor is benign or malignant in the image set and, using generic computer commands, creates a link to the annotated images within a computer with a user interface that, when clicked, goes to the images; using generic computer technology, this is all carried out by the doctor with their own human decision making; therefore, this process can all be done mentally.
Claim 14 recites wherein the training logic retrains the machine-learning model using the training dataset in response to a triggering event. A doctor observing medical images of potential tumors/malignant lesions saves the images for future reference when malignancy, as opposed to benign lesions, is observed, which is considered a “triggering event”; the human doctor carries out the functionality of the machine learning model (a generic computer), and “training” amounts to the doctor simply remembering the images having the malignant tumors; therefore, this process can all be done mentally.
Claim 15 recites wherein the triggering event includes at least one selected from a group consisting of a number of user-annotated images included in the training dataset, an increase in a size of the training dataset, an increase in a number of user-annotated images for a predetermined feature in the training dataset, an availability of one or more training resources, and a manual initiation. A doctor observing medical images of potential tumors/malignant lesions saves the images for future reference if malignancy is observed, as opposed to benign lesions; the triggering event is adding a new image with malignant tumors/lesions to the annotated dataset, which increases the size of the training dataset; therefore, this process can all be done mentally.
Claim 17 recites wherein the one or more identified features in the set of images includes one or more first identified features in a first set of images and further comprising receiving one or more second identified features in a second set of images acquired via the scientific instrument, the one or more second identified features generated using the machine-learning model; providing the first set of images and the one or more first identified features to a user interface; providing the second set of images and the one or more second identified features to the user interface; excluding the first set of images from the training dataset in response to receiving a first indication through the user interface; and including the second set of images in the training dataset in response to receiving a second indication through the user interface. A doctor annotates medical images to indicate if a lesion/tumor is benign or malignant using a generic computer interface such as a touch screen, and the images are sorted according to whether they potentially indicate cancer or not. Further, the machine learning model is a generic computer whose functionality of identifying features in the medical images the human doctor performs; therefore, this process can all be done mentally.
Claim 18 recites wherein the one or more selection criteria includes one or more first selection criteria and wherein the one or more identified features of the set of images includes one or more first identified features of a first set of images and further comprising receiving one or more second selection criteria; receiving one or more second identified features in a second set of images acquired via the scientific instrument, the one or more second identified features generated using the machine-learning model; determining whether the second set of images satisfies the one or more second selection criteria; and including the second set of images, including the one or more second identified features, in the training dataset in response to a determination that the second set of images satisfies the one or more second selection criteria. A doctor annotates medical images to indicate if a lesion/tumor is benign or malignant using a generic computer interface such as a touch screen, and the images are sorted according to whether they potentially indicate cancer or not (selection criteria); therefore, this process can all be done mentally.
Claim 19 recites wherein the one or more identified features in the set of images includes one or more first identified features in a first set of images and further comprising receiving one or more second identified features in a second set of images acquired via the scientific instrument, the one or more second identified features generated using the machine-learning model; providing the second set of images and the one or more second identified features to a user interface; receiving an annotation associated with the second set of images through the user interface; and including the second set of images, including the annotation, in the training dataset. A doctor annotates medical images to indicate if a lesion/tumor is benign or malignant in the image set and, using generic computer commands, creates a link to the annotated images within a computer with a user interface that, when clicked, goes to the images; using generic computer technology, this is all carried out by the doctor with their own human decision making. Further, the machine learning model is a generic computer whose functionality of identifying features in the medical images the human doctor performs; therefore, this process can all be done mentally.
Claim 21 recites wherein the one or more identified features includes a plurality of identified features and wherein the characteristic of the plurality of identified features includes a distance between two of the plurality of identified features. A doctor identifies the lesions in the medical images as malignant or benign, indicating whether the patient has cancer, and uses their own human vision, or a ruler, to determine how close together different lesions are; the distance between lesions on medical imaging, particularly when measured in the context of staging (e.g., the maximum distance between the furthest lesions in lymphoma), is a strong indicator of tumor dissemination and disease spread, which can help characterize the severity and spread of cancer; therefore, this process can all be done mentally.
Therefore, dependent claims 2-15 and 17-19 and 21 recite the same abstract idea of a mental process which can be performed in the mind with the aid of pen and paper, and are therefore also rejected under 35 U.S.C. 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 10, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No.: 2022/0260826 (Moore et al.) (hereinafter Moore), in view of U.S. Patent Application Publication No.: 2021/0049749 (Machek et al.) (hereinafter Machek).
Regarding claim 1, Moore teaches a scientific instrument support apparatus, comprising: (Moore, para. [0069], lines 1-4; FIG. 1: “The various analysis steps described above may be performed by any of the devices and systems described herein. For example, as shown in FIGS. 8-11, a system may include a self-contained computational microscope. In some embodiments, the system may optionally include an embedded user interface. In some embodiments the system may also optionally include a server 220, such as that shown in FIG. 1. Data acquired using one or more devices or systems described herein or analyses performed using one or more devices or systems herein may be transmitted to and/or stored in server 220. In some embodiments, such data and/or analyses may be used to form a database of sample types and their associated analyses, conditions (e.g., health state, disease type, etc.), demographics (e.g. age, sex, etc.), past medical history (e.g., prior surgeries, disease history, etc.) to, for example, train machine learning or deep learning models to detect certain types of conditions, sample types, etc. or as a learning resource for physicians, students, etc. Alternatively or additionally, server 220 may function to receive data and/or analyses from one or more systems described herein and output an indication (e.g., disease state, sample type, etc.) or update or sync with an electronic health record of a patient.”;
[Moore, FIG. 1 (media_image6.png)]);
feature identification logic to generate, using a machine-learning model, one or more identified features in an image of a set of images acquired via a scientific instrument (Moore, para. [0067]: “As illustrated in FIG. 1, once the sample is inserted into a device or system (see block S130), the sample may be analyzed (see block S140). In some embodiments, analysis may include one or more of autofocusing or defocusing on the sample S142, illuminating the sample and capturing one or more images of the sample S144, loading the one or more images S146, receiving one or more input parameters S148, iteratively reconstructing the one or more images into a high resolution image S150, post-processing S152 (e.g., image stitching), and assessing the sample using feature extraction and/or one or more machine and/or deep learning models S154. In some embodiments, a deep learning model may be used to apply a digital stain to high resolution quantitative phase images.”; see FIG. 1 above).
Moore fails to teach
image selection logic to determine whether the set of images satisfies one or more selection criteria and assign the set of images, including the one or more identified features, to a training dataset in response to a determination that the set of images satisfies the one or more selection criteria; wherein the image selection logic determines whether the set of images satisfies the one or more selection criteria by generating a metric for the one or more identified features, wherein the image selection logic determines that the set of images satisfies the one or more selection criteria in response to the metric satisfying a predetermined threshold; and training logic to retrain the machine-learning model using the training dataset.
Machek teaches
image selection logic to determine whether the set of images satisfies one or more selection criteria and assign the set of images, including the one or more identified features, to a training dataset in response to a determination that the set of images satisfies the one or more selection criteria; wherein the image selection logic determines whether the set of images satisfies the one or more selection criteria by generating a metric for the one or more identified features, wherein the image selection logic determines that the set of images satisfies the one or more selection criteria in response to the metric satisfying a predetermined threshold (Machek, para. [0090]-[0091]: “One or more embodiments include filtering the simulated specimen images based on qualifying criteria (Operation 208). Qualifying criteria for simulated specimen images are obtained from a data repository. Qualifying criteria are used for determining whether a simulated specimen image qualifies as training data for an ANN. As an example, a qualifying criteria may require that simulated defects in a simulated specimen image be associated with a level of visibility that is above a threshold value. A level of visibility of a simulated defect may be determined based on a contrast level between the simulated defect and the surrounding areas of the simulated specimen image. Each simulated specimen image is evaluated based on the qualifying criteria. If a simulated specimen image satisfies the qualifying criteria, then the simulated specimen image is used as training data. If a simulated specimen image does not satisfy the qualifying criteria, then the simulated specimen image is not used as training data.”; a threshold is used for qualifying criteria of simulated defects to include or exclude certain images from the training data set); and
training logic to retrain the machine-learning model using the training dataset (Machek, para. [0092]-[0093]; para. [0096]: “One or more embodiments include inputting the simulated specimen images as training data into a training algorithm to generate an ANN (Operation 210). The simulated specimen images that satisfy the qualifying criteria are input into a training algorithm. Based on the simulated specimen images, the training algorithm determines weights associated with connections between artificial neurons within an ANN. Additionally or alternatively, the training algorithm determines other attributes of the ANN, such as the connections between the artificial neurons, the number of layers of artificial neurons, and/or the functionality of the artificial neurons. One or more embodiments include applying the ANN to obtain machine-generated identification of defects within a set of captured specimen images (Operation 212). A set of captured specimen images are obtained. As an example, a set of captured specimen images may be obtained via user input. The captured specimen images may include real defects that a user would like to identify and/or analyze.”; “In an embodiment, new training data may be input into the training algorithm at any time to generate a modified ANN. Optionally, the captured specimen images, labeled with the actual identification of defects, may be fed back as training data into the training algorithm. Accordingly, the training algorithm may generate a modified ANN.”).
It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify the scientific instrument support apparatus, as taught by Moore, to include image selection logic to determine whether the set of images satisfies one or more selection criteria and assign the set of images, including the one or more identified features, to a training dataset in response to a determination that the set of images satisfies the one or more selection criteria, wherein the image selection logic determines whether the set of images satisfies the one or more selection criteria by generating a metric for the one or more identified features, wherein the image selection logic determines that the set of images satisfies the one or more selection criteria in response to the metric satisfying a predetermined threshold, and training logic to retrain the machine-learning model using the training dataset, as taught by Machek.
The suggestion/motivation for doing so would have been to allow for determining whether a simulated specimen image qualifies as training data for an artificial neural network (ANN) and to allow new training data to be input into the training algorithm (Machek, para. [0090]-[0091]; para. [0096]), and that “a larger set of training data typically improves the accuracy of the ANN” (Machek, para. [0011]).
Therefore, it would have been obvious to combine Moore, with Machek, to obtain the invention as specified in claim 1.
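For illustration of the mapped claim 1 limitation only, the image selection logic as read on Machek's qualifying criteria reduces to computing a metric for the identified features and comparing it to a predetermined threshold. The following minimal sketch is not part of the record; all names, values, and the contrast-based metric are hypothetical, loosely modeled on Machek's visibility/contrast example.

```python
# Hypothetical sketch: assign an image set to the training dataset only
# when a feature metric satisfies a predetermined threshold (cf. Machek,
# para. [0090]-[0091]). Names and values are illustrative only.

def feature_visibility(feature_contrast: float) -> float:
    """Metric generated for the identified features (here, contrast level)."""
    return feature_contrast

def select_for_training(image_sets, threshold: float):
    """Return only those image sets whose metric satisfies the threshold."""
    training_dataset = []
    for set_name, contrast in image_sets:
        if feature_visibility(contrast) >= threshold:
            training_dataset.append(set_name)
    return training_dataset

sets = [("set_a", 0.9), ("set_b", 0.2)]
print(select_for_training(sets, threshold=0.5))  # only "set_a" qualifies
```

A set failing the criteria would simply be excluded from the training dataset, mirroring Machek's include/exclude filtering.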
Regarding claim 10, Moore, in view of Machek, teaches the scientific instrument support apparatus of claim 1, wherein the one or more identified features include one or more first identified features of a first set of images and wherein the image selection logic excludes a second set of images, including one or more second identified features of the second set of images, from the training dataset (Machek, para. [0090]-[0091]; a threshold is used for qualifying criteria of simulated defects to include or exclude certain images from the training data set).
With regards to claim 16, it recites the functions of the apparatus of claim 1, as a process. Thus, the analysis in rejecting claim 1 is equally applicable to claim 16.
Regarding claim 20, Moore teaches one or more non-transitory computer-readable media having instructions thereon that, when executed by one or more processing devices of a support apparatus for the scientific instrument (Moore, para. [0163]: “as shown in FIG. 34, the processor 330 is coupled, via one or more buses, to the memory 340 in order for the processor 330 to read information from and write information to the memory 340. The processor 330 may additionally or alternatively contain memory 340. The memory 340 can include, for example, processor cache. The memory 340 may be any suitable computer-readable medium that stores computer-readable instructions for execution by computer-executable components. In various embodiments, the computer-readable instructions include application software 345 stored in a non-transitory format. The software, when executed by the processor 330, causes the processor 330 to perform one or more methods described elsewhere herein.”).
With regards to the remaining limitations of claim 20, they recite the functions of the apparatus of claim 1, as a non-transitory computer-readable medium having instructions. Thus, the analysis in rejecting claim 1 is equally applicable to claim 20.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Moore, in view of Machek, and in further view of U.S. Patent Application Publication No.: 2019/0287761 (Schoenmakers et al.) (hereinafter Schoenmakers).
Regarding claim 2, Moore, in view of Machek, teaches the scientific instrument support apparatus of claim 1.
Moore, in view of Machek, fails to teach
wherein at least one of the image selection logic and the training logic is implemented by a computing device remote from the scientific instrument.
Schoenmakers teaches
wherein at least one of the image selection logic and the training logic is implemented by a computing device remote from the scientific instrument (Schoenmakers, para. [0037]: “For training the network in the cloud, training data comprising microscopic images are uploaded into the cloud and a network is trained by the microscopic images. The uploading can take place by any means of data transfer such as by cables, wireless and/or both and can be done sequentially, in packages and/or in parallel.”).
It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify the image selection logic and training logic, as taught by Moore, in view of Machek, to be implemented by a computing device remote from the scientific instrument, as taught by Schoenmakers.
The suggestion/motivation for doing so would have been to save on computing power by distributing the computing over multiple systems in different locations.
Therefore, it would have been obvious to combine Moore and Machek, with Schoenmakers, to obtain the invention as specified in claim 2.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Moore, in view of Machek, and in further view of U.S. Patent Application Publication No.: 2020/0279362 (Miller et al.) (hereinafter Miller).
Regarding claim 3, Moore, in view of Machek, teaches the scientific instrument support apparatus of claim 1.
Moore, in view of Machek, fails to teach
wherein the one or more identified features include line indicated termination features.
Miller teaches
wherein the one or more identified features include line indicated termination feature (Miller, para. [0004]; para. [0048]; FIG. 4: “An example method at least includes obtaining an image of a surface of a sample, the sample including a plurality of features, analyzing the image to determine whether an end point has been reached, the end point based on a feature of interest out of the plurality of features observable in the image, and based on the end point not being reached, removing a layer of material from the surface of the sample.”; “FIG. 4 is an example image sequence 400 including associated MLS analysis of images in accordance with an embodiment disclosed herein. The image sequence 400 shows locations within a sample and associated class probability as determined by an at least partially trained artificial neural network. The class probability shows determinations of class probabilities for features in images being either a source, drain, or a gate. As indicated in FIG. 4, Prob1 is for a gate determination and Prob2 is for a S/D determination. While FIG. 4 does not include a desired end point, any of the locations and/or images may be a desired end point based on where a point of analysis, e.g., feature of interest, may be located. For example, if a point of analysis includes the feature in image 415, then the processing end point may be a few nanometers before the location of image 415. In some embodiments, the image sequence 400 may also be used as training data.”;
[Miller, FIG. 4 (media_image7.png)]).
It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify the one or more identified features, as taught by Moore, in view of Machek, to include a line indicated termination feature, as taught by Miller.
The suggestion/motivation for doing so would have been that “the desired end point may be a stopping place based on a feature present in an image” (Miller, para. [0026]).
Therefore, it would have been obvious to combine Moore and Machek, with Miller, to obtain the invention as specified in claim 3.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Moore, in view of Machek, in further view of U.S. Patent Application Publication No.: 2020/0161083 (Larson et al.) (hereinafter Larson), and in further view of U.S. Patent Application Publication No.: 2021/0010054 (Uchiho et al.) (hereinafter Uchiho).
Regarding claim 5, Moore, in view of Machek, teaches the scientific instrument support apparatus of claim 1.
Moore, in view of Machek, fails to teach
wherein the metric is based on at least one selected from a group consisting of a plot representing a number of features identified in each image in the set of images, a plot representing a feature area identified in each image in the set of images, and a plot representing feature distances for each image in the set of images.
Larson teaches
wherein the metric is based on at least one selected from a group consisting of a plot representing a number of features identified in each image in the set of images, a plot representing a feature area identified in each image in the set of images, and a plot representing feature distances for each image in the set of images (Larson, para. [0032]; para. [0045]; FIG. 4-5: “FIG. 2 includes a number of example illustrations 200 of features in TEM images. The images in FIG. 2 provide examples of the variations in both feature shape/size and image quality, as discussed above and can affect robust automated metrology. Illustrations 200 include images 220A through 220D, with each image 220 showing respective features 222 and 224. The features 222A-D and 224A-D may be features of interest and desired metrology information, such as width of feature 222 at various locations, the height of feature 222, and thickness of feature 224, is obtained through the techniques disclosed herein.”; “As can be seen in graph 401, as iterations of PEN 314 are performed, the pixel step size based on L2 norm becomes sub-pixel changes between the models and the image after 6 iterations. This implies that the differences between the models and the image are less than a pixel size and results in high precision metrology of the features based on the obtain model parameters Pn.”;
[Larson, FIGS. 4-5 (media_image8.png, media_image9.png)]).
It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify the metric, as taught by Moore, in view of Machek, to be based on at least one selected from a group consisting of a plot representing a number of features identified in each image in the set of images, a plot representing a feature area identified in each image in the set of images, and a plot representing feature distances for each image in the set of images, as taught by Larson.
The suggestion/motivation for doing so would have been to show the change in pixel step size, i.e., the change in error between the images and the model, per iteration of the parameter estimation network (PEN).
Moore, in view of Machek, and in view of Larson, fails to teach
wherein the metric is based on a slope of a plot.
Uchiho teaches
wherein the metric is based on a slope of a plot (Uchiho, para. [0075]; FIG. 6: “FIG. 6 is a diagram showing results of plotting the area of bacteria in the images and the mean of luminance values as examples of feature. As shown in FIG. 6, the area of bacteria increases up to 300 minutes, but decrease afterward. On the other hand, the mean of luminance values varies little for 6 hours from the initial culture stage. Accordingly, the maximum area of bacteria or the gradient at each measurement time is calculated as an example of feature based on a time variation in step 330. For example, when the determination time is 6 hours, the maximum value of the area of bacteria at 6 hours, the gradient obtained from the area of bacteria at 6 hours and the area of bacteria immediately before 6 hours, the difference in mean of luminance values between at 6 hours and at 0 hours, and the like are calculated.”;
[Uchiho, FIG. 6 (media_image10.png)]).
It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify the metric, as taught by Moore, in view of Machek, and in view of Larson, to include being based on a slope, as taught by Uchiho.
The suggestion/motivation for doing so would have been “detecting the growth of bacteria with high accuracy even when the growth of bacteria has occurred over the entire image, and an automatic binarization process causes an error in detection of bacteria due to an improper threshold setting” (Uchiho, para. [0011]).
Moore, in view of Machek, in view of Larson, and in view of Uchiho, teaches wherein the metric is based on a slope of at least one selected from a group consisting of a plot representing a number of features identified in each image in the set of images, a plot representing a feature area identified in each image in the set of images, and a plot representing feature distances for each image in the set of images (Larson, para. [0032]; para. [0045]; FIG. 4-5; Uchiho, para. [0075]; FIG. 6; the slope/gradient process for the area of bacteria taught in Uchiho is applied to the feature distance plot shown in Larson; calculating a slope is a known mathematical process that may be applied to any dataset, such as the feature distances shown in Larson).
Therefore, it would have been obvious to combine Moore and Machek, with Larson and Uchiho, to obtain the invention as specified in claim 5.
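For illustration of the combined teaching only, the slope-of-a-plot metric amounts to computing a gradient over per-image feature values, as in Uchiho's area-of-bacteria gradient applied to a Larson-style feature plot. The sketch below is not part of the record; the plot values and function names are hypothetical.

```python
# Hypothetical sketch: the metric as a slope (gradient) of a feature plot,
# e.g. feature area per image (cf. Uchiho, para. [0075]) or feature
# distances per image (cf. Larson, FIGS. 4-5). Values are illustrative.

def slope(points):
    """Gradient between the first and last points of a plot of
    (image index, feature value) pairs."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    return (y1 - y0) / (x1 - x0)

# feature-area plot: (image index, area of identified features)
area_plot = [(0, 10.0), (1, 14.0), (2, 22.0)]
print(slope(area_plot))  # gradient of 6.0 area units per image
```

The same slope computation applies unchanged whether the plotted quantity is a feature count, a feature area, or a feature distance.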
Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Moore, in view of Machek, in further view of U.S. Patent Application Publication No.: 2019/0287230 (Lu et al.) (hereinafter Lu).
Regarding claim 6, Moore, in view of Machek, teaches the scientific instrument support apparatus of claim 1, wherein the one or more selection criteria includes a predetermined reference for a characteristic of the one or more identified features and wherein the image selection logic determines whether the set of images satisfies the one or more selection criteria as compared to the predetermined reference (Machek, para. [0090]-[0091]; see rejection of claim 1 above; a threshold is used for qualifying criteria of simulated defects to include or exclude certain images from the training data set).
Moore, in view of Machek, fails to teach
identifying an anomaly of the one or more identified features as compared to the predetermined reference.
Lu teaches
identifying an anomaly of the one or more identified features as compared to the predetermined reference (Lu, para. [0045]-[0048]: “The model is applied at 103 using a processor to find one or more anomalies in image patches. The model can generate reconstruction errors and/or probabilities. The model can predict whether a patch is abnormal by examining the patch level reconstruction error and/or probabilities. The anomaly region can be identified by thresholding the pixel-level reconstruction error and/or probabilities. For example, reconstructed images can be generated from input SEM images by applying the model at 103. The autoencoder may perform best on repeated patterns like an array or dot. Other methods like a generative adversarial network (GAN) can be used to reconstruct more complex patterns. At 104, a presence of one or more anomalies in an image is determined using the model. Threshold reconstruction errors or probabilities can be used to find an anomaly patch or region in the image. For example, a difference between reconstructed and original SEM images may be calculated at 104 to locate the anomaly patterns (e.g., defects).”).
It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify the image selection logic, as taught by Moore, in view of Machek, to include identifying an anomaly of the one or more identified features as compared to the predetermined reference, as taught by Lu.
The suggestion/motivation for doing so would have been that “an operator only needs to select clean SEM images for the training data set, which can be easier than annotating defective images” (Lu, para. [0040]); further, “this avoid tedious and error-prone manual labeling of detects by operators; this can eliminate the need to search or paint defects, which reduces the time needed to provide data to train the model” (Lu, para. [0065]).
Therefore, it would have been obvious to combine Moore and Machek, with Lu, to obtain the invention as specified in claim 6.
Regarding claim 7, Moore, in view of Machek, and in view of Lu, teaches the scientific instrument support apparatus of claim 6.
Moore, in view of Machek, and in view of Lu, fails to teach
wherein the predetermined reference for the characteristic of the one or more identified features includes at least one selected from a group consisting of a predetermined reference size of the one or more identified features, a predetermined reference number of the one or more identified features, a predetermined reference position of the one or more identified features, a predetermined reference shape of the one or more identified features, and a predetermined reference distance between two of the one or more identified features.
Lu further teaches
wherein the predetermined reference for the characteristic of the one or more identified features includes at least one selected from a group consisting of a predetermined reference size of the one or more identified features, a predetermined reference number of the one or more identified features, a predetermined reference position of the one or more identified features, a predetermined reference shape of the one or more identified features, and a predetermined reference distance between two of the one or more identified features (Lu, para. [0062]: “In a first embodiment, outliers can be determined using distance in a feature space. Some machine learning feature vectors are extracted from the defect-free training images. When new images are passed in during a test job run, the same types of feature vectors can be extracted from these new images. How far a feature vector of one new image is from the feature vectors of all defect-free training images can be determined. If the distance exceeds a threshold, then the new image is considered an outlier. For example, a center of mass for the image dataset in the defect-free training data can be determined. The distance between the new image and this center of mass can be determined, which can be used to find outliers.”).
It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify the predetermined reference for the characteristic of the one or more identified features, as taught by Moore, in view of Machek, and in view of Lu, to include at least one selected from a group consisting of a predetermined reference size of the one or more identified features, a predetermined reference number of the one or more identified features, a predetermined reference position of the one or more identified features, a predetermined reference shape of the one or more identified features, and a predetermined reference distance between two of the one or more identified features, as further taught by Lu.
The suggestion/motivation for doing so would have been that “semi-supervised or unsupervised techniques can be used to improve performance with more complex patterns; outliers of these patterns can be identified; defects such as, for example, particles, missing voids, gray-scale changing, or thinner fins may be identified; other types of defects also can be identified” (Lu, para. [0060]).
Therefore, it would have been obvious to combine Moore and Machek, with the further teachings of Lu, to obtain the invention as specified in claim 7.
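For illustration of Lu's cited outlier test only: a new image is treated as an outlier when its feature vector lies farther than a threshold from the center of mass of the defect-free training vectors (cf. Lu, para. [0062]). The sketch below is not part of the record; all names, vectors, and the threshold are hypothetical.

```python
# Hypothetical sketch of Lu's distance-in-feature-space outlier test.
import math

def center_of_mass(vectors):
    """Component-wise mean of the defect-free training feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def is_outlier(new_vec, train_vectors, threshold: float) -> bool:
    """Flag a new image whose feature vector exceeds the reference distance."""
    center = center_of_mass(train_vectors)
    return math.dist(new_vec, center) > threshold

train = [[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]]  # center (1, 1)
print(is_outlier([5.0, 5.0], train, threshold=2.0))  # True (anomalous)
print(is_outlier([1.2, 0.8], train, threshold=2.0))  # False (in-distribution)
```

This maps the predetermined reference distance of the claim onto Lu's threshold on distance from the defect-free center of mass.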
Claims 8, 11-15, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Moore, in view of Machek, in view of Miller, in further view of U.S. Patent Application Publication No.: 2019/0171914 (Zlotnick et al.) (hereinafter Zlotnick).
Regarding claim 8, Moore, in view of Machek, teaches the scientific instrument support apparatus of claim 1, wherein the one or more selection criteria includes a characteristic of the one or more identified features and wherein the image selection logic determines whether the set of images satisfies the one or more selection criteria (Machek, para. [0090]-[0091]; see rejection of claim 1 above; a threshold is used for qualifying criteria of simulated defects to include or exclude certain images from the training data set).
Moore, in view of Machek, fails to teach
wherein the image selection logic determines whether the set of images satisfies the one or more selection criteria by identifying a pattern of the characteristic over multiple sets of images.
Zlotnick teaches
wherein the image selection logic determines whether the set of images satisfies the one or more selection criteria by identifying a pattern of the characteristic over multiple sets of images (Zlotnick, para. [0104]: “As described above, the montage 204 may include medical images that are grouped according to risk (e.g., risk score, risk class) and/or by prior classifications provided by other users and/or computer automated classifications … For example, the classifications can indicate whether the medical images represent cancer, e.g., highly likely, moderately likely, or not likely. Another example, classifications may be associated with shape of the features, such as round, oval, or non-uniform. Similar to the above description of control images, optionally control images that include features or objects of a known diagnosis may be included in a montage. As described above, a classification may relate to a change in size or character of a feature or object, such as a lesion. Control images may be included, such as pairs, triplets, and so on, that illustrate a same lesion changing, or not changing, in size or character over time. The control images illustrating a same lesion may be presented as being associated with a same lesion, for example the control images may include textual descriptions indicating they are related, may be highlighted a particular color, and so on. In this way, the reviewing user's performance related to classifying these control images can be monitored.”).
It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify the image selection logic, as taught by Moore, in view of Machek, to determine whether the set of images satisfies the one or more selection criteria by identifying a pattern of the characteristic over multiple sets of images, as taught by Zlotnick.
The suggestion/motivation for doing so would have been that “this focus on classifying the medical images according to a singular classification at a time can improve accuracy of the classification; for example, instead of a reviewing user analyzing individual medical images and assigning disparate classifications to the individual images, the reviewing user can quickly hone his/her focus on a single classification and mark appropriate medical images in a presented montage” (Zlotnick, para. [0056]).
Therefore, it would have been obvious to combine Moore and Machek, with Zlotnick, to obtain the invention as specified in claim 8.
Regarding claim 11, Moore, in view of Machek, teaches the scientific instrument support apparatus of claim 1, wherein the training dataset includes an annotation dataset and wherein the image selection logic provides a user interface (Moore, para. [0139]: “In some embodiments, user interface functionality may include systems that aid in point-of-care sample classification … The user may also, optionally, be prompted to select a machine learning, deep learning, or computer vision software package that will assist in the assessment or diagnosis of a sample. Next, the system in FIG. 33 may prompt the user to input information regarding the area of interest at block S1640. In one embodiment, this information may be provided by having the operator select the location of the nodule on the frontal plane followed by its location on the transverse plane resulting in a three-dimensional localization of the area of interest S1650. This is followed by the operator selecting the size, shape, and/or radiographic characteristics of the designated area of interest … The operator may have the ability to insert annotations that are layered on top of or embedded into the image. One embodiment of this annotation technique may have the operator outline the region being annotated. Another embodiment may have the operator highlight an area of interest. Another embodiment may have the operator select a predefined shape to overlay on an area of interest. Another embodiment may have the operator approve an area of interest visually identified by the system. All of these methods of visual annotation may be accompanied by a method of note taking that may or may not be stored along with the raw image data at block S1670. The operator may be presented with a system generated sample assessment, prognosis, criteria checklist, feature identification, or diagnosis. 
The system may upload the raw image data, one or more reconstructed fields of view, the reconstructed whole slide image, the annotation data, and/or the system generated results to a local or cloud infrastructure, for example server 220 in FIG. 1.”).
Moore, in view of Machek, fails to teach
in response to receiving an indication through the user interface, assign the set of images to at least one selected from a group consisting of a retraining dataset, a testing dataset, and a validation dataset.
Zlotnick teaches
in response to receiving an indication through the user interface, assign the set of images to at least one selected from a group consisting of a retraining dataset, a testing dataset, and a validation dataset (Zlotnick, para. [0068]; para. [0115]: “As will be described below, with respect to FIGS. 5-7, a subsequent review can be performed of classified medical images. For example, an initial reviewing user, or optionally a machine learning system trained on classified medical images, may assign classifications to medical images. A subsequent reviewing user can view two or more montages, with each montage being associated with a respective classification, and can cause images from a first montage to be included in a second montage. For example, the subsequent reviewing user can view a first montage with objects (e.g., lesions) classified as being round, and a second montage with objects classified as being oval. The subsequent reviewing user can then drag one or more medical images to a different montage, thus classifying the objects as being the other shape.”; “The reviewing user can review these medical images, and classify them according to diagnosis and/or other classifications. As an example, a montage may include medical images assigned a particular BIRADS score, and the reviewing user can indicate whether the medical images include objects that appears to be cancerous or benign … the reviewing user can indicate that the initial risk assigned to the medical image is incorrect. For example, the reviewing user can indicate that a different BIRADS score should have been determined for the medical image. A machine learning system that assigned the BIRADS score can receive this update, and training of the system can be performed, such that later automated classifications of similar images are more appropriately assessed and/or reassessment of already classified images may be performed.”).
It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify the image selection logic providing a user interface, as taught by Moore, in view of Machek, to include, in response to receiving an indication through the user interface, assigning the set of images to at least one selected from a group consisting of a retraining dataset, a testing dataset, and a validation dataset, as taught by Zlotnick.
The suggestion/motivation for doing so would have been that “there may be mistakes in such a classifying process, as the reviewing user is unable to directly view multiple medical images and concurrently provide classifications of multiple medical images; instead, the reviewing user is only able to view a single medical image and try to rely on a consistent classification being applied to each medical image; in this way, contextual information that may be evident between the medical images is lost, and for each freshly presented medical image, the reviewing user is less likely to maintain a consistent classification process, e.g., classifying an object with a particular border as round on one medical image and then later classifying an object with the same border as an oval on a later-viewed medical image; such inconsistencies in object classification can not only impact diagnosis of the patient's involved, but reduce accuracy of machine learning that develops object classification models based on the (inconsistent) user-provided classifications.” (Zlotnick, para. [0008]).
Therefore, it would have been obvious to combine Moore and Machek, with Zlotnick, to obtain the invention as specified in claim 11.
Regarding claim 12, Moore, in view of Machek, teaches the scientific instrument support apparatus of claim 1, wherein the training dataset includes an annotation dataset and wherein the image selection logic provides a user interface (Moore, para. [0139]: “In some embodiments, user interface functionality may include systems that aid in point-of-care sample classification … The user may also, optionally, be prompted to select a machine learning, deep learning, or computer vision software package that will assist in the assessment or diagnosis of a sample. Next, the system in FIG. 33 may prompt the user to input information regarding the area of interest at block S1640. In one embodiment, this information may be provided by having the operator select the location of the nodule on the frontal plane followed by its location on the transverse plane resulting in a three-dimensional localization of the area of interest S1650. This is followed by the operator selecting the size, shape, and/or radiographic characteristics of the designated area of interest … The operator may have the ability to insert annotations that are layered on top of or embedded into the image. One embodiment of this annotation technique may have the operator outline the region being annotated. Another embodiment may have the operator highlight an area of interest. Another embodiment may have the operator select a predefined shape to overlay on an area of interest.”).
Moore, in view of Machek, fails to teach the following limitation, which Zlotnick teaches:
in response to receiving an indication through the user interface, exclude the set of images from at least one selected from a group consisting of a retraining dataset, a testing dataset, and a validation dataset (Zlotnick, para. [0068]; para. [0115]: “As will be described below, with respect to FIGS. 5-7, a subsequent review can be performed of classified medical images. For example, an initial reviewing user, or optionally a machine learning system trained on classified medical images, may assign classifications to medical images. A subsequent reviewing user can view two or more montages, with each montage being associated with a respective classification, and can cause images from a first montage to be included in a second montage. For example, the subsequent reviewing user can view a first montage with objects (e.g., lesions) classified as being round, and a second montage with objects classified as being oval. The subsequent reviewing user can then drag one or more medical images to a different montage, thus classifying the objects as being the other shape.”; “The reviewing user can review these medical images, and classify them according to diagnosis and/or other classifications. As an example, a montage may include medical images assigned a particular BIRADS score, and the reviewing user can indicate whether the medical images include objects that appears to be cancerous or benign … the reviewing user can indicate that the initial risk assigned to the medical image is incorrect. For example, the reviewing user can indicate that a different BIRADS score should have been determined for the medical image. A machine learning system that assigned the BIRADS score can receive this update, and training of the system can be performed, such that later automated classifications of similar images are more appropriately assessed and/or reassessment of already classified images may be performed.”).
It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify the image selection logic providing a user interface, as taught by Moore, in view of Machek, to include, in response to receiving an indication through the user interface, excluding the set of images from at least one selected from a group consisting of a retraining dataset, a testing dataset, and a validation dataset, as taught by Zlotnick.
The suggestion/motivation for doing so would have been that “there may be mistakes in such a classifying process, as the reviewing user is unable to directly view multiple medical images and concurrently provide classifications of multiple medical images; instead, the reviewing user is only able to view a single medical image and try to rely on a consistent classification being applied to each medical image; in this way, contextual information that may be evident between the medical images is lost, and for each freshly presented medical image, the reviewing user is less likely to maintain a consistent classification process, e.g., classifying an object with a particular border as round on one medical image and then later classifying an object with the same border as an oval on a later-viewed medical image; such inconsistencies in object classification can not only impact diagnosis of the patient's involved, but reduce accuracy of machine learning that develops object classification models based on the (inconsistent) user-provided classifications.” (Zlotnick, para. [0008]).
Therefore, it would have been obvious to combine Moore and Machek, with Zlotnick, to obtain the invention as specified in claim 12.
Regarding claim 13, Moore, in view of Machek, teaches the scientific instrument support apparatus of claim 1, wherein the training dataset includes an annotation dataset (Moore, para. [0139]: “In some embodiments, user interface functionality may include systems that aid in point-of-care sample classification … The user may also, optionally, be prompted to select a machine learning, deep learning, or computer vision software package that will assist in the assessment or diagnosis of a sample. Next, the system in FIG. 33 may prompt the user to input information regarding the area of interest at block S1640. In one embodiment, this information may be provided by having the operator select the location of the nodule on the frontal plane followed by its location on the transverse plane resulting in a three-dimensional localization of the area of interest S1650. This is followed by the operator selecting the size, shape, and/or radiographic characteristics of the designated area of interest … The operator may have the ability to insert annotations that are layered on top of or embedded into the image. One embodiment of this annotation technique may have the operator outline the region being annotated. Another embodiment may have the operator highlight an area of interest. Another embodiment may have the operator select a predefined shape to overlay on an area of interest.”).
Moore, in view of Machek, fails to teach
wherein image selection logic, in response to assigning the set of images to the annotation dataset, generates and transmits a link selectable by a user to access the set of images assigned to the annotation dataset within a user interface.
Zlotnick teaches
in response to assigning the set of images to the annotation dataset, generates and transmits a link selectable by a user to access the set of images assigned to the annotation dataset within a user interface (Zlotnick, para. [0090]; FIG. 2A: “FIG. 2A illustrates an example user interface 202 for classifying medical images. The example user interface 202 can be an example of an interactive user interface generated, at least in part, by a system (e.g., a server system, the medical image classification system 100, and so on), and which is presented on (e.g., rendered by) a user device 200 (e.g., a laptop, a computer, a tablet, a wearable device). For example, the user interface 202 can be presented via a webpage being presented on the user device 200. As another example, the webpage may be associated with a web application (e.g., executing on the medical image classification system 100) that receives user input on the user device 200 and updates in response. Optionally, the user interface 202 can be generated via an application (e.g., an ‘app’ obtained from an electronic application store) executing on the user device 200, and the application can receive information for presentation in the user interface 202 from an outside system (e.g., the medical image classification system 100).”;
[Zlotnick FIG. 2A, reproduced as greyscale image media_image11.png]).
It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify the image selection logic, as taught by Moore, in view of Machek, to include generating and transmitting a link selectable by a user to access the set of images assigned to the annotation dataset within a user interface, in response to assigning the set of images to the annotation dataset, as taught by Zlotnick.
The suggestion/motivation for doing so would have been so “reports may be generated that can provide an analysis of the medical images classified by a reviewing user; as an example, the system may generate annotations for medical images classified by a reviewing user; that is, a medical report can be generated for a patient that indicates a classification of objects included in medical images related to the patient” (Zlotnick, para. [0069]).
Therefore, it would have been obvious to combine Moore and Machek, with Zlotnick, to obtain the invention as specified in claim 13.
Regarding claim 14, Moore, in view of Machek, teaches the scientific instrument support apparatus of claim 1.
Moore, in view of Machek, fails to teach
wherein the training logic retrains the machine-learning model using the training dataset in response to a triggering event.
Zlotnick teaches
wherein the training logic retrains the machine-learning model using the training dataset in response to a triggering event (Zlotnick, para. [0124]; para. [0115]: “A reviewing user utilizing user interface 600 can interact with the user interface 600 to indicate that a medical image is to be re-classified. For example, FIG. 6B illustrates the reviewing user dragging medical image 606 included in montage 604 to montage 602. As illustrated, the reviewing user may utilize a touch-sensitive display to interact with user interface 600. For example, the reviewing user can press on medical image 606 for greater than a threshold amount of time (e.g., 0.5 seconds, 1 seconds), or press on the display with greater than a threshold force or pressure, to indicate that the medical image 606 is to be dragged. As another example, the reviewing user can utilize a keyboard and/or mouse to manipulate medical image 606. Optionally, the reviewing user can verbally provide commands to re-classify medical image 606 (e.g., a conversational interface).”; “In some embodiments, the medical images of a montage may be sorted based on a risk or, such as placing an image with the highest BIRADS score at the upper left location of a montage and an image with the lowest BIRADS score at the lower right location of the montage. To help viewers make further classification determinations, the particular BIRADS score may be helpful—thus increasing classification accuracy. Additionally, the reviewing user can indicate that the initial risk assigned to the medical image is incorrect. For example, the reviewing user can indicate that a different BIRADS score should have been determined for the medical image. A machine learning system that assigned the BIRADS score can receive this update, and training of the system can be performed, such that later automated classifications of similar images are more appropriately assessed and/or reassessment of already classified images may be performed.”).
It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify the training logic, as taught by Moore, in view of Machek, to retrain the machine-learning model using the training dataset in response to a triggering event, as taught by Zlotnick.
The suggestion/motivation for doing so would have been that the “machine learning system that assigned the BIRADS score can receive this update, and training of the system can be performed, such that later automated classifications of similar images are more appropriately assessed and/or reassessment of already classified images may be performed” (Zlotnick, para. [0115]).
Therefore, it would have been obvious to combine Moore and Machek, with Zlotnick, to obtain the invention as specified in claim 14.
Regarding claim 15, Moore, in view of Machek, and in view of Zlotnick, teaches the scientific instrument support apparatus of claim 14, wherein the triggering event includes at least one selected from a group consisting of a number of user-annotated images included in the training dataset, an increase in a size of the training dataset, an increase in a number of user-annotated images for a predetermined feature in the training dataset, an availability of one or more training resources, and a manual initiation (Zlotnick, para. [0124]; see rejection of claim 14 above; increase in a number of user-annotated images for a predetermined feature in the training dataset; a number of user-annotated images included in the training dataset).
Regarding claim 17, Moore, in view of Machek, teaches the method of claim 16, wherein the one or more identified features in the set of images includes one or more first identified features in a first set of images; and excluding the first set of images from the training dataset in response to receiving a first indication through the user interface (Machek, para. [0090]-[0091]; see rejection of claim 1 above; a threshold is used for qualifying criteria of simulated defects (features) to include or exclude certain images from the training data set).
Moore, in view of Machek, fails to teach
receiving one or more second identified features in a second set of images acquired via the scientific instrument, the one or more second identified features generated using the machine-learning model; providing the first set of images and the one or more first identified features to a user interface; providing the second set of images and the one or more second identified features to the user interface; and including the second set of images in the training dataset in response to receiving a second indication through the user interface.
Zlotnick teaches
receiving one or more second identified features in a second set of images acquired via the scientific instrument, the one or more second identified features generated using the machine-learning model; providing the first set of images and the one or more first identified features to a user interface; providing the second set of images and the one or more second identified features to the user interface; and including the second set of images in the training dataset in response to receiving a second indication through the user interface (Zlotnick, para. [0068]: “As will be described below, with respect to FIGS. 5-7, a subsequent review can be performed of classified medical images. For example, an initial reviewing user, or optionally a machine learning system trained on classified medical images, may assign classifications to medical images. A subsequent reviewing user can view two or more montages, with each montage being associated with a respective classification, and can cause images from a first montage to be included in a second montage. For example, the subsequent reviewing user can view a first montage with objects (e.g., lesions) classified as being round, and a second montage with objects classified as being oval. The subsequent reviewing user can then drag one or more medical images to a different montage, thus classifying the objects as being the other shape. Since the two montages are presented in a same user interface, the reviewing user's effectiveness with respect to ensuring consistency of classification can be increased.”).
It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify the method, as taught by Moore, in view of Machek, to include the steps of receiving one or more second identified features in a second set of images acquired via the scientific instrument, the one or more second identified features generated using the machine-learning model, providing the first set of images and the one or more first identified features to a user interface, providing the second set of images and the one or more second identified features to the user interface, and including the second set of images in the training dataset in response to receiving a second indication through the user interface, as taught by Zlotnick.
The suggestion/motivation for doing so would have been that “subsequent reviewing user can then drag one or more medical images to a different montage, thus classifying the objects as being the other shape” (Zlotnick, para. [0068]); this allows user input in the classification process, which allows for more accurate training data upon retraining of the machine learning model.
Therefore, it would have been obvious to combine Moore and Machek, with Zlotnick, to obtain the invention as specified in claim 17.
Regarding claim 18, Moore, in view of Machek, teaches the method of claim 16, wherein the one or more selection criteria includes one or more first selection criteria, including the set of images in the training dataset in response to a determination that the set of images satisfies the one or more first selection criteria (Machek, para. [0090]-[0091]; see rejection of claim 1 above; a threshold is used for qualifying criteria of simulated defects (features) to include or exclude certain images from the training data set);
receiving one or more second selection criteria and satisfying the one or more second selection criteria (Machek, para. [0129]-[0130]: “Hence, the additional training data for the ANN includes simulated specimen images associated with characteristics for which the previous ANN performed poorly. Based on the additional training data, the ANN is further trained for the specific characteristics previously associated with poor performance. In other embodiments, Operations 214-222 are performed. However, both defective crystalline data models associated with the identified correlated characteristics (the correlated characteristics associated with defect identification rates matching the improvement-needed criteria) and defective crystalline data models not associated with the identified correlated characteristics are generated. But the system focuses on the identified correlated characteristics by generating a larger number of defective crystalline data models associated with the identified correlated characteristics than defective crystalline data models not associated with the identified correlated characteristics. Hence, training data for generating a modified ANN includes both simulated specimen images associated with lower defect identification rates and simulated specimen images associated with higher defect identification rates.”);
wherein the one or more identified features of the set of images includes one or more first identified features of a first set of images (Moore para. [0067]; para. [0069]; FIG. 1; see rejection of claim 1 above).
Moore, in view of Machek, fails to teach
receiving one or more second identified features in a second set of images acquired via the scientific instrument, the one or more second identified features generated using the machine-learning model; determining whether the second set of images satisfies the one or more second selection criteria; and including the second set of images, including the one or more second identified features, in the training dataset in response to a determination that the second set of images satisfies the one or more second selection criteria.
Zlotnick teaches
receiving one or more second identified features in a second set of images acquired via the scientific instrument, the one or more second identified features generated using the machine-learning model; determining whether the second set of images satisfies the one or more second selection criteria; and including the second set of images, including the one or more second identified features, in the training dataset in response to a determination that the second set of images satisfies the one or more second selection criteria (Zlotnick, para. [0068]: “As will be described below, with respect to FIGS. 5-7, a subsequent review can be performed of classified medical images. For example, an initial reviewing user, or optionally a machine learning system trained on classified medical images, may assign classifications to medical images. A subsequent reviewing user can view two or more montages, with each montage being associated with a respective classification, and can cause images from a first montage to be included in a second montage. For example, the subsequent reviewing user can view a first montage with objects (e.g., lesions) classified as being round, and a second montage with objects classified as being oval. The subsequent reviewing user can then drag one or more medical images to a different montage, thus classifying the objects as being the other shape. Since the two montages are presented in a same user interface, the reviewing user's effectiveness with respect to ensuring consistency of classification can be increased.”).
It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify the method, as taught by Moore, in view of Machek, to include the steps of receiving one or more second identified features in a second set of images acquired via the scientific instrument, the one or more second identified features generated using the machine-learning model, determining whether the second set of images satisfies the one or more second selection criteria, and including the second set of images, including the one or more second identified features, in the training dataset in response to a determination that the second set of images satisfies the one or more second selection criteria, as taught by Zlotnick.
The suggestion/motivation for doing so would have been that “subsequent reviewing user can then drag one or more medical images to a different montage, thus classifying the objects as being the other shape” (Zlotnick, para. [0068]); this allows user input in the classification process, which allows for more accurate training data upon retraining of the machine learning model.
Therefore, it would have been obvious to combine Moore and Machek, with Zlotnick, to obtain the invention as specified in claim 18.
Regarding claim 19, Moore, in view of Machek, teaches the method of claim 16, wherein the one or more identified features in the set of images includes one or more first identified features in a first set of images acquired via the scientific instrument, the first set of images being included in the training dataset (Moore para. [0067]; para. [0069]; FIG. 1; see rejection of claim 1 above; Machek, para. [0090]-[0091]; see rejection of claim 1 above; a threshold is used for qualifying criteria of simulated defects (features) to include or exclude certain images from the training data set).
Moore, in view of Machek, fails to teach
receiving one or more second identified features in a second set of images, the one or more second identified features generated using the machine-learning model; providing the second set of images and the one or more second identified features to a user interface; receiving an annotation associated with the second set of images through the user interface; and including the second set of images, including the annotation, in the training dataset.
Zlotnick teaches
receiving one or more second identified features in a second set of images, the one or more second identified features generated using the machine-learning model; providing the second set of images and the one or more second identified features to a user interface (Zlotnick, para. [0068]: “As will be described below, with respect to FIGS. 5-7, a subsequent review can be performed of classified medical images. For example, an initial reviewing user, or optionally a machine learning system trained on classified medical images, may assign classifications to medical images. A subsequent reviewing user can view two or more montages, with each montage being associated with a respective classification, and can cause images from a first montage to be included in a second montage. For example, the subsequent reviewing user can view a first montage with objects (e.g., lesions) classified as being round, and a second montage with objects classified as being oval. The subsequent reviewing user can then drag one or more medical images to a different montage, thus classifying the objects as being the other shape. Since the two montages are presented in a same user interface, the reviewing user's effectiveness with respect to ensuring consistency of classification can be increased.”;
[Zlotnick user-interface figures, reproduced as greyscale images media_image12.png through media_image16.png]);
receiving an annotation associated with the second set of images through the user interface; and including the second set of images, including the annotation, in the training dataset (Zlotnick, para. [0048]; para. [0071]: “Annotation: Any notes, measurements, links, assessments, graphics, and/or the like, associated with a data item, either automatically (e.g., by one or more CAP, described below) or manually (e.g., by a user). For example, when used in reference to a medical image, annotations include, without limitation, any added information that may be associated with the image, whether incorporated into an image file directly, comprising metadata associated with the image file, and/or stored in a separate location but linked to the image file in some way. Examples of annotations include measurements by using linear dimensions, area, density in Hounsfield units, optical density, standard uptake value (e.g., for positron emission tomography), volume, curved lines (such as the length of a curved vessel), stenosis (e.g., percent narrowing of a vessel at a certain location relative to a reference location), or other parameters. Additional examples of annotations include arrows to indicate specific locations or anatomy, circles, polygons, irregularly shaped areas, notes, and/or the like. Further examples of annotations include graphics that, for example, outline lesions, lumbar discs, and/or other anatomical features.”; “reviewing users may be able to classify medical images, but be unable to view patient information associated with the medical images. The system can optionally train a machine learning algorithm, or can provide classified medical images (e.g., anonymized classified medical images) to an outside system as training data.”).
It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify the method, as taught by Moore, in view of Machek, to include the steps of receiving one or more second identified features in a second set of images, the one or more second identified features generated using the machine-learning model, providing the second set of images and the one or more second identified features to a user interface, receiving an annotation associated with the second set of images through the user interface, and including the second set of images, including the annotation, in the training dataset, as taught by Zlotnick.
The suggestion/motivation for doing so would have been to allow user input to improve the accuracy of images added to a training dataset; this in turn improves the accuracy of the machine learning algorithm in classifying medical images.
Therefore, it would have been obvious to combine Moore and Machek, with Zlotnick, to obtain the invention as specified in claim 19.
Claims 9 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Moore, in view of Machek, in view of Zlotnick, and in further view of Lu.
Regarding claim 9, Moore, in view of Machek, and in view of Zlotnick, teaches the scientific instrument support apparatus of claim 8.
Moore, in view of Machek, and in view of Zlotnick, fails to teach
wherein the characteristic of the one or more identified features includes at least one selected from a group consisting of a size of the one or more identified features, a number of the one or more identified features, a position of the one or more identified features, a shape of the one or more identified features, and a distance between two of the one or more identified features.
Lu teaches
wherein the characteristic of the one or more identified features includes at least one selected from a group consisting of a size of the one or more identified features, a number of the one or more identified features, a position of the one or more identified features, and a shape of the one or more identified features (Lu, para. [0045]; para. [0048]: “The model is applied at 103 using a processor to find one or more anomalies in image patches. The model can generate reconstruction errors and/or probabilities. The model can predict whether a patch is abnormal by examining the patch level reconstruction error and/or probabilities. The anomaly region can be identified by thresholding the pixel-level reconstruction error and/or probabilities”; “At 104, a presence of one or more anomalies in an image is determined using the model. Threshold reconstruction errors or probabilities can be used to find an anomaly patch or region in the image. For example, a difference between reconstructed and original SEM images may be calculated at 104 to locate the anomaly patterns (e.g., defects).”; anomaly patterns in semiconductor substrate images meet the broadest reasonable interpretation of the claim term “shape of the one or more identified features”, since the defects are the identified features and the pattern of a defect determines whether the image is defect-free, which in turn determines whether the image is added to the training data used to train the ML model).
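The reconstruction-error thresholding that Lu describes (paras. [0045], [0048]) can be illustrated by a minimal sketch: a model reconstructs an image patch, and pixels whose reconstruction error exceeds a threshold are flagged as an anomaly region. The function name, array shapes, and threshold value below are hypothetical illustrations, not details taken from Lu.

```python
import numpy as np

def find_anomaly_region(original: np.ndarray,
                        reconstructed: np.ndarray,
                        pixel_threshold: float = 0.2):
    """Return a boolean anomaly mask and a patch-level abnormality flag."""
    error = np.abs(original - reconstructed)      # pixel-level reconstruction error
    anomaly_mask = error > pixel_threshold        # threshold to locate anomaly pixels
    patch_is_abnormal = bool(anomaly_mask.any())  # patch-level decision
    return anomaly_mask, patch_is_abnormal

# Example: a clean patch versus one containing a single simulated defect pixel
clean = np.zeros((4, 4))
defective = clean.copy()
defective[1, 2] = 1.0                             # simulated defect
mask, abnormal = find_anomaly_region(defective, clean)
```

In this sketch, a defect-free image yields an empty mask and is therefore suitable for inclusion in the defect-free training set, consistent with Lu's approach of selecting clean images for training.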
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the characteristic of the one or more identified features, as taught by Moore, in view of Machek, and in view of Zlotnick, to include at least one selected from a group consisting of a size of the one or more identified features, a number of the one or more identified features, a position of the one or more identified features, and a shape of the one or more identified features, as taught by Lu.
The suggestion/motivation for doing so would have been that “an operator only needs to select clean SEM images for the training data set, which can be easier than annotating defective images” (Lu, para. [0040]); further, “this avoids tedious and error-prone manual labeling of defects by operators; this can eliminate the need to search or paint defects, which reduces the time needed to provide data to train the model” (Lu, para. [0065]).
Therefore, it would have been obvious to combine Moore, Machek, and Zlotnick, with Lu, to obtain the invention as specified in claim 9.
Regarding claim 21, Moore, in view of Machek, and in view of Zlotnick, teaches the scientific instrument support apparatus of claim 8.
Moore, in view of Machek, and in view of Zlotnick, fails to teach
wherein the one or more identified features includes a plurality of identified features and wherein the characteristic of the plurality of identified features includes a distance between two of the plurality of identified features.
Lu teaches
wherein the one or more identified features includes a plurality of identified features and wherein the characteristic of the plurality of identified features includes a distance between two of the plurality of identified features (Lu, para. [0062]: “In a first embodiment, outliers can be determined using distance in a feature space. Some machine learning feature vectors are extracted from the defect-free training images. When new images are passed in during a test job run, the same types of feature vectors can be extracted from these new images. How far a feature vector of one new image is from the feature vectors of all defect-free training images can be determined. If the distance exceeds a threshold, then the new image is considered an outlier. For example, a center of mass for the image dataset in the defect-free training data can be determined. The distance between the new image and this center of mass can be determined, which can be used to find outliers.”).
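The feature-space distance test that Lu describes (para. [0062]) can be sketched as follows: compute the center of mass of the feature vectors extracted from defect-free training images, then flag a new image as an outlier when its feature vector lies farther from that center than a threshold. The function name, two-dimensional feature vectors, and threshold value are hypothetical illustrations, not details taken from Lu.

```python
import numpy as np

def is_outlier(new_feature: np.ndarray,
               training_features: np.ndarray,
               threshold: float) -> bool:
    """Flag a new image as an outlier if its feature vector is far from
    the center of mass of the defect-free training feature vectors."""
    center = training_features.mean(axis=0)        # center of mass of training set
    distance = np.linalg.norm(new_feature - center)
    return bool(distance > threshold)

# Example: feature vectors from defect-free training images cluster near the origin
train = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])
near = np.array([0.05, 0.05])                      # consistent with training set
far = np.array([3.0, 3.0])                         # well beyond the threshold
```

Here the distance between feature vectors serves the role Lu assigns it: separating images consistent with the defect-free training data from outliers.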
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the characteristic of the one or more identified features, as taught by Moore, in view of Machek, and in view of Zlotnick, to include a plurality of identified features and wherein the characteristic of the plurality of identified features includes a distance between two of the plurality of identified features, as taught by Lu.
The suggestion/motivation for doing so would have been that “an operator only needs to select clean SEM images for the training data set, which can be easier than annotating defective images” (Lu, para. [0040]); further, “this avoids tedious and error-prone manual labeling of defects by operators; this can eliminate the need to search or paint defects, which reduces the time needed to provide data to train the model” (Lu, para. [0065]).
Therefore, it would have been obvious to combine Moore, Machek, and Zlotnick, with Lu, to obtain the invention as specified in claim 21.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL ADAM SHARIFF whose telephone number is 571-272-9741. The examiner can normally be reached M-F 8:30-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz can be reached on 571-272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL ADAM SHARIFF/
Examiner, Art Unit 2672
/SUMATI LEFKOWITZ/Supervisory Patent Examiner, Art Unit 2672