Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Oath/Declaration
The receipt of the Oath/Declaration is acknowledged.
Drawings
The drawing(s) filed on March 22, 2024 are accepted by the Examiner.
Status of Claims
Claims 1-5, 8-21 and 23 are pending in this application.
Priority
Receipt is acknowledged of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on July 26, 2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101 (Abstract Idea)
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-5, 8-21 and 23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
In January 2019 (updated October 2019), the USPTO released new examination guidelines setting forth a two-step inquiry for determining whether a claim is directed to non-statutory subject matter. According to the guidelines, a claim is directed to non-statutory subject matter if:
STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or
STEP 2: The claim recites a judicial exception, e.g., an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Using the two-step inquiry, it is clear that the claims are directed toward non-statutory subject matter, as shown below:
STEP 1: Do the claims fall within one of the statutory categories? Yes.
Claims 1-5, 12-21 and 23 are directed towards a method, i.e., a process.
STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea? Yes, the claims are directed to an abstract idea.
With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:
1. Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations;
2. Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
3. Mental processes – concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).
The method in claims 1-5, 12-21 and 23 recites: steps for obtaining images and feature maps, training a machine learning model (artifact removal model), and adjusting model parameters based on training data. The core of the claims is a method for training a machine learning model using images, feature maps and reference images. These steps in general are mathematical concepts and mental processes that can be practicably performed in the human mind and, therefore, an abstract idea.
With regard to independent claims 1, 8 and 12: The claimed invention recites steps including:
Obtaining images and feature maps,
Training a machine learning model (artifact removal model),
Adjusting model parameters based on training data.
These limitations, under their broadest reasonable interpretation and viewed as a whole, recite obtaining images, training a machine learning model, and adjusting model parameters, which fall within the abstract idea groupings of mathematical concepts and mental processes.
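For illustration only, and not as a characterization of the applicant's disclosure or the cited art, the recited limitations have the shape of a generic supervised training loop. All names and the trivially linear "model" below are hypothetical stand-ins:

```python
def train_artifact_removal(images, feature_maps, references,
                           w=0.0, b=0.0, lr=0.01, epochs=200):
    """Generic sketch of the recited steps: obtain images and feature maps,
    run a (here trivially linear) model, and adjust its parameters so the
    output approaches the reference images."""
    for _ in range(epochs):
        for img, fmap, ref in zip(images, feature_maps, references):
            for x, f, y in zip(img, fmap, ref):
                pred = w * x + b * f   # model output for one pixel
                err = pred - y         # deviation from the reference image
                w -= lr * err * x      # adjust parameters based on
                b -= lr * err * f      # the training data
    return w, b
```

Each step here is either data gathering or an arithmetic update, which is consistent with the characterization of the limitations as mathematical concepts.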
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? No, the claim does not recite additional elements that integrate the judicial exception into a practical application.
With regard to STEP 2A (prong 2), whether the claim recites additional elements that integrate the judicial exception into a practical application, the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:
an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
an additional element that applies or uses a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition;
an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
an additional element effects a transformation or reduction of a particular article to a different state or thing; and
an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:
an additional element merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
an additional element adds insignificant extra-solution activity to the judicial exception; and
an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.
With regard to independent claims 1, 8 and 12: The claims do not recite additional elements that integrate the abstract idea into a practical application.
Although the method is used to train a machine learning model, the recited steps do not integrate the abstract idea into a practical application:
The claims do not recite a specific improvement to a computer or another technology; they recite generic data gathering and processing steps for model training.
There is no recitation of a specific technical solution, novel algorithm or improvement to the functioning of the computer itself.
The process could be performed on generic computer hardware and does not require a particular machine or transformation.
The claimed result is directed to an abstract idea (mathematical concepts/mental processes in the form of generic model training).
Thus, the independent claims 1, 8 and 12 fail Step 2A, Prong Two.
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No, the additional elements (e.g., “implemented on a computing device including at least one processor and at least one storage device”) are generic computer components.
The steps of obtaining images, feature maps, and reference images, and training the model are routine and conventional in the field of machine learning.
There is no recitation of a novel training algorithm, specific data structure, or improvement to the operation of the computer or model itself.
The independent claims 1, 8 and 12 do not recite additional elements that amount to significantly more than the judicial exception.
With regard to STEP 2B, whether the claims recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the pre-guideline procedure is still in effect. Specifically, examiners should continue to consider whether an additional element or combination of elements:
adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or
simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.
The following computer functions have been recognized as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality): receiving or transmitting data over a network. See MPEP 2106.05(d)(II).
The claim elements, individually and in combination, do not recite significantly more than the abstract idea itself.
The steps of obtaining images, feature maps, and reference images, and training the model are routine and conventional in the field of machine learning.
There is no recitation of a novel training algorithm, specific data structure, or improvement to the operation of the computer or model itself.
Accordingly, the claims do not recite an inventive concept sufficient to transform the abstract idea into patent-eligible subject matter.
CONCLUSION
Independent claims 1, 8 and 12 are directed to the abstract idea of training a machine learning model using routine steps, without reciting a specific technological improvement or particular application beyond generic computer implementation.
Thus, since independent claims 1, 8 and 12 (a) are directed toward an abstract idea, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception, independent claims 1, 8 and 12 are directed towards non-statutory subject matter.
Further, dependent claims 2-5, 9-11, 13-21 and 23 add routine details (e.g., use of correction images, vectorization, synchronous training, scoring, user feedback, sub-model selection, etc.). These are conventional machine learning techniques or high-level functional recitations. Thus, they further limit the abstract idea without integrating the abstract idea into a practical application or adding significantly more. Each of the claimed limitations either expands upon or adds 1) a new mental process, 2) a new additional element, 3) a previously presented mental process, and/or 4) a previously presented additional element. As such, claims 2-5, 9-11, 13-21 and 23 are similarly rejected as being directed towards non-statutory subject matter.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 2, 12, 13 and 15 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Yao et al. (US 11,869,171), hereinafter referred to as Yao.
Regarding claim 1, Yao teaches a method for training an initial artifact removal model (Yao, Abstract, Fig 1 and 2, Col 5, lines 1-62 {Teaches training a neural network for artifact removal on a computing device.}), which is implemented on a computing device (Yao, Fig 1, computing system 100) including at least one processor (Yao, Fig 1, processing subsystem 101, one or more processor(s) 102) and at least one storage device (Yao, Fig 1, system storage 114, system memory 104), comprising:
obtaining one or more first initial images (Yao, Col 3, lines 25-67 {Preparing training data; acquiring artifact-affected images, reference images and possibly artifact masks/feature maps for supervised learning.}, Fig 30A, Col 71, lines 57-58 {shows a medical image with a significant artifact prior to correction - this is the input, artifact affected image}) and one or more objective feature maps corresponding to the one or more first initial images (Yao, Col 3, lines 25-60 {Preparing training data; acquiring artifact-affected images, reference images and possibly artifact masks/feature maps for supervised learning.}, Fig 30B, Col 71, lines 58-63 {Depicts the artifact mask or feature map generated for the image in Fig 30A. This mask identifies the location and extent of the artifact with the image.});
obtaining one or more reference images corresponding to the one or more first initial images (Yao, Col 3, lines 25-60 {Preparing training data; acquiring artifact-affected images, reference images and possibly artifact masks/feature maps for supervised learning.});
generating a trained artifact removal model by training the initial artifact removal model using the one or more first initial images, the one or more objective feature maps, and the one or more reference images (Yao, Col 5, lines 10-55 {Training the neural network inputting artifact-affected images (and masks), using reference images as labels, updating model parameters to reduce error.}), including:
inputting the one or more first initial images and the one or more objective feature maps into the initial artifact removal model (Yao, Fig 2, Col 5, lines 31-55 {Both images and artifact feature maps/masks, and reference images.});
using the one or more first initial images as first training samples, and using the one or more reference images as first labels corresponding to the first training samples (Yao, Col 5, lines 10-55; Col 8, lines 10-35 {Images are training samples; reference images are ground truth/labels.}) and adjusting one or more parameters of the initial artifact removal model based on the one or more objective feature maps and the first labels (Yao, Col 5, lines 10-55; Col 8, lines 10-35 {Model parameters are updated based on loss between predicted and reference images, using artifact feature maps as auxiliary input.}) .
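The mapping above can be summarized structurally: each artifact-affected image and its feature map together form one model input, and the corresponding reference image is the label. A minimal, hypothetical sketch of assembling such training pairs (not Yao's or the applicant's implementation):

```python
def build_training_pairs(initial_images, feature_maps, reference_images):
    """Pair each (image, feature map) input with its reference-image label,
    mirroring the use of the initial images as training samples and the
    reference images as the corresponding first labels."""
    pairs = []
    for img, fmap, ref in zip(initial_images, feature_maps, reference_images):
        # The image and its feature map travel together as the model input;
        # the artifact-free reference image serves as the ground-truth label.
        pairs.append({"input": (img, fmap), "label": ref})
    return pairs
```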
Regarding claim 2, Yao teaches the method of claim 1, further comprising:
obtaining one or more preliminary correction images corresponding to the one or more first initial images (Yao, Fig 2, Col 3, lines 50 - Col 4, lines 30 {A preliminary correction image is generated using a conventional artifact reduction algorithm and is provided as an additional input channel to the neural network, alongside the original artifact-affected image and the artifact mask.}); and
generating the trained artifact removal model by training the initial artifact removal model using the one or more first initial images, the one or more preliminary correction images, the one or more objective feature maps, and the one or more reference images (Yao, Fig 2, Col 4, lines 1-30, Col 5, lines 10-55 {The neural network is trained using a set of input data including the original image, the preliminary correction image, the artifact mask and the reference image, so as to minimize the difference between the network output and the artifact-free references.}).
Regarding claim 12, Yao teaches a method for artifact removing (Yao, Col 3, lines 55-67, Col 4 lines 10-30 {Artifact reduction (removal) in medical images using deep learning, implemented on computing devices}), which is implemented on a computing device (Yao, Fig 1, computing system 100) including at least one processor (Yao, Fig 1, processing subsystem 101, one or more processor(s) 102) and at least one storage device (Yao, Fig 1, system storage, 114, system memory 104), comprising:
obtaining an initial image and an objective feature map corresponding to the initial image (Yao, Col 3, lines 25-67 {Preparing training data; acquiring artifact-affected images, reference images and possibly artifact masks/feature maps for supervised learning.}, Fig 30A, Col 71, lines 57-58 {shows a medical image with a significant artifact prior to correction - this is the input, artifact affected image}); and
obtaining a target image with no or reduced artifact by inputting the initial image and the objective feature map into a trained artifact removal model (Yao, Col 3, lines 10-30 {The neural network receives both the image and the feature map as input channels and outputs an artifact-reduced image.}, Fig. 2 {Shows the process of inputting both the image and feature map to the neural network for artifact reduction.}).
Regarding claim 13, Yao teaches the method of claim 12, further comprising:
obtaining a preliminary correction image corresponding to the initial image (Yao Fig 2, Col 4, lines 10-30, Col 6, lines 1-20 {The reference describes generating and using various intermediate or pre-processed images (such as artifact-corrected images, filtered images, or images from traditional correction methods) as additional inputs or channels to the neural network. See Col. 6, ll. 1–20: “…the network may receive, in addition to the original image and the feature map, a preliminary corrected image obtained by a conventional artifact correction technique…”}); and
obtaining the target image with no or reduced artifact by inputting the initial image, the preliminary correction image, and the objective feature map into the trained artifact removal model (Yao Fig 2, Col 6, lines 1-20 {The neural network can be configured to receive multiple input channels, including the initial image, a preliminary correction image, and a feature map, to output an artifact-reduced image. Fig. 2 illustrates the use of multiple auxiliary inputs.}).
Regarding claim 15, Yao teaches the method of claim 12, wherein the objective feature map includes objective information relating to one or more artifacts in the initial image (Yao, Col 2, lines 61-67 {The reference teaches that the feature map can be derived from objective information (such as metadata, acquisition parameters, or other non-image data) that may relate to the presence, location, or type of artifacts in the image. For example, acquisition parameters may indicate artifact-prone regions or artifact types.}).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3-4 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Yao et al. (US 11,869,171), hereinafter referred to as Yao, in view of Chen et al. (US 11,062,489 B2), hereinafter referred to as Chen.
Regarding claim 3, Yao teaches the method of claim 1 wherein the one or more objective feature maps are obtained by:
for each first initial image of the one or more first initial images, obtaining objective information corresponding to the first initial image (Yao, Col 3, lines 61-67 {Obtaining metadata or other objective information about each image, such as scan parameters, artifact type, or patient data, to generate feature maps/masks.}).
However, Yao does not explicitly teach transforming the objective information into one or more word vectors based on a feature mapping dictionary;
generating an objective feature map corresponding to the first initial image by combining the one or more word vectors.
Chen teaches transforming the objective information into one or more word vectors based
on a feature mapping dictionary (Chen Fig 2A-2C, Col 7, lines 30-46, Col 9, line 12 - Col 10, line 23 {Each meta data field is mapped to a corresponding embedding vector by looking up the value in the embedding table. Embedding table is a feature mapping dictionary and each field is mapped to a word (embedding) vector} Fig 3, Col 10, line 24 – 52 {Fig 3 illustrates an example processing for using both image data and meta data embeddings in an image classification system});
generating an objective feature map corresponding to the first initial image by combining the one or more word vectors (Chen Fig 2A-2C, Col 7, lines 40-45 {The embedding vectors for multiple metadata fields are concatenated to form a metadata feature vector.}, Fig 3, Col 10, line 24 – Col 14, line 61 {Fig 3 illustrates an example processing for using both image data and meta data embeddings in an image classification system}).
These arts are analogous since they are both related to imaging devices that perform de-noising. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Yao to transform the objective information into one or more word vectors based on a feature mapping dictionary and to generate an objective feature map corresponding to the first initial image by combining the one or more word vectors, as seen in Chen, in order to reconstruct images with greater efficiency, fewer artifacts, and greater information and flexibility, as discussed in Col 1, lines 24-29 and Col 3, lines 6-14 of Chen.
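As an illustration of the lookup-and-combine step taught by Chen, each metadata field value is mapped to an embedding ("word") vector through a feature mapping dictionary and the resulting vectors are concatenated. The field names and table contents below are hypothetical:

```python
def metadata_to_feature_vector(metadata, embedding_table):
    """Map each objective-information field to its word vector via a feature
    mapping dictionary (embedding table), then concatenate the vectors into
    a single metadata feature vector."""
    vector = []
    for field, value in metadata.items():
        # Dictionary lookup keyed by (field, value), as in an embedding table.
        vector.extend(embedding_table[(field, value)])
    return vector
```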
Regarding claim 4, Yao teaches the method of claim 1, wherein each objective feature map of the one or more objective feature maps is obtained using a trained objective feature map determination model, the trained objective feature map determination model including an objective information acquisition unit and an objective feature map generation unit, the each objective feature map being obtained by (Yao Fig 2, Col 3, lines 55-67, Col 4, lines 10-30 {The reference describes generating feature maps (artifact masks or other auxiliary maps) using image data and possibly metadata or other objective information. However, it does not describe a separately trained model (“objective feature map determination model”) with distinct “objective information acquisition” and “feature map generation” units.}) :
inputting a first initial image of the one or more first initial images corresponding to each objective feature map into the objective information acquisition unit to obtain at least a portion of objective information corresponding to the first initial image (Yao Fig 2, Col 3, lines 61-67 {The reference describes obtaining objective information (e.g., metadata, scan parameters) associated with the image, but does not detail a model unit that extracts such information directly from the image. Typically, the information is already available as associated data.});
However, Yao does not explicitly teach transforming the objective information into one or more word vectors based on a feature mapping dictionary;
generating each objective feature map by inputting the one or more word vectors corresponding to the objective information into the objective feature map generation unit.
Chen teaches transforming the objective information into one or more word vectors based
on a feature mapping dictionary (Chen Fig 2A-2C, Col 7, lines 30-46, Col 9, line 12 - Col 10, line 23 {Each meta data field is mapped to a corresponding embedding vector by looking up the value in the embedding table. Embedding table is a feature mapping dictionary and each field is mapped to a word (embedding) vector} Fig 3, Col 10, line 24 – 52 {Fig 3 illustrates an example processing for using both image data and meta data embeddings in an image classification system});
generating each objective feature map by inputting the one or more word vectors corresponding to the objective information into the objective feature map generation unit (Chen Fig 2A-2C, Col 7, lines 40-45 {The embedding vectors for multiple metadata fields are concatenated to form a metadata feature vector.}, Fig 3, Col 10, line 24 – Col 14, line 61 {Fig 3 illustrates an example processing for using both image data and meta data embeddings in an image classification system}).
These arts are analogous since they are both related to imaging devices that perform de-noising. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of Yao to transform the objective information into one or more word vectors based on a feature mapping dictionary and to generate each objective feature map by inputting the one or more word vectors into the objective feature map generation unit, as seen in Chen, in order to reconstruct images with greater efficiency, fewer artifacts, and greater information and flexibility, as discussed in Col 1, lines 24-29 and Col 3, lines 6-14 of Chen.
Claims 16 and 17 are rejected for the same reasons as claims 3 and 4.
Allowable Subject Matter
Claims 8-11 would be allowable but for the outstanding 35 USC § 101 rejection (abstract idea).
The following is a statement of reasons for the indication of allowable subject matter:
As per claim 8, the closest known prior art fails to teach or fairly suggest alone or in reasonable combination, the limitations (in consideration of the claim as a whole):
“Inputting the objective information into the initial objective feature map determination model;
using the objective information corresponding to the each second initial image as a second training sample, and using a score corresponding to the each second initial image as a
second label, and adjusting one or more parameters of the initial objective feature map determination model based on the second label to obtain a trained objective feature map determination model, wherein, the initial objective feature map determination model including a scoring layer, an input of the scoring layer is a predicted objective feature map generated by the initial objective feature map determination model based on a second training sample, and an output of the scoring layer is a predicted score corresponding to a second initial image that corresponds to the second training sample”.
Claims 9-11 depend from claim 8 and contain allowable subject matter themselves; they would therefore be allowable for the same reasons.
Claims 5, 14, 18-21 and 23 would be objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, provided the outstanding 35 USC § 101 rejection (abstract idea) were overcome.
As per claim 5, the closest known prior art fails to teach or fairly suggest alone or in reasonable combination, the limitations (in consideration of the claim as a whole):
“the method of claim 1, wherein the training the initial artifact removal model includes:
obtaining an initial objective feature map determination model;
training the initial objective feature map determination model and the initial artifact removal model synchronously, wherein
one or more word vectors corresponding to objective information of each first initial image of the one or more first initial images are input into the initial objective feature map determination model, and
the initial objective feature map determination model outputs an objective feature map corresponding to the each first initial image”.
As per claim 14, the closest known prior art fails to teach or fairly suggest alone or in reasonable combination, the limitations (in consideration of the claim as a whole):
“wherein the objective feature map is used as a hyper-parameter of the trained artifact removal model, and configured to facilitate the trained artifact removal model to remove one or more artifacts corresponding to objective information represented by the objective feature map”.
As per claim 18, the closest known prior art fails to teach or fairly suggest alone or in reasonable combination, the limitations (in consideration of the claim as a whole):
“wherein the objective feature map includes information of window width and window level, and the obtaining a target image with no or reduced artifact includes:
adjusting, based on the information of window width and window level included in the objective feature map, window widths and window levels of the initial image, the preliminary correction image, and the target image using the trained artifact removal model”.
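For context on the terminology in claim 18: "window level" and "window width" define the center and span of the intensity range displayed in a medical image. A generic windowing operation (not the applicant's or Yao's implementation) can be sketched as:

```python
def apply_window(pixels, level, width):
    """Clamp intensities to [level - width/2, level + width/2] and rescale
    to [0, 1]: the standard window width/window level display mapping."""
    lo = level - width / 2.0
    hi = level + width / 2.0
    return [min(max((p - lo) / (hi - lo), 0.0), 1.0) for p in pixels]
```

For example, a level of 50 with a width of 100 maps intensities 0-100 onto the full display range, clamping anything outside that window.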
As per claim 19, the closest known prior art fails to teach or fairly suggest alone or in reasonable combination, the limitations (in consideration of the claim as a whole):
“wherein the trained artifact removal model includes two or more artifact removal sub-models, and the obtaining a target image with no or reduced artifact includes:
determining a target sub-model among the two or more artifact removal sub-models
based on the objective feature map;
obtaining the target image with no or reduced artifact by inputting the initial image,
the preliminary correction image, and the objective feature map into the target sub-model”.
As per claim 20, the closest known prior art fails to teach or fairly suggest alone or in reasonable combination, the limitations (in consideration of the claim as a whole):
“wherein the objective feature map includes information relating to a degree of artifact removal”.
As per claim 21, the closest known prior art fails to teach or fairly suggest alone or in reasonable combination, the limitations (in consideration of the claim as a whole):
“further comprising:
determining a score of the target image;
determining whether to further process the target image based on the score;
in response to a determination that the target image is to be further processed,
updating the objective feature map based on the score to obtain an updated
objective feature map;
obtaining an updated target image by inputting the target image, the preliminary correction image, and the updated objective feature map into the trained artifact removal model”.
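The feedback loop recited in claim 21 (score the output, decide whether to process further, update the feature map, and re-run the model) can be sketched generically; `model`, `score_fn`, and `update_fn` are hypothetical stand-ins, not elements disclosed by the cited art:

```python
def iterative_refinement(image, correction, feature_map,
                         model, score_fn, update_fn,
                         threshold=0.9, max_rounds=5):
    """Score the model output; while the score indicates further processing
    is needed, update the feature map based on the score and re-run the
    model on the current target image."""
    target = model(image, correction, feature_map)
    for _ in range(max_rounds):
        score = score_fn(target)
        if score >= threshold:
            break  # no further processing needed
        feature_map = update_fn(feature_map, score)
        target = model(target, correction, feature_map)
    return target
```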
As per claim 23, the closest known prior art fails to teach or fairly suggest alone or in reasonable combination, the limitations (in consideration of the claim as a whole):
“further comprising:
obtaining an instruction through a user interface, the instruction indicating a score of
the target image or information relating to adjustment of a degree of artifact removal;
updating the objective feature map based on the instruction to obtain an updated objective feature map;
obtaining an updated target image by inputting the target image, the preliminary correction image, and the updated objective feature map into the trained artifact removal model”.
Conclusion
Any inquiry concerning this communication or earlier communications from the Supervisory Patent Examiner should be directed to TWYLER L HASKINS whose telephone number is (571)272-7406. The Supervisory Patent Examiner can normally be reached Mon-Thur: 7:30 am-4:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the Supervisory Patent Examiner’s director, Greg Toatley can be reached at (571) 272-4650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TWYLER L HASKINS/ Supervisory Patent Examiner, Art Unit 2639