DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of claims: claims 1-30 are examined below.
Response to Arguments
Applicant's arguments filed 12/11/2025 have been fully considered but they are not persuasive.
Applicant remark – (pages 8-10) Applicant argued that the cited art lacks teaching of the new amendments to the independent claims, which center on the determination and adjustment of weights of a neural network for identifying the boundary of an object of interest. The same arguments were made for the dependent claims. Please see the Remarks for further detail.
Examiner response – Examiner respectfully disagrees. An updated search found that Yip et al (US 2021/0166381), in the same field of predicting the boundary of tumor cells within a slide image, teaches the concept in paragraphs 0254, 0256, and 0435, and teaches the determination and adjustment of weights in the neural network in figure 3 and paragraph 0336. Upon review of the interview dated 8/11/2025, the discussion centered on training a neural network based on two types of training data: 1) the portion of the outline generated by the user, and 2) the percentage of the organ of interest, as indicated by the user, relative to the whole image; Examiner does not see this reflected in the new claim amendments. For compact prosecution, the Examiner found prior art Yip et al (US 2021/0166381) that addresses both the new claim amendments and the interview discussion. Please see the Office Action below for further detail. The same response applies to the dependent claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-30 are rejected under 35 U.S.C. 103 as being unpatentable over KAUFMAN et al (US 2020/0226748) in view of Yip et al (US 2021/0166381).
Claim 1:
KAUFMAN et al (US 2020/0226748) teach the following subject matter:
One or more processors (figure 21 and 0137 teaches one or more microprocessors with memory), comprising: circuitry to identify boundaries of one or more objects within an image based, at least
in part, on one or more indications of one or more locations of the one or more objects in the image (0010-0011 teaches a neural network generating an outline (boundary) for foreground and background to visualize a lesion with location and shape; figure 19 and 0132 teach an outline manually generated by an expert (user generated) of an object (pancreas, liver, spleen)) and
at least in part, on one or more indications of a proportion of the image that comprises the one or more objects (figure 1 and 0046-0047 teach a scan to identify various regions of tissue types and organs or lesions, where one of ordinary skill in the art understands the “various regions” to be parts or proportions of the one or more objects of interest; paragraph 0110 teaches utilizing probabilistic atlases and a statistical shape model of a registered volumetric image in order to recognize variation in the size, shape, and location of organs (objects), where one of ordinary skill in the art understands the probabilistic atlases and statistical shape model to capture portions of the object of interest to be identified due to overlapping/layering of organs during scanning);
KAUFMAN et al teaches all the subject matter above with the use of a neural network, but does not teach the following:
determine one or more weights of the one or more neural networks; adjust the one or more weights of the one or more neural networks; and cause the one or more neural networks comprising the adjusted weights to generate output identifying the boundaries of the one or more objects.
Yip et al (US 2021/0166381) teaches the following subject matter regarding the boundary of a region of interest relative to a whole slide image (abstract):
determine one or more weights of the one or more neural networks; adjust the one or more weights of the one or more neural networks; and cause the one or more neural networks comprising the adjusted weights to generate output identifying the boundaries of the one or more objects (figure 3 and 0336 teach training of the deep learning network in which the weights in the layers (determining the weights within the neural network) are further adjusted to accurately label the boundaries for classification, with the help of a human analyst (around the region of interest) relative to the image (whole image)).
KAUFMAN et al and Yip et al are both in the field of image analysis, especially the use of neural networks with human input to address a region of interest in conjunction with the whole image, such that the combined outcome is predictable.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify KAUFMAN et al with Yip et al such that the training set of matrices provides accurate labeling for the tissue classifier, as disclosed by Yip et al in 0336. Paragraph 0254 details the metric relating the proportion of tissue, by a user-selected threshold, to the digital image; 0256 details a technician dissecting a slide (whole slide image) to isolate tumor from non-tumor tissue; and 0435 details the percentage of tumor content, outlined by a pathologist through the GUI (graphical user interface), relative to the whole slide content. All of this is used for training the deep learning network (neural network), which determines the layer weights and the adjustment of those weights.
Claim 2:
KAUFMAN et al further teaches:
The one or more processors of claim 1, wherein the one or more indications of one or more locations includes a user-generated outline that is a polygon with points located proximate the boundaries of the one or more objects (0010-0011 teaches a neural network generating an outline (boundary) for foreground and background to visualize a lesion with location and shape, and further teaches outlines based on characteristics of the foreground with first and second classifiers; 0047-0049 teach the outline refined (proximate) by lesions; figure 1, part 110, shows a polygon around the object; the above teaches user generated).
Claim 3:
KAUFMAN et al further teaches:
The one or more processors of claim 1, wherein the circuitry is further to: identify the boundaries of the one or more objects within the image further based on information about a size of the one or more objects provided by a user. (0074 teaches consideration of the average size of lesions; 0077 teaches a CNN classifier for the size of a lesion; 0106 teaches size; 0109-0110 teach segmentation of structures by size, position, shape, and location; 0134).
Claim 4:
KAUFMAN et al further teaches:
The one or more processors of claim 3, wherein the information about the size is an estimated percentage of the image occupied by the one or more objects. (0070 teaches the general shape for an accurate diagnosis defined with a percentage of lesion pixels).
Claim 5:
KAUFMAN et al further teaches:
The one or more processors of claim 1, wherein the boundaries are identified using one or more neural networks trained using semi-supervised and self-supervised representation learning, in a first stage, with probabilistic weak supervision in a second stage. (0126 teaches a two-stage convnet-based approach, where 0009 details the first-stage classifiers (neural network) including a Random Forest classifier (supervised) and the second-stage classifier being a convolutional neural network classifier; 0013 details multi-label segmentation using a convolutional neural network, where multi-label is supervised learning).
Claim 6:
KAUFMAN et al further teaches:
The one or more processors of claim 1, wherein the one or more objects is a tumor and the image is a histopathologic image. (0119 teaches tumor; 0044 teaches lesions for histopathology).
Claim 7:
KAUFMAN et al (US 2020/0226748) teach the following subject matter:
A system (figure 1 and 0046 teaches system) comprising: one or more processors (figure 21 and 0137 teaches one or more microprocessors with memory) to identify [[the]] boundaries of the object one or more objects within [[the]] an image based, at least in part, on one or more indications of one or more locations of the one or more objects in the image (0010-0011 teaches a neural network generating an outline (boundary) for foreground and background to visualize a lesion with location and shape; figure 19 and 0132 teach an outline manually generated by an expert (user generated) of an object (pancreas, liver, spleen)) and information about a size of the object provided by a user one or more indications of a proportion of the image that comprises the one or more objects (figure 1 and 0046-0047 teach a scan to identify various regions of tissue types and organs or lesions, where one of ordinary skill in the art understands the “various regions” to be parts or proportions of the one or more objects of interest; paragraph 0110 teaches utilizing probabilistic atlases and a statistical shape model of a registered volumetric image in order to recognize variation in the size, shape, and location of organs (objects), where one of ordinary skill in the art understands the probabilistic atlases and statistical shape model to capture portions of the object of interest to be identified due to overlapping/layering of organs during scanning).
KAUFMAN et al teaches all the subject matter above with the use of a neural network, but does not teach the following:
determine one or more weights of the one or more neural networks; adjust the one or more weights of the one or more neural networks; and cause the one or more neural networks comprising the adjusted weights to generate output identifying the boundaries of the one or more objects.
Yip et al (US 2021/0166381) teaches the following subject matter regarding the boundary of a region of interest relative to a whole slide image (abstract):
determine one or more weights of the one or more neural networks; adjust the one or more weights of the one or more neural networks; and cause the one or more neural networks comprising the adjusted weights to generate output identifying the boundaries of the one or more objects (figure 3 and 0336 teach training of the deep learning network in which the weights of the layers (determining the weights within the neural network) are further adjusted to accurately label the boundaries for classification, with the help of a human analyst (around the region of interest) relative to the image (whole image)).
KAUFMAN et al and Yip et al are both in the field of image analysis, especially the use of neural networks with human input to address a region of interest in conjunction with the whole image, such that the combined outcome is predictable.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify KAUFMAN et al with Yip et al such that the training set of matrices provides accurate labeling for the tissue classifier, as disclosed by Yip et al in 0336. Paragraph 0254 details the metric relating the proportion of tissue, by a user-selected threshold, to the digital image; 0256 details a technician dissecting a slide (whole slide image) to isolate tumor from non-tumor tissue; and 0435 details the percentage of tumor content, outlined by a pathologist through the GUI (graphical user interface), relative to the whole slide content. All of this is used for training the deep learning network (neural network), which determines the layer weights and the adjustment of those weights.
Claim 8:
KAUFMAN et al further teaches:
The system of claim 7, wherein the one or more processors identify the boundaries of the one or more objects within the image based, at least in part, on information about a size of the one or more objects, wherein the information is an estimated percentage of the image occupied by the one or more objects. (0010-0011 teaches a neural network generating an outline (boundary) for foreground and background to visualize a lesion with location and shape, and further teaches outlines based on characteristics of the foreground with first and second classifiers; 0047-0049 teach the outline refined (proximate) by lesions; figure 1, part 110, shows a polygon around the object; the above teaches user generated).
Claim 9:
KAUFMAN et al further teaches:
The system of claim 7, wherein the one or more processors are further to identify boundaries of [[an]] one or more objects within an image based, at least in part, on a user-generated outline of only a portion of the one or more objects. (0074 teaches consideration of the average size of lesions; 0077 teaches a CNN classifier for the size of a lesion; 0106 teaches size; 0109-0110 teach segmentation of structures by size, position, shape, and location; 0134).
Claim 10:
KAUFMAN et al further teaches:
The system of claim 9, wherein the user-generated outline is a polygon with points located proximate the boundaries of the one or more objects. (0070 teaches the general shape for an accurate diagnosis defined with a percentage of lesion pixels).
Claim 11:
KAUFMAN et al further teaches:
The system of claim 7, wherein the boundaries are identified using one or more neural networks trained using semi-supervised and self-supervised representation learning, in a first stage, with probabilistic weak supervision in a second stage (0126 teaches a two-stage convnet-based approach, where 0009 details the first-stage classifiers (neural network) including a Random Forest classifier (supervised) and the second-stage classifier being a convolutional neural network classifier; 0013 details multi-label segmentation using a convolutional neural network, where multi-label is supervised learning).
Claim 12:
KAUFMAN et al further teaches:
The system of claim 7, wherein the one or more objects is a tumor and the image is a histopathologic image. (0119 teaches tumor; 0044 teaches lesions for histopathology).
Claim 13:
KAUFMAN et al (US 2020/0226748) teach the following subject matter:
A method (abstract teaches method) comprising: identifying boundaries of (0010-0011 teaches a neural network generating an outline (boundary) for foreground and background to visualize a lesion with location and shape; figure 19 and 0132 teach an outline manually generated by an expert (user generated) of an object (pancreas, liver, spleen)) and (figure 1 and 0046-0047 teach a scan to identify various regions of tissue types and organs or lesions, where one of ordinary skill in the art understands the “various regions” to be parts or proportions of the one or more objects of interest; paragraph 0110 teaches utilizing probabilistic atlases and a statistical shape model of a registered volumetric image in order to recognize variation in the size, shape, and location of organs (objects), where one of ordinary skill in the art understands the probabilistic atlases and statistical shape model to capture portions of the object of interest to be identified due to overlapping/layering of organs during scanning).
KAUFMAN et al teaches all the subject matter above with the use of a neural network, but does not teach the following:
determine one or more weights of the one or more neural networks; adjust the one or more weights of the one or more neural networks; and cause the one or more neural networks comprising the adjusted weights to generate output identifying the boundaries of the one or more objects.
Yip et al (US 2021/0166381) teaches the following subject matter regarding the boundary of a region of interest relative to a whole slide image (abstract):
determine one or more weights of the one or more neural networks; adjust the one or more weights of the one or more neural networks; and cause the one or more neural networks comprising the adjusted weights to generate output identifying the boundaries of the one or more objects (figure 3 and 0336 teach training of the deep learning network in which the weights of the layers (determining the weights within the neural network) are further adjusted to accurately label the boundaries for classification, with the help of a human analyst (around the region of interest) relative to the image (whole image)).
KAUFMAN et al and Yip et al are both in the field of image analysis, especially the use of neural networks with human input to address a region of interest in conjunction with the whole image, such that the combined outcome is predictable.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify KAUFMAN et al with Yip et al such that the training set of matrices provides accurate labeling for the tissue classifier, as disclosed by Yip et al in 0336. Paragraph 0254 details the metric relating the proportion of tissue, by a user-selected threshold, to the digital image; 0256 details a technician dissecting a slide (whole slide image) to isolate tumor from non-tumor tissue; and 0435 details the percentage of tumor content, outlined by a pathologist through the GUI (graphical user interface), relative to the whole slide content. All of this is used for training the deep learning network (neural network), which determines the layer weights and the adjustment of those weights.
Claim 14:
KAUFMAN et al further teaches:
The method of claim 13, wherein the one or more indications of one or more locations is a user-generated outline with points located proximate the boundaries of the one or more objects. (0010-0011 teaches a neural network generating an outline (boundary) for foreground and background to visualize a lesion with location and shape, and further teaches outlines based on characteristics of the foreground with first and second classifiers; 0047-0049 teach the outline refined (proximate) by lesions; figure 1, part 110, shows a polygon around the object; the above teaches user generated).
Claim 15:
KAUFMAN et al further teaches:
The method of claim 13, further comprising: identifying the boundaries of the one or more objects within the image further based on information about a size of the one or more objects provided by a user. (0074 teaches consideration of the average size of lesions; 0077 teaches a CNN classifier for the size of a lesion; 0106 teaches size; 0109-0110 teach segmentation of structures by size, position, shape, and location; 0134).
Claim 16:
KAUFMAN et al further teaches:
The method of claim 15, wherein the information about the size is an estimated percentage of the image occupied by the one or more objects. (0070 teaches the general shape for an accurate diagnosis defined with a percentage of lesion pixels).
Claim 17:
KAUFMAN et al further teaches:
The method of claim 13, wherein the boundaries are identified using one or more neural networks trained using semi-supervised and self-supervised representation learning, in a first stage, with probabilistic weak supervision in a second stage. (0126 teaches a two-stage convnet-based approach, where 0009 details the first-stage classifiers (neural network) including a Random Forest classifier (supervised) and the second-stage classifier being a convolutional neural network classifier; 0013 details multi-label segmentation using a convolutional neural network, where multi-label is supervised learning).
Claim 18:
KAUFMAN et al further teaches:
The method of claim 13, wherein the one or more objects is a tumor and the image is a histopathologic image. (0119 teaches tumor; 0044 teaches lesions for histopathology).
Claim 19:
KAUFMAN et al (US 2020/0226748) teach the following subject matter:
A machine-readable medium (abstract teaches computer-accessible medium; figure 21 and 0138 teach hard disk, CD-ROM, RAM, ROM) having stored thereon a set of instructions, which if performed by one or more processors, cause the one or more processors to at least: identify [[the]] boundaries of one or more objects within [[the]] an image based, at least in part, on one or more indications of one or more locations of the one or more objects in the image (0010-0011 teaches a neural network generating an outline (boundary) for foreground and background to visualize a lesion with location and shape; figure 19 and 0132 teach an outline manually generated by an expert (user generated) of an object (pancreas, liver, spleen)) and one or more indications of a proportion of the image that comprises the one or more objects (figure 1 and 0046-0047 teach a scan to identify various regions of tissue types and organs or lesions, where one of ordinary skill in the art understands the “various regions” to be parts or proportions of the one or more objects of interest; paragraph 0110 teaches utilizing probabilistic atlases and a statistical shape model of a registered volumetric image in order to recognize variation in the size, shape, and location of organs (objects), where one of ordinary skill in the art understands the probabilistic atlases and statistical shape model to capture portions of the object of interest to be identified due to overlapping/layering of organs during scanning).
KAUFMAN et al teaches all the subject matter above with the use of a neural network, but does not teach the following:
determine one or more weights of the one or more neural networks; adjust the one or more weights of the one or more neural networks; and cause the one or more neural networks comprising the adjusted weights to generate output identifying the boundaries of the one or more objects.
Yip et al (US 2021/0166381) teaches the following subject matter regarding the boundary of a region of interest relative to a whole slide image (abstract):
determine one or more weights of the one or more neural networks; adjust the one or more weights of the one or more neural networks; and cause the one or more neural networks comprising the adjusted weights to generate output identifying the boundaries of the one or more objects (figure 3 and 0336 teach training of the deep learning network in which the weights of the layers (determining the weights within the neural network) are further adjusted to accurately label the boundaries for classification, with the help of a human analyst (around the region of interest) relative to the image (whole image)).
KAUFMAN et al and Yip et al are both in the field of image analysis, especially the use of neural networks with human input to address a region of interest in conjunction with the whole image, such that the combined outcome is predictable.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify KAUFMAN et al with Yip et al such that the training set of matrices provides accurate labeling for the tissue classifier, as disclosed by Yip et al in 0336. Paragraph 0254 details the metric relating the proportion of tissue, by a user-selected threshold, to the digital image; 0256 details a technician dissecting a slide (whole slide image) to isolate tumor from non-tumor tissue; and 0435 details the percentage of tumor content, outlined by a pathologist through the GUI (graphical user interface), relative to the whole slide content. All of this is used for training the deep learning network (neural network), which determines the layer weights and the adjustment of those weights.
Claim 20:
KAUFMAN et al further teaches:
The machine-readable medium of claim 19, wherein the one or more processors identify the boundaries of the one or more objects within the image based, at least in part, on information about an estimated percentage of the image occupied by the one or more objects (0010-0011 teaches a neural network generating an outline (boundary) for foreground and background to visualize a lesion with location and shape, and further teaches outlines based on characteristics of the foreground with first and second classifiers; 0047-0049 teach the outline refined (proximate) by lesions; figure 1, part 110, shows a polygon around the object; the above teaches user generated).
Claim 21:
KAUFMAN et al further teaches:
The machine-readable medium of claim 19, wherein the one or more processors are further to identify boundaries of [[an]] one or more objects within an image based, at least in part, on a user-generated outline of only a portion of the one or more objects. (0074 teaches consideration of the average size of lesions; 0077 teaches a CNN classifier for the size of a lesion; 0106 teaches size; 0109-0110 teach segmentation of structures by size, position, shape, and location; 0134).
Claim 22:
KAUFMAN et al teaches:
The machine-readable medium of claim 21, wherein the user-generated outline is a polygon with points located proximate the boundaries of the one or more objects. (0070 teaches the general shape for an accurate diagnosis defined with a percentage of lesion pixels).
Claim 23:
KAUFMAN et al teaches:
The machine-readable medium of claim 19, wherein the boundaries are identified using one or more neural networks trained using semi-supervised and self-supervised representation learning, in a first stage, with probabilistic weak supervision in a second stage. (0126 teaches a two-stage convnet-based approach, where 0009 details the first-stage classifiers (neural network) including a Random Forest classifier (supervised) and the second-stage classifier being a convolutional neural network classifier; 0013 details multi-label segmentation using a convolutional neural network, where multi-label is supervised learning).
Claim 24:
KAUFMAN et al teaches:
The machine-readable medium of claim 19, wherein the one or more objects is a tumor and the image is a histopathologic image. (0119 teaches tumor; 0044 teaches lesions for histopathology).
Claim 25:
KAUFMAN et al (US 2020/0226748) teach the following subject matter:
An image annotation system (0104 teaches ground truth annotation; 0106 teaches expert annotation; figure 1 and 0046 teaches system), comprising: one or more processors (figure 21 and 0137 teaches one or more microprocessors with memory) to identify, using one or more neural networks, boundaries of one or more objects within an image based, at least in part, on one or more indications of one or more locations of the one or more objects in the image (0011 teaches a neural network generating an outline (boundary) for foreground and background to visualize a lesion with location and shape; figure 19 and 0132 teach an outline manually generated by an expert (user generated) of an object (pancreas, liver, spleen)) and one or more indications of a proportion of the image that comprises the one or more objects (figure 1 and 0046-0047 teach a scan to identify various regions of tissue types and organs or lesions, where one of ordinary skill in the art understands the “various regions” to be parts or proportions of the one or more objects of interest; paragraph 0110 teaches utilizing probabilistic atlases and a statistical shape model of a registered volumetric image in order to recognize variation in the size, shape, and location of organs (objects), where one of ordinary skill in the art understands the probabilistic atlases and statistical shape model to capture portions of the object of interest to be identified due to overlapping/layering of organs during scanning); and memory for storing network parameters for the one or more neural networks (figure 21 and 0137 teaches one or more microprocessors with memory).
KAUFMAN et al teaches all the subject matter above with the use of a neural network, but does not teach the following:
determine one or more weights of the one or more neural networks; adjust the one or more weights of the one or more neural networks; and cause the one or more neural networks comprising the adjusted weights to generate output identifying the boundaries of the one or more objects.
Yip et al (US 2021/0166381) teaches the following subject matter regarding the boundary of a region of interest relative to a whole slide image (abstract):
determine one or more weights of the one or more neural networks; adjust the one or more weights of the one or more neural networks; and cause the one or more neural networks comprising the adjusted weights to generate output identifying the boundaries of the one or more objects (figure 3 and 0336 teach training of the deep learning network in which the weights of the layers (determining the weights within the neural network) are further adjusted to accurately label the boundaries for classification, with the help of a human analyst (around the region of interest) relative to the image (whole image)).
KAUFMAN et al and Yip et al are both in the field of image analysis, especially the use of neural networks with human input to address a region of interest in conjunction with the whole image, such that the combined outcome is predictable.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify KAUFMAN et al with Yip et al such that the training set of matrices provides accurate labeling for the tissue classifier, as disclosed by Yip et al in 0336. Paragraph 0254 details the metric relating the proportion of tissue, by a user-selected threshold, to the digital image; 0256 details a technician dissecting a slide (whole slide image) to isolate tumor from non-tumor tissue; and 0435 details the percentage of tumor content, outlined by a pathologist through the GUI (graphical user interface), relative to the whole slide content. All of this is used for training the deep learning network (neural network), which determines the layer weights and the adjustment of those weights.
Claim 26:
KAUFMAN et al teaches:
The image annotation system of claim 25, wherein the one or more indications of one or more locations comprises a polygon with points located proximate the boundaries of the one or more objects. (0010-0011 teaches a neural network generating an outline (boundary) for foreground and background to visualize a lesion with location and shape, and further teaches outlines based on characteristics of the foreground with first and second classifiers; 0047-0049 teach the outline refined (proximate) by lesions; figure 1, part 110, shows a polygon around the object; the above teaches user generated).
Claim 27:
KAUFMAN et al teaches:
The image annotation system of claim 25, wherein the one or more processors are further to identify the boundaries of the one or more objects within the image further based on information about a size of the one or more objects provided by a user. (0074 teaches consideration of the average size of lesions; 0077 teaches a CNN classifier for the size of a lesion; 0106 teaches size; 0109-0110 teach segmentation of structures by size, position, shape, and location; 0134).
Claim 28:
KAUFMAN et al teaches:
The image annotation system of claim 27, wherein the information about the size is an estimated percentage of the image occupied by the one or more objects. (0070 teaches the general shape for an accurate diagnosis defined with a percentage of lesion pixels).
Claim 29:
KAUFMAN et al teaches:
The image annotation system of claim 25, wherein the boundaries are identified using one or more neural networks trained using semi-supervised and self-supervised representation learning, in a first stage, with probabilistic weak supervision in a second stage. (0126 teaches a two-stage convnet-based approach, where 0009 details the first-stage classifiers (neural network) including a Random Forest classifier (supervised) and the second-stage classifier being a convolutional neural network classifier; 0013 details multi-label segmentation using a convolutional neural network, where multi-label is supervised learning).
Claim 30:
KAUFMAN et al teaches:
The image annotation system of claim 25, wherein the one or more objects is a tumor and the image is a histopathologic image. (0119 teaches tumor; 0044 teaches lesions for histopathology).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Yaroslavsky et al (US 2013/0324846) teaches DEVICES AND METHODS FOR OPTICAL PATHOLOGY – 0061 teaches manually outlined regions, and 0064 teaches tumor histopathology with the tumor outlined.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TSUNG-YIN TSAI whose telephone number is (571)270-1671. The examiner can normally be reached 7am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh Mehta can be reached at (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TSUNG YIN TSAI/Primary Examiner, Art Unit 2656