DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
1 This action is in response to the amendment filed on 12/08/2025. Claims 1, 3-4, 7-10, and 12-13 were amended, claim 2 was cancelled, and claims 14-21 were newly added. Additionally, the abstract was amended, which overcomes the previous objection to the abstract. Claims 1 and 3-13 remain rejected, and claims 14-21 are rejected as set forth below.
Response to Arguments
2 Applicant’s arguments filed on 12/08/2025 with respect to independent claims 1, 12, and 13 assert, with regard to the rejection under 35 USC § 102, that the prior art does not teach, among other limitations, “meta data of the three-dimensional model includes position information representing a meta data target part that is a part which is suspected of lesion and which was detected based on the preliminary examination” and “highlighted by a mark on the endoscopic image”. Elements of claim 2 were incorporated into these claims, and claim 2 was cancelled accordingly. These arguments have been fully considered but are moot in view of the similar grounds of rejection, presented with additional context, in the rejections below. In addition, the examiner notes that a highlighted “mark” is a broad term that reads on any indication identifying a location on an altered image.
3 Regarding the arguments directed to claims 3-13, these claims depend directly or indirectly on independent claim 1. Applicant presents no arguments separate from those directed to independent claims 1, 12, and 13. The limitations of these dependent claims, alone or in combination, were largely addressed in the previous grounds of rejection; the rejections below are adjusted only to reflect the amendments to the independent claims.
4 As noted above, claim 2 has been cancelled by Applicant and is therefore not addressed further.
5 Claims 14-21 are newly added and depend from independent claims 1 and 12. They have been considered and are rejected under new, similar grounds of rejection under 35 USC § 102 and 35 USC § 103, as set forth below.
Claim Rejections - 35 USC § 102
6 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
7 The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
8 Claims 1, 3-4, 8-10, 12-16, and 20-21 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Nishide et al. (US 20220198742 A1).
9 Regarding claim 1, Nishide teaches an image processing device comprising:
at least one memory configured to store instructions ([0065] reciting “The storage unit 22 includes a memory element such as a random access memory (RAM) or a read only memory (ROM), and stores the control program 2P or data necessary for the control unit 21 to perform processing.”); and
at least one processor configured to execute the instructions to ([0012] reciting “In one aspect, an object is to provide a processor for an endoscope and the like that can be effectively used for diagnosis.”; [0065] reciting “The storage unit 22 includes a memory element such as a random access memory (RAM) or a read only memory (ROM), and stores the control program 2P or data necessary for the control unit 21 to perform processing.”):
generate reconstructed data obtained by three-dimensionally reconstructing an examination target, based on endoscopic images of the examination target captured by an endoscope ([0129] reciting “The distance image is a two-dimensional image having a linear distance from a viewpoint to the observation target (for example, the inner wall of the large intestine region) as a pixel value. The control unit 21 of the processor 2 reconstructs the virtual endoscopic image on the basis of the three-dimensional medical image, and makes the reconstructed virtual endoscopic image and the endoscopic image match each other. The control unit 21 obtains (generates) the distance image on the basis of a distance from the viewpoint to a three-dimensional image corresponding to each pixel of the reconstructed virtual endoscopic image based on the viewpoint position and the line-of-sight direction at and in which the endoscopic image and the virtual endoscopic image match each other.”);
perform matching between a three-dimensional model of the examination target and the reconstructed data ([0195] reciting “The virtual endoscopic image is generated from the three-dimensional medical image in order to perform matching processing with the endoscopic image, and for example, a virtual endoscopic image that matches most with the endoscopic image is registered in the same record as the endoscopic image.”),
wherein the three-dimensional model is a preliminary examination model of the examination target and is generated based on scan data generated during a preliminary examination of the examination target ([0009] reciting “…acquiring a three-dimensional medical image obtained in a manner in which an image of the inside of the body of the subject is captured by at least one of an X-ray CT scan, an X-ray cone beam CT scan, or an MRI-CT scan…”; [0304] reciting “The feature parameter corresponds to a region of the endoscopic image, that is, an intracorporeal site or pixel included in the endoscopic image, and the intracorporeal site or pixel corresponds to coordinates on and a pixel of the three-dimensional medical image by the distance image information generated from the virtual endoscopic image.”),
wherein meta data of the three-dimensional model includes position information representing a meta data target part that is a part which is suspected of lesion and which was detected based on the preliminary examination conducted prior to the endoscopic examination ([0128] reciting “Specifically, the control unit 21 assigns the management ID to associate the management ID with the patient ID and the three-dimensional position on the patient or the Z coordinate, and stores, in the diagnosis support information DB 291, a diagnosis content, a diagnosis date and time, a three-dimensional position on the patient, or the Z coordinate which is the insertion distance as one record.”; [0188] reciting “The image table includes, for example, the subject ID, a date of examination, an endoscopic image, a frame number, an S coordinate (insertion distance), a three-dimensional medical image, a viewpoint position, a viewpoint direction, and a virtual endoscopic image as management items (metadata).”; [0255] reciting “The three-dimensional medical image may be displayed in a state where the position of a lesion specified in the endoscopic image is highlighted, for example.”);
display information regarding the meta data ([0188] reciting “The image table includes, for example, the subject ID, a date of examination, an endoscopic image, a frame number, an S coordinate (insertion distance), a three-dimensional medical image, a viewpoint position, a viewpoint direction, and a virtual endoscopic image as management items (metadata).”; [0253] reciting “The integrated image display screen 71 includes, for example, a region for displaying a bibliographic item such as the subject ID, a region for displaying the three-dimensional medical image, a region for displaying the endoscopic image, a region for displaying a virtual endoscopic image, a region for displaying a viewpoint position at which the endoscopic image is captured, or the like, and a region for displaying information on an intracorporeal site (pixel) selected in the endoscopic image.”), based on a result of the matching ([0273] reciting “The control unit 62 of the information processing device 6 displays information regarding the displayed endoscopic image (S807). The control unit 62 displays, for example, a virtual endoscopic image that matches most with the endoscopic image or information regarding an intracorporeal site (pixel) selected in the displayed endoscopic image as the information regarding the displayed endoscopic image.”); and
upon determining, based on the result of the matching and the position information, that the meta data target part is included in an endoscopic image, display the endoscopic image in which the meta data target part is highlighted by a mark on the endoscopic image ([0103] reciting “The control unit 21 adds the diagnosis support information to the endoscopic image acquired from the endoscope 1 and outputs the endoscopic image added with the diagnosis support information to the display device 3 (Step S209). The display device 3 displays the endoscopic image output from the processor 2 and the diagnosis support information (Step S301), and ends the processing. Note that the associated virtual endoscopic image may also be displayed at this time. In addition, a portion considered to be a tumor candidate may be displayed in a different color or may be highlighted for easy visual recognition.”).
10 Regarding claim 3, Nishide teaches the image processing device according to claim 1 (see claim 1 rejection above), wherein examples of the meta data target part includes a part diagnosed as a lesion part ([0147] reciting “Specifically, the control unit 21 of the processor 2 inputs the endoscopic image after the pixel value correction to the image recognition model 292 by using the trained image recognition model 292, and outputs a result of identifying a lesion, a tissue, or the like (for example, a polyp in the large intestine)…”; [0153] reciting “The control unit 21 extracts the image feature parameter for the endoscopic image after the area correction by using the image recognition model 292 to acquire a recognition result (diagnosis support information) of recognizing a lesion (for example, a polyp in the large intestine) or the like (Step S233).”).
11 Regarding claim 4, Nishide teaches the image processing device according to claim 1 (see claim 1 rejection above), wherein the meta data includes information indicating a diagnosis result regarding the meta data target part ([0127] reciting “Note that the processing is not limited to the above. For example, the diagnosis support information may be output using a trained image recognition model that outputs a recognition result in a case where the endoscopic image after the pixel value correction is input.”; [0188] reciting “The image table includes, for example, the subject ID, a date of examination, an endoscopic image, a frame number, an S coordinate (insertion distance), a three-dimensional medical image, a viewpoint position, a viewpoint direction, and a virtual endoscopic image as management items (metadata).”), and
wherein the at least one processor is configured to execute the instructions to display the information indicating the diagnosis result in association with the displayed endoscopic image in which the meta data target part is highlighted ([0103] reciting “The display device 3 displays the endoscopic image output from the processor 2 and the diagnosis support information (Step S301), and ends the processing. Note that the associated virtual endoscopic image may also be displayed at this time. In addition, a portion considered to be a tumor candidate may be displayed in a different color or may be highlighted for easy visual recognition.”).
12 Regarding claim 8, Nishide teaches the image processing device according to claim 1 (see claim 1 rejection above), wherein the meta data target part is a part diagnosed as a lesion part ([0147] reciting “Specifically, the control unit 21 of the processor 2 inputs the endoscopic image after the pixel value correction to the image recognition model 292 by using the trained image recognition model 292, and outputs a result of identifying a lesion, a tissue, or the like (for example, a polyp in the large intestine)…”), and
wherein the at least one processor is configured to execute the instructions to display information based on a comparison result between a detected position of the lesion part based on the endoscopic image and the meta data target part ([0153] reciting “The control unit 21 generates a display image by superimposing the endoscopic image after the area correction and the diagnosis support information output from the trained image recognition model 292 by using the endoscopic image after the area correction (Step S234).”).
13 Regarding claim 9, Nishide teaches the image processing device according to claim 1 (see claim 1 rejection above), wherein the at least one processor is configured to execute the instructions to determine a display mode relating to the meta data target part based on the type of the meta data target part ([0186] reciting “The endoscopic image DB 631 includes, for example, a subject master table and an image table, and the subject master table and the image table are associated with each other by using a subject ID that is an item (metadata) included in both tables.”; [0263] reciting “The display mode switching field 715 includes a radio button or the like for switching between a three-dimensional medical image mode in which the three-dimensional medical image is mainly displayed and an endoscopic image mode in which the endoscopic image is displayed.”).
14 Regarding claim 10, Nishide teaches the image processing device according to claim 1 (see claim 1 rejection above), wherein, if the meta data target part is a part diagnosed as a lesion part ([0147] reciting “Specifically, the control unit 21 of the processor 2 inputs the endoscopic image after the pixel value correction to the image recognition model 292 by using the trained image recognition model 292, and outputs a result of identifying a lesion, a tissue, or the like (for example, a polyp in the large intestine)…”), the at least one processor is configured to execute the instructions to
extract a lesion region including the meta data target part ([0146] reciting “For example, the image recognition model 292 is a trained model generated by machine learning, and may be an image recognizer that recognizes a lesion, a tissue, or the like in the body of the subject by extracting a chromaticity feature parameter of the endoscopic image.”) and
highlight the lesion region in the displayed endoscopic image ([0255] reciting “The three-dimensional medical image may be displayed in a state where the position of a lesion specified in the endoscopic image is highlighted, for example.”).
15 Claims 12 and 13 recite limitations similar to those of claim 1 and are therefore rejected under the same rationale as claim 1.
16 Regarding claim 14, Nishide teaches the image processing device according to claim 3 (see claims 1 and 3 rejections above), wherein the at least one processor is configured to execute the instructions to detect the lesion part using a lesion detection model, wherein the lesion detection model is trained through machine learning with using endoscopic images and position of lesion parts in the endoscopic images as training data ([0146] reciting “Note that, in the present embodiment, an example of the image recognition model 292 for polyp extraction will be described, but other trained image recognition models may be used. For example, the image recognition model 292 is a trained model generated by machine learning, and may be an image recognizer that recognizes a lesion, a tissue, or the like in the body of the subject by extracting a chromaticity feature parameter of the endoscopic image.”; [0284] reciting “The information processing device 6 performs the above-described processing on the endoscopic image included in the training data, generates the learning model 9, and stores the generated learning model 9 in the storage unit 63.”; [0295] reciting “The output unit 629 outputs the diagnosis support information including the presence or absence of a lesion and the like acquired from the learning model 9 to a display unit 7 in association with the endoscopic image that is a target of the diagnosis support information…the output unit 629 may output the diagnosis support information and the three-dimensional medical image so as to be displayed on the display unit 7 in a state where the position of the lesion in the three-dimensional medical image is highlighted, for example.”).
17 Claim 15 recites limitations similar to those of claim 3 and is therefore rejected under the same rationale as claim 3.
18 Claim 16 recites limitations similar to those of claim 4 and is therefore rejected under the same rationale as claim 4.
19 Claim 20 recites limitations similar to those of claim 8 and is therefore rejected under the same rationale as claim 8.
20 Claim 21 recites limitations similar to those of claim 9 and is therefore rejected under the same rationale as claim 9.
Claim Rejections - 35 USC § 103
21 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
22 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
23 Claims 5 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Nishide et al. (US 20220198742 A1) in view of Nagaoka et al. (JP 4651353 B2).
24 Regarding claim 5, Nishide teaches the image processing device according to claim 4 (see claims 1 and 4 rejections above), wherein the meta data includes information indicating the diagnosis result ([0127] reciting “For example, the diagnosis support information may be output using a trained image recognition model that outputs a recognition result in a case where the endoscopic image after the pixel value correction is input.”), and
wherein the at least one processor is configured to execute the instructions to display information indicating the diagnosis result ([0103] reciting “The display device 3 displays the endoscopic image output from the processor 2 and the diagnosis support information (Step S301), and ends the processing. Note that the associated virtual endoscopic image may also be displayed at this time. In addition, a portion considered to be a tumor candidate may be displayed in a different color or may be highlighted for easy visual recognition.”).
25 Nishide does not explicitly teach wherein the meta data includes information indicating the diagnosis result and an attribute of one or more persons who made the diagnosis for the meta data target, and wherein the at least one processor is configured to execute the instructions to display information indicating the diagnosis result and the attribute in association with the displayed endoscopic image in which the meta data target part is highlighted.
26 Nagaoka teaches wherein the meta data includes information indicating the diagnosis result and an attribute of one or more persons who made the diagnosis for the meta data target, and wherein the at least one processor is configured to execute the instructions to display information indicating the diagnosis result and the attribute in association with the displayed endoscopic image in which the meta data target part is highlighted ([0006] reciting “According to a first aspect of the present invention, there is provided a diagnosis support system comprising: display means for displaying a medical image necessary for diagnosis by a doctor; analysis means for starting a diagnosis support program for the medical image and analyzing the medical image; and control means for causing the display means to display an analysis result of the analysis means for the medical image when the diagnosis by the doctor is completed for the medical image being displayed on the display means.”; [0010] reciting “For example, in a case where a region that is not found in the diagnosis by the doctor and needs to be checked by the analysis means is determined to be abnormal as a result of re-diagnosis by the doctor, or in a case where a region that is found in the diagnosis by the doctor but is not found by the analysis means is determined to be abnormal, the feature amount of each region is extracted and stored in a database or the like as an example that is likely to be overlooked, and an image similar to the feature amount is detected in the next and subsequent analysis processes.”).
27 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device taught by Nishide to incorporate the teachings of Nagaoka, so that a diagnosis made by a person such as a doctor is displayed, where that diagnosis can form part of the meta data (or database) taught by Nishide. Doing so would make it possible for the doctor to receive the analysis result without reducing the doctor’s motivation for interpretation, as stated by Nagaoka ([0006] recited).
28 Claim 17 recites limitations similar to those of claim 5 and is therefore rejected under the same rationale as claim 5.
29 Claims 6 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Nishide et al. (US 20220198742 A1) in view of Nagaoka et al. (JP 4651353 B2), and further in view of Hisano et al. (US 20180350460 A1).
30 Regarding claim 6, Nishide in view of Nagaoka teaches the image processing device according to claim 5 (see claims 1, 4, and 5 rejections above), but does not explicitly teach wherein in a case that the one or more persons are plural, the at least one processor is configured to execute the instructions to differentiate a display mode of the information regarding the meta data for each of one or more persons from each other.
31 Hisano teaches wherein in a case that the one or more persons are plural, the at least one processor is configured to execute the instructions to differentiate a display mode of the information regarding the meta data for each of one or more persons from each other ([0032] reciting “In the embodiment, the technician first assists in diagnostic imaging and creates an image interpretation report.”; [0062] reciting “When the doctor C uses the selection button 104, the selection instruction reception unit 46 receives a user operation in the selection button 104 as an instruction to select the image and transmits the instruction to the management system 10. The management system 10 extracts the displayed image occurring when the instruction for selection is received as the selected image.”; [0064] reciting “The endoscopic image selection screen generation unit 44 displays the selected images selected by the technician B and the selected images selected by the doctor C in the selected image display area 102 in a manner that the selected images selected by the technician B and the selected images selected by the doctor C can be distinguished.”).
32 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device taught by Nishide in view of Nagaoka to incorporate the teachings of Hisano, so that diagnosis-related images reflect input from more than one person, such as the doctor and technician taught by Hisano, while utilizing the diagnosis display methods of Nishide in view of Nagaoka. Doing so would allow the displayed images to be distinguished according to the person who selected them, as stated by Hisano ([0064] recited).
33 Claim 18 recites limitations similar to those of claim 6 and is therefore rejected under the same rationale as claim 6.
34 Claims 7 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Nishide et al. (US 20220198742 A1) in view of Regensburger et al. (US 20200085281 A1).
35 Regarding claim 7, Nishide teaches the image processing device according to claim 1 (see claim 1 rejection above),
wherein the degree of position accuracy is calculated based on the result of the matching ([0101] reciting “However, the degree of matching is measured with an index that correlates a shadow image of the endoscopic image and a shadow image of the virtual endoscopic image, and the virtual endoscopic image is continuously reconstructed with the viewpoint position and the angle of the endoscope by finely adjusting the Z-coordinate position (insertion length) and the bending state of the endoscope so as to obtain the highest degree of matching.”; [0105] reciting “Alternatively, the degree of matching may be determined on the basis of similarity between the endoscopic image and the endoscopic image.”; [0233] reciting “Therefore, the position (viewpoint position) and the rotation angles (viewpoint direction) of the endoscope 140 in the coordinate system of the three-dimensional medical image at the time point when the endoscopic image is captured can be specified on the basis of the shape of the endoscope 140, and the accuracy in association between the endoscopic image and the three-dimensional medical image can be further improved.”).
36 Nishide does not explicitly teach wherein the at least one processor is configured to execute the instructions to display a degree of position accuracy of the meta data target part on the endoscopic image and the endoscopic image in which the meta data target part is highlighted.
37 Regensburger teaches wherein the at least one processor is configured to execute the instructions to display a degree of position accuracy of the meta data target part on the endoscopic image and the endoscopic image in which the meta data target part is highlighted. ([0031] reciting “If the acquisition direction of the endoscope therefore deviates, for example, by an angle α from the specified spatial direction, the overlay image may be adapted accordingly (e.g., graphically processed) in order to signal or display to the respective observer or user the degree of the uncertainty or inaccuracy (e.g., of a visualization error). The blurring may be proportional to sine(α), for example. For example, arrows or other symbols may likewise display the angular deviation and/or the specified spatial direction. In one embodiment, only the 3D data set or portions of the overlay image that are based on the 3D data set are adapted correspondingly.”; [0059] reciting “The imaging system 1 also has a data processing device 7 for processing sensor data or image data provided by the detector 2 and, if applicable, further data received or acquired via an interface 8 of the data processing device 7.”).
38 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device taught by Nishide to incorporate the teachings of Regensburger, so as to display a degree of position accuracy (i.e., an indication of how accurate or uncertain the displayed position is), while using the degree of position accuracy similarly taught by Nishide. Doing so would allow the respective observer or user to be made aware of the uncertainty of the representation in a particularly intuitive and clear manner, as stated by Regensburger ([0031] recited).
39 Claim 19 recites limitations similar to those of claim 7 and is therefore rejected under the same rationale as claim 7.
40 Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Nishide et al. (US 20220198742 A1) in view of Sun et al. (US 9646423 B1).
41 Regarding claim 11, Nishide teaches the image processing device according to claim 1 (see claim 1 rejection above), but does not explicitly teach wherein the three-dimensional model is data generated based on scan data of the examination target obtained in a preliminary examination conducted prior to the examination by the endoscope, and wherein the meta data is data generated based on the examination result of the preliminary examination.
42 Sun teaches wherein the three-dimensional model is data generated based on scan data of the examination target obtained in a preliminary examination conducted prior to the examination by the endoscope ([Page 12; Column 4, Lines 52-62] reciting “Intra-operative image data, such as laparoscopic images and video, can be augmented to include renderings of obscured organs in a variety of ways. FIG. 3 illustrates a first embodiment of a method for providing such augmented image data. Beginning with block 40 of FIG. 3, pre-operative image data of the patient's internal organs is obtained. By way of example, computed tomography (CT) image data is captured of the patient. Notably, such data is often collected prior to an abdominal surgical procedure to determine whether or not surgery is necessary and, if so, to help plan the surgery.”), and
wherein the meta data is data generated based on the examination result of the preliminary examination ([Page 12, Column 4, Lines 62-67; Page 13, Column 5, Lines 1-2] reciting “Once the image data has been captured, it can be processed to create a three-dimensional surface model of the surfaces of the patient's internal organs prior to surgery, as indicated in block 42. In the case in which abdominal surgery is to be performed, the surfaces can be the surfaces of multiple deformable organs of the abdominal cavity, such as the liver, stomach, intestines, bladder, colon, etc.”).
43 It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device taught by Nishide to incorporate the teachings of Sun, so that data is generated based on scan data obtained in an examination conducted prior to the endoscopic examination, while using the endoscopic examination method of Nishide. Doing so would allow the data to be processed to create a further three-dimensional surface model of the visible surfaces of the patient’s internal organs, as stated by Sun ([Page 13; Column 5, Lines 17-19] recited).
Conclusion
44 Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
45 Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHNNY TRAN LE whose telephone number is (571)272-5680. The examiner can normally be reached Mon-Thu: 7:30am-5pm; First Fridays Off; Second Fridays: 7:30am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOHNNY T LE/Examiner, Art Unit 2614
/KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614