DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA .
Priority
Acknowledgment is made of applicant's claim for foreign priority based on an application filed in Japan on 29 September 2021. It is noted, however, that applicant has not filed a certified copy of the JP 2021-160020 application as required by 37 CFR 1.55.
Claim Objections
Claim 1 is objected to because of the following informalities: Lines 11 - 12 of claim 1 recite, in part, “and for each of the acquired images, obtain” which appears to contain a minor informality. The Examiner suggests amending the claim to --and for each acquired image of the acquired images, obtain-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 6 is objected to because of the following informalities: Lines 5 - 6 of claim 6 recite, in part, “falls outside of one of the images, modify the image such that the part” which appears to contain a minor informality. The Examiner suggests amending the claim to --falls outside of one of the images, modify the corresponding image such that the determined part-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 7 is objected to because of the following informalities: Lines 5 - 6 of claim 7 recite, in part, “an artifact is shown in one of the images, modify the image such that a part of the image corresponding to the artifact” which appears to contain a minor informality. The Examiner suggests amending the claim to --an artifact is shown in one of the images, modify the corresponding image such that a part of the corresponding image corresponding to the determined artifact-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 8 is objected to because of the following informalities: Line 5 of claim 8 recites, in part, “for each of the acquired images, obtain” which appears to contain a minor informality. The Examiner suggests amending the claim to --for each acquired image of the acquired images, obtain-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 13 is objected to because of the following informalities: Lines 7 - 8 of claim 13 recite, in part, “and for each of the acquired images, obtaining” which appears to contain a minor informality. The Examiner suggests amending the claim to --and for each acquired image of the acquired images, obtaining-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 18 is objected to because of the following informalities: Lines 4 - 5 of claim 18 recite, in part, “falls outside of one of the images, modifying the image such that the part” which appears to contain a minor informality. The Examiner suggests amending the claim to --falls outside of one of the images, modifying the corresponding image such that the determined part-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 19 is objected to because of the following informalities: Lines 4 - 5 of claim 19 recite, in part, “an artifact is shown in one of the images, modifying the image such that a part of the image corresponding to the artifact” which appears to contain a minor informality. The Examiner suggests amending the claim to --an artifact is shown in one of the images, modifying the corresponding image such that a part of the corresponding image corresponding to the determined artifact-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 20 is objected to because of the following informalities: Lines 8 - 9 of claim 20 recite, in part, “and for each of the acquired images, obtaining” which appears to contain a minor informality. The Examiner suggests amending the claim to --and for each acquired image of the acquired images, obtaining-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1 - 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear as to which luminal organ “the luminal organ” recited on lines 12 - 13, along with subsequent recitations of “the luminal organ” throughout the claims, are referencing. Are they referring to the “luminal organ” recited on lines 1 - 2 of claim 1 or the “luminal organ” recited on line 11 of claim 1? Additionally, it is unclear as to whether the “luminal organ” recited on lines 1 - 2 of claim 1 and the “luminal organ” recited on line 11 of claim 1 are the same luminal organ or different luminal organs. Clarification and appropriate correction are required. For purposes of examination the Examiner will treat the claims as requiring and referencing a single same luminal organ.
Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear as to which images “the images” recited on line 15, along with subsequent recitations of “the images” throughout the claims, are referencing. Are they referring to the “medical images” recited on line 1 of claim 1 or the “plurality of cross-sectional images” recited on line 7 of claim 1? Clarification and appropriate correction are required. For purposes of examination the Examiner will treat recitations of “the images” in the claims as referencing the “plurality of cross-sectional images” recited on line 7 of claim 1 and suggests amending “the images” throughout the claims to --the plurality of acquired cross-sectional images-- or something similar.
Claim 8 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear as to which object “the object” recited on line 7, along with subsequent recitations of “the object” throughout the claims, are referencing. Are they referring to the “object” recited on line 4 of claim 8 or the “object” recited on line 6 of claim 8? Additionally, it is unclear as to whether the “object” recited on line 4 of claim 8 and the “object” recited on line 6 of claim 8 are the same object or different objects. Clarification and appropriate correction are required. For purposes of examination the Examiner will treat the claims as requiring and referencing a single same object.
Claim 9 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear as to which luminal organ “the luminal organ” recited on lines 2 - 3 is referencing. Is it referring to the “luminal organ” recited on lines 1 - 2 of claim 1, the “luminal organ” recited on line 11 of claim 1 or the “luminal organ” recited on line 4 of claim 8? Additionally, it is unclear as to whether the “luminal organ” recited on lines 1 - 2 of claim 1, the “luminal organ” recited on line 11 of claim 1 and the “luminal organ” recited on line 4 of claim 8 are the same luminal organ or different luminal organs. Clarification and appropriate correction are required. For purposes of examination the Examiner will treat the claims as requiring and referencing a single same luminal organ.
Claim 9 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear as to which image “the image” recited on line 3 is referencing. Is it referring to the “image” recited on line 11 of claim 1, the “image” recited on line 4 of claim 8 or the “cross-sectional image” recited on line 2 of claim 9? Clarification and appropriate correction are required. For purposes of examination the Examiner will treat “the image” recited on line 3 of claim 9 as referencing the “cross-sectional image” recited on line 2 of claim 9.
Claim 11 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear as to which luminal organ “the luminal organ” recited on line 2 is referencing. Is it referring to the “luminal organ” recited on lines 1 - 2 of claim 1, the “luminal organ” recited on line 11 of claim 1 or the “luminal organ” recited on line 4 of claim 8? Additionally, it is unclear as to whether the “luminal organ” recited on lines 1 - 2 of claim 1, the “luminal organ” recited on line 11 of claim 1 and the “luminal organ” recited on line 4 of claim 8 are the same luminal organ or different luminal organs. Clarification and appropriate correction are required. For purposes of examination the Examiner will treat the claims as requiring and referencing a single same luminal organ.
Claim 12 recites the limitation "the predetermined site" in line 4. There is insufficient antecedent basis for this limitation in the claim.
Claim 13 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear as to which luminal organ “the luminal organ” recited on line 9, along with subsequent recitations of “the luminal organ” throughout the claims, are referencing. Are they referring to the “luminal organ” recited on line 2 of claim 13 or the “luminal organ” recited on line 7 of claim 13? Additionally, it is unclear as to whether the “luminal organ” recited on line 2 of claim 13 and the “luminal organ” recited on line 7 of claim 13 are the same luminal organ or different luminal organs. Clarification and appropriate correction are required. For purposes of examination the Examiner will treat the claims as requiring and referencing a single same luminal organ.
Claim 13 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear as to which images “the images” recited on line 11, along with subsequent recitations of “the images” throughout the claims, are referencing. Are they referring to the “medical images” recited on line 2 of claim 13 or the “plurality of cross-sectional images” recited on lines 3 - 4 of claim 13? Clarification and appropriate correction are required. For purposes of examination the Examiner will treat recitations of “the images” in the claims as referencing the “plurality of cross-sectional images” recited on lines 3 - 4 of claim 13 and suggests amending “the images” throughout the claims to --the plurality of acquired cross-sectional images-- or something similar.
Claim 20 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear as to which luminal organ “the luminal organ” recited on line 10, along with subsequent recitations of “the luminal organ”, are referencing. Are they referring to the “luminal organ” recited on line 2 of claim 20 or the “luminal organ” recited on line 8 of claim 20? Additionally, it is unclear as to whether the “luminal organ” recited on line 2 of claim 20 and the “luminal organ” recited on line 8 of claim 20 are the same luminal organ or different luminal organs. Clarification and appropriate correction are required. For purposes of examination the Examiner will treat the claim as requiring and referencing a single same luminal organ.
Claim 20 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear as to which images “the images” recited on line 12 are referencing. Are they referring to the “medical images” recited on line 2 of claim 20 or the “plurality of cross-sectional images” recited on lines 4 - 5 of claim 20? Clarification and appropriate correction are required. For purposes of examination the Examiner will treat “the images” recited on line 12 of claim 20 as referencing the “plurality of cross-sectional images” recited on lines 4 - 5 of claim 20 and suggests amending “the images” recited on line 12 of claim 20 to --the plurality of acquired cross-sectional images-- or something similar.
Claims 2 - 7, 10 and 14 - 19 are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, due to their dependency upon a rejected base claim; these rejections would be withdrawn if the rejections of the respective base claims are overcome.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 - 4, 7, 8, 10 - 16, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Gulsun et al. (U.S. Publication No. 2019/0130578 A1) in view of Aharon et al. (U.S. Publication No. 2008/0100621 A1) and further in view of Balocco et al. (U.S. Publication No. 2012/0130243 A1).
- With regards to claims 1, 13 and 20, Gulsun et al. disclose a medical image processing apparatus for processing medical images of a luminal organ, (Gulsun et al., Abstract, Figs. 1 - 5, 7 & 8, Pg. 1 ¶ 0005 - 0008, Pg. 2 ¶ 0021 - 0023, Pg. 5 ¶ 0049 - 0053, Pg. 6 ¶ 0057 - 0061) a method carried out by a medical image processing apparatus for processing medical images of a luminal organ, (Gulsun et al., Abstract, Figs. 1 - 5, 7 & 8, Pg. 1 ¶ 0005 - 0008, Pg. 2 ¶ 0021 - 0023, Pg. 5 ¶ 0049 - 0053, Pg. 6 ¶ 0057 - 0061) and a non-transitory computer readable medium storing a program causing a computer to execute a method for processing medical images of a luminal organ, (Gulsun et al., Abstract, Figs. 1 - 5, 7 & 8, Pg. 1 ¶ 0005 - 0008, Pg. 2 ¶ 0021 - 0023, Pg. 5 ¶ 0049 - 0053, Pg. 6 ¶ 0057 - 0061, Pg. 7 ¶ 0066 - 0068) comprising: a first interface circuit connectable to an ultrasonic imaging system; (Gulsun et al., Pg. 1 ¶ 0008, Pg. 2 ¶ 0021 and 0023 - 0026, Pg. 3 ¶ 0031, Pg. 5 ¶ 0049 - 0052, Pg. 6 ¶ 0058 - 0061, Pg. 7 ¶ 0068 - 0075) a second interface circuit connectable to a display; (Gulsun et al., Fig. 8, Pg. 5 ¶ 0047, Pg. 6 ¶ 0057 - 0061, Pg. 7 ¶ 0072 - 0073) and a processor (Gulsun et al., Fig. 8, Pg. 1 ¶ 0008, Pg. 2 ¶ 0023, Pg. 5 ¶ 0051, Pg. 6 ¶ 0058 - 0060 and 0064, Pg. 7 ¶ 0067 - 0075) configured to: acquire a plurality of cross-sectional images of the luminal organ along a longitudinal direction thereof, (Gulsun et al., Figs. 2 -5, Pg. 1 ¶ 0008, Pg. 2 ¶ 0021 - 0026, Pg. 3 ¶ 0028 and 0036, Pg. 6 ¶ 0061) input the acquired images into a first machine learning model that has been trained to classify each pixel in an image of a luminal organ (Gulsun et al., Abstract, Figs. 3 - 5, Pg. 1 ¶ 0005 - 0006 and 0008, Pg. 3 ¶ 0028, 0032, 0034 and 0036, Pg. 4 ¶ 0042, Pg. 
5 ¶ 0046 - 0048) and for each of the acquired images, obtain position data indicating a boundary between predetermined regions of the luminal organ based on segmentation data output from the first machine learning model that classifies each pixel of the acquired image, (Gulsun et al., Figs. 3 & 5, Pg. 1 ¶ 0003 - 0006 and 0008, Pg. 2 ¶ 0021 - 0023, Pg. 3 ¶ 0034, Pg. 4 ¶ 0042, Pg. 5 ¶ 0046 - 0052) select two of the images that are consecutive and identify a group of points corresponding to the boundary in each of the selected images based on the position data, (Gulsun et al., Figs. 3 - 5, Pg. 2 ¶ 0022, Pg. 4 ¶ 0042 - Pg. 5 ¶ 0049, Pg. 5 ¶ 0052, Pg. 6 ¶ 0057, Pg. 7 ¶ 0069) associate one of the selected images with the other image, (Gulsun et al., Figs. 3 - 5, Pg. 2 ¶ 0022, Pg. 4 ¶ 0042 - Pg. 5 ¶ 0047, Pg. 6 ¶ 0057, Pg. 7 ¶ 0069) generate a 3-D image of the luminal organ in which the one of the selected images is connected to the other image, (Gulsun et al., Figs. 3 & 7, Pg. 2 ¶ 0021 - 0022, Pg. 5 ¶ 0046 - 0048 and 0052, Pg. 6 ¶ 0057, Pg. 7 ¶ 0069) and control the display to display the generated 3-D image. (Gulsun et al., Fig. 8, Pg. 2 ¶ 0021 - 0023 and 0025 - 0026, Pg. 5 ¶ 0046 - 0047, Pg. 6 ¶ 0057, Pg. 7 ¶ 0072) Gulsun et al. fail to disclose explicitly a catheter having an ultrasonic probe and insertable into the luminal organ; controlling the catheter to acquire a plurality of cross-sectional images of the luminal organ when the catheter is inserted into the luminal organ and moved along a longitudinal direction thereof, associating one or more of the points in one of the selected images with one or more of the points in the other image, and generating a 3-D image in which said one or more of the points in one of the selected images are respectively connected to the associated points in the other image. Pertaining to analogous art, Aharon et al. 
disclose a medical image processing apparatus for processing medical images of a luminal organ, (Aharon et al., Abstract, Fig. 6, Pg. 1 ¶ 0006, Pg. 2 ¶ 0030 - Pg. 3 ¶ 0033, Pg. 5 ¶ 0046 - 0049, Pg. 6 ¶ 0063 - Pg. 7 ¶ 0065) a method carried out by a medical image processing apparatus for processing medical images of a luminal organ, (Aharon et al., Abstract, Fig. 6, Pg. 1 ¶ 0006, Pg. 2 ¶ 0030 - Pg. 3 ¶ 0033, Pg. 5 ¶ 0046 - 0049, Pg. 6 ¶ 0063 - Pg. 7 ¶ 0065) and a non-transitory computer readable medium storing a program causing a computer to execute a method for processing medical images of a luminal organ, (Aharon et al., Abstract, Fig. 6, Pg. 1 ¶ 0006, Pg. 2 ¶ 0030 - Pg. 3 ¶ 0033, Pg. 5 ¶ 0046 - 0049, Pg. 6 ¶ 0063 - Pg. 7 ¶ 0065) comprising: a first interface circuit connectable to an ultrasonic imaging system; (Aharon et al., Fig. 6, Pg. 3 ¶ 0031, Pg. 6 ¶ 0063 - Pg. 7 ¶ 0065) a second interface circuit connectable to a display; (Aharon et al., Fig. 6, Pg. 6 ¶ 0063 - Pg. 7 ¶ 0065) and a processor (Aharon et al., Fig. 6, Pg. 3 ¶ 0031, Pg. 6 ¶ 0063 - Pg. 7 ¶ 0065) configured to: select two of the images that are consecutive and identify a group of points corresponding to the boundary in each of the selected images based on the position data, (Aharon et al., Fig. 5, Pg. 1 ¶ 0006, Pg. 2 ¶ 0027, Pg. 3 ¶ 0031 - 0033 and 0036 - 0037, Pg. 5 ¶ 0047 - 0049) associate one or more of the points in one of the selected images with one or more of the points in the other image, (Aharon et al., Fig. 5, Pg. 2 ¶ 0027, Pg. 5 ¶ 0047 - 0049) generate a 3-D image of the luminal organ in which said one or more of the points in one of the selected images are respectively connected to the associated points in the other image, (Aharon et al., Figs. 1a, 1b & 5, Pg. 5 ¶ 0049 - 0050) and control the display to display the generated 3-D image. (Aharon et al., Figs. 1a, 1b, 5 & 6, Pg. 5 ¶ 0049 - 0050, Pg. 6 ¶ 0060 - 0064) Aharon et al. 
fail to disclose explicitly a catheter having an ultrasonic probe and insertable into the luminal organ; and controlling the catheter to acquire a plurality of cross-sectional images of the luminal organ when the catheter is inserted into the luminal organ and moved along a longitudinal direction thereof. Pertaining to analogous art, Balocco et al. disclose a medical image processing apparatus for processing medical images of a luminal organ, (Balocco et al., Abstract, Figs. 1 & 4 - 13, Pg. 2 ¶ 0022 - 0024 and 0026 - 0027, Pg. 3 ¶ 0040, Pg. 4 ¶ 0045 - 0046 and 0050 - 0053, Pg. 5 ¶ 0056 - 0058 and 0063 - 0066, Pg. 6 ¶ 0071 - 0075, Pg. 9 ¶ 0100 - 0101) a method carried out by a medical image processing apparatus for processing medical images of a luminal organ, (Balocco et al., Abstract, Figs. 1 & 4 - 13, Pg. 2 ¶ 0022 - 0024 and 0026 - 0027, Pg. 3 ¶ 0040, Pg. 4 ¶ 0045 - 0046 and 0050 - 0053, Pg. 5 ¶ 0056 - 0058 and 0063 - 0066, Pg. 6 ¶ 0071 - 0075, Pg. 9 ¶ 0100 - 0101) and a non-transitory computer readable medium storing a program causing a computer to execute a method for processing medical images of a luminal organ, (Balocco et al., Abstract, Figs. 1 & 4 - 13, Pg. 2 ¶ 0022 - 0024 and 0026 - 0027, Pg. 3 ¶ 0040, Pg. 4 ¶ 0045 - 0046 and 0050 - 0053, Pg. 5 ¶ 0056 - 0058 and 0063 - 0066, Pg. 6 ¶ 0071 - 0075, Pg. 9 ¶ 0100 - 0101) comprising: a first interface circuit connectable to a catheter having an ultrasonic probe and insertable into the luminal organ; (Balocco et al., Abstract, Figs. 1 - 3 & 8, Pg. 2 ¶ 0022 and 0026 - 0031, Pg. 3 ¶ 0036 - 0040, Pg. 4 ¶ 0044 - 0047) a second interface circuit connectable to a display; (Balocco et al., Fig. 1, Pg. 2 ¶ 0022 - 0025 and 0027 - 0029, Pg. 3 ¶ 0038 - 0039, Pg. 4 ¶ 0051) and a processor (Balocco et al., Fig. 1, Pg. 2 ¶ 0023 - 0025 and 0027 - 0029, Pg. 3 ¶ 0038, Pg. 4 ¶ 0045, Pg. 
9 ¶ 0100 - 0101) configured to: control the catheter to acquire a plurality of cross-sectional images of the luminal organ when the catheter is inserted into the luminal organ and moved along a longitudinal direction thereof. (Balocco et al., Abstract, Figs. 1, 5 & 8, Pg. 2 ¶ 0022 and 0026 - 0028, Pg. 3 ¶ 0036 - 0040, Pg. 4 ¶ 0044 - 0048 and 0051 - 0053) Gulsun et al. and Aharon et al. are combinable because they are both directed towards medical image processing systems and methods that process cross-sectional ultrasound images of a luminal organ. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Gulsun et al. with the teachings of Aharon et al. This modification would have been prompted in order to substitute the three-dimensional model generation technique of Gulsun et al. for the three-dimensional model construction process of Aharon et al. The three-dimensional model construction process of Aharon et al. could be substituted in place of the three-dimensional model generation technique of Gulsun et al. using well-known techniques in the art and would likely yield predictable results, in that, in the combination, the three-dimensional model construction process of Aharon et al. would be utilized to create the three-dimensional vessel model of the base device of Gulsun et al. Furthermore, this modification would have been prompted by the teachings and suggestions of Gulsun et al. that the segmented images may be stitched or connected to provide a wire frame model, e.g., mesh, of vessel, see at least page 5 paragraph 0047 and page 6 paragraph 0057 of Gulsun et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the base device of Gulsun et al. would utilize the three-dimensional model construction process of Aharon et al. to create the three-dimensional model of the vessel. 
In addition, Gulsun et al. in view of Aharon et al. and Balocco et al. are combinable because they are all directed towards medical image processing systems and methods that process cross-sectional ultrasound images of a luminal organ. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Gulsun et al. in view of Aharon et al. with the teachings of Balocco et al. This modification would have been prompted in order to substitute the ultrasound medical imaging system of Gulsun et al. for the intravascular ultrasound imaging system of Balocco et al. The intravascular ultrasound imaging system of Balocco et al. could be substituted in place of the ultrasound medical imaging system of Gulsun et al. using well-known techniques in the art and would likely yield predictable results, in that, in the combination, the combined base device of Gulsun et al. in view of Aharon et al. would utilize the intravascular ultrasound imaging system of Balocco et al. to acquire the plurality of cross-sectional images of the luminal organ. Furthermore, this modification would have been prompted by the teachings and suggestions of Gulsun et al. that their series of image frames may be acquired using any imaging technique, see at least page 2 paragraph 0024, page 5 paragraphs 0049 - 0052 and page 6 paragraph 0061 of Gulsun et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the combined base device of Gulsun et al. in view of Aharon et al. would utilize the intravascular ultrasound imaging system of Balocco et al. to acquire the plurality of cross-sectional images of the luminal organ. Therefore, it would have been obvious to combine Gulsun et al. with Aharon et al. and Balocco et al. to obtain the invention as specified in claims 1, 13 and 20.
- With regards to claims 2 and 14, Gulsun et al. in view of Aharon et al. in view of Balocco et al. disclose the medical image processing apparatus and method according to claims 1 and 13, respectively. Gulsun et al. fail to disclose explicitly wherein the processor is configured to: determine a difference between each of the points in one of the selected images and each of the points in the other image, and associate one of the points in one of the selected images and one of the points in the other image, a difference of which is smallest. Pertaining to analogous art, Aharon et al. disclose wherein the processor (Aharon et al., Fig. 6, Pg. 3 ¶ 0031, Pg. 6 ¶ 0063 - Pg. 7 ¶ 0065) is configured to: determine a difference between each of the points in one of the selected images and each of the points in the other image, (Aharon et al., Fig. 5, Pg. 3 ¶ 0031 - 0033 and 0036 - 0037, Pg. 5 ¶ 0047, Pg. 6 ¶ 0059) and associate one of the points in one of the selected images and one of the points in the other image, a difference of which is smallest. (Aharon et al., Fig. 5, Pg. 3 ¶ 0031 - 0033 and 0036 - 0037, Pg. 5 ¶ 0047, Pg. 6 ¶ 0059 [“A 3D coronary triangulated surface model, also known as a mesh, is then constructed at step 128 from these 2D contours. Starting with successive contours, that is, cross-sectional boundaries, one boundary is sampled and the corresponding points on the other contour are located, from which a local triangulation surface is constructed. Each point is connected to the two closest points on the other contour to form a tubular mesh. FIG. 5 depicts triangulation between two successive contours, according to an embodiment of the invention.”])
- With regards to claims 3 and 15, Gulsun et al. in view of Aharon et al. in view of Balocco et al. disclose the medical image processing apparatus and method according to claims 2 and 14, respectively. Gulsun et al. fail to disclose explicitly wherein the processor is configured to determine not to associate one of the points in one of the selected images and one of the points in the other image, a difference of which is greater than or equal to a threshold. Pertaining to analogous art, Aharon et al. disclose wherein the processor (Aharon et al., Fig. 6, Pg. 3 ¶ 0031, Pg. 6 ¶ 0063 - Pg. 7 ¶ 0065) is configured to determine not to associate one of the points in one of the selected images and one of the points in the other image, a difference of which is greater than or equal to a threshold. (Aharon et al., Fig. 5, Pg. 3 ¶ 0031 - 0033 and 0036 - 0037, Pg. 5 ¶ 0047, Pg. 6 ¶ 0059 [“Each point is connected to the two closest points on the other contour to form a tubular mesh.” The Examiner asserts that if a distance between a point in a first image and a point in a successive image is greater than a distance between the second closest point in the successive image to the point in the first image, i.e., a threshold, then they are not associated to each other.])
- With regards to claims 4 and 16, Gulsun et al. in view of Aharon et al. in view of Balocco et al. disclose the medical image processing apparatus and method according to claims 1 and 13, respectively. Gulsun et al. fail to disclose expressly wherein in the 3-D image, said one or more of the points in one of the selected images are connected, and said one or more of the points in the other image are connected. Pertaining to analogous art, Aharon et al. disclose wherein in the 3-D image, said one or more of the points in one of the selected images are connected, and said one or more of the points in the other image are connected. (Aharon et al., Figs. 1a, 1b, 4, 5, 7 & 8, Pg. 3 ¶ 0032 - 0034 and 0036 - 0037, Pg. 5 ¶ 0047 - 0049)
- With regards to claims 7 and 19, Gulsun et al. in view of Aharon et al. in view of Balocco et al. disclose the medical image processing apparatus and method according to claims 1 and 13, respectively, wherein the processor is configured to: determine whether an artifact is shown in each of the images based on the segmentation data, (Gulsun et al., Pg. 4 ¶ 0042, Pg. 5 ¶ 0046 - 0048, Pg. 7 ¶ 0070 - 0072) and upon determining that an artifact is shown in one of the images, modify the image such that a part of the image corresponding to the artifact is emphasized. (Gulsun et al., Pg. 4 ¶ 0042, Pg. 5 ¶ 0046 - 0048, Pg. 6 ¶ 0057, Pg. 7 ¶ 0070 - 0072)
- With regards to claim 8, Gulsun et al. in view of Aharon et al. in view of Balocco et al. disclose the medical image processing apparatus according to claim 1, wherein the processor is configured to: input the acquired images into a second machine learning model that has been trained to detect a presence or an absence of an object in an image of a luminal organ, (Gulsun et al., Figs. 3 - 7, Pg. 3 ¶ 0034 - 0036, Pg. 4 ¶ 0042, Pg. 5 ¶ 0046 - 0050, Pg. 6 ¶ 0056 - 0057, Pg. 7 ¶ 0069 - 0072) and for each of the acquired images, obtain information from the second machine learning model indicating whether or not an object is present or absent in the acquired image, (Gulsun et al., Figs. 3 - 7, Pg. 4 ¶ 0042, Pg. 5 ¶ 0046 - 0050, Pg. 6 ¶ 0057, Pg. 7 ¶ 0069 - 0072) and superimpose an image of the object on the 3-D image based on the information. (Gulsun et al., Figs. 3 - 7, Pg. 4 ¶ 0042, Pg. 5 ¶ 0046 - 0049, Pg. 6 ¶ 0057, Pg. 7 ¶ 0069 - 0072)
- With regards to claim 10, Gulsun et al. in view of Aharon et al. in view of Balocco et al. disclose the medical image processing apparatus according to claim 8, wherein the object includes at least one of an artifact and a lesion. (Gulsun et al., Pg. 4 ¶ 0042, Pg. 5 ¶ 0046 - 0049, Pg. 7 ¶ 0070 - 0072)
- With regards to claim 11, Gulsun et al. in view of Aharon et al. in view of Balocco et al. disclose the medical image processing apparatus according to claim 8, wherein the object and the luminal organ are displayed in different colors. (Gulsun et al., Pg. 4 ¶ 0042, Pg. 5 ¶ 0046 - 0048)
- With regards to claim 12, Gulsun et al. in view of Aharon et al. in view of Balocco et al. disclose the medical image processing apparatus according to claim 1. Gulsun et al. fail to disclose explicitly wherein the processor controls the catheter to acquire the images based on a gravity center moving distance of the luminal organ, predetermined cardiac cycle data, or correlation data of a predetermined index of the predetermined site. Pertaining to analogous art, Balocco et al. disclose wherein the processor (Balocco et al., Fig. 1, Pg. 2 ¶ 0023 - 0025 and 0027 - 0029, Pg. 3 ¶ 0038, Pg. 4 ¶ 0045, Pg. 9 ¶ 0100 - 0101) controls the catheter to acquire the images based on a gravity center moving distance of the luminal organ, predetermined cardiac cycle data, or correlation data of a predetermined index of the predetermined site. (Balocco et al., Figs. 1, 5 & 8, Pg. 2 ¶ 0022 - 0024 and 0026 - 0028, Pg. 4 ¶ 0051 and 0054, Pg. 5 ¶ 0064 - 0065, Pg. 6 ¶ 0071 - 0072, Pg. 9 ¶ 0100 - 0101 [“an image-based gating method can be used to identify the frames with minimal motion blur that can be considered as belonging to the same phase of the cardiac cycle.”]) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Gulsun et al. in view of Aharon et al. in view of Balocco et al. with additional teachings of Balocco et al. This modification would have been prompted in order to enhance the combined base device of Gulsun et al. in view of Aharon et al. in view of Balocco et al. with the well-known and applicable technique Balocco et al. applied to a comparable device. 
Controlling the catheter to acquire the images based on a gravity center moving distance of the luminal organ, predetermined cardiac cycle data, or correlation data of a predetermined index of the predetermined site, as taught by Balocco et al., would enhance the combined base device by helping ensure that optimally stable frames exhibiting minimal motion blur are acquired and utilized to generate the three-dimensional model of the luminal organ, thereby improving its ability to generate an accurate, reliable and high-quality three-dimensional model of the luminal organ. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the catheter would be controlled to acquire the images based on a gravity center moving distance of the luminal organ, predetermined cardiac cycle data, or correlation data of a predetermined index of the predetermined site so as to help ensure that the combined base device acquires and utilizes optimally stable frames with minimal motion blur to generate the three-dimensional model of the luminal organ, thereby improving its ability to generate an accurate, reliable and high-quality three-dimensional model of the luminal organ. Therefore, it would have been obvious to combine Gulsun et al. in view of Aharon et al. in view of Balocco et al. with additional teachings of Balocco et al. to obtain the invention as specified in claim 12.
Claims 5 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Gulsun et al. U.S. Publication No. 2019/0130578 A1 in view of Aharon et al. U.S. Publication No. 2008/0100621 A1 in view of Balocco et al. U.S. Publication No. 2012/0130243 A1 as applied to claims 1 and 13 above, and further in view of Manabe et al. U.S. Publication No. 2010/0092053 A1.
- With regards to claims 5 and 17, Gulsun et al. in view of Aharon et al. in view of Balocco et al. disclose the medical image processing apparatus and method according to claims 1 and 13, respectively. Gulsun et al. fail to disclose explicitly wherein the processor is configured to: determine whether a side branch of the luminal organ is shown in each of the images based on the segmentation data, and determine not to connect one or more of the points in one of the selected images corresponding to the side branch and the associated points in the other image. Pertaining to analogous art, Balocco et al. disclose wherein the processor (Balocco et al., Fig. 1, Pg. 2 ¶ 0023 - 0025 and 0027 - 0029, Pg. 3 ¶ 0038, Pg. 4 ¶ 0045, Pg. 9 ¶ 0100 - 0101) is configured to: determine whether a side branch of the luminal organ is shown in each of the images based on the segmentation data. (Balocco et al., Figs. 5 & 8, Pg. 2 ¶ 0022, Pg. 4 ¶ 0051 - 0055, Pg. 5 ¶ 0057 - 0060, 0062 - 0064 and 0066, Pg. 6 ¶ 0075, Pg. 9 ¶ 0096 - 0099) Balocco et al. fail to disclose explicitly determining not to connect one or more of the points in one of the selected images corresponding to the side branch and the associated points in the other image. Pertaining to analogous art, Manabe et al. disclose wherein the processor (Manabe et al., Figs. 1 & 2, Pg. 1 ¶ 0008 - 0009, Pg. 2 ¶ 0045 and 0048 - 0050, Pg. 3 ¶ 0055 - 0057) is configured to: determine whether a side branch of the luminal organ is shown in each of the images based on the segmentation data, (Manabe et al., Abstract, Figs. 3, 8, 12 & 19 - 22, Pg. 1 ¶ 0009 - 0010, Pg. 3 ¶ 0058 and 0061, Pg. 4 ¶ 0064 - 0069, Pg. 6 ¶ 0085 - 0088 and 0093) and determine not to connect one or more of the points in one of the selected images corresponding to the side branch and the associated points in the other image. (Manabe et al., Abstract, Figs. 8, 12 & 19 - 22, Pg. 3 ¶ 0061, Pg. 4 ¶ 0067 - 0069, Pg. 6 ¶ 0086 and 0093) Gulsun et al. in view of Aharon et al. 
in view of Balocco et al. and Manabe et al. are combinable because they are all directed towards medical image processing systems and methods that process cross-sectional images of a luminal organ. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Gulsun et al. in view of Aharon et al. in view of Balocco et al. with the teachings of Manabe et al. This modification would have been prompted in order to enhance the combined base device of Gulsun et al. in view of Aharon et al. in view of Balocco et al. with the well-known and applicable technique Manabe et al. applied to a comparable device. Determining whether a side branch of the luminal organ is shown in each of the images based on the segmentation data and determining not to connect one or more of the points in one of the selected images corresponding to the side branch and the associated points in the other image, as taught by Manabe et al., would enhance the combined base device by helping ensure that the continuity of a blood vessel remains uninterrupted due to the sudden appearance and/or disappearance of a side branch extending away from the blood vessel in the three-dimensional model of the blood vessel, thereby improving the ability of the combined base device to generate accurate, reliable and high-quality three-dimensional models of luminal organs. Furthermore, this modification would have been prompted by the teachings and suggestions of Aharon et al. that branch points require special handling, see at least page 5 paragraph 0048 of Aharon et al.
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the combined base device would determine whether a side branch of the luminal organ is shown in each of the images and determine not to connect one or more of the points in one of the selected images corresponding to the side branch and the associated points in the other image so as to help ensure that the continuity of a blood vessel is preserved in the three-dimensional model of the blood vessel and thereby improve the ability of the combined base device to generate accurate, reliable and high-quality three-dimensional models of luminal organs. Therefore, it would have been obvious to combine Gulsun et al. in view of Aharon et al. in view of Balocco et al. with Manabe et al. to obtain the invention as specified in claims 5 and 17.
Claims 6 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Gulsun et al. U.S. Publication No. 2019/0130578 A1 in view of Aharon et al. U.S. Publication No. 2008/0100621 A1 in view of Balocco et al. U.S. Publication No. 2012/0130243 A1 as applied to claims 1 and 13 above, and further in view of Schmitt et al. U.S. Publication No. 2011/0071404 A1.
- With regards to claims 6 and 18, Gulsun et al. in view of Aharon et al. in view of Balocco et al. disclose the medical image processing apparatus and method according to claims 1 and 13, respectively. Gulsun et al. fail to disclose explicitly wherein the processor is configured to: determine whether a part of the boundary falls outside of each of the images based on the segmentation data, and upon determining that a part of the boundary falls outside of one of the images, modify the image such that the part of the boundary is interpolated therein. Pertaining to analogous art, Schmitt et al. disclose wherein the processor (Schmitt et al., Fig. 1A, Pg. 4 ¶ 0058, Pg. 11 ¶ 0128 - 0135) is configured to: determine whether a part of the boundary falls outside of each of the images based on the segmentation data, (Schmitt et al., Figs. 2 & 6a, Pg. 2 ¶ 0017 - 0019, Pg. 3 ¶ 0026, Pg. 4 ¶ 0058 - 0060 and 0063 - 0064, Pg. 5 ¶ 0070, Pg. 5 ¶ 0074 - Pg. 6 ¶ 0076) and upon determining that a part of the boundary falls outside of one of the images, modify the image such that the part of the boundary is interpolated therein. (Schmitt et al., Figs. 2 & 6a, Pg. 2 ¶ 0017 - 0019, Pg. 3 ¶ 0026, Pg. 4 ¶ 0058 - 0060 and 0063 - 0064, Pg. 5 ¶ 0070, Pg. 5 ¶ 0074 - Pg. 6 ¶ 0076) Gulsun et al. in view of Aharon et al. in view of Balocco et al. and Schmitt et al. are combinable because they are all directed towards medical image processing systems and methods that process cross-sectional images of a luminal organ. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Gulsun et al. in view of Aharon et al. in view of Balocco et al. with the teachings of Schmitt et al. This modification would have been prompted in order to enhance the combined base device of Gulsun et al. in view of Aharon et al. in view of Balocco et al. with the well-known and applicable technique Schmitt et al. 
applied to a comparable device. Determining whether a part of the boundary falls outside of each of the images based on the segmentation data and modifying an image such that the part of the boundary is interpolated therein upon determining that a part of the boundary falls outside of the image, as taught by Schmitt et al., would enhance the combined base device by improving its ability to generate accurate, reliable and high-quality three-dimensional models of luminal organs since any missing parts of the boundary of a luminal organ in the images of the luminal organ would be obtained via interpolation so as to help ensure that the continuity of the luminal organ is accurately preserved and represented in the three-dimensional model of the luminal organ. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the combined base device would determine whether a part of the boundary falls outside of each of the images based on the segmentation data and modify an image such that the part of the boundary is interpolated therein upon determining that a part of the boundary falls outside of the image so as to improve its ability to generate accurate, reliable and high-quality three-dimensional models of luminal organs since any missing parts of the boundary of a luminal organ in the images of the luminal organ would be obtained via interpolation in order to help ensure that the continuity of the luminal organ is accurately preserved and represented in the three-dimensional model of the luminal organ. Therefore, it would have been obvious to combine Gulsun et al. in view of Aharon et al. in view of Balocco et al. with Schmitt et al. to obtain the invention as specified in claims 6 and 18.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Gulsun et al. U.S. Publication No. 2019/0130578 A1 in view of Aharon et al. U.S. Publication No. 2008/0100621 A1 in view of Balocco et al. U.S. Publication No. 2012/0130243 A1 as applied to claim 8 above, and further in view of Wilson et al. U.S. Publication No. 2021/0125337 A1.
- With regards to claim 9, Gulsun et al. in view of Aharon et al. in view of Balocco et al. disclose the medical image processing apparatus according to claim 8, wherein the second machine learning model has been trained using a cross-sectional image of the luminal organ and data indicating whether an object exists on the image. (Gulsun et al., Pg. 3 ¶ 0034 - 0036, Pg. 4 ¶ 0038 - 0043, Pg. 5 ¶ 0046 - 0050, Pg. 5 ¶ 0055 - Pg. 6 ¶ 0057) Gulsun et al. fail to disclose expressly using data indicating whether an object exists on a scan line of the image at each angle. Pertaining to analogous art, Wilson et al. disclose wherein the second machine learning model has been trained using a cross-sectional image of the luminal organ and data indicating whether an object exists on a scan line of the image at each angle. (Wilson et al., Abstract, Figs. 1 - 3, Pg. 2 ¶ 0020 - 0023, Pg. 3 ¶ 0028 - 0031, Pg. 4 ¶ 0043 - 0046, Pg. 5 ¶ 0058 - 0060, Pg. 6 ¶ 0067 - 0068, Pg. 8 ¶ 0085 - Pg. 9 ¶ 0086) Gulsun et al. in view of Aharon et al. in view of Balocco et al. and Wilson et al. are combinable because they are all directed towards medical image processing systems and methods that process cross-sectional ultrasound images of a luminal organ. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Gulsun et al. in view of Aharon et al. in view of Balocco et al. with the teachings of Wilson et al. This modification would have been prompted in order to enhance the combined base device of Gulsun et al. in view of Aharon et al. in view of Balocco et al. with the well-known and applicable technique Wilson et al. applied to a comparable device. 
Training the second machine learning model using a cross-sectional image of the luminal organ and data indicating whether an object exists on a scan line of the image at each angle, as taught by Wilson et al., would enhance the combined base device by improving the performance of the second machine learning model to accurately and reliably detect the presence or absence of an object in images of the luminal organ, as taught and suggested by Wilson et al. who found that machine learning models trained using (r, θ) representations of data performed better than machine learning models trained using (x, y) representations of data, see at least page 5 paragraph 0058, page 7 paragraphs 0078 - 0079 and page 9 paragraph 0086 of Wilson et al. Furthermore, this modification would have been prompted by the teachings and suggestions of Gulsun et al. that annotated two-dimensional cross-sectional images are used as training data, that additional data may be utilized to train the second machine learning model and that the second machine learning model may be trained to identify different types of tissues, obstructions, calcifications and foreign objects, see at least figures 4 - 5, page 3 paragraphs 0034 - 0036, page 4 paragraphs 0038 and 0040 - 0044, page 5 paragraphs 0046 - 0050 and page 5 paragraph 0055 - page 6 paragraph 0057 of Gulsun et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the second machine learning model of the combined base device would be trained using a cross-sectional image of the luminal organ and data indicating whether an object exists on a scan line of the image at each angle so as to improve its performance with regards to detecting the presence or absence of an object in images of the luminal organ, as taught and suggested by Wilson et al., see at least page 5 paragraph 0058, page 7 paragraphs 0078 - 0079 and page 9 paragraph 0086 of Wilson et al. 
Therefore, it would have been obvious to combine Gulsun et al. in view of Aharon et al. in view of Balocco et al. with Wilson et al. to obtain the invention as specified in claim 9.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Ilegbusi et al. U.S. Publication No. 2008/0228086 A1; which is directed towards image processing systems and methods for evaluating body vessels, wherein intravascular ultrasound (IVUS) is utilized to capture a plurality of IVUS images from within a body vessel, such as a coronary artery, and a three-dimensional model of the vessel is constructed based on the captured plurality of IVUS images.
Schmeitz et al. U.S. Publication No. 2022/0142606 A1; which is directed towards an ultrasound data processing method and device, wherein a plurality of frames of ultrasound data are captured as an intravascular ultrasound (IVUS) device is pulled through a blood vessel and a neural network processes the plurality of frames to detect the presence of any intravascular objects in the blood vessel.
Tang et al. U.S. Publication No. 2011/0295579 A1; which is directed towards a medical imaging system and method, wherein contour points corresponding to a lumen contour are identified on a plurality of images collected via intravascular ultrasound (IVUS) and contours from neighboring images are associated with each other to generate a three-dimensional model of the lumen.
Wang et al. U.S. Publication No. 2022/0335613 A1; which is directed towards image processing systems and methods for determining a stenosis region of a vessel, wherein a trained machine learning model processes intravascular ultrasound (IVUS) images of a vessel to identify a stenosis region of the vessel.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC RUSH, whose telephone number is (571) 270-3017. The examiner can normally be reached Monday - Friday, 9am - 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ERIC RUSH/Primary Examiner, Art Unit 2677