DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Regarding 35 U.S.C. 101
Examiner notes that the previously set forth 101 rejection is withdrawn in view of the amendments to the claims.
Regarding 35 U.S.C. 112(f)
Examiner notes that the previously set forth 112(f) claim interpretations are withdrawn in view of the amendments to the claims.
Regarding 35 U.S.C. 112(b)
Examiner notes that the previously set forth 112(b) rejections are withdrawn in view of the amendments to the claims.
Applicant's arguments filed 01/27/2026 have been fully considered but they are not persuasive. For example, applicant argues that Ebata does not teach the amended claim language of dynamically switching displaying… however, it is noted that Ebata is not relied upon for this feature. Specifically, it is noted that Hattori is relied upon for teaching dynamically changing the ratio. Applicant does not provide any specific arguments as to why the combination of Ebata and Hattori does not teach the claimed invention, but rather provides a conclusory statement that Hattori, Yacono, and Takeuchi, either individually or in combination with Ebata, likewise fail to disclose the cited features. Examiner notes that the claim merely requires dynamically switching displaying between a first display image and a second display image, where it is noted that Hattori teaches dynamically changing the ratio of the image display regions in response to a user input. It is therefore noted that it would have been obvious to dynamically switch between a first display image in which the size of the visual field image display region is greater than the size of the ultrasound image display region (e.g. as shown in fig. 7 of Hattori) in a first state in which the inside of the body of the subject is not being imaged and a second display image in which the size of the ultrasound image display region is greater than the size of the visual field image display region (e.g. as shown in fig. 14 of Hattori) in a second state in which the inside of the body of the subject is being imaged, according to user preference (i.e. the user may switch to either display image dynamically as desired, including instances of the first state and/or second state). Examiner notes that there is no specificity as to the nature of the dynamic switching (e.g. automatically in response to a determination of the first/second imaging state); thus the combination of Ebata and Hattori is considered to read on the claimed invention.
To help promote expedited prosecution of this case, the Applicant is invited to contact the Examiner to set up an interview to discuss the distinguishing features of the invention over the cited prior art and clarification of the amended claim language.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4, 6 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Ebata (US 20230172583 A1), hereinafter Ebata, in view of Hattori et al. (US 20220133281 A1), hereinafter Hattori.
Regarding claims 1 and 10,
Ebata teaches a medical image diagnosis system (at least fig. 21 and corresponding disclosure in at least [0184]) comprising:
an ultrasound probe (at least fig. 1 (1F) and corresponding disclosure in at least [0186]) configured to detect and output a reflected wave signal obtained by reflecting an ultrasound signal transmitted to a subject within a body of the subject (see disclosure in at least [0188] disclosing a transmission and reception circuit and at least [0028] disclosing performing a scan along a body surface of a subject);
A head-mounted display (at least fig. 21 (2F) and corresponding disclosure in at least [0185]) configured to be attached to a head of an examiner ([0072] which discloses the HMD 2 is a display device that is worn on the user's head and is visually recognized by the user, is also called smart glasses, and has a so-called eyeglass-like shape as illustrated in FIG. 3) and including at least a camera (at least fig. 21 (24) and corresponding disclosure in at least [0058]) configured to output an image signal obtained by capturing an image including the external appearance of the subject ([0116] which discloses in the visual field image C, the subject T positioned in front of the user and the ultrasound probe 1 in contact with the surface of the wound S of the subject T are imaged) and a display device (at least fig. 21 (23) and corresponding disclosure) configured to display an image input to be presented to the examiner ([0126] which discloses a configuration can be made in which the HMD-side monitor 23 has non-transparency, the visual field image C captured by the camera 24 of the HMD 2 is displayed on the HMD-side monitor 23, the user observes the visual field image C displayed on the HMD-side monitor 23, and the scanned region mask M is displayed on the HMD-side monitor 23 by being superimposed on the visual field image C); and
Processing circuitry (at least fig. 21 (17F and 26F) and corresponding disclosure in at least [0188]-[0191]) configured to acquire at least the reflected wave signal and the image signal (see at least fig. 21 depicting an image generation unit which would receive the reflected wave signal and HMD control unit/scanning position detection unit which would receive the image signal), control a process of displaying an ultrasound image based on the reflected wave signal ([0196] which discloses the ultrasound image generated by the image generation unit 32 can be wirelessly transmitted from the probe-side wireless communication unit 13 of the ultrasound probe 1F to the HMD-side wireless communication unit 21 of the HMD 2F, and the ultrasound image can be displayed on the HMD-side monitor 23. Examiner notes such display would be controlled by the HMD control unit 25F) and a visual field image based on the image signal ([0126] which discloses a configuration can be made in which the HMD-side monitor 23 has non-transparency, the visual field image C captured by the camera 24 of the HMD 2 is displayed on the HMD-side monitor 23, the user observes the visual field image C displayed on the HMD-side monitor 23, and the scanned region mask M is displayed on the HMD-side monitor 23 by being superimposed on the visual field image C. Examiner notes such display would be controlled by the HMD control unit 25F), and
Determine whether or not a state is an imaging state in which the ultrasound probe detects the reflected wave signal and the inside of the body of the subject is imaged ([0093] which discloses the scanning determination unit 36 determines whether or not the ultrasound probe 1 is performing a scan by analyzing the ultrasound image generated by the image generation unit 32. In a case where the ultrasound probe 1 is in a so-called air radiation state in which the ultrasound probe 1 is not in contact with the body surface of the subject T, the ultrasound image generated by the image generation unit 32 shows a highly uniform brightness distribution over the entire image, but in a case where the ultrasound probe 1 is in contact with the body surface of the subject T and is performing a scan, the ultrasound image generated by the image generation unit 32 shows a brightness distribution having shading corresponding to an internal tissue of the subject T. Therefore, it is possible to determine whether or not the ultrasound probe 1 is performing a scan by analyzing the ultrasound image) and
Wherein the processing circuitry controls display of an ultrasound image display region where the ultrasound image is displayed on the display device and a visual field image display region where the visual field image is displayed on the display device based on an imaging state determination result ([0126] which discloses For example, a configuration can be made in which the HMD-side monitor 23 has non-transparency, the visual field image C captured by the camera 24 of the HMD 2 is displayed on the HMD-side monitor 23, the user observes the visual field image C displayed on the HMD-side monitor 23, and the scanned region mask M is displayed on the HMD-side monitor 23 by being superimposed on the visual field image C and [0090] which discloses the display control unit 33 performs predetermined processing on the ultrasound image sent from the image generation unit 32, and displays the ultrasound image on the main body-side monitor 34, under the control of the main body control unit 41. See also at least fig. 4).
Ebata fails to explicitly teach wherein the processing circuitry is configured to dynamically change a ratio between a size of an ultrasound image display region where the ultrasound image is displayed on the display device and a size of a visual field image display region where the visual field image is displayed on the display device based on an imaging state determination result and dynamically switch displaying between displaying of a first display image and displaying of a second display image on the display device, the first display image being an image in which the size of the visual field image display region is greater than the size of the ultrasound image display region in a first state in which the inside of the body is not being imaged and the second display image being an image in which the size of the ultrasound image display region is greater than the size of the visual field image display region in a second state in which the inside of the body of the subject is being imaged.
Hattori, in a similar field of endeavor involving ultrasound imaging, teaches wherein processing circuitry dynamically changes a ratio between a size of an ultrasound image display region where the ultrasound image is displayed on a display device and a size of a visual field display region where the visual field image is displayed on the display device based on an imaging state determination result ([0179] which discloses the disposition and the size of the ultrasound image U and the view image C displayed on the external monitor 45 can be arbitrarily changed on the external apparatus 4E side and [0193] which discloses here, the disposition and the size of the ultrasound image U and the view image C displayed on the external monitor 45 can be adjusted by an input operation of the observer through the input device 47. For example, in a case where the observer inputs instruction information for the guidance on adjusting the disposition and the size of the ultrasound image U and the view image C on the external monitor 45 through the input device 47, the input instruction information is input to the display controller 44 by way of the external controller 46E. The display controller 44 displays the ultrasound image U and the view image C synchronized with each other, for example, with the disposition and the size as shown in FIG. 14 based on the input instruction information. Examiner notes that the input instruction information is considered an imaging state determination result in its broadest reasonable interpretation),
dynamically switch displaying between displaying of a first display image and displaying of a second display image on the display device ([0179] which discloses the disposition and the size of the ultrasound image U and the view image C displayed on the external monitor 45 can be arbitrarily changed on the external apparatus 4E side and [0193] which discloses here, the disposition and the size of the ultrasound image U and the view image C displayed on the external monitor 45 can be adjusted by an input operation of the observer through the input device 47), the first display image being an image in which the size of the visual field image display region is greater than the size of the ultrasound image display region in a first state in which the body of the subject is not being imaged (see at least fig. 7. Examiner notes that the first display image would occur based on user preference and thus would occur in any state including a first state in which the body of the subject is not being imaged) and the second display image being an image in which the size of the ultrasound image display region is greater than the size of the visual field image display region in a second state in which the inside of the body of the subject is being imaged (see at least fig. 14).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Ebata to include dynamically changing a ratio between a size of an ultrasound image display region and a size of a visual field display region and dynamically switching between the first display image and the second display image as taught by Hattori in order to allow the examiner who observes the ultrasound image and the view image to more clearly confirm the ultrasound image and the view image conforming to the examiner’s preference (Hattori [0194]).
Examiner further notes that in the modified system, the dynamic changing of the ratio taught by Hattori is based on any processing functions taught by Ebata in view of the breadth of “based on”, and thus is based on the determination of whether or not the state is the imaging state as taught by Ebata in its broadest reasonable interpretation.
Regarding claim 2,
Ebata further teaches wherein the processing circuitry is further configured to determine the imaging state based on the reflected wave signal ([0093] which discloses the scanning determination unit 36 determines whether or not the ultrasound probe 1 is performing a scan by analyzing the ultrasound image generated by the image generation unit 32. In a case where the ultrasound probe 1 is in a so-called air radiation state in which the ultrasound probe 1 is not in contact with the body surface of the subject T, the ultrasound image generated by the image generation unit 32 shows a highly uniform brightness distribution over the entire image, but in a case where the ultrasound probe 1 is in contact with the body surface of the subject T and is performing a scan, the ultrasound image generated by the image generation unit 32 shows a brightness distribution having shading corresponding to an internal tissue of the subject T. Therefore, it is possible to determine whether or not the ultrasound probe 1 is performing a scan by analyzing the ultrasound image and [0187] which discloses the scanning determination unit 36 is the same as the scanning determination unit 36 of the diagnostic apparatus main body 3 illustrated in FIG. 1, and determines whether or not the ultrasound probe 1F is performing a scan by analyzing the ultrasound image generated by the image generation unit 32).
Regarding claim 4,
Ebata, as modified, teaches the elements of claim 1 as previously stated.
Ebata, as modified, further teaches wherein processing circuitry is further configured to cause the display device to perform a display process in which the ultrasound image display region overlaps a partial region within the visual field image display region in the first state (See at least fig. 7 of Hattori), and cause the display device to perform a display process in which the visual field image display region overlaps a partial region within the ultrasound image display region in the second state (see at least fig. 14 of Hattori).
Regarding claim 6,
Ebata further teaches wherein the processing circuitry is further configured to:
acquire position information indicating a position of the ultrasound probe ([0092] which discloses the scanning position detection unit 35 detects the scanning position of the ultrasound probe 1 by analyzing the visual field image C sent from the main body-side wireless communication unit 31. For example, the scanning position detection unit 35 can detect the scanning position of the ultrasound probe 1 by recognizing the distal end portion of the ultrasound probe 1, which is in contact with the surface of the wound S of the subject T, from the visual field image C), determine a detection state of the ultrasound probe based on the detection position ([0092] which discloses the scanning position detection unit 35 detects the scanning position of the ultrasound probe 1 by analyzing the visual field image C sent from the main body-side wireless communication unit 31. For example, the scanning position detection unit 35 can detect the scanning position of the ultrasound probe 1 by recognizing the distal end portion of the ultrasound probe 1, which is in contact with the surface of the wound S of the subject T, from the visual field image C. where the scanning position is considered a detection state) and determine the imaging state based on the determined detection state ([0155] which discloses further, the diagnostic apparatus main body 3C can be configured to be wirelessly connected to each of the ultrasound probe 1 and the HMD 2 illustrated in FIG. 1, and the scanning determination unit 36C can receive the visual field image C wirelessly transmitted from the HMD 2 to the main body-side wireless communication unit 31 of the diagnostic apparatus main body 3C, analyze the visual field image C, and determine whether or not the ultrasound probe 1 is performing a scan.
Since the body surface of the subject T and the ultrasound probe 1 are shown in the visual field image C, it is possible to determine that the ultrasound probe 1 is in contact with the body surface of the subject T and performing a scan, by analyzing the visual field image C. Examiner thus notes that the determining of the imaging state is based on any data determined by analyzing the visual field image including the determined detection state).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Ebata (US 20230172583 A1), hereinafter Ebata and Hattori et al. (US 20220133281 A1), hereinafter Hattori, as applied to claim 1 above, and further in view of Yacono (US 20140171959 A1), hereinafter Yacono.
Regarding claim 5,
Ebata, as modified, teaches the elements of claim 1 as previously stated. Ebata, as modified, fails to explicitly teach wherein the processing circuitry is further configured to acquire, from a visual line sensor included in the display device, information indicating a position of a visual line of the examiner toward the inside of an image display region in the display device, determine a gaze state of the examiner based on the information indicating the position of the visual line, and dynamically change the ratio between the size of the ultrasound image display region and the size of the visual field image display region based on a determination result of the gaze state.
Yacono, in a similar field of endeavor involving head-mounted displays in medical environments, teaches wherein the processing circuitry is further configured to acquire, from a visual line sensor included in a display device ([0059] which discloses a wearable user interface 450 includes an eye tracking system 452 that permits the console 102 to identify the point of gaze of the surgeon), a position of a visual line of the examiner toward the inside of an image display region in a display device ([0059] which discloses an eye tracking system that permits the console to identify the point of gaze of the surgeon. Thus the console receives eye-tracking data and thus receives a position of a visual line of the examiner toward the inside of an image display in order to identify the point of gaze of the surgeon), determine a gaze state of the examiner based on the position of the visual line ([0059] which discloses a wearable user interface 450 includes an eye tracking system 452 that permits the console 102 to identify the point of gaze of the surgeon) and dynamically change a ratio between sizes of image regions based on a gaze state determination result (see at least figs. 5 and 6 where images depicted in 316 and 312 have their sizes dynamically changed and corresponding disclosure in at least [0056] which discloses this wearable camera may capture and display images in real-time. In this embodiment, since the display in FIG. 5 is showing two captured images, it may replace one for the other as is shown in FIG. 6. That is, if both images are images captured by cameras, either the console 102 or the wearable user interface 104 may permit a surgeon to toggle between the image in the central surgical viewing area 312 and the image in the second display region 316. This is shown in FIG. 6.
The user may use any input to toggle between the images, including for example, an input on the wearable user interface or an input on the console 102 and [0059] which discloses the system 100 may highlight the particular selectable icon in the surgeons’ gaze on the wearable user interface. The surgeon may then select it using an input device such as a foot pedal).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Ebata, as currently modified, to include acquiring a position of a visual line of an examiner, determining a gaze state, and dynamically changing the ratio between image region sizes as taught by Yacono in order to enable selection and control of the console using the wearable user interface (Yacono [0059]). Furthermore, such a modification would enhance the procedure of Ebata, as currently modified, to allow for input control using eye gaze detection, thus allowing for hands-free control of the system.
Examiner further notes that in the modified system, the dynamic changing of the ratio taught by Hattori is based on any processing functions of the combined system in view of the breadth of “based on”, and thus is based on the gaze state as taught by Yacono in its broadest reasonable interpretation.
Claims 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Ebata (US 20230172583 A1), hereinafter Ebata and Hattori et al. (US 20220133281 A1), hereinafter Hattori, as applied to claim 1 above, and further in view of Takeuchi (US 20040019270 A1), hereinafter Takeuchi.
Regarding claim 7,
Ebata, as modified, teaches the elements of claim 1 as previously stated. Ebata, as modified, fails to explicitly teach wherein the processing circuitry is further configured to acquire past information from a past examination performed for the subject, and control a process of displaying the past information on the display device.
Takeuchi, in a similar field of endeavor involving ultrasound imaging, teaches processing circuitry (at least fig. 1 (14, 21-26 and 30-32) and corresponding disclosure in at least [0027]) configured to acquire past information from a past examination performed for the subject ([0041] which discloses ultrasonic images acquired in the past to be used as reference images are stored in storage medium), and control a process of displaying the past information on the display device (See at least figs. 6-7 depicting the reference image 42 (i.e. past information) displayed on the display device).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Ebata, as currently modified, to include acquiring and displaying past information as taught by Takeuchi in order to make manipulations easier and adequate for non-specialized or less experienced physicians or technicians (Takeuchi [0008]). Such a modification would allow for navigation to a previously imaged region for continual monitoring of a desired region thereby enhancing overall diagnostic procedures.
Regarding claim 8,
Ebata, as modified, teaches the elements of claim 7 as previously stated. Takeuchi, as applied to claim 7 above, further teaches wherein the past information includes a past image (42), which is the ultrasound image obtained in the past examination for the subject ([0041] which discloses ultrasonic images acquired in the past to be used as reference images).
Ebata, as currently modified, fails to explicitly teach wherein the past information includes an imaging position, which is a position where the reflected wave signal was detected by the ultrasound probe when the past image was captured, and wherein, in the second state, the processing circuitry is further configured to: cause the past image to be displayed in a display region smaller than the ultrasound image display region together with the ultrasound image and the visual field image, and cause a mark indicating the imaging position to be displayed.
Nonetheless, Takeuchi further teaches wherein the past information includes an imaging position, which is a position where the reflected wave signal was detected by the ultrasound probe when the past image was captured ([0041] which discloses the position information of the ultrasonic probe 12 when the respective ultrasonic images were acquired on a patient-by-patient basis), and wherein, in a second state in which the inside of the body is imaged, the processing circuitry is configured to cause the past image to be displayed in a display region smaller than an ultrasound image display region together with an ultrasound image displayed in the ultrasound image display region (see at least figs. 6-7A depicting the reference image 42 displayed smaller than an ultrasound image display region 41) and cause a mark indicating the imaging position to be superimposed and displayed on the display device (see at least fig. 7A (46) and corresponding disclosure in at least [0059]-[0069]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Ebata, as currently modified, to include displaying the at least one past image and causing a mark indicating the imaging position to be superimposed and displayed on the display device as taught by Takeuchi in order to make manipulations easier and adequate for non-specialized or less experienced physicians or technicians (Takeuchi [0008]). Such a modification would allow for navigation to a previously imaged region for continual monitoring of a desired region thereby enhancing overall diagnostic procedures.
Examiner notes that in the modified system the past image is displayed together with the ultrasound image and the visual field image of Ebata, as currently modified.
Ebata, as currently modified fails to explicitly teach the mark indicating the imaging position superimposed and displayed on the visual field image, however, Hattori further teaches a mark (at least fig. 6 (A) and corresponding disclosure in at least [0125]) for guidance purposes superimposed and displayed on the visual field image (at least fig. 6 (C) and corresponding disclosure in at least [0125]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Ebata, as currently modified, to include superimposing and displaying the mark on the visual field image as taught by Hattori in order to provide further contextual information to the probe navigation data. Such a modification would thus provide enhanced navigational information by overlaying the mark on the visual field image to indicate where on the body the probe should be moved to reach the desired position accordingly.
Regarding claim 9,
Ebata, as modified, teaches the elements of claim 8 as previously stated. Ebata further teaches wherein the processing circuitry is further configured to acquire position information indicating a position of the ultrasound probe ([0092] which discloses the scanning position detection unit 35 detects the scanning position of the ultrasound probe 1 by analyzing the visual field image C sent from the main body-side wireless communication unit 31. For example, the scanning position detection unit 35 can detect the scanning position of the ultrasound probe 1 by recognizing the distal end portion of the ultrasound probe 1, which is in contact with the surface of the wound S of the subject T, from the visual field image C), determine a detection state of the ultrasound probe based on the detection position ([0092] which discloses the scanning position detection unit 35 detects the scanning position of the ultrasound probe 1 by analyzing the visual field image C sent from the main body-side wireless communication unit 31. For example, the scanning position detection unit 35 can detect the scanning position of the ultrasound probe 1 by recognizing the distal end portion of the ultrasound probe 1, which is in contact with the surface of the wound S of the subject T, from the visual field image C. where the scanning position is considered a detection state).
Ebata, as currently modified, fails to explicitly teach wherein the processing circuitry dynamically changes a size of a display region of the past image in accordance with a distance between the position of the ultrasound probe indicated by the position information and the imaging position.
Nonetheless, Hattori teaches wherein the sizes of the images may be changed based on a user input ([0179] which discloses the disposition and the size of the ultrasound image U and the view image C displayed on the external monitor 45 can be arbitrarily changed on the external apparatus 4E side and [0193] which discloses here, the disposition and the size of the ultrasound image U and the view image C displayed on the external monitor 45 can be adjusted by an input operation of the observer through the input device 47. For example, in a case where the observer inputs instruction information for the guidance on adjusting the disposition and the size of the ultrasound image U and the view image C on the external monitor 45 through the input device 47, the input instruction information is input to the display controller 44 by way of the external controller 46E. The display controller 44 displays the ultrasound image U and the view image C synchronized with each other, for example, with the disposition and the size as shown in FIG. 14 based on the input instruction information).
Therefore, in the modified system, it would have been obvious to have further modified the processing circuitry to dynamically change a size of the display region of the past image in order to allow the examiner who observes the past image to more clearly confirm the past image conforming to the examiner’s preference (Hattori [0194]).
Examiner notes that by dynamically changing the size of the display region according to user input, the size is changed as desired by the user, and thus would be changed at any distance between the detection position and the imaging position in its broadest reasonable interpretation, as the user may change the size of the image(s) at any distance between the detection position and the imaging position.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BROOKE L KLEIN whose telephone number is (571)270-5204. The examiner can normally be reached Mon-Fri 7:30-4.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne Kozak, can be reached at 571-270-0552. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BROOKE LYN KLEIN/Primary Examiner, Art Unit 3797