DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Regarding 35 U.S.C. 112
New 112(b) rejections necessitated by amendment.
Examiner notes that the 112(b) rejections of claims 1-4, 10, 16-17 and 19 are withdrawn in view of the amendments to the claims. While amendments were made to claims 5 and 14, such amendments do not clarify the issue presented in the previous Office Action mailed on 09/05/2025, and the rejections of claims 5 and 14 are maintained.
Regarding prior art
Applicant's arguments filed 01/05/2026 have been fully considered but they are not persuasive. For example, applicant argues “[0058] of Weber discloses compensating for dynamical shifts caused by imaging a moving object (e.g. beating heart). That is, as the heart is beating, each frame will capture the heart at a slightly different position. In order to obtain a clean volumetric image, these shifts will have to be compensated for to align surfaces in each of the frames obtained at different times. As such, paragraph [0058] is referring to adjusting image frames for alignment with each other, not adjusting an orientation of the rendered volumetric image” (REMARKS pg. 9). In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., adjusting an orientation of a rendered volumetric image) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Examiner notes that the claim recites adjusting a rendered orientation of a volumetric image; it does not explicitly require adjusting an orientation of a rendered volumetric image. Nonetheless, it is noted that [0058] teaches adjusting a rendered orientation of image frames so as to maintain consistency between frames. The final rendered image would therefore have its orientation rendered such that the consistency/alignment amongst the frames is maintained; accordingly, [0058] teaches the adjustment of the rendered orientation of the volumetric image as required by the claim. Furthermore, regarding an orientation of a rendered volumetric image, this feature is taught by [0022] of Weber as noted below.
Applicant further argues “Similarly to paragraph [0058], paragraph [0022] discloses making adjustments to compensate for shifting anatomy in image frames taken at different times” and concludes that Weber fails to disclose the recited limitations (REMARKS pg. 9). Examiner respectfully disagrees in that paragraph [0022] of Weber is explicitly directed to 4D image data (thus volumetric images) comprising multiple image frames (i.e. 3D volumetric images) and specifically recites that the angles and/or depths of the projections may be adjusted to take into account the shifts in the feature, thus enabling a consistent view (in terms of position and orientation) within the rendered images. It is therefore clear that not only is the rendered orientation of the volumetric image (e.g. the 4D or 3D images) adjusted, but that the orientation of the rendered volumetric image (e.g. the 4D or 3D images) is adjusted in order to maintain such consistency. Applicant’s arguments are merely conclusory, without specifically explaining why the adjustments to compensate for shifting anatomy are different from the adjustments required by the claim, and applicant’s arguments cannot replace evidence where evidence is necessary.
For at least these reasons, applicant’s arguments are not found persuasive and Weber remains relied upon for teaching the elements of claims 1, 7, and 18.
Applicant finally argues “Applicant submits that compensating for shifts in imaging frames taken at different times cannot reasonably be interpreted as disclosing ‘partially removing an anatomical feature obscuring the anatomical feature of interest in a viewing direction’” (REMARKS pg. 10). Examiner notes that the claim (as noted by applicant) recites “at least partially removing an anatomical feature obscuring the anatomical feature of interest in a viewing direction; and/or maintaining the anatomical feature of interest always at a fixed location in the plurality of image frames,” where “and/or” is interpreted to mean either “and” or “or”; thus both of the cited functions are not required of the claim, and either of the cited functions may read on the claim. In other words, the claim has been interpreted to mean that the processor may perform both of these functions or one of the functions, and Weber teaches maintaining the anatomical feature of interest, thus reads on the claim. Nonetheless, examiner notes that if the claim were amended to require both of the recited functions, Patwardhan as relied upon for claim 10 teaches at least partially removing an anatomical feature obscuring the anatomical feature of interest in the viewing direction.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 3-5, 10-11, 13-15, and 19-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 3 recites the limitation “to adjust the volumetric ultrasound image by: at least partially removing… and/or maintaining”. It is unclear if the adjusting the volumetric ultrasound image is the same function as the adjusting the rendered orientation of the volumetric ultrasound image or if this is a different adjusting. In other words, it is unclear if the claim is attempting to further define the automated adjusting such that it is done by or includes the recited functions or if this is a different adjusting in addition to the automatic adjusting. For examination purposes, it has been interpreted to mean either the same or different adjusting, however, clarification is required.
Claims 4, 13, and 20 recite the limitation “a plurality of anatomical features of interest”. It is unclear if the plurality of anatomical features of interest are the same as or include the anatomical feature of interest previously recited or if this is a different plurality of anatomical features of interest. For examination purposes, it has been interpreted to mean any plurality of anatomical features of interest, however, clarification is required.
Claim 4 recites the limitation “adjust the volumetric ultrasound image by: making a plurality of adjustments”. It is unclear if the adjusting the volumetric ultrasound image is the same function as the adjusting the rendered orientation of the volumetric ultrasound image or if this is a different adjusting. In other words, it is unclear if the claim is attempting to further define the automated adjusting such that it is done by or includes the recited functions or if this is a different adjusting in addition to the automatic adjusting. For examination purposes, it has been interpreted to mean either the same or different adjusting, however, clarification is required.
Claims 5 and 14 recite the limitation “the plurality of sets of adjustments”. There is insufficient antecedent basis for this limitation in the claim. It is unclear what the limitation is attempting to refer to by reciting a plurality of sets of adjustments when the claim previously recites making a plurality of adjustments, where no plurality of sets of adjustments is recited. In other words, it is unclear if the claim is attempting to further define the plurality of adjustments to include sets of adjustments or if these are different sets of adjustments. For examination purposes, it has been interpreted to mean any plurality of sets of adjustments; however, clarification is required.
Claims 5 and 15 recite the limitation “target orientations”. It is unclear if the target orientations include or are the same as the target orientation of claims 1 and 7 or if this is a different “target orientations”. For examination purposes, it has been interpreted to mean any target orientations, however, clarification is required.
Claims 5 and 15 recite the limitations “different lighting directions” and “different lighting colors”. It is unclear what is meant by lighting directions and lighting colors in the context of ultrasound imaging, which relies upon acoustic/ultrasonic waves, thus making it unclear what the lighting directions and lighting colors refer to.
Claims 10, 11, 13, and 19 recite the limitation “the adjusting the volumetric ultrasound image”. There is insufficient antecedent basis for the limitation in the claim. For example, claims 7 and 18 recite automatically adjusting a rendered orientation of the volumetric ultrasound image; it is therefore unclear if the adjusting the volumetric image is intended to be the same as adjusting the rendered orientation or if this is a different adjusting. For examination purposes, it has been interpreted to mean either the same or different adjusting, however, clarification is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-9, 11, 13-16, and 18-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Weber et al. (US 20190096118 A1), hereinafter Weber.
Regarding claims 1, 7, and 18,
Weber discloses a system, comprising:
A probe (at least fig. 7 (108) and corresponding disclosure in at least [0148] which discloses ultrasound image generation unit comprises an ultrasound transducer array 110, for example mounted in an ultrasound probe), configured to receive volumetric ultrasound data about a tissue to be imaged ([0148] which discloses The transducer array may be a one or two-dimensional array of transducer elements capable of scanning in three dimensions to generate 3D image data and [0136] which discloses In accordance with one or more examples, the imaging system may be configured to perform real-time rendering of images, wherein 4D image data is provided on a continual basis to the imaging system (comprising 3D image frames provided at regularly spaced intervals). This data may be provided by a separate image generation apparatus, such as an ultrasound probe for instance, to which the imaging system is operatively coupled. According to these examples, the imaging system may be configured to receive each frame from the ultrasound probe, to process it in accordance with the methods described above, and to then output the rendered 2D image onto a display unit for observation by for example a clinician);
A memory storing instructions (Examiner notes that a system comprising a processing unit would necessarily comprise a memory storing instructions for performing the functions thereof);
A processor (at least fig. 1 and 7 (10) and corresponding disclosure in at least [0083]), configured to execute the instructions to:
Acquire the volumetric ultrasound data obtained from the tissue ([0022] which discloses Furthermore, in the case of 4D image data, comprising multiple image frames captured at different times (examiner notes that the multiple image frames are volumetric, thus the 4D images or image frames thereof are considered volumetric ultrasound data), such local anatomical information may be generated for each and every frame, by, in at least some embodiments, segmenting each and every frame based upon the anatomical model, in advance of rendering images for said frames);
Process the volumetric ultrasound data to generate a volumetric ultrasound image, the volumetric ultrasound image comprising a plurality of image frames ([0133] which discloses the image rendering unit may further be configured to compile or composite said set of individual image frames, so as to provide a single graphical representation of the whole or a part of the 4D data set [0022] which discloses Furthermore, in the case of 4D image data, comprising multiple image frames captured at different times, such local anatomical information may be generated for each and every frame, by, in at least some embodiments, segmenting each and every frame based upon the anatomical model, in advance of rendering images for said frames and when processing a 4D data set, the image rendering unit may be configured to render the images as a movie sequence (as described above), displayed as either a dynamic mesh (that is a mesh that changes expansion and shape faithfully with the changing cycles of the organ being imaged) or as a static mesh (that is a mesh whose surface appearance changes in accordance with the changing properties of the volume beneath, but whose extension and shape remains fixed).);
Identify an anatomical feature of interest ([0083] which discloses image data is then communicated to a segmentation unit 14, which is adapted to perform segmentation of the 3D image data based upon an anatomical model, wherein this segmentation is configured to determine (or isolate) at least one segmented surface within the 3D image data. The segmented or isolated surface may be a surface of anatomical significance, such as the outer surface of an organ, or of another anatomical structure) and a current orientation thereof ([0097] which discloses the angles may be chosen for instance so as to form a projection or representation on the outer surface 24 which provides or shows a particular view or orientation of a given anatomical feature); and
Automatically adjust a rendered orientation of the volumetric ultrasound image, such that the at least one anatomical feature of interest is maintained at a target orientation corresponding to the anatomical feature of interest in the plurality of image frames ([0058] which discloses this anatomical context information enables the respective units to compensate for these dynamical shifts, and generate final rendered images in which both the orientation and the position within the image of one or more regions of interest remains consistent across all frames and [0022] which discloses Furthermore, in the case of 4D image data, comprising multiple image frames captured at different times, such local anatomical information may be generated for each and every frame, by, in at least some embodiments, segmenting each and every frame based upon the anatomical model, in advance of rendering images for said frames. This may enable a consistent view (in terms for example of position or orientation within the rendered images) of a particular anatomical feature lying within the volume to be captured across all of the frames, even in the case that said feature moves, shifts, expands or contracts in-between two or more frames. The angles and/or depths of the projections may accordingly be adjusted to take account of these shifts); and
A display, configured to receive a signal from the processor and perform a display operation ([0143] which discloses in accordance with these examples, when processing a 4D data set, the image rendering unit may be configured to render the images as a movie sequence (as described above), displayed as either a dynamic mesh (that is a mesh that changes expansion and shape faithfully with the changing cycles of the organ being imaged) or as a static mesh (that is a mesh whose surface appearance changes in accordance with the changing properties of the volume beneath, but whose extension and shape remains fixed) and [0136] which discloses a display unit for observation by for example a clinician).
Examiner notes that the system of Weber would perform the method of claim 7 and comprise the non-transitory computer-readable medium of claim 18 each having corresponding method/execution steps to the system of claim 1.
Regarding claims 2 and 9,
Weber further teaches wherein the processor is configured to execute the instructions to identify at least one additional anatomical feature of interest by identifying the anatomical feature of interest in the volumetric ultrasound data ([0022] which discloses Furthermore, in the case of 4D image data, comprising multiple image frames captured at different times, such local anatomical information may be generated for each and every frame, by, in at least some embodiments, segmenting each and every frame based upon the anatomical model, in advance of rendering images for said frames and [0083] which discloses image data is then communicated to a segmentation unit 14, which is adapted to perform segmentation of the 3D image data based upon an anatomical model, wherein this segmentation is configured to determine (or isolate) at least one segmented surface within the 3D image data. The segmented or isolated surface may be a surface of anatomical significance, such as the outer surface of an organ, or of another anatomical structure).
Regarding claims 3, 11, and 19,
Weber further discloses wherein the processor is configured to execute the instructions to adjust the volumetric ultrasound image by: maintaining the at least one anatomical feature of interest always at a fixed location in the plurality of image frames ([0058] which discloses This anatomical context information enables the respective units to compensate for these dynamical shifts, and generate final rendered images in which both the orientation and the position within the image of one or more regions of interest remains consistent across all frames and [0022] which discloses this may enable a consistent view (in terms for example of position or orientation within the rendered images) of a particular anatomical feature lying within the volume to be captured across all of the frames, even in the case that said feature moves, shifts, expands or contracts in-between two or more frames. The angles and/or depths of the projections may accordingly be adjusted to take account of these shifts).
Regarding claims 4, 13, and 20,
Weber teaches the elements of claims 1, 7, and 18 as previously stated. Weber further teaches wherein the processor is configured to execute the instructions to: identify a plurality of anatomical features of interest ([0048] which discloses the region of interest may alternatively be a smaller sub-region contained within said sub-volume, for example a region covering or containing one or more anatomical features or structures of interest within the delimited sub-volume and [0048] which discloses the segmentation unit may be adapted to identify multiple regions of interest, including for instance one or more regions which fully or partially overlap and [0058] which discloses generate final rendered images in which both the orientation and the position within the image of one or more regions of interest remains consistent across all frames), and adjust the volumetric image by:
Making a plurality of adjustments to the volumetric ultrasound image simultaneously, each of the plurality of adjustments being made based on one of the plurality of anatomical features of interest, respectively ([0089] which discloses The thus segmented data is then communicated to a surface rendering unit 16 which is configured to generate one or more surface values to be assigned to points on the at least one segmented surface. These values may in examples comprise colour, texture or shading values for instance. The values are determined on the basis of image data values falling along projection vectors extended through said points on the segmented surface, each projection vector having angle and/or length determined at least partly on the basis of the segmentation and/or the anatomical model. This may include in examples an averaging of these values, or some other compositing of the values, such as determining a maximum or minimum value. [0140] which discloses Various methods exist for rendering surfaces within 3D image data sets which may be consistent with embodiments of the present invention. In accordance with at least one set of examples, the surface rendering unit is configured to construct a mesh to represent or match onto the segmented outer surface isolated by the segmentation unit. Such a mesh may typically be modelled as a mesh of interlocking triangles, which are sculpted to form an approximately smooth outer surface structure. In examples, a single shading or colour value may be assigned to each triangle, or each triangle (or a sub-set of the triangles) may be divided into a plurality of regions, each of which is assigned an individual shading value. [0145] which discloses taking ray casting purely as an example, the present invention may be combined with the latter steps of a conventional volume ray casting process.
In conventional ray casting, rays are cast from a notional observer viewpoint through the surface of an imaged volume, and composited surface values are generated based on this, to be applied to said surface. In embodiments of the present invention, surface values are generated by means of the processes described above, and hence a method more akin to 2D (surface) ray casting may be applied, wherein rays are cast from an observer viewpoint to the now partially rendered surface, and final image pixel values generated on the basis of these values in combination with one or more other scene or image parameters, such as lighting location etc. Examiner notes that such rendering necessarily includes a plurality of adjustments (e.g. any of lighting location, shading value, color or texture for each voxel/pixel) which are each based on one of the plurality of anatomical features of interest, respectively in its broadest reasonable interpretation since pixel/voxels are associated with each of the plurality of anatomical features of interest).
Regarding claims 5 and 14-15,
Weber further discloses wherein the processor is configured to execute the instructions to adjust the volumetric ultrasound image to obtain a plurality of adjusted volumetric ultrasound images by: configuring different adjustment parameters for the plurality of sets of adjustments according to differences between the plurality of anatomical features of interest; and
The different adjustment parameters comprise at least one of different target orientations, different transparencies, different lighting directions, and different lighting colors ([0089] which discloses The thus segmented data is then communicated to a surface rendering unit 16 which is configured to generate one or more surface values to be assigned to points on the at least one segmented surface. These values may in examples comprise colour, texture or shading values for instance. The values are determined on the basis of image data values falling along projection vectors extended through said points on the segmented surface, each projection vector having angle and/or length determined at least partly on the basis of the segmentation and/or the anatomical model. This may include in examples an averaging of these values, or some other compositing of the values, such as determining a maximum or minimum value and [0145] which discloses in embodiments of the present invention, surface values are generated by means of the processes described above, and hence a method more akin to 2D (surface) ray casting may be applied, wherein rays are cast from an observer viewpoint to the now partially rendered surface, and final image pixel values generated on the basis of these values in combination with one or more other scene or image parameters, such as lighting location etc).
Regarding claims 6 and 16,
Weber further discloses wherein the displaying operation comprises displaying the plurality of adjusted volumetric ultrasound images simultaneously ([0133] which discloses The imaging system, in accordance with embodiments, may be configured in the above described ways (or according to alternative methods) to generate, for a given 4D data set (a set of 3D image data frames), a set of 2D rendered images representing an imaged volume, providing a consistent view or representation of one or more features of interest within or about said imaged volume. The image rendering unit may further be configured to compile or composite said set of individual image frames, so as to provide a single graphical representation of the whole or a part of the 4D data set, where a composite of said individual image frames is considered a simultaneous display of the plurality of adjusted volumetric ultrasound images).
Regarding claim 8,
Weber further discloses wherein the volumetric ultrasound data comes from at least one of a real-time ultrasonic scan and data in a memory ([0136] which discloses In accordance with one or more examples, the imaging system may be configured to perform real-time rendering of images, wherein 4D image data is provided on a continual basis to the imaging system (comprising 3D image frames provided at regularly spaced intervals). This data may be provided by a separate image generation apparatus, such as an ultrasound probe for instance, to which the imaging system is operatively coupled. According to these examples, the imaging system may be configured to receive each frame from the ultrasound probe, to process it in accordance with the methods described above, and to then output the rendered 2D image onto a display unit for observation by for example a clinician and [0138] which discloses Alternatively, image data may not be rendered in real time, but rather rendered subsequent to capturing of images, during a later secondary process. A clinician may then view the images at leisure, without the time pressures of real-time monitoring, thus comes from data in a memory in the alternative).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Weber in view of Patwardhan et al. (US 20150164605 A1), hereinafter Patwardhan.
Regarding claim 10,
Weber teaches the elements of claim 7 as previously stated. Weber fails to explicitly teach wherein the adjusting the volumetric ultrasound image further comprises:
At least partially removing an anatomical feature obscuring the anatomical feature of interest in a viewing direction.
Nonetheless, Patwardhan, in a similar field of endeavor involving ultrasound imaging, teaches adjusting a volumetric image includes at least partially removing an anatomical feature obscuring the anatomical feature of interest in a viewing direction (see at least fig. 2 (210) and corresponding disclosure in at least [0051] and [0044] which discloses Specifically, the video processor 128 may remove the obstructing regions in the volumetric image to render an optimal view that brings a relevant portion of the heart including the pulmonary vein into greater focus. See also [0063] disclosing obstructing structures in the volumetric image that occlude a view of one or more anatomical structures of interest).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Weber to include at least partially removing an anatomical feature obscuring the anatomical feature of interest in a viewing direction as taught by Patwardhan in order to generate an optimal view of the anatomical feature of interest (Patwardhan [0062]). Such a modification would enhance the visualization of the anatomical feature of interest of Weber by ensuring structures occluding the view of the anatomical feature of interest are no longer present in the image data.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Weber in view of Perrey et al. (US 20170238904 A1), hereinafter Perrey.
Regarding claim 12,
Weber fails to explicitly teach storing an adjustment record for each image frame in the 4D ultrasound image.
Perrey, in a similar field of endeavor involving ultrasound imaging, teaches storing an adjustment record for an image frame ([0021] which discloses once the orientation of the volume is determined the volume may be automatically adjusted until it reaches the standard alignment, at which point it may be saved).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Weber to include storing an adjustment record as taught by Perrey for each image frame in the volumetric ultrasound image of Weber in order to save the adjusted volume data for future processing and/or displaying diagnostically relevant slices/images in the future (Perrey [0021]).
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Weber in view of Fujii (JP 2010068956 A), hereinafter Fujii. Examiner notes that citations to Fujii are with respect to the translated copy provided herein.
Regarding claim 17,
Weber teaches the elements of claim 7 as previously stated. Weber fails to explicitly teach in response to an adjusted volumetric ultrasound image being selected, displaying said image in an enlarged manner.
Fujii, in a similar field of endeavor involving ultrasound imaging, teaches
Wherein a plurality of adjusted volumetric ultrasound images are displayed simultaneously (see at least fig. 8 and corresponding disclosure in at least pg. 9 disclosing six ultrasonic images (all 3D displays) related to the fetus divided and displayed as moving images, thus volumetric ultrasound images); and
In response to an adjusted volumetric ultrasound image being selected, displaying said image in an enlarged manner (pg. 9 which discloses For example, the selected required ultrasound image can be enlarged and displayed).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Weber to include displaying a plurality of adjusted volumetric ultrasound images and displaying a selected image in an enlarged manner as taught by Fujii in order to selectively display only image data that is most efficient for examination from a plurality of viewing directions at the same time (Fujii pg. 9).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BROOKE L KLEIN whose telephone number is (571)270-5204. The examiner can normally be reached Mon-Fri 7:30-4.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne Kozak, can be reached at 571-270-0552. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BROOKE LYN KLEIN/Primary Examiner, Art Unit 3797