DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on applications filed in Korea on 10/06/2023 and 02/28/2024. It is noted, however, that applicant has not filed a certified copy of the KR10-2023-0133631 or KR10-2024-0029220 application as required by 37 CFR 1.55.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-4, 11-14, 18, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lee (US 20130072797 A1), hereinafter Lee.
Regarding claims 1, 11, and 20,
Lee discloses an ultrasound imaging device (at least fig. 1 (101) and corresponding disclosure in at least [0033]) comprising:
a display (FIG. 2, the 3D ultrasound apparatus may extract a side image, a top image, or a front image from an image data obtained by scanning an object in a human body and may display the image on the screen);
a memory storing one or more instructions ([0133] which discloses the above-described embodiments of the present invention may be recorded in non-transitory computer-readable media including program instructions to implement various operations embodied by a computer); and
at least one processor ([0133] which discloses the above-described embodiments of the present invention may be recorded in non-transitory computer-readable media including program instructions to implement various operations embodied by a computer),
wherein the at least one processor is configured to execute the instructions
to obtain volume data of an object ([0035] The scanner 103 may extract at least one of a side image (A-plane in a side direction or a sagittal plane), a top image (B-plane in a top direction or a transverse plane) and a front image (C-plane in a front direction or a coronal plane) with respect to an object in a human body from an image data obtained by scanning the object and then may display the at least one of the side image, the top image, and the front image on a screen. Examiner notes that such image data is considered volume data),
obtain, from the volume data, a reference cross-sectional image (at least fig. 2 (203/213 and/or 205/215) and corresponding disclosure in at least [0062]) crossing a first axis (see at least fig. 2),
obtain, from the volume data, a candidate standard cross-sectional image (at least fig. 2 (201/211) and corresponding disclosure in at least [0062]) crossing a second axis that is different from the first axis (see at least fig. 2),
display the reference cross-sectional image (at least fig. 2 (213 or 215) and corresponding disclosure in at least [0062]) and the candidate standard cross-sectional image (at least fig. 2 (211) and corresponding disclosure in at least [0062]) on a first graphic user interface (GUI) view (see at least fig. 2),
rotate the reference cross-sectional image on the first GUI view ([0063] which discloses as the image is rotated the 3D ultrasound apparatus updates the side image, the top image (i.e. the reference cross-sectional image), or the front image (i.e. the reference image)),
update the candidate standard cross-sectional image on the first GUI view by adjusting a slicing plane of the candidate standard cross-sectional image based on the rotation of the reference cross-sectional image ([0063] which discloses as the image data is rotated or moved based on a selected reference, the 3D ultrasound apparatus updates the side image (i.e. the candidate standard cross-section), the top image, or the front image and displays the updated image on the screen, thereby easily detecting a 3D object),
and display, according to completion of the rotation of the reference cross-sectional image, the updated candidate standard cross-sectional image as a standard cross-sectional image on the first GUI view ([0063] which discloses the 3D ultrasound apparatus displays the updated image on the screen, thereby easily detecting a 3D object), wherein the standard cross-sectional image includes at least one anatomical landmark (see at least fig. 2; examiner notes that the standard cross-sectional image includes at least an anatomical landmark ([0047] which discloses the controller 1113 may identify a fetus' nasal bone in the side image and [0097] which discloses the 3D ultrasound apparatus may control a sagittal view of the object by extracting a side image of the fetus from the image data and by rotating the image data so that the brightness intensity in a falx area of the fetus, included in the side image, is largest)).
Examiner notes that the apparatus would perform the method of claim 1 having corresponding method steps.
Regarding claims 2 and 12,
Lee further discloses wherein the standard cross-sectional image corresponds to a mid-sagittal plane (MSP) of the object ([0129] which discloses the 3D ultrasound apparatus may determine similarity and determine a reference sagittal plane 1125 where the similarity is highest as the mid sagittal plane. In other words, the 3D ultrasound apparatus may measure the similarity while rotating or moving the image data and detect the highest similarity. Examiner notes that by rotating the image data as discussed previously, the updated second image, and thus the standard cross-sectional image, is considered to correspond to a mid-sagittal plane of the object. See also [0055] which discloses the controller 113 may automatically control a sagittal view of the object by matching a figure corresponding to the fetus included in the top image and rotating the image data so that the left/right matching of the matched figure is highest).
Regarding claims 3 and 13,
Lee further discloses wherein
the reference cross-sectional image corresponds to at least one of a coronal or an axial plane of the volume data ([0035] which discloses a top image (B-plane in a top direction or a transverse plane) and a front image (C-plane in a front direction or a coronal plane)), and
the candidate standard cross-sectional image corresponds to a sagittal plane of the volume data ([0035] which discloses a side image (A-plane in a side direction or a sagittal plane)).
Regarding claims 4 and 14,
Lee further discloses wherein the at least one processor is further configured to execute the one or more instructions to display a first indicator representing a first line formed as a cross line of a slicing plane of the reference cross-sectional image and a slicing plane of the candidate standard cross-sectional image (see at least fig. 7 depicting an indicator (i.e., 703 on the left) representing a line formed as a cross line with the initial cross-sectional image/side image) and a second indicator representing a second line formed as a cross line of the slicing plane of the reference cross-sectional image and a slicing plane of the standard cross-sectional image, on the first GUI view (see at least fig. 7 (indicator on the right) (i.e., after rotation). See also fig. 5 depicting a cross line as the first indicator (i.e., the middle image) and a cross line as the second indicator (i.e., the image on the right after rotation)).
Regarding claim 18,
Lee further discloses wherein the at least one processor is further configured to execute the one or more instructions to display at least one landmark of the object with a preset color (Examiner notes that any display of any landmark of the object (e.g. the nasal bone) is necessarily displayed with a preset color (e.g. in grayscale in an ultrasound image)).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 5-8 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Dufour et al. (US 20180116635 A1), hereinafter Dufour.
Regarding claims 5 and 15,
Lee teaches the elements of claims 4 and 14 as previously stated. Lee fails to explicitly teach wherein the at least one processor is further configured to execute the one or more instructions to display an angle indicator representing an angle between the first indicator and the second indicator on the first GUI view.
Nonetheless, Dufour teaches at least one processor configured to display an angle indicator (at least fig. 3 (46) and corresponding disclosure in at least [0058]) representing an angle between a first indicator (at least fig. 3 (dotted line)) and a second indicator (at least fig. 3 (34) and corresponding disclosure in at least [0058]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Lee to include an angle indicator as taught by Dufour in order to allow a user to readily recognize the difference between the current candidate plane and the desired plane accordingly. Such a modification would ensure the user may confirm the relationship between the current plane and the final plane such that rotation of the image data occurs as desired.
Regarding claim 6,
Lee, as modified, further teaches wherein rotating of the reference cross-sectional image comprises rotating the reference cross-sectional image such that the angle between the first indicator and the second indicator is reduced (Examiner notes that in the modified system the angle between the first indicator and the second indicator is reduced such that it would match/be zero as the second indicator depicts the final image rotation).
Regarding claims 7 and 16,
Lee, as modified, further teaches wherein the at least one processor is further configured to execute the one or more instructions to adjust the slicing plane of the candidate cross-sectional image such that the angle between the first indicator and the second indicator corresponds to an angle between the reference cross-sectional image and the standard cross-sectional image (Examiner notes that in the modified system the angle between the first indicator and the second indicator would necessarily correspond to any angle between the reference cross-sectional image and the standard cross-sectional image).
Regarding claim 8,
Lee, as modified, further teaches wherein the displaying of the updated candidate standard cross-sectional image as the standard cross-sectional image comprises completing the rotation of the reference cross-sectional image when the angle between the first indicator and the second indicator becomes zero (Examiner notes that in the modified system the angle between the first indicator and the second indicator is reduced such that it would be zero as the second indicator depicts the final image rotation).
Claims 9 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Ohuchi et al. (US 20150193962 A1), hereinafter Ohuchi.
Regarding claims 9 and 17,
Lee teaches the elements of claims 1 and 11 as previously stated. Lee fails to explicitly teach wherein the at least one processor is further configured to execute the one or more instructions to display the at least one anatomical landmark with a preset color that is distinguished from other areas on the first GUI view.
Ohuchi, in a similar field of endeavor involving ultrasound imaging, teaches wherein at least one processor is configured to execute instructions to display at least one anatomical landmark with a preset color that is distinguished from other areas on a first GUI view ([0181] which discloses displaying landmarks such that the colors of the landmarks vary, thereby making the landmarks distinguishable from one another).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Lee to include displaying at least one anatomical landmark with a preset color as taught by Ohuchi in order to allow a user to readily recognize the anatomical landmark(s). Such a modification would enhance a user’s ability to identify different anatomical landmarks which would ultimately assist with diagnosis/evaluation of the patient accordingly.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Lee et al. (US 20160120506 A1), hereinafter Lee (2016).
Regarding claim 10,
Lee teaches the elements of claim 1 as previously stated. Lee fails to explicitly teach further comprising displaying a three-dimensional ultrasound image of the object on the first GUI view.
Lee (2016) teaches displaying a three-dimensional ultrasound image of an object on a first GUI view ([0095] which discloses the display 330 may display a screen including the at least one of at least one 2D ultrasound image and the 3D ultrasound image, as well as the at least one first ultrasound image).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Lee to include displaying a three-dimensional ultrasound image of the object as taught by Lee (2016) in order to allow a user to visualize the volumetric image data as well as the cross-sectional images from which it is obtained. Furthermore, such a modification would enhance the visualization of the object such that it is presented in three dimensions thus allowing for enhanced diagnosis/evaluation of the object.
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Gerard et al. (US 20140013849 A1 included in Applicant’s IDS), hereinafter Gerard.
Regarding claim 19,
Lee teaches the elements of claim 1 as previously stated. Examiner notes that ultrasound data as disclosed by Lee would necessarily require a probe; however, such a probe is not explicitly taught by Lee. Therefore, Lee fails to explicitly teach further comprising a probe, wherein the at least one processor is further configured to execute the one or more instructions to obtain the volume data based on a reception signal received from the probe.
Nonetheless, Gerard, in a similar field of endeavor involving ultrasound imaging, teaches a probe (at least fig. 1 (106) and corresponding disclosure in at least [0019]) and wherein at least one processor is further configured to execute one or more instructions to obtain volume data based on a reception signal received from the probe ([0044] which discloses may be modified so that volumetric data, such as 3D or 4D data, is acquired instead of multi-plane at step 202. The user may then view two or more slices or planes from the volumetric data).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Lee to include a probe as taught by Gerard in order to provide the ultrasound data for obtaining volumetric imaging data accordingly.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Lee et al. (US 20160000402 A1) teaches, in [0038], that the controller 130 may rotate the ultrasound data based on the image data included in the virtual plane to determine a sagittal view with respect to the object; that, when the B-Plane is rotated, the A-Plane being vertical to the B-Plane may be interoperably rotated; and that the controller 130 may enable the A-plane to be the sagittal view based on the rotation of the B-plane.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BROOKE L KLEIN whose telephone number is (571)270-5204. The examiner can normally be reached Mon-Fri 7:30-4.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Anne Kozak, can be reached at 571-270-0552. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BROOKE LYN KLEIN/Examiner, Art Unit 3797